Industry Hot News
It is the end of an era for the Business Continuity Institute as Lyndon Bird FBCI has announced he is to stand down from his role as Technical Director. Over the last 21 years, Lyndon has become an integral part of the Institute, from his role as one of the founding members, through his position as Chairman of the Institute, to his job as Technical Director.
In nine years as Technical Director at the BCI, Lyndon has ensured that the BCI continues to have an effective and consistent voice on all matters of Business Continuity Management within the business, government, regulatory and academic communities. During his time, the Good Practice Guidelines have become a well respected source of global best practice, and the BCI has contributed significantly to the development of national and international standards.
On announcing his decision, Lyndon reflected that “although the BCI's work in all of these fields is ongoing, I feel my role as the main catalyst for this has changed. The BCI has grown to the point where it is staffed by a wide range of very competent people who are more than capable of dealing with the future challenges the Institute and the discipline might face. It is therefore an ideal time for me to move on and seek other interesting and challenging projects.”
On what lies ahead for him, Lyndon explained that "the opportunities created by the emergence of a wide-scale global resilience movement are very exciting and I look forward to continuing with my diverse writing, editing, teaching, commentating and consulting activities wherever in the world such opportunities emerge. I will no doubt be working with many BCI members in the future, albeit in a different capacity, but still with the same enthusiasm and passion for our subject.”
David James-Brown FBCI, Chairman of the Institute, described Lyndon as being "intimately involved with the establishment and growth of the Institute and has dedicated an enormous amount of his time and energy to making the BCI what it is today. Lyndon is truly one of the fathers of the industry and has been an inspiration to so many."
"On behalf of the BCI Board and the Membership I would like to express our heartfelt thanks and appreciation for an exceptional contribution; not just in terms of work but the personal attributes that Lyndon has brought. Lyndon will be sorely missed around the office for his wisdom, humour and humility; for his mentoring, his support and his encouragement. He will be missed by the Board for his dependability, his insightfulness and his clear thinking."
Steve Mellish FBCI, former Chairman of the BCI, and close friend to Lyndon, said of him: "Lyndon has always been reliably consistent in his passion for the subject and has such an astute capability to analyse situations and information to see connections or trends that many just don’t see. His devotion to the BCI has been there from ‘day one’ as one of the founding members. He has probably spent more time on the Board than anyone else I know including two terms as Chairman. To this day he still talks enthusiastically about the future and how business continuity and the BCI has and will continue to drive the whole resilience agenda going forward."
"If it wasn’t for Lyndon I know that I would not have achieved half of what I have done as a business continuity professional and without doubt, never have been so involved with the Business Continuity Institute. His wise counsel and support enabled me to face and deal with many challenging situations over my 12 years on the Board."
Anyone who has ever used a Business Continuity Management System (BCMS) knows that providing access for your business, IT, and executive planners is essential for two critical reasons:
- YOUR SYSTEM MAY INHIBIT DATA GATHERING AND ANALYSIS: You need quite a bit of data from many sources in your organization in order to formulate your BCP. While meeting with all users is fantastic, it simply is not feasible—even in the smallest of organizations. Even though your BCMS is supposed to streamline this activity, limiting users can do the exact opposite. It FORCES YOU to gather data by going directly to the user or utilizing outside methods (e.g. spreadsheets or external survey tools). This requires extensive work outside the BCMS.
Business Continuity Planning is often theoretical. After all, we can’t really know what we’ll need until a disruption occurs (and by then, it’s too late for planning!). As a result, we have little choice but to make our best guess as to what we’ll need when something hits the proverbial fan. A previous article discussed the pitfalls of assigning Business Continuity tasks to individuals because of risks to their availability. You should also be cognizant of the limitations of those teams and individuals assigned to carry out recovery tasks.
BC Planning deals with many unknowns: what will happen, when it will happen, how severe the disruption may be. We also don’t know how long the disruption – or the recovery from it – will last. We may assume that assigned teams or individuals will stick with the recovery process until normalcy is achieved. Is that likely? Who knows? But if it isn’t (if, for example, the recovery lasts more than 3 days) what is in our Plan to account for the limitations on assigned personnel? What kinds of ‘limitations’ must be accounted for?
The Cloud Standards Customer Council has released version two of its guide to cloud security.
The abstract reads as follows:
“Much has changed in the realm of cloud computing security since the original Security for Cloud Computing whitepaper was published in August, 2012. The aim of this guide is to provide a practical reference to help enterprise information technology (IT) and business decision makers analyze the security implications of cloud computing on their business. The paper includes a list of steps, along with guidance and strategies, designed to help these decision makers evaluate and compare security offerings from different cloud providers in key areas.”
Verisk Maplecroft has published its 2015 Natural Hazards Risk Atlas, which ranks over 1300 cities in 198 countries on their exposure to natural hazards to help organizations identify and compare risks to populations, economies, business and supply chains.
According to the Atlas, the strategic markets of the Philippines, China, Japan and Bangladesh are home to over half of the 100 cities most exposed to natural hazards, highlighting the potential risks to foreign business, supply chains and economic output in Asia from extreme weather events and seismic disasters. Of the 100 cities with the greatest exposure to natural hazards, 21 are located in the Philippines, 16 in China, 11 in Japan and 8 in Bangladesh. Analysis for the Natural Hazards Risk Atlas considered the combined risk posed by tropical storms and cyclones, floods, earthquakes, tsunamis, severe storms, extra-tropical cyclones, wildfires, storm surges, volcanoes and landslides.
The Philippines’ extreme exposure to a myriad of natural hazards is reflected by the inclusion of eight of the country’s cities among the ten most at risk globally, including Tuguegarao (2nd), Lucena (3rd), Manila (4th), San Fernando (5th) and Cabanatuan (6th). Port Vila, Vanuatu (1st) and Taipei City, Taiwan (8th) are the only cities not located in the Philippines to feature in the top ten.
By Duncan Ford MBCI
Could you get more out of your business continuity exercises? Do you have an inner concern that last year’s exercise programme didn’t demonstrate as much as you would have liked, or that there may be alternative ways of delivering the exercise that would be more cost effective and less effort?
Guidance from the various business continuity institutes and regulators, also included in recognised standards, puts a strong emphasis, quite correctly, on the essential requirement to exercise plans and recovery procedures. However, how do you assess the quality of the exercises, as opposed to the quantity? Are different types and styles of exercises being used, within an integrated programme, to meet different business needs?
Take a couple of seconds to consider whether:
- The maximum return is being gained from the time people commit to exercises;
- Different techniques could be used to engage directors and senior managers;
- The exercise(s) sufficiently challenge the organization’s assumptions about its ability to respond and recover.
Cold snaps are the weather phenomenon most likely to damage UK business performance according to new research commissioned by cloud services company, 8x8 Solutions, to highlight the need for businesses to prepare for adverse weather to limit lost productivity. Economists from the Centre for Economics and Business Research (Cebr) examined the relationship between different weather events and economic growth across the UK’s main industries over the last decade.
They found that since 2005, periods of very cold weather have seen quarterly GDP growth on average 0.6 percentage points lower than typical levels. When minimum temperatures are one degree Celsius lower than average, quarterly GDP is on average £2.5 billion lower. This is a bigger negative effect than any other form of adverse weather, including snowfall, heat waves or flooding.
The fall in GDP results from lower output across a number of industries and lost productivity as transport links and staff availability suffer. Those who do get to work on particularly poor weather days often meet a skeleton staff, hindering productivity.
Whilst cold has the biggest negative effect on the economy, different industry sectors are impacted by different forms of extreme weather. For example, professional services and accommodation and food are the sectors that take the biggest hit from heavy rainfall. High rainfall has a big impact on office-based jobs, with just ten millimetres above average costing the economy £86 million in a single quarter. In January 2015 rainfall was 26.5mm above the 2004-2014 January average of 126.8mm – potentially costing the economy £76.3 million over the quarter.
The research also explores the resilience of businesses of different sectors and sizes. The information and communications sector is one of the few to see positive growth during poor weather. Cebr concluded that this is because the sector leads the way in using cloud-based technology allowing employees to work from home. On average, nearly two thirds (65%) of all companies in this sector use some form of cloud technology compared to just 15-30% of all other businesses.
But the report warns that smaller businesses are at a disadvantage in terms of poor weather, as Scott Corfe, Head of UK Macroeconomics, Cebr explains: “Many small offices are unprepared for such events as they often lack remote access to their work due to security concerns and a lack of infrastructure. This is compounded in many cases by inadequate internet connections or computing power at staff homes. In addition SMEs tend to suffer more than their larger counterparts who can spread the setup and maintenance costs of remote working infrastructure across many more staff.”
Kevin Scott-Cowell, CEO of 8x8 Solutions, says, “Bad weather hits businesses hard, and medium-sized companies are more vulnerable than their larger counterparts. Until now, the technical infrastructure to enable remote working and guard against disruption has been out of reach for many companies, but cloud solutions are changing this. It’s now affordable for any size business to put in place a plan and deploy the right remote working technology. This can make sure it’s business as usual for customers, whatever the weather.”
The research is released in the run up to Business Continuity Awareness Week, an initiative run by the Business Continuity Institute. Lyndon Bird FBCI, Technical Director at the BCI, said, “This research is a timely reminder of the need for companies to adopt business continuity management best practice. That means having the plans and technology in place to manage risks to the smooth running of their organisation or delivery of a service, ensuring continuity of critical functions in the event of a disruption, and effective recovery afterwards.”
Did I pack socks? Check. Toothbrush? Check. Business cards, phone charger, passport? Check, check, and check. Do I know what I need to do and what not to do to protect myself, my devices and the company’s data while I’m on the road and traveling for work? [awkward silence, crickets chirping]
S&R pros, how would employees and executives at your firm answer that last question? It’s an increasingly important one. Items like socks and toothbrushes can be replaced if lost or forgotten; the same can’t be said for your company’s intellectual property and sensitive information. As employees travel around the world for business and pass through hostile countries (this includes the USA!), they present an additional point of vulnerability for your organization. Devices can be lost, stolen, or physically compromised. Employees can unwittingly connect to hostile networks, or be subject to eavesdropping or wandering eyes in public areas. Employees can be targeted because they work for your organization, or simply because they are foreign business travelers.
(TNS) — Army researchers in a lab outside Washington worked for years on a software tool to help soldiers understand how hackers were targeting military computers.
Late last year they did something unusual: They released their project for anyone on the Internet to poke and prod.
William Glodek, the leader of the project, said the Army Research Lab hopes that if his team gives something, they'll get something.
"The Army is open and willing to collaborate," he said. "Hopefully, we can attract some bright talent to contribute to the project."
The federal government is looking for ways to improve the security of the nation's computers, but its plan to share information about threats faces legal obstacles before it can get moving. By offering up code, rather than data, Glodek's team has been able to take a step forward — and join a growing movement among military and intelligence community coders to share what they make.
Cybersecurity is a priority for enterprise executives and their boards, but a serious disconnect also exists in the C-suite on what the risk priorities should be and why, according to recent research. Some of the gap can be attributed to the day-to-day focus of different executive functions, but much of it goes far deeper into problems with culture and communication.
When consulting firm Protiviti and the Enterprise Risk Management (ERM) Initiative at the North Carolina State University Poole College of Management recently conducted the third annual survey of business executives for “Executive Perspectives on Top Risks for 2015,” and examined the ranking of 27 risks by job function, they found that CFOs and chief audit executives (CAEs) perceived a riskier business environment than CEOs and the board. And CEOs and board members each had their own focus on the types of risks they perceived as most important.
Protiviti examined the relationship between the job functions of the executives it surveyed and whether they ranked macroeconomic, strategic or operational risks as of highest concern, and a pattern emerged. Board of directors members collectively named four strategic risks among their top five concerns, along with one macroeconomic issue; CEOs collectively named four macroeconomic risks among their top five, along with one strategic risk. And other executives named more operational risks to their top five lists.
Two of my favorite bloggers, Tony Jaques in Australia and Jonathan and Erik Bernstein from California, had excellent posts on two of the most important topics: rumor management and apologies.
Tony tells the story of a hepatitis A scare in Australia that was linked to a frozen berry product. The company, out of an abundance of caution as they like to say, voluntarily recalled its product without verification that its product was the cause. From there, as you will see, the media did their thing and the company apparently did not do enough to correct the misreporting.
The lesson is clear: a lie (or error) repeated often enough becomes the truth. The only way I know to deal with this is to loudly, clearly, and repeatedly tell the truth and correct the misinformation.
For many of our readers and the organizations where they work, any kind of supply chain disruption could easily qualify as a serious incident and one that would easily have been discussed and included in their disaster preparedness planning process.
With that thought in mind, our staff recommends reading the recent EventWatch™ 2014 Supply Chain Disruption report and adding it to your organization’s business continuity and disaster preparedness team’s resource library.
This report draws on Resilinc’s database of over 40,000 suppliers and over 400,000 parts tracked in its cloud supplier intelligence repository. It analyzes incidents by risk type, industry, geography, severity, and seasonality, and compares 2014 data in these categories with 2013.
Disaster recovery planning for your IT installations may use automated procedures for a number of situations. Virtual machines can often be switched or re-started in case of server failure, and network communications can be rerouted without human intervention. For other requirements, people will be involved in getting IT systems up and running properly after an incident. But people do not switch into auto-run modes like a machine. They can be affected by the surprise factor of an IT disaster and by the pressure to bring things back to normal. Five aspects of usability may need to be designed into your DR planning if you want the best chances of a satisfactory recovery.
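The automated side of this, restarting or switching virtual machines without human intervention, can be sketched as follows. This is a minimal illustration, not a real DR tool: the `is_healthy` probe and the host names are hypothetical placeholders, since the actual mechanism depends entirely on your virtualization platform and monitoring stack.

```python
import time


def is_healthy(host):
    """Placeholder health probe. A real check might ping the host
    or query the hypervisor's management API; here we simply
    pretend the primary VM is down for illustration."""
    return host != "vm-primary"


def failover(primary, standby, retries=3, delay=0.0):
    """Return the host that should serve traffic after health checks.

    Retries the primary a few times before switching to the standby,
    mirroring the 'restart or reroute without human intervention'
    behaviour described above."""
    for _ in range(retries):
        if is_healthy(primary):
            return primary
        time.sleep(delay)  # back off between probes
    return standby


active = failover("vm-primary", "vm-standby")
print(active)  # the standby takes over because the primary never reports healthy
```

The human side of recovery, of course, cannot be scripted this way, which is exactly why the usability of the written plan matters.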
Risk management and risk transfer must work together to make organizations more resilient, as firms become more exposed to major disasters and subsequent business interruptions as a result of their increasingly complex global networks. Traditional property damage/business interruption policies were never designed to meet the risks faced by organizations today, and the business interruption insurance market has not kept pace with these rapid changes, according to Marsh.
In a new Marsh Risk Management Research report, the firm highlights how the limitations of existing business interruption insurance, including gaps in cover and inaccurate valuations, are resulting in less than optimal coverage for clients and makes the case for insurance modernisation.
Based on concerns raised by colleagues, clients, loss adjusters, lawyers and insurers, the report focuses on five core areas where Marsh believes improvement is required: insured values; indemnity periods; wide area damage scenarios; supply chain; and claims.
Caroline Woolley, Global Leader of Marsh’s Business Interruption Center of Excellence, commented: “A property damage event remains one of the major exposures any company can face, and business interruption is one of the main insurances purchased. Business interruption policies, however, have done little to evolve since the middle of the last century.
“The insurance industry needs to acknowledge the shortcomings of existing business interruption cover and build a better solution for buyers. This report is Marsh’s contribution to the debate as we seek to improve existing solutions and reshape the industry to address insurance buyers’ evolving needs.”
The report ‘Business Interruption Insurance Efficacy: Five Key Issues’ can be found after registration here.
Whilst SSD usage is up, the technology is still a cause of downtime: one third of respondents to a Kroll Ontrack survey confirm they have experienced some sort of SSD technology malfunction.
According to a recent solid state disk (SSD) technology use survey by Kroll Ontrack, while nearly 90 percent of respondents leverage the performance and reliability benefits of SSD technology within their organisation, one-third confirmed they experienced some sort of SSD technology malfunction. Of those who did, 61 percent lost data and fewer than 20 percent were successful in recovering their data, highlighting the known complexity of SSD data recovery.
In the UK, 27 per cent of respondents had experienced a failure of their SSD technology and of these 56 per cent experienced data loss as a result. A slightly higher number than the global figure (26 per cent) were able to recover their data following a failure.
What can managed service providers (MSPs) and their customers learn from these IT security newsmakers? Check out this week's list of IT security stories to watch to find out:
(TNS) — Emergency personnel responding to an oil train derailment in West Virginia last week applied lessons learned from a rail disaster more than three decades ago, and likely prevented a bad situation from becoming much worse.
This week marks 37 years since a deadly explosion in Waverly, Tenn. On Feb. 24, 1978, a derailed tank car carrying liquid propane violently ruptured, killing 16 people, including the small town’s police and fire chiefs.
Emergency response and training have changed dramatically in the decades since the tragedy.
Buddy Frazier, the city manager of Waverly, about 65 miles west of Nashville, who was a young police officer when he witnessed the 1978 explosion, said that emergency responders are better trained and better equipped today. Still, he understands the challenges they face.
Virtualization has been changing the business IT landscape since the first hypervisor solution debuted in 1999. The technology initially targeted large enterprises and data center operators that could take advantage of its ability to add capacity and scale without physical components or the power and cooling costs required by hardware assets. During the past several years, though, virtualization has made significant in-roads in the SMB market due to a reduction in upfront investment costs, improved reliability and the proliferation of virtualization-dependent cloud services.
Industry research points to the continued growth of virtualization, and, according to social business platform provider Spiceworks’ 2014 State of IT Report, the adoption of virtualization among IT pros is currently at 74 percent worldwide. The Spiceworks report found that just over half of SMBs with fewer than 20 employees are currently leveraging virtualization, while 70 percent of SMBs with 20 to 99 employees and 83 percent of SMBs with 100 to 249 employees have adopted the technology for everything from productivity applications to databases to managed services.
(TNS) — Joplin, Springfield and Branson, Mo., have agreed to a set of procedures that will standardize how outdoor storm-warning sirens are activated and how they are tested.
The objective is to create a uniform standard across the region where none exists now. The adoption of the procedures by three of Southwest Missouri’s largest communities already has spurred other communities, such as Carthage, Bolivar, Pierce City and Monett, to participate in the guidelines.
The new procedures were unveiled during a news conference on Wednesday at the Springfield-Greene County Office of Emergency Management. Officials from the communities and representatives of the National Weather Service forecast office at Springfield were on hand for the announcement.
Among the many services state and local governments provide, few are as popular, as trusted or as essential as 911. Americans place roughly 240 million 911 calls each year, says the National Emergency Number Association, and access to 911 is nearly universal. Nevertheless, the system so many Americans rely on today to report emergencies and other problems stands on the brink of obsolescence.
While Americans are now accustomed to using Twitter, Facebook, Instagram and other social-media platforms for the rapid-fire sharing of news and information, most 911 systems can't handle the texts, videos, data and images that we increasingly use to communicate.
That's because in many parts of the country 911 is still rooted in the landline-telephone-based infrastructure that gave the system its start in 1968. As of November 2014, just 152 counties in 18 states even had the capability for citizens to text to 911. And only a handful of states -- such as Iowa and Vermont -- have taken the leap to Internet-enabled 911, known as "next-generation 911."
(TNS) — The tornado that struck Joplin, Mo., nearly four years ago left 161 people dead and much of the city devastated.
But the storm taught forecasters lessons that may have saved lives during subsequent disasters, including the May 2013 tornadoes in the Oklahoma City area, a National Weather Service official said Wednesday.
During a keynote address Wednesday at the National Tornado Summit in Oklahoma City, National Weather Service Deputy Director Laura Furgione discussed lessons the agency learned from a series of deadly tornadoes in the spring of 2011.
Will 2015 be the year the cloud gets past the hype? While cloud-based file sharing and other cloud services are being adopted by almost all businesses, the cloud is still in the early stages of its technological revolution. Whether it is personal computers, the internet, or 3D printing, every new technology goes through a period of hype and disillusionment before the really productive innovation takes place.
Gartner calls this the Hype Cycle of Emerging Technologies. According to Gartner, cloud computing has already passed the inflated expectations people had about it and everyone is beginning to become disillusioned by it. But that’s not a bad thing! Once the hype ends, real enlightenment can begin, and that’s where really useful and significant things get created.
So now that the hype over the cloud is over, is 2015 the year of enlightenment?
by Ben J. Carnevale
Business Continuity, Resiliency and Emergency Management Planning teams are often looking for additional ideas, programs and campaigns to help those teams be more prepared and ready to mitigate losses from potential disasters affecting the organization where they work, and the community where they work and live with their families.
Our staff believes that the America’s PrepareAthon!™ campaign qualifies as one of the best resources for those teams to look for ideas and assistance for taking action to increase emergency preparedness and resilience.
America’s PrepareAthon!™ is a grassroots campaign for action within the United States to increase community emergency preparedness and resilience through hazard-specific drills, group discussions, and exercises. Throughout the year, America’s PrepareAthon!™ helps communities and individuals across the country to practice preparedness actions before a disaster or emergency strikes.
The Business Continuity Institute’s North America awards will take place on 24th March 2015 during the DRJ Spring World in Orlando. The awards recognise the achievements of business continuity professionals and organizations based in the USA and Canada.
The BCI has now issued the shortlist for the awards which is as follows:
Continuity and Resilience Consultant
- Robbie Atabaigi, KPMG
- Jeff Blackmon FBCI, Strategic Continuity Solutions
- Christopher Duffy, Strategic BCP
- Paul Kirvan FBCI
- Debjyoti Mukherjee, KPMG
Continuity and Resilience Newcomer
- Garrett Hatfield, MetLife, Inc.
- William Kearney, Cameron
- Tamika McLester, Crawford & Company
Continuity and Resilience Team
- Business Resiliency Office (BRO), Automatic Data Processing (ADP)
- ETS Enterprise Resiliency Department, Educational Testing Service
- TMG Health Team, TMG Health
Continuity and Resilience Provider (Service/Product)
- ClearView Continuity
- Fusion Risk Management, Inc.
- Strategic BCP
- Virtual Corporation
- xMatters, Inc.
Continuity and Resilience Innovation
- 9yahds, Inc.
- Strategic BCP
- Send Word Now
- Quorum Technologies
- Suzanne Bernier MBCI
- Christopher Duffy
- Frank Leonetti FBCI
The current global influenza situation is characterized by a number of trends that must be closely monitored, says the World Health Organization (WHO) in a recent briefing document.
According to WHO these trends include:
- An increase in the variety of animal influenza viruses co-circulating and exchanging genetic material, giving rise to novel strains;
- Continuing cases of human H7N9 infections in China; and
- A recent spurt of human H5N1 cases in Egypt.
Changes in the H3N2 seasonal influenza viruses, which have affected the protection conferred by the current vaccine, are also of particular concern.
The highly pathogenic H5N1 avian influenza virus, which has been causing poultry outbreaks in Asia almost continuously since 2003 and is now endemic in several countries, remains the animal influenza virus of greatest concern for human health. However, over the past two years, H5N1 has been joined by newly detected H5N2, H5N3, H5N6, and H5N8 strains, all of which are currently circulating in different parts of the world. In China, H5N1, H5N2, H5N6, and H5N8 are currently co-circulating in birds together with H7N9 and H9N2.
“The diversity and geographical distribution of influenza viruses currently circulating in wild and domestic birds are unprecedented since the advent of modern tools for virus detection and characterization. The world needs to be concerned,” states WHO.
Virologists interpret the recent proliferation of emerging viruses as a sign that co-circulating influenza viruses are rapidly exchanging genetic material to form novel strains.
The emergence of so many novel viruses has created a diverse virus gene pool made especially volatile by the propensity of H5 and H9N2 viruses to exchange genes with other viruses. The consequences for animal and human health are “unpredictable yet potentially ominous” says WHO.
On many levels, the world is better prepared for an influenza pandemic than ever before, according to WHO. However, the level of alert is high, and the world remains highly vulnerable, especially to a pandemic that causes severe disease. Nothing about influenza is predictable, including where the next pandemic might emerge and which virus might be responsible. The world was fortunate that the 2009 pandemic was relatively mild, but such good fortune is no precedent, says WHO.
Excellent exercises take time and resources to prepare and run; but they are an essential component of a business continuity programme to prove capability and to train people. It is important to get the best out of them and make sure they deliver against the business recovery objectives.
What makes a good exercise?
With this question in mind, Corpress has created an Exercise Checklist as an aide-memoire to help business continuity, crisis management and emergency professionals develop, run and observe exercises. The document shares Corpress partners’ combined experience gained over 20 plus years delivering global programmes for testing and training.
The Exercise Checklist includes a number of new ideas and approaches to exercises and simulations, which are designed to engage senior executives, reduce development time and maximise engagement across the business.
Get the best from your exercise programme in the year ahead by downloading the Checklist after free registration using the form below:
By John Zeppos, FBCI
Business continuity management in large organizations with many different departments and diverse personalities can be a challenge at times.
When you’re trying to implement good business continuity management in a company that spans countries and time zones it gets even more complicated. Throw in cultural differences between the various regional offices on top of the business-cultural differences within each office, and it can seem like a hard road to nowhere.
As a top-level manager in a multi-national company, you will understand the challenge of getting your own staff to grasp the concept of business continuity, let alone the difficulties involved in communicating these plans to managers in overseas branches: business continuity jargon is hard enough in one language. But communicate you must, because resilience to business disruption affects not only each office's own staff, but the stability of the business as a whole.
If the value that data analytics has brought to businesses can be measured in the extent to which it enables those businesses to retain their customers, it makes sense to drill down on exactly what that enabler is. Most observers would argue that the enabler is Big Data. But the real enabler just might be small data.
That was my key takeaway from a recent conversation with John Rode, senior director of demand generation at Preact, a provider of cloud-based data analytics services in San Francisco that’s focused on reducing customer churn. According to Rode, “small data” is typically CRM data, which he said is the starting point for almost every decision about customers, whether it’s targeting prospects, conversion, up-sell or retention. Rode explained the significance of that this way:
While this data is most definitely “small,” it tells a lot about the customer—how much they pay, for which product, how many employees they have, which industry they are in, their decision-making authority, and so on. Once you begin to analyze customer behavior [associated with] your product, you are essentially operating a dial that takes you from small data to Big Data, depending on the sophistication of your analysis. You can analyze the behavior of each individual separately … and apply algorithms that analyze how their behavior is trending, and thus determine whether they are a churn risk. While this is a lot of data, most folks would still characterize this as small data.
HP has published the 2015 edition of its annual Cyber Risk Report, which looks at the security threat landscape through 2014 and indicates likely trends for 2015.
Authored by HP Security Research, the report examines the data indicating the most prevalent vulnerabilities that leave organizations open to security risks. This year’s report reveals that well-known issues and misconfigurations contributed to the most formidable threats in 2014.
“Many of the biggest security risks are issues we’ve known about for decades, leaving organizations unnecessarily exposed,” said Art Gilliland, senior vice president and general manager, Enterprise Security Products, HP. “We can’t lose sight of defending against these known vulnerabilities by entrusting security to the next silver bullet technology; rather, organizations must employ fundamental security tactics to address known vulnerabilities and in turn, eliminate significant amounts of risk.”
Why are your customers using the cloud? Why aren’t others using it? As an MSP working with cloud-based file sharing, you should know what motivates your clients and prospects to either adopt or avoid the cloud.
Results from a new survey offer an interesting view into what people think of the cloud, how they use it, and what concerns you should address to bring more people into the cloud. Understanding what influences cloud sharing decisions will help you better position your services and be better prepared to handle objections.
Here are some findings from the survey that show why people either are or are not using the cloud, and how you can use that information to your advantage.
Budding tech entrepreneurs with dreams of being the next Bill Gates should look to BJ Farmer as a shining example of how to succeed in this industry.
To listen to the entire interview, click here.
While he may not be quite as successful as Gates (is anyone?), Farmer has enjoyed much more success than most people who start their own tech companies. He is the founder and president of CITOC, a Houston-based IT services firm that specializes in providing premium cloud services and Microsoft 365 consulting.
CITOC recently celebrated its 20th anniversary (1995 – 2015), and in that span CITOC (an acronym for Change Is the Only Constant) has received a slew of awards, most notably winning Houston’s Microsoft Partner of the Year Award in 2013 and 2014. In addition, CITOC was listed in the 2011 edition of Inc.com’s annual Top 5000 list (ranked #3997 for its 2010 revenue of $4.6 million), and it has also been recognized as one of the Top 50 fastest growing tech companies in the Houston metro area seven years running by the Houston Business Journal.
We previously talked to Farmer about a client prospect of his that had a rotating cycle of CIOs being hired and then soon leaving, and this was costing them a lot of money. We wanted to catch up with Farmer on how he helped this client.
What worries chief information officers (CIOs) and IT professionals the most? According to a recent survey commissioned by Sungard Availability Services, security, downtime and talent acquisition weigh heaviest on their minds.
Due to the increasing frequency and complexity of cyber attacks, security ranks highest among IT concerns in the workplace for CIOs. As a result, more than half of survey respondents (51%) believe security planning should be the last item to receive budget cuts in 2015.
While external security threats are top of mind for IT professionals, internal threats are often the root cause of security disasters. Nearly two thirds of survey respondents cited leaving mobile phones or laptops in vulnerable places as their chief security concern (62%), followed by password sharing (59%). These internal security challenges created by employees led 60% of respondents to note that in 2015 they would enforce stricter security policies for employees.
Second to security, downtime is also a leading concern for CIOs. Two in five (42%) respondents consider the testing of their disaster recovery plans vital to their organizations and also among the last line items that should be cut from 2015 IT budgets. Not only is downtime expensive, but the damage to an enterprise’s reputation far outweighs the monetary costs.
Disaster recovery testing dramatically reduces downtime (by 75%) for enterprises deemed 'best-in-class' in disaster recovery and business continuity. In addition, according to the Aberdeen Group, those that adopt strong resiliency plans can expect 90% less downtime per event compared to the industry average.
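To put those percentages in concrete terms, here is a quick sketch; the 10-hour industry-average outage below is an assumed baseline for illustration, not a figure from the Aberdeen Group report:

```python
# Illustrative arithmetic only; the 10-hour industry-average outage is an
# assumed baseline, not a figure from the Aberdeen Group report.
avg_downtime_hours = 10.0

# "90% less downtime per event" for organizations with strong resiliency plans
best_in_class_hours = avg_downtime_hours * (1 - 0.90)

# "reduces downtime by 75%" via disaster recovery testing
with_dr_testing_hours = avg_downtime_hours * (1 - 0.75)
```

Under that assumed baseline, a best-in-class organization would see roughly one hour of downtime per event versus ten for the average enterprise.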
“Today CIOs are more concerned with the resiliency of their organizations and the consequences a disaster can have on an organization’s reputation and revenue stream,” said Keith Tilley, executive vice president, Global Sales & Customer Services, Sungard AS. “The implications that information security and downtime threats place on a business have evolved and become more complex in the last several years, making it a high priority for CIOs.”
It is not just CIOs and IT professionals who are concerned about the cyber threat. According to the Business Continuity Institute's latest Horizon Scan report, cyber attacks are the biggest concern for business continuity professionals as well with 82% of respondents to a survey expressing either concern or extreme concern at the prospect of this threat materialising. Data breach came third on the list with 75%.
(TNS) — When tornado sirens went off in Logan County on May 24, 2011, three Guthrie churches that had volunteered to serve as storm shelters were quickly overrun — and not just by people.
Dogs, cats and birds were packed together in church basements with residents looking to escape the tornado, said Logan County Emergency Management Director David Ball. One man showed up to a church with a boa constrictor wrapped around him, Ball said.
While everyone else was jockeying for space, the man and his snake always seemed to have plenty of room to themselves, Ball said.
Ball spoke Tuesday at the National Tornado Summit in Oklahoma City. Since the May 2011 storm, emergency managers have increasingly concluded that public shelters can do more harm than good, he said. Convincing residents to take steps to make sure their homes are safe during tornado season can be a challenge, he said, but it’s the most viable way to keep residents safe.
Throughout the last few years social media has become a key communications strategy for emergency managers. Whether it’s for sharing preparedness messages during blue-sky times or getting crucial information out in real time during an emergency, platforms like Twitter and Facebook are now part of nearly every agency's public-outreach plan. This evolution in crisis communications has been followed by many, and a recently released study sought to understand what affected populations, response agencies and other stakeholders can expect from tweets in various types of disaster situations.
The study, What to Expect When the Unexpected Happens: Social Media Communications Across Crises (PDF), examined tweets posted during 26 emergency situations in 2012 and 2013. With the goal of measuring the prevalence of different types of tweets during various situations, the researchers examined both the information and its source.
The tweets were classified into six categories, and researchers determined the average percentage of tweets for each: affected individuals (20 percent), infrastructure and utilities (7 percent), donations and volunteering (10 percent), caution and advice (10 percent), sympathy and emotional support (20 percent), and other useful information (32 percent). Tweets classified as other useful information varied significantly, the report says. “For instance, in the Boston bombings and LA Airport shootings in 2013, there are updates about the investigation and suspects; in the West, Texas, explosion and the Spain train crash, we find details about the accidents and the follow-up inquiry; in earthquakes, we find seismological details.”
Data can really be anything, including images, geolocation figures, texts, numbers or some combination thereof.
Thanks to the Internet of Things, more of that data is actually describing a physical thing. For us sci-fi geeks, that inevitably raises the question: Can data create a virtual world to actually interact with these things?
InfoWorld reports that Space-Time Insight is exploring this idea with a pilot data project. It’s using virtual reality headsets such as the Oculus Rift as a way to interact with the data.
The company’s data has a unique physicality to it, since it’s a B2B partner for power, oil and gas, logistics and related industries. For instance, in the power industry, the company collects data about transformers. Space-Time Insight’s solution allows you to see a 3D model of a transformer, as well as any warning signals about what’s wrong. Users could even bypass another application, acting from the 3D space or calling in a work team, InfoWorld reports.
The line between consumer and business technology has gotten increasingly blurry during the past decade. Consumer devices are almost indistinguishable from enterprise gear. But the gap between software and applications in each category is far wider.
That’s a good thing to understand as wearables become more common at work. This conversation between Jim Haviland, VoxMobile’s chief strategy officer and IT Business Edge’s Don Tennant gives a good overview of the current situation with wearables. At one point, Haviland makes clear that the real action will be on the software front:
Hardware always gets the headlines, but apps are where the value creation happens in the enterprise. We have been using the mantra, ‘the right information on the right screen at the right time,’ because the key to valuable innovation with mobility is all about application success and user experience. Wearables expand the possibilities for how and when people interact with apps and data, which can lead to more dramatic successes.
The data center is dead. Long live the data center.
This may be a bit premature, but if the traditional enterprise data center is not dead yet, it certainly is approaching the twilight of its years.
The latest word from 451 Research is that enterprise data center construction is essentially flat across the globe while the new crop of cloud-facing, hyperscale facilities is on the rise. Results for the fourth quarter of 2014 have the installed base growing a paltry 0.2 percent to 4.3 million facilities, propped up only by increased activity among the cloud, service provider and multi-tenant sectors. Enterprise IT still controls an overwhelming portion of the worldwide data infrastructure, some 95 percent, and maintains about 83 percent of data center square footage, according to the report. But for now at least, the trend lines are clearly pointing away from owned-and-operated data center facilities toward more cloud- and service-based activity.
If there’s one thing a lot of SMBs have a hard time outsourcing, it’s their HR operation, simply because of its critical nature. Add the notion of allowing the management of that operation to reside in the cloud, and the reluctance, for some, may increase exponentially. But to what degree is that reluctance warranted?
I recently had the opportunity to discuss that issue with Eric Sikola, general manager of TriNet Cloud at TriNet, a human resources services provider in San Leandro, Calif. As the founder of ExpenseCloud, which TriNet acquired in May 2012, Sikola is a vocal advocate of empowering SMBs with better HR options.
“I founded ExpenseCloud in 2008 because I wanted to help companies and their employees better manage their expense process,” Sikola said. “Having personally felt the pain of the old way of managing expenses, I knew there was a better way, and I wanted to help small- and medium-sized business.”
Sikola said when TriNet acquired Expense Cloud, it gained an additional level of innovation.
Board members and C-suite executives across industries perceive the global business environment in 2015 as somewhat less risky for organizations than in the past two years. In “Executive Perspectives on Top Risks for 2015,” consulting firm Protiviti and the Enterprise Risk Management Initiative at the North Carolina State University Poole College of Management found that this is far from bad news for risk managers, as organizations are actually more likely to invest additional resources for risk management. Internal challenges like succession, attracting and retaining talent, regulation and cybersecurity are drawing the most attention, according to the report.
“Our survey findings indicate that operational risk issues are keeping many senior executives up at night,” said Mark Beasley, Deloitte Professor of Enterprise Risk Management and NC State ERM Initiative director. Indeed, for the third consecutive year, regulatory changes and heightened regulatory scrutiny ranked as the number one risk on the minds of board members and corporate executives, with 67% indicating that it will “significantly impact” their organizations. More than half of global survey respondents indicated that insufficient preparation to manage cybersecurity threats is a risk that will “significantly impact” their organizations in 2015, pushing cyberrisk up three spots from last year to the third-greatest risk.
The extreme weather that has hit much of the country this winter has been labeled “historic” in many quarters, including where I live in eastern North Carolina. While the Northeast has been battered with record-breaking snowfalls, much of the South has been experiencing ice storms and single-digit temperatures for the first time in the lives of many adults. It all raises the question of what impact this is having on IT professionals and the organizations they’re charged with keeping up and running.
While it may well be too late for many organizations that entered this winter ill-prepared from a data protection standpoint, what this winter has taught us is that such unexpected events as the collapse of the roof of a data center due to heavy snow and ice need to be anticipated and addressed in order to be fully prepared for next winter.
One of the major reasons for the surge in shadow IT services in recent years is that many internal IT organizations couldn’t really provide a file sharing and synchronization capability for users of mobile computing devices, which those users naturally went out and found on their own via any number of cloud computing services. Now many of those same IT organizations are building their own private clouds, which naturally require file sharing and synchronization.
To address that need, Connected Data developed file share and synchronization appliances, two more of which the company is unveiling today.
After targeting larger enterprises with previous generations of appliances, Jim Sherhart, vice president of marketing, says the Transporter 15 and 30 appliances are aimed at remote offices and small-to-medium business (SMB) organizations; the solution starts under $2,500 for 8TB of storage, 6TB of which is actually usable for storing data.
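A quick back-of-envelope on that entry price (treating the "under $2,500" figure as exactly $2,500, which is an assumption on my part):

```python
# Hypothetical figures: "under $2,500" is treated as exactly $2,500 here.
price_usd = 2500
raw_tb = 8
usable_tb = 6  # capacity left after protection overhead

cost_per_usable_tb = price_usd / usable_tb  # roughly $417 per usable TB
usable_fraction = usable_tb / raw_tb        # 75% of raw capacity is usable
```

The usable-versus-raw distinction matters when comparing appliances: quoting price per raw terabyte would understate the real cost by a quarter in this case.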
Many efforts to implement ERM are unfocused, severely resource-constrained, and pushed down so far into the organization that it is difficult to establish relevance. The near-term results are “starts and stops” and ceaseless discussions to understand the objective. Risk is often an afterthought to strategy and risk management an appendage to performance management. Ultimately, the ERM implementation runs out of steam and is no longer sustainable.
While there is no one-size-fits-all, the following design principles will help overcome these issues:
Although Washington remains stuck in partisan gridlock, there is one thing that Democrats and Republicans agree on: the need to reduce gridlock in the rest of the country by bringing America's infrastructure into the 21st century.
The basis for that rare consensus is painfully clear. The nation's infrastructure has earned a grade of D+ from the American Society of Civil Engineers, which estimates that it will cost $3.6 trillion to bring our systems to a state of good repair. Across the nation, aging and deteriorating bridges and water treatment plants pose a real threat to public health and safety and a drain on economic growth.
How and when Republicans and Democrats might find common ground to fix the problem remains to be seen. But when that does come to pass, here's another idea that should win support from both sides: our next generation of infrastructure must be resilient.
Picture this. A main water pipe bursts and water begins to flood the warehouse, which is also where you happen to be, smartphone in pocket. To avert serious damage and downtime, you need to find the cut-off valve – quickly. At this point, two scenarios are possible. First scenario: you try to find out who can help by calling reception and trying to note the names they suggest and the phone numbers. Second scenario: you access a directory of resources directly from your smartphone, call the person concerned and turn the call into a video call from that person’s desktop so that you can be remotely guided to where the cut-off valve is and how to shut it. How do you get from scenario one to scenario two?
ContinuitySA provides advice for organizations based in areas where power supplies are unstable.
One risk that has become very real for South African businesses is load-shedding. An unstable power supply with the potential of extended periods of power outages over the next several years creates a range of risks that have to be integrated into current business continuity plans.
“We know that load-shedding is going to occur and, in order to put mitigation strategies in place, we first need to understand what the implications are,” says Michael Davies, CEO of ContinuitySA. “What are the issues that businesses should be looking at? Now is a good time to update your business continuity plans in order to assess the impact of load-shedding on your business and weigh up what your risk appetite is.”
Davies says that because electricity is now so integral to modern society, load-shedding creates a complex and interdependent set of risks over and above the task of just keeping the business’s lights on. These risks need to be understood within the context of each business's strategic plan.
Despite your best efforts – and despite the advanced levels of security in your cloud-based file sharing solution – as an MSP you may eventually find yourself on the wrong end of a data breach. The key question isn’t how to prevent such an incident from happening; even the world’s most security-conscious organizations suffer breaches. Rather, the key question is how much will this inevitable data breach cost you?
Today, the cost is relatively limited and abstract for MSPs. While a data breach can certainly result in a lost customer, or time spent trying to resolve the issue, the real financial costs tend to fall on the client. They are the ones who will pay for compliance violations and lose revenue. After all, it is their data.
But as data breaches increase in both frequency and severity – and as clients rely on you for more of their critical IT functions – it’s only a matter of time before someone decides that the MSP should be held responsible when things go wrong. After all, it is your solution they are using to share data.
Predictive analytics is apparently lucrative for businesses, investors and, of course, predictive analytics companies.
In a recent Forbes column, Lutz Finger noted that predictive analytics companies are attracting multi-million dollar investment deals. Most recently, a company called Blue Yonder secured $75 million in funding from a global private equity firm, which is the “biggest deal for a predictive analytics company in Europe….”
If you’re not familiar with Finger, he’s a director at LinkedIn, an expert on social media and text analytics, and the co-founder and former CEO of Fisheye Analytics. The column shares highlights of his interview with Blue Yonder’s CEO Uwe Weiss, so it’s no surprise that it makes the case for predictive analytics as a sound investment.
It’s not a hard case to make. Gartner predicts a compound annual growth rate of 34 percent from 2012 to 2017, and estimates the market will reach $48 billion. To give you an idea of how that compares, Gartner says MDM was worth $1.16 billion last summer.
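It's easy to sanity-check that projection: a 34 percent CAGR over the five years from 2012 to 2017 implies roughly a 4.3x expansion, which works back to an implied 2012 base of about $11 billion (my back-calculation, not a figure Gartner published):

```python
# Sanity check of the Gartner projection; the implied 2012 base is a
# back-calculation from the $48bn estimate, not a published Gartner figure.
cagr = 0.34
years = 5  # 2012 -> 2017

growth_multiple = (1 + cagr) ** years        # ~4.32x over the period
implied_2012_base_bn = 48 / growth_multiple  # ~$11.1bn starting market size
```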
Innovation has become accepted as central to competitiveness in today’s world, both in new product development and in enhancement of internal processes. Companies struggle with innovation, and there have been numerous attempts to regularize and program it. But the development of truly breakthrough ideas is difficult, and recognizing them when they do arrive can be harder still. We have processes available for vetting ideas and passing them through a series of increasingly selective gateways until they reach the point of usefulness or are discarded altogether. But we do not have good processes for stitching together new ideas and reaching that eureka moment that says a critical new idea has been found.
Some of the ways that ideas are sourced include crowdsourcing, internal suggestions, brainstorming, and the like. There are idea factories employing innovative individuals who apply diverse experience to create an “out of the box” concept. And there are programs such as TRIZ, an innovation program developed in Russia in 1946 that seeks to apply a systemic process to ideation itself, based around principles extracted from patent literature subjected to contradiction, synthesis, and new arrangement. But creation of ideas is forever thwarted by the fact that we don’t really understand the creative process and may, in fact, be generalizing a multitude of processes in a way that makes them impossible to replicate.
Unrelenting frigid weather often means frozen water pipes – one of the biggest risks of property damage. In fact, a burst pipe can cause more than $5,000 in water damage, according to IBHS research.
Structures built on slab foundations, common in southern states, frequently have water pipes running through the attic, an especially vulnerable location. By contrast, in northern states, builders recognize freezing as a threat and usually do not place water pipes in unheated portions of a building or outside of insulated areas.
Pipe freezing can be prevented with the installation of weather stripping and seals. This offers two major benefits: keeping severe winter weather out of a structure, and increasing energy efficiency by limiting drafts and reducing the amount of cold air entering.
Deloitte Analytics Senior Advisor Tom Davenport warned last year that data scientists waste too much time prepping data. After interviewing data scientists, Davenport concluded that they needed better tools for data integration and curating.
Now, a Ventana Research column shows that data scientists aren’t the only ones wasting enormous amounts of time on data preparation at the expense of actual analysis.
Ventana CEO Mark Smith shares research from several reports, all of which demonstrate how much of a time suck data preparation can be without the right tools.
The widespread popularity of social media and associated mobile apps, especially among young people, has potential in public safety, a new study finds.
Use of such sites as Facebook and Twitter has become so significant that universities should strongly consider utilizing them to spread information during campus emergencies, according to a study from the University at Buffalo School of Management called Factors impacting the adoption of social network sites for emergency notification purposes in universities.
Social media not only enables campus authorities to instantly reach a large percentage of students to provide timely and accurate information during crisis situations, the study states, but sending messages through social networking channels also means students are more likely to comply with emergency notifications received.
In the wake of a natural disaster, about a quarter of businesses never reopen. The risks that can severely impact a business after a catastrophe – and that must be planned for – range from primary concerns like a flooded warehouse, to secondary complications like supply chain disruption, to indirect consequences like a transportation shutdown that prevents employees from getting to work.
Planning and securing against natural disaster risks can be daunting and exceptionally expensive, but researchers have found that every dollar invested in preparedness can prevent $7 of disaster-related economic losses. Check out more of the questions to ask and ways to mitigate the risk of natural disasters for your organization with this infographic from Boston University’s Metropolitan College Graduate Programs in Management:
Recently, we had a client pick up a new contract with a company that was escaping a relationship with a bad IT provider. The transition was a nightmare for the business. Why? Because their previous IT company had constantly kept them in the dark about the state of their technology.
How transparent are YOU with your clients?
When you say, “Honesty is the best policy,” you'd better mean it. Be as open as possible with your clients without overloading them on the technical stuff. It’s all about building trust, and you can’t do that if they think you’re keeping secrets from them. Even if something goes wrong with a bad bug or a security breach, you need to keep them in the loop. Own up to everything you do, good and bad, and if it’s bad – make it right.
(Tribune News Service) -- A hiker lost in the mountains of New Mexico called 911 repeatedly, but was routed seven times to non-emergency lines.
A 911 call made by an elderly woman from her home in Texas was picked up by an emergency dispatcher in Tennessee, some 700 miles away.
And an emergency call made last month from a middle school in Delano, Calif., after a young student collapsed and later died there, was routed to a 911 dispatcher in, of all places, Ontario, Canada.
Hundreds of millions of Americans have moved rapidly from traditional land lines to relying on various forms of wireless phone services, making the 911 emergency system ever more complex, experts say, and therefore more subject to misrouted calls or misidentified locations.
As your customers decide whether or not to move their cloud-based file sharing to a hybrid cloud, they will have many questions along the way. Of course, some questions are more common than others – and as their managed service provider, you should be prepared to answer them.
Everyone in IT is anxious to see how the cloud shakes out. When all is said and done, what will the enterprise look like when cloud computing becomes the established model for IT infrastructure?
And some are looking even farther into the future, wondering what, if anything, will come after the cloud?
To be sure, there is no shortage of predictions over how the cloud will evolve over time. IDC’s most recent assessment has hybrid infrastructure heading into 65 percent of enterprises within the year and predicts that by 2017, 20 percent of the industry will be using the public cloud as a strategic resource. As well, more than three quarters of IaaS offerings will be redesigned, rebranded or phased out over the next two years as providers concentrate on more lucrative services higher up the stack.
The utility of the cloud is beyond question at this point, so while most experts can debate the merits of the various architectures, it is hard to imagine IT in the future without a significant cloud presence. NetSuite CEO Zach Nelson told the Australian Financial Review last fall that he believes the cloud to be “the last computing architecture,” because there is no way to improve upon always-on data access from any device anywhere in the world. This may be true, but it was also true in the early 1970s that computer technology was simply too expensive and too complex for the average citizen.
In all the big news about the impact of mobile technology on small to midsize businesses (SMBs), one item that stands out is that SMBs that adopt mobile strategies outperform those that do not. This data comes from a recent study on the mobile revolution by the Boston Consulting Group and Qualcomm. Another report from Juniper Research found that in 2014, SMBs contributed $630 billion to the growing mobile industry, nearly triple the figure from four years prior.
That kind of growth proves that SMBs are not only adopting mobile technologies; they are relying on it to fuel their business growth and change the ways that business is done.
The debate about build versus buy has raged for years. But the total cost of owning your own data center outweighs the perceived benefits, and it looks like the argument in favor of “buy” may have gained the upper hand once and for all.
Let’s talk about it, though, from the point of view of people who are considering building their own and see how their claims stand up to the current state of backup.
Well, it’s time to work on the Business Continuity Management (BCM) / Disaster Recovery (DR) program based on the maintenance schedule. You’ve got your plan all well laid out, and people know it’s coming and are ready to participate…sometimes begrudgingly. Yet, for some reason, your well-thought-out plan isn’t going to plan at all.
Sometimes that’s because what one believes they have, they really don’t. For example, executive buy-in on the need for the BCM/DR program doesn’t always translate into executive support. An executive may buy in to the idea that a specific initiative is needed and give the go-ahead, but if they don’t visibly support the BCM/DR practitioner, others quickly conclude that BCM/DR is just a make-work effort and isn’t something the company executives really – and I mean really – support.
The executive may see it as a checkbox on an audit report and simply want it to go away; to have the golden tick appear in that box so that BCM/DR disappears from the agenda. Again, they see the need to do something but don’t provide the means to get it done: communication channels, resources (both physical and financial) or moral support.
As the number of platforms where enterprise IT organizations can store data proliferates, getting data in and out of those platforms quickly has become a major IT challenge.
To address that issue, Syncsort has released an update to its suite of data integration offerings that adds an “Intelligent Execution Layer,” enabling users to visually design a data transformation once and then run it anywhere—across Hadoop, Linux, Windows, or Unix—on premises or in the cloud.
Tendü Yoğurtçu, general manager for Big Data at Syncsort, says version 8.0 of the company’s DMX Software is designed to provide not only a consistent approach for collecting, transforming and distributing data across multiple platforms, but also one that embeds algorithms that automatically select the optimal execution path based on the type of platform, the attributes of the data and the condition of the cluster.
The goal, says Yoğurtçu, is to allow business users and data scientists to take advantage of a run-time environment that allows them to transform data in flight in a single step.
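The “design once, run anywhere” idea described above can be illustrated in miniature (a hypothetical sketch of the general pattern, not Syncsort’s actual API): the transformation is declared once as data, and a separate execution layer decides how to run it.

```python
# Hypothetical sketch of the "design once, run anywhere" pattern:
# the transformation is declared once; an execution layer runs it.
from dataclasses import dataclass, field
from typing import Callable, Iterable

@dataclass
class Transformation:
    """A declarative pipeline: an ordered list of row-level steps."""
    steps: list = field(default_factory=list)

    def map(self, fn: Callable) -> "Transformation":
        self.steps.append(("map", fn))
        return self

    def filter(self, pred: Callable) -> "Transformation":
        self.steps.append(("filter", pred))
        return self

def run_local(t: Transformation, rows: Iterable) -> list:
    """One possible engine: plain in-process execution. A real
    execution layer would also offer cluster engines and choose
    among them based on data size and cluster state."""
    out = rows
    for kind, fn in t.steps:
        out = map(fn, out) if kind == "map" else filter(fn, out)
    return list(out)

# The same Transformation object could be handed to any engine.
pipeline = Transformation().map(str.strip).filter(lambda s: s)
print(run_local(pipeline, [" a ", "", " b "]))  # ['a', 'b']
```

The point of the separation is that the pipeline definition carries no knowledge of where it executes; swapping engines requires no change to the transformation itself.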
A new survey from identity and access management (IAM) solutions provider SailPoint has revealed there is a "clear disconnect" between cloud usage and IT controls in many businesses.
SailPoint's "2014 Market Pulse Survey" of at least 3,000 employees worldwide showed that one out of every four workers admitted they would take copies of corporate data with them when they leave a company.
Survey researchers also pointed out that one in five employees is "going rogue" with corporate data and has uploaded this information to a cloud application such as Dropbox or Google Docs with the intent to share it outside the company.
"The challenge with cloud applications is that IT organizations must now manage applications that are deployed – and accessed – completely outside the firewall," SailPoint President Kevin Cunningham wrote in a blog post. "Adding to the complexity, employees are starting to use consumer-oriented applications for work-related activities, creating a significant blind spot when it comes to risk."
I have recently detailed the COSO 2013 Framework in the context of a best practices compliance regime. However, there is one additional step you will need to take after you design and implement your internal controls: you will need to assess those internal controls to determine whether they are working.
In its Illustrative Guide, entitled “Internal Controls – Integrated Framework, Illustrative Tools for Assessing Effectiveness of a System of Internal Controls” (herein ‘the Illustrative Guide’), the Committee of Sponsoring Organizations of the Treadway Commission (COSO) laid out its views on “how to assess the effectiveness of its internal controls”. It went on to note, “An effective system of internal controls provides reasonable assurance of achievement of the entity’s objectives, relating to operations, reporting and compliance.” Moreover, there are two over-arching requirements which can only be met through such a structured process. First, each of the five components must be present and functioning. Second, the five components must be “operating together in an integrated approach”. Over the next couple of posts I will lay out what COSO itself says about assessing the effectiveness of your internal controls and tie it to your compliance-related internal controls.
As the COSO Framework is designed to apply to a wider variety of corporate entities, your audit should be designed to test your internal controls. This means that if you have a multi-country or business unit organization, you need to determine how your compliance internal controls are inter-related up and down the organization. The Illustrative Guide also realizes that smaller companies may have less formal structures in place throughout the organization. Your auditing can and should reflect this business reality. Finally, if your company relies heavily on technology for your compliance function, you can leverage that technology to “support the ongoing assessment and evaluation” program going forward.
The harsh winter of 2015 shows no sign of letting up. It’s too late for enterprises to do much to protect themselves this year. The good news is that, though it doesn’t seem so now, the temperatures will moderate and snow will melt relatively soon.
But, with the uncertainty introduced by global warming, it is irresponsible to assume next year won’t be as bad – or even worse. Therefore, it is important to take special note of what can be done to prepare for next winter.
This prudence seems to be lacking, however. A poll commissioned by property insurer FM Global revealed the problem. It found that 32 percent of workers give their employers grades of “F,” “D” or “C” for winter storm preparedness. Fifty-two percent of full-time workers expressed dissatisfaction with their companies’ winter storm preparations.
It’s a terrifying but plausible scenario. You’re in an enclosed crowded place—perhaps a subway or a mall—and a terrorist organization releases lethal quantities of a nerve agent such as sarin into the air. The gas sends your nervous system into overdrive. You begin having convulsions. EMTs rush to the scene while you go into respiratory failure. If they have nerve agent antidotes with them, you may have a greater chance of living. If they don’t, you may be more likely to die. Will you survive?
Thanks to CDC’s Strategic National Stockpile CHEMPACK program, the answer is more likely to be yes.
First responders prepare for CHEMPACK training.
CHEMPACKs are deployable containers of nerve agent antidotes that work on a variety of nerve agents and can be used even if the actual agent is unknown. Traditional stockpiling and delivery would take too long because these antidotes need to be administered quickly. CDC’s CHEMPACK team solves this problem by maintaining 1,960 CHEMPACKs strategically placed in more than 1,340 locations in all states, territories, island jurisdictions, and the District of Columbia. Most are located in hospitals or fire stations selected by local authorities to support a rapid hazmat response. More than 90% of the U.S. population is within one hour of a CHEMPACK location, and if hospitals or first responders need them, they can be accessed quickly, with delivery times ranging from a few minutes to under two hours.
The medications in CHEMPACKs work by treating the symptoms of nerve agent exposure. According to Michael Adams, CHEMPACK fielding and logistics management specialist, “the CHEMPACK formulary consists of three types of drugs: one that treats the excess secretions caused by nerve agents, such as excess saliva, tears, urine, vomiting, and diarrhea; a second one that treats symptoms such as high blood pressure, rapid heart rate, weakness, muscle tremors and paralysis; and a third that treats and can prevent seizures.”
Maintaining CHEMPACKs throughout the nation is challenging, but it is an essential part of the nation’s defenses against terrorism. The CHEMPACK team must coordinate with limited manufacturers to keep the antidote supply chain functioning. CHEMPACK antidotes are regularly tested for potency and are replaced when needed. They must be maintained in ideal locations for quick use by hospitals and first responders. But, having them available is only the first step. Personnel who may use them need to know where they are and must be trained. CDC supports state and local partners as they identify CHEMPACK placement locations and conduct trainings for their responders.
2008 CHEMPACK locations across the U.S.
Terrorist nerve agent attacks are not hypothetical. The Aum Shinrikyo group in Japan used sarin gas twice: a 1994 attack in Matsumoto killed eight people and the 1995 Tokyo subway attack killed 12. Experts agree that these attacks were amateurish and that a better-timed and better-executed attack could have killed many more people.
CDC’s CHEMPACK team is part of the rarely seen network that protects the people of the United States from unusual threats. You might not have heard much about them, but if you are ever attacked by nerve agents, they may be the reason you survive.
I recently had a conversation with someone about BYOD and security. He told me he thought the enterprise was experiencing BYOD fatigue, and that there was a growing attitude that its security problems were overblown. He wasn’t alone in this feeling; I had read articles and heard others repeat similar complaints about BYOD. Perhaps mobile devices weren’t as serious a security issue as once thought?
Or maybe the threats are even worse than we realized. Some recent studies show just how much of a security risk mobile devices have become within the workplace, and this carries over into BYOD security risks as well.
First, a study conducted by Alcatel-Lucent's Motive Security Labs found that mobile malware increased by 25 percent in 2014, and that 16 million devices – mostly Android, but not exclusively – are infected. For the first time, we’re seeing infection rates of mobile devices that rival those on Windows computers. Of the top 20 threats, six involved spyware meant to track location and monitor the user’s communications. The reason for all this malware, according to an eSecurity Planet article, comes down to the device owner:
Leveraging Big Data for operational analytics is generating more interest these days, despite integration concerns. Companies are always looking for ways to reduce operational expenses, and Big Data promises to help.
A recent SCM World report, “The Digital Factory: Game-Changing Technologies That Will Transform Manufacturing Industry,” asked 200 manufacturers around the globe about Big Data and other new technologies. The report is available to clients only, but Forbes recently shared some key findings.
The survey revealed that 49 percent see advanced analytics as a way to “reduce operational costs and utilize assets efficiently,” Forbes notes. It’s telling, too, that only 4 percent said they saw no use case for Big Data analytics in their future.
For many people, stepping into the office can feel like stepping back in time. In an age where so many people carry around mobile computers in their pockets, employees have become frustrated at being forced to use cumbersome technologies such as VPN and FTP to remotely access files stored on an on-premises file server. As a result, many of these employees have resorted to storing more of their data in free, non-secure cloud services like Dropbox.
How do MSPs reconcile the virtues of the file server with the benefits of cloud file sync? One way is to cloud-enable the file server. Here are three ways cloud-enabling the file server keeps the file server sexy and makes your clients happy:
The harshness and repeated ferocity of the winter of 2015 (especially in the New England states) sent many businesses scrambling to update their Business Continuity Plans. The earlier Ebola crisis in West Africa set off the same kind of frenzy. As a wise Business Continuity Management (BCM) guru once said “no good crisis should go unexploited”. What he meant was that public crises can be leveraged to stimulate interest (and funding) for BCM.
The result of the blizzard and Ebola phenomena isn’t just stimulated interest; it borders on panic – for all the wrong reasons. An earlier blog addressed the wisdom of planning for impacts, not for events. These recent snows and epidemics have served to reinforce that advice.
There are so many things that could happen to disrupt your organization. Many of them are as yet unknown (those “black swans”). But are “Scenario Plans” worth the effort? Consider that the 30-day snowfall record for Boston set in January-February 2015 (90 inches) broke the previous record (59 inches) set 37 years earlier (1978). Does it make sense to create a ‘Blizzard Plan’ – if it occurs every 30 years? Likewise, is an ‘Ebola Plan’ really necessary when that specific virus is unlikely to spread in significant numbers beyond West Africa?
Security and compliance skills were named as the top IT skills that hiring managers will be seeking in 2015, according to a survey of 405 senior-level technology professionals conducted by Cybrary.IT from late 2014 to early 2015. And that’s good news for the fledgling cybersecurity training site, which began offering its roster of free security courses a few weeks ago.
While the majority of companies represented in the survey plan to spend the same amount on IT training in 2015 as they spent in 2014, 11 percent said they have no money for IT training at all, and fewer than 25 percent devote even 10 to 20 percent of the total IT budget to training.
Billing itself as the first and only tuition-free massive open online course (MOOC) for IT and cybersecurity training, Cybrary.IT, whose founders came out of the paid IT training space, targets “unserved and underserved” individuals and aims to transform cybersecurity training as a whole, as co-founder Ryan Corey told me upon launch. The price of training is a major issue for individuals and companies, as both attempt to keep up with rapidly changing cyber threats and the growing need for specialized security skills.
No, there is no typo in the title. In today’s C-level world, CRO can stand for Chief Risk Officer, but can also mean Chief Reputation Officer. By definition, the Chief Risk Officer looks after the governance of significant risks (both menaces and opportunities). The Chief Reputation Officer supervises the management of an organisation’s reputation, brand and communications. Looking after risks and reputation are both vital functions for organisations. The question is whether specific job functions are to be created for one or both of them. The definitive answer will depend on different factors.
In the light of recent news that $1bn (£648m) has been stolen since 2013 in cyber-attacks on up to 100 banks and financial institutions worldwide, Konrads Smelkovs of KPMG’s cyber security team says that it is time for financial institutions to be more proactive when it comes to information security.
“These attacks were unique in terms of the organization it took to execute them. However, the tools used by these cyber-crime gangs weren’t particularly sophisticated. It was the persistence and cautious approach of the criminals that netted them the prize. The banks targeted - primarily in Russia and Ukraine - suggest a selective operation in areas where tracking transactions is more complex.
“Financial institutions need to take more of a pre-emptive approach to such attacks. Playing ‘war games’, in which attacks are simulated, is one effective way of highlighting potential weak spots. Each organization should also look to have someone committed to defending their network, rather than someone who merely adheres to prescribed standards. Continued investment in anti-malware technology and internal network monitoring tools remains crucial to staying a step ahead of cyber criminals.”
The UAE’s National Emergency Crisis and Disasters Management Authority has published an updated version of the country’s business continuity standard.
The new UAE Business Continuity Management Standard builds upon the first version, published in 2012, and aligns the standard with international best practices and guidelines. It contains three parts:
- Specifications: sets out all the key parts and elements of the business continuity program.
- Guidelines: interprets how the elements mentioned in the Specifications work in practice.
- Toolkit: includes framework templates for developing a business continuity management system.
CHICAGO – Dangerously low temperatures and bitterly cold wind chills continue to be in the forecast for much of the Midwest this week. The U.S. Department of Homeland Security’s Federal Emergency Management Agency (FEMA) wants individuals and families to be safe when faced with the hazards of cold temperatures.
“Whether traveling or at home, subfreezing temperatures and wind chills can be dangerous and even life-threatening for people who don't take the proper precautions,” said Andrew Velasquez III, FEMA Regional Administrator. “FEMA continues to urge people throughout the Midwest to monitor their local weather reports and take steps now to stay safe.”
During cold weather, you should take the following precautions:
• Stay indoors as much as possible and limit your exposure to the cold;
• Dress in layers and keep dry;
• Check on family, friends, and neighbors who are at risk and may need additional assistance;
• Know the symptoms of cold-related health issues such as frostbite and hypothermia and seek medical attention if health conditions are severe.
• Bring your pets indoors or ensure they have a warm shelter area with unfrozen water.
• Make sure your vehicle has an emergency kit that includes an ice scraper, blanket and flashlight – and keep the fuel tank above half full.
• If you are told to stay off the roads, stay home. If you must drive, don’t travel alone; keep others informed of your schedule and stay on main roads.
You can find more information and tips on being ready for winter weather and extreme cold temperatures at http://www.ready.gov/winter-weather.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.
Follow FEMA online at twitter.com/femaregion5, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at twitter.com/craigatfema. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
For Valentine’s Day, Talend published a fun infographic, “Use Big Data to Secure the Love of Your Customers.” It lists data quality as the second leading challenge with Big Data, but perhaps more striking is the $13.3 million annual financial impact caused by data quality problems.
I’m not entirely sure from the graphic which research group provided that stat, but a 2013 Gartner research paper put the cost higher, at $14.2 million a year.
Actually, there’s no shortage of scary statistics and numbers on the high cost of bad data. For instance, this infographic by Lemonly.com and Software AG notes that bad data:
Recently, there was an online discussion that raised the question of whether both Business Continuity Planning (BCP) and Disaster Recovery (DR) services and implementation can be quantified in terms of real dollar savings. I believe that is a great question—one that anyone in those fields should be asking. And to be clear, I think the answer is a resounding “yes.”
In recent years, it would be very easy to say that dollars have become “scarce” from the standpoint of business planning and operations. Many of our clients have recently shifted their focus toward an improved cost/benefit ratio and greater overall savings in BCP and DR. This eye toward savings extends into both the tactical and—more importantly—strategic areas.
Many businesses across the US score poorly on being prepared for severe winter weather, according to a new poll of America's workforce, commissioned by FM Global.
Nearly one third of full-time American workers (32 percent) assign their employers a grade of C, D or F when it comes to preparedness for a major winter storm, the research finds. Furthermore, more than half of US workers (52 percent) employed full time indicated they are dissatisfied with their employers' preparedness, wanting their company to be better prepared for a winter storm.
"America's feedback speaks to the need for businesses to be more proactive, and overall more resilient, when it comes to winter weather," said Brion Callori, senior vice president, engineering and research, FM Global. "Insurance won't bring back lost customers, market share or fix a damaged corporate reputation for unprepared businesses. A business continuity plan which has been well-tested and communicated to employees can address such risk and help companies avoid costly physical and financial losses."
FM Global recommends the following best practices for businesses to help prevent damage in severe winter weather conditions:
A protracted labor dispute that continues to disrupt operations at U.S. West Coast ports underscores the supply chain risk facing global businesses.
Disruptions have steadily worsened since October, culminating in a partial shutdown of all 29 West Coast ports over the holiday weekend.
The Wall Street Journal reports that operations to load and unload cargo vessels resumed Tuesday as Labor Secretary Tom Perez met with both sides in the labor dispute in an attempt to broker a settlement amid growing concerns over the impact on the economy.
More than 40 percent of all cargo shipped into the U.S. comes through these ports, so the dispute has potential knock-on effects for many businesses.
Oil is hovering around $50 per barrel. For most of the US economy this drop in oil price has provided a much-needed economic boost. One piece on the NPR website, entitled “Oil Price Dip, Global Slowdown Create Crosscurrents For U.S.”, said “economists have suggested the big drop in oil prices is a gift to consumers that will propel the economy.” Liz Ann Sonders, who is the chief investment strategist at Charles Schwab, was quoted as saying “The U.S. economy is 68 percent consumer spending, so right there you know that falling oil prices is a benefit.” Another economist said the positive effects could be “worth $400 billion” for the US economy as a whole.
But in the energy space, particularly in the city of Houston, Texas, this plunge has been devastating. It is so bad that this past week’s issue of the Houston Business Journal (HBJ) provided a ‘Box Score’ for energy company lay-offs. And that was before Halliburton announced a 10%-15% reduction and Hercules Offshore announced that it had laid off some 30% of its work force since last October. Nationally, it will be just as bad for the energy industry. In the NPR piece, David R. Kotok, of Cumberland Advisors, said, “cuts in production and energy company payrolls will cost the U.S. economy up to $150 billion.” The Houston Chronicle headlined it a “Bloodbath”.
I thought about what this plunge in the price of oil could mean for the compliance function in energy and energy-related companies going forward. Many Chief Compliance Officers (CCOs) and compliance practitioners struggle with metrics to demonstrate revenue generation. Most of the time, such functions are simply viewed as non-revenue-generating cost drags on the business. This may lead to compliance functions being severely reduced in this downturn. However, I believe such cuts would be decidedly short-sighted; they would actually cost energy companies far more in both the short and long term.
The more IT pervades businesses, the more IT-based tools hackers have to exploit vulnerabilities. If you want your company to stay safe, you may need to ‘attack’ yourself to find out where the weak points are and fix them to prevent others from breaking in. The following list of hacker tools and techniques will give you an idea of the range of resources readily available over the Internet. Remember also that hackers may be plying their trade every day of the week. By comparison, some organisations may not have the time to run checks more than once or twice a month. If you’re strapped for internal resources, consider other options like third party services to check or boost security.
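As a minimal illustration of checking your own systems (a sketch only; real assessments use dedicated scanning tools, and you should only ever probe hosts you are authorized to test), a simple TCP connect check reveals whether a given port is answering:

```python
# Minimal sketch: test whether a TCP port accepts connections.
# Only run this against hosts you are authorized to assess.
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener we control: bind an ephemeral local port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

print(port_open("127.0.0.1", port))  # True: the port is listening
listener.close()
print(port_open("127.0.0.1", port))  # False: nothing answers now
```

Looping such a check over a range of ports and hosts is essentially what commodity scanners automate; the gap between an attacker probing daily and a defender checking monthly is the window of exposure the passage describes.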
The Business Continuity Institute has published its fourth annual Horizon Scan report. This year’s report has been published in association with BSI.
The BCI Horizon Scan assessed the business preparedness of 760 organizations worldwide and shows that the top three threats that business continuity managers are concerned about are:
- Cyber-attack (82 percent are concerned about this threat);
- Unplanned IT outages (81 percent);
- Data breaches similar to that suffered by Sony in 2014 (75 percent).
Supply chain disruption is seen as the fastest rising threat, climbing to fifth place in this year’s report, up from 16th in 2014. Almost half of those polled (49 percent) identified increasing supply chain complexity as a trend, leaving their organization vulnerable to disruption from conflict or natural disasters.
Despite growing fears over the resilience of their firms, the report records a shock fall in the use of trend analysis by business continuity practitioners, with a fifth of firms (21 percent) failing to invest in this protective discipline. A similar proportion (22 percent) report not employing trend analysis at all, making it a blind spot for organizations. Globally business preparedness shows variations with 8 out of 10 (82 percent) organizations in the Netherlands utilising trend analysis, while just 6 in 10 firms in the Middle East and Africa do so (63 percent).
Adoption of ISO 22301, the business continuity standard, appears to have reached a tipping point with more than half (53 percent) of organizations now relying upon this, up from 43 percent last year. Almost three quarters of firms (71 percent) intend to better align their activities with ISO 22301 over the next 24 months.
You can read the full Horizon Scan report after registration here.
Luke Bird reflects on career progression opportunities in business continuity and how the profession could improve in this area.
As a kid growing up, all I ever wanted to be was a sailor in the navy, and once I got to the right age no one was going to tell me otherwise. So off I went, hell-bent on passing through basic training and finally getting to wear that shiny uniform. Well done me, I thought to myself…
However, it wasn’t until the Monday morning after my big passing-out parade, and following a weekend of celebrations with my family and friends, that it finally hit me: I had absolutely no idea what I wanted to do with my career beyond that point.
It’s really only now at this stage of my career in business continuity and over 10 years later that I can draw some interesting parallels. Much like my experience during basic training in the Navy, my career as a junior professional in business continuity has often involved those long 18-hour days, those difficult superiors (occasionally) and that regular feeling of being a deer in the headlights. However, the greatest parallel I can draw from this collective experience is the way I’m feeling right now: trying to decide on my future.
By Charlie Maclean-Bristol FBCI FEPS
Defining the recovery time objectives (RTO) for your activities is one of the most critical things the business continuity manager will carry out. Get them wrong and the whole basis for your recovery strategy is flawed. Often, rather than being an objective assessment, the RTO is driven by internal politics and by managers wanting their part of the organization (and hence themselves) to be seen as important.
For a long while I have wondered if there was any scientific way, or even a rule of thumb, for defining your RTOs but I have never come across one. A while ago I reached out to the BCMIX LinkedIn Group to ask how members went about defining their RTOs. I got lots of explanations of the process for defining them but no set rule. Most people said that defining RTOs was a combination of common sense, knowledge of the organization, and experience. These are all very good but how is a beginner going to get that experience?
In the absence of any set method of defining RTOs here are my thoughts on the subject:
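In that spirit, one rough rule of thumb (my own illustration, not an established method) is to work backwards from each activity’s maximum tolerable period of disruption (MTPD) and leave headroom so that recovery actually completes before the tolerable limit is reached:

```python
# Illustrative heuristic only: derive an RTO from an activity's
# maximum tolerable period of disruption (MTPD), reserving a
# safety margin so recovery finishes before the limit is hit.

def suggest_rto(mtpd_hours: float, safety_margin: float = 0.5) -> float:
    """Suggest an RTO as a fraction of the MTPD.

    safety_margin is the share of the MTPD kept as headroom;
    0.5 means recovery should finish in half the tolerable window.
    """
    if not 0 < safety_margin < 1:
        raise ValueError("safety_margin must be between 0 and 1")
    return mtpd_hours * (1 - safety_margin)

# e.g. an activity that can tolerate at most 48 hours of disruption:
print(suggest_rto(48))        # 24.0 hours
print(suggest_rto(48, 0.75))  # 12.0 hours
```

The margin is deliberately a judgment call: it is where the common sense, organizational knowledge and experience mentioned above re-enter the calculation, rather than being eliminated by it.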
Cyber-attack is the top threat perceived by businesses, according to the fourth annual Horizon Scan report published today by the Business Continuity Institute (BCI), in association with BSI. Supply chain disruption is reported as the fastest rising threat, up 11 places since last year.
The annual BCI Horizon Scan assessed the business preparedness of 760 organizations worldwide and shows that more than four in five Business Continuity Managers (82%) fear the possibility of a cyber-attack, with 81% worried about the possibility of unplanned IT outages and 75% about data breaches similar to that suffered by Sony in 2014. A recent industry report(i) highlights that the annualized cost of cyber-crime per global company now stands at $7.6 million, a 10.4 per cent year-over-year increase.
Concerns over supply chain disruption were the fastest rising threat, climbing to fifth place in this year’s report, up from 16th in 2014. Almost half of those polled (49%) identified increasing supply chain complexity as a trend, leaving their organization vulnerable to disruption from conflict or natural disasters.
This year’s global top ten threats to business continuity are:
- Cyber-attack – up 1
- Unplanned IT and telecoms outages – down 1
- Data breach – static
- Interruption to utility supply – up 1
- Supply chain disruption – up 11
- Security incidents – up 1
- Adverse weather – down 3
- Human illness – up 3
- Fire – down 3
- Acts of terrorism – down 1
Despite growing fears over the resilience of their firms, the report records a shock fall in the use of trend analysis by business continuity practitioners, with a fifth of firms (21%) failing to invest in this protective discipline. A similar proportion (22%) report not employing trend analysis at all, making it a blind spot for organizations. Globally, business preparedness shows variations, with 8 out of 10 (82%) organizations in the Netherlands utilising trend analysis, while just 6 in 10 firms in the Middle East and Africa do so (63%). Small businesses, evaluated for the first time in this year’s report, are seen to lag behind industry best practice, with just half currently applying international standards for business continuity management.
Howard Kerr, Chief Executive at BSI, commented: “Globalization has brought the world’s conflicts, epidemics, natural disasters and crime closer to home. It is of real concern that this year’s report shows that businesses are not fully utilising information to identify and remedy blind spots in their organizational resilience strategies. Tracking near and long-term threats provides organizations of all sizes with an objective assessment of risks and how to mitigate them. Failing to apply best practice leaves organizations and their employees, business partners and customers at risk.”
The report provides the strong recommendation that the rising costs of business continuity demand greater attention from top management. Encouragingly, adoption of ISO 22301, the business continuity standard, appears to have reached a tipping point with more than half (53%) of organizations now relying upon this, up from 43% last year. Almost three quarters of firms (71%) intend to better align their activities with ISO 22301 over the next 24 months.
Lyndon Bird FBCI, Technical Director at the BCI, commented: “The world faces diverse problems from cybercrime and political unrest to supply chain vulnerabilities and health hazards. This report shows the vital importance of business continuity professionals understanding such trends. No longer can those working in the field believe they can resolve all their problems themselves. As an industry we must work together with our fellow practitioners to deal with the complexity of these threats.”
Click here to download your free copy of the Horizon Scan. If you would like to know more about the report, or perhaps ask some questions, Patrick Alcantara (BCI) and Lorraine Orr (BSI) will be hosting a webinar on Tuesday 24th February at 2pm (GMT) where they will be discussing some of the findings. Click here to register for the webinar.
The derailments this week of two trains carrying crude oil have raised new questions about the adequacy of federal efforts to improve the safety of moving oil on tank cars from new North American wells to distant refineries.
A 100-car, southbound CSX train derailed Monday in a West Virginia river valley, destroying a home and possibly contaminating the water supply for downriver residents. A thundering fireball rose hundreds of feet above the community amid an intense winter storm.
On Sunday, an eastbound oil train derailed in Ontario, Canada, near the city of Timmins, engulfing seven cars in an intense fire and disrupting passenger service between Toronto and Winnipeg.
The most recent accidents follow a long string of crashes that have occurred amid an exponential increase in the amount of crude being transported by rail, as energy production booms across the U.S. and Canada.
(TNS) — When Summer Fowler goes to sleep, the Cranberry mother of three knows computer hackers around the world are working through the night to undo the defenses she spends her days building.
Fowler, 37, is deputy technical director for cybersecurity solutions at CERT, the nation's first computer emergency response team, at Carnegie Mellon University's Software Engineering Institute. She works with Pentagon soldiers, intelligence directors and corporate titans to help them identify key electronic assets, secure them from cyberattacks and plan for what happens if someone steals them.
But at the end of the day, once her children are tucked into bed, Fowler wonders what the impact would be from a real cyber 9/11 attack on the United States.
For a while, it looked like enterprise storage was on a pretty stable development path: convert tape to disk, convert disk to solid state, and ultimately transition the storage array to modular infrastructure featuring server-side and in-memory solutions.
That plan is starting to crumble, however, as developments across multiple storage media are increasing the flexibility of previously staid solutions and even causing some to question storage’s actual role in the emerging virtual data ecosystem.
IBM's James Kobielus, for one, is backing off earlier predictions that 2015 would be a tipping point for SSDs in the enterprise. He still sees SSD dominance as inevitable, but continued investment in hard disk development is doing wonders for storage density and cost-per-bit. So while Flash solutions will likely dominate emerging applications like data mobility and the Internet of Things, tried and true magnetic media still has a lot to offer the old-line functions that many enterprises will continue to rely upon even in a cloud-dominated universe.
Agile methods allow developers to create dependable applications with repeatable results. The same type of practice can also be applied to database development to promote proper data management, which in turn reflects in successful application creation. Efficient data governance is one key toward achieving well developed software more quickly.
However, it seems that for many enterprises, there has always been tension between the development groups and those who manage the data. Developers often lament that issues with data management prohibit quick, adaptive software creation. On the other hand, data management staff feels that the tenets of Agile methodologies don’t consider the needs of data asset management. The clash isn’t new, but today’s business cycles demand software that’s created even more quickly and effectively than ever. This is why Agile development has become so important.
To help your organization achieve a tighter relationship between development and data management, author Larry Burns offers his book, “Building the Agile Database.” In his book, Burns explains the business case behind efficient data management via Agile methods. He also takes time to identify the usual stakeholders involved in application development and database development. Burns gives a detailed view of the financial stakes behind the software development process and ties that to the importance of good data management.
As an IT professional, what would you say are the top three concerns that keep you awake at night? According to the results of a recent survey, your peers listed security, downtime (disaster recovery), and talent management, in that order.
The survey was commissioned by Sungard Availability Services, a cloud computing, disaster recovery, and managed hosting services provider based in Wayne, Pa. I had the opportunity to discuss the findings with Ric Jones, CIO at LifeShare Blood Centers, a blood donation services provider in Shreveport, La., that’s a Sungard AS customer. Jones ranked disaster recovery ahead of security on his own list of concerns, but he indicated that the two are inextricably linked.
“Disaster recovery is extremely important to the success of LifeShare Blood Centers. If the primary datacenter in Shreveport experiences downtime for even a few hours, it disrupts the nonprofit’s ability to collect the data needed to gather and distribute critical, life-saving blood supply,” Jones explained. “Security couples up with disaster recovery, as data breaches are occasionally the cause for a disaster or unplanned downtime. This not only impacts an organization’s reputation, but also their ability to do business efficiently. LifeShare Blood Centers houses private information from donors, and it’s vital to our nonprofit we keep their information protected and out of hackers’ hands.”
In the last several years, there have been an increasing number of storage options. Initially we had just magnetic hard drives with a single rotational speed. Then they started to come in several varieties. Now we have a range of drive speeds starting at 15,000 rpm at the top end, followed by 10,000 rpm drives, then the ubiquitous 7,200 drives, and slower drives with speeds such as 5,900, 5,400, 4,500 and even variable speed drives.
The rotational speed of a disk drive is a strong indicator of performance, price, capacity and power usage. Typically, the higher the speed, the more expensive the drive. A high-speed drive usually has a smaller capacity, better performance and higher power consumption. As drive speed comes down, the price decreases, the capacity increases, the performance decreases, and the power usage decreases.
There are other sources of drive variation, for example, cache size and physical drive size (2.5" and 3.5"). Drives also differ in communication protocol, such as SATA, SAS or Fibre Channel, and in protocol speed, such as 6 Gigabits per second (Gbps), 3 Gbps and slower (the slower speeds appearing on older drives).
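As a quick illustration of the price/capacity tradeoff described above, here is a minimal sketch; the drive specs are hypothetical figures invented for illustration, not vendor quotes:

```python
# Hypothetical drive specs illustrating the tradeoff described above:
# higher rpm generally means a higher price and a smaller capacity,
# and therefore a higher cost per gigabyte. All figures are invented.

def cost_per_gb(price_usd, capacity_gb):
    """Dollars per gigabyte of raw capacity."""
    return price_usd / capacity_gb

drives = [
    {"rpm": 15000, "capacity_gb": 600,  "price_usd": 180},
    {"rpm": 10000, "capacity_gb": 1200, "price_usd": 170},
    {"rpm": 7200,  "capacity_gb": 4000, "price_usd": 150},
    {"rpm": 5400,  "capacity_gb": 6000, "price_usd": 160},
]

for d in drives:
    print(f'{d["rpm"]:>5} rpm: ${cost_per_gb(d["price_usd"], d["capacity_gb"]):.3f}/GB')
```

Even with made-up numbers, the pattern the article describes falls out: the 15,000 rpm drive costs roughly ten times as much per gigabyte as the 7,200 rpm drive.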
The analytics capabilities exist for Internet of Things (IoT) data — it’s the integration of systems and lack of interoperability that will challenge organizations, warns Deloitte Consulting.
Deloitte predicts that the “Analytics of Things” will be one of the top analytics trends in 2015, but also predicts that organizations may have trouble leveraging the data due to proprietary solutions and APIs.
“There needs to be more interoperability, more interconnectivity, more integration of all these devices, otherwise we’re just going to have these competing standards, competing formats and I think you’ll have disappointed customers in the end,” John Lucker, Deloitte Consulting principal and global advanced analytics and modeling market leader, said in a recent interview with IT Business Edge.
Computers are typically robust and reliable. When it comes to doing the same thing over and over again at scheduled times, they leave human beings far behind. That makes IT automation an attractive proposition for many business continuity routines or processes. Where people might forget or botch a data entry because of the monotony of a task, computers remain unaffected. They will check the status of all your branch servers every hour on the hour without fail. They will monitor manufacturing stocks and supply chains and send alerts when any out of bounds situation occurs. What could ever go wrong? Two things at least that human beings still have to help computers sort out.
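The kind of routine described above can be sketched in a few lines; a minimal, hypothetical example of an automated supply check that flags out-of-bounds stock levels (the item names and threshold are invented):

```python
# Minimal sketch of an automated business continuity check: scan
# supply-chain stock levels and raise an alert for anything below a
# reorder threshold. Item names and the threshold are hypothetical.

def check_stock_levels(stocks, minimum=100):
    """Return alert strings for items below the minimum stock level."""
    return [f"ALERT: {item} low ({qty} units)"
            for item, qty in stocks.items() if qty < minimum]

stocks = {"widget-a": 250, "widget-b": 40, "widget-c": 985}
for alert in check_stock_levels(stocks):
    print(alert)
```

In practice such a check would be wired to a scheduler (cron or Task Scheduler) and an alerting channel rather than stdout; the point is that the machine never tires of the monotony.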
Whether you've forgotten to press save, a file has become corrupted, or something more malicious has occurred, I'm sure we've all suffered the frustration of losing data at one time or another. A new study from Kroll Ontrack has now shown just how common this is, revealing that over the 12 month period from 2013 to 2014, one in four (25%) UK workers interviewed as part of the research lost work data due to the malfunction or corruption of technology. This is up from 19% just over two years ago. The report also highlights that only 68% of this data was recovered, meaning that almost a third of all lost work-related data was irrecoverable.
Paul Le Messurier, Programme and Operations Manager at Kroll Ontrack commented: “The business environment is now, more than ever, data driven and digital first. It is therefore extremely alarming that data loss is on the up. If we see this trend continue to build, there is a risk that we will continue to see large scale data disasters as well as negative impacts on the provision of service level agreements to customers. Organisations must prepare for potential data disasters by developing a robust business continuity plan that includes a back-up plan, education for employees and a data disaster strategy if all else fails.”
Additional findings from Kroll Ontrack highlight that one in three UK employees (33%) used personal devices or cloud services to store work-related data in the last 12 months. Recovery rates of work-related data lost on these devices are low: one in five users successfully recovered data from home desktops (19%), just 8% from personal mobile devices and 17% from laptops and tablets.
Le Messurier continued: “With the rise of BYOD the lines between personal and work-related data are being blurred. As such, organisations have to take extra considerations when devising a disaster recovery plan. This includes a full audit of what devices are holding work-related data and ensuring that these devices are being used responsibly. It is also important that businesses understand what data is critical on the device and what is not to ensure that only work related data is backed up to company servers – ignoring personal apps and music.”
NEW ORLEANS—While it may seem counterintuitive at an event that also has an expo, one speaker at the International Disaster Conference today argues that a lot of the “preparedness” products on the market are not worth the price tag—and may even work against public safety.
According to the graduate research of disaster management expert and firefighter paramedic Jay Shaw, dikes and levees reduced people’s preparedness levels by 25% for all hazards including flooding. About three quarters of respondents in his research had experience with a major flood, and 75% felt prepared for a flood. Yet 65% felt unprepared for any other disaster, and 46% did not have any emergency kit, plan or supplies. The dikes in their town, Shaw found, led to a sense of security against flooding risk, and left many unaware of other risks and how to best prepare for them.
Nationally, a 2009 FEMA study found that 57% of people claim to be prepared for a disaster for 72 hours. Under further review, however, 70% of these individuals did not know the basic components of an emergency go-bag or emergency plan.
By Jenny Gottstein
Last August, I embarked on a cross-country train trip to explore how games might be used for disaster preparedness.
In each city I met with first responders, Red Cross chapters, disaster management agencies, and community leaders. The goal was to identify ways to increase resilience through interactive games. The trip was fascinating, and exposed some core truths about our country’s relationship with disasters.
Here is what I learned:
1) The coastal cities generally feel vulnerable and unprepared. By contrast, the states in the middle of the country feel much more confident and capable. For example, everyone I spoke to in Montana was certified in some sort of disaster training, had survived 20 different avalanches or snow storms, and had impressive stockpiles of food and supplies. In other words, Montana is ready.
2) Different regions are facing different challenges in the effort to become more resilient. In Seattle, disaster preparedness professionals need help communicating safety messages to high school and college students. In Milwaukee, the main fear is extreme weather and water contamination. In New York, preparedness resources have to be translated to a population that speaks over 800 different languages. My job was to determine how game mechanics might be applied to overcome these hurdles.
3) Socio-economic factors play a huge role in the severity and impact of disasters. Therefore we can’t take a “one size fits all” approach to preparedness. Building a resilient community doesn’t start and end with emergency kits. We have to tackle larger issues of transportation, housing, and resources way before disasters happen.
4) Despite major disparities across the country, two things remain true for every individual: Confidence and kindness are essential qualities during a crisis. We might be thrown into unprecedented scenarios, but the first step is having confidence in our ability to respond, and the second step is, quite simply, to be kind to others. Kindness can go a long way in de-escalating a crisis. Which presents an interesting challenge: how do we teach this concept through gaming?
5) I’ve heard many people blame our country’s lack of preparedness on apathy. How else would you explain the fact that people still don’t have Go Bags or basic emergency plans for their family? But I don’t think “apathy” is the issue. I believe disasters are so enormous and terrifying, that people simply block them out. It is too big, it is too inaccessible. Therefore the problem isn’t apathy, it is paralysis.
6) The act of “getting prepared” can be isolating and boring. Would I rather go to the hardware store and pick out flashlights for a crisis that is too scary to think about, or spend time with my family and friends? The latter, obviously.
7) Finally, there is one thing that was true in every place I visited on my trip, one thing that united everyone in these incredibly diverse regions: people are more interested and responsive to emergency preparedness messages that are fun and engaging rather than messages focused on motivating people through fear.
So by creating interactive games, we can offer people a different entry point – an opportunity to tackle disaster preparedness in a way that is social, memorable, and fun. We can make something that is boring and isolating and turn it into something engaging and social. We can turn something that is paralyzing, into something that is accessible. We can design games that are entertaining and thought-provoking, without trivializing the disaster experience.
Over the next few years I’ll be exploring these nuances, and designing games as tools for resilience. If you find this interesting, please join me!
Jenny Gottstein is the Director of Games and a senior event producer for Go Game. Jenny has led interactive game projects, creativity trainings and design workshops around the world. Click here to read more about Jenny’s trip.
Data breaches can be terrifying; they can cost a business millions of dollars and cause long-lasting damage to a company's reputation, too.
And it often seems like no matter what companies do, data breaches are unstoppable. But is this really the case?
Let's find out...
(TNS) — When an ice storm hit Augusta, Ga., on Feb. 11, 2014, and lasted into the next morning, the city lacked disaster assessment teams to survey storm damage and had no unified effort to coordinate volunteer help. Nearly half the 57 locations approved for emergency shelter use by the American Red Cross were without backup generators or an alternate power supply.
The city’s debris removal plan was an “incomplete draft” that listed Traffic Engineering and Solid Waste as the departments in charge.
A year later, Fire Chief Chris James says Augusta’s Emergency Management Agency has overhauled its operations to address the problems it encountered.
NEW ORLEANS — Edward Gabriel, principal deputy assistant secretary for preparedness and response for the U.S. Department of Health and Human Services, told a gathering of emergency managers that every incident they respond to is in some way related to health and medical and he revealed a couple of secrets.
Gabriel delivered a keynote address at the International Disaster Conference and Expo in New Orleans on Feb. 10, and talked about some of the work his office is doing to develop resiliency to catastrophic events.
“There are things that we know that you should be aware [of],” Gabriel told the crowd. He was hinting at some of the dangers that could affect the U.S. regarding biological and nuclear attacks. Those threats are treated as real possibilities by the Biomedical Advanced Research and Development Authority (BARDA) under his watch.
Anthem recently said hackers were able to illegally access the health insurance company's IT system, along with personal information from up to 80 million current and former members. And as a result, Anthem landed at the top of this week's list of IT security newsmakers to watch, followed by TurboTax, Trend Micro (TYO) and Avast.
What can managed service providers (MSPs) and their customers learn from these newsmakers? Check out this week's list of the biggest IT security stories to find out:
- “This mirror will give us neither knowledge nor truth.”
So says Dumbledore in J.K. Rowling’s book, Harry Potter and the Sorcerer’s Stone, commenting on a mirror that shows us what our most desperate desires want us to see.
This is an apt analogy for the analytics available in big data solutions. When you suddenly have all the data you could want and can quickly analyze it any way you like, unencumbered by the extraneous effort we have historically had to endure, what happens? Being human, with a tendency to confirm what we want to happen or to relive what felt good in the past, managers often drift into self-sealing and circular analysis that at first doesn’t seem so wrong. Big data analytics has to poke through these subtle and instinctual responses of data denial.
NEW ORLEANS—On the first day of the International Disaster Conference and Expo (IDCE), one of the primary areas of concern for attendees and speakers alike was the risk of pandemics and infectious diseases. In a plenary session titled “Contagious Epidemic Responses: Lessons Learned,” Dr. Clinton Lacy, director of the Institute for Emergency Preparedness and Homeland Security at Rutgers, focused on the recent and ongoing Ebola outbreak.
While only four people in the United States were diagnosed with Ebola, three of whom survived what was previously considered a death sentence, government and health officials cannot afford to ignore the crisis, Lacy warned.
“This outbreak is not just a cautionary tale, it is a warning,” Lacy said. “Ebola is our public health wakeup call.”
A slow start by the Centers for Disease Control, inadequate protective gear in healthcare facilities, and inadequate planning for screening, quarantine and waste management were some of the key failings in national preparedness for Ebola. And all were clearly preventable. A significant amount has been done to improve preparedness, Lacy said, but there is still a significant amount yet to do.
(TNS) — Commissioners and emergency officials in Pennsylvania are calling for reform for what they say is an outdated emergency telephone services law.
The law, enacted in 1990, doesn’t sufficiently address cellphones and other wireless devices and is adversely affecting funding for 911 systems, they say.
“This is the top priority for the (County Commissioners Association of Pennsylvania) this year,” Somerset County Commissioner Pam Tokar-Ickes said.
Tokar-Ickes also serves on the board of directors of the statewide organization.
“Since 1990, there have been significant changes because of technology — a lot more people using wireless devices — and the legislation is a piecemeal collection.”
(TNS) — When Paul Allen picks a cause, he usually takes his time.
The Microsoft co-founder likes to convene brainstorming sessions, consult experts and recruit advisers before making major philanthropic gifts.
But when Ebola flared in West Africa last summer, Allen was among the first private donors to step up. As the toll from the disease soared, he quickly raised his commitment to $100 million — the largest from any individual and double the amount contributed by the Bill & Melinda Gates Foundation.
Now that the epidemic seems to be slowing, Allen is still moving fast.
(TNS) — What would you do with a few seconds or minutes of warning before an earthquake strikes?
When late-night comedian Conan O’Brien considered the question recently, the result was a laugh-out-loud segment with people stampeding into walls, snapping risqué selfies or cranking up the boom box for one last dance.
A more sober — and useful — range of options will be on the table next week, when a small group of businesses and agencies embark on the Northwest’s first public test of a prototype earthquake early warning system.
“Up until now, we’ve been running it and watching the results in-house only,” said John Vidale, director of the Pacific Northwest Seismic Network at the University of Washington.
Enterprise apps are a hot item. I wrote a recent feature that cited research from appFigures, Kinvey and Frost & Sullivan that, in a variety of ways, pointed to the growth in interest on the parts of both developers and their clients.
QuinStreet Enterprise, which publishes IT Business Edge, has released survey research that reveals an important finding: the user interface (UI) and related ease-of-use features rank very high on (if not at the top of) the list of elements important to the success of an enterprise app. The survey, “2015 Enterprise Applications Outlook: To SaaS or Not to SaaS” (free download with registration), found that the key features for enterprise users are easy implementation, smooth integration with existing technology and good security.
No matter what your stance on the cloud and its role in supporting critical vs. non-critical workloads, it should be clear by now that any data infrastructure that remains in the enterprise will be dramatically different from the sprawling, silo-based facilities of today.
Retaining key workloads in-house will likely be a priority for some, but that does not mean the data center isn’t ripe for an upgrade that improves data-handling while lowering capital and operational costs. And the strategy of choice at the moment is convergence.
(TNS) — As earthquakes continue to rattle Oklahomans after a record-setting year, state officials are trying to coordinate their responses and soothe fears.
Secretary of Energy and Environment Michael Teague said Friday his office will develop a website to help keep the public informed of various agency actions on earthquakes. He said it will be modeled after the Oklahoma Water Resources Board’s drought page, drought.ok.gov.
The state had 585 earthquakes greater than a 3.0 magnitude in 2014, up from 109 in 2013. Some studies have linked wastewater injection wells from oil and gas development to increased seismic activity.
“We recognize we have a problem,” said Teague, who heads the Governor’s Coordinating Council on Seismic Activity. “There’s something going on. But the science is not completely settled.”
(TNS) — New Mexico hasn’t had its first zombie infection yet, but if that happens, Nick Generous and others on a Los Alamos National Laboratory team will probably map it on their new Biosurveillance Gateway website.
All epidemics — whether Ebola, measles or zombie apocalypses — begin with patient zero.
“In the earliest stages of outbreak, there’s this critical period of time that officials can enact certain interventions to minimize and prevent the spread,” said Generous, a molecular biologist who helped develop the Biosurveillance Gateway. “So, how do you decide what to do?”
Quarantine, vaccinate or, in the case of that nasty zombie, just shoot its head off?
Telecommunications networks are huge users of energy. The cable industry, for instance, relies upon millions of servers, amplifiers and other network devices throughout vast networks. These all need to be powered. In homes, set-top boxes, gateways and other gadgets need juice, as well.
Cable and telcos, and the companies that support them, are taking steps to control this usage, at least in the home. In 2012, companies connected with the pay television industry entered a voluntary agreement to cut energy use in set-top boxes (STBs). Late last summer, D&R International, Ltd. on behalf of the group, published a report on the impact of the initiative on usage during 2013.
The report, according to Switchboard, the Natural Resources Defense Council staff blog, suggests very strongly that the agreement is having the desired effect. Energy use decreased 5 percent during the year, saving about $168 million. Energy usage by STBs installed in 2013 was 14 percent less than that of devices installed in 2012. The story points out that the next wave of voluntary requirements will increase savings to $1 billion annually when they are implemented in 2017.
Last week, CipherCloud revealed the results of a survey regarding the use of shadow IT. The study found that of the 1,100 cloud applications used in an enterprise setting, 86 percent are used without the authorization of the IT department.
Fellow IT Business Edge blogger Arthur Cole believes that, despite the high use of shadow IT within the workspace, the practice’s decline is inevitable. He wrote:
Now that the cloud has taken a firm hold in the enterprise, shadow IT will diminish naturally as internal resources gain the flexibility and availability that knowledge workers require. In fact, you could argue that shadow IT is a net positive for the enterprise because it creates the impetus to shed aging, silo-based infrastructure in favor of a more flexible, dynamic environment. And ultimately, this will allow many organizations to abolish their IT cost centers entirely in order to focus resources on more profitable endeavors.
Here’s the quick version. Hackers operating in the same cloud server hardware as you can steal your encryption keys and run off with your data/bank codes/customers/company (strike out items that do not apply – if any). Yes, behind that mouthful of a title is a scary prospect indeed. Until recently, this kind of cloud-side hacking possibility had been discussed but not observed. Now a team of computer scientists have managed to recover a private key used by one virtual machine by spying on it using another virtual machine. Therefore a hacker could conceivably do the same to your VM from another VM running on the same server. How worried should you be?
WASHINGTON – On January 30, the President issued Executive Order 13690, “Establishing a Federal Flood Risk Management Standard and a Process for Further Soliciting and Considering Stakeholder Input.” Prior to implementation of the Federal Flood Risk Management Standard, additional stakeholder input is being solicited and considered on how federal agencies will implement the new Standard. To carry out this process, a draft version of the Implementing Guidelines is open for comment until April 6, 2015.
Floods, the most common natural disaster, damage public health and safety, as well as economic prosperity. They can also threaten national security. Between 1980 and 2013, the United States suffered more than $260 billion in flood-related damages. With climate change and other threats, flooding risks are expected to increase over time. Sea level rise, storm surge, and heavy downpours, along with extensive development in coastal areas, increase the risk of damage due to flooding. That damage can be particularly severe for infrastructure, including buildings, roads, ports, industrial facilities and even coastal military installations.
The new Executive Order amends the existing Executive Order 11988 on Floodplain Management and adopts a higher flood standard for future federal investments in and affecting floodplains, which will be required to meet the level of resilience established in the Federal Flood Risk Management Standard. This includes projects where federal funds are used to build new structures and facilities or to rebuild those that have been damaged. These projects help ensure that buildings are constructed to withstand the impacts of flooding, improve the resilience of communities, and protect federal investments.
This Standard requires agencies to consider the best available, actionable science of both current and future risk when taxpayer dollars are used to build or rebuild in floodplains. On average, more people die annually from flooding than any other natural hazard. Further, the costs borne by the federal government are more than any other hazard. Water-related disasters account for approximately 85% of all disaster declarations.
The Standard establishes the flood level to which new and rebuilt federally funded structures or facilities must be resilient. In implementing the Standard, agencies will be given the flexibility to select one of three approaches for establishing the flood elevation and hazard area they use in siting, design, and construction:
- Utilizing best available, actionable data and methods that integrate current and future changes in flooding based on climate science;
- Two or three feet of elevation, depending on the criticality of the building, above the 100-year, or 1%-annual-chance, flood elevation; or
- 500-year, or 0.2%-annual-chance, flood elevation.
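As a rough sketch of the second (freeboard) approach, assuming a hypothetical base flood elevation invented for illustration:

```python
# Hypothetical sketch of the freeboard approach: add 2 ft for standard
# buildings, or 3 ft for critical ones, to the 100-year (1%-annual-chance)
# flood elevation. The base elevation value here is invented.

def design_flood_elevation(base_100yr_ft, critical=False):
    """Elevation a federally funded structure must be resilient to."""
    freeboard_ft = 3.0 if critical else 2.0
    return base_100yr_ft + freeboard_ft

print(design_flood_elevation(12.5))                 # standard building
print(design_flood_elevation(12.5, critical=True))  # critical building
```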
Prior to implementation of the Federal Flood Risk Management Standard, additional input from stakeholders is being solicited and considered. To carry out this process, FEMA, on behalf of the Mitigation Framework Leadership Group (MitFLG), published a draft version of Implementing Guidelines that is open for comment. A Federal Register Notice has been published to seek written comments, which should be submitted at www.regulations.gov under docket ID FEMA-2015-0006 for 60 days. Questions may be submitted to FEMA-FFRMS@fema.dhs.gov.
FEMA will also be holding public meetings to further solicit stakeholder input and will also host a virtual listening session in the coming months. Notice of these meetings will be published in the Federal Register. At the conclusion of the public comment period, the MitFLG will revise the draft Implementing Guidelines, based on input received, and provide recommendations to the Water Resources Council.
The Water Resources Council will, after considering the recommendations of the MitFLG, issue amended guidelines to provide guidance to federal agencies on the implementation of the Standard. Agencies will not issue or amend existing regulations or program procedures until the Water Resources Council issues amended guidelines that are informed by stakeholder input.
FEMA looks forward to participation and input in the process as part of the work towards reducing flood risk, increasing resilience, cutting future economic losses, and potentially saving lives.
FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain and improve our capability to prepare for, protect against, respond to, recover from and mitigate all hazards.
The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.
I was watching one of my favorite news shows late last night when the host came back from commercials with a breaking news story: Health-insurance company Anthem had been breached. The show’s host provided a couple of details of what the breach entailed; he said that it was personal information of customers and employees, their addresses, birthdates, Social Security numbers (emphasis was the host’s).
After that, I knew exactly what I was going to be waking up to this morning: an inbox filled with commentary on this latest high-profile breach and a topic right at hand for today’s blog post.
Much of that commentary applauded Anthem for its quick response to the breach, like this comment from Lee Weiner, SVP of products and engineering with Rapid7:
The Internet of Things is among the trends driving companies to invest in data virtualization, according to Suresh Chandrasekaran, senior VP for data virtualization vendor Denodo.
Data virtualization isn’t normally something you hear in Big Data discussions. I asked Chandrasekaran what problem data virtualization solved for IoT and other Big Data projects. Sensor data is generally pooled in a data repository or data lake, he explained, but it isn’t useful without context.
Data virtualization allows you to leverage sensor and other Big Data and add context using other data sources. For instance, if you’re using sensors to monitor vehicles, you might want to combine that with maintenance records to predict when parts need to be changed.
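The vehicle-maintenance scenario Chandrasekaran describes can be sketched in a few lines: a virtual, combined view over two separate sources, joined on a key, without copying either source into a single store. This is a hypothetical illustration of the idea, not Denodo's product, and all field names are invented:

```python
# Two separate sources: live sensor readings and maintenance records.
# In a real data virtualization layer these would stay in their own systems;
# here they are inlined as hypothetical sample data.
sensor_readings = [
    {"vehicle_id": "V1", "engine_hours": 1050},
    {"vehicle_id": "V2", "engine_hours": 420},
]

maintenance_records = [
    {"vehicle_id": "V1", "hours_at_last_service": 500, "service_interval": 500},
    {"vehicle_id": "V2", "hours_at_last_service": 300, "service_interval": 500},
]

def combined_view():
    """Join the two sources on vehicle_id and flag vehicles due for service."""
    by_id = {r["vehicle_id"]: r for r in maintenance_records}
    for reading in sensor_readings:
        record = by_id[reading["vehicle_id"]]
        hours_since_service = reading["engine_hours"] - record["hours_at_last_service"]
        yield {
            "vehicle_id": reading["vehicle_id"],
            "service_due": hours_since_service >= record["service_interval"],
        }

print(list(combined_view()))
```

The point is that neither source alone answers "which parts need changing"; only the joined view does, which is the context Chandrasekaran says raw sensor data lacks.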
I’d just been sent a Varonis study, written by the Ponemon Institute. “Corporate Data: A Protected Asset or a Ticking Time Bomb?” couldn’t be more timely. The danger in not taking data security seriously is growing.
Let’s talk about this report against those events this week.
Is fast development the enemy of good development? Not necessarily. Agile development requires that databases are designed and built quickly enough to meet fast-paced delivery schedules - but in a way that also delivers maximum business value and reuse. How can both requirements be satisfied? This book, suitable for practitioners at all levels, will explain how to design and build enterprise-quality high-value databases within the constraints of an agile project.
Starting with an overview of the business case for good data management practices, the book defines the various stakeholder groups involved in the software development process, explains the economics of software development (including "time to market" vs. "time to money"), and describes an approach to agile database development based on the five PRISM principles.
There are 12-step programs for many personal issues, so I figured there should be a 12-Step Program for Emergency Managers. I’ve written about our addiction to Department of Homeland Security grants that are administered by FEMA. Therefore it is only natural that we look for ways to escape our addiction and gain control over our individual programs. Getting out of addictive behavior can be difficult.
Generally the concept of 12-step programs is to acknowledge a higher power and give everything over to its control. The only “higher power” that emergency managers have is FEMA, so we are in a bit of a Catch-22 in that we are trying to escape its grant clutches while at the same time giving our lives over to its control. We should at least try this 12-step program that I’ve adapted from Alcoholics Anonymous.
Sometimes it seems as if the enterprise is so caught up in preparing for the future that it fails to notice what is happening in the present.
The cloud is a prime example, with most top data executives enamored by visions of limitless, federated infrastructure able to do anyone’s bidding at the touch of a few mouse clicks. In the meantime, however, few are overly concerned by the unorganized spread of data across external cloud platforms, the so-called shadow IT, despite the significant loss of control it represents.
According to CipherCloud, about 86 percent of enterprise applications are now tied to shadow IT, especially those involved in publishing, social networking and career-based functions. This should be of particular concern to the enterprise considering the increasing sophistication of mobile malware and the ongoing spate of massive data breaches. However, many organizations are not even aware of the scope of the problem: one major enterprise in the survey claimed to have only 15 file-sharing apps in use when in reality the number was nearly 70.
When you dig into data quality—and more of you are—you’ll hear a lot about “good enough” data quality. But what the heck does that mean? And how do you know if you’ve achieved it?
Data folks have long understood that data quality is a continuum. Data quality comes with an associated cost and, at some point, that cost is not worth paying to further “perfect” the data; hence, the concept of “good enough” data quality.
That may have made sense in a relational database world, but now … it’s complicated. The data isn’t just being used for reporting, but is also being leveraged in BI and analytics systems. Data has left IT and is being used to drive decisions across the organization. What’s more, data looks different—it’s now social data, sensor data, external data, Big Data.
Greetings from Venice and a big thanks to Joe Oringel at Visual Risk IQ for allowing me to post his five tips on working with data analytics while I was on holiday in this most beautiful, haunting and romantic of cities. While my wife and I have come here several times, we somehow managed to arrive on the first weekend of Carnivale, without knowing when it began. On this first weekend, the crowds were not too bad and it was more of a locals’ scene than the full, all-out tourist scene.
As usual, Venice provides several insights for the anti-corruption compliance practitioner, whether you labor under the Foreign Corrupt Practices Act (FCPA), UK Bribery Act, both, or some other such law. One of the first things I noticed in Venice was the large number of selfie-sticks and their use by (obviously) tourists. But the thing that struck me was that the street vendors who previously sold all manner of knock-off and counterfeit purses, wallets and otherwise fake leather goods had now moved exclusively to marketing these selfie-sticks. Clearly these street vendors were responding to a market need and had moved quickly to fill this niche.
With faster time to market, massive economy of scale, and unparalleled agility, the cloud is entering enterprises at an unprecedented rate. As a result, hundreds of high-risk cloud applications are commonly used across North American and European organizations, says a CipherCloud report. The report details the results of a comprehensive study of cloud usage and risks, compiled from enterprise users in North America and Europe.
‘Cloud Adoption & Risk Report in North America & Europe – 2014 Trends’ includes anonymised data of cloud user activity collected for the full 2014 calendar year, spanning thousands of cloud applications.
In what is being described as potentially the largest breach of a health care company to date, health insurer Anthem has confirmed that it has been targeted in a very sophisticated external cyber attack.
The New York Times reports that hackers were able to breach a company database that contained as many as 80 million records of current and former Anthem customers, as well as employees, including its chief executive officer.
Early reports here and here suggest the attack compromised personal information such as names, birthdays, medical IDs/social security numbers, street addresses, email addresses and employment information, including income data.
(TNS) — While many coastal communities in the Tampa Bay area have been spared a catastrophic spike in flood insurance rates for now, local city leaders say they’re preparing for the worst over the long haul.
In Belleair Bluffs on Tuesday, the Florida League of Cities hosted the first in a series of meetings throughout the state to encourage city governments to invest more in flood mitigation programs that can reduce the risk of storm damage and lower federal flood premiums for local residents by an average of 20 percent.
Cities can increase those savings for nearly all residents who carry flood coverage by improving storm-water drainage, enhancing building codes, moving homes out of potentially hazardous areas and effectively communicating about storm danger and evacuation routes.
(TNS) — Colorado Springs is making a pitch to host a new state-funded center for fire research, a technology hub that could help propel Colorado to the forefront of revolutionizing how wildfires are fought.
The Colorado Springs Regional Business Alliance plans to submit a report this week detailing why El Paso County, twice victim of catastrophic wildfire, should be the new home for the fire research center.
While the public eye may have been trained on the Colorado Firefighting Air Corps created last year, a lesser-known aspect of the Centennial-based fleet — the Center for Excellence for Advanced Technology Aerial Firefighting — has been on the wish list for some Colorado Springs leaders for months.
Despite increasing attention to cybersecurity and a seemingly constant stream of high-profile data breaches, the primary security method used in businesses worldwide remains the simple password. According to a recent study, the average person now has 19 passwords to remember, so it is not surprising that the vast majority of passwords are, from a security perspective, irrefutably bad, including sequential numbers, dictionary words or a pet’s name.
A new report by software firm Software Advice found that 44% of employees are not confident about the strength of their passwords. While many felt their password practices were extremely or very secure, the group reported: “our findings suggest that users either remain unaware of the rules despite the hype, do not believe them to be good advice or simply find them too burdensome, and thus opt for less secure passwords.”
Among the biggest password sins employees commit:
Data security has become an even bigger topic in the last year following several high-profile data breaches at consumer companies. And much of the focus has been on protecting against the breaches themselves. But are there other ways to protect data? MSPmentor recently took a deeper look at a technology called data masking. Here's what we found.
Many banks, government agencies, hospitals, insurance companies and other organizations that manage highly sensitive information are using a technique to hide their data from cybercriminals – data masking. The technique camouflages the real data that you want to protect by interspersing it with other characters and/or data. So the data hides in plain sight, where it cannot be seen or discovered.
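As a rough illustration of the idea, here is a minimal substitution-masking sketch. Real masking products use far more sophisticated, often format-preserving, techniques; the field names and masking rules below are hypothetical:

```python
import re

def mask_ssn(ssn: str) -> str:
    """Replace all but the last four digits of a Social Security number."""
    digits = re.sub(r"\D", "", ssn)  # strip any separators first
    return "***-**-" + digits[-4:]

def mask_record(record: dict) -> dict:
    """Return a copy of a customer record with sensitive fields camouflaged,
    suitable for use in test or analytics environments."""
    masked = dict(record)
    masked["ssn"] = mask_ssn(record["ssn"])
    # Keep only the first letter of the name so the shape of the data survives.
    masked["name"] = record["name"][0] + "*" * (len(record["name"]) - 1)
    return masked

print(mask_record({"name": "Alice", "ssn": "123-45-6789"}))
```

The key property is that the masked copy still looks and behaves like real data to downstream systems, but the original values cannot be recovered from it.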
Enterprises are scrambling to come up with ways to scale their infrastructure to meet the demands of Big Data and other high-volume initiatives. Many are turning to the cloud for support, which ultimately puts cloud providers under the gun to enable the hyperscale infrastructure that will be needed by multiple Big Data clients.
Increasingly, organizations are turning to in-memory solutions as a means to provide both the scale and flexibility of emerging database platforms like Hadoop. Heavy data loads have already seen a significant performance boost with the introduction of Flash in the storage farm and in the server itself, and the ability to harness non-volatile RAM and other forms of memory into scalable fabrics is quickly moving off the drawing board, according to Evaluator Group’s John Webster. In essence, the same cost/benefit ratio that solid state is bringing to the storage farm is working its way into the broader data infrastructure. And with platforms like SAP HANA hitting the channel, it is becoming quite a simple matter to host entire databases within memory in order to gain real-time performance and other benefits while still maintaining persistent states within traditional storage.
By Leon Adato
In the corporate environment, end users and, more worryingly, the occasional IT pro, are the first to point the finger of blame at the network when an application is sluggish, data transfer is too slow or a crucial Voice over IP (VoIP) call drops, all of which can have a wider impact on the bottom line.
Issues arise when the IT department looks to blame the network as a whole, rather than work to identify problems that are caused by an individual application running on the network. Poor design, large content and memory leaks can all cause an application to fail, yet IT departments can be slow to realise this.
Many companies are reliant on applications to drive business-critical processes. At the same time, applications are becoming increasingly complex and difficult to support, which puts additional pressure on the network. So, the question remains, when there’s an issue with application performance, is it the network or is it the application? How do you short-circuit the ‘blame game’ and determine the root-cause of an issue so it can be solved quickly and efficiently?
In the past we have often heard that people got involved with business continuity through another career, perhaps drifting into it from facilities management or IT security. Now we are finding that more and more people are starting off in a business continuity role; the industry has developed into a career opportunity in its own right and people are joining it straight from school, college or university. In order to develop the industry further and take it forward, we need to inspire and encourage the right people to become business continuity professionals, and where better to do this than in schools.
To meet this aim, the Business Continuity Institute has formed a new partnership with Inspiring the Future, a free service where volunteers pledge one hour a year to go into state schools and colleges and talk about their job, career, and the education route they took. To date, over 7,500 teachers from 4,400 schools and colleges and over 18,500 volunteers have signed up.
Everyone from Apprentices to CEOs can volunteer for Inspiring the Future. Recent graduates, school leavers, apprentices, and people in the early stages of their career can be inspirational to teenagers - being close in age they are easy to relate to; while senior staff have a wealth of knowledge and experience to share. Your insights will help to inspire and equip students for the next steps they need to take.
Inspiring the Future is currently running a campaign called Inspiring Women with the aim of getting 15,000 inspirational women, from Apprentices to CEOs, signed up to Inspiring the Future, to go into state schools and colleges to talk to girls about the range of jobs available, and break down any barriers or stereotypes. For further information click here
Why volunteer in a local school or college?
- Going into state schools and colleges can help dispel myths about jobs and professions, and importantly, ensure that young people have a realistic view of the world of work and the routes into it.
- Getting young people interested in your job, profession or sector can help develop the talent pool and ensure a skilled workforce in the future.
To sign up to Inspiring the Future as a BCI member, simply click here and follow the steps. In the ‘My Personal Details’ section, under the heading ‘My memberships of Professional Association …’ please write Business Continuity Institute and it will appear for you to select.
By signing up, you make it easy for local schools and colleges to get in touch to see if you can help them help their pupils make better decisions about the future. You might be asked if you could take part in a careers fair, in career networking (speed dating about jobs) or do a lunchtime talk to sixth formers about your job and how you got it.
Volunteering for Inspiring the Future is free, easy, effective and fun. Volunteers and education providers are connected securely online, and volunteering can take place near home or work as employees specify the geographic locations that suit them. Criminal Records Bureau checks are not needed for career insights talks, as a teacher is always present.
Inspiring the Future is a UK initiative but if you know of a similar scheme in another country then get in touch and let us know. Our aim is to inspire people to become business continuity professionals all across the world.
When he speaks of that Thursday, Nov. 6, 2014, Dan Hoffman’s memory is a blur. Details come back in hazy pieces. His first recollections flash back to a headache, a throbbing pain that drove him into an afternoon nap. Next he recalls the sensations of heat, waking to a baking swelter. Next the glow of flames, a black canopy of smoke above, coughs shaking his lungs, the fire alarm shrieking, attempting to stand, to breathe, to reach for his cellphone and dial 911.
“My instinct was to get out,” Hoffman said.
He stumbled from the bedroom, to the bathroom, to the living room of his family’s home in Traverse City, Mich. The voice of a dispatcher must have spoken to him through his cellphone. He doesn’t recall it though. He only remembers listening to his own voice. He said the word “help” twice. It was the last thing he heard before collapsing, falling unconscious as his house continued to burn.
If you’re reading about the rising number of measles cases in California, you may also be thinking about pandemic risk.
First, let’s look at the status of measles cases and outbreaks in the United States.
The CDC notes that from January 1 to January 28, 2015, 84 people from 14 states were reported to have measles. Most of these cases are part of a large, ongoing outbreak linked to Disneyland in California.
On Friday (January 30, 2015), the California Department of Public Health released figures showing there are now 91 confirmed cases in the state. Of those, 58 infections have been linked to visits to Disneyland or contact with a sick person who went there.
At least six other U.S. states – Utah, Washington, Colorado, Oregon, Nebraska and Arizona—as well as Mexico have also recorded measles cases connected to Disneyland, according to this AP report.
What about last year?
Don’t think you are vulnerable to an insider threat? You might want to have a conversation with your IT department, then. According to Vormetric's 2015 Insider Threat Report, 93 percent of IT personnel think their company is at risk from an insider threat. And 59 percent of respondents see privileged users, those employees with high-level access to very sensitive data, as the company’s greatest threat.
Thanks in part to the recent Sony hack, insider threats and the dangers they pose are getting a lot more attention than they have in the past. But as Eric Guerrino, executive vice president of the Financial Service Information Sharing and Analysis Center, was quoted in eSecurity Planet, insider threats have been a problem for a long time and a top focus area for security concerns. It’s just that now those beyond IT and security staff are beginning to grasp the severity of the issue.
A survey of New South Wales Shires and Councils has looked at risk management, business continuity, and internal audit practices and identified a number of gaps in some critical areas. Over 50 percent of NSW councils participated in the survey, which was conducted by InConsult.
“The high number of responses has provided data that we believe to be valid and paints a good picture of the current state of risk management in NSW councils” says InConsult Director Tony Harb.
“Overall, we have seen improvements across the board in risk management practices, such as developing formal risk management policies and strategies, formal risk appetite statements and maintaining comprehensive risk registers. More Councils now class their risk management in the ‘proficient’ category of risk management maturity.”
Many CEOs tend to see business continuity management purely within the context of complying with governance codes. But, says Leigh-Anne van As, business development manager at ContinuitySA, CEOs also need to see how business continuity management can help them answer three key strategic questions.
Van As argues that CEOs need to be able to answer ‘yes' to three key questions:
- Do you know which products and services offered by your company are vital to ensuring its strategic objectives can be met?
- Is your organizational structure aligned to the company's strategic objectives?
- Do you know exactly which resources (including human resources) are required for the company to achieve its strategic objectives?
"Companies typically offer a multiplicity of products and services, but CEOs and their immediate teams need to understand which ones are absolutely vital to the company's ability to meet its strategic targets. They also need to understand exactly which resources are essential to delivering those products and services," she explains. "Once they have the answers, CEOs and their teams can allocate investment and attention appropriately, and optimise the company's operations."
According to a study conducted by Kaspersky Lab and B2B International, a Distributed Denial of Service (DDoS) attack on a company’s online resources might cause considerable losses – with average figures ranging from $52,000 to $444,000 depending on the size of the company. For many organizations these expenses have a serious impact on the balance sheet as well as harming the company’s reputation due to loss of access to online resources for partners and customers.
According to the study, 61% of DDoS victims temporarily lost access to critical business information; 38% of companies were unable to carry out their core business; 33% of respondents reported the loss of business opportunities and contracts. In addition, in 29% of DDoS incidents a successful attack had a negative impact on the company’s credit rating while in 26% of cases it prompted an increase in insurance premiums.
DDoS attacks are not just costly, they are also becoming more frequent and more complex. In a different study, one carried out by Arbor Networks, it was revealed that 38% of respondents to a survey experienced more than 21 attacks per month compared to just over 25% in 2013. It was also noted that we are now experiencing much larger attacks, sometimes over 100Gbps and even up to 400Gbps. Ten years ago the largest attack was 8Gbps.
With this as a backdrop, it is perhaps no surprise that cyber attacks have consistently been one of the top three threats for business continuity professionals according to the Business Continuity Institute’s annual Horizon Scan report.
“A successful DDoS attack can damage business-critical services, leading to serious consequences for the company. For example, the recent attacks on Scandinavian banks (in particular, on the Finnish OP Pohjola Group) caused a few days of disruption to online services and also interrupted the processing of bank card transactions, a frequent problem in cases like this. That’s why companies today must consider DDoS protection as an integral part of their overall IT security policy. It’s just as important as protecting against malware, targeted attacks, data leak and the like,” said Eugene Vigovsky, Head of Kaspersky DDoS Protection, Kaspersky Lab.
Most actuaries know about projections that go awry, so we have quite a bit of sympathy for the weather forecasters who missed the mark early this week, says I.I.I.’s Jim Lynch:
Weather forecasts have improved dramatically in the past generation, but this storm was odd. Usually a blizzard is huge. On a weather map, it looks like a big bear lurching toward a city.
This storm was relatively small but intense where it struck. On a map, it looked like a balloon, and the forecasters’ job was to figure out where the balloon would pop. They were 75 miles off. It turned out they over-relied on a model – the European model, which had served them well forecasting superstorm Sandy, according to this NorthJersey.com post mortem.
If you’ve ever wondered whether your data governance committee is covering the right issues, then you’ll want to read Joey Jablonski’s recent column, “12 Step Guide for Data Governance in a Cloud-First World.”
Despite the title, five of the steps are actually a great strategic discussion list for any data governance group. Jablonski says organizations should cover each of the following:
I found this – and have never seen it before:
It’s a strange thing, as it appears to begin at the top of the cycle with ‘Corporate responsibility’. I understand the definition (here’s one from the Financial Times Lexicon: corporations have a responsibility to those groups and individuals that they can affect, i.e. their stakeholders, and to society at large; stakeholders are usually defined as customers, suppliers, employees, communities and shareholders or other financiers). But is it something that should be at the core of the diagram rather than part of a security management cycle? I’m not splitting hairs here; it is about the separation of process from strategy, I think. Further, shouldn’t ‘Understand the Organization’ come first? It does for me. Unless we understand the organization, how can we meet our responsibilities, corporate, security or otherwise?
A growing hazard has emerged in the cloud security space that is threatening organizations from inside of their own physical and virtual walls. As employees across multiple industries continue to adopt ‘shadow cloud’ services in the workplace, organizations and managed service providers (MSPs) need to carefully monitor its effects on security and cloud-based file sharing.
The Cloud Security Alliance’s (CSA) official definition of “shadow cloud” services is “cloud applications and services adopted by individual employees, teams, and business units with no formal involvement from the organization’s IT department.” This unsanctioned cloud usage is a potential security risk to individuals and enterprises alike, as the services are less protected and secured.
Robin Murphy is a leader in the field of disaster robotics, having started working on the topic in 1995 and researching how the mobile technologies have been used in 46 emergency responses worldwide. She has developed robots that have helped during responses to numerous emergencies, including 9/11 and Hurricane Katrina. As director of the Center for Robot-Assisted Search and Rescue at Texas A&M University, Murphy works to advance the technology while also traveling to disasters when called upon to help agencies determine how robots can aid the response. The center’s first deployment was in response to 9/11, which also was the first reported use of a robot during emergency response.
Emergency Management: Since 9/11, how have you seen the use of robots in disasters change?
Robin Murphy: We started out in 2001 and up until 2005 you didn’t see the use of anything but ground robots. Everything was very ground-centric, and I think that reflected the state of the technology. For years we had bomb squad robots, which were being made smaller and smaller for military tactical operations so that gave them a tool that was pretty easy to use. Starting in 2005, we saw the first use of small unmanned aerial vehicles that were being developed primarily for the military market and those were very useful. Those have really come up and, in fact, since 2011, I’ve only found one disaster that didn’t use an unmanned aerial vehicle and that was the South Korea ferry where they used an underwater vehicle. So we went from ground robots dominating to about 2005 and then we started shifting toward unmanned aerial vehicles. In about 2007, it became much more commonplace to see underwater vehicles being used. Then starting in about 2011, I think if you have a disaster and you’re an agency and you haven’t figured out a way to use a small unmanned aerial system, it’s kind of surprising.
Better data storage means different things to different people. For some it is all about speed, for others cost is the primary factor. For many it is about coping with soaring data volumes, while for some, simplicity and ease of install/use are the top-of-mind elements.
Whatever your opinion of what better data storage is, here are a few tips on how to improve storage in the coming year.
What worries chief information officers (CIOs) and IT professionals the most? According to a recent survey commissioned by Sungard Availability Services, information security, downtime and talent acquisition weigh heaviest on their minds.
Due to the increasing frequency and complexity of cyber-attacks, security ranks highest among IT concerns in the workplace for CIOs; as a result more than half of survey respondents (51 percent) believe security planning should be the last item to receive budget cuts in 2015.
While external security threats are top of mind for IT professionals, internal threats are often the root cause of security disasters. Nearly two-thirds of the survey respondents cited leaving mobile phones or laptops in vulnerable places as their chief security concern (62 percent), followed by password sharing (59 percent). These internal security challenges created by employees lead 60 percent of respondents to note that in 2015 they would enforce stricter security policies for employees.
Responses to winter storm Juno seem to show that you cannot please the public when it comes to preparedness. In this article Geary Sikich asks whether business continuity and emergency planners are missing something when it comes to communicating preparedness with the public.
I was supposed to be in Boston presenting at ‘The Disaster Conferences’ on 28 January 2015. Well, the weather just pushed us out to 19 March 2015 for the now rescheduled Boston conference. I guess that they are still feeling the effects of this week’s blizzard, now named ‘Juno’, which left Boston with over 24 inches of snow. According to the Weather Channel, Winter Storm Juno pounded locations from Long Island to New England with heavy snow, high winds and coastal flooding late Monday into Tuesday. The storm is now winding down. The National Weather Service has dropped all winter storm and blizzard warnings for Juno.
Snow amounts in New York have ranged from 9.8 inches at Central Park in New York City to 30 inches on Long Island. The snippets from the Weather Channel and from other news sources barrage us with the details of this latest storm:
- In Massachusetts, up to 36 inches of snow has been measured in Lunenburg, while Boston has seen 24.4 inches. Juno was a record snowstorm for Worcester, Massachusetts (34.5 inches). Incredibly, 31.9 inches fell in Worcester on Jan. 27, alone!
- Thundersnow was reported in coastal portions of Rhode Island and Massachusetts late Monday night and early Tuesday.
(TNS) — Lawmakers are scrambling to fix a problem that could result in Idaho driver's license holders being denied entry to federal facilities nationwide by the end of the year.
The issue arose last week, when the Idaho National Laboratory began enforcing the REAL ID Act.
The act, adopted in 2005, was a response to the Sept. 11, 2001, terrorist attacks. It tries to limit the availability of false driver's licenses and identification cards by imposing detailed security requirements on states for issuing such cards.
Idaho is one of nine "non-compliant" states, meaning the U.S. Department of Homeland Security isn't satisfied with its efforts to implement the act.
Consequently, Idaho licenses and ID cards can no longer be used to gain entry to nuclear power plants, to restricted portions of the Homeland Security headquarters building or - as of Jan. 19 - to INL and certain other federal facilities (see related story, at right).
(TNS) —When disaster strikes in Palm Beach County, Fla., a team of volunteers trained by county emergency managers can be deployed as the first line of defense, helping their communities with everything from search and rescue to basic first aid to putting out small fires.
They can also be called upon to distribute or install smoke alarms, hand out disaster education materials or replace smoke alarm batteries in the homes of the elderly, according to a brochure about the program.
But there's no requirement that they be subject to any kind of criminal background check.
That could change after a concerned Boynton Beach resident complained to the Florida Division of Emergency Management's Inspector General. In a report released last week, the inspector recommended that background checks be a condition of the grants doled out for the program.
Business continuity and cloud file sync services provider eFolder has announced the release of the production version of Cloudfinder for Box, a dedicated cloud-to-cloud backup, search and restore service for Box. The company rolled out the production version of the offering following Box’s (BOX) long-anticipated initial public offering last week.
The Business Continuity Institute is pleased to announce the launch of its new Careers Centre, providing those working in the industry with the support they need to further their career by highlighting the job opportunities available. The BCI Careers Centre will also allow recruiters to find the perfect candidate for them by offering a CV search facility.
If you’re looking for a new job in business continuity or resilience then look no further than the BCI Careers Centre. Powered by JobTarget, the Careers Centre pulls in advertised vacancies from global recruitment sites, as well as those advertised directly with the BCI, and allows users to search by position or location. The system also allows users to set up a job alert so they can be the first to see new vacancies.
If you’re a recruiter then post your job within the Careers Centre to make sure it can be seen by a wide selection of desired candidates. If you’d rather seek people directly then search through the CVs uploaded by business continuity professionals to find the one who is suitable for you, or perhaps a selection that you would like to shortlist. The BCI Careers Centre is an open site with business continuity and resilience specialists from around the world encouraged to register for vacancies.
As the Careers Centre is specifically designed to focus on roles in the business continuity and resilience industry, it might be helpful to know what industry memberships or credentials a potential employee has. If you're a member of the BCI or hold a BCI credential then this will be clearly identified on your profile. It will also be clearly identified if you are on the BCI's CPD scheme.
Big Data will bring new challenges to data governance. Succeeding will require organizations to simplify, prioritize and above all adapt as Big Data use matures.
Yesterday, I shared four Big Data governance challenges:
- Changing data roles
- Broader business involvement
- Business buy-in
- Technical challenges
Let’s look at how those success principles can be applied to the first two Big Data governance challenges.
Is there anything that can’t be connected to the Internet? For example, where I once wore a $10 pedometer clipped to the waistband of yoga pants, I now wear a $130 fitness tracker on my wrist. In the past, I just glanced at the numbers on the pedometer to see how many steps I’d taken; now I log into an app on my smartphone to see how far I’ve walked, how many calories I’ve burned and even how well I’ve slept. Or, if I wanted to, I could turn on any light in the house from the comfort of my couch rather than get up and do so manually. And that barely scratches the surface of the phenomenon known as the Internet of Things (IoT).
However, if we know that virtually everything can now be connected to the Internet, we have to recognize the corollary: everything that can be connected to the Internet can be hacked. That fitness tracker I’ve come to depend on? Much of the information it transmits isn’t sent securely, and its apps have been known to have vulnerabilities. According to Symantec, this could make my movements easy to track and my login details easy to steal. Those smart light bulbs, according to Slate, have insecure transmitters that could share too much information. And what about the home security system you have … you know, the one you turn on and off with your smartphone?
Before you move forward with Big Data, you’ll need to evolve your approach to data governance, experts say.
By now, most organizations are familiar with the basics of data governance: Identify the data owner, appoint a data steward, and so on. While those concepts are still essential to data governance, Big Data introduces new challenges that will require new adaptations.
“The arrival of Big Data should compel enterprises to re-think their approach to conventional data governance,” writes Dan O’Brien for Inside Analysis. “Everything about Big Data – its context, provenance, speed, scale and ‘cleanliness’ – extends data governance far beyond traditional, rigid databases, where it’s already an issue.”
Here’s a look at the new challenges Big Data introduces:
There has been innovation in every aspect of how individuals prepare for major snow storms – everything from funky new snow removal devices to new ways of pre-treating road surfaces for anti-icing before the onset of a major storm. Now, the real promise is in taking some of Silicon Valley’s hottest technologies — the Internet of Things, artificial intelligence, crowdsourcing, renewable energy and autonomous vehicles — and using them to improve the way cities respond to blockbuster snow events such as the Blizzard of 2015:
It was an unprecedented step for what became, in New York City, a common storm: For the first time in its 110-year history, the subway system was shut down because of snow.
Transit workers, caught off guard by the shutdown that Gov. Andrew M. Cuomo announced on Monday, scrambled to grind the network to a halt within hours.
Residents moved quickly to find places to stay, if they were expected at work the next day, or hustle home before service was curtailed and roads were closed.
And Mayor Bill de Blasio, whose residents rely upon the transit system by the millions, heard the news at roughly the time the public did.
“We found out,” Mr. de Blasio said on Tuesday, “just as it was being announced.”
Marshall Goldsmith, an executive coach to the corporate elite, is the author of the very popular book called What Got You Here Won’t Get You There. And while the title may be true as it relates to your individual career path, I have news for C-suite executives everywhere: it is not true when it comes to adopting new technology. In fact, what got you here – to your current state of success – is precisely what will get you to the next level. The problem is, as Chief Information Officers (CIO) and IT professionals, we sometimes allow ourselves to be pressured into acting contrary to what we know is the right thing to do.
Here’s what happens. A CEO approaches a CIO and says (in a nutshell), “What’s our cloud strategy? We have to get everything into the cloud.” The CEO has read the analysts, seen the marketing materials, been to the trade shows, and talked to peers. Is it any wonder that he or she comes to the CIO with an urgent “let’s-move-it-all-before-we-get-left-behind” deliverable? The cloud is the newest, latest, greatest, sexiest thing out there. It has benefits galore. Let’s get in on this. Now.
Were most of the data breaches that occurred in the first half of last year preventable? According to the Online Trust Alliance (OTA), a nonprofit organization that provides businesses with online security best practices, 90 percent of these incidents "could have easily been prevented."
And thanks in part to its recent findings, the OTA sits atop this week's list of IT security newsmakers to watch, followed by Adobe (ADBE) Flash Player, Kaspersky Lab founder Eugene Kaspersky and St. Peter's Health Partners.
Research published by ISACA has shown that close to half (46%) of respondents to a global survey of IT professionals expect their organization to face a cyber attack in 2015 and 83% believe cyber attacks are one of the top three threats facing organizations today. Despite this, 86% say there is a global shortage of skilled cyber security professionals and only 38% feel prepared to fend off a sophisticated attack.
It is not just IT professionals who are worried about cyber attacks, the Business Continuity Institute’s own Horizon Scan report showed that cyber attacks and data breaches are two of the greatest threats to organizations. It is therefore vital that they have systems and people in place to combat these threats or, should any attack be successful as they all too often are, have processes in place to manage the aftermath.
Data breaches at a series of high profile retailers in 2014 made the issue of data security particularly visible to consumers and demonstrated the struggles that companies face in keeping data safe. Finding and retaining skilled cyber security employees is one of those challenges. In fact, 92% of ISACA’s survey respondents whose organizations will be hiring cyber security professionals in 2015 say it will be difficult to find skilled candidates.
“ISACA supports increased discussion and activity to address escalating high profile cyber attacks on organizations worldwide,” said Robert E Stroud, International President of ISACA. “Cyber security is everyone’s business, and creating a workforce trained to prevent and respond to today’s sophisticated attacks is a critical priority.”
On the surface (pardon the pun), NASA’s recent move to the cloud would not seem to have much to do with MSPs who offer cloud-based file sharing. But a closer look into the high-profile project – as recently highlighted on GigaOm – proves otherwise.
Indeed, there are some things that all cloud transitions have in common, whether it’s the nation’s space program or a 10-person SMB. To illustrate our point, we wanted to examine this story through the lens of a managed service provider and their clients. Here we go…
(TNS) — From intuitive improvements — such as better statewide communication and pre-storm protocols — to more sensible plow blades and smarter technology for plow truck drivers, the crews at the Pennsylvania Department of Transportation’s (PennDOT) District 9 are becoming more equipped each year to handle Pennsylvania weather as efficiently as possible.
“The key words are ‘situational awareness,’” said Walter Tomassetti, assistant district executive for PennDOT’s District 9, which includes Cambria and Somerset counties. “The focus is on being ahead of the storm.”
Now, when PennDOT officials see major weather coming, such as double-digit snow, representatives from each district statewide have a pre-storm meeting to cover what resources will be needed most — and where. Depending on what’s expected, they also may set up a command center in each district.
(TNS) — There's something that has appeared on the Diamond School District campus that is so anticipated that it's drawing youngsters away from their recess to watch it in action.
It's a bulldozer, and it's turning ground outside the elementary school in preparation for a new safe room — Diamond, Mo.'s first official community storm shelter.
"If we could do something about it, then let's do it," Superintendent Mike Mabe said of his school district's proposal to try to maximize safety in case of severe weather. "It's just the right thing to do."
The recent collapse of an Interstate 75 overpass in Cincinnati, killing a worker and injuring a truck driver, is yet another reminder of the plight of America’s infrastructure, which is estimated to require billions of dollars to bring up to 2015 standards.
The bridge that collapsed had been replaced and was being torn down as part of an extended project to increase capacity on a congested, accident-prone section of the interstate, according to the Associated Press.
President Obama, speaking today in Saint Paul, Minnesota, outlined several proposals, including launching a competition for $600 million in competitive transportation funding and investing in America’s infrastructure with a $302 billion, four-year surface transportation reauthorization proposal, according to a press release from the White House. Obama also plans to “put more Americans back to work repairing and modernizing our roads, bridges, railways, and transit systems, and will also work with Congress to act to ensure critical transportation programs continue to be funded and do not expire later this year.”
Federal leaders want to like the cloud. They really do.
Then again, they have to — they’re under a cloud-first mandate. And yet, they’re still not gung-ho when it comes to actually pursuing adoption, a recent survey shows.
Every year, MeriTalk surveys federal managers about cloud adoption. In the latest survey of 150 federal executives, nearly one in five say one-quarter of their IT services are fully or partially delivered via the cloud.
For the most part, they’re shifting email (50 percent), web hosting (45 percent) and servers/storage (43 percent). They’re not moving traditional business applications, custom business apps, disaster recovery, ERP or middleware.
And it seems they’re pretty happy with that so far. This year, 75 percent said they want to migrate more services to the cloud — except they’re worried about retaining control of their data.
As the blizzard of 2015 starts to hit hard across the Northeast, with several feet of snow, intense cold and high winds expected, utility companies are warning of widespread and potentially lengthy power outages across the region.
In New Jersey, utility companies say it’s the high winds, with gusts of up to 65 mph, rather than the accumulation of snow, that are likely to bring down trees or tree limbs and cause outages.
Consolidated Edison Inc., which supplies electricity to over 3 million customers in New York City and Westchester County, told the WSJ that the light and fluffy snow expected in this blizzard should limit the number of power outages, but elevated power lines could come down if hit by trees.
The answer to this question depends on how fast you want your data back and how much time and effort you are prepared to spend. If your data is both mission and time critical, then full, frequent backups possibly with mirrored systems for immediate restore or failover may be the only solution. Financial trading organisations, large volume e-commerce sites and hospital emergency wards are examples. Other users who do not want to or cannot go down this route will be faced with more basic options.
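The trade-off described above can be made concrete: the interval between backups bounds your worst-case data loss, often called the recovery point objective (RPO). A minimal sketch of that arithmetic, with all figures hypothetical:

```python
import math

def worst_case_data_loss_hours(backup_interval_hours: float) -> float:
    """If backups run every N hours, a failure just before the next
    backup loses up to N hours of data."""
    return backup_interval_hours

def backups_needed(rpo_hours: float, hours_per_day: float = 24.0) -> int:
    """Minimum backups per day to keep worst-case loss within the RPO."""
    return math.ceil(hours_per_day / rpo_hours)

# A mission-critical system targeting a 1-hour RPO needs 24 backups a day;
# a less critical system with a 12-hour RPO needs only 2.
print(backups_needed(1.0))
print(backups_needed(12.0))
```

This is why the article's "how fast do you want your data back" question drives cost: halving the acceptable loss window roughly doubles the backup workload.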
Advice from James Leavesley, CEO, CrowdControlHQ.
Social media is no longer the exclusive preserve of the ‘Facebook Generation’ eager to connect with each other or simply a channel for consumer advertisers. It is fast becoming a valuable multi-faceted communications tool with many industries actively using social media networking sites to promote their products and services and drive commercial success.
Mirroring the trend, the finance industry is also waking up to the power of engaging with customers through social media at a time when its clients are increasingly turning to online resources for information and advice. Last year, consultancy giant Capgemini forecast that social media was on its way to becoming a “bona fide channel for executing transactions” and previously a study by Accenture stated that half of US financial advisers had successfully used social media to convert enquiries into clients. So far, so good so what’s the catch?
Information security has become a fixture in the daily headlines, ranging from the latest high-profile data breach; to exotic hacks of USB drives, ICS devices and IOT systems; and new zero-day exploits and attack techniques. While these stories are interesting and help us understand the vulnerabilities and risks that make up the threat landscape, they reflect a frequent bias in the industry towards focusing on the ‘cool’ exploit and detection side of cyber-defense, rather than the more operational response and mitigation side. One of the results of this focus, as reported in a recent SANS study, is that for over 90 percent of incidents, the time from incident discovery to remediation was one hour or longer.
This appears to be changing, however, as new reports shine a spotlight on incident response as both welcome and essential, and now courts are reinforcing that sentiment. This article by Proofpoint considers the other side of the equation and looks at incident response. A comprehensive view of threat management includes people, processes and tools, in a process outlined below.
By Sal DiFranco
Misrepresentation isn’t reserved for entry-level interviewees. Chief Information Officer (CIO) candidates can exaggerate their accomplishments with the best of them. Let’s say you and your fellow C-suite executives need to hire a CIO. You know what you want – that picture-perfect ideal CIO candidate. Someone who is current on technology while being business savvy. Someone who takes smart risks when it comes to new technology, but who has insight on when to maintain the systems already in place. Someone who can talk to any segment of the business in their own terms, rather than resorting to technical jargon.
Of course, when interviewing CIO candidates, they will all try to make you believe they are that ideal CIO. It is up to you to identify any bull that gets tossed around during the interview process, which is why I’ve come up with five specific points to watch out for.
(TNS) — Gov. Jerry Brown’s office is urging state emergency and law enforcement agencies to take advantage of a system that uses cellphone towers to pinpoint and send alerts.
Established in 2012 through a collaboration between the Federal Emergency Management Agency, the Federal Communications Commission and the wireless industry, the Wireless Emergency Alerts system is meant to complement existing alert systems.
“The Wireless Emergency Alerts are just one addition,” said Lilly Wyatt, an Office of Emergency Services spokesperson. “It’s an additional tool that local agencies can use for public messages.”
Of the 58 counties in California, only 24 have signed up to send alerts through the system.
What do you do when you are responsible for the safety of town, county or state residents and forecasts call for drastic weather conditions? Risk professionals can come under criticism if they are overly cautious, yet under-reacting can mean lives are at stake.
Take the current situation here in New York, New Jersey and Connecticut. Predictions called for one- to three-feet of snow and blizzard conditions over a wide swath of the tri-state area and states of emergency were declared. Governor Andrew Cuomo of New York yesterday called for a full travel ban in 13 counties, beginning at 11:00 p.m. Those breaking the ban were subject to fines of up to $300, he said.
“With forecasts showing a potentially historic blizzard for Long Island, New York City, and parts of the Hudson Valley, we are preparing for the worst and I urge all New Yorkers to do the same – take this storm seriously and put safety first,” Gov. Cuomo said.
“I always knew I was going to be somebody. But now I wish I had been more specific.” – Lily Tomlin
In April 2014 at a conference on “Redefining Roles: Embracing the Patient as Partner,” one of the speakers, a Ph.D. and President of a division of UnitedHealthcare Corporation, began by taking a step back in time to recount the historical evolution of risk management practiced by the leading doctors of the past.
During the early settlement of the United States, the principal medical treatment consisted of “blood letting.” In the 1700s, during the Yellow Fever epidemic, Benjamin Rush, a physician signatory of the Declaration of Independence, bled 100 to 125 people per day. Other treatments included “purging,” “sweat boxes,” “mercury ointments” and “medicinal hanging.” The treatments sound worse than the illnesses.
Before anesthesia, medicine was a horror show, with surgery often resulting in death from shock. Successful amputations were based on the speed and strength of the surgeon often at the expense of the fingers of surgical assistants.
In New York City, obtaining a public data set once required an open records request and a researcher toting in a hard drive.
So grab a notepad, Big Apple, and let the Windy City show you how to do open data.
A recent GCN article describes how Chicago simplified the release and updating of open data by building an OpenData ETL Utility Kit.
Before the kit, the process was onerous. Open data sets required manual updates made mostly with custom-written Java code.
That data updating process is now automated with the OpenData ETL Utility Kit. Pentaho’s Data Integration ETL tool is embedded into the kit, along with pre-built and custom components that can process Big Data sets, GCN reports.
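The kit itself is built on Pentaho's Java-based tooling, but the general pattern it automates — extract from a source, transform to a consistent schema, publish — can be sketched in a few lines. Everything below, including the field names, is a hypothetical illustration of that extract-transform-load flow, not the actual OpenData ETL Utility Kit API:

```python
import csv
import io

def extract():
    # In a real pipeline this would query a source database or API;
    # here we return invented sample rows.
    return [{"ward": "1", "permits": "14"}, {"ward": "2", "permits": "9"}]

def transform(rows):
    # Normalize types so the published dataset stays consistent
    # across automated refreshes.
    return [{"ward": int(r["ward"]), "permits": int(r["permits"])} for r in rows]

def load(rows):
    # Emit a publishable CSV instead of hand-editing the dataset.
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["ward", "permits"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(load(transform(extract())))
```

Scheduling a script like this is what replaces the custom-written Java updates the article describes as onerous.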
GENEVA — The number of people falling victim to the Ebola virus in West Africa has dropped to the lowest level in months, the World Health Organization said on Friday, but dwindling funds and a looming rainy season threaten to hamper efforts to control the disease.
More than 8,668 people have died in the Ebola epidemic in West Africa, which first surfaced in Guinea more than a year ago. But the three worst-affected countries — Guinea, Liberia and Sierra Leone — have now recorded falling numbers of new cases for four successive weeks, Dr. Bruce Aylward, the health organization’s assistant director general, told reporters in Geneva.
Liberia, which was struggling with more than 300 new cases a week in August and September, recorded only eight new cases in the week to Jan. 18, the organization reported. In Sierra Leone, where the infection rate is now highest, there were 118 new cases reported in that week, compared with 184 in the previous week and 248 in the week before that.
ISO 22318 is a guidance document developed by ISO to address Supply Chain Continuity Management (SCCM). It has been created to complement ISO 22301, the specification for Business Continuity Management Systems, and its associated guidance, ISO 22313.
Before Standards are finalised there is a process of review and comment that helps ensure the quality and consistency of the content they contain.
ISO 22318, despite being called a technical specification, is a guidance document that aims to help those managing BCMS programmes better address the challenge of supply chain continuity.
As one of the goals for the New Year, companies should take stock of how resilient they are, and take steps to improve their ability to prevent disasters, and to recover should one occur.
“As part of their business continuity management, companies assess the risks they face, prioritise them and then put mitigation plans in place. That’s prudent and best practice, and something every board should insist is being done on an ongoing basis,” says Michael Davies, CEO of ContinuitySA. “In addition, we all understand that the risk climate is becoming increasingly complex and that the chances of a totally unexpected ‘Black Swan’ event are growing. For that reason, we think companies also need to see business continuity as a way to build a business that’s resilient by nature, intrinsically prepared to bounce back from anything. Companies should also become more proactive in avoiding disruptions associated with disasters rather than reacting to them when they occur.”
In fact, Davies argues, this type of approach can help executives and their boards enhance their oversight of the company, and discharge their obligation to ensure the company’s long-term sustainability.
The formal business continuity plan and management processes should provide the starting point for setting about building a more resilient organization, says Davies.
“Once you have done your best to pinpoint all the risks and put mitigation plans in place, then it’s time to put measures in place to help ensure you are prepared for the unexpected,” he notes. “Based on ContinuitySA’s own assessment of the risk environment and our experience with clients, we think the following seven initiatives will enhance organizational resilience.”
HOB has published the results of a new survey which set out to quantify employee knowledge and understanding of their organization’s emergency procedures in the event of a natural disaster or an epidemic.
‘An Inside Look at Disaster Recovery Planning’ surveyed 916 employed people in five cities across the United States: Houston, Los Angeles, Miami, New York and San Francisco.
When asked if their place of employment has emergency procedures in place to ensure the security of company information and data, 40 percent of respondents stated their company either does not have systems in place to protect data in an emergency, or they are not aware of the existence of these procedures.
Vision Solutions Inc., has published its Seventh Annual State of Resilience Report. Entitled ‘The Future of IT: Migrations, Protection & Recovery Insights,’ the report looks at trends, opportunities and challenges.
Highlights of the report include:
- Nearly 75 percent of respondents have not calculated the hourly cost of downtime for their business;
- For those who experienced a storage failure, nearly 50 percent lost data in the process due to insufficient disaster recovery methods or practices;
- Nearly two thirds of those surveyed said they delayed an important data migration for fear of downtime or lack of resources;
- Hosted private cloud is still the most prevalent cloud environment at 57 percent usage; hybrid cloud adoption lags at 32 percent with room to grow;
- Despite the growing popularity of cloud, nearly two thirds state they do not have high availability or disaster recovery protection in place for their data once it is in the cloud.
The report combines findings from five industry-wide surveys of more than 3,000 IT professionals.
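The first finding above — that nearly 75 percent of respondents have not calculated the hourly cost of downtime — is striking because a first approximation is simple arithmetic. A back-of-the-envelope sketch, with entirely invented inputs:

```python
def hourly_downtime_cost(annual_revenue, revenue_hours_per_year,
                         employees_idled, loaded_hourly_wage):
    """Rough hourly cost of an outage: lost revenue plus idle labor.
    Ignores reputational damage and recovery costs, so treat it as a floor."""
    lost_revenue = annual_revenue / revenue_hours_per_year
    idle_labor = employees_idled * loaded_hourly_wage
    return lost_revenue + idle_labor

# Invented example: $50M/year earned around the clock (8,760 hours),
# with 200 staff idled at a $60/hr loaded wage.
print(round(hourly_downtime_cost(50_000_000, 8760, 200, 60), 2))
```

Even this crude floor is enough to compare against the price of the high-availability and disaster recovery measures the report finds missing.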
Businesses face new challenges from a rise of disruptive scenarios in an increasingly interconnected corporate environment, according to the fourth Allianz Risk Barometer 2015. In addition, traditional industrial risks such as business interruption and supply chain risk (46 percent of responses), natural catastrophes (30 percent), and fire and explosion (27 percent) continue to concern risk experts, heading this year’s rankings. Cyber (17 percent) and political risks (11 percent) are the most significant movers. The survey was conducted among more than 500 risk managers and corporate insurance experts from both Allianz and global businesses in 47 countries.
“The growing interdependency of many industries and processes means businesses are now exposed to an increasing number of disruptive scenarios. Negative effects can quickly multiply. One risk can lead to several others. Natural catastrophes or cyber attacks can cause business interruption not only for one company, but to whole sectors or critical infrastructure,” says Chris Fischer Hirs, CEO of Allianz Global Corporate & Specialty SE (AGCS), the dedicated insurer for corporate and special risks of Allianz SE. “Risk management must reflect this new reality. Identifying the impact of any interconnectivity early can mitigate or help prevent losses occurring. It is also essential to foster cross-functional collaboration within companies to tackle modern risks.”
Whether you realize it or not, many companies have workstations running software that is not approved by the information technology (IT) department; instead, it has been adopted and installed by individuals or even, in some cases, entire departments. We call this use of unapproved applications or third party cloud services ‘Shadow IT’ due to its clandestine or covert status.
More often than not, these activities are not malicious in nature: they are merely a means of maintaining productivity when IT response times to support requests are sadly lacking. One key – and often overlooked – aspect of shadow IT is found in development environments where some users/developers are using public clouds to do development work, or running their own open source software in a virtual machine (VM) on someone else’s cloud.
There’s no doubt that managing databases and associated middleware has become more complicated over the years. Given the fact that the number of people with the skills needed to manage that class of IT infrastructure has not risen appreciably, there’s naturally going to be a requirement for increased reliance on automation.
With the unveiling of Oracle Enterprise Manager Cloud Control 12c Release 4, Dan Koloski, senior director of product management and business development at Oracle, says that the company has added a raft of new data governance capabilities designed to make it easier to manage large “data estates.”
The new capabilities include the ability to detect differences across databases to eliminate configuration drift, the capacity to patch fleets of databases at the same time, and tools that optimize the placement of databases based on current workloads and other IT infrastructure constraints and requirements.
If you use ICS (Incident Command System) forms – and you’re like most users – you hate them. While simple in design, the forms can be cumbersome to manage. Your organization (federal, state, municipal government, gas & oil exploration and transport, public utility, etc.) may be mandated to use ICS to respond to accidents, disasters and even disruptions of normal business operations. And like many others, you may struggle to manage use of the common ICS forms.
The forms themselves are easy to complete. The stumbling block is collaboration. To share an ‘in progress’ ICS form, you need to print it or share it visually (on a projection or computer screen). Both can be difficult when your operational personnel are not all in the same room. You may resort to updating ‘in progress’ ICS forms manually (from multiple copies of a printed form) and then have someone compile them in MS Word later. While Word forms are helpful, they lack true automation. That makes collaborative management of ICS forms cumbersome and inefficient, and can lead to errors and omissions of vital information.
If you created your own ICS form ‘wish list’ it would probably include improvements in both efficiency and collaboration:
Measures and methods widely used in the financial services industry to value and quantify risk could be used by organizations to better quantify cyber risks, according to a new framework and report unveiled at the World Economic Forum annual meeting.
The framework, called “cyber value-at-risk,” requires companies to understand key cyber risks and the dependencies between them. It will also help them establish how much of their value they could protect if they were victims of a data breach, and for how long they can ensure their cyber protection.
The purpose of the cyber value-at-risk approach is to help organizations make better decisions about investments in cyber security, develop comprehensive risk management strategies and help stimulate the development of global risk transfer markets.
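The report does not publish a formula, but value-at-risk as used in finance is well defined: the loss threshold that annual losses exceed with only a small probability, say 5 percent. A hypothetical Monte Carlo sketch of that idea applied to cyber incidents — every parameter here is invented for illustration, not drawn from the framework:

```python
import random

def simulate_annual_loss(rng, expected_incidents=3, mean_loss=250_000):
    # One simulated year: incident count approximated by Bernoulli trials
    # (mean ~= expected_incidents), severities drawn from an exponential.
    incidents = sum(1 for _ in range(expected_incidents * 10)
                    if rng.random() < 0.1)
    return sum(rng.expovariate(1 / mean_loss) for _ in range(incidents))

def cyber_var(confidence=0.95, trials=20_000, seed=7):
    # Simulate many years, then read off the loss at the chosen
    # confidence level: the "value at risk".
    rng = random.Random(seed)
    losses = sorted(simulate_annual_loss(rng) for _ in range(trials))
    return losses[int(confidence * trials)]

var95 = cyber_var()
print(f"95% annual cyber VaR: ${var95:,.0f}")
```

The point of the exercise is the one the article makes: a number like this lets an organization compare security spending against quantified exposure, and gives insurers a basis for risk transfer pricing.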
(TNS) — Despite high-profile computer attacks on Target, Sony and other major corporations, Idaho's director of homeland security said cyberthreats remain the "most important and least understood risk" to government and the private sector.
In a presentation Tuesday to the Senate State Affairs Committee, Brig. Gen. Brad Richy said the potential threats range from defaced or misleading websites to data theft and disruption of public services.
"The vulnerabilities are extreme," Richy said. "A breakdown in IT [information technology] services could take it from that sector into our industrial sector, to our water supply or electrical supply."
Cyberattacks are "a trend that's been going in the wrong direction for quite some time," said J.R. Tietsort, who heads up Micron Technology's global security efforts.
The September arrests/detentions in Australia of suspected Islamic State of Iraq and Syria (ISIS) supporters, who had allegedly been planning to kidnap random people, decapitate them, drape their bodies in the group’s flag and post the entire horrific event live to the Internet, have brought to the forefront one of the most serious yet least discussed scenarios in counterterrorism. We term it “Main Street terrorism” and by that we mean terror attacks not on a grand scale, but multiple small attacks carried out by individuals or very small groups in environments where we have traditionally felt safe.
The December hostage situation in Australia is another example. It was an attack on a soft target, a target that would not fit the “traditional” profile of being highly visible or connected to government or military operations, carried out by an individual espousing extremist beliefs but acting essentially alone.
Who remembers the pipe bombs placed in mailboxes throughout the American Midwest during spring 2002? A total of 18 bombs were placed with six of those exploding (injuring four U.S. Postal Service mail carriers and two residents) and 12 others discovered without exploding. Until the suspect was apprehended, how many of us changed our routine for something as mundane as getting the mail because, suddenly, that everyday activity had become potentially deadly?
Cosentry has expanded its disaster recovery-as-a-service (DRaaS) offering to help customers improve their data recovery times.
The data center services provider said its expanded DR service is designed to meet a full range of business recovery point objectives (RPO) and recovery time objectives (RTO), with targets ranging from less than 15 minutes to several days based on application importance and budget.
"We anticipate that our customers will be able to implement a disaster recovery solution that meets their own specific requirements as it pertains to availability and the potential for data loss at a price that meets their budget," Craig Hurley, Cosentry's vice president of product management, told MSPmentor. "Our service expansion also looks to address the reality that many of our customers are looking to protect both virtual and physical servers."
A new study titled ‘An inside look at disaster recovery planning’ has revealed just how little employees know about their organization’s planned response to a crisis. In a survey by HOB, 40% of respondents stated that their company either does not have systems in place to protect data in an emergency, or they are not aware of the existence of these procedures.
The report also revealed that, even if a plan does exist, 52% of employees are unaware of the details. This study shows just how important it is for the details of any plan that involves employees to be shared with them. The worst time to find out what to do in a crisis is once the crisis has occurred.
Over the last decade we have seen a tendency towards more flexible working environments and a greater trend towards working remotely. However, 45% of respondents noted that they either do not have the ability to access the company information that would enable them to do so, or they just don’t know if they have access.
If working remotely is one of your possible responses to a crisis, does your organization have the capability to do this? If your office is out of action and Plan B is for employees to work from home, you might be in for a surprise if 45% of your employees suddenly find out they can’t.
“For most businesses, access to and the sharing of information is critical to ongoing successful operations,” said Klaus Brandstätter, CEO of HOB. “The survey revealed that most companies are unprepared to withstand the negative consequences of disrupted operations, as many employees won’t have access to the resources and information needed to remain functional in emergency situations. In today’s world with so many unforeseen pending disasters, it is clearly paramount that companies implement comprehensive disaster recovery plans as part of their overall business continuity strategy.”
The hybrid cloud is now the new normal in cloud computing. The whole point of a hybrid cloud is to design and customize cloud capabilities that address your customer’s unique needs. But today, MSPs typically offer a one-size-fits-all service level agreement. Customers will demand a service provider that is willing and able to customize the service level agreement to meet the unique needs of their organization, so that they can take advantage of the flexibility, scalability, cost reductions, and resiliency that cloud computing offers. 2015 will be the year that customers demand customized SLAs.
Service Level Agreements (SLAs) serve as a roadmap and a warranty for cloud services offerings. All cloud providers offer some type of standard, one-size-fits-all SLA that may or may not include the following, depending on your requirements:
(TNS) — Mary Kirstein and her partner hunkered down under a dining-room table, with their cat corralled in a laundry basket between them, as the tornado roared toward their home.
And this didn’t happen just once during Kirstein’s nine years in Houston, where tornadoes seem as common as wide-brimmed Stetsons. It happened time and again. Thankfully, she said, the big one never hit, but a person doesn’t easily forget that fear.
“Tornadoes freak me out,” said Kirstein, a purchaser at Battelle who now calls Hilliard home.
In 2012, while researching tornado safety as part of her role on a committee at work, she discovered that the state of Ohio had a new program to help pay for safe rooms that can withstand even the 250 mph winds that accompany the most-destructive EF5 storms. She filled out an application for the Ohio Safe Room Rebate Program, run by the Ohio Emergency Management Agency.
Big Data is quickly moving from concept to reality in many enterprises, and with that comes the realization that organizations need to build and provision the infrastructure to deal with extremely large volumes, and fast.
So it is no wonder that the cloud is emerging as the go-to solution for Big Data, both as a means to support the data itself and the advanced database and analytics platforms that will hopefully make sense of it all.
In a recent survey from Unisphere Research, more than half of all enterprises are already using cloud-based services, while the number of Big Data projects is set to triple over the next year or so. This leads to the basic conundrum that the business world faces with Big Data: the need to ramp up infrastructure and services quickly and at minimal cost in order to maintain a competitive edge in the rapidly expanding data economy. The convergence between Big Data and the cloud, therefore, is a classic example of technology enabling a new way to conduct business, which in turn fuels demand for the technology and the means to optimize it.
Some enterprises are attracted by the potential advantages of the cloud for disaster recovery and business continuity. However, they fear the possibility of information being spied on, stolen or hacked after it leaves their own physical premises. A little lateral thinking suggests another possible solution. Instead of moving outside a company firewall to use cloud possibilities, how about implementing cloud functionality inside the firewall? A number of vendors now offer private cloud solutions and they have some customers whose identity may surprise you.
Component distributor partners with DigitasLBi Commerce and hybris to scale its commerce capabilities in global markets
LONDON – DigitasLBi Commerce, the global connected commerce specialist and hybris software, an SAP company and the world’s fastest growing commerce platform provider, have been selected by RS Components (RS), a trading brand of Electrocomponents plc, the global distributor for engineers, to implement a new connected commerce platform. This will enable it to enhance and rapidly scale its B2B eCommerce offerings to an expanding customer base and deliver a highly personalised experience to individual customers in markets around the globe.
Under the agreement, DigitasLBi Commerce will implement the hybris Commerce Suite, a powerful and scalable single-stack commerce platform capable of delivering highly sophisticated B2B features to a global user base. The solution enables RS to further enhance its online B2B functionality while seamless integration with the company’s enterprise architecture, which includes a SAP business intelligence system, will support streamlined business operations and make the faster initiation of go-to-market strategies and new business models possible.
Guy Magrath, Global Head of eCommerce at RS, commented: “eCommerce is a major driver of growth for our business and the entry point for our customers to a long term multi-channel relationship with us. By partnering with DigitasLBi Commerce and hybris we’ll gain the ability to respond faster to new market needs and further exploit the potential of our eCommerce offer to a diverse B2B customer base.”
With operations across 32 countries and a global network of 16 distribution centres worldwide, RS is the world’s largest distributor of electronics and maintenance products, shipping over 44,000 parcels daily. With around 500,000 products available for same day dispatch and serving more than one million customers worldwide, the company is dedicated to helping customers find the right product at the right price.
As a next phase, DigitasLBI Commerce will undertake the global deployment and rollout of a new connected multi-language, multi-currency, multi-site commerce platform that can be adapted fast to changing market conditions. DigitasLBi Commerce’s robust agile implementation approach will enable RS to incrementally advance its eCommerce capabilities.
With 58 percent of global revenues generated online, RS’s ambition is to build a £1 billion plus connected commerce business, and DigitasLBi Commerce will support the brand in extending its ‘eCommerce with a human touch’ vision to further improve the online customer experience with innovative B2B functionality that makes it even easier for customers to transact.
“With connected commerce at the heart of the company’s operation, RS has to make the online customer experience the best and most relevant in each and every market they do business in,” said Jim Herbert, Managing Partner at DigitasLBi Commerce. “As a leading exponent of global hybris implementations we’re delighted to have been chosen to support RS in extending how it connects to its global audience to reach customers locally, at the point of need.”
The new multi-device optimised commerce platform will power 29 highly localised websites, and finely tune procedures that address specific market requirements. Under the agreement, DigitasLBi Commerce will enable the brand’s global connected commerce team of 100 staff, who oversee online trading, merchandising and behavioural repurchasing (email/offline event triggers across all channels and digital devices), to become fully self-supporting in their utilisation of the hybris Commerce Suite.
“In today’s market where B2B customers expect and are demanding a B2C-like experience, companies - especially industry giants such as RS - require a new breed of solutions that consider the customer interaction across touch points and channels, including that pivotal moment in the journey where a purchase is made,” explained Rob Shaw, Vice President New Business EMEA and MEE, hybris software. “hybris makes it possible to integrate web, customer service, print, mobile and social commerce that will give RS’s customers a more seamless multi-channel shopping experience.”
Now that the dust has settled on the infamous hack of Sony Pictures Entertainment, it would be prudent to take a look back at how the attack was carried out, consider what lessons IT security professionals can learn from it, and formulate a plan to counter a similar attack.
To that end, I recently conducted an email interview with Gary Miliefsky, an information security specialist and founder and president of SnoopWall, a cybersecurity firm in Nashua, N.H. To kick it off, I asked him what the likelihood is that a Sony insider assisted with the attack, and whether it could have even been carried out without the help of an insider. Miliefsky dismissed the insider theory:
While many speculate that the attack on Sony Pictures Entertainment was done by a malicious insider, I believe that the DPRK carried out the attack themselves, originally initiated from IP addresses they lease from the Chinese government. I believe they initially eavesdropped on emails to learn a pattern of behavior for socially engineering a Remote Access Trojan to be installed via email of an unsuspecting employee, inside the network.
In a Jan. 13 presentation to the federal Health IT Policy Committee, Annie Fine, M.D., a medical epidemiologist in the New York City Department of Health and Mental Hygiene, described both the sophisticated software used to track disease outbreaks such as Ebola, as well as how better integration with clinicians’ electronic health records (EHRs) would improve her department’s capabilities.
“In New York City, every day we are on the lookout for unusual clusters of illness. And we receive more than 1,000 reports a day just in my program,” Fine said. Epidemiologists run a weekly analysis to detect clusters in space and time, and use analytics and geocoding to compare current four-week periods with baselines of earlier four-week periods.
“We get a large number of suspect cases reported, and they may be way out of proportion to the number of actual cases,” Fine said. Epidemiological investigations require hundreds of phone calls to providers and labs. “That could be made much less burdensome and efficient if we could have improved integration with EHR data.”
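The weekly baseline comparison Fine describes, checking current four-week case counts against earlier four-week periods, can be sketched as a simple excess-over-baseline test. This is an illustrative toy, not the department's actual aberration-detection software, and the disease names and counts are made up:

```python
from statistics import mean, stdev

def flag_clusters(current, baselines, z_threshold=2.0):
    """Flag conditions whose current 4-week count exceeds the historical
    baseline by more than z_threshold standard deviations.

    current:   {condition: count for the latest 4-week period}
    baselines: {condition: [counts from earlier 4-week periods]}
    """
    flagged = {}
    for condition, count in current.items():
        history = baselines.get(condition, [])
        if len(history) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (count - mu) / sigma > z_threshold:
            flagged[condition] = round((count - mu) / sigma, 1)
    return flagged

current = {"salmonella": 40, "hepatitis_a": 6}
baselines = {"salmonella": [12, 15, 11, 14], "hepatitis_a": [5, 7, 6, 8]}
alerts = flag_clusters(current, baselines)  # salmonella flagged, hepatitis_a not
```

Automated flags like these are only a starting point; as the article notes, confirming a real cluster still takes hundreds of phone calls to providers and labs, which is exactly the burden better EHR integration could reduce.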
What role could social media play in effectively communicating information about breaking news such as natural disasters and disease outbreaks? It’s not a new question, but one that lacks an easy answer. Researchers and emergency response personnel in San Diego plan to spend the next four years exploring the topic, and what they find may eventually serve as a model for other communities looking to better leverage social media for disaster response.
San Diego County and San Diego State University (SDSU) recently formed a partnership to research and develop a new social media-based platform for disseminating emergency warnings to citizens. The project aims to allow San Diego County’s Office of Emergency Services (OES) to spread disaster messages and distress calls quickly and to targeted geographic locations, even when traditional channels such as phone systems and radio stations are overwhelmed.
Is your business prepared for IT outages? Disaster preparedness is vital for businesses of all sizes, especially for those that want to avoid prolonged service interruptions, and companies that prioritize disaster preparedness can find ways to protect their critical data during IT outages as well.
Managed service providers (MSPs) can offer data backup and disaster recovery (BDR) solutions to help companies safeguard their sensitive data during IT outages. These service providers also can teach businesses about the different types of IT outages, and ultimately, help them prevent data loss.
Whether you are planning a traditional data center build-out or all-new cloud infrastructure, the appeal of white box hardware is difficult to resist.
Provided you need enough of a particular device to benefit from economies of scale, and you have a plan to layer all the functionality you need via software, white box infrastructure can do wonders to reduce the capital costs of any project. Plus, you always have the option to rework the software should data requirements change.
But it isn’t all wine and roses in the white box universe. As IT consultant Keith Townsend noted to Tech Republic recently, white box support costs often emerge as a fly in the ointment. Large organizations like Facebook and Google have the in-house knowledge to deploy, configure and optimize legions of white boxes, but the typical data center does not. It takes a specialized set of skills to implement software-defined server, storage and networking environments, and white box providers as a rule do not offer much support other than to replace entire units, even if only a single component has gone bad. There is also the added cost of implementing highly granular management and monitoring tools to provide the level of visibility needed to gauge a device’s operational status to begin with.
Talk to many data storage experts about high-performance storage and a good portion will bring up Lustre, which was the subject of a recent Lustre Buying Guide. Some of the tips here, therefore, concern Lustre, but not all.
Use Parallel File Systems
Parallel file systems enable more data transfer in a shorter time period than their alternatives.
Lustre is an open source parallel file system used heavily in big data workflows in High Performance Computing (HPC). Over half of the largest systems in the world use Lustre, said Laura Shepard, Director of HPC & Life Sciences Marketing, DataDirect Networks (DDN). This includes U.S. government labs such as Oak Ridge National Lab’s Titan, as well as British Petroleum’s system in Houston.
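The core idea behind a parallel file system is striping: a file is split into fixed-size stripes served by multiple storage targets so clients can fetch them concurrently. The toy below illustrates the concept with threads against a single local file; it is not Lustre, which stripes across separate object storage servers over the network:

```python
import concurrent.futures
import os
import tempfile

def read_stripe(path, offset, size):
    """Read one fixed-size stripe of a file starting at offset."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)

def parallel_read(path, stripe_size=4096, workers=4):
    """Fetch a file's stripes concurrently and reassemble them in order --
    a toy illustration of the striping idea behind parallel file systems."""
    total = os.path.getsize(path)
    offsets = range(0, total, stripe_size)
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves submission order, so stripes reassemble correctly
        stripes = pool.map(lambda off: read_stripe(path, off, stripe_size), offsets)
        return b"".join(stripes)

# demo on a throwaway temp file
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 10_000)
data = parallel_read(tmp.name)
os.unlink(tmp.name)
```

In a real deployment the speedup comes from the stripes living on different servers and spindles, so aggregate bandwidth scales with the number of storage targets rather than being bounded by one device.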
To small business owners, the buzz words from the Big Data world (e.g., petabytes, zettabytes, feeds, analytics) seem very foreign indeed. According to research from the SMB Group, only 18 percent of small businesses currently make use of Big Data analytics and business intelligence solutions. On the other hand, midsize businesses have shown greater adoption, with 57 percent of those surveyed reporting that they use BI and analytics to gain actionable information.
However, many Big Data vendors have begun creating a better story for smaller businesses, focusing more on how they can use their tools to achieve deeper insight into business data to help them make more informed decisions. And the ones that listen to this retooled message will receive a decent payoff for their efforts.
You’ve taken the time to implement a disaster recovery (DR) plan for your company – you’re prepared for anything. You’ve covered all the milestones, including:
- Performing a Business Impact Analysis (BIA) to determine the recovery times you’ll need for your applications.
- Tiering your applications and documenting their interdependencies so you know which order your servers should be restored in.
- Putting your recovery infrastructure in a geographically-diverse data center.
- Creating a comprehensive recovery playbook and testing each and every step.
Bring on the storms … the floods … the power outages … you’re ready. But are you really?
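The tiering-and-interdependency milestone above is, at bottom, a dependency-ordering problem: a server cannot be restored before the servers it relies on. Python's standard library can sketch this directly; the server names and dependency map below are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each server lists the servers that must
# already be running before it can be restored.
dependencies = {
    "web-frontend": {"app-server"},
    "app-server":   {"database", "auth-server"},
    "auth-server":  {"database"},
    "database":     set(),
}

# static_order() yields a valid restore sequence: every server appears
# only after all of its dependencies.
restore_order = list(TopologicalSorter(dependencies).static_order())
```

Documenting interdependencies in a machine-readable form like this also means the restore sequence in the recovery playbook can be regenerated automatically as the environment changes, instead of drifting out of date.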
The United Kingdom’s GCHQ, in association with the Centre for the Protection of National Infrastructure, Cabinet Office and Department for Business Innovation and Skills, has re-issued their ’10 Steps to Cyber Security’ publication, offering updated guidance on the practical steps that organizations can take to improve the security of their networks and the information carried on them.
Originally launched in 2012, the guidance has made a tangible difference in helping organizations large and small understand the key activities they should evaluate for cyber security risk management purposes. The 2014 Cyber Governance Health Check of FTSE 350 Boards showed that 58% of companies have assessed themselves against the 10 Steps guidance since it was first launched, compared to 40% in 2013.
‘10 Steps to Cyber Security’ has been updated to ensure its continuing relevance in the climate of an ever growing cyber threat. It now highlights the new cyber security schemes and services that have been set up more recently under the National Cyber Security Programme.
The Business Continuity Institute’s Horizon Scan report has consistently shown that cyber attacks and data breaches are two of the biggest concerns for business continuity professionals with the latest report highlighting that 73% of respondents to a survey expressed either concern or extreme concern at the prospect of one of these threats materialising.
Robert Hannigan, Director of GCHQ, said: “GCHQ continues to see real threats to the UK on a daily basis, and the scale and rate of these attacks shows little sign of abating. However despite the increase in sophistication, it remains as true today as it did two years ago that there is much you can do yourself to protect your organisation by adopting the basic Cyber Security procedures in this guidance.”
With more enterprise IT organizations relying on software-as-a-service (SaaS) applications than ever, securing the data that flows in and out of those applications has become a major challenge and concern.
To give IT organizations more control over that data, Protegrity today unveiled the Protegrity Cloud Gateway, a virtual appliance that, once deployed on a server, enables organizations to apply policies to the flow of data moving in and out of multiple SaaS applications.
Protegrity CEO Suni Munshani says it applies a mix of encryption and vaultless tokenization to make sure data residing in a SaaS application can only be viewed by users that have been given explicit rights to see that data. Those rights are assigned using a “configuration-over-programming” (CoP) methodology that allows administrators to configure the gateway without having programming skills.
Support for SaaS applications is provided by accessing the public application programming interfaces (APIs) those applications expose, with support for each additional SaaS application that Protegrity supports taking a few days or weeks to add, depending on the complexity of the project.
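Vaultless tokenization of the kind the gateway applies replaces a sensitive value with a same-format token derived deterministically, with no lookup database to protect. The sketch below is illustrative only and is not Protegrity's algorithm; production systems use vetted format-preserving schemes such as NIST-approved FF1:

```python
import hashlib
import hmac

def tokenize(value: str, key: bytes) -> str:
    """Deterministically map a sensitive digit string to a same-length
    digit token via HMAC-SHA256 (toy illustration of vaultless
    tokenization -- not a vetted format-preserving encryption scheme)."""
    digest = hmac.new(key, value.encode(), hashlib.sha256).digest()
    # Derive one output digit per input position from the MAC bytes
    return "".join(str(digest[i % len(digest)] % 10)
                   for i in range(len(value)))

key = b"demo-key-do-not-use-in-production"
token = tokenize("4111111111111111", key)  # 16 digits in, 16 digits out
```

Because the token keeps the original's length and character class, downstream SaaS applications can store and display it without schema changes, while only gateway-authorized users ever see the real value.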
A new survey of more than 3,000 IT decision-makers worldwide revealed the majority of businesses are "behind the curve" when it comes to their data protection strategies. The survey showed that most businesses are "not very confident" that they can fully recover their critical data after an IT service disruption, yet they also considered data protection "to be totally critical to their success."
Following up on my previous post regarding hyperscale infrastructure, I feel I should point out that once the decision to go hyperscale has been made, it will most likely take place in a Greenfield hardware environment.
Unless you are already working with a state-of-the-art data facility, any attempt to convert complex, multiformat legacy environments will almost certainly lead to a morass of integration issues. The key benefit to hyperscale is that it is both large and flexible, allowing data executives to craft multiple disparate data architectures completely in software. This is why current hyperscale plants at Google and Facebook rely on bulk commodity hardware.
But as I mentioned last fall, the average enterprise does not have the clout to purchase tens of thousands of stripped down servers and switches at a time, and besides, all those components still need to be deployed, provisioned and integrated into the cluster, which takes time, effort and of course, money.
(TNS) — Until now, North Texas has been one of the least likely places in the country to have an earthquake.
But after the Dallas area suffered a series of more than 120 quakes since 2008, the U.S. Geological Survey is re-evaluating the metroplex’s “seismic hazard” — or the risk of experiencing earthquakes.
This year, for the first time, the USGS will include quakes believed to have been caused by human activity in its National Seismic Hazard Map, which engineers use to write and revise building codes, and which insurers use to set rates.
The map predicts where future earthquakes will occur, how often they will occur and how strongly they will shake the ground.
(TNS) — "A rising tide lifts all boats," John F. Kennedy said, in defense of the government taking on big public works projects for the greater good.
About 10 of Iowa's river towns will share a $600 million pot of state money based on the belief sales tax revenue will rise higher and commercial and residential development will flourish along riverfronts, if protected from flood with sophisticated green and hard infrastructure.
Flooding in Iowa is occurring more often, making the nomenclature 100-year or 500-year flood levels meaningless. The city of Burlington had 500-year floods in 1993 and 2008, a 15-year interval.
Cedar Rapids, which sustained $6 billion of the state's $10 billion flood damage in 2008, led the way in convincing the Legislature to establish a flood mitigation fund.
As the Board of Directors focuses its attention on risk oversight, there are many questions to consider. One topic the Board should consider is how the organization safeguards itself against breakdowns in risk management (e.g., when a unit leader runs his or her unit as an opaque fiefdom with little regard for the enterprise’s risk management policies, a chief executive ignores the warning signs posted by the risk management function or management does not involve the Board with strategic issues and important policy matters in a timely manner). As illustrated during the financial crisis, the result of these breakdowns can be the rapid loss of enterprise value that took decades to build.
An effectively designed and implemented lines-of-defense framework can provide strong safeguards against such breakdowns. From the vantage point of shareholders and other external constituencies (an external stakeholders’ view), we see five lines of defense supporting the execution of the organization’s risk management capabilities. They are outlined below.
Cyberattacks are clearly on the minds of President Barack Obama, Islamic State jihadists, Sony Pictures execs and the CBS producers who are launching a new show this spring called CSI: Cyber. On Jan. 13, Obama announced plans to reboot and strengthen U.S. cybersecurity laws in the wake of the Sony Pictures hack and the one on the Pentagon's Central Command Twitter account from sympathizers of the Islamic State. Whether a real attack or depicted in television and films like Blackhat, this flood of cyberattacks means that hackers are relentless and more sophisticated than ever before.
I’m not a fear monger by trade but want to sound the alarm that there is another cyber-risk that is looming and warrants attention of our emergency management community and government: electronic health records. The American Recovery and Reinvestment Act of 2009 authorized the Centers for Medicare and Medicaid Services to award billions in incentive payments to health professionals (hospitals, long-term care agencies, primary care, etc.) to demonstrate the meaningful use of a certified electronic health record (EHR) system.
The intent to create EHR systems is to improve patient care by providing continuity of care from provider to provider by creating health information exchanges (HIEs) that allow “health-care professionals and patients to appropriately access and securely share a patient’s vital medical information electronically,” says HealthIT.gov. In addition, financial penalties are scheduled to take effect in 2015 for Medicare and Medicaid providers who do not transition to electronic health records.
As the enterprise tries to make the data center more efficient in the face of rising operating costs, one problem keeps reoccurring: Disparate infrastructure makes it very difficult to determine what systems and solutions are in place and how they interact with each other.
The data center, after all, is a collection of assets, so it only makes sense to have a good idea of what those assets are and how they operate in order to either improve their efficiency or swap them out for new, better assets.
The idea of asset management (AM) in the data center is not new – in fact, it is a bustling business. MarketsandMarkets puts the total value of the AM industry at $565.4 million, with annual growth rates averaging 34 percent between now and 2019 to top out at more than $2 billion. The report segments the market by region, components, services, support and other factors, concluding that efficiency, management, planning and expansion of data footprints are key drivers, while limiting factors include tight budgets, poor awareness of available solutions, and a lack of perceived benefits. And as with most technology solutions these days, established markets in Europe and North America provide the bulk of activity, while emerging markets represent the fastest growth.
For most organizations, employees, or the human resources, account for the largest percentage of total costs. Northeastern University D’Amore-McKim School of Business Distinguished Professor of Workforce Analytics and Director of the Center for Workforce Analytics Dr. Mark Huselid says the workforce often represents fully 60 to 70 percent of all expenses. Quite clearly, the refinement of workforce management, and attempting to “connect human capital and performance with management strategy and business goals” is a keen point of interest for both HR and upper management.
The fact that a Professor of Workforce Analytics position exists is intriguing, and the sort of academic research that the Center for Workforce Analytics conducts may well result in some rather unexpected outcomes for some industries. Consider this idea, for example: “Most organizations tend to invest in talent hierarchically, where senior-level talent gets the most pay, best development opportunities and other professional perks. However, organizations should be managing vertically in who and what really matters – and in measuring and managing the outcomes associated with these processes.”
In the tech world, the idea of investing a higher percentage of pay and perks in less senior and less experienced employees is not foreign. Raising pay rates and bonuses for, say, highly in-demand developers and designers can often be easily justified in shortened time-to-market or other deliverables. In other areas, though, HR and the business would have a hard time with the concept without some solid predictive numbers.