Many successful organizations attribute their superior performance and accomplishments to their culture. In a 2015 Duke University study of 1,900 executives around the globe, 79 percent said culture is among the top five things that make their company valuable. But only 15 percent said their own corporate culture is where it should be, and 92 percent said improving their culture would enhance company value.
Recent high-profile scandals, such as those at Toshiba, Volkswagen, FIFA and Baylor University, have shown the adverse effects of a toxic culture. Toshiba’s $1.2 billion profit-inflation scandal, which played out over seven years and came to light last summer, was called “the most damaging event for the brand in the company’s 140-year history” by the outgoing CEO. The Independent Investigation Committee concluded that “there existed a corporate culture at Toshiba where it was impossible to go against the boss’ will.” The scandal led to the dismissal of the CEO, two former CEOs and multiple board members.
In the technology age in which we live, chief compliance officers (CCOs) often come face to face with a new phenomenon: too much information (TMI). TMI is not something to laugh at or ignore. CCOs often need to understand what is occurring through a monitoring or audit function, and in those cases they must decide whether it is worth the cost in money and resources (e.g., personnel, time) to review every piece of information to see if some event or trend can be discerned.
Luckily, there is a less burdensome way to solve this problem, and it rests on a well-understood concept: sampling. When I took a basic statistics class and learned about sampling, it was easy to see why it is a good solution: less time, less work and still-relevant results.
The concept of sampling is a practical solution to many difficult issues that come up for CCOs in managing a compliance program.
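The sampling idea above can be sketched in a few lines. This is a minimal illustration, not a prescribed audit methodology; the record set, the sample size of 200 and the `sample_for_review` helper are assumptions made for the example:

```python
import random

def sample_for_review(records, sample_size, seed=None):
    """Draw a simple random sample of records for compliance review.

    Reviewing a random subset instead of every record trades a small,
    quantifiable margin of error for a large reduction in reviewer time.
    A fixed seed makes the selection reproducible for audit purposes.
    """
    rng = random.Random(seed)
    if sample_size >= len(records):
        # Population is small enough to review in full.
        return list(records)
    return rng.sample(records, sample_size)

# Illustrative: 10,000 transaction records, but review only 200 of them.
records = list(range(10_000))
sample = sample_for_review(records, 200, seed=42)
print(f"Reviewing {len(sample)} of {len(records)} records")
```

In practice, the sample size would be chosen to hit a target confidence level and margin of error, and stratified sampling can ensure higher-risk categories are always represented.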
According to the results of a HyTrust survey of 400 attendees at the recent VMworld 2016 conference, more than a quarter (28 percent) of organizations that use a public cloud are doing nothing to encrypt the data they store there.
Thirty-two percent of respondents are encrypting data using the cloud provider's solution, while 21 percent are deploying a separate data encryption solution.
Forty-seven percent of respondents said security was their main reason for avoiding cloud deployments, and 54 percent said their old approaches to security would not work for future cloud deployments.
One of the biggest obstacles to good cybersecurity is the failure to recognize the need for it. Managed service providers (MSPs) often run into this problem with clients that, whether they realize it or not, operate under the false impression that “it won’t happen to me.”
But cyber attacks are increasingly common, and all businesses are vulnerable. In a recent Ponemon Institute survey, 55 percent of respondents said they had experienced a cyber attack, and 50 percent of companies had suffered a data breach in the previous 12 months.
Defending against cyber attacks gets tougher by the day; many companies lack the budget or skills to properly build up their defenses. This being the reality, it would stand to reason that more and more organizations would welcome a managed security services (MSS) approach. Yet, two-thirds of organizations in a study by Raytheon said they would use MSS only after experiencing “a significant data loss.”
While the worst-case scenario approach is a good way to reflect on organizational needs and the impacts of a disaster, it often creates a false sense of security.
So, should you plan for a catastrophic event or a localized disruption? When we work with organizations on business continuity, the scenario that almost always comes up is the “smoking hole” – whether it is a complete loss of the data center or the destruction of the headquarters building. This worst-case scenario is useful for planning, but there are two questions that should be considered as we put plans and strategies together for business and technology resiliency. What is the potential impact of an event, and what is the likelihood of it happening? Will it cause a catastrophic loss (the worst-case scenario), or will it cause a localized failure that will still have a significant impact on the business? Too many organizations fall into using only worst-case scenarios, thinking that with the “smoking hole” plan in place, their business is now adequately prepared to respond to and recover from a disaster.
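The two questions above, impact and likelihood, can be combined into a simple expected-loss comparison. This is a hedged sketch of the idea only; the scenarios, probabilities and dollar figures are illustrative assumptions, not data from the article:

```python
# Rank disaster scenarios by expected annual loss = likelihood x impact.
# All figures below are invented for illustration.
scenarios = {
    "data center destroyed ('smoking hole')": (0.01, 10_000_000),
    "localized outage (single point of failure)": (0.50, 500_000),
    "data breach": (0.25, 1_200_000),
}

def expected_loss(likelihood, impact):
    """Expected annual loss for a scenario, in dollars."""
    return likelihood * impact

# Sort scenarios from highest to lowest expected loss.
ranked = sorted(scenarios.items(),
                key=lambda kv: expected_loss(*kv[1]),
                reverse=True)

for name, (likelihood, impact) in ranked:
    print(f"{name}: ${expected_loss(likelihood, impact):,.0f}")
```

Even with these made-up numbers, the frequent localized events outrank the rare catastrophic one, which is exactly why planning only for the “smoking hole” leaves gaps.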
But, based on statistics and our experience over the past 17 years, an organization is far more likely to experience a localized outage than a catastrophic event. In the last several months, what issues have been in the news? Security breaches, human error and single points of failure have caused significant business impacts. Some of you may have been affected by the two recent airline outages; those were not “smoking hole” scenarios. Data breaches, both large and small, have had an impact on many of us. I have received notice of security breaches from more than one company where I am (or have been) a customer, and I now have credit monitoring in place from multiple identity theft vendors, all provided by the impacted companies.