
Industry Hot News


Data growth surpassed the unfathomable long ago, and anyone who deals with data knows this. But seriously, step back from the vastness of it for a minute and consider just how massive these volumes have become.

We live in a world where cell phones have more processing power than the Apollo computers that landed us on the moon. And cell phones are far from the only devices contributing to our data gluttony.

A few recent stats on what to expect:



Tuesday, 03 September 2013 15:19

Futurist SME

You can find things of ERM interest in many different places.

I’m reading a novel* that involves organogenesis and some Wall Streeters who were buying life insurance policies at 15 cents-on-the-dollar from people with diabetes and other life-shortening diseases, people who due to the economy or cost of medical care were unable to continue paying policy premiums.

The ERM connection is that the Wall Streeters thought they had covered all the bases to ensure their scheme would be highly profitable: they would buy the policies, pay the premiums for what they expected to be a limited time, and then collect the face value when the former policy owner died. They even hired a company to "run the numbers" against actuarial statistics to confirm the soundness of the scheme.

Unfortunately, the Wall Streeters and their statistics vendor were putting all their proverbial eggs in one basket: history. They overlooked near-future possibilities such as the development of test-tube organs (organogenesis).
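The bet the Wall Streeters made can be sketched with a bit of arithmetic. All figures below are invented for illustration; the point is how quickly the profit evaporates once history stops predicting lifespans:

```python
# Back-of-the-envelope sketch of the life-settlement bet described above.
# All figures are invented for illustration.

face_value = 100_000               # policy payout on death
purchase = 0.15 * face_value       # bought at 15 cents on the dollar
annual_premium = 4_000             # carrying cost per year

def profit(years_until_payout):
    """Net profit if the insured dies after the given number of years."""
    return face_value - purchase - annual_premium * years_until_payout

print(profit(5))    # 65000 -- the scenario the actuaries priced in
print(profit(25))   # -15000 -- the scenario organogenesis makes possible
```

The scheme is profitable only while the actuarial tables hold; a medical breakthrough that extends lifespans turns the same purchase into a guaranteed loss.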



Tuesday, 03 September 2013 15:17

Eating your own Cooking – NSA example


Many have jumped on the bandwagon of criticizing both sides of the Snowden leaks. Discussions are flying around LinkedIn and other sites about who is to blame (http://www.theregister.co.uk/2013/08/30/snowden_sysadmin_access_to_nsa_docs/). All of this reinforces my belief that organizations, large and small, need to re-evaluate business as usual.

There is a push for security against external attacks; however, it is well documented that most threats come from inside, whether insiders inadvertently allow passage or maliciously take or leak classified information. I use the term "classified" intentionally, in an organizational rather than a governmental context. Classifying information is not just the responsibility of government but of every organization (consider information integrity, confidentiality and availability). A FedEx advertisement from a couple of months ago touted the cost-saving mentality of a company that printed on the blank side of used paper. An attendee in a meeting turned a page over to ask about the title on the back, "Executive Compensation List," at which point the head of the meeting dove across the conference table to grab the 'list' out of the attendee's hand. I found this hilarious because it points out that, too often, business as usual does not look at the big picture. Discrete parts certainly need to be refined by subject matter experts; however, the whole needs to be examined as well.



Tuesday, 03 September 2013 15:16

Managing Small Business Risk

As any risk manager can tell you, risk knows no market segment. Large businesses with their multi-million-dollar losses may get more attention, but small- and medium-sized enterprises (SMEs) face risks as well. The difference for these smaller businesses is that the losses they face can't always be absorbed by the balance sheet. Losses that would be relatively minor for their larger counterparts could be devastating, and could even force an SME to close its doors forever.

This is why it is interesting to see that, according to a survey by UK insurer Premierline Direct (part of the Allianz UK Group), not all SMEs have been spurred to mitigate future risk, despite being aware of, and having encountered, common risks like customer non-payment, supplier issues and natural disaster losses. One-fifth of the UK SMEs surveyed not only have no one responsible for managing risk, but have no plans to manage risk in the future. One-quarter do not consult any specialists for risk management advice. The majority of SMEs do, of course, take risk management measures, but closing the gap for the remaining businesses should be a priority.

To illustrate their findings and offer some tips on how SMEs can manage their risks more effectively, Premierline Direct provided the following infographic.



Companies that hold any amount of data on their customers must now -- today -- begin thinking very seriously about what will happen to their reputations and their businesses if they do not take immediate steps to reassure customers that their data is safe and private. Questions about who actually owns, and therefore controls the rights to, customer data are bound to surface very quickly as the world realizes that privacy, as it was once defined and understood, is gone. To guide IT professionals in thinking about Big Data privacy challenges, ICC, a nationally recognized enterprise technology firm, has defined five questions every company must ask about its data and offers a new white paper about Big Data and privacy issues, “Big Data: Big Brother or Guardian Angel?”



The "information" in "information technology" is often forgotten. People think of IT and they think of the "tech" who will help fix their computer. But the primary role of information technology workers is to manage the flow of information, or data. IT systems provide email, calendars, records, documentation, data storage and more, all of which are forms of information.

Providing security to all of this data is done in large part via access control (AC), which includes managing user access to disparate systems and stores of data.

The National Institute of Standards and Technology (NIST) has developed a framework for AC called the Policy Machine (PM), which helps IT create an enterprise-wide operating environment that simplifies management, governance and data interoperability issues that plague AC administration today.
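NIST's Policy Machine defines a rich, graph-based model of users, objects and attributes; the toy sketch below illustrates only the general idea of attribute-based access control that the PM builds on, not the PM's actual model or API. The attribute names and policies are invented for illustration:

```python
# Hypothetical sketch of attribute-based access control (ABAC), the general
# idea underlying NIST's Policy Machine -- NOT the PM's actual model or API.

# Each policy grants an operation on a resource attribute to a user attribute.
POLICIES = [
    {"user_attr": "hr_staff",   "operation": "read",  "resource_attr": "personnel_files"},
    {"user_attr": "hr_manager", "operation": "write", "resource_attr": "personnel_files"},
]

def is_allowed(user_attrs, operation, resource_attrs):
    """Return True if any policy links one of the user's attributes to the
    requested operation on one of the resource's attributes."""
    return any(
        p["user_attr"] in user_attrs
        and p["operation"] == operation
        and p["resource_attr"] in resource_attrs
        for p in POLICIES
    )

# An HR clerk can read personnel files but not modify them.
print(is_allowed({"hr_staff"}, "read",  {"personnel_files"}))   # True
print(is_allowed({"hr_staff"}, "write", {"personnel_files"}))   # False
```

Because access follows from attributes rather than per-user grants, adding or removing a person from an attribute set updates all of their permissions at once, which is the management simplification the PM aims at.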




As part of National Preparedness Month 2013, the Disaster Recovery Journal (DRJ) is offering free webinars that provide a great resource of information on the timely disaster preparedness topics of writing and testing your organization’s disaster plans.

The first webinar, entitled “How to Conduct Powerful Exercises Every Time,” addresses those crisis situations that require quick reaction and on-the-spot decision making, and that are often unexpected and unavoidable.  The degree of any organization’s success in responding to, controlling and managing such a crisis is directly reflected by the level of effective and relevant training of the people involved.  Having accurate continuity plans is not enough to ensure a successful recovery; people must be trained.  Conducting exercises is one of the most important activities we can undertake to train people to respond, restore and recover from a crisis event.  These exercises transpose our response and recovery strategies from theory-based ideas to reality.

When designed, developed and conducted correctly, exercises build continuity muscle, generating knowledgeable, trained people along with more accurate and viable documentation.  But what does it take to pull off a powerful and effective exercise?  How do you measure the results of an exercise?  And how can you leverage the learning environment that the exercise creates for the maximum learning experience?

Some of the topics to be covered in this webinar are:

1. What disaster exercises are, and why you should conduct them

2. The six types of exercises

3. Using the Exercise Planning Template (handout)

4. Designing just the right exercise, with the best ingredients, to obtain the desired results

5. How to prepare people for success with the maximum outcome

6. Conducting the exercise, where theory meets reality

7. How to spot low-hanging fruit by recognizing and identifying action items

8. Virtual exercises: are they effective?

9. Focusing on what makes exercising easy... and fun

Exercising our business continuity, disaster recovery and crisis management plans is no longer optional if the objective is a viable continuity program that works when reality strikes.  This is the one training session about exercises that you cannot miss.  We will cover issues and present information that you will find nowhere else.  Take the time to make this happen for you.

Click here to register for this webinar, being held on Wednesday, September 11, 2013, from 2:00 to 3:00 PM EDT.

Click here to register for the second webinar, entitled “Escape the BC Plan Quagmire: Tips and Tricks for Migrating Seamlessly to a Better Solution,” which will be held on Wednesday, September 18, 2013, from 2:00 to 3:00 PM EDT.

If applicable, please pass this information along to your business continuity planning team members.


Whistle-blowers are in the news more and more, but some organizations don’t seem to have caught up with the trend, or the fact that retaliation is illegal. They don’t seem to realize that negative reactions to a whistle-blower can make them look petty—and guilty.

Take two front page stories in our area newspaper on the same day this week. Both were about whistle-blowers who put their jobs on the line to come forward. One was fired, the other was suspended and later resigned.

In one case, The Journal News reported, a member of a New York town’s financial staff, the supervisor of fiscal services for more than 10 years, testified at a hearing that she notified several of her superiors that the town’s revenue projections were overestimated—on a financial statement needed for a bond application. She also reported improper money transfers—one made to the town supervisor. The woman was ignored, told to keep quiet, and eventually fired.



By Rob Sobers

A tidal wave of unstructured and semi-structured data – documents, video and audio – is drowning the enterprise. To get value from this data and turn it into an asset, people across many teams need to be able to collaborate on and share it. However, if the wrong people access the data, it can seriously damage the business.

In order to manage and protect that data, businesses need to have systems and structures in place to manage it, and to understand how the data is being used, who has access to it and, more importantly, who shouldn’t have access to it.

Businesses today are struggling with proper data protection. IT is tasked with protecting an organization’s data, but often without the business-context needed to do this effectively. When considering how valuable an organization’s data is, a ‘best guess’ scenario is not enough. There are certain steps IT should take to keep data properly protected and managed, while still ensuring the right people have the access to that data.
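As a rough illustration of the kind of check IT can run, the sketch below expands group-based permissions into the individual users who can actually reach a data store, then flags access the business owner never approved. The groups, users and folder are all invented for illustration:

```python
# Hypothetical sketch: deriving effective access from group membership and
# flagging accounts whose access the data owner has not approved.
# All group names and users are invented for illustration.

GROUP_MEMBERS = {
    "finance_team": {"alice", "bob"},
    "all_staff":    {"alice", "bob", "carol", "dave"},
}

# ACL on a sensitive folder: which groups can reach it.
FOLDER_ACL = {"finance_team", "all_staff"}

# The business owner says only these people actually need the data.
APPROVED = {"alice", "bob"}

def effective_users(acl, groups):
    """Expand group-based ACL entries into the set of individual users."""
    users = set()
    for group in acl:
        users |= groups.get(group, set())
    return users

reachable = effective_users(FOLDER_ACL, GROUP_MEMBERS)
excess = reachable - APPROVED      # access that should be reviewed or revoked
print(sorted(excess))              # ['carol', 'dave']
```

This is the business context IT usually lacks: the ACL alone looks reasonable, but only the owner's approved list reveals that a broad group like "all_staff" grants far more access than the data warrants.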



If you ever need a belly laugh, visit the site DamnYouAutocorrect.com (warning: it’s often not safe for work). It’s also a great illustration of why you shouldn’t just force users through the same exact login procedure when they use mobile apps versus full-fledged browser windows: hitting all the right tiny keys is hard work, and often the software behind the scenes is helpfully trying to “correct” everything you type.

Responsive design is all the rage in consumer web app design, and for good reason: users can put down one device, pick up another, and change the screen orientation in mere moments, and app developers can’t afford to miss a trick in optimizing the user experience. Similarly, in researching current authentication methods and trends, we’ve come to believe more strongly than ever in adapting your user authentication methods to your population, the interaction channel they’re using, your business goal, your risk, and your ability to pick up on contextual clues about the user’s legitimacy or lack thereof. Call it responsive design for authentication.
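What might "responsive design for authentication" look like in practice? The sketch below scores a few contextual signals and steps up the login challenge accordingly. The signals, weights and thresholds are invented for illustration, not a recommendation:

```python
# Hypothetical sketch of risk-adaptive authentication: score contextual
# signals about a login attempt and choose a proportionate challenge.
# The signals, weights, and thresholds are invented for illustration.

RISK_WEIGHTS = {
    "new_device":        2,
    "unusual_location":  3,
    "mobile_channel":    1,   # tiny keyboards -> prefer lighter challenges
    "high_value_action": 3,
}

def risk_score(signals):
    """Sum the weights of whichever known risk signals are present."""
    return sum(RISK_WEIGHTS[s] for s in signals if s in RISK_WEIGHTS)

def choose_challenge(signals):
    """Map the risk score to an authentication challenge."""
    score = risk_score(signals)
    if score >= 5:
        return "one-time code + security question"
    if score >= 2:
        return "one-time code"
    return "password only"

print(choose_challenge({"mobile_channel"}))                  # password only
print(choose_challenge({"new_device", "unusual_location"}))  # one-time code + security question
```

The design choice mirrors responsive web design: the same user gets a lighter challenge on a familiar phone and a heavier one from an unrecognized device in an unexpected place, instead of one fixed login procedure everywhere.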



Decades ago, the ‘Jaws’ film series struck a chord with its marketing slogan ‘Just when you thought it was safe to go back in the water’. Risks are like sharks as well. You think you’ve disposed of one, only to find a new one circling you and your organisation, waiting for an opportunity to emerge and attack. The Institute of Risk Management has a research paper on offer dealing with emergent (or is that emerging?) risks – which it defines as those risks that have not yet happened, but that are expected to firm up and increase greatly in significance in the near future.



Just a quick reminder: September 2013 is National Preparedness Month throughout the U.S.

And, given that each year many small businesses nationwide are forced to close their doors in the aftermath of severe storms, flooding, tornadoes, wildfires and hurricanes, it is a good time to remind all businesses that help with preparedness planning is available during National Preparedness Month through a series of free September webinars hosted by the U.S. Small Business Administration (SBA) in collaboration with FEMA’s Ready Campaign and Agility Recovery.

Below is a list of the topics. The hour-long webinars will be presented at 2 p.m. EDT each Wednesday in September.

September 11: Protecting Your Organization by Preparing Your Employees

September 18: The NEW 10 Steps to Preparedness – Lessons from the Past

September 25: Crisis Communications for any Organization

Watch for more information to follow regarding this important awareness and training opportunity.

As job descriptions for data scientists and data analysts become more specific, and as worries grow that not enough skilled hires will be available in the right place at the right time for Big Data initiatives, some CIOs see another potential problem arising from the race to maximize data analytics. And the negative consequences point partially right at IT.

Companies around the world are devoting much planning and investment to selecting, procuring and providing sophisticated, powerful tools that let employees across the organization collaborate, analyze data and reach organizational goals. It has become accepted almost as fact that neglecting this technology investment is tantamount to ruin. But, writes Andrew Horne, managing director of the CEB CIO Leadership Council, on the Executive Board blog:



To many executives of small to midsized businesses, Big Data isn’t even a part of their lingo, much less on their IT radar screen. From what I’ve read, though, just because you may not call it “Big Data” doesn’t mean you aren’t already dealing with it. The size of the data may be relative to the size of your company.

But my focus isn’t the size of your Big Data; it’s whether Big Data projects provide value for small to midsize businesses.

The answer is yes, but with some caveats. Let me explain. A company may have only 1 terabyte of data to analyze, but it may still be less effective than an enterprise with 100 terabytes of data to comb through. According to a post from the SMB Group:



Friday, 30 August 2013 14:11

Closing the Chasm


I was recently following a LinkedIn discussion about what law firm staff should do to help IT. There were many responses over a period of three months; one likened IT to firefighters and staffers to arsonists (LOL – Ben Schorr). Ironically, this problem is not unique to the legal profession. Law firms have their own unique life-cycle and cadence, but the problem of communication and integration between IT and the business is worldwide and ubiquitous across all industries.

The basic premise of a law firm is to serve its clientele in matters of the law. IT is also a service (much like the ever more popular cloud), and it needs to be consumed as such. Often this point is touched on lightly or not at all; in fact, some industry pundits refer to IT as a commodity or a product. A product (a piece of hardware or a package of software) is not a solution. Technology enables people and process to create a solution. Thus, IT's basic premise is to serve its clientele by automating processes and assisting people, using technology to resolve the problems facing the business.



CIO — As the federal government warms to the idea of allowing employees to use their own mobile devices for work and develops new device management policies, agency CIOs and others will still have to grapple with the challenges associated with application security, experts warn.

The initial challenge for federal IT managers evaluating BYOD policies was to ensure that their agency's infrastructure was secure enough for new devices to enter the network and provide for central management, according to Tom Suder, president of the mobile services provider Mobilegov.

With those policies in place, agencies have cleared the way for the development and adoption of innovative new applications that could boost productivity in a mobilized workforce. But those apps invite a host of new security challenges.



Many UK organizations are struggling to manage the threats they face because of inconsistencies in the way different teams communicate, share and interpret information. Although most organizations have a good understanding of the potential hazards they face, KPMG’s ‘Global Risk Survey’ reveals that a lack of skills, combined with a relaxed approach to ‘raising the alarm’, is increasing the risk to business operations.

More than 1 in 5 respondents to KPMG’s survey (21 percent) suggest that poor lines of communication between risk management teams and senior executives, combined with weak reporting processes, are to blame. A similar proportion (22 percent) argue that not all business units fully appreciate enterprise-wide threats, and that the resulting lack of a ‘big picture’ is a major challenge to handling day-to-day business risks.

Many of those questioned also suggest that their organization’s ability to spot, weigh up and manage emerging risks is not where it needs to be. For example, just 28 percent claim that their front-line teams are very effective at identifying potential problems and only 33 percent believe that these teams can adequately deal with new threats.



CIO — I hear a lot of conversations these days about whether the "I" in CIO still means "information" or if it really stands for some other "I" word. Innovation? Integration? Intelligence? While those are always entertaining discussions to have, I'm thinking about a different letter entirely: Who is the CPO at your organization? The "P" doesn't stand for procurement or privacy, but for policy or process.

As I talk with CIOs about where their businesses are heading and what they are doing to get there faster, we often end up discussing their investments in consumer-based or emerging technologies. Then the focus inevitably moves to policy and process. "If I am going to enable and promote [bring your own device]," one CIO told me, "I need to have a policy and process in place that employees must follow to ensure we are safe, secure and compliant." I hear virtually the same comments about cloud and social, too.



Wednesday, 28 August 2013 16:07

Coherence of Vision

In my recent blog post Choosing Your Point of Organizational Incoherence, I stressed the importance of making a choice about how to deal with systemic incoherence that is beyond your control as a CIO or a CTO. Technology, economy and society are not likely to be aligned anytime soon; the emphasis on maximizing shareholder value might make it impossible for you to make certain strategic investments; and unrealistic expectations about the predictability of the software development process might make you want to tear your hair out. True and painful as these three factors, and possibly many others, might be, you can’t just sit on your hands waiting for all the moons to align. You have to act now and pick your point of incoherence in order to address today’s needs. For example, as mentioned in the previous blog post, a CTO client of mine recently chose his Scrum Masters as the preferred “point” through which to manage end-to-end incoherence in his company.

This blog post addresses dealing with (in)coherence at the vision level. My fundamental premise is that once you have picked your point of organizational incoherence, you will be able to deal with most of the tactical, operational and strategic challenges that might come your way. However, you will not be able to deal with vision issues through your chosen point of incoherence. The reason is straightforward: unlike tactics, operations and strategy, your vision must be sustainable and coherent. Figure 1 illustrates this critical difference between tactics/operations/strategy on the one hand, and vision on the other.



Wednesday, 28 August 2013 16:03

The Realities of Cloud Data Integration

I’ve written many times about the challenges of integration when you’re dealing with the cloud—either integration with services or integration with cloud infrastructure.

But I’m starting to see articles that add more depth to the data integration/cloud conversation, particularly when it comes to using cloud infrastructure.

Baseline Magazine recently published an article, “Integrating Clouds Into the IT Infrastructure,” that reminds us once again that the cloud is permanently changing the role of CIOs and IT within the enterprise.



By Jim Mitchell

Perhaps I’m just a curmudgeon (a crusty, ill-tempered old man), but it irks me when someone uses the term “Business Continuity” exclusively to refer to IT planning.  Perhaps I’ve been in this industry too long.  I remember when IT planning was referred to as “Disaster Recovery”, and only business operations used the term “Business Continuity”.  Suddenly (or at least it seems sudden to me) IT specialists are throwing around the term Business Continuity as though they invented it – and as though everyone should understand what they mean.

Is Business Continuity an appropriate term for everything to do with recovery from, or response to a business disruption – to include both technology and operations?

Let me take a step back for a moment to admit that I’ve been a BCM industry advocate for integrating BC (business operations) and DR (technology) planning for many years.  I have been in the industry long enough to remember when IT Disaster Recovery plans were routinely created without input from ‘the business’ (the people who actually make money for the organization).  In that era – largely based on mainframes and midrange computers, and eventually ‘client-server’ infrastructure – DR plans were an all-or-nothing proposition.  You either had a working data center or you didn’t.  If the data center was disrupted (fire, power outage, flood, etc.) the Disaster Recovery Plan was the only alternative.  You packed up your people and sent them – and your backup tapes – to a 3rd party recovery site.  Anything short of smoke-and-rubble was viewed as an operational outage – not worth the cost of invoking the DR plan.



An updated version of an article first published in 2010.

By Charlie Maclean-Bristol.

Snake oil is applied metaphorically to any product with exaggerated marketing but questionable and/or unverifiable quality or benefit (1).

For the consultant, selling business continuity can be the ultimate snake oil. Often, the potential client has been told to implement business continuity and doesn’t know where to start. Along comes the consultant, offering to take all the potential client’s pain away. They make all the right noises about BIAs, BCPs and RTOs but the client is never sure whether they are being sold the snake oil or a genuine cure. With other types of consultancy there is often a ‘cost benefit’, where the consultant will be able to show demonstrable changes or cost savings to the client.

When purchasing business continuity consultancy, you are buying from a consultant who plans for something that may never happen. If the plans have to be used, the consultant has been paid and is off to the next job. If the plan does not work, the consultant can blame the updating of the plan rather than the original plan they delivered. Providing business continuity consultancy is therefore the snake oil peddler's dream: it can command a premium price; you are often selling to a client who does not really understand what they are being sold; and it is very unlikely that your plan will actually be used, and if it is, you have most likely been paid and are long gone.

The purpose of this article is to give potential purchasers some ideas on what to look for when choosing a business continuity consultant, to help ensure that you get services of the quality you require. By using these ideas you should avoid the purveyors of snake oil and employ someone who will give you a genuine cure for your business continuity problem.



University tuition fees are at an all-time high of up to £9,000 a year, so for many students university is no longer a viable option. However, university is not the only route to a fulfilling career in the IT industry.

Lesley Cowley, CEO of Nominet, said for those that do decide to look into the alternatives to university, there are many options:  “Apprentice schemes are a good starting point as they can offer students the opportunity to gain some on-the-job skills alongside college studies, meaning that both the business and the individual can grow their own future talent.”

Cowley said that in the IT industry, for example, many are put off by the misconception that you must have an ICT education or qualification to work in the field: “In fact, there are multiple routes into IT careers, from college and university courses to workplace apprenticeships. For example, our post-A-level apprentice scheme is currently in its third year and gives school leavers the opportunity to apply for roles within our technical infrastructure, software development and business intelligence teams.”



What good is a high-performance team in a vacuum, and how long will one last without an environment in which it can thrive?

This is the question that comes to mind when I’m asked to comment on the role of leadership in high-performance teams. Teams may be able to achieve various states of high performance for a time, or from time to time, perhaps experienced by the team as being “in the zone.” But my thoughts turn toward questions of causing teams to be in the zone on demand, and of sustaining a state of high performance.

Three Simple Words…

Be. Do. Have. These three words outline what I’ve learned in life, and they work as a sequence to achieving sustained success. Ironically, in most cultures I’ve encountered, the success sequence is often performed backward, and doing it backward isn’t successful. In fact, instead of success, the reversed sequence leads to a state of sustained unfulfillment. Too often, people operate in a “have-do-be” sequence. For example, “Were I to have money, I would do what people with money do, then I could be what people with money are” (rich). As a result, following this sequence leaves people perpetually unfulfilled because how much “have” do you need before you can start “doing,” and how much “doing” is needed before you can declare yourself to “be rich”? Typically, starting with the “have” leads to never getting to the “be.”



Editor's note: Kelly Wallace is CNN's digital correspondent and editor-at-large covering family, career and life. She's a mom of two girls and lives in Manhattan. Read her other columns and follow her reports at CNN Parents and on Twitter.

(CNN) -- My mother-in-law and I talk about nearly everything. But when I mentioned to her recently that I was working on a story about emergency preparedness, I realized that's one thing we've never discussed -- even though she lives nearby and would certainly factor into our family plan.

"If a disaster strikes, where would we meet?" we asked each other. "Who would we call? What would we take with us?"

A new national advertising campaign shared with CNN exclusively ahead of its official launch Wednesday aims to get families like my own at least talking about what we'd do in the face of a natural disaster or other emergency.

"This is a pretty fearful topic for a lot of parents," said Priscilla Natkins, executive vice president and director of client services for the Ad Council, the private nonprofit group spearheading the campaign along with the Federal Emergency Management Agency.



Have you ever wondered what’s top of mind for leading CEOs? Below are direct quotes from a July 23, 2013 discussion with some of the most admired CEOs on key topics: uncovering emerging changes, CEO priorities, what’s around the corner, the future of big data, and differentiating their customer model. The CEOs in the discussion all lead companies more than 100 years old that lead their specific industries.

- Hikmet Ersek is CEO and President of Western Union, which might actually be the world’s largest retailer, with 520,000 storefronts and more than 1 million agents. Ersek was named 2012 ‘Responsible CEO of the Year’ by Corporate Responsibility Magazine.

- Shivan Subramaniam is the 14-year CEO and Chairman of FM Global, the 185-year-old insurance leader with no actuaries, only engineers, where 30% of the Fortune 1,000 are clients.



BRAITHWAITE, La. (AP) – Isaac barely had hurricane-strength winds when it blew ashore southwest of New Orleans a year ago, but its effects are still apparent in coastal areas where it flooded thousands of homes.

After landfall on Aug 28, 2012, Isaac stalled, dumping more than a foot of rain and churning a monstrous storm surge. Water flowed over levees and destroyed homes and businesses in coastal Louisiana and Mississippi.

In the end, it was blamed for seven deaths. In Plaquemines Parish, one of the hardest hit areas, damage to homes and businesses has been estimated at more than $100 million, said Guy Laigast, director of the parish's Office of Homeland Security and Emergency Preparedness.



Tuesday, 27 August 2013 15:19

HR Departments Invaded By Data Scientists

CIO — When General Motors was looking for someone to lead its global talent and organizational capability group, the $152 billion carmaker clearly wasn't looking for a paper-pushing administrator. Michael Arena, who took the position 18 months ago, is an engineer by training. He was a visiting scientist at the MIT Media Lab. He's a Six Sigma black belt. He's got a Ph.D.

This is not your father's human resources executive.

But it is a sign of where the corporate HR function is headed. Arena is dedicated to the hot field of talent analytics--crunching data about employees to get "the right people with the right talent in the right place at the right time at the right cost," he says.



Tuesday, 27 August 2013 15:19

Disaster Recovery Set to Grow in the Cloud

One of the big things about cloud computing is the potential for cutting costs and saving capital. On demand storage and Software as a Service (SaaS) paved the way with applications stretching from cloudified accountancy to sales force and customer relationship management. ‘All things shall move to the cloud’ is the mantra of many, and disaster recovery appears to be obeying the same rule. RaaS or Recovery as a Service is set to grow according to a recent Research and Markets report, with an impressive 55.2 per cent compound annual growth rate between 2013 and 2018, moving to a $5 billion market globally in five years’ time. But what does RaaS change for organisations down on the ground?

What changes is the way disaster recovery is paid for and how much it costs. With cloud vendors continually innovating in terms of service offerings, customers will often see cloud DR costs going down compared to conventional or in-house solutions. New pricing models are emerging where users pay on the basis of how much disaster recovery they actually do (for example, restoring stored data), rather than how much DR capacity they provision (for instance, how much data they upload for storage).



The announcement that Microsoft CEO Steve Ballmer will step down from that position within the next 12 months has brought the often-neglected topic of succession planning to the forefront again. The attention is not yet as sharp as it was when Steve Jobs announced his medical leave from Apple, and it may not reach that level. However, it’s never too early to discuss in earnest your company’s and your department’s plans, if any, for succession in at least key positions.

If the words “succession planning” don’t appear anywhere in your organization’s processes or documentation, that’s not necessarily a negative. This type of planning, writes Sue Brooks, managing director for talent management firm Ochre House, is ready for broadening: “Currently, succession plans are focused on filling roles, but to be truly strategic we need to look at developing individuals into these new roles through talent management.” That approach isn’t surprising coming from a talent management firm, but it doesn’t mean she’s wrong. And for those with no formal succession planning process in place, I think these should be encouraging words. Considering the succession plan as part of the ongoing talent management efforts keeps the focus and energy from flagging, and covers alternate scenarios, including department reorganizations, for example, and not just leaders leaving the company.



The data center is quickly moving toward hyperscale architectures, the result of both advancing technologies and economic forces weighing on the enterprise.

The question, though, is not whether hyperscale deployments will increase in number or even come to dominate the IT industry, but whether the owned-and-operated data center model will simply become too burdensome for the vast majority of organizations.

On the economic front, it’s hard to argue against the hyperscale model. As Google, Facebook, Amazon and others have proven, volume hardware and software deployments can reach the point at which a single buyer becomes a channel in itself—that is, the company consumes in such volumes that it can custom-order its own platforms directly from the chip- and board-level suppliers that cater to the big OEMs. And in the case of Facebook, these designs are starting to trickle into the IT industry at large through initiatives like the Open Compute Project.



CSO — Big data does not necessarily mean Good Data. And that, as an increasing number of experts are saying more insistently, means Big Data does not automatically yield good analytics.

If the data is incomplete, out of context or otherwise contaminated, it can lead to decisions that could undermine the competitiveness of an enterprise or damage the personal lives of individuals.

One of the classic stories of how data out of context can lead to distorted conclusions comes from Harvard University professor Gary King, director of the Institute for Quantitative Social Science. A Big Data project was attempting to use Twitter feeds and other social media posts to predict the U.S. unemployment rate, by monitoring key words like "jobs," "unemployment," and "classifieds."

Using an analytics technique called sentiment analysis, the group collected tweets and other social media posts that included these words to see if there were correlations between an increase or decrease in them and the monthly unemployment rate.
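The correlation step described above can be sketched in a few lines. This is an illustrative reconstruction, not the Harvard group's actual code; the posts, monthly unemployment rates, and the three keywords are hypothetical data matching the keywords named in the article:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

KEYWORDS = ("jobs", "unemployment", "classifieds")

def monthly_keyword_counts(posts_by_month):
    """Count posts per month that mention any tracked keyword."""
    return [
        sum(1 for post in posts if any(k in post.lower() for k in KEYWORDS))
        for posts in posts_by_month
    ]

# Hypothetical feed: three months of posts and the unemployment rate.
posts_by_month = [
    ["Looking for jobs in Ohio", "nice weather today"],
    ["unemployment is rough", "checked the classifieds", "jobs fair today"],
    ["great jobs report", "hello world"],
]
rates = [7.9, 8.1, 7.8]

counts = monthly_keyword_counts(posts_by_month)
print(counts, round(pearson(counts, rates), 2))
```

The point of King's story, of course, is that a high correlation on data like this can be spurious: keyword mentions can spike for reasons (a jobs report, Steve Jobs) that have nothing to do with unemployment.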



According to a survey performed by Experian Data Breach Resolution and the Ponemon Institute, only 31 percent of companies are insured against data breaches.  Meanwhile, 76 percent of respondents rated the impact of a security breach to be greater than or equal to a natural disaster, business interruption or fire.

The average cost of a data breach was estimated by respondents at $163 million, although some projections neared $500 million in damages. Over a 24-month period, the 56 percent of respondents who had suffered a cyber-security attack reported an average breach cost of $9.4 million.



IT outsourcing as a percentage of the IT budget dropped this year, reversing a four-year trend and marking the first time since the start of the recession that IT organizations have begun shifting spending plans on a percentage basis toward developing internal operations and capabilities and away from outsourcing partners, according to a report by research and advisory services specialist Computer Economics.

Survey results suggested organizations are starting to "back-source" their IT services, bringing them back in-house after a period of growth in the use of service providers. The decline in IT outsourcing was reported as significant, down from an average 11.9 percent in 2012 to 10.6 percent in 2013. Meanwhile, IT operating budgets are rising 2.5 percent this year at the median, and IT capital budgets are up 4 percent.

"With the tentative improvement in the economic outlook, IT organizations are putting newfound resources into internal operations and capital investments at a pace that is greater than their spending with IT service providers," the report noted. "IT outsourcing budgets are not necessarily shrinking so much as IT budgets are rising. The denominator is rising faster than the numerator."



Monday, 26 August 2013 15:39

There are no winners in the blame game

Every time a major security breach makes the headlines, a common reaction follows. Even before the details of the breach are known, the infosec world gets into a frenzy of speculation as to how the attack happened, who conducted it, and whether the attackers were skilled or not. Invariably the conversation focuses on the company that is the victim of the attack, and it often tends to highlight how stupid, negligent or weak its security defenses were. In effect, we blame the victim for being attacked.

While the organization may have been negligent, or its security not up to scratch, we should not forget it is still the victim. How good, or otherwise, the victim's security was is a separate issue for a separate conversation. Foisting blame on the victim on top of having to deal with the incident adds little value to the conversation. The blame for the attack should lie squarely on the shoulders of those who conducted it.



A lot of people don’t see the necessity of listening online. The truth is that this is perhaps more important than actually being active online. The cold hard fact is that people are having conversations (good and bad) about your brand whether you like it or not, and for anyone with an interest in selling (which, let’s face it, we all have), it’s crucial to pay attention to what our customers, potential customers, competitors and influencers are saying about our brands.

ORM — Online Reputation Management — is a really good way to go about listening. There are a number of tools out there that you can use, but the real value comes out of understanding what ORM actually means for your business.



WASHINGTON (AP) – The latest high-tech disruption in the financial markets increases the pressure on Nasdaq and other electronic exchanges to take steps to avoid future breakdowns and manage them better if they do occur.

The three-hour trading outage on the Nasdaq stock exchange Thursday also can be expected to trigger new rounds of regulatory scrutiny on computer-driven trading. Investors' shaky confidence in the markets also took another hit.

The exchange opened as normal Friday.

Questions about the potential dangers of the super-fast electronic trading systems that now dominate the U.S. stock markets are rippling again through Wall Street and Washington. Stock trading now relies heavily on computer systems that exploit split-penny price differences. Stocks can be traded in fractions of a second, often by automated programs. That makes the markets more vulnerable to technical failures.



There has been a lot of speculation about the impact of PRISM on data security and cloud computing; just this week alone two influential articles have been written quoting wildly different predictions on how much the revelations will cost cloud vendors, but there’s no denying that the ripples in the industry are starting to rock the boat.

The Information Technology and Innovation Foundation (ITIF) recently announced that, due to the fears over data privacy and security that PRISM has highlighted, the cloud computing industry stood to take a hit in the order of $36 billion by 2016. But Forrester Research has come out to say this estimate is too low and the impact could be far deeper, to the tune of $180 billion.



CIO — Even though midmarket industrial firms have valuable IP and business processes, they are lagging behind other industries when it comes to data security, according to a recent report by assurance, tax and consulting firm McGladrey.

"A lot of the executives we asked about security risks don't believe their data is at risk or is at very little risk," says Karen Kurek, leader of McGladrey's industrial products practice and a member of the National Association of Manufacturing (NAM) Board of Directors. "Two-thirds of them said it was at little or no risk. I think in general, in this sector, a lot of people don't understand the potential exposure that they have."

"But we know that middle market companies very much are targeted," she adds. "Part of [the reason for their belief] is because ignorance is not bliss. There's this false sense of security. They don't know what they don't know until something happens to them."



Network World — California is rolling out a new law to reduce greenhouse gas emissions, primarily from electric generating plants, and the cost of the effort is expected to be passed along to data centers, which are among the biggest consumers of electric power in the state.

This means data center operators in California will need to step up their energy efficiency efforts in order to avoid the higher costs. And the handwriting is on the wall for data centers in the rest of the U.S., as President Obama has directed the EPA to develop greenhouse gas controls nationwide.

The law that took effect on Jan. 1 requires California to reduce greenhouse gas emissions to 1990 levels by 2020. The plan is to try to reduce emissions statewide by 2 percent to 3 percent a year. According to the California Air Resources Board, the lead enforcement agency, the law requires power plants to obtain permits, also called "allowances," for every metric ton of greenhouse gases they emit.



The Department of Energy was hacked. Again. It is the second time this year that the DOE was the victim of a breach, and it is believed that the personally identifiable information (PII) of 14,000 present and former employees was potentially compromised.


Defense contractor Northrop Grumman recently announced that it, too, suffered a similar breach.

In both cases, because of the type of information affected, the hackers may have been doing little more than data mining for valuable-on-the-black-market PII. Or it could be the hackers were looking for more, like the ability to access data involving the critical infrastructure or national security stored on the organizations’ networks. We don’t know, and we won’t know, as Anthony DiBello, strategic partnerships manager, Guidance Software, pointed out to Sue Marquette Poremba in an email, without a complete forensic analysis of the compromised systems. He went on to say:






Hello, this is George J. Silowash, Cybersecurity Threat and Incident Analyst for the CERT Division. Organizations may be searching for products that address insider threats but have no real way of knowing if a product will meet their needs. In the recently released report, Insider Threat Attributes and Mitigation Strategies, I explore the top seven attributes that insider threat cases have according to our database of over 700 insider incidents. These attributes can be used to develop characteristics that insider threat products should possess.

The top seven characteristics that insider threat products should have based on cases from our database include the ability to execute these activities:




Business adoption of Internet of Things solutions will be fast — in fact, as I wrote yesterday, it’s already here for some industries. That’s why CIOs and other IT leaders need to gear up for supporting the unique data issues related to this trend.

Let’s look at what makes the Internet of Things data a bit different from other IT data resources.

The Problem: Mega Big Data. One of the main differences will be in the amount of data you’ll need to sort, improve, integrate, analyze and manage. You’ve heard of Big Data? All these devices, constantly chattering updates about moisture, light, movement and whatnot, will create crazy amounts of Big Data.

IT Requirement: A (possibly real-time stream) data analytics platform that can handle Big Data and a scalable infrastructure to support it.
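The requirement above boils down to processing readings as they arrive rather than storing the whole stream. A minimal sketch of that idea, using a sliding-window average over a hypothetical moisture-sensor feed (the window size and values are illustrative, not from any particular platform):

```python
from collections import deque

class SlidingWindowAverage:
    """Maintain a moving average over the last `size` sensor readings,
    processing each reading as it arrives instead of storing the stream."""

    def __init__(self, size):
        self.window = deque(maxlen=size)
        self._sum = 0.0

    def push(self, reading):
        # When the window is full, the deque drops the oldest reading on
        # append, so subtract it from the running sum first.
        if len(self.window) == self.window.maxlen:
            self._sum -= self.window[0]
        self.window.append(reading)
        self._sum += reading
        return self._sum / len(self.window)

# Hypothetical moisture-sensor feed: only `size` readings stay in memory,
# no matter how long the stream runs.
avg = SlidingWindowAverage(size=3)
for value in [10.0, 12.0, 11.0, 30.0, 9.0]:
    current = avg.push(value)
```

A real deployment would put this kind of windowed aggregation inside a stream-processing platform; the point is that the per-device state stays constant even as the raw data volume grows without bound.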



Friday, 23 August 2013 22:11

Four steps for denying DDoS attacks

How should banks and financial institutions deal with increasing numbers of large-scale denial of service attacks?

By Avi Rembaum and Daniel Wiley.

Financial institutions have been battling waves of large distributed denial of service (DDoS) attacks since early 2012. Many of these attacks have been the work of a group calling itself the Qassam Cyber Fighters (QCF), who until recently posted weekly updates on Pastebin explaining the reasons behind their attacks and summarising Operation Ababil, their DDoS campaign.

Other hacktivist groups have launched their own DDoS attacks and targeted financial services institutions with focused attacks on web forms and content. There have also been reports of nation-state organized cyber assaults on banks and government agencies, along with complex, multi-vector efforts that have combined DDoS attacks with online account tampering and fraud.

These incidents against all sizes of banks have shown that there are many kinds of DDoS attacks, including traditional SYN and DNS floods, as well as DNS amplification, application layer and content targeted methods. Denial of service (DoS) activities that have targeted SSL encrypted webpage resources and content are an additional challenge. In some instances, the adversaries have moved to a blended form of attack that incorporates harder-to-stop application layer methods alongside ‘cheap’, high-volume attacks that can be filtered and blocked through simpler means.
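The "simpler means" that can filter the cheap, high-volume floods usually amount to per-source rate limiting. A minimal token-bucket sketch, with illustrative rate and burst thresholds (real DDoS mitigation happens in network gear or scrubbing services, not application Python):

```python
import time

class TokenBucket:
    """Per-source rate limiter: allow `rate` requests/sec, bursts up to `burst`."""

    def __init__(self, rate, burst, now=None):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per source address: a single flooding source exhausts its own
# bucket and gets dropped, while normal traffic keeps flowing.
buckets = {}

def accept(src_ip):
    bucket = buckets.setdefault(src_ip, TokenBucket(rate=10, burst=20))
    return bucket.allow()
```

Note this only helps against the volumetric end of the spectrum; the blended application-layer attacks described above deliberately stay under such thresholds and need deeper inspection.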



Friday, 23 August 2013 22:11

Five new virtualization challenges

As virtualization capabilities are built into networking, storage, applications and databases, giving shape to the software defined data centre, problems with management and visibility across data centre boundaries will emerge. A recent SolarWinds survey of more than 700 IT professionals in six countries found broad agreement that virtualization technology contributes significantly to management challenges, indicating the impact is undeniable and vast.

With the software defined data centre transition an imminent reality, the following five management challenges arising from the survey should be considered by every business continuity manager:

Virtual mobility impacts network optimization
Virtualization has typically operated within a contained portion of the network such that changes in the virtualization environment didn’t usually impact the broader network. With improvements and increased adoption of workload mobility technologies like Metro vMotion and storage vMotion that make it easier to move workloads geographically, the rapid movement of workloads could cause new problems for the overall enterprise network.



European companies are prioritising risk management as never before, although some weaknesses remain.

These findings come from research on risk management leadership conducted with risk managers from the Federation of European Risk Management Associations (FERMA) and the public sector associations PRIMO by Harvard Business Review Analytic Services sponsored by insurer Zurich.

In their responses, more than 200 executives at major European organizations emphasise how top management and the board are increasingly setting direction and taking tighter control of risk management, integrating it with overall company strategy and embedding it deeper into corporate culture.

The survey indicates that, at 35 percent of organizations, either a chief risk officer or a risk manager has direct responsibility for risk management. At 27 percent, either the CEO or the CFO/treasurer has direct responsibility, while the board itself is responsible at 14 percent.



CIO — There's no doubt that multisourcing -- parceling out the IT services portfolio among a number of vendors--has its benefits: competitive pricing, increased flexibility and access to a deeper pool of talent, among others. But working with multiple providers creates multiple challenges, not the least of which is trying to get all of those competing vendors to play nice.

In fact, almost everything in the typical outsourcing transaction, transition and operation is conspiring against them getting along.

For one thing, there may be no incentive for the providers to work together. Multisourcing has entered the mainstream, but outsourcing contracts and negotiations haven't kept pace with the trend.



Believe it or not, your IT department is probably full of squirrels. No, not those cute fuzzy critters that climb trees, but data consumers who hide data away with the same relentless fortitude with which their bushy-tailed namesakes hide acorns.

I owe this idea to Dave Russell, VP and Distinguished Analyst at Gartner. Dave is a long-time industry watcher and one of the smartest people around when it comes to understanding the data protection industry. I was in a meeting with him recently when he mentioned how IT departments tend to have lots of people in them who like to “squirrel away” copies of data. That got me thinking.



Friday, 23 August 2013 22:07



Outsourcing, co-location, leasing, COO / CFO absorption of the CIO role, cloud computing and so on are the topics littered across the landscape of today’s IT world. Reading an article recently touched a nerve that has been painfully exposed throughout my career in this industry, IT. While it is absolutely true that we should not bind ourselves within the borders of our thought, nor of our physical location, the truth resounds in a deafening roar: “Do not forget the human element!” People are still a part of this technological world. Processes certainly support people and are automated by technology; however, this does not take the place of the communion that occurs between people.

Regardless of the business model, remote operations are attractive due to the low-cost component of the equation. Those who are skeptical about IT ever providing bottom-line benefit if kept in house can now relax. I am not out to debunk the bottom line cost reduction that outsourcing, cloud computing, or other forms of remote operations contribute. IT must evolve (http://wp.me/p3JnQK-12). In fact, I am a big believer in cost reduction. The issue at hand is how to “communicate” within the context of our ever-digitizing world. We cannot lose the communion portion of that word, communicate.



By Jack Rosenberger

A vice president of datacenter initiatives and digital infrastructure with the analyst firm 451 Research, Michelle Bailey recently spoke with CIO Insight about IT investments, the current lack of innovation, business metrics and what many CIOs should be thinking about but aren’t. Here is a condensed version of Bailey’s remarks.

It’s time for companies to invest in IT. “The economy is improving, and we’re seeing jobs growth and improvement in the housing market, especially in the U.S., but what we aren’t seeing is a return to IT spending. We haven’t seen the return to IT spending that we would have expected to see by now. Instead, we are seeing companies hoarding cash and a lot of bloated balance sheets. We’ve seen a lot of IT consolidation projects, with CIOs going after the low-hanging fruit, which is fine during the downturn of the economy. But what we’re not seeing—and what we should be seeing—is long-term investments in IT.”



Friday, 23 August 2013 22:05

How to build a risk threat model

Each business is different and requires diverse security measures and best practices, yet each security division runs into similar barriers when trying to convince management to loosen the purse strings.  

Security experts shared their tips and advice on how to build a risk threat model at Rapid7’s UNITED Security Summit 2013.

John Pescatore, director of emerging security trends at SANS, believes different environments require different security gauges.

“A car has a check light for when it’s running out of gas," said Pescatore. "A boat has different gauges, not just for gas but to show depth. A plane has gauges for gas, whether the wings are level, etc. All environments are different and require different protections. Attackers target anyone that has information that can be sold.”



It’s time to think about how you’ll manage data from the Internet of Things.

I’m not being trendy. I know it seems too new to be possible, but actually the Internet of Things is a simple concept. Sensors + Wi-Fi = Device. It will quickly take root like kudzu, overwhelming your systems, particularly your data systems.

Consider this: Cisco states that what it calls the “Internet of Everything”—people, process, data and things using network connections—will reach an additional $544 billion in profits this year alone, according to CNET. By 2020, the GSM Association’s Connected Life predicts growth to 24 billion connected devices, Wired reports.



Migrate an installed Windows system, even Windows 8 or Windows Server 2012, to a GPT/uEFI configuration on a solid-state drive without interrupting the use of applications or having to restart the system.

Paragon Software Group (PSG), the leader in data backup, disaster recovery and data migration solutions, announces Paragon Migrate OS to SSD 3.0, a one-step tool to migrate Windows systems to faster solid-state drives (SSDs). This major upgrade allows users to perform system migration to a GPT/uEFI configuration directly under all versions of Windows from XP onwards, including Windows 8 and Windows Server 2012. Users can continue working with applications during the migration process and are not required to restart the system. In addition, Paragon Migrate OS to SSD 3.0 now offers the option to build a WinPE bootable media to do migrations or fix various boot problems without installing the product.

As PC users seek to take advantage of SSDs’ better access time, read/write speeds, and resistance to physical shock from drops, the challenge becomes moving massive amounts of data, applications and the operating system from the existing hard drive to a smaller SSD. Paragon’s intuitive wizard simplifies the migration process, automatically downsizing the source system volume and providing intelligent selection of specific files when migrating to smaller-capacity drives, and auto-aligning copied system partitions – all without rebooting the system.
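The auto-alignment mentioned above typically means rounding each partition's start offset up to a 1 MiB boundary so that writes line up with the SSD's internal pages and erase blocks. A minimal sketch of that calculation; the 1 MiB figure is the common industry convention, not a detail from Paragon's announcement:

```python
ALIGNMENT = 1024 * 1024  # 1 MiB, the common SSD partition alignment boundary

def align_up(offset_bytes, alignment=ALIGNMENT):
    """Round a partition start offset up to the next alignment boundary."""
    return -(-offset_bytes // alignment) * alignment  # ceiling division

# A legacy partition starting at sector 63 (512-byte sectors) is misaligned:
# every filesystem block straddles two SSD pages. The copied partition would
# instead be placed at the next 1 MiB boundary.
legacy_start = 63 * 512
print(align_up(legacy_start))
```

Misalignment of this kind is why old MBR-style layouts copied verbatim to an SSD can lose a noticeable fraction of the drive's write performance.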



Softening market a relief for business insurance buyers

By John Prendergast

Market forces at play in the business property and casualty insurance category mean that some buyers can expect reduced costs and improved quality when the time comes to renew.

This contradicts predictions made at the time of the Christchurch earthquakes that cover would be more expensive and restrictive for many years. The reality is increased capacity from insurers seeking market share has led to a softening market for business insurance. But to access possible benefits, organisations will need to demonstrate a sound understanding of their risk profile and have an active risk management plan in place.

Property and casualty insurance is a category that includes business interruption, material damage and business continuity insurances. Business interruption is an area where medium to large businesses typically may spend up to 80% of their insurance dollars – anywhere between $200,000 and $3.5 million a year, depending on the organisation size and industry.



The US Securities and Exchange Commission (SEC), the Financial Industry Regulatory Authority (FINRA) and the Commodity Futures Trading Commission’s (CFTC) Division of Swap Dealer and Intermediary Oversight have issued a joint advisory on business continuity planning.

The advisory follows a review by the regulators in the aftermath of Hurricane Sandy, which closed US equity and options markets for two days in October 2012. It encourages firms to review their business continuity plans and consider implementing the following suggestions (published verbatim):



The Hurricane Sandy Rebuilding Task Force has issued a detailed new document which provides recommendations for ways to use Hurricane Sandy rebuilding projects to enhance business, community and critical infrastructure resilience.

The recommendations in ‘Hurricane Sandy Rebuilding Strategy: Stronger Communities, A Resilient Region’ were identified with input from the Task Force’s community engagement with a wide range of stakeholders (including businesses, non-profits, philanthropic organizations, local leaders and community groups).

The recommendations include:



Perhaps you’ve already come across Duct Tape Marketing, a popular business book about successful marketing for small businesses. Duct tape, as you may know, is the strong adhesive tape you can use as a quick fix to bind many different things together especially if you don’t have any other solution. It stops things from falling apart, falling over, leaking or separating when they shouldn’t. Is a ‘duct tape’ approach possible for business continuity too? And if so what would be the ‘duct tape’ to make it happen?

It turns out that the title of the book may be a bit of a stretch compared to its contents. With a slogan of ‘Stickiness – marketing that sticks like Duct Tape’, the methods proposed are based more on top-down business strategy. On the other hand, readers seem to appreciate the book for its simplicity, its orientation towards action and its ‘let’s do it now’ approach. In other words, the book scores higher on the freshness of its approach, rather than on any innovation in its material. Readers also note the emphasis on planning and the design of programs to support key business objectives.



Personally, I have several old cell phones stuffed into a “junk drawer” in my office. I know the IT guys at my previous employer, a midsize tech company, had extra monitors, towers and hard drives stacked in empty cubicles and in the server room. My point is, it can be a good idea to reuse electronic equipment, but are we all just really putting off getting rid of the stuff because we’re honestly not sure what to do with it?

If your company doesn’t have a policy for disposing of electronic devices, phones or computers, now may be the time to create one. Usage of PCs, laptops, smartphones and tablets is increasing in the business world, and with new technology constantly evolving, users typically need the latest and greatest versions, which leads to stacks of old, obsolete electronics piling up in the office. And getting rid of those old gadgets and computers isn’t as easy as chucking them into the dumpster, either.



New market data is adding momentum to the software-defined data center (SDDC) movement, invoking images of instant provisioning of end-to-end data environments and anytime/anywhere access for users unwilling to restrict their data usage according to the whims of the physical universe.

Normally, this is the part where I would say something like “the reality will be quite different,” but the fact is that SDDC does have the potential to foster the kind of data infrastructure that allows users and even applications themselves to define their own operating environments, seamlessly compiling resources wherever they may be found—physical, virtual, on-premise, in the cloud—and dramatically reducing both the cost and power consumption of today’s patchwork infrastructure.



Thursday, 22 August 2013 15:04

Patching the BCM Program Gaps

As a software company involved in the Business Continuity Management industry for over 13 years, we are constantly collaborating and exploring new opportunities with organizations in the market for BCM software that are looking to create an effective program to meet their goals and objectives.

Based on our experience, we have concluded that the end-state of BCP should be the ability to respond to any disruption that impacts an organization’s ability to deliver products & services. Such a disruption may result from an impact on any area of operations. Unifying Employee Health & Safety, Crisis Management, BCP, IT Disaster Recovery Planning, Supplier Continuity Planning, Alternate Work-Area Planning, Integrated Notification and Incident Management – among other forms of contingency planning – can lead to a resilient organization and provide tremendous advantages. We call this collaborative effort “Unified BCM”.




CIO — In the aftermath of the great data heist by Edward Snowden, the now-infamous computer specialist who stole top secret information from the National Security Agency and leaked it to The Guardian earlier this summer, CIOs are feeling a little helpless.

"People are saying that if it happens to the NSA, which must have incredible tools to prevent people from leaking data yet still leaks on a grand scale, we better be really careful," says Jeff Rubin, vice president of strategy and business development at Beachhead, a mobile security company.




Ah, the irony. We have all of these incredibly cool communication tools at our fingertips, and most of us are probably far lousier communicators now than we were before all of these tools came along. If things keep going the way they are, at some point, we’re all going to become babbling idiots who use yet undreamed of devices to convey our babbling.

Maybe what we need is a counterintuitive approach to reverse the trend. If so, Geoffrey Tumlin might have found the key. Tumlin, a communication consultant and author of the new book, “Stop Talking, Start Communicating,” contends that shoddy communication may be ubiquitous, but it’s not inevitable. Here are 10 tips he’s come up with to help save us from ourselves:

Back up to go forward. Try to remember how we communicated before we got our new devices. The digital revolution facilitated hypercommunication and instant self-expression, but, ironically, made it harder for anyone to listen. There’s just too much “chatter clutter” getting in the way—just consider the frenetic activity happening on Twitter at any given moment. To make the most of our conversations, we need to remember how we connected effectively with others before we had smartphones and computer screens to “help” us. Specifically, we should implement three guiding habits: Listen like every sentence matters, talk like every word counts, and act like every interaction is important. These points will help you be more present in conversations and will improve your ability to communicate effectively.



Today, Citrix crossed an important milestone in the way enterprises will view apps for work. We announced the general availability of the Citrix Worx App Gallery – an app ecosystem with over 100 committed apps.

So, what is an app ecosystem? Apple and Google created big app ecosystems for iOS and Android that drove the adoption of those platforms, Facebook launched their App Center for social media apps, and Salesforce.com created the App Exchange for SaaS apps. Similarly, the Citrix Worx App Gallery is an ecosystem for enterprise-ready mobile apps.

The enterprise app challenge

End users want to use many different types of apps for work. However, enterprises looking to mobilize apps face a heavy burden of tasks to make those apps enterprise-ready and available to their end users. App security tools in the form of app wrappers or SDKs have been bandied about as ways to protect apps, but IT has very little clarity on how an app makes its way from an app vendor onto end users’ devices with the necessary policies and controls in place. Often, the solution involves the enterprise identifying the apps or app categories it needs, executing contracts with the app developer, getting the app binaries, applying the security wrapper, verifying the app, and then deploying it to end users in an enterprise app store. This process then starts all over when the app or mobile OS is updated, or when the enterprise mobility vendor changes the app security SDK.



Intellectual Property (IP) theft – whether by competitors or states – has been occurring for a long time. Traditional approaches of protecting IP involve patents, copyrights, trademarks, physical security (locking documents away), classifying documents using a labelling scheme and staff education.

These traditional approaches are still valid today, and may need to be strengthened. They should also be supplemented by a range of electronic approaches. 

These include electronic licensing, encryption, data classification, access control, logically or physically separate networks, and providing "clean" devices to staff travelling to countries where IP theft is likely. All approaches are complicated by the demands of international travel, collaborative working, the need to share information (including IP) in the supply chain, consumerisation, and the cloud.

Information Security Forum (ISF) research has shown that protecting your IP can follow an information-led, risk-based process similar to that used to protect information in your supply chains, as discussed in the Securing the Supply Chain reports and tools.



Much has been written, presented and debated in the past few years on the “right way” for executives and policy makers to reinvigorate companies, markets and economies. The distinguished scholar Carlota Perez suggests fundamental changes to the way growth and prosperity get measured. Along somewhat similar lines, Steven Denning focuses on the damage inflicted through adherence to the tenet of maximizing shareholder value. Gary Hamel, elaborating on another thread that Perez touches on, advocates values over value. Last but not least, Hagel, Brown and Davison emphasize the power of pull for both designing the right system and designing the system right [i].

While the debate spans some topics that are clearly beyond the scope of responsibilities a typical executive is entrusted with, it is quite relevant to the Agilist concerned with end-to-end process implementation. Agile principles can, of course, be beneficially applied to product delivery departments such as dev and test. However, the real benefits to be had can only be attained through applying agile principles to the overall business process, not “just” the software development process. As pointed out by Tasktop’s Dave West in his recent Agile 2013 presentation, many/most of the Agile implementations tend to be of the Water-Scrum-Fall variety. In such implementations the Agile process in R&D is “sandwiched” between before-and-after corporate processes that are Waterfallish in nature. From a system perspective, incoherence at one point or another of such systems is pretty much inevitable due to incongruence of operating principles across the “Water,” the “Scrum” and/or the “Fall” components of the system. This reality and its operational manifestations are illustrated in Figure 1 and Figure 2 respectively.



President Obama’s Hurricane Sandy Rebuilding Task Force released their findings yesterday, sharing 69 recommendations to repair existing damage and strengthen infrastructure ahead of future natural disasters.

The task force encouraged an emphasis on new construction over simple repair, citing the impact of climate change on severe weather events. “More than ever, it is critical that when we build for the future, we do so in a way that makes communities more resilient to emerging challenges such as rising sea levels, extreme heat, and more frequent and intense storms,” the report said. Construction designed for increasingly dangerous storms, infrastructure strengthened to prevent power failure and fuel shortage, and a cellular service system that can subsist during disasters are all critical investments to prevent future loss.

Recommendations included streamlining federal agencies’ review processes for reconstruction projects, revising federal mortgage policies so homeowners can get insurance checks faster, and making greater use of natural barriers like wetlands and sand dunes. The team also said that planners need better tools to evaluate and quantify long-term benefits of future projects along the shoreline, but did not detail what would be best ecologically and economically.



IDG News Service - Heading into the heart of hurricane season 10 months after Sandy slammed the New York metropolitan area, Wall Street has had time to reassess and revamp backup plans.

Sandy's storm surge caused the first weather-related, 48-hour closure of markets since the Great Blizzard of 1888.

"You could say Sandy forced the hand of the trading firms," said David Weiss, an analyst with the consulting firm Aite Group.

"A confluence of trends" that lend themselves to overall system resiliency was, however, already under way, Weiss added. The commoditization of server hardware suitable for trading and back-office systems, for example, has helped give rise to third-party data centers that can help financial-sector companies reduce risk.



IDG News Service - After the terrorist strikes of Sept. 11, 2001, the New York Stock Exchange learned some valuable lessons in keeping a time-sensitive financial trading network alive during a time of crisis.

"We found that during 9/11, carrier point-of-presence facilities went down, a lot of firms in the industry were not able to trade. So we made a decision to build a resilient network for the industry," said Vince Lanzillo, who is head of co-location for the Americas for NYSE Technologies (NYXT), a commercial subsidiary of NYSE Euronext that offers infrastructure, content and liquidity services to the financial industry.

So, when Hurricane Sandy struck last year, NYXT was prepared to continue operations, though the NYSE itself decided to halt trading, citing concerns with employee safety and other factors.



Wednesday, 21 August 2013 16:58

Hurricane Sandy Task Force Issues Report

With two months to go to the one-year anniversary of Hurricane Sandy, a federal task force created after the storm has issued a report that’s getting a lot of media coverage.

The plan includes 69 policy initiatives, of which a major recommendation is to build stronger buildings to better withstand future extreme storms amid a changing climate.

Shaun Donovan, secretary of the U.S. Department of Housing and Urban Development, and chair of the task force, notes:



The fear that business services – or indeed the business itself – might not be recoverable after a disaster-level event results in many sleepless nights for CIOs across the world. But it doesn’t need to be that way.

Disaster recovery planning, a subset of business continuity, comprises the process, policies and procedures required for the recovery or continuation of technology infrastructure after a disaster-level event. 

Disasters come in multiple forms and may be highly unpredictable in nature, but the effect they have on your business can be calculated and mitigated against through robust preparation and testing.



Part one of a two-part series

Crisis: Any situation that is threatening or could threaten to harm people or property, seriously interrupt business, significantly damage reputation and/or negatively impact the bottom line.

Every organization is vulnerable to crises. The days of playing ostrich are gone. You can play, but your stakeholders will not be understanding or forgiving because they've watched what happened with Fukushima, Penn State/Sandusky, BP/Deepwater and Wikileaks.

If you don't prepare, you will incur more damage. When I look at existing crisis management-related plans while conducting a vulnerability audit (the first step in crisis preparedness), what I often find is a failure to address the many communications issues related to crisis/disaster response. Organizational leadership does not understand that, without adequate internal and external communications, using the best-possible channels to reach each stakeholder group:

- See more at: http://blog.missionmode.com/blog/the-10-steps-of-crisis-communications.html#sthash.1PpM1F2j.dpuf


Neither snow nor rain nor heat nor gloom of night will stop a Postal Service worker. But a hurricane will stop the mail truck.

Hurricane Sandy, the massive super-storm that pounded the East Coast in 2012 and caused billions of dollars worth of damage, also managed to destroy or damage 110 delivery vehicles used by the U.S. Postal Service. Most of the vehicles were damaged by flooding, but one was hit by a falling tree.

The damaged vehicles are a small segment of the fleet affected by the hurricane. Postal Service employees kept 16,157 vehicles unscathed, which the USPS Inspector General credits to good emergency planning before the hurricane.

According to its 2012 Hurricane Preparedness Guide, USPS instructed employees to move mail vehicles to higher ground.

By Danny Bradbury

The world and its dog has been shocked by the Prism news story. Early in June, we found out that the US National Security Agency (NSA) had developed a secret data-gathering mechanism to steal all our data and store it in a large data warehouse.

We are outraged that it is being mined, searched and otherwise prodded. But do we really think that big data security problems stop at Google, Facebook, Microsoft and Fort Meade?

The private sector has been collecting data on all of us for ages. It is stored in massive data sets, often spread between multiple sources. What makes us think this is any more secure? At least the NSA is well trained in keeping it all under lock and key.

Social trend

What does “big data” mean, anyway? Some describe it – wrongly – as simply a lot of data in a relational database. But if that were the case, then the security challenges would be the same as for conventional databases. And they aren’t.

Others view it as data sets so large that they cannot be handled by traditional relational tools. But we have had that kind of thing for years, in the form of data warehouses.



By Lockwood Lyon

As summer (in the northern hemisphere) comes to an end and summer vacations wrap up, it's time to prepare for the upcoming end-of-year rush. The months of November and December are characterized by a significant increase in consumer transactions including holiday-related purchases of food and gifts, travel, bank transactions, and winter clothing.

Many retail organizations call this period the Peak Season, and for good reasons: not only are transaction rates higher during this time of year, but a significant amount of a company's profit (sometimes as much as 40%) is realized.

To meet the upcoming demands on IT systems, database administrators (DBAs) need to prepare the database and its supporting infrastructure for increased resource demands. Being proactive now can pay big dividends by maintaining service level agreements (SLAs), avoiding outages and resource shortages, and ensuring a positive overall customer experience.
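
One way to make that preparation concrete is a simple capacity headroom check: compare projected peak transaction rates against what load testing has shown the database can sustain. The figures and the 20% safety margin below are illustrative assumptions, not numbers from the article.

```python
# A hypothetical headroom check a DBA might run ahead of peak season.
# All inputs (baseline rate, peak multiplier, margin) are illustrative.

def headroom_ok(baseline_tps, peak_multiplier, max_tested_tps, margin=0.20):
    """True if load-tested capacity covers projected peak plus a safety margin."""
    projected_peak = baseline_tps * peak_multiplier
    required = projected_peak * (1 + margin)
    return max_tested_tps >= required

# Baseline 1,200 tps; peak season historically runs 2.5x baseline;
# load tests sustained 4,000 tps. Required: 1,200 * 2.5 * 1.2 = 3,600.
print(headroom_ok(1200, 2.5, 4000))  # → True
```

If the check fails, the gap quantifies how much additional capacity (or query tuning) is needed before November.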



Monday, 19 August 2013 17:27

Policies & Procedures

Create BEFORE need


Lack of relevant policies and procedures is likely to cost the University of Toledo Medical Center (UTMC) at least US$25,000.

According to Lawyers and Settlements.com, a 30-year veteran nurse at UTMC was terminated for failure to stop another nurse from removing items from the operating room before the procedure had concluded. The complaining nurse claims she was also fired for violating policies on communications and logging out.

The story is that the plaintiff was working in the operating room (OR) with another nurse.

The other nurse left the OR for lunch but, according to the article, failed to log out of the hospital computer system. Returning from lunch, the nurse allegedly disposed of a kidney that was waiting to be transplanted.



The new European Union regulation requiring mandatory personal data breach disclosures by telecoms operators and internet service providers (ISPs) comes into force on Sunday 25 August 2013.

The new regulation builds out the security breach provisions for telecoms providers and ISPs introduced into EU law in 2009 through the E-Privacy Directive 2009/136/EC.

From 25 August, all EU telcos and ISPs will be required to notify national authorities of any theft, loss or unauthorised access to personal customer data, including emails, calling data and IP addresses.

Details concerning any incident, including the timing and circumstances of the breach, nature and content of the data involved, and likely consequences of the breach, must be reported.

“Controversially, the regulation requires breach notification to national regulators within 24 hours of detection, subject to a "feasibility" request,” said Stewart Room, privacy and information partner at law firm Field Fisher Waterhouse.



Following on from my previous article about Prism, we have since heard further revelations of the US National Security Agency's (NSA) interception and surveillance of data. 

Prism is evidently the tip of a data privacy iceberg. International “cyber espionage” makes great press, but let’s get this straight from the outset: your data is at risk whether you are small, medium, large, a corporation, charity or nation. Moreover, your sensitive information is at risk.

So why look at intellectual property (IP)?

IP is your most sensitive data; that which you need to control completely. If compromised, it could affect the stability or the existence of a company or product, and as such represents the greatest prize to an attacker. National security has its equivalents – passport data, criminal databases, spy identities – information an aggressive foreign state could use against the home nation to cause disruption and discord.



Monday, 19 August 2013 17:21

Point Solutions Must Die

Last year I wrote a blog post titled, “Incident Response Isn’t About Point Solutions; It Is About An Ecosystem."  This concept naturally extends beyond incident response to broader enterprise defense.  An ecosystem approach provides us an alternative to the cobbling together of the Frankenstein-esque security infrastructure that is so ubiquitous today.

Many of us in the information security space have a proud legacy of only purchasing best-in-breed point solutions. In my early days as an information security practitioner, I only wanted to deploy these types of standalone solutions. One of the problems with this approach is that it results in a bloated security portfolio with little integration between security controls. This bloat adds unneeded friction to the infosec team’s operational responsibilities.  We talk about adding friction to make the attacker’s job more difficult; what about this self-imposed friction?  S&R pros’ jobs are hard enough. I’m not suggesting that you eliminate best-in-breed solutions from consideration; I’m suggesting that any “point solution” that functions in isolation and adds unneeded operational friction shouldn’t be considered.



The myth of King Midas warns us that what we first perceive as a blessing can also be a curse. Turning objects into gold with the slightest touch would be a magnificent power to have, however inadvertently transforming food, family, and friends into gold would be a nightmare. Such is the case with technology. Our interconnected world and seemingly never-ending supply of even “smarter” smart-phones and other devices provides us previously unimaginable power to share our ideas and make our complicated world more manageable. Calling these advantages of technology a blessing hardly seems hyperbolic; and yet, with the good also comes the potential for bad.

For auditors and compliance professionals, both the greatest advantage and the greatest threat of the digital world is big data. By following the digital footprints left by the company around the world, auditors can now seek the truth about employee actions and company operations more objectively and efficiently than ever before. The challenge, however, is in effectively managing the sheer volume of sensitive information that the company and its employees create, share, and store on these powerful devices. For example, an employee’s personal Facebook update could reveal proprietary information; a stolen laptop can be akin to losing control of a safety deposit box; hackers could break into the company’s computer network and export confidential data; companies can lose access to their data that’s stored in the cloud; and then there are the complexities caused by the wild variation in country-specific data privacy laws. It can be enough to make your head spin.



Monday, 19 August 2013 17:19

The Reality of Cloud Scalability

Now that the cloud is becoming a standard feature in the enterprise, a little truism has emerged: Resources are infinitely scalable, but so are the costs.

Theoretically, at least, increased cloud consumption should only happen in the presence of increased business activity, and therefore increased revenue. So the cost/benefit ratio should always favor the enterprise, at least if you’re smart about it. In practice, though, it doesn’t always work that way. But even if it did, the real question is not at what point does a gargantuan cloud presence become a money loser, but when does it end up costing more than building and operating your own data center?

This conflict is particularly acute in rapidly growing enterprises. Companies that go from little-known start-up to must-have business solution provider overnight can suddenly find themselves on the hook for millions per year. Wired.com, for example, tells the tale of MemSQL, a West Coast database services company that originally provisioned its entire test and development infrastructure on Amazon only to dump it one day in favor of in-house, bare metal infrastructure. A simple cost comparison was the key driver: For about $120,000 amortized over three years, the company was able to shed more than $300,000 in cloud costs per year – a reduction of more than 80 percent.
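
The MemSQL comparison above reduces to simple arithmetic. The sketch below uses the figures from the article; the three-year straight-line amortization model is our own assumption.

```python
# Back-of-the-envelope version of the cloud-vs-bare-metal comparison
# described above. Dollar figures come from the article; the straight-line
# amortization model is an assumption for illustration.

in_house_capex = 120_000        # one-time hardware cost
amortization_years = 3
cloud_cost_per_year = 300_000

in_house_per_year = in_house_capex / amortization_years   # $40,000/yr
savings = cloud_cost_per_year - in_house_per_year          # $260,000/yr
reduction = savings / cloud_cost_per_year                  # ~0.87

print(f"Annual savings: ${savings:,.0f} ({reduction:.0%} reduction)")
# → Annual savings: $260,000 (87% reduction)
```

That works out to a reduction of well over the 80 percent the article cites, before accounting for power, space, and staffing costs that would narrow the gap.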



Monday, 19 August 2013 17:12


To most people a crisis is bad, and for the most part they’d be right. However, an organization can do good things when it is hit with a crisis; some may even say there is an opportunity. The situation itself might be bad enough, but if it isn’t managed correctly, or communications aren’t approached in a positive way, the crisis can be compounded because the media and the public will suspect the organization is hiding more.

If it seems that an organization isn’t prepared – through its communications and response actions – the media and public may start to go ‘hunting’ for more information and uncover other details that the organization may not want released. These details may not be damaging on their own, but compounded with the existing crisis they will seem larger and could create another crisis or escalate the existing one. The organization will then have more than one crisis on its hands.
Below are some tips for how to communicate during a crisis; some do’s and don’ts and tips for ensuring good communications when speaking to the media and the general public.



Friday, 16 August 2013 16:29

Know your neighbors

I’ve written it before.

I’m writing it again.

Know your neighbors.

Usually the admonishment comes with a suggestion to know what your neighbor does (is the product or service popular or not?), who your neighbor employs (popular or unpopular segments of the population), and how your neighbor treats its personnel (walkouts possible to probable?).

Turns out, according to an Associated Press article in the “PhillyBurbs.com” site titled

Salvation Army to be named in Philadelphia building collapse lawsuits

(see http://www.nbcnews.com/id/52764647/ns/local_news-delaware_valley_pa_nj/t/salvation-army-be-named-philadelphia-building-collapse-lawsuits/), that's not enough.



If the financial crisis and events like the Japanese tsunami had but a single lesson, it is this: What we don’t know can be more important than what we do know. This raises the ultimate rhetorical question, “Do we know what we don’t know?” Of course, no one knows. The reality of today’s environment is that management and the board can never be certain that they know everything they need to know. So how do we manage an organization given this reality?

Following are 10 things companies can consider in managing uncertainty:

(1)      A margin for error may be needed to cover what we don’t know: While management has knowledge from internal and external sources, do they have a useful point of view regarding what they don’t know? Probably not. That’s why strategic choices and the risks undertaken should provide a margin for error to reflect what directors and management may not know.



By Eric Thomas

“Use it or lose it!” You might hear your doctor say that expression about your mental acuity or your personal trainer about your physique. I often hear it from my clients in government, specifically from federal CIOs or IT managers. The phrase relates to their IT budget; if they don’t spend their money in the current year, it goes away the following year. Of course, we should have smarter incentives to reward spending under budget, but we’ll properly address that issue another day.

The impact of “use it or lose it” or, more aptly, “spend it or lose it” is most acutely felt during the budgeting process. The federal budgeting process is highly regulated, long and not very transparent to the layperson. In short, the U.S. Congress appropriates funds to agencies which then appropriate funds within the agency. From there, the IT manager is given a sum of money to spend during the fiscal year. The manager starts with a spend plan, allocates money to individual projects or line items, and tracks obligations and actual spending throughout the fiscal year.

- See more at: http://www.cioinsight.com/it-management/it-budgets/five-tips-for-use-it-or-lose-it-budgets/#sthash.zYbK8A7Q.dpuf

Friday, 16 August 2013 16:22

Networking Beyond TCP

Difficult to imagine? Networking across systems has worked reliably over TCP since our grandparents’ day, and that is what we have always seen. The systems at either end of the network did not have to worry about how the TCP connection was established, so the core definition of TCP was “a single connection between two hosts.” When researchers designed the TCP/IP protocol suite, they did an awesome job of anticipating the requirements that might come up over the next couple of decades. Thanks to their vision, we are still able to communicate well over TCP today.

But what changed in between? The network of devices, the Internet, grew at an unexpected rate and broke all the predictions. Internet backbone traffic in 1990 was close to 1 terabyte; by 2000 it had grown to nearly 35,000 terabytes. Given that exceptional growth, with large businesses transforming themselves around the Internet, was TCP designed to take this much load without slowing down to the point where it starts breaking? While all this growth was happening, researchers continued working in the background to refine TCP’s congestion control, and many new RFCs emerged and were adopted. Today we are all able to work efficiently thanks to these complex congestion control and avoidance algorithms.
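
The core idea behind those congestion control algorithms can be sketched in a few lines. Below is a minimal toy model of TCP's additive-increase, multiplicative-decrease (AIMD) behavior: the sender grows its congestion window by one segment per round trip and halves it when a loss is detected. The round count and loss pattern are invented for illustration; real TCP (per the congestion-control RFCs) adds slow start, fast recovery, and much more.

```python
# Toy model of AIMD (additive-increase, multiplicative-decrease),
# the backbone of TCP congestion avoidance. Window sizes are in
# segments; the loss schedule is hypothetical.

def aimd(rounds, loss_rounds, initial_cwnd=1):
    """Return the congestion-window size after each round trip."""
    cwnd = initial_cwnd
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1, cwnd // 2)  # multiplicative decrease on loss
        else:
            cwnd += 1                 # additive increase per RTT
        history.append(cwnd)
    return history

# Window grows linearly, then halves when a loss hits in round 5.
print(aimd(8, loss_rounds={5}))  # → [2, 3, 4, 5, 6, 3, 4, 5]
```

The resulting sawtooth pattern is what lets many TCP flows probe for bandwidth while backing off quickly enough to avoid congestion collapse.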



It would be impossible for a company that has no disaster recovery (DR) plan in place to continue business after a severe hacker attack, fire, flood or tornado. And yet, many companies still do not have solid DR strategies developed. Businesses often find it challenging to make a case for a business continuity plan, much less devote funds, people and time to its creation “just in case” something were to happen someday.

Every minute your business systems spend down is a loss of revenue. For your enterprise to ensure its continued services after an emergency situation, having an extensive DR strategy is critical.

Our IT Download, Business Continuity: Considerations, Risks, Tips and More, provides instruction on how to develop a business recovery strategy. According to this report:

…Executives know that downtime equals lost dollars and that every minute spent on recovery data and systems is time taken away from running their business. This results in a lack of productivity and a poor customer response time. Companies can create a resilient IT infrastructure with automated disaster recovery (DR) for any service, any time and any place…
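
The "downtime equals lost dollars" point above is easy to make concrete with per-minute revenue arithmetic. The revenue figure below is a made-up example, not a number from the report.

```python
# Illustrative downtime-cost arithmetic. The revenue figure is a
# hypothetical example, purely to show the calculation.

annual_revenue = 50_000_000           # hypothetical online revenue
minutes_per_year = 365 * 24 * 60      # 525,600

revenue_per_minute = annual_revenue / minutes_per_year
outage_minutes = 90

print(f"Estimated loss for a {outage_minutes}-minute outage: "
      f"${revenue_per_minute * outage_minutes:,.0f}")
# → Estimated loss for a 90-minute outage: $8,562
```

Even this crude estimate, which ignores reputational damage and recovery labor, is often enough to justify a DR budget line.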



A small documentary released this summer has created a reputational riptide for SeaWorld. Blackfish, directed by Gabriela Cowperthwaite, combines park footage and interviews with trainers and scientists to explore the impact of keeping killer whales for entertainment – and, ultimately, examines the possible factors that led one such whale to kill three people in captivity. The film has outraged animal rights activists and casual audience members alike with footage of brutal whale-on-human attacks at the parks and haunting tales of a natural order torn apart to keep 12,000-pound animals in captivity. SeaWorld’s attempts to head off criticism by emailing an itemized rebuttal to critics have drawn widespread publicity, but many have interpreted the move as defensive and further damning.

This week, it became clear that Pixar has taken note of the movie – and the backlash. The animation studio decided to rewrite part of the upcoming sequel to Finding Nemo that referenced a SeaWorld-like facility.

The plot is reportedly still in flux for Finding Dory, currently scheduled for release in November 2015. Ellen DeGeneres is set to star as Dory, an amnesiac blue fish who cannot remember who raised her, according to the L.A. Times. Initial plans for the movie saw characters ending up in a marine park for fish and mammals. But now, the aquatic center will be differentiated from SeaWorld by giving the animals the option to leave.



Weighing up the cost of risk against the cost of coverage seems to be the perpetual dilemma of some insurance buyers.

In the case of cyber insurance, it would appear that concerns about the cost of coverage diminish once companies make the decision to purchase a policy. And the longer that policy has been held, the greater the satisfaction.

According to a recently released Ponemon study, only 31 percent of risk management professionals at companies surveyed say they have a cyber security insurance policy. However, among those companies that don’t have a policy, 57 percent say they plan to purchase one in the future.
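
Combining those two survey figures gives an implied eventual adoption rate. This composition is our own arithmetic, not a claim from the Ponemon study itself.

```python
# Combining the two survey figures above: 31% already hold a policy,
# and 57% of the remaining 69% plan to buy one. The composition is
# our own back-of-the-envelope arithmetic.

have_policy = 0.31
plan_to_buy = 0.57   # share of non-holders planning to purchase

implied_adoption = have_policy + (1 - have_policy) * plan_to_buy
print(f"{implied_adoption:.1%}")  # → 70.3%
```

In other words, if stated intentions hold, roughly seven in ten of the surveyed companies would eventually carry cyber coverage.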



In the 10 years since sagging power lines in Ohio sparked a blackout across much of the Northeastern United States and Canada, utility engineers say they have implemented measures to prevent another such event in the country's electric grid.

But there is one disaster scenario for which the power companies are still unprepared: a massive attack on the computer networks that underlie the U.S. electric grid.

Energy industry leaders believe a cyberthreat could produce a blackout even bigger than the 2003 event, which left an estimated 50 million people in the dark.



Martin Lee, technical lead threat intelligence, CISCO, explains why smart buildings bring a new range of potential vulnerabilities that need management and mitigation.

CISCO defines the ‘Internet of Everything’ as “bringing together people, process, data, and things to make networked connections more relevant and valuable than ever before - turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals, and countries”. But as well as bringing opportunities, it also changes the threat landscape.

The Internet of Everything is being created through continuing technical advances. Computers are getting smaller and more functionally powerful, yet drawing less electrical power. These features, coupled with the ubiquity of WiFi, 3G, 4G and mesh networks, mean that small computing devices can be embedded within the most mundane devices that previously operated autonomously — like a toaster or copy machine — and connect them to the Internet. These devices can then report on local conditions to a central server that can understand the wider environment, and then receive instructions on how to modify their operation to achieve maximum efficiency.
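The report-and-adjust loop described above can be sketched in a few lines. This is a minimal illustration, not Cisco's architecture; the `ControlServer` and `Device` classes and the temperature thresholds are invented for the example.

```python
class ControlServer:
    """Collects readings from many devices and issues adjustment instructions."""

    def __init__(self, target_temp=21.0):
        self.target = target_temp
        self.readings = {}

    def report(self, device_id, temperature):
        # Record the local condition, then answer with an instruction
        # based on the wider picture (here, a simple target band).
        self.readings[device_id] = temperature
        if temperature > self.target + 1.0:
            return "power_down"
        if temperature < self.target - 1.0:
            return "power_up"
        return "hold"


class Device:
    """An embedded device that senses locally and obeys the central server."""

    def __init__(self, device_id, temperature):
        self.device_id = device_id
        self.temperature = temperature

    def sync(self, server):
        # Report the local condition and apply whatever the server says.
        instruction = server.report(self.device_id, self.temperature)
        if instruction == "power_down":
            self.temperature -= 0.5
        elif instruction == "power_up":
            self.temperature += 0.5
        return instruction
```

In a real deployment the `report` call would be a network request rather than a method call, but the division of labor is the same: devices sense and obey, the server aggregates and decides.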



Asigra has released the results of new research into the impact of data growth on backup and recovery pricing and cost containment. The research, commissioned by Asigra and conducted by the Enterprise Strategy Group (ESG), includes findings from nearly 500 financial and IT decision makers/influencers. The research includes insights on data growth, software pricing preferences, and data recovery trends.

In the report, IT end-users were questioned about the financial pressure they are under to reduce IT expenditures amidst rising data growth costs. The research revealed that two out of three respondents felt at least some pressure to reduce IT spending, and that this pressure increases with a corporation’s annual revenue. Those from large companies were more likely to say they felt strong pressure to reduce costs across several areas of IT. While the desire to reduce IT costs is high for many organizations, financial buyers of backup and recovery software and/or services expect to see a substantial increase in purchases in this area over the next five years due to data growth rates.



CSO — Growing awareness of cyber threats and reporting requirements by regulators are driving a newfound interest in insurance products covering data breaches and other computing risks.

Almost a third of companies (31 percent) already have cyber insurance policies, and more than half (57 percent) that don't have policies say they plan to buy one in the future, a recent study by the Ponemon Institute and Experian Data Breach Resolution found.

"It's an issue that's much more front and center with senior executives in companies now," Larry Ponemon, founder and chairman of the Ponemon Institute, said in an  interview.

"Data security may not be a top five issue with companies, but it's in the top 10," he added.



CIO — Between electronic health record (EHR) systems, imaging systems, electronic prescribing software, healthcare claims, public health reports and the burgeoning market of wellness apps and mobile health devices, the healthcare industry is full of data that's just waiting to be dissected.

This data analysis holds much promise for an industry desperately seeking ways to cut costs, improve efficiency and provide better care. There are victories to be had, to be sure, but getting data from disparate, often proprietary systems is an onerous process that, for some institutions, borders on impossible.



Thursday, 15 August 2013 15:15

XenApp administration going mobile

Our Mobile SDK for Windows Apps has been out for a while now, and customers are already using it to mobilize Windows apps delivered via XenApp/XenDesktop. You might have seen it but not looked into it because you don’t have any development experience. Well, you don’t need to be a developer to try out the Mobile SDK, as we have some sample apps that leverage it.

One of our sample apps is a simple XenApp administration console that provides basic view and control functionality for a XenApp farm. It allows you to view sessions and servers in your XenApp farm. The following screen shot shows the Servers page where you can see summary information for your XenApp servers.



In mid-July 2013, several of New York’s Wall Street firms participated in an exercise to test their resilience in the face of cyber-attacks. The initiative was coordinated by SIFMA, the Securities and Financial Markets Association, and included commercial financial companies, as well as the U.S. Treasury Department. Financial institutions in the US have been subjected recently to massive attacks centred on distributed denial of service (DDoS). DDoS attacks render systems inaccessible for normal use, either by generating floods of traffic to use up all the network bandwidth for the system, or by overloading the application itself. Given that such attacks are not specific to the financial arena, where else might such tests need to be done?
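DDoS mitigation is a broad topic, but the application-overload variant described above is commonly blunted with per-client rate limiting. The token-bucket sketch below is illustrative only (the rate and burst figures are arbitrary, and a pluggable clock is used so the behavior can be tested); production systems enforce such limits at the network edge, not in application code.

```python
import time


class TokenBucket:
    """Per-client token bucket: allow bursts up to `capacity`, then
    throttle to `rate` requests per second."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start full
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should drop, queue, or delay the request
```

A flood from a single source exhausts its bucket almost immediately, while legitimate clients operating within the burst allowance are unaffected.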



Writing about technology is, by nature, an exercise in predicting the future. And when it comes to enterprise technology, the question hanging over nearly everyone’s head is: “What will happen to my data center?”

To be sure, data is the lifeblood of the enterprise. But the infrastructure used to process and manipulate that data is in a constant state of flux. In today’s world, the biggest changes involve virtualization, software-defined systems and the cloud, all of which are steadily breaking down the close relationships that once existed between hardware, software and middleware platforms, while at the same time ushering in new levels of dynamism and diversity across data environments.



By Nicole Hawk

An estimated 75,000 wildfires occur in the United States each year, and each one has potential public health concerns including evacuating safely, dealing with smoke, or cleaning up spoiled food after a power outage.  In June 2013, Colorado faced multiple devastating wildfires, including the Royal Gorge Fire in Cañon City, which required the evacuation of a state prison, and the Black Forest Fire in Colorado Springs, which became the most destructive in Colorado history.  The 14,000-acre fire forced 38,000 people to evacuate and destroyed almost 500 homes.  Before, during, and after the wildfires, local, state, and federal public information officers (PIOs) worked together to quickly share emergency information via traditional media, social media, and websites such as Inciweb.

Smokey the Bear warns of extreme danger

As with most responses, CDC’s main role is getting information to people before an emergency to help them prepare and after an emergency during the recovery phase to help them protect their physical and emotional health.  As members of CDC’s Joint Information Center (JIC), Joanne Cox and I had the opportunity to travel to Colorado to observe these wildfire information activities.  Understanding how Colorado handled information needs helped us build relationships and find new ways to get CDC information to our partners during a wildfire response.  

We first reached out to the Colorado Department of Health and Environment, which put us in touch with Dave Rose, an El Paso County PIO.  Dave welcomed us to the Black Forest Fire JIC in Colorado Springs.  We found the JIC, staffed by county and city PIOs and volunteers, buzzing with activity.  People worked around the clock answering phones, posting evacuation and damage updates to websites and social media, and coordinating public meetings and media interviews.

wildfire PIO meeting

The Rocky Mountain Incident Management Team B gathers for an afternoon command and general staff meeting.

Although this was Joanne’s first time observing a wildfire, she was in good hands.  Before working at CDC, I served as a wildland firefighter and PIO for the U.S. Forest Service.  As a result, Joanne and I were armed with plenty of fire T-shirts, which helped us blend into the crowd of firefighters. By the time our 3-day whirlwind trip was over, we had toured the Black Forest Fire JIC, a wildfire base camp, two incident command posts (ICPs), and the Rocky Mountain Area Coordination Center, and made a lot of new friends in the wildland fire community.  Most importantly, we learned even more about the kinds of information people need and how they can best receive it before, during, and after a wildfire.

We used CDC’s social media network and real-life connections to make the most of our time in Colorado.  Because CDC’s own @CDCEmergency Twitter handle follows local, state, and federal emergency management agencies, we learned of a public meeting for the Royal Gorge Fire in Cañon City, Colorado.  Our virtual network may have gotten us to the public meeting, but once we arrived, we were fortunate to meet Susan Ford, a liaison officer for the Rocky Mountain Area Incident Management Team B.  She invited us to spend June 14 with the team.  At the ICP, we attended a VIP visit from Colorado Governor John Hickenlooper as well as meetings with command and general staff and agency cooperators, including the Fremont County Public Health Agency.

Another connection at the Royal Gorge Fire was one from my days in the Forest Service. I worked with Chris Barth, the lead PIO for the fire, on the 2011 Rockhouse fire in Texas.  He put us in contact with the lead PIO for the Black Forest Fire, which was managed by the Great Basin Type 1 Incident Management Team.  On June 15, we fortified ourselves with coffee and attended the 6:00 a.m. briefing at the Black Forest fire ICP, where we met the Incident Commander, Rick Harvey.  It was another action-packed day of observing live media interviews, a press conference, and lots of communication activities.

Joanne Cox gets a tour of the Royal Gorge Fire incident command post from Susan Ford, a liaison officer on the Rocky Mountain Incident Management Team B.

Shane Greer, an incident commander for the Royal Gorge fire, helped snag us an invitation to visit the Rocky Mountain Area Coordination Center in Lakewood, CO.  The Geographic Area Coordination Center works with the National Interagency Fire Center to mobilize wildland fire resources across Colorado, Kansas, Nebraska, South Dakota, and Wyoming and maintains a big-picture view of fire activity by analyzing information, maps, weather forecasts, GIS files, and data from fire modeling software.  While observing a morning coordination call, we got a taste of how information flows from the national to the regional to the local level.

We learned a lot about how information was shared on Colorado’s wildfires and made many valuable connections to the wildland fire community. Now we are even better equipped to help the JIC share CDC wildfire information with PIOs, partners, the media, and most importantly, with local communities.


While IBM may be dominant when it comes to all things mainframe, EMC has been steadily expanding its share of the mainframe storage business.

EMC’s launch of new disk-based library systems for mainframe environments that are based on the company’s VMAX, VNX, or Data Domain storage platforms strengthens its role in the mainframe storage arena.

According to Rob Emsley, senior director of product marketing for EMC's Backup Recovery Systems division, the latest generation of EMC storage systems takes advantage of Intel processors to deliver backups at speeds four times faster than anything IBM currently offers. Speed is critical in mainframe environments, says Emsley, because of the sheer volume of data typically flowing through mainframe systems.



Step 1 – Overcommit and under-deliver. Large corporations are seeking ways to drive down their cost models in today's marketplace by using cloud-based services.  Bespoke outsourcing is not a cloud-based delivery model, and yet many large outsourcing companies are billing their services this way to large enterprises.  Committing to custom delivery for thousands of subscribers with thousands of applications will lead to a higher cost model and lower customer satisfaction.  If you are a service provider, it is better to start with a catalog of applications and meet the needs of the SMB first, then move upstream to the larger businesses.  Migrating subscribers from a large enterprise into a cloud data center is very time consuming.

Step 2 – If you build it, they will come.  Cash is king – it always has been – so why develop an environment costing tens of millions of dollars/euros unless you have adequately researched who needs what, and where?  Looking at IaaS purchases in the last three years should give a clue.  How many of these purchases (buy vs. build) have led to success in cloud delivery of services?  Again, service providers should develop a business model based on the demand for apps, desktops and data in the SMB space and stoke the cash flow engine before sinking huge capital costs into data centers.



Life as a Chief Compliance Officer is not so easy.  The job, as defined, means living with day-to-day risks, any one of which is significant enough to damage or even destroy the company for whom you work.  CCOs learn to live with risk.

When a CCO has the backing of the board and the CEO, the job is relatively easier.  That does not mean it is an easy job.  To the contrary, every CCO faces challenges in securing adequate resources, gaining the cooperation of other business units, and persuading senior managers and employees that ethics and compliance are important to the company's bottom line.

The inherent difficulty for the CCO is to demonstrate his or her importance to an organization by proving a negative – we have not had any serious law violations because of the existence of the company’s ethics and compliance program.  That is a hard argument to make, but luckily it is intuitive and it naturally appeals to intelligent senior managers and a CEO.



Struggling with what comes after “instant news,” I’ve tried to come up with a way of describing the dramatic change in real time information sharing that was powerfully demonstrated in the Boston manhunt. For better or worse, I’m using “NanoNews” to describe it.

I created a video in lieu of an in-person presentation I was invited to make at the National Capital Region’s Social Media in Emergencies conference. That presentation was just concluded so now I’m sharing this with you.

In 2001, when I wrote the first version of “Now Is Too Late: Survival in an Era of Instant News” I used the term instant news to help communicate that news cycles were gone, that as fast as news helicopters could get overhead the news of your event or disaster would be live on the air. I was thinking of the ubiquitous breaking news as well as the already emerging trend of sharing information via the Internet—at that time primarily through email.

But compared to the “instant news” we have today, “breaking news” corresponds more to snail mail. It’s practically dead and gone, and not just through over-use. When millions are tuned into the police scanner chatter broadcast live through Ustream or converted into a Reddit thread using websites like Broadcastify or scanner apps like 5_0 Scan, it’s obvious that breaking news can’t keep pace. By the time even the fastest news crews get the information from such sources, and relay it, it will be minutes old—and minutes old is unacceptable when you could have real time information.



Enterprises are struggling to understand the risk and privacy impacts of the mobile applications in use in their environments. As the consumerization of mobile continues to shove BYOD into the enterprise, the number of applications in use is growing exponentially. Organizations must get a better handle on just how much risk is accumulating from the proliferation of mobile apps on their users’ devices.

I'm currently researching a concept designed to help an enterprise know where it is on the mobile application security maturity curve. Understanding where you currently stand is the quickest way to determine the path to improving your standing in the future.



Wednesday, 14 August 2013 15:53

Green IT Initiatives Provide Business Savings

For businesses, going green often means cost savings. Nowhere can this be truer than in the area of IT. Smaller, more efficient computers and servers, cloud computing and even advancements in software can bring about significant budgetary and carbon-footprint savings for the business. This brings many companies to start thinking about creating greener data centers.

But where and when do you begin to adopt greener policies? How do you know what to buy?

The book “Green Computing: Tools and Techniques for Saving Energy, Money, and Resources,” by Bud E. Smith provides an in-depth look at green IT initiatives. It begins by explaining why a company should go green, and then continues with chapters that give detailed explanations on cost savings, environmental drivers and climate change issues. Other chapters give informative looks into:



Wednesday, 14 August 2013 15:52

Why You Won’t Hire a Data Scientist

I remember the first time I heard the terms “business intelligence” and “analytics.” Business. Intelligence. Yep, that was something I could get behind.

Then I figured out that it really amounted to business statistics, automated to a certain extent by a computer. It was a bit of a bummer, really.

It seems the term "data science” is likewise overrated.

IT consultant Robin Bloor, in a fabulous piece, points out that there’s really no such thing as “data science.” In fact, what we’re calling data science has very little to do with science and everything to do with mathematics — specifically, statistics.

“If you are already tired of the term ‘big data,’ but not yet tired of the term ‘data science,’ let me help you get there as swiftly as possible,” Bloor writes. “If there were a particular activity devoted to studying data, then there might be some virtue in the term ‘data science.’ And indeed there is such an activity, and it already has a name: it is a branch of mathematics called statistics.”



Wednesday, 14 August 2013 15:51

How to handle a software audit

Software audits are an irritating and time-consuming part of life.

To survive one unscathed you'll need a thorough understanding of your licensing requirements.

'IT executives being thrown into prison' is the usual battle cry of software industry bodies such as the BSA and FAST (despite, to my knowledge, no executive having gone to prison in the last 15 years).

The more realistic pain of software audits is unbudgeted cost and distraction from project delivery. It takes time to defend an audit - to collect the appropriate data and documentation - precious time that should have been spent focusing on business priorities.

Microsoft, Oracle, Adobe, IBM, SAP, Attachmate and other large software publishers regularly audit their customers. Past research with ITAM Review readers suggests that, faced with a vendor audit, Microsoft is said to be the most helpful, and Oracle the least.
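Much of the data collection an audit demands boils down to reconciling deployment counts against purchased entitlements. A minimal sketch, with invented product names and seat counts:

```python
def license_shortfalls(installed, entitled):
    """Return {product: unlicensed_count} for every product deployed
    beyond its entitlement; fully covered products are omitted."""
    shortfalls = {}
    for product, count in installed.items():
        owned = entitled.get(product, 0)
        if count > owned:
            shortfalls[product] = count - owned
    return shortfalls


# Example inventory data (invented for illustration).
installed = {"OfficeSuite": 120, "CADPro": 15, "DBServer": 4}
entitled = {"OfficeSuite": 100, "CADPro": 20, "DBServer": 4}

# OfficeSuite is over-deployed by 20 seats; everything else is covered.
print(license_shortfalls(installed, entitled))
```

Having this reconciliation on hand before the auditor arrives turns a scramble into a review, and flags true-up costs while they are still budgetable.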



Wednesday, 14 August 2013 15:49

Visual Discovery Tools

There is no question that we are becoming more visually oriented in our approach to thinking today. You can see it in the increasing numbers of PowerPoint presentations given with the admonition that fewer words will suffice. You can see it in the increase in infographics, catchy photographs, and pictorial slogans that continue to spread across social media. And you can see the result in BI dashboards and an increasing array of visually oriented approaches to the display, digestion, and understanding of data. It is no wonder, then, that visual discovery tools should emerge as an important and rapidly growing part of BI.

Visual discovery tools are applications that typically enable non-analyst users to “play” with relationships between data items and explore an array of hidden possibilities that might yield interesting trends. They are available in some form from every major BI vendor, with a few pure play solutions leading the way. Current leaders are QlikView, Tableau, and TIBCO Spotfire, although rankings are somewhat obscured by increasing incorporation of this capacity in larger BI solutions.



Wednesday, 14 August 2013 15:42

No, Your Data Isn't Secure in the Cloud

Computerworld — While online data storage services claim your data is encrypted, there are no guarantees. With recent revelations that the federal government taps into Internet search engines, email and cloud service providers, any myth about data "privacy" on the Internet has been busted.

Experts say there's simply no way to ever be completely sure your data will remain secure once you've moved it to the cloud.

"You have no way of knowing. You can't trust anybody. Everybody is lying to you," security expert Bruce Schneier said. "How do you know which platform to trust? They could even be lying because the U.S. Government has forced them to."

While providers of email, chat, social network and cloud services often claim -- even in their service agreements -- that the data they store is encrypted and private, most often they hold the keys, not you. That means a rogue employee or any government "legally" requesting encryption keys can decrypt and see your data.
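The key-custody point is worth making concrete: encryption only protects you from parties who do not hold the key. The sketch below uses a toy XOR keystream built from SHA-256 purely to illustrate this; it is not real cryptography, and in practice a vetted library with client-held keys should be used.

```python
import hashlib


def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic byte stream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


secret = b"patient record #1234"
ciphertext = xor_crypt(b"client-held key", secret)

# The provider stores only unreadable ciphertext...
assert ciphertext != secret
# ...but anyone holding the key (provider, rogue employee, or a government
# compelling disclosure) recovers the plaintext immediately.
assert xor_crypt(b"client-held key", ciphertext) == secret
```

The takeaway matches the article: if the service holds the decryption key alongside your data, "encrypted" offers no protection against the service itself.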



Wednesday, 14 August 2013 15:11

Valley Fever, Explained

Cases of an illness known as valley fever have increased dramatically over the past decade. So what is it exactly? And who's at risk? We went to California's Central Valley to find out—watch the video above, then read this handy FAQ.

What is it? Coccidioidomycosis—commonly known as valley fever—is a fungal disease. Its spores live in the soil. If the soil becomes dry and dusty, people and animals can breathe it in, allowing the spores to grow inside their bodies.

What does valley fever feel like? It depends. Some people who get valley fever don't have any symptoms at all; in others the disease resembles a cold or flu. Some develop a pneumonia-like condition from the fungus in their lungs. In rare cases, the fungus disseminates and can even attack the brain. According to the CDC more than 40 percent of people who become ill from valley fever may require hospital visits; the average cost of that visit is $50,000. Between 1990 and 2008 there were 3,089 reported deaths from valley fever, though some public health experts suspect that it was an underlying cause of many more deaths.




Thriving in the Mainframe World: 4th Gen EMC Disk Library for Mainframe Sets a New Standard


Peter Smails

By Peter Smails

Senior Director, Product Marketing, Backup Recovery Systems Division at EMC

Even with significant growth in mainframe market share in 2012, Darwinian evolution never takes a break at EMC.

Today, EMC announced the next generation of Disk Library for Mainframe (DLm) systems: the DLm 8100 and DLm 2100.  Enabled by an enhanced virtual tape engine and new 8 Gb/s FICON adapters, the new products deliver 2x the scalability of the previous generation, with support for up to 11.4 PB of logical capacity and up to 80% faster performance, making the new systems more than 4x faster than the nearest competitor.



ROLLING MEADOWS, Ill. – Big data—dubbed “the new oil” by the World Economic Forum—can improve decision making, reduce time to market and increase profits. But it can also raise significant risk, ranging from disastrous data breaches to privacy and compliance concerns. To help enterprises retain control of their massive and fast-changing information, ISACA has issued new guidance available freely at www.isaca.org/privacy-and-big-data. Privacy and Big Data: An ISACA White Paper outlines critical governance and assurance considerations as well as key questions that must be answered.

“CIOs are often under pressure from the board and senior leadership to implement big data before proper risk management and controls are in place, in order to compete in the marketplace,” said Richard Chew, CISA, CISM, CGEIT, a developer of the ISACA paper and senior information security analyst at Emerald Management Group. “Big data provides an important opportunity to deliver value from information, but an enterprise will be more successful in the long run if policies and frameworks such as COBIT are put into place first.”



Tuesday, 13 August 2013 15:26

When backups are not enough

By Lee Fleming

The vital importance of developing a disaster recovery plan – and testing it regularly.

Not that long ago, to prepare for an IT disaster (either manmade or natural), hospitals and other healthcare facilities cared only about having some sort of backup system in place. They still kept patient information and medicine prescriptions on paper charts in case their IT system collapsed.

Then the concept of “disaster recovery” emerged. Hospitals became more sophisticated, relying on computerized storage. Today, it’s the high availability of IT that matters, not disaster recovery. The new motto is: “Let’s make sure we don’t have to recover.”



PHILADELPHIA – Recently a council was formed to gain a better understanding of Disaster Recovery (DR) best practices and make preparedness more cost-effective and efficient. This Disaster Recovery Preparedness (DRP) Council was created by IT business, government and academic leaders to address these issues, with its mission to increase DR Preparedness awareness and improve DR practices.

Organizations around the globe have participated in an online Disaster Recovery Preparedness Benchmark (DRPB) Survey created by the council that launched just over a month ago. This survey is designed to give business continuity, disaster recovery, compliance audit and risk management professionals a measure of their own preparedness in recovering critical IT systems running in virtual environments.



“Something is happening here, but you don't know what it is, do you, Mr Jones?”  

Bob Dylan's lyrics come to mind with the findings of Deloitte’s second Data Nation survey of consumers’ and citizens’ attitudes towards how companies and public sector organisations collect and analyse their personal data. It reveals a 10% drop in the number of people fully aware of what is being done with their information.

Peter Gooch, privacy practice leader at Deloitte, said this shows that people are: “More aware that something is happening with their data, but they don't know what that is, and there is increased nervousness.

“There is no real sign of a tipping point, where people see their own data as an asset that can be exploited. Consumers recognise their data as an asset to the extent that they want to protect it, but not to the extent of exploiting it.



The frequency and potential impacts of information security breaches are increasing. Dr. Jim Kennedy explains why and looks at what organizations can do about it.

Computer, network, and information security is based on three pillars: confidentiality, integrity, and availability. In my business as an information & cyber security, business continuity and disaster recovery consultant, I see every day how various sized and types of companies address these three areas. Some very well, some not so well, and some really poorly.

Given all the regulations and standards (like HIPAA, SOX, NERC-CIP, FISMA, PIPEDA, etc.) developed and published over the last five years, you would think that business and government would be doing much better at securing their computing systems and network infrastructures. However, based on the ongoing events prominent in the press and trade journals almost every day, this does not seem to be the case.

We continue to be informed that government agencies and private sector companies have numerous cases of data leakage: a politically correct way of saying data loss, theft, or compromise. We hear about the theft of credit card and personal information and, worst of all, of companies that have lost critical personal and health-related information despite the many security controls that were supposed to be in place. Worse yet, we hear of extremely large sums of money extorted from banks and other financial institutions, and of the fragility of our power grids and gas distribution systems worldwide.



NOAA has issued an updated Atlantic hurricane season forecast, saying that the season is shaping up to be above normal with the possibility that it could be very active. The season has already produced four named storms, with the peak of the season – mid-August through October – yet to come.

“Our confidence for an above-normal season is still high because the predicted atmospheric and oceanic conditions that are favorable for storm development have materialized,” said Gerry Bell, Ph.D., lead seasonal hurricane forecaster at NOAA’s Climate Prediction Center. “Also, two of the four named storms to-date formed in the deep tropical Atlantic, which historically is an indicator of an active season.”

The conditions in place now are similar to those that have produced many active Atlantic hurricane seasons since 1995, and include above-average Atlantic sea surface temperatures and a stronger rainy season in West Africa, which produces wind patterns that help turn storm systems there into tropical storms and hurricanes.

The updated outlook calls for a 70 percent chance of an above-normal season. Across the Atlantic Basin for the entire season – June 1 to November 30 – NOAA’s updated seasonal outlook (which includes the activity to date of tropical storms Andrea, Barry, Chantal, and Dorian) projects a 70 percent chance for each of the following ranges:

13 to 19 named storms (top winds of 39 mph or higher), including 6 to 9 hurricanes (top winds of 74 mph or higher), of which 3 to 5 could be major hurricanes (Category 3, 4 or 5; winds of at least 111 mph)

These ranges are above the 30-year seasonal averages of 12 named storms, six hurricanes and three major hurricanes.

The updated outlook is similar to the pre-season outlook issued in May, but with a reduced expectation for extreme levels of activity. Motivating this change is a decreased likelihood that La Niña will develop and bring its reduced wind shear that further strengthens the hurricane season. Other factors are the lack of hurricanes through July, more variability in the wind patterns across the tropical Atlantic Ocean and slightly lower hurricane season model predictions. In May, the outlook called for 13-20 named storms, 7-11 hurricanes and 3-6 major hurricanes.

Techworld — Data center providers have welcomed the news that Google, IBM and Nvidia will collaborate to form an open development alliance for datacentres called OpenPower.

The consortium aims to provide advanced server, networking, storage and graphics technology to give more control and flexibility to developers of next-generation, hyperscale and cloud datacentres.

IBM will license designs of the Power microprocessor architecture to other companies in the consortium including Google, as part of an effort to expand use of the architecture and reverse declines in its systems hardware business. Meanwhile, component companies will be able to make hardware that can be integrated, or attached, to the processor.



‘How do you eat an elephant’ is the age-old metaphorical business question. ‘One piece at a time’ is the answer. Big problems can be broken down into smaller ones, which can in turn be broken down again, until you get to a level where you can see your way to solutions. Project management and production assembly lines work on the same basis, although the concern is that the whole does not become less than the sum of the parts. In a recent development in IT security and business continuity, a similar divide and conquer strategy uses virtualisation to isolate individual IT activities instead of applying malware detection techniques to a system as a whole.



According to SMB Group’s 2013 Top 10 SMB Technology Market Predictions, this is the year that small to midsize businesses get serious about their social media efforts. The group’s study shows that 58 percent of SMBs used social media in 2012, but only 28 percent of them were putting a strategic plan into place. Although social media is a fairly new concept, its use requires just as much planning and attention as any other marketing campaign in order for it to be deemed successful.

All the posts and tweets and pins may seem foreign to those who are used to traditional marketing lingo, so learning which social media platforms to use and how to use them is key to an effective social media campaign.



It seems that everyone is using cloud storage these days. Even enterprise managers who say they aren’t on the cloud yet probably are—they just don’t know it. So at this point, the question is not whether to use cloud storage, but how best to integrate it into the overarching enterprise infrastructure.

Ideally, this integration will come about through the transformation of internal IT infrastructure from current silo-laden architectures to a diverse hybrid cloud. But that process will not happen overnight, and the technology to produce such a flattened, infinitely scalable data environment is not quite out of the lab yet.

In the meantime, then, what is the enterprise to do? First off, says Widen Enterprises’ Matthew Gonnering, recognize that cloud integration is already taking place on the software level, particularly as the workforce becomes more mobile. Smartphones in particular lack the storage capacity to meet personal needs, let alone professional ones, so many apps come with built-in links to Dropbox, Google Drive and other such services where data can be stored, shared and synced outside the enterprise firewall. Rather than pull up the drawbridge when it comes to external storage, enterprises would be wiser to embrace the trend by working with software developers and cloud providers to devise the proper APIs and other tools needed to keep cloud data safe, secure and available.



By Roberto L. Hylton, Senior Law Enforcement Advisor

If you have ever had the chance to speak with Administrator Fugate or listen to him discuss the role of first responders in disasters… you will know he views their work with deep appreciation.  They are an integral part of the emergency/disaster response team.  As a former Police Chief, I can attest to their hard work and dedication and agree wholeheartedly with Administrator Fugate.

In my 30-year career I have witnessed heroic efforts by my officers and colleagues, including during times of disaster.  While serving Prince George’s County, we responded to 9/11, Hurricane Isabel, snowstorms, and multiple tornadoes.  I specifically recall one tornado that impacted my county: an EF-3 that struck a nearby college campus and devastated neighborhoods and infrastructure.  Emergency services were stretched to the max.  Our officers worked relentless hours, 48 hours straight in some cases, setting up and supporting emergency response and rescue operations.  The scene was chaotic with debris and terrified college students, but the right training helped officers maintain public safety and conduct lifesaving missions.

Over the last two years I have had the distinct privilege of sharing the Administrator’s views with the law enforcement community and recently, he reflected on Law Enforcement’s Role in Responding to Disasters in an article in Police Chief Magazine:

We ask a tremendous amount of our first responders during disasters and emergencies. They are the first line of defense; they are the first helping hand extended to survivors. Every police officer knows emergencies can happen without notice. Our ability to respond to and recover from disasters is directly influenced by how well prepared our first responders are and how well we all work together as a team before, during, and after a crisis. 
The role of law enforcement in responding to a disaster is very similar to the day-to-day role of public safety and supporting the community. In preparing for a disaster, police officers trust in their training and capitalize on their knowledge of a community. Exercises portraying the situations (large- and small-scale events) help better prepare officers and allow them to fully understand the resources needed for each event and apply that information to each community’s needs. Law enforcement officials know their communities best and interact with residents on a daily basis. This knowledge gives them the ability to provide valuable situational awareness to response and recovery groups coming in to help. For example, where will there be language barriers? Does the community have unique challenges? Law enforcement can help communicate this information to the emergency management team and can offer support to other members of the team by simply being a presence in the neighborhoods.
During a disaster, police officers play a key role in many operations including: search and rescue, evacuations, door-to-door checks, and maintaining overall public safety within the community. These are critical actions that support not only their own communities but neighboring towns as well. 

As the Administrator explained in the article, the law enforcement community has two vital roles in responding to disasters:

  • As first responders during times of crisis, and
  • Providing for the safety and security of the community. 

Responding to disasters is a shared responsibility, and those in law enforcement are aware that emergency management planning is for all hazards and that it takes a team effort to keep our communities safe.  I’m proud to represent the law enforcement community at FEMA as we continue to strengthen the coordination among the entire emergency management team.

Editor’s Note: Police Chief Magazine is a publication from the International Association of Chiefs of Police and serves as the professional voice of law enforcement and supports programs and research, as well as training and other professional services for the law enforcement community.


Hurricanes and other natural disasters can bring business to a screeching halt when an office or plant is damaged or destroyed, and critical infrastructure is offline.

"When Hurricane Sandy hit the East Coast last fall, it resulted in $62 billion in damages and economic losses from businesses that were not able to operate because of flooded buildings, power blackouts and damaged communications infrastructure," said Justin Moore, CEO at Axcient, a provider of cloud solution applications to avoid downtime and data loss.

"However, there were several success stories, where firms had disaster plans in place and were able to leverage cloud-based disaster recovery and business continuity solutions to weather the storm. Dozens of IT providers in Sandy’s path used the latest technology to spin up virtual offices in the cloud to keep employees productive while waiting for primary systems to come back online or be restored," Moore explains.



A new law requiring school drills that prepare students for an attack by armed intruders is an unfortunate, but necessary, sign of the times.

The sad truth is that teachers and students, however young, must know what to do to protect themselves in such an unthinkable situation.

These drills, which have been added to the standard school fire drills, have been in place since 1999, after the fatal shootings at Columbine High School in Littleton, Colo. More states have been enacting legislation mandating such drills in the wake of the 2012 shooting at a Newtown, Conn., school that left 20 young children and six adults dead.



Network World — Devops and the cloud: They're two of the biggest buzzwords in high-tech today. But organizations embracing these trends are finding out just how closely the two are linked, and the advantages that automating IT processes can bring.

Take Rafter, a San Mateo-based company that was founded on the idea that college textbooks are really expensive. Chris Williams created a sort of Netflix-for-textbooks rental business that started by running off a couple of servers sitting in a closet. Seven years later the company has 150 employees and is helping students and bookstores manage inventory and host online book stores for colleges, in addition to the book rentals.

Rafter is continually rolling out feature enhancements to its web site, so the company has a bustling development and testing lab where new services are created. Instead of waiting for the IT shop to spin up a virtual machine with a replica of the production website, developers can provision their own compute resources themselves. Welcome to a devops shop.



Today, many regulatory standards—from HIPAA to FISMA to PCI—have created a compliance landscape that can be onerous and burdensome. And it’s likely to only get worse. Complying with the requirements set forth by all of these regulatory bodies that control the business world has a profound effect on companies, as it involves a great deal of time, cost and effort. Historically, different functions within a company—legal, IT, operations, accounting—have each owned different compliance mandates. Yet in that situation, there has been very little coordination between them, creating silos that stand in the way of efficiency, communications and organization. So, how can companies rise above the complexity created by geographical boundaries and different workflows within the business?

The answer is an approach that, once adopted, could eventually make any other way of conducting compliance obsolete. Called the “one-to-many” approach, it is a streamlined effort that involves working with constituents across the company to coordinate its different compliance obligations. In simple terms, it’s all about eliminating inefficiencies. For example, if you are answering the same question to fulfill five different mandates, why not gather the answer only once? Performing redundant work to provide the same information to its many consumers is a waste of resources. Instead, you should streamline your compliance efforts by adopting the one-to-many approach. This alleviates the impact of compliance on the company and frees up employees’ time to concentrate on other strategic initiatives.
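The one-to-many idea can be sketched in a few lines: answer each compliance question exactly once, then fan that single answer out to every mandate that asks it. The mandate names and questions below are hypothetical illustrations, not drawn from any specific regulatory text.

```python
# Each question is tagged with the mandates that require it.
QUESTION_MAP = {
    "Is data encrypted at rest?": ["HIPAA", "PCI", "FISMA"],
    "Are access logs retained for one year?": ["PCI", "FISMA"],
    "Is there a documented incident response plan?": ["HIPAA", "PCI"],
}

def build_reports(answers):
    """Gather each answer once, then reuse it for every mandate that asks."""
    reports = {}
    for question, mandates in QUESTION_MAP.items():
        answer = answers[question]          # answered exactly once
        for mandate in mandates:
            reports.setdefault(mandate, {})[question] = answer
    return reports

answers = {
    "Is data encrypted at rest?": "Yes (AES-256)",
    "Are access logs retained for one year?": "Yes",
    "Is there a documented incident response plan?": "Yes, reviewed annually",
}

reports = build_reports(answers)
# Three answers satisfy seven mandate-question pairs.
```

Three answers here populate seven mandate-question pairs; the more mandates overlap, the bigger the savings.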



We start the week with a new animation from NASA that shows the increasing risk of wildfire activity across the United States in the coming decades.

An article on the NASA website notes that with satellite and climate data, scientists have been able to track an increase in dry conditions since the 1980s.

Climate projections suggest this trend will continue, increasing the risk of fire in the Great Plains and Upper Midwest by the end of the 21st Century, according to NASA.



Risk managers around the world appear to be closely aligned when it comes to top concerns for their organization, according to findings of two studies.

The first was the preliminary results of Accenture’s Global Risk Management Research study, due in September. Executives from 446 organizations across eight industries were asked what they see as the biggest risks over the next two years. Out of a list of 10 “external pressures,” legal risks topped the chart at 62%. Second on the list were business risks at 52%, and third were regulatory requirements at 49%.

There was a tie at 46% between the fifth, sixth and seventh concerns, which were credit risks, operational risks and strategic risks.



Monday, 12 August 2013 18:21

Protect the Data in the Cloud Castle

We’ve all read medieval stories about castles, knights, traitors and thieves. Stories about villains storming the walls and castle guards surrounding the moat have dotted our memories since we were children. Each story has a prize – maybe the queen or treasure. Each story has a battle over that prize, resulting in a war of good versus evil with a potential victor.

When we read these stories, we know that good triumphs over evil. However, real life doesn’t always mirror the fairy tales we grew up hearing.

While it’s extremely important to protect the castle with the right moat, drawbridge, guards and weapons, the castle itself should not be the only thing that is secured.  In all of the hustle and bustle of protecting the castle infrastructure, the most important thing is often forgotten: the prize. Thieves or traitors don’t always care about the castle itself – they only care about what’s inside. The same principle applies to your data in the cloud. Hackers, thieves and snoops aren’t interested in the infrastructure; they’re only interested in the one thing they can use: your data.



Steve’s Flower Shop rents a commercial space in a downtown area. Steve’s income is derived primarily from purchasing wholesale live flowers, creating arrangements, and selling those arrangements at retail. The shop has coolers to preserve the arrangements, an area to create them, and a retail space for customers. Rent and utilities are Steve’s highest expenses. To save costs and increase profits, Steve purchases his wholesale flowers in bulk through a local distributor under a long-term contract. Let’s say the shop grosses $500 per day. Of that amount, $350 goes to utilities, rent, wholesale product, supplies, and so on. Steve nets $150 per day, which is acceptable to him; this is his retirement business after years as an overworked business lawyer.
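Steve's daily numbers make for a simple worked example of what a closure costs. This is a sketch only: the continuing-expense figure is a hypothetical assumption (rent and some utilities typically keep accruing while a shop is shut), not something stated in the story.

```python
DAILY_REVENUE = 500
DAILY_EXPENSES = 350            # rent, utilities, wholesale product, supplies
DAILY_PROFIT = DAILY_REVENUE - DAILY_EXPENSES   # $150 per day

CONTINUING_EXPENSES = 100       # hypothetical: costs still owed while closed

def closure_loss(days):
    """Lost profit plus the expenses that keep accruing during the closure."""
    return days * (DAILY_PROFIT + CONTINUING_EXPENSES)

print(f"30-day closure loss: ${closure_loss(30):,}")
# → 30-day closure loss: $7,500
```

Numbers like these are exactly what a business-interruption insurance claim is built from, which is where Steve's story goes next.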

Then… the storm comes. Steve’s shop is wiped out. The commercial space is uninhabitable. The coolers are gone. A shipment bound for a customer is gone and the stock is gone. Steve submits a claim to his property insurer. Within a few days, the insurer has put Steve in touch with a contractor and has cut a check to Steve to replace the equipment.



By Larry Lang

Statistics have shown that most small to mid-sized businesses will experience at least one instance of system downtime a year. Once a year doesn't seem like much, but consider this: Aberdeen Group estimates that an hour of downtime costs a mid-sized business an average of $74,000. Then factor in results from a Harris Interactive survey, which found that IT managers estimate 30 hours on average for recovery.

Now that the cost has been put into perspective, are you sure your business can bounce back from even one instance of system downtime each year? Has your disaster recovery system been through regular real-world tests to find out? Unfortunately, only a small minority can answer this last question in the affirmative: a 2011 survey found that only 28 percent of small to mid-sized businesses had tested their backups at all.
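Putting the quoted figures together is simple arithmetic: one incident per year, at Aberdeen Group's estimated $74,000 per hour, over the 30-hour average recovery window from the Harris Interactive survey.

```python
COST_PER_HOUR = 74_000      # Aberdeen Group: average downtime cost, mid-sized business
RECOVERY_HOURS = 30         # Harris Interactive: IT managers' average recovery estimate
INCIDENTS_PER_YEAR = 1      # "at least one instance of system downtime a year"

annual_exposure = COST_PER_HOUR * RECOVERY_HOURS * INCIDENTS_PER_YEAR
print(f"Estimated annual downtime exposure: ${annual_exposure:,}")
# → Estimated annual downtime exposure: $2,220,000
```

Over two million dollars of exposure from a single untested assumption is the context for the testing question above.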



LINCROFT, N.J. -- FEMA’s Hazard Mitigation Grant Program provides important assistance to local, state and tribal governments following a major disaster declaration, both speeding recovery and protecting life and property from future disasters.

With the Hazard Mitigation Grant Program, the Federal Emergency Management Agency provides funds to the state to enable mitigation measures to be implemented during recovery from a disaster.

The Hazard Mitigation Grant program can be used to fund projects to protect public or private property as long as the project fits within state and local government mitigation strategies. Funds are sent to the state for distribution.

Examples of projects include:

  • Acquiring and relocating structures from hazard-prone areas, such as the $29.5 million acquisition of flood-prone properties in Sayreville.
  • Retrofitting structures to protect them from floods, high winds, earthquakes or other natural hazards.
  • Constructing certain types of minor and localized flood control projects.
  • Constructing safe rooms inside schools or other buildings in tornado-prone areas.
  • Helping state, local or tribal governments develop mitigation plans.

Federal funding under FEMA’s Hazard Mitigation Grant Program is made available at the request of a state’s governor following the declaration of a major disaster.

Hazard Mitigation Grant Program funding is allocated using a sliding scale formula based on the percentage of funds spent on FEMA’s Public and Individual Assistance Programs for each declared major disaster.

FEMA's mission is to support our citizens and first responders to ensure that as a nation we work together to build, sustain, and improve our capability to prepare for, protect against, respond to, recover from, and mitigate all hazards.

Follow FEMA online at www.fema.gov/blog, www.twitter.com/fema, www.facebook.com/fema, and www.youtube.com/fema. Also, follow Administrator Craig Fugate's activities at www.twitter.com/craigatfema.


CIO — BYOD is a reality, and we all have to deal with it.

Most of us are used to well-behaved devices such as laptops, netbooks, iPhones and iPads. There are enough mobile device management products to handle remote wipes and other strategies to lock down these devices if they are lost or stolen.

But when the device doesn't have a disk, things get a little dicey. Flash RAM that's soldered into a device can't be removed practically, and if the device is broken, that memory can't be erased. It gets more fun with Android tablets; the hardware may not be all that long-lived, and the myriad software configurations can be hard to manage in the wild.



Usage-based payment systems are becoming increasingly common, but a recent variation in disaster recovery has an interesting twist. A new pricing model from a company called Asigra is based not on how much data an organisation backs up, but how much it restores. In particular, a ‘recovery performance score’ determines the amount of money a customer will pay. The Asigra system emphasises value rather than cost: the value is in the data restored, rather than the data saved. Is a similar pricing model likely to spread to related services such as DraaS (Disaster Recovery as a Service)?
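Asigra's actual formula is not described in the article, but the pricing idea can be sketched hypothetically: bill on data restored rather than data backed up, with a "recovery performance score" scaling the charge. The rate and score values below are invented for illustration.

```python
def monthly_bill(gb_restored, rate_per_gb=0.50, performance_score=1.0):
    """Charge on recovery volume; a score below 1.0 models a discount
    earned by restoring less (i.e., recovering more efficiently)."""
    return gb_restored * rate_per_gb * performance_score

# A customer who restored 200 GB at a hypothetical $0.50/GB:
print(monthly_bill(200))                          # 100.0
# The same restore volume with a 0.8 performance score:
print(monthly_bill(200, performance_score=0.8))   # 80.0
```

The incentive flips compared with capacity-based pricing: the customer is rewarded for needing less recovery, and the vendor is paid for the value actually delivered.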



Thursday, 08 August 2013 18:47

Cloud: Responsibility and Accountability

For years, the IT industry has been experiencing growth in outsourcing. Organizations large and small have looked to capitalize on the promise of lower operating costs. Witnessing this trend over time has confirmed something I have long held as truth: users have a responsibility to be accountable. Accountable for the service they have contracted for, the information provided, knowledge of who owns the information, recoverability, usage, and measurement against established criteria, to name a few. Cloud is no different. I like to say, “You cannot manage that which you do not measure, and you cannot measure that which you do not know about.” Nonetheless, countless organizations contract for a service at one level and then demand service at levels above what they contracted for.

When an organization outsources “backup”, for instance, the act of recovery must have established objectives (both time and point). This may come as no surprise to people in the business, but few organizations have prioritized which applications are mission critical and need different recovery objectives than, say, the holiday office party logistics. While some may have done this, too many do not have an application matrix that outlines up-line and down-line dependencies. Beyond hardware failure, the number one reason a “backed up” system cannot be restored is lack of synchronization with the application’s up-line and down-line dependencies. So why does the yelling and screaming commence once a failure occurs, when the information provided was incomplete, inaccurate, or simply missing with regard to the actual criteria for success? The answer seems to be a lack of responsibility and accountability. The user no longer feels any responsibility or accountability for the “backup” once they have contracted for it, even though they have not contracted for the level of service they are demanding, nor have they done the due diligence to manage the contracted service.
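An application dependency matrix can be as simple as a directed graph of up-line dependencies; restoring in topological order then guarantees every system comes back only after the systems it depends on. The application names below are hypothetical, and this is a minimal sketch, not a full recovery plan.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# app -> the up-line systems it depends on
DEPENDENCIES = {
    "billing":   {"database", "auth"},
    "auth":      {"database"},
    "reporting": {"billing"},
    "database":  set(),
}

# static_order() emits each node only after all of its predecessors,
# i.e., a valid restore sequence for the whole dependency matrix.
restore_order = list(TopologicalSorter(DEPENDENCIES).static_order())
print(restore_order)   # database first, reporting last
```

A real matrix would also carry each application's recovery time and recovery point objectives, so the restore sequence can be prioritized by criticality and not just by dependency.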



While three of the major hurricane forecasters have trimmed their predictions for the 2013 Atlantic hurricane season by a smidgen, the season as a whole is still expected to be above average, as is the chance of a major hurricane making U.S. landfall.

Bear in mind that to date, the 2013 season has seen four named storms (Andrea, Barry, Chantal and Dorian), none of which reached hurricane status.

Here’s how the revised forecasts stack up:



Thursday, 08 August 2013 18:45


By Meredith Cherney

When you ask someone what the most important thing to have on hand for a hurricane is, the common answers include food, water, flashlights, batteries, or a radio.  As I read through my student surveys however, I found a different set of answers.  Lifejackets.  Boats.  Buckets.  Axes.

Growing up in New Orleans fosters a unique hurricane perspective. When I stepped into that classroom to teach 9- to 12-year-old students about hurricanes and preparedness, I wasn’t sure what to expect.  What do they know about hurricanes?  Do they understand that some evacuations are mandatory?  Has their experience with hurricanes fostered fear or resilience?

I work for Evacuteer.org, a private non-profit commissioned by the New Orleans Office of Homeland Security and Emergency Preparedness to help with the City Assisted Evacuation (CAE) plan.  Beyond our role in emergency events, we also seek to inform the public about the CAE and foster community preparedness. 

Our EvacuKids program targets a younger demographic.  We’ve already quadrupled our reach since 2012, from 30 to 120 students. Complete with a new curriculum and corresponding science experiments and activities, we not only teach students about hurricanes, but also work to improve literacy, writing, and critical thinking skills. 

There are four modules: disasters, hurricanes, prepare, and evacuate.  Each week builds upon the previous week, starting with the science of disasters and how hurricanes form to preparing your home for a storm and finding a safe place to stay in the event of a hurricane. 

In addition to academic lessons, we also talk to students about their experience with hurricanes, what they did, and how they felt.  Many students express fear and uncertainty when recalling their experience and as a class we discuss coping mechanisms to help them deal with their feelings.  Additionally, learning how hurricanes form and why they are common in our area can alleviate anxieties and foster a greater sense of understanding, preparedness, and even excitement in students. 

EvacuKids is tailored to the specific needs of the children, those whose families have transportation out of the city and those without it.  EvacuKids is a fantastic opportunity to make a meaningful, sustainable impact on a generation that will someday lead New Orleans in a positive direction.


Today is our 40th wedding anniversary, so naturally it leads me to think about what love, marriage and life together have to do with crisis communication. A lot, I think. And not just because every marriage has plenty of crises, in which communication, or the lack of it, is often the major cause.

Though some dispute the statistics, about half of marriages don’t survive, which makes 40 years very much worth celebrating. I’m going to suggest that the primary reasons why some do are very applicable to crisis communication, and for that matter any relationship.

Crisis communication, despite what too many think, is primarily about relationships: the all-important relationships between your company or organization and its most important stakeholders. Trust and respect are key elements of those relationships. What customer will stick with a company, what investor will maintain an investment, what donor will contribute, what employee will eagerly produce without those two critical ingredients? Crises are crises mostly because they threaten the trust and respect that important stakeholders hold in the leaders and the organization. That’s why whether an organization survives a crisis depends primarily on how key stakeholders view the character of its leaders: are they worthy of continued trust and respect?



Hello, I’m David Mundie, a CERT cybersecurity researcher. This post is about the research CERT is doing on the unintentional insider threat. Organizations often suffer from individuals who have no ill will or malicious motivation, but whose actions cause harm. The CERT Insider Threat Center conducts work, sponsored by the Department of Homeland Security’s Federal Network Resiliency Division, that examines such cases. We call this category of individuals the “unintentional insider threat” (UIT).

This research includes

  • creating a definition of UIT
  • collecting and reviewing over 60 cases of UIT
  • analyzing contributing factors and observables in those cases
  • recommending preliminary ways to mitigate unintentional insider threats

For the purposes of our research, the team built a working definition of an unintentional insider threat:

An unintentional insider threat is (1) a current or former employee, contractor, or business partner (2) who has or had authorized access to an organization’s network, system, or data and who, through (3) their action/inaction without malicious intent, (4) negatively affects the confidentiality, integrity, or availability of the organization’s information or information systems.

Our preliminary study of the UIT problem identified a number of contributing factors and mitigation strategies. The malicious insider threat and the UIT share many contributing factors that relate to broad areas in security practice, organizational processes, management practices, security culture, etc. However, there are significant differences. Human error plays a major role in UIT. Countermeasures and mitigations to decrease UIT incidents should include strategies for:



CIO — IT walks a fine line between securing the organization and giving people the tools they need to get the job done. Every day companies move sensitive data around and IT is in charge of securing that data, but what about the little things that tend to fall through the cracks?

According to data from several recent surveys, there are a number of things your employees could be inadvertently doing that put your company's sensitive data and information at risk.

A survey done recently by IPSwitch, an FTP software company, includes some of the reasons employees are putting sensitive data into places where IT has no control over what happens to it:



CSO — A security researcher has shown that hackers, including an infamous group from China, are trying to break into the control systems tied to water supplies in the U.S. and other countries.

Last December, a decoy water control system disguised as belonging to a U.S. municipality attracted the attention of a hacking group tied to the Chinese military, according to Trend Micro researcher Kyle Wilhoit. A dozen similar traps set up in eight countries lured a total of 74 attacks between March and June of this year.

Wilhoit's work, presented last week at the Black Hat conference in Las Vegas, is important because it helps build awareness that the threat of a cyberattack against critical infrastructure is real, security experts said Tuesday.



KANSAS CITY, Mo. – With several areas throughout Kansas and Missouri experiencing bouts of late-summer flooding, the Federal Emergency Management Agency (FEMA) is urging residents to stay informed about the potential hazards of flooding.

Floods, especially flash floods, kill more people each year than any other weather phenomenon. This recent spate of severe weather-related events across the Midwestern states serves as a pointed reminder of just how dangerous floods can be and how important it is to stay abreast of weather warnings, understand flood terms, and take action by monitoring, listening, preparing and acting accordingly.

Beth Freeman, Regional Administrator for FEMA Region VII, urges residents to be constantly aware of their environment and any potential for flooding. "There's no doubt that when people are aware of the dangers and power of flooding, they can take measures to lessen the exposure to danger for themselves and family members," Freeman said. "When you're driving and you see the road ahead is flooded, be safe. It's best to 'turn around, don't drown.' FEMA is monitoring the situation and is on standby to help states if assistance is requested."

While floods are the most common hazard in the United States, not all floods are alike. Floods typically occur when too much rain falls or snow melts too quickly. While some floods develop slowly, flash floods develop suddenly. 

One of the most dangerous elements of a flood is floodwaters covering roadways, and motorists are urged to never attempt driving through them.  About 60 percent of all flood deaths result from people trying to cross flooded roads in vehicles when the moving water sweeps them away.

While flood risks can indeed be a formidable threat, there are simple steps citizens can take today to reduce their risk to all types of floods. 

If a flood is likely in your area, you should:

  • Listen to your radio or television for information.
  • Be aware that flash flooding can occur. If there is any possibility of a flash flood that could affect you, move immediately to higher ground. Do not wait for instructions to move.
  • Be aware of streams, drainage channels, canyons, and other areas known to flood suddenly. Flash floods can occur in these areas with or without such typical warnings as rain clouds or heavy rain.

If you must prepare to evacuate, you should:

  • Secure your home. If you have time, bring in outdoor furniture. Move essential items to an upper floor.
  • Turn off utilities at the main switches or valves if instructed to do so. Unplug electrical appliances. Do not touch electrical equipment if you are wet or standing in water.
  • Take essential documents (http://www.ready.gov/evacuating-yourself-and-your-family)

If you must leave your home, remember these evacuation tips:

  • Do not walk through moving water. Six inches of moving water can make you fall. If you have to walk in water, walk in areas where the water is not moving. Use a pole or stick to make sure the ground continues in front of you.
  • Do not drive into flooded areas. If floodwaters rise around your car, abandon the car and move to higher ground if you can do so safely. You and your vehicle can be quickly swept away.
  • Six inches of water will reach the bottom of most passenger cars causing loss of control and possible stalling.
  • A foot of water will float many vehicles.
  • Two feet of rushing water can carry away most vehicles including sport utility vehicles (SUVs) and pick-ups.

Additional tips to consider:

  • United Way’s 2-1-1 is a helpful resource before, during and after disasters. Keeping this number and an up-to-date family communication plan handy is a must-do when preparing for emergencies.
  • Keep emergency supplies on hand, such as non-perishable food, medicine, maps, a flashlight and first-aid kit.
  • Use extreme caution when returning to flood damaged homes or businesses.

Become familiar with the terms that are used to identify flooding hazards:

  • Flood Watch: Flooding is possible. Tune in to NOAA Weather Radio, commercial radio, or television for information.
  • Flood Warning: Flooding is occurring or will occur soon; if advised to evacuate, do so immediately.
  • Flash Flood Watch: Rapid rises on streams and rivers are possible. Be prepared to move to higher ground; listen to NOAA Weather Radio, commercial radio, or television for information.
  • Flash Flood Warning: Rapid rises on streams and rivers are occurring; seek higher ground on foot immediately.

The National Weather Service is the official source for weather watches and warnings.

For more information on flood safety tips and information, visit www.ready.gov/floods or the Spanish-language web site www.listo.gov.

For information on how to obtain a flood insurance policy, visit www.floodsmart.gov.

Follow FEMA online at www.twitter.com/fema, www.facebook.com/fema, and www.youtube.com/fema.  Find regional updates from FEMA Region VII at www.twitter.com/femaregion7. The social media links provided are for reference only. FEMA does not endorse any non-government websites, companies or applications.


In today’s enterprise, data is the key. It enables a business to make its best decisions and efficiently manage its business processes.

Data is demanded by many departments and must be gathered, sorted, cleaned, managed, analyzed and protected. Because data is often gathered from applications, it likely falls in the realm of IT, where business intelligence and analytics systems are managed. However, what many IT organizations lack is a framework for data governance—a solid set of processes and policies that dictate the way data is supervised and preserved.

The book “Data Governance: Creating Value from Information Assets” provides a detailed look into information governance; it begins with a chapter on how data governance plays a role in an enterprise, moves through management of metadata, and then explains how to operationalize data quality. Other chapters include:



Wednesday, 07 August 2013 15:47

The Road to the Hybrid Cloud Runs Through PaaS

Most enterprises are far enough into the cloud deployment process to understand that there is more than one type of cloud. At the moment, many organizations are content to spin up a few hosted resources to gain extra storage or run a few key applications. But as cloud strategies become more refined, the style of cloud implemented on both private and public resources and the infrastructure that supports them can have a dramatic impact on future data objectives.

As I’ve pointed out, hybrid architectures are only as good as the private cloud allows them to be, and so far only a handful of organizations are pursuing what leading experts deem to be a true private cloud strategy. Part of this is because the cloud is still an ill-defined concept, but legacy infrastructure can be a major drag as well—particularly when it consists primarily of silo-based, bare-metal architecture. So clearly, the first step in any coordinated cloud strategy is to implement virtual and software-defined infrastructure to the broadest extent possible.



Wednesday, 07 August 2013 15:45

IT Evolution

We really need to transform the makeup of the American IT workforce. Instead of teaching COBOL, Pascal, C++, and other elements of technology, we need to teach how to align business and IT to take advantage of innovation and creative thinking. The way to align business and IT is to focus on the customer experience and the value customers derive from that experience.

Instead of IT being a separate business unit, IT needs to be integrated into every business unit. I am by no means advocating breaking IT up into copies of itself contained within each business unit. I am advocating that IT bring its knowledge of the business into each unit's strategic planning, to help enable their people and processes in a cost-effective, simple, agile, and rigorous way. If IT establishes strategy alongside the business, then the execution and results will match. This is the opposite of the way it is done today, where the business and each unit go off to develop strategy based upon a vision that IT is not a part of. Likewise, IT, more often than not, sequesters itself and develops its own strategy and execution plan based upon a limited view of the vision of the organization. I liken this to picking the route for a vacation before picking the destination.



PC World — For small businesses today, there's nothing that can't be done in the cloud. You could plunk down your cash for Basecamp, Yammer, and Google Docs like everyone else, but alternatives to these stalwarts abound. For something that does more, costs less, or both, check out these six Web-based tools, categorized by their primary functionality.


General collaboration: Podio

Podio may still fly under the radar of such behemoths as Basecamp, but it's rapidly emerging as the go-to collaboration tool for a new generation of knowledge workers. Originally a Danish startup, Podio was acquired by Citrix last year, and the new features keep on coming.

Designed (like most collaboration systems) to eliminate excessive emailing, the structure is relatively simple: You invite employees into Podio's internal communication network, then create any number of "workspaces" in which they can collaborate. You can admit outsiders on a workspace-by-workspace basis, keeping them out of the broader employee network.



For homeland security professionals to be successful in their field, it is critical to stay abreast of prevailing trends within the industry. Colorado Technical University recently sponsored a mock exercise, hosted by the Colorado Emergency Preparedness Partnership (CEPP) and attended by personnel from private and public sector institutions, to help prepare for a cyber-attack.

During the tabletop exercise, an expert panel addressed propagation and impacts of a cyber-attack from domestic and foreign organizations. This simulated exercise was part of a continued series of emergency preparedness events led by CEPP and this event’s sponsors: Western Cyber Exchange, CTU and the Canadian Consulate.

The cyber-attack scenario began in southern Colorado and spread from local jurisdictions to a national threat, and ultimately a global one. Families, businesses, communities, government services and the critical infrastructure we depend on for our everyday needs suffered the consequences of the simulated attack. The expert panel, consisting of private and public sector members from the city of Colorado Springs, the telecommunications and energy sectors, and the state, federal and Canadian governments, addressed the evolving scenario.



One flood victim in Canmore says he has concerns after learning the province's disaster recovery program is being run by a private company.

Gus Curtis' yard was washed away by Cougar Creek and his home's foundation is exposed and cracked. Until recently Curtis assumed he was working with a government employee on a recovery plan.

In fact, Edmonton-based Landlink Consulting has been contracted to process flood claims and to calculate and distribute payments.

Curtis said an employee shut him down after he asked a few questions. "So I said ‘who is Landlink?’ He paused and said Landlink is a company hired to administer the fund,” Curtis said.



Disasters happen. And though business and IT leaders like you can’t prevent them, you can curtail the losses and costs that disasters cause — by ensuring that Business Continuity and Disaster Recovery (BC/DR) plans are in place at your organization.

Hurricane season, flooding, tornadoes and other severe weather threats remind us once again just how important it is to be prepared.

For instance, in the event of a disaster, would your IT operations be back to business with the help of data centers that remain running amid the storm, transitioning from generators to utility power in the days following? We explore this possibility further in our recent Forbes.com article “Does Your Data Center Have a Disaster Plan?” with strategies that protect buildings, systems, equipment, and personnel — and also have contingencies for the loss of any or all of them.



Tuesday, 06 August 2013 17:52

Training children in emergency preparedness

In July 2012, the Federal Emergency Management Agency (FEMA), through Administrator Craig Fugate, announced the following regarding youth disaster preparedness: “Youth have a unique ability to influence their peers and families to be more resilient, and children play an important role in disaster preparedness, during and after a crisis.”

According to FEMA, studies have shown “those households with schoolchildren who brought home preparedness materials are more likely to be prepared on a range of preparedness [measures] than households with schoolchildren who did not bring home preparedness materials.”

It is reported that 70% of households receiving preparedness information from their children have an emergency response plan they have discussed with family members compared to the national average of 45%. It appears the best champions for disaster preparedness are our children.

Some training can start at home before children are old enough to attend school, as soon as they can absorb information and comprehend what to do with it. Here are some things you can teach your children to get them started down the path of emergency preparedness:



Tuesday, 06 August 2013 17:50

Lost in the privacy landscape

Australia’s privacy and data protection laws are hard to explain and often poorly understood. The first challenge is to explain that the Australian Privacy Commissioner sits in the Office of the Australian Information Commissioner (OAIC) and applies laws that the Australian parliament has misleadingly called ‘principles’.

The second challenge is describing how to read principles as laws and fit them together with other provisions in the Privacy Act that clearly are drafted as laws.

And then there’s the difficulty of trying to interpret these provisions when dealing with novel issues such as cross-border cloud deployment and access to personal information held in another jurisdiction (or jurisdictions unknown), geo-tracking of devices, data warehouses, virtualised servers, big data and customer data analytics.



With the increase in the use of online services for government transactions, datacentres are a key focus of the government’s green IT strategy and the Green ICT Delivery Unit (GDU), according to its report.  

Over 80% of HMRC’s tax returns are submitted via the internet, underlining the growing importance of public sector datacentres.

As a result, the Department for Food, Environment and Rural Affairs (Defra) is setting out best practice guidelines for public sector organisations to procure energy efficient datacentre and cloud hosting services. The guidance has been discussed with Intellect, the UK industry body and there have also been discussions with the European Commission (EC) via its EU-wide Green Public Procurement process.

The Greening Government: ICT Annual Report 2013, by Jennifer Rigby, chair of the GDU, and John Taylor, SRO for Green ICT and CIO at the MoD, also praised the progress of government CIOs and IT staff in implementing green IT strategies.



Lancope has released a survey indicating that many enterprises possess an unrealistic confidence surrounding the security of their networks.

According to the survey, more than 65 percent of IT/security professionals did not think, or were unsure whether, they had experienced any security incidents within the last 12-18 months.

According to Lancope’s director of security research, Tom Cross, this scenario is not likely. “Any system you connect to the Internet is going to be targeted by attackers very quickly thereafter,” he said. “I would assert that if you’re unsure whether or not your organization has had a security incident, the chances are very high that the answer is yes.”

The survey also revealed that 38 percent believe recent security incidents had no impact on their organization. According to Cross, “even the most basic malware infection has some financial cost to the organization, even if it’s just the cost to clean infected machines. Not to mention the additional serious consequences that can result from a breach, including data loss, customer distrust, regulatory fines and many others.”



A crisis in 2013 only vaguely resembles a crisis of 15 years ago. Today, social media can be both a curse and a blessing in an emergency. Managers must understand that with the power of real time comes a huge responsibility to learn how to use the media responsibly. One piece of misinformation posted on social media during a crisis can start a cascade of panic that is almost impossible to stop.

On July 2, the government of India released the National Cyber Security Policy 2013. This policy extends to a spectrum of ICT users and providers, including home users, SMEs, large enterprises, government and non-government entities. The policy aims to serve as an umbrella framework for defining and guiding the actions related to the security of cyberspace. The policy has been much delayed but is now released amid reports of snooping by the US globally - and ever-increasing threats to India as a country.

The policy defines 14 diverse objectives that provide an overview of the government’s approach to the protection of cyberspace in the country. A few objectives that will have a positive impact on S&R professionals in India caught my attention:



Today’s “social age” has brought many changes to the corporate world and increased the competitive threats enterprises have to deal with on an ongoing basis. Traditionally, competition has been upfront and direct with open head-to-head strategies to win customers and market share. But as the world approaches a complete “digital state” the competitive tactics against corporations have never been more threatening or aggressive.

As disruptive, non-traditional business competitors emerge, many of these organizations are adopting tactics that would typically be “off limits” to traditional corporations, including partnering with activist groups to attack and disrupt the market leader, damaging its reputation and eroding its financial state.

Many enterprises are no longer simply looking to compete, but to protect their operations against the disruptive, aggressive forces these non-traditional competitors are partnering with. To combat these unconventional tactics, traditional corporations are turning to real-time advanced social intelligence for deep, multidimensional insight into those tactics and actions.



Tuesday, 06 August 2013 17:20

Terrorism Risk and Insurers

Ratings agency Fitch has warned that failure to renew the federally backed Terrorism Risk Insurance Program could have a significant impact on the availability and pricing of workers compensation and commercial property insurance coverage.

Insurer credit ratings and the commercial mortgage backed securities (CMBS) market would also be affected.

The report comes as at least 19 U.S. embassies and consulates in the Middle East and North Africa remain closed through the week after the State Department issued a global travel alert to U.S. citizens due to potential terrorist threats.

Fitch notes that workers compensation insurers could be particularly vulnerable to large losses if an extreme terrorist event takes place without the federal terrorism reinsurance program in place:



Tornadoes, hurricanes, wildfires or other natural disasters can bring your business to a screeching halt when the office is damaged or destroyed, and critical infrastructure is offline. Axcient, the leading cloud solution for eliminating application downtime and data loss, today outlined 10 disaster preparedness tips that can help your company prepare and respond to disasters, while keeping the business up-and-running and maintaining vital revenue.

“When Hurricane Sandy hit the East Coast last Fall, it resulted in $62B in damages and economic losses from businesses that were not able to operate because of flooded buildings, power blackouts and damaged communications infrastructure,” said Justin Moore, CEO at Axcient. “However, there were several success stories, where firms had disaster plans in place and were able to leverage cloud-based disaster recovery and business continuity solutions to weather the storm. Dozens of IT providers in Sandy’s path used the latest technology to spin up virtual offices in the cloud to keep employees productive while waiting for primary systems to come back online or be restored.”

These businesses had a clear emergency preparedness plan in place for their personnel and relied on technologies that can deliver real business protection exactly when it’s needed. 

Looking at examples of what enterprises did to successfully weather Hurricane Sandy and other natural disasters, Axcient developed the following 10 Disaster Preparedness Tips for Businesses:



Monday, 05 August 2013 15:11

Instilling Ethics in a Compliance Program

I continue to be astounded by one simple fact (candidly there are others) – companies do not understand that creating and maintaining an ethical culture improves bottom-line financial performance.  A commitment to ethics as an enhancement to an existing compliance program not only improves performance of the compliance program, but improves corporate profitability and long-term shareholder value.

From my days as a history major, I am reminded of the Luddites and their rejection of technology. To me, the issue is remarkably similar – companies ignore ethics as a driver of compliance and, more importantly, fail to recognize the importance of ethics as a means to ensure business success and long-term viability.

There is an abundance of research proving that an ethical culture improves financial performance.  The link appears very logical and intuitive and research confirms the improvement to the bottom line.



Monday, 05 August 2013 15:09

Business Continuity and the use of Robots

For most organisations, business continuity issues have more to do with breakdowns in everyday processes than with incidents in a nuclear reactor. However, events like the most recent catastrophe in Japan have catalysed discussions on the potential for using robots for recovery and continuity – discussions that could progressively include even ‘run of the mill’ incidents. The high radioactivity levels of the Fukushima reactor systems prevented human beings from shutting them off early enough to minimise damage. Correctly designed robots, on the other hand, might have been able to do this. However, while the use of robots in industrial applications and in space exploration is well-known, emergency situations require a different approach to robot programming.

The need to be able to issue simple, natural commands according to the need at hand, and the need for robots to respond to these commands are defining characteristics of these critical situations. Current pre-defined, pre-programmed robot activities do not allow for this. In tape archives for instance, robots organise tape cartridge picking, mounting, and storing, but do not step outside the narrow limits of an orderly process. Such robots are not designed to respond to abnormal situations such as fire or flooding. Recovery robots on the other hand would be expected to handle such events and understand spontaneous commands such as ‘shut the door’ or ‘go down the stairs’.
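The contrast between fixed, pre-programmed routines and responding to spontaneous commands can be sketched as a toy dispatcher (the actions and keyword table below are hypothetical illustrations, not from the article):

```python
def shut_door() -> str:
    # Placeholder for a real actuator routine.
    return "actuating door mechanism"

def descend_stairs() -> str:
    return "engaging stair-descent gait"

# A recovery robot must map free-form phrases onto actions; here that
# mapping is a hand-built keyword table, the simplest possible approach.
COMMAND_TABLE = {
    ("shut", "door"): shut_door,
    ("close", "door"): shut_door,
    ("go", "stairs"): descend_stairs,
    ("down", "stairs"): descend_stairs,
}

def dispatch(command: str) -> str:
    """Match a spontaneous natural-language command against known keyword pairs."""
    words = set(command.lower().split())
    for keywords, action in COMMAND_TABLE.items():
        if all(k in words for k in keywords):
            return action()
    return "command not understood; awaiting clarification"

print(dispatch("shut the door"))       # actuating door mechanism
print(dispatch("go down the stairs"))  # engaging stair-descent gait
```

A tape-library robot never needs the fallback branch; a recovery robot operating in an abnormal situation needs exactly that ability to recognise, and ask about, commands outside its pre-defined repertoire.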



The all-Flash data center—it used to be considered something of a pipe dream. While solid-state storage has its uses, both costs and the complexity of modern data environments seem to demand mixed storage architectures for the time being. But as costs come down, more storage experts are looking at all-Flash, or perhaps Flash-dominant storage environments.

Storage has always been the laggard in the data-handling relay race, but recently the disparity has become stark. As virtual and cloud environments shift the burden away from processing power and even storage capacity, speed has become the determining factor in high-performance environments. According to Kaminario, more than 90 percent of the performance issues afflicting leading applications these days can be traced to storage. Whether it is web-facing OLTP or Big Data OLAP batches, the I/O culprit is almost always poor random read/write performance in legacy HDD arrays. The results were largely the same across Oracle, SQL, DB2, MySQL and even unstructured data sets.



Instead of the teacher, I was the student.  I was “grasshopper”.

Recently, I had the opportunity to attend a Dale Carnegie workshop that my employer hosted as part of our employee development program.  The course was titled “How to Say What You Mean to Get the Results That You Want”.

I was pleased (and confident) that throughout the class we talked about several topics that we also cover in the Community Emergency Response Team (CERT) Train-the-Trainer curriculum I’ve been teaching for the past few years.

I thought I’d share with you some of the concepts, suggestions, and thoughts that I left the class with.



Monday, 05 August 2013 14:49

What We're Watching: 8/2/13

Posted by: Dan Watson, Press Secretary, Public Affairs 

At the end of each week, we post a "What We’re Watching" blog as we look ahead to the weekend and recap events from the week. We encourage you to share it with your friends and family, and have a safe weekend.

Photos of the Week
Here are a few of our favorite photos from the past week. Check out our Photo Library for more.

Moore, Okla., July 29, 2013 -- The American flag stands as a sign of strength in the foreground of the devastation left in the wake of the May 20th EF-5 tornado.

Old Bridge, N.J., July 27, 2013 -- FEMA Mitigation specialist Jenai Jordan and External Affairs representative Susan Langhoff provide information on mitigating disasters like Hurricane Sandy at the Home Depot Hurricane Workshop in Old Bridge, New Jersey.

White River, Mich., July 30, 2013 -- Muskegon County Road Maintenance Superintendent Laurie Peterson views this very dangerous road washout. FEMA Public Assistance and Hazard Mitigation Grants become available following application and inspection and cover a significant portion of the cost of repair.

Weather Outlook
According to the National Weather Service, no severe weather threats are expected this weekend. However, weather conditions can change rapidly, so we encourage everyone to monitor their local conditions online at www.weather.gov or on their mobile device at http://mobile.weather.gov.

While you’re out and about this weekend, take a few moments to make sure your family’s emergency kit is fully stocked as we head into the peak of hurricane season.  Last week we saw two Tropical Storms -- Dorian in the Caribbean and Flossie in the Pacific. These storms are great reminders that the time to prepare for tropical weather is now. Visit Ready.gov for a list of items that should be in your emergency kit and for safety tips on what to do before, during and after a hurricane.

Public-Private Partnership Conference
This week the Department of Homeland Security and FEMA, in association with the United States Northern Command and the American Red Cross, hosted the “Building Resilience through Public-Private Partnerships” conference.

The conference highlighted successful public-private partnerships, identified coordination gaps between public-private organizations, and engaged both sectors to determine how to further promote teamwork to make our communities and nation more resilient.

Here are a few tweets from the @FEMALive account, which covered the conference live on Twitter:

Thanks to everyone who was able to participate and follow the discussion online!

For more information on how FEMA engages with the Private Sector, visit www.fema.gov/private-sector.

Have a safe weekend!


This summer’s floods in Alberta and Toronto highlight the importance of business continuity planning – a key part of any risk management strategy. It keeps employees productive and maintains essential business operations and customer satisfaction during any kind of interruption. However, according to IDC, only 44 per cent of large Canadian businesses (those with more than 1,000 employees) had a continuity plan in place as of late 2011. Small businesses (those with fewer than 100 employees) were even less prepared, with just 25 per cent planning to launch a business continuity plan in the next 12 months.

Here are some key steps to make sure your business operations can continue in the event of another major interruption:

1. Have executive buy-in. Support from executives or other senior leadership is critical for the success of a business continuity plan. Planning and execution will require their buy-in and attention to ensure that all processes are managed effectively.



Friday, 02 August 2013 15:54

NIH Announces Big Dollars for Big Data

Big Data is playing a huge role in medical research—some even believe it will be instrumental in finding a cure for cancer. Though in its early stages, harnessing the power of Big Data obviously has the potential to change medical research in a major way.

The National Institutes of Health apparently agrees. This week, the NIH announced funding for the establishment of six to eight investigator-initiated Big Data to Knowledge Centers of Excellence. The funding will be for up to $24 million per year for four years.

“The centers will improve the ability of the research community to use increasingly large and complex data sets through the development and distribution of innovative approaches, methods, software, and tools for data sharing, integration, analysis and management,” Scientific Computing reports.



As we approach the peak of hurricane season, catastrophe modeler RMS has warned that storm surge poses a greater risk than hurricane wind.

RMS says its updated North American hurricane model shows there is a 20 percent chance that storm surge loss will be greater than wind loss for any U.S. hurricane that makes landfall. And for the northeast coast of the U.S. the risk is even higher.

Dr. Claire Souch, vice president, model solutions at RMS says:

“Our model shows there is a 20 percent chance that storm surge loss will be greater than wind loss for any U.S. hurricane that makes landfall, which rises to almost 40 percent along the northeast coast of the United States – this is a risk the market can no longer afford to ignore.”

RMS’ updated North Atlantic hurricane model suite includes the ability to fully quantify the risk from catastrophic hurricane-driven storm surge.



There is no doubt that companies understand the importance of business intelligence (BI) in supporting the efficient and effective running of the organisation.

Continued economic uncertainty and major industry-changing dynamics like mobility and the shift to digital business put a premium on data and information. Whether it's optimising processes, improving customer service, increasing the accuracy of marketing initiatives, breaking into new markets, or seeking ways to get ahead of the competition, firms recognise that getting the right data to the right person at the right time is a key prerequisite to business success.

However, recognising the importance of data and analytics is one thing. Actually putting in place the processes and tools required to deliver data and analytics in the most efficient and appropriate way to meet the needs of business decision-makers is a different matter:



Cloud data storage and disparate privacy laws could be hampering companies fighting cyber attacks, according to Seth Berman, UK executive managing director of digital risk management and investigations firm, Stroz Friedberg.

He urged organisations to review cloud services contracts to prevent valuable time being lost when responding to a data breach incident.

“Companies are forced to fight attackers on multiple geographic fronts, but the complexities of the internet cloud and a patchwork quilt of data privacy laws means a prompt response is often difficult,” said Berman.

Cyber incident response plans must take into account any potential restrictions on access, but providers are rarely set up to support a victim's need to obtain forensic images of its own servers.



Heightened regulatory scrutiny and greater concerns over risk governance have led financial institutions to elevate their focus and attention on risk management, a new global survey from Deloitte Touche Tohmatsu Limited (DTTL) finds. In response, banks and other financial services firms are increasing their risk management budgets and enhancing their governance programs.

According to Deloitte’s eighth biennial survey on risk management practices, entitled ‘Setting a Higher Bar,’ about two-thirds of financial institutions (65 percent) reported an increase in spending on risk management and compliance, up from 55 percent in 2010.

A closer look at the numbers finds, though, that there is a divergence when it comes to the spending patterns of different-sized firms. The largest and the most systemically important firms have had several years of regulatory scrutiny and have continued their focus on distinct areas like risk governance, risk reporting, capital adequacy, and liquidity. In contrast, firms with assets of less than $10 billion are now concentrating on building capabilities to address a number of new regulatory requirements, which were applied first to the largest institutions and are now cascading further down the ladder.



Online threats and cyber crimes increase with intensity and complexity almost daily. Couple this with the fact that nearly all business functions rely on the Internet and IT in some way, and you have big reasons to fear a failure in your company’s online defenses.

The Department of Homeland Security has identified five main questions that c-level executives should consider when addressing cyber risks. These points are presented in the IT Download, Cybersecurity Questions for CEOs. The informative document covers these key questions and others that company leaders must evaluate in their organization to ensure company data and systems are safe from attack—questions that many executives never think to ask of their IT security team, such as:

  • How many cyber incidents do we detect in an average week?
  • How and when is executive staff notified of a breach or attack?
  • What are our current risks to attack?

According to the document, company leaders should take an active role in risk management discussions:



Friday, 02 August 2013 15:35

How to Smooth IT-Business Friction

CIO — Who loves their IT department? Only one out of 10 has a positive sentiment toward IT support or service, according to a survey by BMC Software. A whopping 63 percent have a negative sentiment, while the rest take a neutral stance.

The vast majority of end-users shake their heads when it comes to IT's ability to respond to and resolve tech problems in a timely manner. The perceived impact this has on worker productivity is pretty bad, too.

"I hate calling the help desk at work," a survey respondent writes. "Not only are they useless, but the guys also do some excessive breathing into the phone."

BMC offers a few things both end-users and IT professionals can do to reduce this friction. End-users can have a "take your techie to lunch day," while IT can deploy a digital ticketing system that drives accountability.

Another potential fix that has been gaining steam lately is the enterprise Genius Bar. Companies such as SAP are taking a page from Apple's hands-on, consumer-friendly approach to solving tech problems. This trend is in the early stages, yet an enterprise Genius Bar has the potential to change the odd-couple relationship dramatically.



I swear I could write about BYOD and its potential security problems every day for the foreseeable future. But I have to wonder if we are approaching the risks in the wrong way.

A new study by managed cloud services provider NaviSite found that while 80 percent of 700 IT decision makers agree that BYOD is the “new normal,” only 45 percent have a formal BYOD policy in their workplace.

That number is awfully low when you consider that even though BYOD is being thought of as the “new normal,” it isn’t exactly a new concept. After all, employees had been using personal computers and laptops for business purposes long before there were mobile devices. And mobile devices have now been around in the workplace for several years.



Over half of UK IT managers believe a fully outsourced managed security service is necessary to support the roll-out and management of cloud technologies, a survey has shown.

The poll of IT managers across all sectors by Vanson Bourne revealed that 78% of respondents are concerned about how to migrate to online services securely.

“As more people introduce cloud services there may be an increase in the use of security in the cloud,” said the survey report.

The report said it is likely that most businesses are trialling the technologies before taking the next step, especially with an issue as important as security.

Only 5% of all IT managers saw no benefit in using a security-as-a-service provider, but all those in the financial sector recognised the benefit of security services.

Just over two-thirds said security service providers should be held responsible for security breaches, indicating that few are willing to accept the security responsibilities of moving to the cloud.



Thursday, 01 August 2013 15:13

Disease Spreading At Speed of Flight

Polio, not bird flu

[Updated on 1 August 2013 at end of entry]

Israel has recently reported several cases of polio.

Since Israel inoculates all children and new immigrants with anti-polio vaccine, the appearance of polio should tell risk management practitioners two things:

  • One: To eradicate a contagious disease, the effort must be worldwide.
  • Two: Communicable diseases can be, and are, spread at the speed of flight.

According to Israeli sources ( http://www.israelnationalnews.com/News/News.aspx/), “The strain of polio virus recently discovered in southern Israel is exactly the same kind as the type of virus that is prevalent in Pakistan, and which existed exclusively in Pakistan until recently, reports the Pakistan-based publication Dawn.

“Dr. Nima Abid, a representative of the World Health Organization (WHO) in Pakistan, told Dawn that the virus was "definitely" from Pakistan, since “The virus genotype (genetic make-up) is the same as prevalent in Pakistan and this is what the research has indicated."

“The samples of the virus strain were found in sewage in Cairo, in December last year.

There had been no cases of polio in Egypt for five years previously, and the disease had been eradicated in Israel much before that, said the WHO official.”

Polio is not the only easily transmitted disease that requires international cooperation to eliminate.



Thursday, 01 August 2013 15:12

Garbage In, Garbage Out

Last week I wrote about a train derailment on the line I take to work every day. It was the third derailment in only a few months for the MTA. It turns out that two sets of tracks were destroyed as the result of a derailment of 10 cars on a CSX train hauling garbage at night.

The MTA responded promptly and by the next morning had plans in place, using buses and a subway line to get people to work in Manhattan. That was a Friday, and by Monday garbage had been removed from the tracks and one track was replaced so that service could mostly be restored. The second track was back a few days later.

But a recent letter to the editor of our local newspaper gave the incident a new perspective. The reader pointed out that a CSX garbage train makes a trip four times each day to and from the Bronx, through Albany, to Virginia.

He stated, “The garbage is loaded next door to two gas-fired electric generating plants,” and pointed out that “every advanced country is converting garbage to gas for electric production – we are not.” Instead, we are hauling it to faraway locales to be placed in landfills.



Thursday, 01 August 2013 15:09

Do 1 Thing: Family Communication Plan

By Cate Shockey

This blog is part of a series, covering a preparedness topic each month from the Do 1 Thing Program. Join us this month as we discuss family communication plans.

For Do 1 Thing this month, it was time to sit down and create a family communication plan. The point is to be able to communicate with family members during a disaster.

On vacation with my family this month, we discussed how we would stay in touch in an emergency situation. Local phone calls can be overloaded in an emergency, so it’s important to choose a person that lives outside of the area to call if you’re not able to reach each other. Because I live in a different state than my family members, it was easy to decide that I would be their out of state contact, and my parents would be mine.

The next step was entering ‘in case of emergency’ numbers (ICE) into our phones. If you are hurt and unable to use your phone, first responders can call your ICE contact for you.

Here are a few things you can do this month to make sure you can stay connected to your family in an emergency:

  • With the prevalence of social media, many people have found that the best way to communicate in the chaos of an emergency is to check in with others on Facebook, Twitter, and Instagram. In 2012, the American Red Cross reported that three out of four Americans (76 percent) expect help within three hours of posting a request on social media, and 40 percent of those surveyed said they would use social tools to tell others they are safe (up from 24 percent in 2011).
  • Fill out a family communication plan (PDF) at Ready.gov. Keep a copy of your plan in your emergency supply kit or another safe place where you can access it in the event of a disaster.
  • Keep a car charger for your cell phone in your car. That way, if the power goes out, you can still charge your phone.
  • Remember that if your call won’t go through in an emergency, a text message might. Make sure everyone in your family knows how to send and receive text messages.
  • The American Red Cross Safe and Well website helps families keep in touch during a disaster. In an emergency, visit the website to enter your information and find information on others.

Check out Do 1 Thing for more tips and information, and start putting your plans in place for unexpected events. Are YOU ready?

Leave a Comment! Do you have a family communication plan? Have you ever had to use it?


It should come as no surprise that regulators and organizations alike struggle to set and enforce guidelines for social media activity. It’s not just that the rise of social media is rapidly transforming the way we interact with people, customers, and brands; it’s also how many ways this transformation is happening.

The core issue is that social media alters the way we as individuals share who we are, merging our roles as people, professionals, and consumers.  As we share more of ourselves on a growing number of social networks, questions quickly surface:

  • How frequently and on what social networks should we post?
  • When should we present ourselves in our professional role versus sharing our personal opinions?
  • Is it okay to be social media friends with co-workers, clients, or your boss?

These are complicated matters for individuals, and absolute conundrums for organizations concerned with how employees behave and interact with others in, and outside of, the workplace. Their questions are even more complicated:



Thursday, 01 August 2013 15:06

Is it time for object storage to shine?

My previous column touched on the promise of storage virtualisation in an era of “software-defined everything” and other initiatives that promise to make storage much simpler to manage.

One option for time and cost-starved IT managers to rein in their storage spending is object storage.

Object storage, on paper at least, seems like an appealing option. It is radically simpler than traditional storage area networks (SAN) and even network-attached storage (NAS), it scales much better from a capacity standpoint, and it is especially well suited to cost-effectively storing lots of unstructured data – think files, videos, music and images – in this big data era.
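
On paper, the model is simple: instead of a file-system hierarchy, you get a flat namespace of keys, each holding a blob plus its metadata, accessed through put/get-style verbs. Below is a minimal in-memory sketch of that idea; the class and method names are hypothetical, though real systems such as Amazon S3 expose the same verbs over HTTP:

```python
# Minimal sketch of the object-storage model: a flat namespace of keys,
# each holding a blob plus user metadata. Names here are hypothetical;
# production systems expose the same put/get/head verbs over HTTP.

import hashlib

class ObjectStore:
    def __init__(self):
        self._objects = {}            # key -> (data, metadata)

    def put(self, key: str, data: bytes, **metadata) -> str:
        """Store a blob; return its content hash (S3 calls this an ETag)."""
        etag = hashlib.sha256(data).hexdigest()
        self._objects[key] = (data, {"etag": etag, **metadata})
        return etag

    def get(self, key: str) -> bytes:
        data, _ = self._objects[key]
        return data

    def head(self, key: str) -> dict:
        """Metadata only -- no need to move the (possibly huge) blob."""
        _, meta = self._objects[key]
        return meta

store = ObjectStore()
store.put("videos/2013/keynote.mp4", b"...binary...", content_type="video/mp4")
print(store.head("videos/2013/keynote.mp4")["content_type"])   # video/mp4
```

Note the absence of directories, file locks, or block mappings: keys like "videos/2013/keynote.mp4" only look hierarchical, which is a large part of why object stores scale out for unstructured data so easily.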

Yet, according to our research, the adoption of object storage is a minority activity. In a recent study by 451 Research’s The Info Pro service, out of 275 storage professionals at mid-sized and large organisations, just under a quarter (24%) said they had already deployed object storage.



Wednesday, 31 July 2013 18:57

This is not a test

FORTUNE -- Manpower -- SWAT teams, bomb squads, K9 units, scores of local police officers, and citizens providing information -- will forever receive credit for bringing down the suspects linked to the Boston Marathon bombings that killed three and wounded hundreds. But there was another, little-noticed participant in the manhunt: an emergency alert platform created by Glendale, Calif.-based Everbridge.

It was Everbridge's system that enabled officers to keep locals informed -- and safe -- as they tore through suburban streets in search of the suspects. Everbridge allows single entities to send thousands of messages at the push of a button, even if cell towers are down. (The system can send texts using Wi-Fi). During Boston's marathon bombings, local companies used the system to verify the safety of employees, hospitals used it to relay information to nurses, and police updated citizens with safety alerts and messages. "We really wanted to limit people being out [on the streets] so that those law enforcement folks could maneuver around the town," says Watertown Fire Chief Mario Orangio. "By getting that message out as quickly as we did, it helped immensely." At one point during the manhunt that resulted in the capture of suspect Dzhokhar Tsarnaev, the Watertown Fire Department sent out 11,000 messages in a 15-minute span using Everbridge, he added.
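
The underlying pattern is a fan-out with channel fallback: one message, many recipients, and multiple delivery paths tried in order, so a congested cell network doesn't stop delivery. A hypothetical sketch of the pattern (the function names are invented for illustration; this is not Everbridge's actual API):

```python
# Hypothetical sketch of the fan-out pattern behind mass-notification
# platforms: one message, many recipients, multiple delivery channels
# tried in order so a downed cell tower doesn't block delivery.

def send_sms(number, text):
    """Stand-in for a carrier SMS gateway; simulate a congested network."""
    return False

def send_push_over_wifi(number, text):
    """Stand-in for a Wi-Fi push/data channel; simulate success."""
    return True

CHANNELS = [send_sms, send_push_over_wifi]   # tried in priority order

def notify_all(recipients, text):
    delivered = []
    for number in recipients:
        for channel in CHANNELS:             # fall through to the next channel
            if channel(number, text):
                delivered.append(number)
                break
    return delivered

ok = notify_all(["555-0100", "555-0101"], "Shelter in place.")
print(f"{len(ok)} of 2 delivered")
```

In the sketch, the SMS channel fails for everyone, yet both recipients still receive the alert over the Wi-Fi path, which is the property the Watertown responders relied on.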



CIO — Your organization will come under attack. It's not a matter of "if." It's a matter of "when." And security is no longer simply an operational concern. As technology has become the central component of nearly all business processes, security has become a business concern. As a result, information security should sit firmly on the boardroom agenda.

"If the worst were to happen, could we honestly tell our customers, partners or regulators that we've done everything that was expected of us, especially in the face of some fairly hefty fines that could be levied by regulators?" asks Steve Durbin, global vice president of the Information Security Forum, a nonprofit association that researches and analyzes security and risk management issues on behalf of its members, many of whom are counted among the Fortune Global 500 and Fortune Global 1000.

"We're seeing, I think, not only that boards need to get up to speed on this, but also they need to be preparing their organization for the future," Durbin says. "They need to be determining how they can be more secure tomorrow than they were today."



Today I’m going to discuss how a company can mismanage a crisis in a way that makes their plans backfire and blow up.

Of course a crisis cannot always be perfectly planned for or averted. But there are a few ways for a social web team to turn a crisis around and even reap benefits from it.

Recently, Chipotle’s Twitter account was allegedly hacked with several incoherent tweets being published.



Cloud computing gives organisations the opportunity to rethink many traditional IT practices, but it may be a particularly good fit for disaster recovery and business continuity.

Network World Editor in Chief John Dix caught up with IBM Distinguished Engineer Richard Cocchiara, who is CTO and the Managing Partner of Consulting for IBM's Business Continuity & Resiliency Services, for his perspective on the subject.

Cocchiara leads a worldwide team who work with clients on systems availability, disaster recovery planning, business continuity management and IT governance.



More than three quarters of IT professionals have experienced a data center outage in the past year, a report released on Tuesday by disaster recovery company Zerto said.

In a survey of 356 IT professionals, including IT managers, VMware admins and sysadmins, Zerto found that 42 percent of respondents reported having experienced an outage in the last six months, with 86 percent of those incidents caused by something other than a natural disaster. The top two causes of a data center outage are hardware failure and power loss.

According to the report, 7 percent of companies have no disaster recovery plan at all, which is particularly disturbing when you see the different types of industries the respondents work in, including finance, healthcare, legal, education, pharmaceuticals and manufacturing. In a report from 2011, data center association AFCOM found that more than 15 percent of data centers have no plan for business continuity or disaster recovery.



After investigating alleged steroid use by New York Yankees third baseman Alex Rodriguez, Major League Baseball has reportedly offered him a plea deal. It’s the latest installment in a sad story, with important lessons for companies and workers, both inside and outside the ballpark.

Before allegations of his steroid use surfaced, Rodriguez had become one of baseball’s most storied – and lucrative – franchises and one of the wealthiest players in the game’s history. His annual earnings were $30.3 million according to FORBES’ latest estimates, making him #18 in the magazine’s list of the world’s highest paid athletes. Penalties and fines could mar his future earnings and what should be a hall-of-fame career.



These are some of the lessons that emerge for corporate America.



The surge of BYOD and mobile devices in general has unleashed havoc in mobile security in the enterprise. IT security managers have been attempting to deal with the fast influx of devices, but most are reeling from the overload of OSes, security issues, vulnerabilities and technologies aimed at securing such devices. In response to this, the National Institute of Standards and Technology (NIST) has provided an informative publication to assist IT organizations in securing mobile devices throughout their life cycles.

The publication, Guidelines for Managing the Security of Mobile Devices in the Enterprise, breaks down the issues surrounding mobile device security into manageable segments, including:

  • Defining Mobile Device Characteristics
  • Technologies for Mobile Device Management
  • Security for the Enterprise Mobile Device Solution Life Cycle

Within each section are many subsets of information to guide IT security teams in developing their own mobile device security management plan. According to NIST, organizations may not need to use all of the services covered, but services to be considered should include:



Wednesday, 31 July 2013 14:51

Are Businesses Rushing to BYOD Too Quickly?

CIO — Are you breaking the law with your BYOD policy?

In a TEKsystems June survey of 3,500 tech professionals, 35 percent of IT leaders (such as CIOs, IT vice presidents and directors) and 25 percent of IT professionals (such as developers, network admins and architects) are not confident that their organization's BYOD policy is compliant with data and privacy protection acts, HIPAA, Dodd-Frank or other government-mandated regulations.

Half of the respondents also believe that 25 percent or more of sensitive data is at risk due to end users accessing this information over personal devices.

These and other alarming findings paint a disturbing picture: The race to embrace BYOD might be outpacing sound business practices.



I’ve mentioned in previous posts that Big Data is more than just big. In order to realize its true value, it must be fast as well.

That means analysis has to approach real-time levels in order to ensure that the final product is relevant to the rapidly changing business environments in which most enterprises find themselves. And therein lies the problem, because while Big Data analytics platforms can be deployed on existing data center infrastructure, producing a real-time architecture will take a bit of work.

Hitachi Data Systems recently completed a study of UK organizations that have implemented Big Data strategies and found that more than half were still relying on outdated or inaccurate information because their legacy infrastructure could not meet the demands of real-time analytics. A key problem remains the stubborn presence of data silos within existing infrastructure, which prevent analytics engines from gaining a true picture of both structured and unstructured data sets. Not to mention, critical data is often kept hidden from decision makers because it can’t be made available on an organization-wide basis.



Truly savvy managers know the value of information. It’s the stuff intelligent decisions are borne of. But in recent weeks, the international community and the US Federal Government have been howling over the data collection efforts of the National Security Agency, making arguments as to whether or not those efforts are in the interests of US national security and whether or not data mining is an invasion of individual civil liberties. The concerns being raised may be misplaced. The major concern may not be with the data, but with the information being derived from it.

Information is distilled data. Distillation is a process that profoundly alters the natural state of the data. Anyone who has ever distilled data knows that context, sampling procedures, and data aging all play significant roles in the value of the information derived from it. As managers and executives, we need to examine four key considerations whenever we’re using data and information to make critical business decisions:



Tuesday, 30 July 2013 16:42

ERM: Old concept, new ideas

CSO - Enterprise risk management (ERM) is hardly new. Eric Cowperthwaite, CISO at the nonprofit healthcare organization Providence Health and Services, recalls hearing the term for the first time in the late 1990s, "and it existed before then, even if we didn't call it that," he said.

Indeed, the term goes back several decades, according to Jeff Spivey, who is vice president at RiskIQ, president at Security Risk Management, and international vice president of ISACA.

"My father was involved in risk management beginning in 1968," he said. "What was then called 'risk management' is now called 'enterprise risk management.'"

John Shortreed, a member of the International Organization for Standards, which developed ISO 31000, one of the most prominent frameworks for ERM, says the framework has been "evolving and maturing over the last decade, in response to the increasing risks [in] our world" brought on by such varied factors as interconnectivity, climate change and economic upheaval.



While the tragedies of April 15 and April 18, 2013, are forever etched into the minds of the greater-Boston and MIT communities, 46 participants in the MIT Professional Education course Crisis Management and Business Continuity had the opportunity to hear first-hand accounts of the events on Boylston Street and MIT’s campus from several key responding organizations, news outlets, an MIT alumnus, and several others on July 18 at the Stata Center.

The panel, titled “The Boston Marathon bombings: Exemplary response amid horror,” was moderated by WBUR’s Deborah Becker, and included Edward Davis, Boston Police commissioner; James Hooley, chief of Boston EMS; Dr. Paul Biddinger, chief, Division of Emergency Preparedness, medical director, Emergency Department Operations, Massachusetts General Hospital; Imad Mouline, SB ’91, CTO, Everbridge, a mass- and emergency-notification software company; Joe Sciacca, editor-in-chief of the Boston Herald; and Peter Casey, programming and news director, WBZ radio. William VanSchalkwyk, managing director, Environment, Health, and Safety Headquarters Office, MIT; and Helen Privett, business continuity manager at GMO, were also on hand.



Colleges and universities are putting the financial and personal information of students and parents at risk by allowing them to submit such data to the school in unencrypted email.

That was a finding in a survey released Monday by Halock Security Labs after surveying 162 institutions of higher learning in the United States.

Half the institutions allowed sensitive documents to be sent to them in unencrypted emails, the survey said, while a quarter of the schools actually encouraged such transmissions.

"Typically, they do what they need to do to comply with regulations, but they're weak on risk management and actively controlling  and managing risk," Terry Kurzynski, a partner with Halock Security Labs, said in an interview.



Has a third-party vendor caused a data breach at your organization? If so, did the vendor notify you? If you weren’t notified during — or right after — the investigation you have plenty of company.

A new study conducted by the Ponemon Institute indicates that many business associates don’t notify their organizations of a data breach during the investigation or after determining the cause of the incident. In fact, 47 percent of those polled either have no timeframe for notification or they do not notify the organization at all.

These facts alone are alarming but can be especially detrimental to an organization in the health care industry, where the new HIPAA Omnibus Final Rule broadens the definition of a data breach and calls for stricter enforcement and greater penalties. The Omnibus Rule took effect in March 2013, although organizations have until September to comply.



A tremendous amount of attention has been lavished on machine-to-machine (M2M) communications. One of its great selling points is its ubiquity. It holds the promise of burrowing into the nooks and crannies of everyday life and providing communications affecting a massive number of mundane uses. It’s a terrific time and labor saver – if things go according to plan.

Believe it or not – and I know this is shocking – things don’t always go according to Hoyle. M2M, if compromised, can turn those rote procedures and promises into real headaches. The Internet of Things can turn into the Internet of Troubles.



It seems like barely a week goes by that there isn’t another development in the software-defined data center.

But as the advancements keep piling up, one thing is becoming clear--or less clear, when you think about it. The more vendors, developers, systems integrators, data operators and providers enter the field, the more muddled it becomes. What once appeared to be a fairly straightforward, albeit highly technical, means of extending the benefits of hardware virtualization across both localized and distributed infrastructure is quickly becoming a mishmash of platforms, architectures and design philosophies that could very well end up destroying the broad universality that the technology was supposed to engender.

In this way, software-defined tech is no different from the many IT evolutions of the past. Yet it is still painful to see another golden opportunity for widespread infrastructure interoperability slip through the data community’s grasp.



JERSEY CITY, N.J. – ISO announced today revisions to its e-commerce (cyber insurance) product. The E-Commerce Program enhancements from ISO introduce new insurance policies designed specifically for companies with a media liability exposure. Both a "claims-made" and "occurrence" version, each providing defense within limits, are available. ISO is a member of the Verisk Insurance Solutions group at Verisk Analytics (VRSK).

The new policies complement ISO's existing cyber liability insurance policies: the Information Security Protection Policy (for commercial risks) and the Financial Institutions Information Security Protection Policy (for all financial institutions).

ISO's media liability policies offer eight separate insuring agreements: media liability; security breach liability; programming errors and omissions liability; replacement or restoration of electronic data; extortion threats; business income and extra expense; public relations expense; and security breach expense. All of them can be written with separate limits and deductibles. Similar to the existing ISO cyber insurance policies, the new media liability policies have associated manual rules and loss costs.



Recent developments in the cybersecurity landscape have heightened interest in the challenges associated with accurately anticipating and understanding risk, and using that knowledge to better manage organizations.

Enterprises are getting better at delivering risk assessments and, one hopes, defenses, amid today's challenging cybersecurity climate.

Nation-state types of threats may have a very serious impact on organizations. President Obama has directed the National Institute of Standards and Technology to develop a new cybersecurity framework. The administration has sharpened its focus on what can be done to improve cybersecurity throughout the United States' critical infrastructure.

In this podcast, a panel of experts discuss how predicting risks and potential losses accurately is an essential ingredient in enterprise transformation.


Considering potential threats to an organization's reputation as part of the strategic planning process can help reduce such risks and even position a company to enhance its reputation by allowing it to prepare an effective response when an event occurs.

“I think there is a very powerful connection between strategic risk management and reputation and brand management,” said James W. DeLoach, managing director at consultant Protiviti Inc. in Houston.

“As we view certain events over the last several years, we have come to realize even the best household names, the best brands face their moment of crisis. No company is immune to the risk of a crisis,” Mr. DeLoach said.



Can you imagine a major industry which suffers a near-death experience, angers its entire customer base—wholesale and retail, domestic and international—and yet refuses to publicly apologise and adopt a plan of action that commits the industry to not repeating the mistakes of the past? That is where the banking industry is right now.

This lack of decisive action on the part of the industry’s leadership will do lasting damage to not only the industry but also to its as yet unforgiving customers and the global economy. Part of the problem is that the industry does not appear to even realise that it is in a crisis—one which has been brought about by a complete loss of public faith in its activities. That is a tragedy.



Monday, 29 July 2013 15:57

The RAID5 delusion

Case in point
I spoke to the head of a small company – about 25 employees – that had suffered a RAID5 drive failure. The 4TB RAID was used for file sharing.

A drive failed, reconstruction failed and vendor phone support was disastrous. All data was lost.

But the worst of it was that there was no backup. They believed that RAID5 would protect their data. They were wrong.

What RAID5 is for
RAID5 does offer some data protection, assuming it works. But its main purpose is to protect access to your data. This is why it is popular in enterprise applications, where maintaining data access during a failure is of vital concern.
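
A back-of-envelope calculation shows why RAID5 is no substitute for backup. Rebuilding after a drive failure means reading every sector of every surviving drive, and a single unrecoverable read error (URE) can abort the rebuild. A rough sketch, assuming a hypothetical consumer-drive URE rate of one error per 10^14 bits read:

```python
# Back-of-envelope estimate of a RAID5 rebuild completing without an
# unrecoverable read error (URE). The URE rate below is an assumption
# typical of consumer-drive datasheets, used here only for illustration.

URE_RATE = 1e-14   # probability of an unrecoverable error per bit read

def rebuild_success_probability(drive_tb: float, n_drives: int) -> float:
    """P(every surviving bit reads cleanly) after one drive fails."""
    # A rebuild must read the full contents of all remaining drives.
    bits_to_read = (n_drives - 1) * drive_tb * 1e12 * 8
    return (1.0 - URE_RATE) ** bits_to_read

# A 4TB array built from four 1TB drives, like the one in the story:
p = rebuild_success_probability(drive_tb=1.0, n_drives=4)
print(f"Chance a rebuild completes cleanly: {p:.1%}")
```

Under these assumptions the rebuild has a meaningful chance of failing outright, and the odds get worse as drives grow larger; the only real protection for the data itself is a separate backup.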



While there’s a tendency to think of cloud computing as a nebulous IT experience that provides continuous access to files and applications, the reality of cloud computing is governed much more by the unforgiving laws of physics. In fact, cloud computing is little more than a massive exercise in distributed computing where the location of files and applications matters more than ever.

Given that reality, there’s a lot more interest these days in putting applications in the cloud as near to the core Internet as possible without being locked into a specific carrier for network services.



Computerworld — There's a new C-level executive -- the Chief Digital Officer (CDO) -- in the boardroom, charged with ensuring that companies' massive stores of digital content are being used effectively to connect with customers and drive revenue growth.

At first blush, an executive title that includes the word "digital" would seem to encroach on IT's territory. Not so, observers say -- but that doesn't mean tech leaders don't need to be prepared to work closely with a CDO somewhere down the line.

Gartner last year reported that the number of CDOs is rising steadily, predicting that by 2015, some 25% of companies will have one managing their digital goals, according to analyst Mark P. McDonald. (See also CDOs by the numbers.)

While media companies are at the forefront of this movement, McDonald says, all kinds of organizations are starting to see value in their digital assets and in how those assets can help grow revenue.

"I think everybody's asking themselves whether they need [a CDO] or should become one," McDonald enthuses. "Organizations are looking for some kind of innovation or growth, and digital technologies are providing the first source of technology-intensive growth that we've had in a decade."



Monday, 29 July 2013 15:51

Cloud EHR Lessons Learned in Haiti

CIO — Healthcare providers in the United States have preconceived notions about electronic health records—namely, that EHR systems haven't lived up to their promise of transforming healthcare by improving efficiency and cutting costs.

The healthcare industry has preconceived notions about cloud computing, too—namely, that the cloud isn't secure enough for patient data.

Go to Haiti, though, and the story's dramatically different. There are no preconceptions, no tales of IT implementations gone wrong and no government mandates to adopt technology. As one health worker told Pierre Valette, vice president of content communications for cloud EHR and practice management software vendor athenahealth, "They've got nothing to unlearn."



We couldn’t let this week end without leaving you with another reminder of the unaddressed risks in BYOD practices. It’s a trend that shows no sign of slowing, as the risks may be multiplying faster than IT’s ability and willingness to take control in some organizations.

In a Fiberlink survey conducted by Harris Interactive among 2,064 U.S. adults earlier this year, respondents answered questions about how they use their personal and work-provided mobile devices, how they regard those devices, and which specific risky activities they have performed with those devices.

What have they been up to? Twenty-five percent had opened or saved a work attachment file into a third-party app like Dropbox. Twenty percent had cut and pasted a work-related email or attachment from company email to personal email. Eighteen percent had accessed websites blocked by company policy. Fifty-six percent reported they had not performed any of these activities. Since this is self-reported, we can assume these numbers are skewed to make the respondents look more chaste than they may really be.



A recent study of 35 large organizations found that social data is still “largely isolated from business-critical enterprise apps” and is created in departmental silos.

The Altimeter Group study found that the average enterprise-class company owns 178 social accounts, with 13 departments “actively engaged” on social platforms. That’s creating serious social data silos, and, not surprisingly, there’s very little effort to integrate all this data.

You really didn’t need a crystal ball to see this coming. As long as businesses function in departmental silos, there will be data silos that mimic that structure.

The report also revealed that this data isn't always easy to integrate, attributing the difficulty to the number of organizational departments that touch it, "all with varying perspectives on the information." The article adds:

“The report also notes the numerous nuances within social data make it problematic to apply general metrics across the board and, in many organizations, social data doesn’t carry the same credibility as its enterprise counterpart.”

When social data is integrated with enterprise data, it’s usually through business intelligence tools (42 percent), followed by market research at 35 percent. CRM (27 percent), email marketing (27 percent) and sensor data (uh? 4 percent) are also points of convergence.



Now that energy prices seem to have stabilized once again, there has been a noticeable shift in attitude surrounding the development and design of the next-generation, “green” data center.

It’s not that the IT industry has discarded the concept entirely--indeed, a number of high-profile projects are scheduled to break ground in the next few months--but there is growing disagreement over how to ensure that everyone’s needs are being met, including data providers, data consumers and the environment itself.

A key topic of debate is the use of renewable energy. Whether it’s wind, water, solar, geothermal, etc., questions are surfacing as to whether full or even partial dependence on renewables is right for the data center. It’s important to note that some of the criticisms are coming from leading environmental researchers, not the data center industry.



CIO — Earlier this week, Intel discussed its plans to forever change the data center as we know it.

Intel, a core technology maker, is now aggressively moving from servers into networking and storage and partnering with segment leaders such as Cisco Systems and EMC along the way. This could make the near future rather interesting.


Think RAID, But With Cheap Processors

For a while, I was convinced that Intel wouldn't catch this wave. Years ago, Microsoft began an initiative to rethink the data center as kind of a modular server. Applying a RAID-like concept to low-cost processors stood at the center of this effort. Replacing the "D" in RAID with a "P" would give any CMO a heart attack, so the concept never got a catchy name—but, on paper, it was poised to reduce computing costs dramatically.
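The RAID-like idea described above can be made concrete. The sketch below is purely illustrative (not Microsoft's actual design): run the same task on several cheap, individually unreliable processors and accept the majority answer, so the loss of any one unit doesn't matter, just as RAID tolerates the loss of one disk. All function names here are hypothetical.

```python
from collections import Counter

def redundant_run(task, value, workers):
    """Run task(value) on every worker; return the majority result."""
    results = []
    for worker in workers:
        try:
            results.append(worker(task, value))
        except Exception:
            # A cheap processor failing is expected; just skip it.
            continue
    if not results:
        raise RuntimeError("all workers failed")
    answer, _votes = Counter(results).most_common(1)[0]
    return answer

# Simulated units: two healthy, one dead.
def healthy(task, x):
    return task(x)

def faulty(task, x):
    raise RuntimeError("dead unit")

print(redundant_run(lambda x: x * x, 7, [healthy, faulty, healthy]))  # 49
```

The trade-off mirrors RAID's: you pay for redundant low-cost hardware up front in exchange for cheap fault tolerance, instead of paying a premium for one highly reliable unit.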



By far the majority of reputation crises I’ve been involved in have had one very important question at their core: how do we avoid fanning the flames? There is a very real danger that communicating about an event will actually do harm rather than improve the situation. The greatest danger, of course, is bringing a bad story to the attention of people who otherwise would not even be aware of it.

This understandable fear is, I believe, the main cause of the other problem: “too little, too late.” When actions taken, or messages communicated, about a big problem are seen as coming slowly and only in response to outrage or pressure, the reputation damage can be severe.

This is a dilemma, a clear example of being between a rock and a hard place. And almost everyone wants to know how to make a sure-fire strategy decision that doesn’t cause harm in either direction.



Two months after Hurricane Sandy pummeled New York City, Battery Park is again humming with tourists and hustlers, guys selling foam Statue of Liberty crowns, and commuters shuffling off the Staten Island Ferry. On a winter day when the bright sun takes the edge off a frigid harbor breeze, it's hard to imagine all this under water. But if you look closely, there are hints that not everything is back to normal.

Take the boarded-up entrance to the new South Ferry subway station at the end of the No. 1 line. The metal structure covering the stairwell is dotted with rust and streaked with salt, tracing the high-water mark at 13.88 feet above the low-tide line—a level that surpassed all historical floods by nearly four feet. The saltwater submerged the station, turning it into a "large fish tank," as former Metropolitan Transportation Authority Chairman Joseph Lhota put it, corroding the signals and ruining the interior. While the city reopened the old station in early April, the newer one is expected to remain closed to the public for as long as three years.

Before the storm, South Ferry was easily one of the more extravagant stations in the city, refurbished to the tune of $545 million in 2009 and praised by former MTA CEO Elliot Sander as "artistically beautiful and highly functional." Just three years later, the city is poised to spend more than that amount fixing it. Some have argued that South Ferry shouldn't be reopened at all.



When I was 21, I almost lost several hundred million dollars by threatening to mutilate one of our customers.

In my senior year of college, I worked full time as an intern PM at NetApp. I spent most of that time at work being groomed and prepared to be a full PM, and given that my background was in cryptography, I got pulled into a lot of customer meetings related to security.

One of our customers at the time was undergoing a big change to their security architecture, and I tagged along with one of the directors to the meeting. I was one of ten PMs giving talks on our roadmap and plans, and I had 30 minutes to convince their CIO and CEO that we could integrate our new systems well with the new security infrastructure they were rolling out.



WASHINGTON, D.C. — U.S. small businesses — widely recognized as the backbone of the U.S. economy — are particularly at risk from extreme weather and climate change and must take steps to adapt, according to a new report from Small Business Majority (SBM) and the American Sustainable Business Council (ASBC).

Titled “Climate Change Preparedness and the Small Business Sector,” the report concludes: “Because small businesses are distinctly critical to the U.S. economy, and at the same time uniquely vulnerable to damage from extreme weather events, collective actions by the small business community could have an enormous impact on insulating the U.S. economy from climate risk.”

Featuring case studies from the retail, tourism, landscape architecture, agriculture, roofing and small-scale manufacturing sectors of the U.S. economy, the Small Business Majority/ASBC report finds: