
In today’s “24x7,” technology-oriented, and customer-centered world, customers expect organizations to maintain continuous availability and accessibility of products and services. As a result, managers must ensure that organizational processes and supporting systems continue to function in the event of disasters or severe outages. Business continuity planning, as defined by DRI International and the Disaster Recovery Journal, is “the process of developing advance arrangements and procedures that enable an organization to respond to an event in such a manner that critical business functions continue with planned levels of interruption or essential change.” Done well, this planning process leads to business continuity readiness: the readiness of an organization for a disaster or severe outage.
 
This article presents the concept of perceived organizational business continuity readiness and the results of a research project that developed a tool to measure it. The questionnaire developed in this research is an assessment tool that managers can use to evaluate perceived readiness at the firm level and, potentially, at the industry level. It is important to note that, because the questionnaire captures the perceptions of individuals about their organizations, this survey assesses perceived organizational business continuity readiness, not actual readiness.
 
We developed and tested this measurement tool using generally accepted statistical procedures for scale development proposed by Gilbert Churchill (1979). Our research steps are outlined in Table 1. The purpose of this type of research is to develop a comprehensive set of questions to measure a concept and then reduce them to a manageable number, with minimal loss of information, while ensuring the validity and reliability of the resulting survey. In the sections that follow, we describe the procedure for constructing and refining the measurement questionnaire and then demonstrate how managers can use it as a tool. We show how our results can help managers assess the perceived readiness of their organizations, using norms for perceived business continuity readiness that we provide for several categories: industry, organization size, and the certification level and perceived expertise of the responders. We conclude by suggesting opportunities for future research.

Defining Perceived Organizational BC Readiness
We developed our definition of perceived organizational business continuity readiness based on a review of prior academic work, business-oriented research, and input from industry experts and members of professional organizations. Based on this background research, we defined perceived organizational business continuity readiness as “the perceived ability of an organization to keep functioning until its normal facilities are restored after a disaster or disruptive event.” This concept encompasses the importance of risk and business impact assessments, development of business continuity strategies and plans, implementation of infrastructure and procedures necessary to make those plans operational, and exercises and periodic updates of completed plans.

Generating Survey Items
We generated an initial sample of survey questions based on a review of prior academic and business literature on the topic. We reviewed DRI International’s “Professional Practices for Business Continuity Planners” and examined four years of weekly survey questions posted on the Disaster Recovery Journal Web site from 1999-2002. Additionally, we conducted experience surveys, using in-depth interviews with five industry experts as well as an online discussion involving a listserv of 57 Master Certified Business Continuity Professionals (including business continuity planners, managers, and vendors). As a result of this process, we identified 56 survey questions that used a seven-point agreement/disagreement scale to measure the concept of perceived organizational business continuity readiness.

Collecting Data – Survey No. 1
We implemented an on-line version of the questionnaire using Inquisite survey software and posted the survey announcement and link on the Disaster Recovery Journal Web site. Our target population consisted of executives, managers, employees, and vendors with a specific interest in business continuity planning. Two sampling units were used: Disaster Recovery Journal Web site visitors and Disaster Recovery Journal on-line newsletter recipients. Participants were offered the results of the survey as an incentive to complete it.
The questionnaire was structured (all responders replied to the same questions) and undisguised (there was no attempt to hide the purpose of the study). The survey consisted of the 56 survey items and four additional demographic questions related to perceived expertise (novice, intermediate, or expert), certification status (certified/not certified), size of the organization in terms of number of employees, and primary industry of the organization. For each of the survey items, responders were asked to indicate the extent to which they agreed with each statement on a seven-point scale ranging from “completely disagree” to “completely agree.” The survey was pre-tested by three subjects with no business continuity planning experience (for clarity and functionality) and by five experts (for question clarity and understanding). The responses of those pre-test participants were then dropped from further analysis. There were 491 surveys submitted; of those, 432 were answered completely, with no missing responses to any of the items. Approximately one-fourth of the responders requested the survey results via a separate e-mail, indicating significant interest in the study.

Reducing the Number of Survey Items – Survey No. 1
We analyzed the completed surveys using a cycle of reliability analysis and factor analysis. To assess reliability, we checked the extent to which responses to each item were consistent with responses to the other items, using correlations and Cronbach’s alpha (a measure of reliability). Items with low consistency were dropped. We used factor analysis to determine whether a small number of underlying dimensions (called factors) could summarize the information contained in the large number of questions, and we were able to identify and name five such factors. We deleted items that were not neatly summarized by an underlying factor (in technical terms, those with low loadings and cross-loadings). Based on the reliability and factor analyses, we reduced the number of items to 23, with five underlying dimensions that explained about 70 percent of the variance. These 23 items were then used to collect additional data in a second round of data collection.
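For readers who would like to experiment with this item-reduction cycle, the sketch below shows how Cronbach’s alpha and an exploratory factor analysis can be computed in Python. The synthetic response data, the item names (q1 through q56), and the use of the factor_analyzer package are illustrative assumptions; they stand in for the actual survey data and statistical software used in the study.

```python
# A minimal sketch of the reliability/factor-analysis cycle, using synthetic
# placeholder data in place of the real survey responses.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Synthetic stand-in: 432 responders x 56 items on a 1-7 agreement scale.
rng = np.random.default_rng(42)
responses = pd.DataFrame(rng.integers(1, 8, size=(432, 56)),
                         columns=[f"q{i}" for i in range(1, 57)])

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

print(f"alpha for the full item pool: {cronbach_alpha(responses):.2f}")

# Exploratory factor analysis: extract five factors (as in Survey No. 1) and
# inspect the loadings; items with low loadings or cross-loadings are
# candidates for deletion.
fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns).round(2)
print(loadings.head())
print("cumulative variance explained:", fa.get_factor_variance()[2][-1].round(2))
```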

Collecting Data – Survey No. 2
To collect a second round of data, we used a paper survey that was distributed in the on-site attendee packages at the Disaster Recovery Journal’s semi-annual conference in March 2005. The survey consisted of the 23 items identified in the prior step and other demographic questions. There were 129 surveys collected at the conference from a pool of 895 registered attendees. Of those, 126 were complete, useable surveys.

Reducing the Number of Survey Items – Survey No. 2
We reduced the number of items further using reliability analysis and factor analysis (as outlined above). Our criteria for dropping items were low consistency and lack of clear association with any underlying dimension. Based on this analysis, we eliminated 12 of the 23 items. For the remaining items, reliability was high: .9 for the entire scale. Four factors (or categories of items) were found, which explained 83 percent of the total variance in the 11 items. The reliability of each of the four factors was greater than .7, indicating that the component items of each factor are highly correlated and are statistically measuring the same construct. We could easily name the four factors based on their component items: business impact analysis (recovery time objective/recovery point objective), emergency response readiness, business continuity planning exercise, and resource sufficiency. The results of the analyses are displayed in Table 1.
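Continuing the sketch above, per-factor reliability can be checked by computing Cronbach’s alpha over each factor’s component items. The item-to-factor assignments below are purely hypothetical placeholders; in the study, the real assignments came from the factor loadings.

```python
# Hypothetical mapping of the 11 retained items to the four named factors
# (the actual assignments come from the factor analysis, not from this sketch).
factors = {
    "business_impact_analysis": ["q1", "q2", "q3"],
    "emergency_response_readiness": ["q4", "q5", "q6"],
    "bc_planning_exercise": ["q7", "q8", "q9"],
    "resource_sufficiency": ["q10", "q11"],
}
# Reuses cronbach_alpha() and the responses DataFrame from the earlier sketch.
for name, items in factors.items():
    print(f"{name}: alpha = {cronbach_alpha(responses[items]):.2f}")
```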


 
Confirming the Results of the Survey
We further assessed the validity of the 11-item, four-factor model of perceived organizational business continuity readiness (finalized by analyzing the data from Survey No. 2) by using confirmatory factor analysis to analyze the corresponding data from Survey No. 1. The goodness-of-fit statistics (GFI = .96, CFI = .98) are excellent and support the proposed model.
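A confirmatory factor analysis of this kind can be reproduced with open-source tools such as the semopy package for Python; the sketch below is one way to specify and fit a four-factor model. The factor and item names are hypothetical, and the fit statistics it prints on synthetic data will not, of course, match the values reported above.

```python
# A sketch of the confirmatory factor analysis step using semopy
# (pip install semopy). Reuses the synthetic "responses" DataFrame from the
# earlier sketch; hypothetical item names q1..q11 stand in for the 11 items.
import semopy

model_desc = """
bia =~ q1 + q2 + q3
emergency_response =~ q4 + q5 + q6
bcp_exercise =~ q7 + q8 + q9
resource_sufficiency =~ q10 + q11
"""
model = semopy.Model(model_desc)
model.fit(responses[[f"q{i}" for i in range(1, 12)]])
stats = semopy.calc_stats(model)
print(stats[["GFI", "CFI"]])  # values near 1 indicate good fit
```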

Developing Norms to Use With Future Surveys
As previously mentioned, we collected demographic data from survey responders, which allowed us to develop some norms. Table 2 shows the average (mean) responder score (the sum of the 11 items) for responders from organizations of different sizes (in terms of number of employees), from different industries, with different levels of expertise, and with different professional certification status. As you can see, responders from larger organizations perceived their organization’s readiness as higher than those from smaller organizations. Also, responders from financial/banking and insurance organizations perceived their organization’s readiness as relatively high, while those from the education, manufacturing, and retail/wholesale sectors perceived it as relatively low. We also found that responders who rated their level of expertise in business continuity planning as “advanced/expert” or “intermediate” perceived their organization’s readiness as higher than those who classified themselves as “novice.” Responders who have earned a professional certification in business continuity planning perceived their organization’s readiness as higher than those who have not.
As this survey tool is used to collect more data, analysis of that data can be used to refine these norms.
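As an illustration, the sketch below shows how such norms might be recomputed as new responses accumulate, by grouping responders’ total scores on a demographic column. The column names and synthetic values are placeholders, not data from the study.

```python
# A sketch of refining norms from newly collected data: group responders'
# 11-item totals by a demographic attribute and report group means.
# All values below are synthetic placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
item_cols = [f"q{i}" for i in range(1, 12)]
df = pd.DataFrame(rng.integers(1, 8, size=(200, 11)), columns=item_cols)
df["industry"] = rng.choice(
    ["finance/banking", "insurance", "education", "manufacturing"], size=200)
df["readiness_total"] = df[item_cols].sum(axis=1)  # each total falls in 11..77

norms = df.groupby("industry")["readiness_total"].agg(["mean", "count"]).round(1)
print(norms)
```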
It’s important to keep in mind, when looking at these results, that what is being measured with this survey is perceived, not actual, organizational business continuity readiness. It is possible that an organization that is perceived by an individual responder as ready for a disaster or severe outage situation may not actually be ready for such a situation. An organization that is perceived by an individual as not ready for such a situation may, in fact, be ready to function well if it occurs.


How Can You Use This Managerial Tool?
To use this 11-item measurement tool, a responder should answer each of the 11 items in the survey outlined in Table 3. The responses to the 11 items should then be totaled. (Since each item is scored from 1 to 7, that total will range from 11 to 77.) If you use this survey with multiple responders, you can average those 11-item totals.
Table 2 shows the average (mean) responder score (the sum of the 11 items) for the demographic factors discussed earlier. You can compare totaled scores from responders in your organization with those means to make a relative assessment. Readiness of your organization can also be evaluated for each of the four factors or dimensions (e.g., resource sufficiency). Organizations can also administer this questionnaire periodically (e.g., annually) and evaluate the improvement or deterioration in perceived readiness over time.
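For organizations that track these scores programmatically, here is a minimal scoring sketch reflecting the instructions above: total each responder’s 11 answers, average the totals across responders, and compute per-factor subtotals. The item-to-factor mapping is again a hypothetical placeholder; the real items appear in Table 3.

```python
# A minimal scoring helper for the 11-item tool: per-responder totals
# (each between 11 and 77), their average, and per-factor subtotals.
import pandas as pd

# Hypothetical item-to-factor mapping; the actual items are listed in Table 3.
FACTORS = {
    "business_impact_analysis": ["q1", "q2", "q3"],
    "emergency_response_readiness": ["q4", "q5", "q6"],
    "bc_planning_exercise": ["q7", "q8", "q9"],
    "resource_sufficiency": ["q10", "q11"],
}
ITEMS = [item for items in FACTORS.values() for item in items]

def summarize(responses: pd.DataFrame) -> None:
    totals = responses[ITEMS].sum(axis=1)
    print(f"mean readiness total: {totals.mean():.1f} (possible range 11-77)")
    for name, items in FACTORS.items():
        print(f"  {name}: mean subtotal {responses[items].sum(axis=1).mean():.1f}")

# Example with three hypothetical responders (answers on the 1-7 scale):
demo = pd.DataFrame([[5] * 11, [6] * 11, [4] * 11], columns=ITEMS)
summarize(demo)
```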



Managerial and Research Implications
From a management perspective, survey participant feedback indicates that the study has substantial interest and value. It supports the study’s premise that a business continuity readiness scale can provide an evaluation tool for executives and an operational tool that managers can use to measure and assess perceived readiness at both a firm level and, potentially, an industry level. The information it provides might be used to formulate a scorecard, or balanced scorecard component, useful for updating executive management, boards of directors, internal and external auditors, industry analysts, investors, and other stakeholders.
From an industry standpoint, consultant and vendor input related to the first phase of this project indicates the study has practical value. While some non-academic work has been done on this topic by research organizations (e.g., Gartner Group) and vendors, to our knowledge this study is the first of its type to develop a measurement scale for perceived organizational business continuity readiness with high reliability and validity. This academic work, which is available to the entire business continuity community, has the potential to fill a marketing research information void for consultants and small vendors who do not have, and may not be able to afford, the research tools and capabilities used in this study.
From an academic perspective, feedback from business continuity researchers and members of the academic community indicates that no previous constructs/scales for organizational business continuity readiness and its antecedents/consequences have been developed by academic researchers, and that this research has significant potential to add academic value.

Opportunities for Future Research
What’s next in terms of research on this topic? We see several opportunities. First, we would like to use the measurement scale developed in this study within large organizations to assess whether the perceptions of managers and business continuity professionals within the same firm tend to be similar regarding the firm’s business continuity readiness. Second, we plan to assess potential antecedents of perceived organizational business continuity readiness, such as the organizational characteristics of centralization, formalization, risk aversion, and technological turbulence. Third, we plan to conduct research to establish potential consequences (results, effects) of perceived organizational business continuity readiness. Finally, a new research effort to develop an assessment tool for actual organizational business continuity readiness, which could be compared and contrasted with this perceived readiness tool, would be valuable.

 



Terri Kirchner, MBCP, is an instructor and researcher with Old Dominion University in Norfolk, Va. She has been a member of the Disaster Recovery Journal Editorial Advisory Board for six years and is a recently-elected member of the DRI International Certification Commission. She welcomes comments and questions regarding this article.

Kiran Karande, Ph.D., is an associate professor with Old Dominion University in Norfolk, Va. His expertise and research interests include research methodology, promotions, retailing, and international marketing.