Organizations worldwide lack the complete visibility into activities across IT infrastructures needed to reduce cyber risks, causing security incidents, failed compliance, and disruption of business processes
IRVINE, Calif. – May 3, 2016 – Netwrix Corporation, a provider of IT auditing software that delivers complete visibility into IT infrastructure changes and data access, today announced the results of its global 2016 IT Risks Report. The survey aimed to identify the most common cyber risks caused by IT changes and to estimate how well companies are prepared to deal with these risks.
The 2016 IT Risks Report was conducted by Netwrix in January 2016. Researchers analyzed the responses of 826 IT professionals who represented more than 40 industries and organizations of different sizes located worldwide.
The survey’s key findings are:
Fewer than one-fifth of organizations (17%) are confident in their ability to beat cyber risks.
Nearly 78% of respondents consider visibility into IT infrastructure an absolutely critical part of their security strategy.
More than half of respondents (58%) claim that their IT change controls, or the absence of them, are adequate for their business specifics and organization type.
At the same time, a majority of respondents have faced various cyber risks over the last year due to lack of visibility into changes. Two-thirds of organizations (67%) admit they had security incidents, 53% of respondents experienced system downtime, and 45% had compliance issues.
Organizations want deeper visibility into IT infrastructures to better prevent, detect, and respond to cyber risks. As a result, more companies have switched to automated methods of IT auditing, preferring third-party solutions (39% in 2016, up from 29% in 2015). Overall, IT auditing is becoming a widely established practice, with 63% of organizations having IT auditing processes in place in 2016 vs. 52% in 2015.
“The survey discovered an inconsistency between the initial assessment of maturity and the adequacy of IT change controls deployed by organizations and their actual ability to deal with cyber risks,” said Michael Fimin, CEO and co-founder of Netwrix. “Ensuring security today can be a challenge even for experienced professionals. Due to this pressing need for stronger protection, more organizations establish IT auditing processes and automate related tasks to achieve deeper visibility into critical systems and data. Continuous control over the IT environment will enable organizations to stay on top of what is going on across the entire IT infrastructure and mitigate the impact of unwanted or unauthorized activity to timely address security issues before they inflict significant damage.”
“Effective risk and security management requires an integrated approach in which risk and security are made part of the core fabric of business processes and become key components of the organizational culture,” stated the February 2016 Managing Risk and Security at the Speed of Digital Business Report by Gartner. “This requires infusing the key components of risk and security management (i.e., policies, processes, behavior and technology) across all the dimensions of IT — business processes, applications, technology infrastructure and, most importantly, people.”
To download the full 2016 IT Risks Report, please visit: www.netwrix.com/go/2016ITRisksReport
About Netwrix Corporation
Netwrix Corporation provides IT auditing software that delivers complete visibility into IT infrastructure changes and data access, including who changed what, when and where each change was made, and who has access to what. Over 150,000 IT departments worldwide rely on Netwrix to audit IT infrastructure changes and data access, prepare reports required for passing compliance audits, and increase the efficiency of IT operations. Founded in 2006, Netwrix has earned more than 70 industry awards and was named to both the Inc. 5000 and Deloitte Technology Fast 500 lists of the fastest growing companies in the U.S. For more information, visit www.netwrix.com.
SYDNEY – NetComm Wireless Limited (ASX: NTC), a leading global developer of data communications devices, today announced that Verizon Wireless has certified the NetComm Wireless 4G WiFi M2M Router (NTC-140W-01) for use on the Verizon Wireless Private Network. Mission-critical wireless Machine-to-Machine (M2M) data can now be secured using the NTC-140W-01 to create a direct connection to internal enterprise systems over a segregated private network.
Developed to accelerate the uptake of M2M in the US, the NTC-140W-01 brings Verizon Wireless LTE/XLTE network coverage, speed and security to enterprises that collect and manage large amounts of data from digital displays, smart buildings, remote healthcare, emergency response and other bandwidth-intensive M2M applications.
“We are pleased to introduce a device that separates M2M data from public Internet traffic to secure remote management and advance real world M2M applications by providing undisrupted access to the bandwidth, speed and capacity offered by the Verizon Wireless Private Network,” said David Stewart, CEO and Managing Director, NetComm Wireless.
Designed to support M2M deployments over the long term, the future-proof NTC-140W-01 enables the secure migration of machines and business assets from 2G to 4G LTE. The industrial-grade device reduces deployment risk and features automatic failover for reliable connectivity to time critical systems, tele-health services and disaster recovery applications.
The NetComm Wireless NTC-140W-01 features a powerful edge processor for optimal performance, and its embedded NetComm Linux OS and Software Development Kit (SDK) enable the installation of custom software applications to the on-board memory.
The NTC-140W-01 features two super-fast Gigabit Ethernet ports and high-speed WiFi connectivity to decrease data transfer delays and enable fast and reliable networking across multiple devices. The device also features a flexible range of power options, vehicle voltage support, GPS and ignition input making it ideal for mobile and tracking applications.
Its mountable polycarbonate and rubber enclosure is designed for rugged deployments, and its wide temperature tolerances make the robust NTC-140W-01 ideal for remote and industrial environments.
About NetComm Wireless
NetComm Wireless Limited (ASX: NTC) is a leading developer of Fixed Wireless Regional Broadband and wireless Machine-to-Machine (M2M) technologies that underpin an increasingly connected world. Leading telecommunications carriers, core network providers and system integrators utilise NetComm Wireless' 3G, 4G LTE and new generation Fixed Wireless solutions to optimise network performance and to support their connected products and services in the M2M and regional broadband markets. For the past 34 years, NetComm Wireless has developed a portfolio of world first data communication products, and is now a globally recognised wireless innovator. Headquartered in Sydney (Australia), NetComm Wireless has offices in the US, Europe/UK, New Zealand, Middle East and Japan. For more information, visit www.netcommwireless.com.
Partnership Expands Twine's Data Distribution Infrastructure, Providing Additional Revenue Opportunities for App Publishers
SAN FRANCISCO, May 3, 2016 /PRNewswire/ — TapFwd, the premier mobile-first data management platform, today announced a partnership with Twine, the leader in mobile data monetization. As part of the partnership, TapFwd has integrated Twine's 200+ million mobile data points into its platform, further strengthening TapFwd's position as the most comprehensive mobile-first DMP.
"We're excited to announce our partnership with Twine, which provides our clients additional access to a wealth of in-app data," said Alex Wasserman, Co-Founder and CEO of TapFwd. "Incorporating Twine's deterministic mobile data into our platform makes it even easier for marketers to build highly targeted mobile audiences that share predictive attributes with their most valuable customers."
Twine's data comes directly from app publishers and is based on in-app actions. By aggregating and distributing this data, Twine provides app publishers with the infrastructure to passively generate an incremental revenue stream. The partnership with TapFwd expands Twine's data distribution and monetization channels, providing app publishers an opportunity to safely monetize data through TapFwd's data management platform, which is built specifically for mobile marketers.
"We're always on the lookout for opportunities that allow us to expand into new platforms and provide more revenue opportunities for our publishing partners," said Elliott Easterling, Twine's Co-Founder and CEO. "Our partnership with TapFwd brings additional high quality mobile data into the market, which helps everyone—from consumers to mobile publishers—reap the benefits of more relevant and memorable mobile experiences."
To learn more about TapFwd, please visit: http://tapfwd.com.
About TapFwd

TapFwd makes data accessible to mobile marketers. As the premier mobile-first DMP, TapFwd combines offline, online, and mobile data, allowing mobile marketers to unify disparate datasets, analyze customer segments, and build targeted mobile audiences. Founded in 2014, TapFwd has gathered 70 billion data points on over 500 million mobile devices, helping brands big and small make data-driven decisions in mobile. For more information, visit http://tapfwd.com.
About Twine

Twine is the leader in mobile data monetization. Twine provides app publishers with the infrastructure to safely generate an incremental revenue stream while delivering mobile marketers high quality, 100% deterministic data to power their ad campaigns. Twine's deep network of publisher partners provides mobile marketers one-deal scale for mobile identity, audience, and location data. For more information, visit www.twinedata.com.
NetApp All Flash FAS (AFF) Enables University to Enhance E-Learning With "Bring Your Own Device" Program, Connecting Students and Teachers Anywhere in the World, at Any Time
SUNNYVALE, Calif. – (Marketwired - May 3, 2016) "We knew we needed to build a more powerful foundation to support our long-term e-learning and achieve our Bring Your Own Device goals," says Lee J. DeAngelis, senior systems administrator at The University of Scranton. "We looked at other flash storage solutions, but the plug-and-play deployment of NetApp solutions made it an easy choice."
Located in Pennsylvania, the University of Scranton decided to take e-learning to the next level by connecting students and faculty anywhere in the world, with an innovative Bring Your Own Device (BYOD) program. The university also planned to offer university-owned software free of charge to students and faculty. However, the institution was already experiencing performance issues and delays with existing resource-intensive applications. To make the BYOD program a success and to support existing applications, the university needed a highly scalable storage system.
The university moved its VMware Horizon View VDI deployment to NetApp® (NASDAQ: NTAP) AFF storage systems to expand its mobile-friendly content-delivery solutions, eliminate existing performance issues, and build a solid foundation for the future. It also deployed non-flash NetApp FAS 8040 systems to host critical applications, such as its student information system, in a virtual environment.
With the new solution in place, applications launch faster and are more responsive, and virtual desktop login times have decreased by approximately 50 percent. As a result, the average student saves at least one minute of wait time every lab session. With around 2,000 students using the labs daily over 350 school days each year, more than 11,000 hours a year are reclaimed for learning.
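The reclaimed-hours figure follows from simple arithmetic, assuming roughly one lab session per student per school day (the release does not state sessions per student explicitly):

```python
students_per_day = 2000   # students using the labs daily
school_days = 350         # school days per year
minutes_saved = 1         # wait time saved per student per session

total_minutes = students_per_day * school_days * minutes_saved
total_hours = total_minutes / 60  # about 11,667 hours per year
```

That lands just above the "more than 11,000 hours" claimed in the release.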
The University of Scranton has also been able to:
- Deliver robust performance to students using on-campus labs for resource-intensive applications to enable anytime, anywhere learning.
- Free IT employees from the time-consuming task of maintaining physical machines in labs allowing them to redirect more of their time to key educational initiatives.
- Offer high-quality learning experiences to help the university be more competitive and attract students worldwide.
About NetApp

Leading organizations worldwide count on NetApp for software, systems and services to manage and store their data. Customers value our teamwork, expertise and passion for helping them succeed now and into the future. To learn more, visit www.netapp.com.
NetApp and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. A current list of NetApp trademarks is available on the web at http://www.netapp.com/us/legal/netapptmlist.aspx.
Partnership Will Power Enterprise-Grade Blockchain Experiences With World Class Developer Expertise
CHICAGO, Ill. – (Marketwired - May 3, 2016) Bloq (bloq.com) announced that it is working with Deloitte, which works with each of the top 30 banks across its member firms globally, to build blockchain software solutions for leading companies worldwide. Through this partnership, Bloq will help deliver enterprise-grade blockchain solutions based on the core values of open source, security and reliability.
"Deloitte's market leadership in financial services coupled with Bloq's expertise in blockchain technology, creates a very powerful combination in the industry," said Matthew Roszak, co-founder and chairman of Bloq. "Every CTO on the planet is in the process of developing their blockchain strategy, and this partnership will help further enable these discussions with the guidance and expertise they need," said Roszak.
This is part of Deloitte's initiative to develop blockchain-related prototypes addressing digital identity, digital banking, cross-border payments, loyalty and rewards, as well as products for the investment management and insurance sectors. Bloq's software and services can help provide new technological features and capabilities to Deloitte's client base of global financial institutions.
"Blockchain is proving to be a major disruptive force in financial services," said Eric Piscini, principal with Deloitte Consulting LLP and the global financial services blockchain leader. "We continue to focus on helping our clients make blockchain a reality for their business. Together, we are harvesting the benefits of blockchain technologies -- developing new revenue models, improving cost efficiencies and creating innovative solutions across the globe."
Bloq's software and expertise will help enable Deloitte to work with clients to rapidly build and test ideas, leveraging the latest solutions and integrating them into complex client environments.
About Bloq

Bloq delivers enterprise-grade blockchain solutions to leading companies worldwide. Bloq helps its clients and partners build key layers of infrastructure for blockchain-enabled applications with world-class developer expertise and full 24/7 customer support. For more information, please visit: bloq.com.
About Deloitte

Deloitte provides industry-leading audit, consulting, tax and advisory services to many of the world's most admired brands, including 80 percent of the Fortune 500. Our people work across more than 20 industry sectors to deliver measurable and lasting results that help reinforce public trust in our capital markets, inspire clients to make their most challenging business decisions with confidence, and help lead the way toward a stronger economy and a healthy society.
SANTA CLARA, Calif. – (Marketwired - May 02, 2016) Violin Memory®, Inc., (NYSE: VMEM) announced today that it received notification on April 27, 2016 from the New York Stock Exchange ("NYSE") that Violin Memory's average global market capitalization over a thirty-day trading period and stockholders' equity were below the requirement set forth in the NYSE's continued listing standards.
Violin Memory intends to notify the NYSE within 45 days from receipt of the notification that Violin Memory will submit a plan to the Listings Operations Committee of the NYSE (the "Committee") that describes how the company intends, within 18 months, to regain compliance with the continued listing requirements of the exchange.
During the 45-day period, and during the eighteen-month period if the plan is accepted by the Committee, Violin Memory will be subject to quarterly monitoring for compliance with the plan, and Violin's common stock will continue to be listed and traded on the NYSE, subject to compliance with the other listing standards. If the Committee determines not to accept Violin Memory's plan, it will promptly initiate procedures to suspend trading in and delist Violin Memory's common stock. The NYSE notification does not conflict with or violate any of Violin Memory's credit or debt obligations.
"Our market capitalization is at a level that we do not believe reflects the true value of our business and developed technology," said Kevin DeNuccio, president and CEO of Violin Memory. "Violin Memory remains committed to its strategic shift and product line transition that expands the company's offering to primary storage while maintaining its performance advantage. As a market leader in flash-based storage for enterprises, Violin's Flash Storage Platform offering is uniquely positioned to meet the demands of the world's largest enterprises and support companies' most critical applications. We expect this customer traction to accelerate our progress and success," said DeNuccio.
About Violin Memory

Violin Memory, the industry pioneer in All Flash Arrays, is the agile innovator, transforming the speed of business with enterprise-grade data services software on its leadership Flash Storage Platforms™. Violin Concerto™ OS 7 delivers complete data protection and data reduction services and consistent high performance in a storage operating system fully integrated with Violin's patented Flash Fabric Architecture™ for cloud, enterprise and virtualized business and mission-critical storage applications. Violin Flash Storage Platforms are designed for primary storage applications at costs below traditional hard disk arrays and to accelerate breakthrough CAPEX and OPEX savings while helping customers build the next generation data center. Violin Flash Storage Platforms and All Flash Arrays enhance business agility while revolutionizing data center economics. Founded in 2005, Violin Memory is headquartered in Santa Clara, Calif.
This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995, including statements regarding the following: Violin Memory's ability to submit a plan to the NYSE to bring Violin Memory into conformity with the continued listing requirements that the NYSE will accept; Violin Memory's commitment to its strategic shift and product line transition and its ability to expand its offering to primary storage while maintaining its performance advantage; Violin's future competitive position in the marketplace; Violin Memory's ability to meet future demands of large enterprises and to support companies' most critical applications; Violin Memory's ability to generate sufficient customer traction to accelerate progress and success; and Violin Memory's business plans and strategy. There are a significant number of risks and uncertainties that could affect Violin Memory's business performance and financial results, including those set forth under the captions "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations" in Violin Memory's Annual Report on Form 10-K for the fiscal year ended January 31, 2016, which was filed with the U.S. Securities and Exchange Commission, and which is available on Violin Memory's investor relations website at investor.violin-memory.com and on the SEC's website at www.sec.gov. All forward-looking statements in this public announcement are based on information available to Violin Memory as of the date hereof, and Violin Memory does not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made.
Japanese Distributor to Offer Egenera's Xterity Disaster Recovery and Backup Services to Customers Throughout Japan
BOXBOROUGH, Mass. – (Marketwired - May 2, 2016) Egenera, a leading provider of wholesale cloud services and cloud management software to the channel, today announced it has signed a strategic partnership with Networld Corporation, a leading Japanese distributor of technology products with annual revenues of 66.8 billion Yen. Under the terms of the agreement, Networld will deliver Egenera's Xterity disaster recovery as-a-service (DRaaS) and backup-as-a-service (BaaS) to customers as part of its NETWORLD CLOUD MART offering.
Networld is a distributor of technology products in Japan and is one of the country's largest VMware resellers. Egenera announced the opening of its Xterity Cloud in Tokyo in November and now operates clouds worldwide in San Jose, Ashburn, VA, Boston, Dublin, London and Tokyo, with plans for additional locations as partner demand continues to grow.
"In Japan, Egenera's solutions have been widely used in the enterprise and in financial services for mission critical applications since 2001. Egenera has had an established reputation for high quality disaster recovery and backup, and has been delivering Xterity, its wholesale managed cloud service, in Japan since 2015," said Shoichi Morita, president and representative director of Networld Corporation. "Until now, we haven't had viable solutions for DR and backup of on-premise VMware-based servers, but with this partnership, Networld can add mission critical disaster recovery and backup services to our catalog."
Xterity's business continuity services deliver on-premise server-to-cloud or cloud-to-cloud backup and disaster recovery. Egenera's Xterity Cloud Services deliver a full range of dedicated, managed, private and public cloud services, including Infrastructure as a Service (IaaS), Disaster Recovery as a Service (DRaaS), Backup as a Service (BaaS), and CloudMigrate(SM) exclusively through the channel. With Xterity, resellers can quickly enter the cloud services market with no up-front capital or ongoing maintenance costs. Unlike reselling cloud services from large, commodity cloud vendors, Xterity delivers the margins resellers need to develop a profitable cloud services business.
"Our strategic partnership with Networld is a key milestone in the adoption of Xterity worldwide," said Pete Manca, president and CEO of Egenera. "We're excited about working with Networld to deliver business continuity services to its large customer base."
About Egenera

Egenera is a leading provider of wholesale cloud services to the channel and data center infrastructure management software. Xterity, the company's white label cloud service, offers revenue generating Infrastructure-as-a-Service (IaaS), Backup as a Service (BaaS) and Disaster Recovery as a Service (DRaaS) to managed service providers (MSPs) and independent software vendors (ISVs) seeking to deliver cloud services with no upfront costs and with margins up to 50 percent or more. Headquartered in Boxborough, Mass., Egenera has thousands of production installations globally, including premier enterprise data centers, service providers and government agencies. For more information on the company, please visit egenera.com. Follow Egenera on Twitter, LinkedIn and Facebook.
Latest HC3 release optimizes hyper-convergence by intelligently moving data based on workload priority and usage patterns

INDIANAPOLIS – Scale Computing, the leader in hyper-converged technology across the mid-market, today announced the integration of flash-enabled automated storage tiering into its award-winning HC3 platform. This update to Scale's converged HC3 system adds hybrid storage, including SSD and spinning disk, with HyperCore Enhanced Automated Tiering (HEAT). Scale's HEAT technology uses a combination of built-in intelligence, data access patterns, and workload priority to automatically optimize data across disparate storage tiers within the cluster.

"Hyper-convergence is nothing if not about simplicity and cost. But it is also about performance, especially in SMB to mid-size enterprises where most, if not all, workloads will simultaneously run on a single cluster of nodes," said Arun Taneja, Founder and Consulting Analyst of the Taneja Group. "Introducing flash into a hard disk based system is easy; the question is how do you do it so that it maintains low cost and simplicity while boosting performance. This is what Scale has done in these new models. The only decision the IT admin and the business user need to make is to determine the importance of the application and its priority. After that, flash is invisible to them. The only thing visible is better application performance. This is how it should be."

Scale Computing's HC3 platform brings storage, servers, virtualization, and high availability together in a single, comprehensive system. With no virtualization software to license and no external storage to buy, HC3 solutions lower out-of-pocket costs and radically simplify the infrastructure needed to keep applications optimized and running. This update to the HC3 HyperCore storage architecture combines Scale's HEAT technology with SSD-hybrid nodes that add a new tier of flash storage to new or existing HC3 clusters.
HEAT technology combines intelligent automation with simple, granular tuning parameters to further define flash storage utilization on a per-virtual-disk basis for optimal performance. Through an easy-to-use slide bar, users can optionally tune flash priority allocation to more effectively utilize SSD storage where needed, from no flash at all for a virtual disk to virtually all flash by "turning it to 11." Every workload is different, and even a small amount of flash prioritization tuning, combined with the automated, intelligent I/O mapping, can have a big impact on the overall performance of flash storage in the HC3 cluster. Unlike other storage systems that use flash storage only for disk caching, Scale's HC3 virtualization platform adds flash capacity and performance to the total storage pool. Customers will immediately and automatically take advantage of the flash I/O benefits without any special knowledge about flash storage.

"Like any organization, we have applications that need maximum performance, applications where performance isn't a priority, and still others where higher performance would be helpful but not mission critical," said Mike O'Neil, Director of IT at Hydradyne. "But unlike some organizations, we weren't in a position to dedicate the resources needed to support these differing workloads. With Scale, we will have an architecture in place that immediately and automatically allows VMs to take advantage of flash storage without us even thinking about storage or virtualization configuration."

Scale's HyperCore architecture dramatically simplifies VM storage management without VSAs (Virtual Storage Appliances), SAN protocols, or file system overhead. VMs have direct access to virtual disks, allowing all storage operations to occur as efficiently as possible. HyperCore applies logic to stripe data across multiple physical storage devices in the cluster to aggregate capacity and performance.
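HEAT's internals are proprietary, but the idea the paragraphs above describe, ranking data by a mix of per-virtual-disk priority and observed access frequency and filling the flash tier first, can be sketched roughly as follows (all names and weights here are hypothetical, not Scale's actual algorithm):

```python
def place_blocks(blocks, ssd_capacity):
    """Greedy sketch of priority-weighted tiering: score each block by
    (virtual-disk priority x recent access count), fill the flash tier
    top-down, and leave the remainder on spinning disk."""
    ranked = sorted(blocks, key=lambda b: b["priority"] * b["accesses"], reverse=True)
    ssd, hdd, used = [], [], 0
    for b in ranked:
        if used + b["size"] <= ssd_capacity:
            ssd.append(b["id"])
            used += b["size"]
        else:
            hdd.append(b["id"])
    return ssd, hdd

# Hypothetical workload: a hot database log, a cold archive, and a warm temp area.
blocks = [
    {"id": "db-log",  "priority": 10, "accesses": 900, "size": 4},
    {"id": "archive", "priority": 1,  "accesses": 5,   "size": 4},
    {"id": "web-tmp", "priority": 5,  "accesses": 300, "size": 4},
]
ssd, hdd = place_blocks(blocks, ssd_capacity=8)
```

Setting a virtual disk's priority to zero excludes it from flash entirely, while a high priority pushes it toward "all flash", which mirrors the slide-bar behavior described above.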
The HyperCore backplane network lets any node and any VM access any disk and is performance-optimized to scale as nodes are added.

"With this release, we radically change the economics and maximize the value of flash storage for all customer segments, from the SMB to the enterprise," said Jeff Ready, CEO of Scale Computing. "Many vendors use a flash write-cache as a way to mask otherwise sluggish performance. Instead, we have built an architecture that intelligently adjusts to the changing workloads in the datacenter to maximize the performance value of flash storage in every environment."

Scale is deploying its new HEAT technology across the HC3 product line and is introducing a flash storage tier as part of its HC2150 and HC4150 appliances. Available in 4- or 8-drive units, Scale's latest offerings include one 400GB or 800GB SSD with three NL-SAS HDDs in 1-6TB capacities and memory up to 256GB, or two 400GB or 800GB SSDs with six NL-SAS HDDs in 1-2TB capacities and up to 512GB of memory, respectively. Network connectivity for either system is achieved through two 10GbE SFP+ ports per node. The new products can be used to form new clusters, or they can be added to existing HC3 clusters. Existing workloads on those clusters will automatically utilize the new storage tier when the new nodes are added.

For additional information or to purchase, interested parties can contact Scale Computing representatives at https://www.scalecomputing.com/scale-computing-pricing-and-quotes/

About Scale Computing

Scale Computing is the industry leader in complete hyper-converged solutions with thousands of deployments spanning from the SMB to the distributed enterprise. Driven by patented technologies, HC3 systems install in minutes, can be expanded without downtime, self-heal from failures, and automatically optimize workloads to maximize price-performance.
The Business Continuity Institute - May 03, 2016 10:27 BST
The Business Continuity Institute's recent Horizon Scan Report identifies that cyber attacks are still perceived as the top threat by businesses. Also within the top 10 is concern about supply chain disruption, especially as supply chains become increasingly complex and often cross international borders. Other sources of anxiety include data breaches and, for the first time this year, concerns over the availability of talent and skills. So how does business continuity help with these very real issues for businesses operating today?
The need to understand your business
Taking what is termed a 'granular approach' to your business and investing time to understand the various processes and roles within your organisation will probably provide one or two revelations. You may discover that there is duplication of processes or an incompatibility in how contact details are saved, e.g. product names versus name of supplier.

Could this be causing unnecessary delays or confusion between your own departments? Would the purchasing department have a plan in place if a key supplier suddenly fails? Do HR and departmental managers allow themselves the time to think about what actions may be required in the short, medium and long term if a key member of staff is unexpectedly going to be absent? Is this key person's knowledge accessible for whoever may have to fill their post on a temporary basis?

Being aware of these things may improve both the efficiency of your internal systems and, as a consequence, the quality of service provided to other departments. So often businesses spend time worrying about the customer experience, but many ignore the fact that 'customers', i.e. people or persons requiring a product or service, exist within their own organisation, and that getting those departmental customer interactions right can make a huge contribution to the bottom line. Gaining a better understanding of the interactions within your organisation is just one supplementary benefit of thorough business continuity planning.
Data management often comes under scrutiny during a disaster recovery (DR) programme initiative. A business that really thinks about its data will often discover the diversity and value of information that it has acquired and stored, though one aspect of this that is often overlooked or not fully appreciated is the system's ability to ‘de-duplicate’ this data. Much of the data on your organisation's live system will be copied time and time again. For example, when you cc an email to other people in the business the same data is saved multiple times across the business. With a modern DR system only one version of the email will be stored. At its most effective, this de-duplication system can deliver a staggering reduction in data storage of up to 65 percent!
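The cc'd-email example lends itself to a short sketch: content-addressed storage keeps one physical copy per unique blob and only a reference per logical copy. This is a minimal illustration of the principle, not any particular vendor's implementation:

```python
import hashlib

def dedup_store(blobs):
    """Store each unique blob once, keyed by its SHA-256 content hash.
    Every logical copy becomes just a reference to the stored digest."""
    store, refs = {}, []
    for blob in blobs:
        digest = hashlib.sha256(blob).hexdigest()
        store.setdefault(digest, blob)  # keep only the first physical copy
        refs.append(digest)
    return store, refs

# An attachment cc'd to four recipients: five logical copies, one physical.
attachment = b"quarterly-report.pdf contents" * 100
copies = [attachment] * 5 + [b"unique memo"]
store, refs = dedup_store(copies)

logical = sum(len(b) for b in copies)
physical = sum(len(b) for b in store.values())
savings = 1 - physical / logical  # fraction of storage reclaimed
```

Real de-duplication systems work at the block or segment level rather than whole files, but the mechanism of hashing content and storing each unique chunk once is the same.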
What other questions should you be asking?
When planning business continuity the first question is: 'What are the vital assets without which my business can't function?' Relocating staff is inconvenient but not impossible, and buildings are a shell housing your business that can be replaced. It is the records of contacts, contracts, transactions and communications that represent years of trading, together with the applications developed to manage and evaluate this knowledge and intelligence, that are the unique asset needing protection. Maintaining reliable and secure access to this information is key to ensuring the continuity of your business. With this in mind, take some time to assess your current situation and ask yourself: 'Am I as protected as I can be?'
Consider the following:
- Can you access your data remotely?
- Have all sources of information (data) been identified?
- Is it backed up and accessible off site?
- Are staff able to work remotely, with access to relevant files?
- How long would it take to get alternative services up and running?
- Have you considered moving processes away from a dedicated IT infrastructure to hosted capacity and applications, delivered over the Internet?
If you answered ‘yes’ to the last question there are some supplementary points you should consider checking with your provider:
- What guarantees are within the Service Level Agreement (SLA)?
- Where is my data? Check where your data is being housed: the UK, Europe, America…
Choosing a Cloud provider should be done with business continuity and due diligence in mind. Should the unthinkable happen and your day-to-day business be compromised, you will need to get to that all-important data, so the first thing you need to ask is: 'How do I get my data out?'
Future proofing your BC plan
A business continuity (BC) plan needs to be adaptable to Cloud technologies, which are constantly changing and improving. Your BC plan should not rigidly define how to operate with a Cloud vendor but should allow the relationship to evolve and respond to your business's growth and evolution, and to that of the technology. Many Clouds are provided 'as is' with no recourse; as long as you know that and accept the risk, you can plan for it. Where there is a service level agreement, it needs to be understood and reflected in your own BC planning, and may cover elements such as the speed and amount of data restored. This is where taking the time to think about your business can really improve the efficiency of your BC plan. You will need the phone numbers and email addresses of your suppliers and customers within the first few hours of any incident, in order to keep them informed about progress should your business be compromised. What you won't need with quite the same urgency, if ever, are the photos from the last staff Christmas party!
The right recovery time should be decided by the business, with careful consideration of which applications should be given priority and of the maximum tolerable outage period. Near-instant restores will cost more than an eight-hour recovery option, but not all business functions need to be restored at the same rate, and every business is different.
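One simple way to capture these decisions is a tiered recovery plan that the restore process works through in priority order. The sketch below is hypothetical: the application names and outage figures are invented examples for illustration, not recommendations.

```python
# Hypothetical recovery tiers: each entry is
# (application, priority tier, maximum tolerable outage in hours).
recovery_plan = [
    ("staff photo archive", 3, 72),
    ("customer database",   1, 1),
    ("finance system",      2, 8),
    ("email",               1, 2),
]

# Restore in priority order, tightest outage window first within a tier.
restore_order = sorted(recovery_plan, key=lambda app: (app[1], app[2]))

for name, tier, hours in restore_order:
    print(f"Tier {tier}: restore '{name}' within {hours}h")
```

Writing the plan down in this form forces the business conversation the paragraph above describes: someone has to decide, in advance, which systems justify the cost of a fast restore and which can wait.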
So to conclude, don't approach business continuity planning as another process to follow through mechanically. Embrace it as an opportunity to review, refine and reinvigorate your business and not only will you sleep at night with the knowledge that you have a backup plan, you may even find new opportunities and ideas that bring new life to you, your staff and your customers.
Russell Cook, managing director at SIRE Technology, has long been an advocate of business continuity, and not just because it makes sense to have a contingency plan in case of the unexpected. No longer is business continuity just about backing up your IT systems; if implemented and maintained in a professional manner, business continuity planning becomes a valuable business tool in its own right.