But taking the business continuity lessons and procedures from the data center and trying to apply them to geographically dispersed mobile laptops and desktop PCs is a daunting task. Once the problem moves outside the proverbial “glass house,” the challenge rises exponentially as these computers and their data constantly move and change, both physically and logically.
Across the enterprise, workers use their desktops and laptops to store and process data, often creating valuable intellectual property. This data may include individual sales records, a manager’s sales forecast, strategic marketing material, financial budgets, and gigabytes of other data that company operations depend on. The PC is also a fundamental communication tool, serving as the conduit for e-mail, messaging, the Internet and the intranet. While the PC has proven to be a valuable productivity aid given these broad communications capabilities, interruption, loss or unauthorized access to these services can have devastating effects. Any business continuity plan that saves the server while neglecting the personal computer is clearly incomplete.
Yet disaster can be avoided by adopting a pragmatic strategy that integrates the management, protection, and support of the enterprise PC fleet. The key to an affordable PC strategy is centralized management that does not require IT staff to physically visit each of the personal computers for service and support. Important elements of a BC strategy for the PC include:
- Asset management – What hardware and software assets are actually being used, who is using them, and where are they being used?
- Patch management – Which systems are protected (or not) against the latest viruses and security vulnerabilities?
- Backup – Is all data on all personal computers being securely backed up regularly?
- Help desk – How quickly are employees able to get a solution to technical and administrative PC issues?
To ensure business continuity, it is important to manage and properly track PC assets, both hardware and software. Although physical security cannot be as tight as in the data center, proper distribution and control of PC assets keeps costs low and increases security. According to analysts, effective asset management can yield cost savings of up to 30 percent per asset within the first year alone. Software consistency and proper license tracking keep systems manageable and in legal compliance. The penalty fees associated with software license violations are severe and well documented.
Solid procurement and operational procedures to tag and inventory PCs, including location, user and warranty service details, are a good starting point for PC asset management. However, physical inventory data quickly becomes stale as users add memory, additional storage devices, and a plethora of other hardware that may or may not be compatible with company infrastructure or policy. But the hardware risks can be minor compared to the addition of software to these systems, which again may or may not be authorized. Many of these issues can be minimized by deploying to all PCs an asset management system with remote asset tracking capabilities, so that the hardware and software details of every PC are refreshed regularly and both manual and automated reports can be generated.
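To make the idea concrete, the snapshot such an agent might collect can be sketched as follows. This is a minimal illustration, not any vendor's actual agent: the report structure, the `installed_software` probe, and the notion of uploading the result to a central asset database are all assumptions.

```python
# Minimal sketch of an asset-inventory snapshot for one PC.
# The report fields and the central-database upload are illustrative
# assumptions; a real agent gathers far richer hardware/software detail.
import json
import platform
import socket
from datetime import datetime, timezone

def collect_inventory(installed_software):
    """Build a snapshot of this PC's hardware/software state.

    installed_software: list of (name, version) pairs, as returned by a
    platform-specific probe (hypothetical here).
    """
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_version": platform.version(),
        "machine": platform.machine(),
        "software": [{"name": n, "version": v} for n, v in installed_software],
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

def to_report(inventory):
    # Serialize for upload to a (hypothetical) central asset database,
    # where manual and automated reports would be generated.
    return json.dumps(inventory, indent=2)

if __name__ == "__main__":
    snapshot = collect_inventory([("Office Suite", "11.0"), ("VPN Client", "4.2")])
    print(to_report(snapshot))
```

Refreshing this snapshot on a schedule, rather than relying on a one-time physical inventory, is what keeps the central asset records from going stale.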
Increasingly, protecting PC assets and maintaining continuity requires systematic “patching” – the process of continuously updating PCs for protection against security vulnerabilities in operating systems and applications. If the process depends on users to do this, or even on the automatic update features available in some operating systems, it is a recipe for disaster. Most users lack the time or skills to do this properly and consistently, and often don’t understand the consequences. Also, in today’s environment, patches appear so frequently (10 per day on average) that virtually no employee can sort through the technical minutiae to determine which patches are relevant. Attempting to manage patches manually is costly and time consuming, even in small, centralized locations. For the distributed enterprise and mobile users, it is all but impossible. Administrators must have an electronic software distribution application deployed to effectively deliver patches across a distributed organization.
As with asset management, there are many software tools to choose from. Microsoft’s Software Update Services (SUS) is the most widely used, but because SUS lacks reporting and policy management capabilities, most IT administrators must supplement it with additional tools.
When it comes to patch management, it is critical to have an accurate, enterprise view of patch status at the individual PC level. Improperly patched systems can make the entire enterprise vulnerable to intrusion or virus attack. Patch updates can, and do, have unexpected effects on network connectivity and applications. Industry analysts estimate that unless properly tested, up to 20 percent of all patches fail to properly install, often rendering the PC inoperable. For this reason, it is incumbent on the IT staff to have a QA process in place to test the patch before wholesale deployment across the enterprise.
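The enterprise-wide patch-status view described above can be sketched as a simple compliance check: compare the patches each PC reports as installed against the list IT has approved after QA testing. The data shapes here are assumptions; a real tool would populate them from agent check-ins.

```python
# Minimal sketch of an enterprise patch-status view. "required_patches"
# is the QA-approved patch list; "fleet" maps each PC to the patch IDs it
# reports as installed. Both shapes are illustrative assumptions.
def compliance_report(required_patches, fleet):
    """Return, per PC, whether it is compliant and which patches it lacks."""
    required = set(required_patches)
    report = {}
    for pc, installed in fleet.items():
        missing = sorted(required - set(installed))
        report[pc] = {"compliant": not missing, "missing": missing}
    return report

if __name__ == "__main__":
    fleet = {
        "sales-laptop-01": {"KB101", "KB102"},
        "hq-desktop-07": {"KB101"},  # missing KB102 -> flagged for follow-up
    }
    for pc, status in compliance_report(["KB101", "KB102"], fleet).items():
        print(pc, status)
```

A report like this gives administrators the per-PC visibility the paragraph calls for: any machine flagged as non-compliant is a potential point of entry for an intrusion or virus attack.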
Protecting PC data from loss and corruption is critical, even if its value is harder to quantify than that of transaction data. Industry estimates warn that up to 60 percent of corporate data is unprotected on PCs and laptops. Valuable strategic, intellectual and confidential information belonging to everyone from the CEO and CTO on down is vulnerable to corruption, attack, disk failure, loss, and theft. Laptops are especially vulnerable to loss through theft and, despite continued technology advances, more than 15 percent of laptops are replaced annually due to hardware failures. These undesirable events can seriously impact business continuity. Standard enterprise business continuity operations – tape backup and offsite replication – are impractical for the users at the edges of the network. Indeed, few IT administrators would be surprised by the statistic that less than 8 percent of end-users comply with corporate backup policies.
Consider using automated, online backup to protect PC data rather than depending on end users to back up their own data or to follow procedures for inclusion in scheduled server backups. The mobile workforce is a reality, so backup needs to be robust and flexible, as well as fast. If it is slow or obtrusive, busy mobile workers may try to avoid the backup altogether. Given that IT professionals consistently rate human error as a leading cause of outage, backup protection is essential. Ideally, perform a one-time full backup of all PC data to a secure corporate repository, then capture changes to data on a regular basis by setting policies that take Internet connectivity speed into account.
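The incremental, connectivity-aware approach above can be sketched as a selection policy: after the one-time full backup, send only files changed since the last run, and on a slow link defer large files until a faster connection is available. The size threshold and link-speed cutoff here are illustrative assumptions, not any product's actual policy.

```python
# Minimal sketch of policy-driven incremental backup selection.
# The 256 kbps "slow link" cutoff and 50 MB deferral threshold are
# illustrative assumptions a real policy engine would make configurable.
import os

def files_to_back_up(paths, last_backup_time, link_kbps, defer_over_bytes=50_000_000):
    """Split candidate files into (send_now, deferred) lists.

    A file is sent only if modified since last_backup_time; on a slow
    link, files larger than defer_over_bytes wait for a better connection.
    """
    send_now, deferred = [], []
    slow_link = link_kbps < 256  # assumed threshold for a "slow" connection
    for path in paths:
        try:
            st = os.stat(path)
        except OSError:
            continue  # file vanished between scan and stat; skip it
        if st.st_mtime <= last_backup_time:
            continue  # unchanged since the last backup; nothing to send
        if slow_link and st.st_size > defer_over_bytes:
            deferred.append(path)  # too big for this link; try again later
        else:
            send_now.append(path)
    return send_now, deferred
```

Keeping the changed-file scan cheap and unobtrusive is the point: a backup that mobile workers never notice is one they never try to avoid.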
Most users do not have the requisite skills to solve the problems that cause their systems to work inefficiently, or go down altogether. Even those users who can tinker happily for hours to rebuild their systems are not necessarily doing the company any favors, as they lose productivity from their primary job function. And, as any IT professional will confide, it is typically the “tinkerer” who can unintentionally do the most harm while trying to “fix” the problem. Providing users with remote diagnostics and a readily available, highly skilled help desk minimizes downtime and increases productivity: users experiencing an interruption can get their problems solved quickly over the Internet or the telephone.
Gary Griffiths is president and CEO of Everdream. Everdream offers a comprehensive and integrated suite of hosted desktop services that protect, manage, and support the enterprise IT infrastructure, allowing for the first time the option of purchasing desktop support software as a service. He may be reached at (510) 818-5500. For more information about Everdream, visit http://www.everdream.com.