As mission-critical business applications continue to generate and collect business data, recovering these systems becomes more of a challenge. The time it takes to restore an application to the point at which a disaster occurred grows with the size of the database.
Keeping the mean time to recovery (MTTR) constant as the database grows requires either an investment in a disaster recovery (DR) solution that will maintain performance or a strategy for managing growth.
Traditional methods for disaster recovery, such as backup and snapshots, may become inadequate if the time it takes to recover exceeds the allowed MTTR window. As a result, synchronous or asynchronous replication technologies may need to be deployed, which can add significantly to the application's total cost of ownership. One approach to improving MTTR while maintaining existing DR strategies for a mission-critical database application is to keep the production database as small as possible through database tiering or archiving.
Database Archiving Improves MTTR
Database archiving solutions classify and tier archive-eligible data that no longer needs to reside in the production database. Classification is determined by application and business logic. The data is migrated to an online active archive that keeps it available to end users while reducing the size of the production database. The online active archive can reside on lower-tier infrastructure with a less stringent service level agreement (SLA) for MTTR.
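The classification step can be sketched as a simple eligibility rule. In this illustrative Python sketch, the field names, the "closed period" status, and the two-year cutoff are all assumptions standing in for whatever application and business logic a real archiving solution would apply:

```python
from datetime import date

# Illustrative archive-eligibility rule: a record may leave production
# once its booking period is closed and it is older than a cutoff.
# The field names and the two-year cutoff are assumptions for this sketch.
ARCHIVE_AGE_YEARS = 2

def is_archive_eligible(record: dict, today: date) -> bool:
    if record["period_status"] != "closed":
        return False  # open periods must stay in production
    age_years = (today - record["posted_on"]).days / 365.25
    return age_years >= ARCHIVE_AGE_YEARS

records = [
    {"id": 1, "period_status": "closed", "posted_on": date(2004, 3, 31)},
    {"id": 2, "period_status": "open",   "posted_on": date(2004, 3, 31)},
    {"id": 3, "period_status": "closed", "posted_on": date(2007, 1, 31)},
]
eligible = [r["id"] for r in records if is_archive_eligible(r, date(2007, 9, 1))]
print(eligible)  # record 1 only: closed and older than two years
```

Records that pass the rule would be migrated to the online active archive; everything else stays in production.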
Restoring from Backup is Faster with Archiving
Smaller databases back up faster
Database archiving significantly reduces production backup volumes because archived data is moved out of the production database into an online active archive. Incremental backup volumes remain roughly the same as before archiving, but full backup volumes shrink because the archived data is no longer in production. The archive itself does not need incremental backups, since its data does not change.
Most companies run a monthly or quarterly archive process, so the archive needs to be backed up only before and after each archive cycle.
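The effect on full backups, and on the restore window discussed next, can be shown with back-of-the-envelope arithmetic. All of the figures in this sketch are illustrative assumptions, not benchmarks:

```python
# Back-of-the-envelope sketch of how archiving shrinks the full backup
# (and therefore the restore window). All figures are assumptions.
production_gb = 2000          # production database before archiving
archive_eligible_gb = 1200    # static, closed-period data moved to the archive
restore_rate_gb_per_hr = 250  # effective restore throughput

full_backup_after_gb = production_gb - archive_eligible_gb
window_before_hr = production_gb / restore_rate_gb_per_hr
window_after_hr = full_backup_after_gb / restore_rate_gb_per_hr

print(f"full backup: {production_gb} GB -> {full_backup_after_gb} GB")
print(f"restore window: {window_before_hr:.1f} h -> {window_after_hr:.1f} h")
```

Under these assumed numbers the full backup drops from 2,000 GB to 800 GB and the restore window from 8 hours to about 3, which is the kind of MTTR improvement the article describes.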
Archiving Improves DR windows
In a disaster recovery situation, the data that must be recovered first is typically the mission-critical business application data. Because the full backup volume of production is smaller, the restore window is reduced, often significantly. Archived data can be recovered later, after the business systems are back up and running.
Recovery with Continuous Data Protection Technologies
Improve recovery with minimal impact to production
Continuous data protection (CDP) technologies offer an alternative approach to DR over traditional backup and snapshot approaches. The benefits of some CDP technologies include the ability to:
• Replicate production without having to quiesce the production application
• Recover the production system to any point in time
• Replicate to a lower-tier DR target infrastructure
• Leverage the CDP technology for other uses, such as creating copies for test and development purposes
The IT organization evaluating its data protection deployment strategy alongside archiving should consider the various solution alternatives. Block-based CDP, for instance, works at the storage-device level rather than at the logical level of the application stack. File- and application-based CDP work at the logical level and are architecturally closer to the application semantics. Different CDP solutions may use multiple interfaces to applications to provide varying levels of application awareness and consistency. In some cases, the application or database may need to be momentarily quiesced to capture and mark an application-consistent view; in others, application-consistent views are captured transparently, without involving the application.
Recovery for database archives
It may not be necessary to apply the same level of recovery protection to the archive database as to the production database. For example, some companies use synchronous data replication to keep production and the DR copy synchronized. Since the archive database changes infrequently, deploying the same replication technology for the archive may be unnecessary; in many cases, asynchronous replication or restoring from backup tapes is a less costly alternative.
When selecting a different DR approach for the archive, it is important to be aware of backup windows and the risk of data loss. If a disaster occurs after an archive cycle but before the archive database at the DR site has been synchronized, data may be lost. This risk can be mitigated by including the database logs in the synchronous replication stream.
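The exposure window can be made concrete with a small timeline sketch. The timestamps and the 36-hour asynchronous refresh lag below are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Sketch of the archive DR exposure window: data moved to the archive at
# t_archive is at risk until the DR-site archive copy syncs at t_sync.
# The timestamps and the 36-hour lag are illustrative assumptions.
t_archive = datetime(2007, 6, 30, 22, 0)   # monthly archive run completes
t_sync = t_archive + timedelta(hours=36)   # asynchronous DR refresh lands

exposure = t_sync - t_archive
print(f"archive data at risk for {exposure.total_seconds() / 3600:.0f} hours")
# Shipping the database logs in the synchronous replication stream
# covers this gap, as the text notes.
```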
Application-Based Data Classification and Archiving
Understanding the value of your data
Data classification, as defined by the Storage Networking Industry Association (SNIA) in the SNIA Dictionary of Storage Networking Terminology, is "an organization of data into groups for management purposes. A purpose of a classification scheme is to associate service level objectives with groups of data based on their value to the business."
As advocated by the SNIA Data Management Forum (DMF), this requires:
• Understanding which business process the data belongs to;
• Defining the policy that determines when the business value of the data changes, potentially impacting the SLA; and
• Enforcing the policy based on the type of data, such as e-mail, files, or database data.
For example, financial data, as part of the financial general accounting business process, is managed by an integrated database and financial accounting application. Consider the following implications:
• How often the data changes: Generally, financial data is subject to monthly and quarterly booking periods, after which the books are closed and financial statements are generated.
• Data security and access: In open booking periods, every transaction needs to be protected and the data readily available.
• What happens as data changes from dynamic to static: Once periods are closed the data is considered frozen and read-only. Data from closed periods is no longer changed but retains importance from a reporting and compliance perspective.
• Determining its long-term business value: Typically, financial data from the current and past year is most valuable for reporting and decision-making comparison.
• The regulatory impact: The data generally needs to be retained and available for exception reporting, audit, and compliance for seven years or more.
• When to purge: Once the retention period passes, the data no longer needs to be protected and available and may be purged.
• Identifying all impacted data sources: General ledger data typically consists of files and database data that are driven by a financial application that may be federated across several systems.
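The lifecycle described in these bullets can be sketched as a simple policy function. The seven-year retention comes from the text; the two-year "active value" threshold and the state names are illustrative assumptions, not part of any standard:

```python
# Sketch of the financial-data lifecycle from the bullets above, mapping a
# record's booking-period status and age to a management action.
# RETENTION_YEARS follows the text; the other threshold is an assumption.
RETENTION_YEARS = 7
ACTIVE_VALUE_YEARS = 2  # current and past year remain most valuable

def lifecycle_action(period_closed: bool, age_years: float) -> str:
    if not period_closed:
        return "production"  # open period: protect every transaction
    if age_years < ACTIVE_VALUE_YEARS:
        return "production"  # closed but still key for reporting
    if age_years < RETENTION_YEARS:
        return "archive"     # static, read-only; retain for compliance
    return "purge"           # retention period expired

print(lifecycle_action(True, 4.0))  # archive
print(lifecycle_action(True, 8.0))  # purge
```

A real deployment would drive such a policy from the application's own booking-period metadata rather than a bare age in years.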
Data Classification and Tiered Protection Strategies
Simplifying recovery of critical data
Once data is properly classified and archived or tiered, it becomes easier to establish and enforce appropriate policies for its protection, recovery, and retention. Classifying and protecting data in logical content groups and moving it to a separate tablespace or database simplifies retention management and recovery. By maintaining the database and related log files for a specific application in a single, manageable content group, protection and retention can be managed throughout the data's lifecycle, simplifying recovery. Data can be migrated by the application itself or by database archiving solutions.
In the financial data classification example, capturing every transaction change as it occurs and maintaining high availability of the data is most likely a requirement of your DR strategy. With CDP, every invoice, payment, and other transaction is captured as it occurs. In the event of a problem, including human errors such as accidental deletion, the lost or corrupted data can be recovered in the production instance to the exact point in time before the mishap occurred.
After the booking period is closed and the financial statements are prepared, the data no longer changes. Once this data is archived or tiered out of the production database, that level of data protection may no longer be required.
Conclusion
Classifying, archiving and securing data is critical to improving disaster recovery
Business continuity ranks among the top IT initiatives for 2007, according to industry analysts, and continuous data protection, as a function of business continuity, is a growing market segment. IT departments tasked with implementing a data protection strategy or supporting business continuity should consider data classification policy and archiving as ways to lower costs and improve recovery times. They can then make better choices about where and how to apply the various recovery technologies to meet their short- and long-term requirements for data protection and availability.
Classifying data by application for logical grouping and lifecycle management makes it easier to manage data protection and retention and to ensure effective, efficient recovery, while keeping storage volumes manageable in both the short and long term.
Julie Lockner is treasurer of the SNIA DMF Board of Directors and vice president sales operations, Solix Technologies. Lockner has more than 10 years of experience architecting, marketing, and managing database applications in the ERP, CRM, and marketing analytics space. She has held various engineering, sales and marketing positions in companies such as EMC, Oracle, Verbind, and Raytheon. As vice president of sales operations at Solix Technologies, she is responsible for defining and implementing sales and product strategies for Solix’s data management suite. Lockner holds a BSEE, graduating with honors from Worcester Polytechnic Institute.
"Appeared in DRJ's Fall 2007 Issue"




