Maintaining Efficiency in a Mission-Critical World

Energy consumption has become a hot topic in the data center industry over the past few years. According to survey results from the Data Center Users’ Group – an organization of data center managers and decision-makers – power usage of data centers (average kW use per rack) jumped 23 percent from 2006 to 2009, and respondents predict per-rack averages of 10 kW by 2012. The Uptime Institute reported data center energy use doubled between 2000 and 2006 and predicts it will double again by 2012. Rising energy costs, coupled with a move toward environmental responsibility, have pushed many companies to look at energy efficiency as a way to cut data center operation costs.

More recently, however, many high-profile data center outages have proved that availability cannot be sacrificed in the pursuit of efficiency. Availability was the No. 1 concern reported by respondents in the fall 2009 Data Center Users’ Group survey, after falling behind energy efficiency and heat density in previous years. Depending on the industry, downtime can cost a business hundreds of thousands – if not millions – of dollars per hour.

Modern data centers have evolved as a result of new technologies and applications, and in the process the business world has become increasingly dependent on the IT infrastructure that supports them. With the progression of technology and unprecedented business demands, a new challenge has emerged: maintaining availability while improving efficiency in an environment where computing demand is growing and IT budgets are shrinking.

Tactics to Increase Efficiency Without Compromising Availability

In a world where businesses depend on access to technology despite natural disasters (such as storm surges) or man-made events that may interrupt continuity, there are various tactics for optimizing energy efficiency without compromising availability. Here are a few of the best practices:

High-density design

Data centers are moving toward high-density computing environments as newer, denser servers are deployed. Sixty-three percent of respondents to the fall 2009 DCUG survey indicated they plan to make their next data center new-build or expansion a high-density (>10 kW/rack) facility. This indicates a growing understanding of the savings that can be achieved through efficiency, although the magnitude of the savings available through increased density continues to be underestimated.

The average cost to build a data center shell is $200 to $400 per sq ft. By building a data center with 2,500 sq ft of raised floor space operating at 20 kW per rack versus a data center with 10,000 sq ft of raised floor space at 5 kW per rack, the capital savings could reach $1-3 million. Operational savings also are impressive, with about 30 percent of the cost of cooling the data center eliminated by the high-density cooling infrastructure.
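As a rough illustration of that capital figure, the sketch below multiplies the avoided raised-floor area by the quoted shell cost range. The floor areas and per-square-foot costs come from the paragraph above; the script itself is only a back-of-the-envelope aid, not a construction estimate.

```python
# Back-of-the-envelope check of the shell-cost savings cited above.
shell_cost_per_sqft = (200, 400)   # USD per sq ft, range quoted in the article

low_density_sqft = 10_000          # 5 kW per rack design
high_density_sqft = 2_500          # 20 kW per rack design, same total IT load

sqft_avoided = low_density_sqft - high_density_sqft

low_estimate = sqft_avoided * shell_cost_per_sqft[0]
high_estimate = sqft_avoided * shell_cost_per_sqft[1]

print(f"Raised-floor area avoided: {sqft_avoided:,} sq ft")
print(f"Estimated shell capital savings: "
      f"${low_estimate / 1e6:.1f}M to ${high_estimate / 1e6:.1f}M")
# -> roughly $1.5M to $3.0M, in line with the $1-3 million range cited above
```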

It’s important to note that moving to a high-density computing environment does require a different approach to infrastructure design, including:

  • High-density cooling: This approach brings cooling closer to the source of heat through high-efficiency cooling units located near the rack to complement the base room air conditioning. These systems can reduce cooling power consumption by as much as 30 percent compared to traditional room-only designs.
  • Intelligent aisle containment: Aisle containment prevents the mixing of hot and cold air to improve cooling efficiency. While hot-aisle and cold-aisle containment systems are available, cold-aisle containment presents some clear advantages. By integrating the cold-aisle containment with the cooling system and leveraging intelligent controls to closely monitor the contained environment, systems can automatically adjust the temperature and airflow to match server requirements, resulting in optimal performance and energy efficiency.
  • High-density power distribution: Power distribution has evolved from single-stage to two-stage designs to enable increased density, reduced cabling, and more effective use of data center space. Single-stage distribution often is unable to support the number of devices in today’s data center as breaker space is expended long before system capacity is reached. Two-stage distribution eliminates this limitation by separating deliverable capacity and physical distribution capability into subsystems. The first stage receives high-voltage power from the UPS and can be configured with a mix of circuit and branch-level distribution breakers. The second stage or load-level units can be tailored to the requirements of specific racks or rows. Growing density can be supported by adding breakers to the primary distribution unit and adding additional load-level distribution units.
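To make the two-stage idea concrete, here is a minimal sketch of how a primary distribution unit feeding separate load-level units might be modeled. The capacities, breaker counts, and class names are hypothetical placeholders, not a description of any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class LoadLevelUnit:
    """Second stage: feeds specific racks or rows through many branch circuits."""
    name: str
    branch_circuits: int = 42                  # hypothetical breaker count
    rack_loads_kw: list = field(default_factory=list)

    def add_rack(self, kw: float) -> None:
        if len(self.rack_loads_kw) >= self.branch_circuits:
            raise RuntimeError(f"{self.name}: out of branch circuits")
        self.rack_loads_kw.append(kw)

@dataclass
class PrimaryDistributionUnit:
    """First stage: receives high-voltage power from the UPS."""
    capacity_kw: float = 800.0                 # hypothetical deliverable capacity
    breaker_positions: int = 12                # each position can feed a load-level unit
    units: list = field(default_factory=list)

    def add_unit(self, unit: LoadLevelUnit) -> None:
        if len(self.units) >= self.breaker_positions:
            raise RuntimeError("primary unit: out of breaker positions")
        self.units.append(unit)

    def total_load_kw(self) -> float:
        return sum(sum(u.rack_loads_kw) for u in self.units)

# Growth is absorbed by adding load-level units and branch breakers,
# not by consuming a primary breaker position per rack.
primary = PrimaryDistributionUnit()
row_a = LoadLevelUnit("row A")
primary.add_unit(row_a)
for _ in range(10):
    row_a.add_rack(5.0)                        # ten 5 kW racks on one primary position

print(f"Connected load: {primary.total_load_kw():.0f} kW of {primary.capacity_kw:.0f} kW")
```
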
Ensuring availability

Availability was a major concern for 56 percent of respondents to the fall 2009 DCUG survey versus just 41 percent in the spring 2009 edition. Given that a large percentage of outages are triggered by either electrical or thermal issues, the challenge is to optimize the efficiency gains related to power and cooling approaches while accounting for IT criticality and the need for availability. Some of the choices to be made and the potential trade-offs between efficiency and availability include:
  • Uninterruptible power supply: Data center managers should consider the power topology and the availability requirements when selecting a UPS. In terms of topology, online double conversion systems provide better protection than other types of UPS because they completely isolate sensitive electronics from the incoming power source, remove a wider range of disturbances and provide a seamless transition to backup power sources.
  • Energy optimization: Energy optimization features can help minimize the amount of energy being lost by allowing data center managers to tailor the performance of the UPS system to the specific efficiency and availability requirements of the site. Energy optimization modes enable the UPS to switch to static bypass during normal operation; when power problems are detected, the UPS automatically switches back to double conversion mode. This allows double conversion UPS systems to achieve 97 percent full-load operating efficiency, although it can also expose the load to certain faults and conditions (a minimal sketch of this mode switching follows this list).
  • Economization: Economizers, which use outside air to reduce work required by the cooling system, can be an effective approach to lowering energy consumption if they are properly applied. Two base methods exist: air side and water side. Water-side economization allows organizations to achieve the benefits of economization without the risks of contaminants presented by air-side approaches. All approaches have pros and cons. Data center professionals should discuss the appropriate applications with local experts.
  • Service: A proactive view of service and preventive maintenance in the data center can deliver additional efficiencies. Making business decisions with the goal of minimizing service-related issues may result in additional expense up front, but it can reduce life cycle costs. Meanwhile, establishing and following a comprehensive service and preventive maintenance program can extend the life cycle of IT equipment and delay major capital investments.
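Below is a minimal sketch of the energy-optimization behavior described in the UPS bullets above: the system stays on static bypass while incoming power is within tolerance and falls back to double conversion when a disturbance appears. The 480 V nominal value and the tolerance window are hypothetical placeholders, not vendor specifications.

```python
# Sketch of an energy-optimization (eco) mode decision for a double conversion UPS.
NOMINAL_V = 480.0          # hypothetical nominal input voltage
TOLERANCE = 0.10           # +/-10% window treated as "clean" utility power

def select_mode(measured_v: float) -> str:
    """Pick the operating mode for one sample of incoming utility voltage."""
    deviation = abs(measured_v - NOMINAL_V) / NOMINAL_V
    if deviation <= TOLERANCE:
        return "static bypass"        # high-efficiency path (~97%+ at full load)
    return "double conversion"        # full isolation from the disturbance

for sample_v in (480.0, 478.5, 409.0, 482.0):   # simulated utility readings
    print(f"{sample_v:6.1f} V -> {select_mode(sample_v)}")
```
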
Providing flexible support

IT demand can fluctuate depending on everything from weather disasters to strategic organizational changes and new applications. Responding to those swings without compromising efficiency requires infrastructure technologies capable of dynamically adapting to short-term changes while providing the scalability to support long-term changes. Previous generations of infrastructure systems were unable to adjust to variations in load. Cooling systems had to operate at full capacity all the time, regardless of actual load demands. UPS systems, meanwhile, operated most efficiently at full load, but full-load operation is the exception rather than the norm. The lack of flexibility in the power and cooling systems led to inherent energy inefficiency.

There are now technologies available that enable the infrastructure to adapt to those changes. Where previous generation data centers were unable to achieve optimum efficiency at anything less than full load, today’s facilities can take full advantage of these innovative technologies to match the data center’s power and cooling needs more precisely, regardless of the load demands and operating conditions.
  • Cooling systems: Newer data center cooling technologies can adapt to change and deliver high efficiency at reduced loads. Specifically, digital scroll compressors allow the capacity of room air conditioners to be dynamically matched to room conditions, minimizing compressor cycling, which reduces wear and creates energy savings of up to 30 percent over traditional technologies. Variable speed drive fans allow fan speed and power draw to be increased or reduced to match the load, resulting in fan energy savings of 50 percent or more (a worked example of the fan savings follows this list).
  • Power systems: New designs in power systems allow improved component performance at 40 to 60 percent load compared to full load. Power curves that once showed efficiency increasing with load now have been effectively flattened as peak efficiencies can be achieved at important 40 to 50 percent load thresholds. Scalable UPS solutions also allow data center managers to add capacity when needed.
  • Distribution systems: Modular in-rack PDUs allow rack power distribution systems to adapt to changing technology requirements through the addition of snap-in modules. They also provide monitoring at the receptacle level to give data center and IT managers the ability to proactively manage changes.
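The fan savings mentioned in the cooling bullet above follow from the fan affinity laws: airflow scales roughly with fan speed, while fan power scales roughly with the cube of speed. The sketch below works through the arithmetic; the 10 kW rated fan power is a hypothetical figure used only to make the numbers concrete.

```python
# Worked example of variable speed fan savings using the cube-law approximation.
RATED_FAN_POWER_KW = 10.0          # hypothetical fan power at 100% speed

def fan_power_kw(speed_fraction: float) -> float:
    """Approximate fan power at a given fraction of full speed (affinity law)."""
    return RATED_FAN_POWER_KW * speed_fraction ** 3

for speed in (1.0, 0.9, 0.8, 0.7):
    power = fan_power_kw(speed)
    savings = 1 - power / RATED_FAN_POWER_KW
    print(f"{speed:.0%} speed -> {power:4.1f} kW ({savings:.0%} fan energy saved)")
# Running fans at roughly 80% speed already cuts fan energy by about half,
# consistent with the 50 percent figure cited above.
```
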
Visibility and control enable optimization

Monitoring and controlling infrastructure performance is vital to making system improvements. Management systems that provide a holistic view of the entire data center are key to ensuring availability, improving efficiency, planning for the future, and managing change. Today’s data center supports more critical, interdependent devices and IT systems in higher-density environments than ever before. This has increased the complexity of data center management and created the need for more sophisticated and automated approaches to IT infrastructure management.

Gaining control of the infrastructure environment leads to an optimized data center that improves availability and energy efficiency, extends equipment life, proactively manages the inventory and capacity of the IT operation, increases the effectiveness of staff, and decreases the consumption of resources. The key to achieving these performance optimization benefits is a comprehensive infrastructure management solution.
  • Data center assessment: The first phase should involve a data center assessment to provide insight into current conditions in the data center and opportunities for improvement. After establishing that baseline, a sensor network is strategically deployed to collect power, temperature, and equipment status for critical devices in the rack, row, and room. Data from the sensor network is continuously collected by centralized monitoring systems, not only to provide a window into equipment and facility performance but also to point out trends and prevent problems wherever they may be located (a minimal monitoring sketch follows this list).
  • Optimization: A comprehensive infrastructure management system can reduce operating and capital expenses by helping data center managers improve equipment utilization, reduce server deployment times, and more accurately forecast future equipment requirements. Managers not only improve inventory and capacity management, but also process management, ensuring all assets are performing at optimum levels. Effective optimization can provide a common window into the data center, improving forecasts, managing supply and demand, and raising levels of efficiency and availability.
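As referenced in the assessment bullet above, here is a minimal sketch of how readings from a rack-level sensor network might be checked centrally against thresholds. The location labels, temperature limit, and per-rack power limit are all hypothetical values used only for illustration.

```python
# Sketch of centralized threshold checks on rack sensor readings.
from dataclasses import dataclass

@dataclass
class Reading:
    location: str              # e.g. "row 2 / rack 11"
    inlet_temp_c: float        # inlet air temperature
    rack_load_kw: float        # measured rack power draw

TEMP_LIMIT_C = 27.0            # hypothetical inlet temperature alarm threshold
LOAD_LIMIT_KW = 10.0           # hypothetical per-rack power alarm threshold

def check(readings):
    """Return alarm messages for any reading outside its threshold."""
    alarms = []
    for r in readings:
        if r.inlet_temp_c > TEMP_LIMIT_C:
            alarms.append(f"{r.location}: inlet {r.inlet_temp_c:.1f} C exceeds {TEMP_LIMIT_C} C")
        if r.rack_load_kw > LOAD_LIMIT_KW:
            alarms.append(f"{r.location}: load {r.rack_load_kw:.1f} kW exceeds {LOAD_LIMIT_KW} kW")
    return alarms

samples = [
    Reading("row 1 / rack 04", 24.5, 6.2),
    Reading("row 2 / rack 11", 28.3, 9.8),   # hot inlet -> should raise an alarm
]
for alarm in check(samples):
    print(alarm)
```
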
Conclusion

Although energy consumption has been a key concern for data center managers, the trend of using efficiency as a cost-cutting tactic has led to the reemergence of availability as a priority in the business world. Even as businesses depend on continuity through natural or man-made interruptions, there are tactics – such as high-density power and cooling, flexible/scalable infrastructure, and data center assessment and optimization – that help improve efficiency while maintaining the 24x7 availability that businesses require.

Ron Bednar leads the strategic marketing and marketing services teams for the Liebert division of Emerson Network Power. Additionally, he is the chairman of the Green Grid’s Data Collection and Analysis (DC&A) working group, and also manages programs and industry research for the Data Center Users’ Group (DCUG).