The data center is a critical element in an organization’s disaster recovery plan, but technology trends are imperiling the stability of in-house data centers. Most companies are acutely aware of the threats to their core IT infrastructure from natural disasters, terrorism, fire, sabotage and other phenomena. However, most do not recognize the equally ominous risk to in-house data centers posed by recent evolutions in technology.
Applause is universal for the technological leaps in server technology that have delivered more bang for the buck. Undeniably, today’s blade servers are faster, more powerful, and capable of handling more data than ever before. But there is a downside – today’s servers have a voracious appetite for power and cooling. In fact, a recent study of power usage by servers in the data center, including the electricity used to cool servers and related equipment, found that electricity consumption doubled from 2000 to 2005, both domestically and worldwide. Conducted by Jonathan G. Koomey, Ph.D., a staff scientist at Lawrence Berkeley National Laboratory and professor at Stanford University, and commissioned by AMD, the study pegged the total power (including cooling) used by servers at a staggering 1.2 percent of the nation’s total electricity usage, "an amount comparable to that for color televisions."
Understandably, power consumption is putting a serious burden on in-house data centers and heightening the risk of systems failure. Because it is far more likely to occur, the risk to an organization’s key IT infrastructure from the demands of powerful servers exceeds that posed by higher-profile events like natural disasters and terrorism. The threat to core business systems from technology itself develops innocently and organically. To accommodate the need for more capacity, a typical company undergoing technology expansion simply plugs in another server, paying little heed to the commensurate requirements for power and cooling. This continues unabated until either the building’s facilities manager demands that the IT staff stop adding servers, or the firm’s electricians can no longer drop additional power circuits into the server room. Another way this haphazard, yet common, expansion ends is when the company’s equipment physically trips breakers, and the IT staff finally realizes it is overburdening its power supply. These are the more benign scenarios. Unfortunately, only when overburdened systems fail do some companies realize how critical it is to properly design a data center to handle high power consumption.
Many in-house data centers are quickly expanding beyond their capacities to sufficiently power and cool the IT systems they house. More power directly translates into a need for more cooling. HVAC systems require power and space, and soon data centers run into physical limitations. Industry research firm Gartner predicts that by 2008 half of all data centers will lack the power and cooling to adequately protect IT equipment. This means that without some intervention, the business operations of scores of organizations are at risk.
Power and Cooling Dominate
The good news is that, on a per-server basis, the power and cooling required to keep servers safe has likely peaked. The bad news is that customer demand continues to outpace the kind of IT rooms most organizations can build in a private office, and costs are rising. In fact, Gartner predicts that by 2009 energy costs will be the second-highest operating cost in 70 percent of data center facilities worldwide. Power is a huge cost burden, and many small- to medium-sized businesses are finding it exceptionally difficult to keep pace with the mounting power and cooling levels necessary to keep their equipment running at optimal levels.
In addition to servers, other evolving technologies threaten the stability of in-house data centers. While server virtualization and the focus on producing more efficient machines appear to be headed in the right direction, enterprises are not yet out of the woods. The continuous increase in general technology dependence and the critical nature of today’s applications continue to push support needs beyond the infrastructure design of most enterprises.
Moreover, it is often difficult for an enterprise to secure the resources to build an infrastructure room that will remain adequate three to five years into the future. Well-designed power distribution, cooling flow, cooling redundancy and room for the expansion of critical components are luxuries not afforded to most IT managers.
The more typical approach is to evolve the current data room to temporarily address any existing and near future needs. This approach almost always leads to inefficiencies with regard to power distribution and managing heat loads.
For example, power loads and cooling aisles are often compromised to make room for the expansion of critical business applications. This mismanagement of space can severely reduce cooling efficiencies by taxing chillers. And, compromising the data center’s design in this fashion puts the uptime of the entire facility in jeopardy.
Industry and Federal Response
Server manufacturers are feeling pressure from users and the government to reduce the power their servers require. Manufacturers are taking action to design servers that do more but require much less power. While a federal response is forming, and initiatives are afoot to improve the efficiency of servers, data centers are also moving towards "green power" and "green building" to help resolve the problem.
The effort to utilize available green power from the commercial utility grid is environmentally friendly and the right thing to do. However, it rarely results in lower short-term costs to the user. On the other hand, utilizing green building designs can reduce the energy required to power infrastructure components, which usually accounts for 50 percent of the overall power consumed by a data center.
For example, by insulating an in-house data center with "green roofs" and cooling equipment with filtered, outside air (when temperatures allow), a center can substantially reduce the amount of power required to run chilled air systems.
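The arithmetic behind that 50 percent figure is worth making concrete. If infrastructure components (chillers, air handlers, power distribution) consume roughly half of a facility’s total draw, then total facility power is about twice the IT load, and every point of overhead trimmed by green building measures reduces the whole bill. The sketch below illustrates this relationship with hypothetical numbers; the function name and the sample loads are illustrative assumptions, not figures from the article.

```python
# Illustrative sketch (hypothetical numbers). If infrastructure overhead
# takes `infra_fraction` of total facility power, then:
#   it_load = (1 - infra_fraction) * total  =>  total = it_load / (1 - infra_fraction)

def total_facility_kw(it_load_kw, infra_fraction=0.5):
    """Total facility power for a given IT load and infrastructure overhead share."""
    return it_load_kw / (1.0 - infra_fraction)

it_load = 200.0  # kW of server load (hypothetical)

baseline = total_facility_kw(it_load)        # 50% overhead: 400 kW total
greener = total_facility_kw(it_load, 0.4)    # leaner cooling: ~333 kW total

print(f"baseline total: {baseline:.0f} kW")
print(f"with reduced overhead: {greener:.0f} kW ({baseline - greener:.0f} kW saved)")
```

At a fixed IT load, cutting the infrastructure share from 50 to 40 percent saves roughly a sixth of the facility’s total power in this toy model, which is why measures like green roofs and outside-air cooling pay off well beyond the chillers themselves.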
In addition to server technology increasing power demands within the data center, the nation’s power grids are under enormous strain, which places data centers that rely on commercial power supplies at a higher risk of failure than ever before. This instability increases the need for robust back-up power, which is not always easy to augment at an in-house facility. Recent major failures have brought the fragile state of the country’s power grid to the attention of lawmakers, who now realize that action to rectify the problem cannot be put off much longer.
As these initiatives take root, however, it would be foolhardy to rely on federal or industry action to solve the power problem for in-house data centers.
So for the immediate future, power issues will continue to pose difficult challenges for those managing in-house facilities.
Until server manufacturers can reduce the power that servers require, it will continue to be a struggle for most organizations to keep up. But companies that fail to stay on top of their equipment’s power and cooling needs run the risk of costly systems failures that disrupt business. These inefficient centers will also pay higher cooling costs and cause unnecessary strain on commercial power supplies.
What to Do
While server manufacturers, politicians, and green power practices should help mitigate the pressure on in-house data centers, in the meantime there are several actions IT managers can take.
For one, they should assess the engineering capabilities of their existing infrastructure (meaning the full relationship between power, cooling, and space), implement a server growth plan based on that inventory, and stick to the plan. Any opportunity to augment data center capacity should be considered well in advance of the exhaustion point of current components. IT managers should be cognizant that expanding a data center can easily take six to 12 months from design to commissioning.
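That growth plan can be reduced to a simple runway calculation: given the room’s power capacity, the current draw, and the pace at which new gear arrives, estimate when capacity will be exhausted and subtract the six-to-12-month expansion lead time to find the latest safe start date. The sketch below assumes linear load growth; the function and all the sample figures are hypothetical.

```python
import math

def months_until_exhaustion(capacity_kw, load_kw, growth_kw_per_month):
    """Months until a linearly growing load reaches the room's power capacity."""
    if growth_kw_per_month <= 0:
        return math.inf  # load is flat or shrinking; no exhaustion date
    return (capacity_kw - load_kw) / growth_kw_per_month

# Hypothetical room: 150 kW capacity, 100 kW drawn, ~2.5 kW of new gear per month
runway = months_until_exhaustion(150, 100, 2.5)  # 20 months of headroom
lead_time = 12  # design-to-commissioning, worst case per the article

print(f"capacity exhausted in ~{runway:.0f} months")
print(f"expansion must begin within {runway - lead_time:.0f} months")
```

In this example the room has 20 months of headroom, so with a 12-month build-out the expansion decision must be made within eight months — well before breakers start tripping.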
For businesses of all types and sizes, a professionally run third-party data center can provide an optimal environment. Placing the responsibility for scaling power, meeting changing density and cooling needs, and properly managing back-up power generation in the hands of a data center facility that is accountable via a service level agreement is a prudent way to ensure that IT equipment is always protected.
Reduce Risks With Proper Planning
A short-term design for an in-house data center can be disastrous because it does not consider the long-term implications of increasing power and cooling demands. Such an approach is myopic and invites a slew of inevitable risks to core applications. Data centers, both in-house and commercial facilities, necessitate a five-year plan. Designing a facility that will sufficiently support an organization’s core business infrastructure without compromising uptime includes a careful assessment of, and commitment to, the following:
• A solid understanding of business demand and forecast, plus a contingency addition
• A technology shift impact assessment, plus or minus (bigger/smaller, more/less power)
• A building lease and engineering study (space, weight issues). Is there capacity for more condensers on the roof? Is there space for another generator? What is the fire code on tankage for the generator?
• A power grid assessment. Is it single threaded? Is the power supplier reliable? Is power delivery aerial or buried?
• Factoring in the ease of maintenance and replacement of large components such as UPS and CRAC units
• Hiring a good design/engineering firm
• Keeping the design as simple as possible, with a straightforward, well-documented layout
• Having a third party commission the new facility before deployment to ensure it functions as designed
• Sticking to the equipment deployment plan
Jim Weller is the president of Baltimore Technology Park (www.baltimoretechnologypark.com), a carrier-neutral data center providing clients with the highest levels of security and redundancy. He is a 20-year veteran of the data center industry.
Appeared in DRJ's Fall 2007 Issue.