There is a perfect storm brewing in IT – energy costs are rising sharply, business managers are looking for ways to cut costs in response to current economic conditions, and IT departments are being asked to provide higher levels of availability than ever before with flat or even shrinking budgets.
There is also constant pressure to implement new programs – from enterprise mobility to the latest in security – and growing complexity, as a mix of mainframe, midrange, and virtual servers makes up the backbone of enterprise infrastructure.
IT infrastructure is piling up, layer upon layer, as data centers, many of which were originally designed 30-40 years ago, are becoming overloaded and overheated as the hardware and network capacity is stretched to the max. This all adds up to a serious issue with power and cooling, one of the costliest line items of IT and facilities budgets.
CIOs and IT decision makers are being forced to develop solutions to address – and essentially overcome – this power and cooling crisis. They must weigh their options carefully to meet competing responsibilities: reducing power and cooling costs without incurring large capital expenses. Options do exist that can help resolve this issue in the near term, while also helping to ensure the IT organization remains agile and flexible within an increasingly competitive landscape.
The Power and Cooling Crisis
As enterprises accelerate initiatives around growing their businesses, many data centers are laying out a new vision for their technology infrastructure in order to meet the data center power and cooling challenges ahead.
Many data centers are running out of power and physical rack space, leaving no room for additional hardware, while their cooling was designed for the “pre-server sprawl” period – when a handful of servers was all it took to keep corporate applications running. Add to this the fact that many utility power grids are increasingly maxed out, creating power and distribution restrictions that prevent data center expansion.
But IT also faces user demands that applications be more universally available anytime anywhere. These demands create pressure for higher levels of availability, reliability, and continuous operations – all issues that are typically addressed with more computing hardware.
Balancing these challenges has become a priority for IT.
New Technology + Old Technology = Complexity
As new servers and network devices deliver faster processors and more computing power, the challenge of supplying adequate power and cooling becomes apparent to data center managers and IT executives. Power continues to lead as a concern for data center operations. Per-cabinet power consumption has steadily increased over the past several years, and faster CPUs, larger memory chips, and smaller disk drives continue to drive power demands higher.
Most U.S. data center facilities built in the late 1970s and 1980s find themselves saddled with unused coax cables (from the mainframe days), insufficient air flow, multiple hot zones (where hardware creates pockets of heat), air conditioning units configured to support computing power equal to 1/50 of today’s average load, and overcrowded cabinets and relay racks.
The continuous rise of higher-speed processors, combined with the ever-increasing power consumption and heat generation of server components (power supplies, fans, etc.) and communication devices (core network switches and distribution gateways), has created power-availability obstacles for the continuously available applications demanded by the new on-demand user and cloud computing paradigm. Even organizations that have virtualized and consolidated their infrastructure experience power distribution and link capacity issues due to high-performance server hardware and networks. Having fewer servers does not mean using less power – virtualized servers are essentially working harder than their single-use predecessors, creating significant amounts of heat.
Careful planning and ongoing growth projections are needed to ensure that power requirements can be met. Approximately half of the power consumed by a data center goes to cooling, which points to the mantra “power in equals power out”: the power coming into the facility to keep the equipment online must be removed as heat, essentially doubling power costs. Alternate approaches, such as reducing the amount of equipment housed in each cabinet, must be considered.
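The doubling effect described above can be sketched with back-of-the-envelope arithmetic. The 500 kW load and $0.10/kWh utility rate below are illustrative assumptions, not figures from any particular facility:

```python
# Sketch of the "power in equals power out" rule of thumb: roughly
# every watt delivered to IT equipment must be matched by a watt of
# cooling. All figures are illustrative assumptions.

def total_facility_power(it_load_kw, cooling_overhead=1.0):
    """Total facility draw when cooling roughly matches IT load
    (cooling_overhead = 1.0 means one watt of cooling per IT watt)."""
    return it_load_kw * (1 + cooling_overhead)

def annual_power_cost(it_load_kw, rate_per_kwh=0.10, cooling_overhead=1.0):
    """Annual electricity cost at an assumed utility rate ($/kWh)."""
    kw = total_facility_power(it_load_kw, cooling_overhead)
    return kw * 24 * 365 * rate_per_kwh

if __name__ == "__main__":
    # A hypothetical 500 kW IT load effectively doubles to 1,000 kW
    # once cooling is included.
    print(total_facility_power(500))      # prints 1000.0 (kW)
    print(round(annual_power_cost(500)))  # prints 876000 ($/year)
```

Reducing the cooling overhead factor – for example by re-architecting hot/cold aisles – cuts the total draw without touching the IT load, which is why cooling efficiency is such a large lever on the power bill.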
Consider Your Options
Data centers are already running 200-300 watts per square foot, and forecasts of 500 watts per square foot are just around the corner – this is the power crisis data center managers need to overcome. As more power is consumed, you can also expect more regulation of environmental impacts. How do you balance these IT challenges with a scalable approach that positions your organization for future growth and saves money while riding the next wave of high-powered devices and grid computing?
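To put those density figures in perspective, a quick conversion shows what they imply for total room load. The 10,000 sq ft room size is a hypothetical example:

```python
# Convert the power densities cited above (watts per square foot)
# into total room load. The 10,000 sq ft footprint is an assumption
# used only for illustration.

def room_load_kw(sq_ft, watts_per_sqft):
    """Total IT load (kW) for a raised-floor room at a given density."""
    return sq_ft * watts_per_sqft / 1000

if __name__ == "__main__":
    print(room_load_kw(10_000, 300))  # prints 3000.0 (kW today)
    print(room_load_kw(10_000, 500))  # prints 5000.0 (kW forecast)
```

The jump from 300 to 500 watts per square foot adds two megawatts to a room of this size – load that the utility feed, distribution gear, and cooling plant all have to absorb.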
There are several options to consider:
- Retrofit – a phased retrofit of the current 20- to 40-year-old building
- Build new – select a green-field site and design and build a new data center
- Outsource – move your infrastructure to a managed services and hosting provider
Retrofitting an existing data center can be a tricky proposition. One must consider all of the related costs of assessment, code alignment, and redesign of the current center. You also have to consider the scalability and limitations of your current facility. Will you have enough capacity to avoid yet another expansion as your business and its IT needs grow?
Given the current economic climate, major data center overhauls or redesigns are few and far between for the average company, but there are ways organizations can prolong the life of their existing data center. Some short-term fixes include a review and re-architecture of data center floor design to minimize overall space requirements, or evaluation and alignment of hot/cold aisles, to help reduce the energy consumption of each cabinet or rack.
Companies can further prolong the life expectancy of an outdated data center by properly adjusting cooling and power distribution, along with implementing a plan for server consolidation or virtualizing existing hardware and software to reduce the overall data center footprint.
Designing a new data center can be a costly endeavor. Total raised-floor data center costs (excluding building shell and land acquisition but including construction for walls, ceilings, office areas, etc.) range from a low of $652 per square foot for a Tier 2 center (lower level of redundancy) to $1,189 per square foot for a Tier 4 data center (highest level of redundancy).
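Using the per-square-foot figures quoted above, a rough build-cost estimate is straightforward. The tier rates come from the text; the 10,000 sq ft footprint is a hypothetical example, not a recommendation:

```python
# Rough raised-floor construction cost using the per-square-foot
# figures cited above ($652 for Tier 2, $1,189 for Tier 4). Excludes
# building shell and land, as in the text. The footprint is an
# illustrative assumption.

COST_PER_SQFT = {2: 652, 4: 1189}  # $/sq ft by tier level

def build_cost(sq_ft, tier):
    """Estimated raised-floor construction cost in dollars."""
    return sq_ft * COST_PER_SQFT[tier]

if __name__ == "__main__":
    print(build_cost(10_000, 2))  # prints 6520000  ($6.52M, Tier 2)
    print(build_cost(10_000, 4))  # prints 11890000 ($11.89M, Tier 4)
```

At this scale the redundancy premium alone – over $5 million between Tier 2 and Tier 4 for the same footprint – explains why a green-field build demands the in-depth cost-benefit and risk evaluation described below.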
There are times when taking this green-field approach can make sense, based on an in-depth comparison of the costs and benefits associated with a new build and the evaluation of risk. Keep in mind that if your mission-critical center cannot be shut down or if you cannot coordinate a planned outage, building a new facility may be the only acceptable path to take.
Partnering with a third-party provider is an attractive option for many organizations. You can gain scalability and flexibility, and lower your operational risk, by out-tasking functions within your infrastructure. Such managed hosting providers must offer flexible, scalable data center infrastructure, security, and architecture.
Small to medium businesses may also find the outsourcing option particularly attractive, as it allows them access to IT infrastructure, computing resources, and IT expertise at a much lower cost than developing these capabilities internally.
Likewise, many companies today are looking at a mix of options, perhaps outsourcing mission-critical systems to a third party – to ensure the highest levels of information availability – while keeping lower priority systems within their own data center. This approach allows enterprises to reap the benefits of working with a third party, including efficiencies of scale, higher levels of availability, disaster recovery capabilities, and managed services (to free up internal IT resources to focus on higher priority projects).
Making the Right Choice
Organizations today are facing an IT resources crunch. It’s important to focus your attention on your core business – considering your data center and computing options carefully to ensure your systems meet the level of availability required by your business.
It’s also important to remember that most organizations will face facility challenges and capacity issues within their infrastructure – if not today, then certainly in the months ahead. Data center managers would be smart to begin evaluating their future needs so these decisions can be pursued with purpose.
Dr. Mickey S. Zandi is an international expert in data center design and architecture. He has more than 14 years’ experience in data center design and architecture, business continuity, and IT consulting services delivery, managing complex IT projects worldwide in industries such as communications, distribution and retail, education, banking and financial services, government, healthcare, IT services, manufacturing, pharmaceutical, and utilities. He is the managing principal for SunGard Availability Services Consulting Group.
"Appeared in DRJ's Winter 2009 Issue"