The Essential Components of Data Center Design

Written by Larry Stancil
The modern data center runs the gamut from tiny "cargo container" installations to campuses that sprawl across hundreds of acres, and from energy gluttons to theorized electricity sippers. The best design focuses not only on energy efficiency but on the flexibility to meet a constantly changing environment. Because requirements are always shifting, we are continually shooting at a moving target. Data centers built only a few years ago made no provision for cooling with outside air or for hot and cold aisle containment, and did not anticipate the power densities required of a modern facility. If you are building a data center today, you want to maximize profit by minimizing operational costs while providing a bulletproof solution for your customers. Consequently, solid initial design must be paired with the ability to adapt to new customer requirements and emerging technologies.
Location, location, location. The first step to providing a reliable data center, and one that can serve as a disaster recovery site, is to place the facility somewhere unlikely to suffer a disaster. That sounds simple enough, but it is easier said than done: every location carries some level of risk. Just as some streets are safer to walk through at night than others, some data center locations are safer than others. Look for a site that is seismically stable and unlikely to suffer forced outages from natural or man-made causes. The location will also determine, for instance, how many hours per year the building can be supplemented with outside air for cooling; a cooler climate can substantially cut the building's operating costs. All of these factors must be weighed during the initial design.
After determining where you want the building, the question becomes which building. Ideally it would be a structure built expressly for your purpose, but time constraints or economic concerns can often send you shopping for an existing structure to convert. Since most modern structures are built to adequate standards and codes for the area, this is usually not a problem; however, one part of the structure is frequently overlooked and needs review: the roof.
The roof is a critical component of the data center. Weaknesses, even slight ones, can cause future problems and limit flexibility. Construction to minimum wind and equipment load standards is not uncommon, but a strong load-bearing roof will allow for future modifications. Covering it with a long-lasting white material will save money by reducing the extra heat load and will extend the life of the roof.
Is a raised floor necessary? For nearly half a century raised floors have been a benchmark of data center design, and many now feel they are unnecessary. Overhead delivery of communication cable, power, and even chilled air may suggest the raised floor is no longer needed, but other factors should be considered. Delivering chilled water or other fluid to racks is becoming more popular for high-density applications, and using the subfloor to distribute it makes a great deal of sense when you consider the consequences of a plumbing problem above the equipment. A raised floor also makes for a more flexible design, allowing air to be distributed from the top into a contained cold aisle. If air distribution is removed as a function of the raised floor plenum, that space, with appropriate cable management, provides a better alternative to ladder rack and leaves overhead space free for high-density cooling.
Cooling has become a major focus in recent years due to the ever-increasing appetite for power. As equipment density and watts per square foot rise, cooling technology has tried to keep pace, and it is no longer possible to provide a total cooling solution using a raised floor plenum alone. ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has raised its standards for inlet temperatures, and equipment has become more tolerant of warmer inlet air, but there is a trade-off in power consumption. As the supply air temperature rises, the power used to cool it falls; however, the fan speed inside the equipment increases to compensate, consuming more conditioned power. Your PUE (power usage effectiveness) numbers will look good, but your overall power consumption can actually rise. Not to mention that people working on the floor, especially customers, do not like a warm environment. Cooling an entire room, with the mixing that occurs between uncontrolled discharge from floor or ceiling supply vents and the rack exhaust, is simply no longer an option. Cold aisle containment is proving to be the most effective way to deliver cooling and reduce power consumption.
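The supply-temperature trade-off can be sketched numerically. The coefficients below (chiller savings per degree, fan speed-up per degree, base loads) are hypothetical illustrations, not vendor data; they are chosen only to show how PUE can improve while total facility power rises.

```python
# Illustrative sketch (hypothetical coefficients, not vendor data) of the
# supply-air temperature trade-off. Fan power scales roughly with the cube
# of fan speed (fan affinity laws), so small speed increases cost a lot.

def facility_power_kw(it_kw, fan_base_kw, chiller_base_kw,
                      supply_temp_c, baseline_temp_c=18.0):
    """Return (total facility kW, PUE) at a given supply-air temperature.

    Assumed behavior: chiller power drops 1.5% per degree C of warmer
    supply air, while server fans speed up 3% per degree C to compensate.
    """
    dt = supply_temp_c - baseline_temp_c
    chiller_kw = chiller_base_kw * (1 - 0.015 * dt)
    fan_kw = fan_base_kw * (1 + 0.03 * dt) ** 3   # affinity laws: P ~ speed^3
    it_load_kw = it_kw + fan_kw                    # fans draw conditioned power
    total_kw = it_load_kw + chiller_kw
    return total_kw, total_kw / it_load_kw

total_cool, pue_cool = facility_power_kw(1000, 80, 400, 18)  # cooler supply
total_warm, pue_warm = facility_power_kw(1000, 80, 400, 27)  # warmer supply
# With these assumed coefficients, PUE improves at 27 C even though total
# facility power is higher than at 18 C.
```

Because the fan power moves into the IT-load denominator of the PUE ratio, the metric can flatter a design whose total consumption has actually grown.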
Since mechanical cooling can be provided from a number of sources, each with its own best application based on many factors, not least of which is cost, it would be difficult to select one as superior in all circumstances. That said, there is one option available in every region that must be included in data center design going forward: the use of outside air to supplement, and in some cases replace, mechanical cooling for portions of the day is becoming standard design criteria. With the broader temperature and humidity tolerances of new hardware, this method will not only save money in operation but also reduce an ever-increasing carbon footprint. Each region will benefit differently from outside air, and different treatments will be required; regardless, the reduction in cooling cost will make it a standard for future designs. Keep in mind that "cap and trade" legislation is looming on the horizon: outside-air cooling could reduce not only your yearly kilowatt-hours consumed but also the carbon credits you must purchase.
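As a rough sketch of how site climate drives economizer value, the function below counts the hours in a year when outside air is cool enough to use directly and estimates the avoided mechanical-cooling energy. The temperature profile, chiller load, and electricity rate are placeholder assumptions, not real climate or tariff data; a real study would use bin data for the actual site.

```python
# Rough economizer sketch: count hours cool enough for outside air and
# estimate avoided chiller energy. All inputs here are placeholders.
import math

def economizer_estimate(hourly_temps_c, max_usable_c, chiller_kw, usd_per_kwh):
    """Return (free-cooling hours, estimated annual savings in USD)."""
    free_hours = sum(1 for t in hourly_temps_c if t <= max_usable_c)
    kwh_avoided = free_hours * chiller_kw
    return free_hours, kwh_avoided * usd_per_kwh

# Synthetic stand-in for one year (8,760 hours) of outside temperatures.
temps = [15 + 10 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]
hours, savings = economizer_estimate(temps, max_usable_c=18,
                                     chiller_kw=300, usd_per_kwh=0.10)
```

Even this toy model makes the article's point concrete: the cooler the site, the more hours fall under the usable threshold and the larger the avoided chiller energy.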
The strategy for delivering cooling to the equipment is standardizing around cold aisle containment with spot control of volume using variable air volume dampers, variable frequency drives, or similar methods. This makes more effective use of the cold air than the alternative, hot aisle containment, in which the entire room is cooled and must therefore compensate for solar heat load, lighting, and hot aisle bleed. Providing direct cooling matched to the actual air volume the equipment requires tightly controls the power used for cooling and ultimately reduces the cost of operation, while allowing supply air temperatures to sit at optimal settings. An overhead cooling system of this kind can also be very flexible, using semi-rigid and flexible ducting to accommodate almost any required configuration.
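Matching supply volume to the actual load comes down to the standard sensible-heat relation for air at sea level, BTU/hr = 1.08 x CFM x dT(F). The rack wattage and temperature rise below are illustrative assumptions.

```python
# Sizing supply air to the actual rack load using the sensible-heat
# relation for air at sea level: BTU/hr = 1.08 x CFM x dT(F).

WATTS_TO_BTU_HR = 3.412

def required_cfm(rack_watts, delta_t_f=20.0):
    """Airflow (CFM) needed to carry away rack_watts at a delta_t_f rise."""
    return rack_watts * WATTS_TO_BTU_HR / (1.08 * delta_t_f)

cfm = required_cfm(10_000)   # hypothetical 10 kW rack with a 20 F rise
# roughly 1,580 CFM for this example
```

A variable-volume damper or drive that tracks this figure per containment zone delivers only the air the equipment actually needs, which is the power saving the paragraph above describes.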
Conditioned power is the cornerstone of the data center and must be given the highest priority in design. While a cooling failure will impact the operation of the data center, it does so over a period of time; in fact, much of the equipment can run fine for months at ambient temperatures near 100 degrees Fahrenheit. We have proven this with an onsite test lab containing a mix of servers and network equipment from major vendors. Power, however, must be constant and reliable. This article does not allow for an exhaustive evaluation of the pros and cons of each available UPS (uninterruptible power supply) technology, so we will assume a standard UPS system with a battery string providing a minimum of 10 minutes of runtime at load, and a standard generator backup to take over for the utility. Flywheel UPS units should be disregarded for the time being, as the current standard model can hold the load for only about 15 seconds, which can be less time than generators need to start and parallel. There is, however, a promising new type of UPS that has recently entered the market and will change both the footprint and the heat load output. These units are based on the IGBT (insulated gate bipolar transistor) and are extremely efficient at low as well as high load ratings, which allows a unit to be held in reserve without sacrificing efficiency. This type of UPS is a better all-around choice than the conventional static UPS with its large transformers and will replace those older units in the near future.
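The 10-minute battery assumption above translates into a simple back-of-the-envelope sizing. The inverter efficiency and end-of-life derating figures below are assumed values for illustration, not vendor specifications.

```python
# Back-of-the-envelope battery-string sizing for a minimum UPS runtime.
# Inverter efficiency and end-of-life derating are assumed values.

def battery_kwh_needed(load_kw, runtime_min=10.0,
                       inverter_eff=0.95, end_of_life_derate=0.80):
    """Nameplate battery energy (kWh) to hold load_kw for runtime_min."""
    return load_kw * (runtime_min / 60.0) / (inverter_eff * end_of_life_derate)

kwh = battery_kwh_needed(500)   # hypothetical 500 kW critical load
```

The derating term matters: a string sized to exactly 10 minutes when new will fall short of that runtime as the batteries age, so the nameplate capacity must be larger than the raw load-times-time product.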
The most flexible design for data center hardware and equipment maintenance uses an automatic static transfer switch in conjunction with a power distribution unit. This arrangement makes two sources available to every receptacle and switches over instantly if the primary source fails. If the hardware has two power supplies, each can have a different primary source, for a total of three sources supporting it. If each UPS is supplied from a different switch, and each switch potentially from a separate utility feed, all components can be maintained without an outage.
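The value of the multi-source arrangement can be sketched with a simple independence assumption: the receptacle loses power only if every source backing it fails at once. The failure probabilities below are arbitrary illustrations, and real sources are rarely fully independent, so treat this as a sketch rather than an availability model.

```python
# Sketch of dual-feed availability under an (idealized) independence
# assumption: the load drops only if all sources fail simultaneously.

def outage_probability(source_failure_probs):
    """Probability of losing the load, given independent source failures."""
    prob = 1.0
    for p in source_failure_probs:
        prob *= p
    return prob

single = outage_probability([1e-3])         # one feed only
dual = outage_probability([1e-3, 1e-3])     # static transfer switch, two feeds
# dual-feed outage probability is the square of the single-feed figure
```

The same multiplication explains why separate switchgear and separate utility feeds matter: shared upstream components reintroduce a common failure mode and break the independence the calculation relies on.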
Far too often data center design concentrates on hardware, architecture, and other hard aspects while ignoring one of the most important facets of the facility: the people who run it. A data center that is too complex will not achieve the desired results if the enterprise's staff cannot operate it effectively. All too often the design is a closely guarded secret sprung on the staff at start-up. Employee involvement throughout the design and implementation stages is of paramount importance: not only will the implementation go relatively smoothly, but the resulting sense of ownership will pay huge dividends in the future.
Ultimately, the best data center design is one that provides the necessary level of redundancy at a reasonable cost. But as important as redundancy is, flexibility in design will ensure that the money spent on the facility continues to provide a reasonable return on investment for years while supporting ever-changing information technology requirements.
Larry Stancil is the director of facilities for Herakles, LLC and has more than 17 years of experience in project management, facilities, IT, and critical systems design and maintenance. Prior to joining Herakles he was employed by Health Net, Inc., where he was responsible for all Health Net IT facilities, including the design and engineering of critical facilities. Herakles, LLC is a privately owned company that operates a 92,000-square-foot data center in Sacramento, Calif., with more than half of its clients being disaster recovery installations.