As originally published in Electric Environments Infrastructure Solutions, written by Kevin O’Brien, President, Mission Critical Construction Services, EEC
My experience in the data center industry goes back to the 1980s, when I worked as a facilities manager for a large financial services company headquartered in New York City. At that time, companies commonly located their data centers in the same buildings as their trading floors and office space.
The 1980s and 1990s
In 1988, we built our first remote-site data center facility outside NYC, dedicated solely to data and telecommunications. The site was an old ITT communications hub in New Jersey that once housed the link for the ‘Hot Line’ between Washington, DC and Moscow. Everything was pretty much analog in those days. Having the remote site allowed us to increase the redundancy and reliability of the electrical and mechanical systems. There was no Tier certification system back then, but we were able to meet what would now be considered the equivalent of a Tier II standard on the electrical side, and we even went to the equivalent of 2N on the UPS. The load in data centers back then ranged from only 35 to 50 watts per square foot at most. More and more companies chose remote sites throughout the 90s as fiber spread and the demand for more computing at higher reliability grew. It was not surprising that in 1989, the 7×24 Exchange started to publish articles and share common experiences on how to improve reliability. Then, in the early 90s, The Uptime Institute was born, along with the creation and administration of the widely adopted Tier certifications.
The Dotcom Era
The next change in the market came with the dotcom boom. Companies started to build buildings of 100,000 square feet and more, filling them with racks that would probably equate to 50 to 75 watts per square foot once fully loaded (in most cases they never were). The rapid spread of fiber and the economic upswing allowed more companies to build anywhere in the world. In fact, we all know the story of overbuilding and being too optimistic too soon. September 11 and the stock market crash of 2001 deflated that bubble. Yet, just a few years later, the predicted demand for servers started to come to fruition.
After the dotcom collapse, Sarbanes-Oxley (SOX) arrived in 2002. This law required data centers that supported trading to be located within a set number of fiber miles of Wall Street to prevent manipulation during a trade transaction. SOX not only dictated building in close proximity to NYC as measured in fiber miles, it also required building a separate, synchronous data center for redundancy. Many of these sites were built in New Jersey. These sites started to creep up to the 100-watt-per-square-foot barrier, and many were Tier III and IV. Costs per square foot rose, as did the building square footage needed to support the more robust infrastructure.
Demand for Density and Redundancy
In addition to the increased need for density and higher redundancy, the ratio of raised floor to infrastructure square footage began to change. As an example, 100,000 square feet of raised floor area (white space) at 100 watts per square foot in a Tier III configuration would have a ratio of 1-to-1. If the density of the technical space went up to 150 watts per square foot, the ratio would increase to 1-to-1.5, meaning one would need 150,000 square feet of infrastructure space to support the same 100,000 square feet of raised floor. This level of infrastructure was needed to support an IT load that typically never materialized. With the increase in density, the industry started to use the more appropriate measurement of kW per rack rather than watts per square foot; the kW-per-rack figure is more accurate.
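The arithmetic behind that ratio is simple enough to sketch. The snippet below assumes linear scaling from the 1-to-1 baseline at 100 watts per square foot described above, and a nominal rack footprint of about 30 square feet (rack plus its share of aisle space) for the watts-per-square-foot to kW-per-rack conversion; both the baseline and the footprint are illustrative assumptions, not figures from the article.

```python
# A minimal sketch of the space-ratio and density arithmetic described above.

def infrastructure_area(white_space_sqft: float, density_w_per_sqft: float,
                        baseline_w_per_sqft: float = 100.0) -> float:
    """Infrastructure area needed, assuming it scales linearly with density
    from a 1-to-1 ratio at the assumed baseline density."""
    return white_space_sqft * (density_w_per_sqft / baseline_w_per_sqft)

def kw_per_rack(density_w_per_sqft: float, sqft_per_rack: float = 30.0) -> float:
    """Convert watts per square foot to kW per rack, assuming a nominal
    footprint (rack plus its share of aisle space) of ~30 sq ft per rack."""
    return density_w_per_sqft * sqft_per_rack / 1000.0

if __name__ == "__main__":
    # 100,000 sq ft of raised floor at 150 W/sq ft -> 150,000 sq ft of infrastructure
    print(infrastructure_area(100_000, 150))   # 150000.0
    # 150 W/sq ft is roughly 4.5 kW per rack under the assumed footprint
    print(kw_per_rack(150))                    # 4.5
```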
More Power/More Efficiency
Once the industry started rating in kW, or cost per kW, people began to realize how much power they were actually using and what it actually cost. This led to an energy rating called Power Usage Effectiveness (PUE), which created a way to measure the efficiency (or inefficiency) of data center sites. Now there was a way to hold data center managers accountable for energy consumption. This triggered calls for free cooling and more efficiency, even as densities increased significantly from three to four kW per rack to 16 to 25 kW per rack. Some operators, such as Yahoo!, went as far as eliminating mechanical cooling (chillers) and using 100 percent outside air for “free cooling” in a very simple structure called the “chicken coop”. This is not a solution for most enterprise data centers, but it definitely works for the Yahoos of the world. “Hot aisle, cold aisle” also became the norm for most data centers. It helped isolate the heat load, making facilities more energy efficient, and it also helped start the trend of elevating temperatures inside the data hall. That was really the low-hanging fruit that most definitely saves energy, and ASHRAE helped by clarifying acceptable server inlet temperatures. Now, the big battle is determining whose PUE is lower.
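For readers new to the metric, the calculation itself is straightforward: PUE is total facility power divided by IT equipment power, so a value of 1.0 would mean every watt goes to the IT load. The sketch below uses made-up sample loads purely for illustration.

```python
# A minimal sketch of the PUE calculation; the sample loads are illustrative.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power.
    Lower is better; 1.0 would mean zero cooling/distribution overhead."""
    return total_facility_kw / it_load_kw

if __name__ == "__main__":
    # e.g. a 1,000 kW IT load plus 600 kW of cooling, UPS and other losses
    print(round(pue(1_600, 1_000), 2))  # 1.6
```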
This drive for energy efficiency, and the claim to be “greener”, is also pushing the industry to rethink and innovate with items such as high-efficiency chillers, adiabatic cooling, fuel cells, solar power, and 380-volt DC distribution in the data center. Maybe in the future we will see the elimination of multiple AC/DC conversions, more distributed generation with cleaner power, and maybe even the elimination of mechanical compressors. Alternative energy sources such as fuel cells are slowly becoming more cost effective. Bloom Energy, under the guidance of Peter Gross, is seeking to make its “Bloom Box” fuel cell more cost effective in critical environments.
I have spent over 26 years watching the data center environment change and adapting design and construction standards along the way. It’s been quite a wild ride. From Internet data centers to cloud data centers, and from big open box spaces to modularized data halls and pods, I’m looking forward to the next 10 years and the changes and innovation that will come with them!
To read the original post, please visit the Electric Environments Infrastructure Solutions website.