What Commercial Building FMs Can Learn About Sustainability From Data Centres
In built environment terms, data centres are relatively new kids on the block. Dedicated data centre buildings became part of the commercial construction industry around 25 years ago with the advent of pervasive internet use and the birth of a plethora of internet-based private and public services.
Today, large data centre buildings come in three main types: Hyperscale, Commercial Colocation and Enterprise. Typically, each data centre is measured in megawatts of IT load. Globally there are over 600 hyperscale data centres in operation today, with hundreds more planned in the tens to hundreds of megawatts range. In the colocation market, an entire global commercial data centre industry has emerged. Finally, there are enterprise data centres, which are owned and operated by large companies such as banks.
The Data Centre Boom
The first data centre building boom was largely the result of the internet wave of the late 90s and early 2000s; a second followed when the success of mobile internet and the iPhone made consumers the main drivers of data generation.
A new industry was born as architects and mechanical and electrical engineers were contracted to design and construct large, low, wide technical buildings where few humans would work. These buildings housed tens of thousands of pieces of IT equipment, together with power systems providing many megawatts of stable, continuous power and large-scale air-conditioning infrastructure to control the operating thermal envelope.
Initially, the primary goal was uptime - keeping the computers running at (almost) any cost. Buildings were designed with spare space, power and cooling capacity to facilitate growth. However, many would never use more than 30% of the available power, and even today data centres rarely reach more than 75% of their power capacity.
Measurement And Management
With 60-70% of the cost of building and operating a data centre coming from mechanical and electrical infrastructure, developers are under pressure to cut energy waste and consumption and to reduce carbon emissions. But this challenge is nothing new.
Since energy is the largest cost of operating a data centre, the industry is attuned to making the best use of the resource – something which first came to a head during the energy crisis of the late 2000s, when oil prices went north of $100 a barrel. This proved an inflection point for a fast-growing industry, which responded in a number of ways:
- Data centre designs evolved to deliver greater efficiency. For example, cooling architectures were developed to prevent the mixing of supply and return air streams, greatly improving their effectiveness and allowing higher power density racks to be deployed to increase space utilisation. At the same time, IT shutdowns caused by “hot spots” - which had proven a major cooling challenge in legacy data centres - could be eliminated, improving IT service reliability.
- Manufacturers introduced more efficient physical infrastructure equipment, with uninterruptible power supplies (UPS) able to operate at more than 97% efficiency.
- Greater use of modular, standardised equipment reduced the number of custom engineered systems, meaning that data centre performance became more predictable and costs fell as economies of scale came into play.
- The modular approach meant that data centres could adopt a pay-as-you-grow approach to infrastructure deployment. While not necessarily the most capital-efficient way to build a data centre, it did reduce initial cost and standing power losses.
- The introduction of the EU Code of Conduct for Data Centres gave the industry a well-publicised target for achieving efficiency in data centre design, construction and operations.
- Probably most importantly, the data centre sector started to monitor and measure its use of energy.
PUE – The First Efficiency Game Changer
“You can’t manage what you can’t measure” has been a mantra in data centres for more than a decade. Measurements need agreed metrics, and the launch of Power Usage Effectiveness (PUE) in 2007-8 by the Green Grid – a consortium of industry players – established it as the industry’s most important efficiency metric.
PUE applies a simple formula to a complex problem: it is the ratio of the total energy entering the facility to the energy used to power the IT equipment. A perfect PUE is 1.0, meaning 100% of the power coming into the data centre is used to supply the IT load.
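As a simple illustration, PUE can be calculated from two metered energy totals taken over the same period; the sketch below is minimal and the figures and function names are purely illustrative.

```python
# Minimal sketch: PUE from two metered energy totals over the same period.
# The figures and names below are illustrative assumptions.

def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (ideal value: 1.0)."""
    if it_equipment_energy_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_energy_kwh / it_equipment_energy_kwh

# Example: 1,500,000 kWh enters the facility, 1,000,000 kWh reaches the IT load
print(pue(1_500_000, 1_000_000))  # -> 1.5
```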
Today, PUE is an accepted global metric and has led to the development of other data centre metrics, e.g. CUE, IUE and WUE (Carbon, Infrastructure and Water Usage Effectiveness respectively), which are in widespread use across the industry. CUE measures data centre sustainability in terms of carbon emissions, being the ratio between the total CO2 emissions caused by overall data centre energy consumption and the IT equipment energy consumption. Meanwhile, IUE helps determine how much of its designed infrastructure capacity an operational data centre is able to utilise.
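The derived metrics follow the same pattern, dividing a facility-level quantity by the IT equipment energy. A minimal sketch, assuming kgCO2 for emissions and litres for water, with made-up example figures:

```python
# Minimal sketch of the CUE and WUE ratios described above.
# Units (kgCO2, litres, kWh) and example figures are illustrative assumptions.

def cue(total_co2_kg: float, it_equipment_energy_kwh: float) -> float:
    """CUE = total CO2 emissions from data centre energy / IT equipment energy."""
    return total_co2_kg / it_equipment_energy_kwh

def wue(total_water_litres: float, it_equipment_energy_kwh: float) -> float:
    """WUE = total water used by the facility / IT equipment energy."""
    return total_water_litres / it_equipment_energy_kwh

print(cue(450_000, 1_000_000))    # -> 0.45 kgCO2 per IT kWh
print(wue(1_800_000, 1_000_000))  # -> 1.8 litres per IT kWh
```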
Sustainable Operations: From Using Renewables to Demand Response
All of the big global tech brands operating data centres have made public, and very welcome, commitments to become carbon neutral. This includes using only renewable power sources upstream.
However, it’s clear that as the energy sector transitions, power will no longer flow in one direction, from a large faraway power station in response to user demand. Power will have to flow in both directions, for example to and from data centre microgrids.
Big data centre campuses will not rely solely on utility power. Already, designers are evaluating how power at the data centre is generated and consumed, together with how waste heat from the IT equipment can be harvested and usefully redeployed. For the data centre sector, onsite microgrid power generation is undoubtedly the way of the future.
There are other ways for data centres to be part of the solution for supplying power back to the grid – helping to decarbonise utility operations. The traditional approach, which uses unidirectional battery energy storage and conventional diesel generator fuel, is not sustainable, which is why progressive data centre companies are re-evaluating every aspect of the power chain.
For example, some have discontinued the use of conventional diesel fuel, instead adopting a range of technologies from biofuels and fuel cells to combined heat and power (CHP) and natural gas. Some data centres are moving towards sustainable energy generation and storage using blended hydrogen-fuelled reciprocating engines, turbines, and new battery types.
For existing data centre sites seeking to move operations to net zero, Adaptable Redundant Power (ARP) is an innovative design concept that promises to transform data centre power topologies and provide the flexibility to address waste and stranded capacity.
In addition, there are different types of Demand Response (DR) options available for data centres to support the power utility and overcome some of the challenges of intermittency associated with renewables. Whether they participate through load curtailment (also called load shedding), Short Term Operating Reserve (STOR) and load reduction, or frequency response, DR programmes offer the potential of new revenue streams which can help finance GHG abatement efforts.
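To make the idea concrete, the sketch below shows simplified frequency-response style logic; the threshold, signal values and load figures are hypothetical, chosen only to illustrate how non-critical load might be shed when grid frequency sags, not to describe any operator's actual scheme.

```python
# Hypothetical frequency-response sketch: the threshold, readings and load
# figures are illustrative assumptions, not from any real DR programme.

CURTAIL_THRESHOLD_HZ = 49.8  # illustrative trigger on an assumed 50 Hz grid

def should_curtail(grid_frequency_hz: float) -> bool:
    """True when the measured grid frequency dips below the trigger point."""
    return grid_frequency_hz < CURTAIL_THRESHOLD_HZ

def respond(grid_frequency_hz: float, curtailable_load_kw: float) -> float:
    """Return the kW of non-critical load to shed for this reading (0 if none)."""
    return curtailable_load_kw if should_curtail(grid_frequency_hz) else 0.0

# Example readings: a healthy grid, then a frequency dip
for reading_hz in (50.01, 49.75):
    print(reading_hz, "->", respond(reading_hz, curtailable_load_kw=250.0), "kW shed")
```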
For data centre engineers and the FM community, reaching net zero targets means meeting serious challenges at the systems, equipment and component levels in the operation of existing buildings, as well as demanding new approaches to building design. However, for the data centre sector, increasing energy efficiency and reducing waste and emissions go hand in glove with lowering the cost per watt of delivering computing services. In every respect, this represents a big win for both the sector and the planet.
First published on FMUK