Modern computers and data centers consume an enormous amount of energy. In the USA alone, computers and related equipment account for about 3% of national energy consumption, according to the 2013 Annual Energy Outlook, and that figure does not even include the cooling systems that protect computers and servers from overheating. Improving energy efficiency and driving down the cost of power consumption is a daunting task, but it matters a great deal: to the environment through the conservation of natural resources, to the operations of individual companies, and to the national economy at large.
How Is PUE Calculated?
Power Usage Effectiveness is the ratio of the total energy entering a data center to the energy used by the IT equipment inside it. The difference between the two is facility overhead: cooling, heating, ventilation, power conversion and distribution, lighting, and utility plugs. Note that the total energy does not have to come from electricity alone; it can also come from other sources such as natural gas, fuel, or water (used for adiabatic cooling). The energy consumption of IT equipment is defined as the energy used to manage, store, process, and route data within the center, as well as to operate the networks and auxiliary devices such as monitors and workstations.
Hence, the typical PUE formula is as follows:
PUE = the total facility power / the energy used by IT equipment
The formula is meant to track the efficiency of a particular data center over time, not to compare different ones.
To make this clear, here is an example of calculating PUE:
Let’s say that the total facility energy of a data center is 12,000 MWh and the IT equipment consumes 9,000 MWh. Thus, PUE = 12,000 MWh / 9,000 MWh ≈ 1.33.
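The calculation above can be sketched in a few lines of Python (a hypothetical helper for illustration, not part of any standard library):

```python
def pue(total_facility_mwh: float, it_equipment_mwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    if it_equipment_mwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_mwh / it_equipment_mwh

# The example from the text: 12,000 MWh total, 9,000 MWh consumed by IT.
print(round(pue(12_000, 9_000), 2))  # → 1.33
```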
Naturally, the power used by an entire data center will always be higher than the energy consumed by its IT equipment, so this benchmark is always greater than one. But how much greater?
What Is the Normal PUE?
In theory, the PUE ratio can range from 1.0 to infinity. An ideal PUE is 1.0, which means 100% efficiency (i.e., all consumed energy goes to IT equipment, with no power distribution losses), but that is almost impossible to achieve.
Industry giants such as Google and Microsoft build data centers with PUEs of 1.2 or better, but they are the leaders of the field. According to Uptime Institute research, an average US data center has a PUE of 2.5, and facilities with a PUE of 3.3 and higher are not uncommon. At that level, only about a third of all the energy consumed by the data center is used by IT equipment, and the remaining two thirds of that power is wasted.
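To see what a given PUE implies, note that the share of energy that reaches IT equipment is simply 1/PUE. A quick sketch, using the PUE values mentioned above as illustrative inputs:

```python
def it_energy_fraction(pue: float) -> float:
    """Fraction of total facility energy that actually powers IT equipment."""
    return 1.0 / pue

# Illustrative values: a best-in-class, an average, and a poor facility.
for pue_value in (1.2, 2.5, 3.3):
    print(f"PUE {pue_value}: {it_energy_fraction(pue_value):.0%} of energy reaches IT")
```

At a PUE of 3.3, roughly 30% of the energy does useful IT work, which is where the one-third figure comes from.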
The Temperature of the Environment and Its Impact on the PUE
According to recent research, data centers consume about 420 terawatt-hours, roughly 3% of global energy demand, and cooling systems account for approximately 45% of that energy. In this context, the location of a data center matters a great deal: the colder the climate, the less energy cooling requires, the more efficient the operation, and the lower (i.e., the better) the PUE.
Since building eco-friendly computing centers is now a priority, manufacturers are striving to adopt modern technologies and out-of-the-box solutions that make their cooling systems more effective and less energy-intensive. For example, in 2018 Microsoft submerged a data center to a depth of 35.5 meters in the North Sea, near Scotland’s Orkney Islands. Such self-sufficient underwater data centers are intended to save power because their cooling is free, performed by the cold waters of the North Sea.
How Can the Data Center Become More Efficient?
Saving energy is more straightforward than many IT company owners might suppose. By implementing the strategies below, you can cut your electricity bills and make your servers run more efficiently.
- Reduce the load on IT equipment. Saving 1 watt at the server level translates into nearly 3 watts of total savings across the data center, because every watt of IT load carries cooling and distribution overhead. This strategy includes buying energy-efficient equipment, decommissioning unused servers, and server virtualization.
- Manage airflow. This strategy means delivering cold air from the conditioning units to where it is needed most, namely the fronts of the servers, and removing hot air from the backs of the servers as efficiently as possible.
- Mind temperature and humidity levels. As mentioned above, the temperature and microclimate of the server room are significant factors influencing Power Usage Effectiveness: the cooler the location, the less energy is spent on chilling.
- Improve the cooling system. Use “free cooling” wherever possible. Air-conditioning an entire room is far less effective at preventing overheating than local, modular cooling. The layout of a data center matters as well: it is better to arrange the modules according to their power density and expected load.
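The first strategy’s rule of thumb, that 1 watt saved at the server becomes roughly 3 watts saved overall, follows directly from the PUE: if facility overhead scales with IT load, total savings equal the server-level savings multiplied by the PUE. A minimal sketch, assuming a constant PUE:

```python
def facility_savings_watts(server_watts_saved: float, pue: float) -> float:
    """Total facility power avoided when IT load drops, assuming overhead
    (cooling, power distribution) scales proportionally with IT load."""
    return server_watts_saved * pue

# At a PUE of 3.0, shaving 1 W off a server avoids about 3 W facility-wide.
print(facility_savings_watts(1.0, 3.0))  # → 3.0
```

In practice the multiplier is not perfectly constant, since some overhead is fixed, but it is a useful first-order estimate.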
Engineers seize every opportunity to reduce the power consumption of data centers. Given the importance of this task, unconventional software and hardware solutions focused on energy consumption and data center effectiveness will become more widespread. Some of them will likely originate at the confluence of several technologies.
Sooner rather than later, AI systems will be capable of managing various energy sources to find the optimal scenario for powering a data center. Such solutions will help data center operators implement more efficient practices and save on electricity bills.