Energy Efficient Thermal Management of Data Centers
The room scale has been the focus of efforts to enhance the effectiveness of the HACA layout and prevent recirculation. Proposed solutions for avoiding the mixing of cold supply air and hot return air include hot-aisle containment as illustrated in Fig. An alternative or companion concept is cold-aisle containment, as seen in Fig. Additional concepts include directed hot-air extraction above the rack, Fig. Airflow delivery advancements can be targeted at the room, row, and rack levels, as seen in Fig. Coupled consideration of rack and room airflows to optimize cooling has been considered [55, 56].
Alternative layouts to the HACA to improve air delivery have also been studied, such as the pod layout, which is described in detail in Chap. This scale includes air-delivery plenums, typically below or above the data center room space. Hot air return to the CRAC may be through a ducted passage (b). In several existing facilities it may not be possible to provide a raised floor. Distributed overhead cooling air supply units can be used to create an alternating hot- and cold-aisle arrangement (c).
Alternatively, wall-mounted CRAC units may provide ducted overhead supply of cold air (d). Moreover, because chilled water pipes for the CRACs and any liquid-cooled cabinets, as well as electrical cabling, often pass through the plenum, it plays an increasingly significant role in cooling future data centers.
The effective use of this scale, in combination with the previous ones, provides multiple options in configuring the overall multi-scale cooling systems of data centers. Data center facility infrastructure is typically designed and constructed for a period of 15–20 years, while the IT equipment is upgraded more frequently, sometimes in as little as 2 years. Through the use of the multi-scale design approach discussed above, it is possible to plan ahead for the systematic growth of thermal management capabilities of the data center facility over its entire design life, while optimizing the usage of energy.
This is demonstrated in Chap. Physical access to operating data centers for the purpose of thermal characterization is usually limited due to security and high reliability constraints. Also, large variations in data center architectures limit the extendibility of measurements across facilities.
These simulations can enable identification of potentially dangerous local hot spots and provide rapid evaluation of cooling alternatives as the facility IT loads or equipment change. With continual upgrades, the optimal arrangement of new heterogeneous IT equipment needs to be determined to minimize its effect on neighboring equipment. Additional constraints imposed by the grouping of IT equipment by functionality and cabling requirements often conflict with thermal management strategies, and data center managers need to provide localized supplemental cooling to high-power racks.
Thermal simulations are the best way to determine these requirements. The first published results for data center airflow modeling appeared in . Modeling efforts have addressed:
1. Raised floor plenum (RFP) airflow modeling to predict perforated tile flow rates
2. Thermal implications of CRAC and rack layout and power distribution
3. Alternative airflow supply and return schemes
4. Energy efficiency and thermal performance metrics
5. Rack-level thermal analysis
6. Compact modeling
Compact models often offer an acceptable tradeoff between modeling details and computational expense.
The latter are input–output functions obtained from physical conservation laws, or curve fits to experimental data. For example, fan and resistance models that specify a pressure–velocity relationship are common lumped models that compute the velocity through a surface as a function of the local upstream and downstream pressures.
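A minimal sketch of such a lumped resistance model follows; the loss coefficient, air density, and pressure values are illustrative assumptions, not values from the text:

```python
import math

def velocity_through_resistance(p_upstream, p_downstream, k_loss, rho=1.2):
    """Lumped resistance model: delta_p = 0.5 * rho * K * v**2,
    solved for the face velocity v (m/s). Pressures are in Pa."""
    dp = p_upstream - p_downstream
    if dp <= 0:
        return 0.0  # no forward flow through the resistance
    return math.sqrt(2.0 * dp / (rho * k_loss))

# Example: a 25 Pa drop across a screen with an assumed loss coefficient K = 10
v = velocity_through_resistance(25.0, 0.0, k_loss=10.0)  # about 2.04 m/s
```

In a CFD or flow-network solver, a function of this form closes the momentum balance at a perforated tile, rack door, or filter face without resolving its internal geometry.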
Compact models in the form of thermal resistance networks have been widely used in the thermal design of electronics packaging [59, 60]. The accuracy of these models strongly depends on how the resistance network is constructed and how multi-dimensional effects are approximated with equivalent one-dimensional resistances.
[Figure: arbitrary velocity and temperature inlet profiles; velocity and temperature outlet profiles.] Compact models use slightly more degrees of freedom (DOF) than lumped models, but provide internal states based on additional details that allow for further examination of the model. The main features of the model include flow obstructions created by drive bays and power supplies that provide a representative system pressure drop. A number of power dissipating components are also modeled with arbitrarily complex details, including multiple materials and chip-level thermal management devices such as heat sinks.
The model includes an induced-draft fan to pull the air through the server box and may also include a lumped resistance to simulate the pressure drop through screens at the front of the server. The model thermal inputs are inlet temperature and component-wise power dissipation rates. The flow inputs may vary depending on the scope of the compact model, with the simplest being a fixed velocity that also fixes the outlet velocity by continuity. More advanced strategies would use a fan model to drive the flow against the server system resistance accounting for the inlet and outlet static pressures.
This means that the compact models can be formulated to accept and output either profiles or area-averaged quantities. The model parameters are the geometry and material properties, which may range over specified values if the model is constructed to accommodate such variability. The process of taking a model from a large number of degrees of freedom (DOF), either from detailed numerical simulations or full-field experimental measurements, to a model involving significantly fewer DOF is termed model reduction.
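As a generic illustration of model reduction (not a method prescribed in the text), a proper orthogonal decomposition (POD) via the SVD compresses a high-DOF field into a handful of basis coefficients; the snapshot data below is synthetic:

```python
import numpy as np

# Synthetic "snapshot matrix": each column is one temperature field sample
# (200 DOF here) from a detailed model; the data is built to have rank 3.
rng = np.random.default_rng(0)
modes_true = rng.standard_normal((200, 3))   # three hidden spatial modes
coeffs = rng.standard_normal((3, 40))        # 40 operating conditions
snapshots = modes_true @ coeffs              # 200 x 40 snapshot matrix

# Proper orthogonal decomposition via the SVD: the r leading left singular
# vectors form a reduced basis for the field.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3
basis = U[:, :r]                             # 200 DOF -> r DOF

# Project a field into the reduced space and reconstruct it.
field = snapshots[:, 0]
reduced = basis.T @ field                    # r coefficients
reconstructed = basis @ reduced
error = np.linalg.norm(field - reconstructed) / np.linalg.norm(field)
```

Because the synthetic data has exactly rank 3, the three-mode reconstruction is essentially exact; for real CFD snapshots the retained mode count trades accuracy against model size.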
The three main thrusts are evaluation of global cooling schemes, investigation of the effects of local and supplemental cooling schemes, and improvement of energy efficiency at all levels of the data center. Any downtime in the operation of data centers incurs expense to the user, and typically to the equipment owner, if a different party, as in colocation data centers.
Under the service level agreement (SLA), the owner may have to pay, or forfeit being paid by, the user under such conditions. A company computer may lose information currently being processed if it goes down. A data center controlling an industrial process could extend the process downtime caused by a power failure as the data center is being restarted, and unavailability of Web sites may cause missed opportunities for sales.
In general, an IT cluster that loses power unexpectedly may have a complex restart requiring expert IT support. While some of these power failures are too short in duration to affect facility operation, the number of impacted facilities is still very high. A failure at a Dallas Rackspace data center illustrated that even the best prepared facility can stop operating, given a series of events that repeatedly caused infrastructure to fail. Shields considered a number of failure events in a raised floor air-cooled facility of the type illustrated in Fig.
Heat flows from the IT equipment to the data center air, and the energy convected by the air is released to the circulating chilled water via an air-to-water heat exchanger within the CRAC, such as in Fig. The chiller, located outside the data center, then removes the heat from the warm return water from the CRAC, through coupling with a refrigerant-based vapor compression system.
There are various levels of backup power infrastructure that are designed to maintain data center operation during a power failure. A classification of mission critical facilities developed by the Uptime Institute  divides these into categories called Tiers. This classification requires evaluation of other factors besides power and cooling infrastructure, as discussed in detail in Chap. The backup power infrastructure provided, or not provided, to each level of cooling infrastructure determines the most likely transient scenario during a power failure.
Several possible scenarios are shown in Fig. Some data centers do not have any backup power, or provide backup power within the rack for the head nodes only, just enough to minimize the amount of labor necessary to bring the IT equipment back online after power is restored. In this case, cooling of IT equipment is generally not an issue, since the power density of the data center drops drastically during the power failure.
Such a data center falls in the category of Tier 1. Running IT equipment without cooling infrastructure is a cause for concern.
This design may cause IT equipment air inlet temperatures to rise unacceptably if the power failure disrupts cooling for an extended period, and the problem is compounded as power density increases. It will likely be necessary for this type of data center to shut down IT equipment, automatically or manually, in the case of an extended power failure. In some cases, a data center with lower power density may be able to continue operation without downtime if an emergency generator is also installed.
The addition of an emergency generator to the backup power infrastructure shortens the maximum theoretical time span of a power failure for any equipment that it supplies. Backing up equipment with an emergency generator tends to be substantially less expensive than providing it with UPS. Typically, an emergency generator can restore power within about 30 s after a power failure. At this point, the CRACs and chilled water (CHW) pump with emergency generator backup power infrastructure would start almost immediately, even before the chillers restart.
Chillers typically require 2–3 min for a restart sequence for each compressor. Thus, a facility provided with emergency generator backup power infrastructure for its IT equipment and cooling infrastructure needs to determine what additional steps are needed to ensure the facility does not reach unacceptable temperatures before the chiller has a chance to come back online.
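To see why this restart window matters, a crude well-mixed energy balance bounds the air temperature rise while cooling is lost. All numbers here are illustrative assumptions; real rooms fare better because solid thermal mass and residual airflow absorb part of the load:

```python
def air_temperature_rise(q_it_kw, room_volume_m3, seconds, rho=1.2, cp=1005.0):
    """Upper-bound, well-mixed estimate of the room air temperature rise (K)
    during a cooling outage: all IT heat goes into the room air and none
    into solid thermal mass."""
    air_mass = rho * room_volume_m3                  # kg of air in the room
    return q_it_kw * 1000.0 * seconds / (air_mass * cp)

# Illustrative: 200 kW of IT load, a 1500 m^3 room, a 3 min chiller restart
rise = air_temperature_rise(200.0, 1500.0, 180.0)    # roughly 20 K
```

Even as an upper bound, a rise of this order over a few minutes shows why high-density facilities must plan explicitly for the chiller restart interval.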
With higher power density compute equipment, this delay can lead to recirculation of hot air from IT equipment outlets to inlets. An expected benefit from running the CRAC during the power failure is circulation of the air within the data center, which will maintain the airflow patterns that pressurize the cold aisle and minimize recirculation and hot spots. Another benefit of running the CRAC is the ability to transfer heat to media other than the room air, including the cooling coil within the CRAC and the solid surfaces in the facility, such as the concrete floor, raised floor plenum tiles, walls, and ceiling.
Any extension of the time window within acceptable operating temperatures gives the facility more time for power to be restored, either by emergency generator or by the utility service, and cooling infrastructure to come back online. In many cases, providing backup power infrastructure, especially UPS, to the CRAC may be enough to keep temperatures within acceptable ranges long enough for the emergency generator and chillers to come back online. However, some owners may desire a greater degree of reliability.
Under this scenario, the data center will continue operating at steady state, with no temperature rise, until all the CHW in the piping has made a complete circuit through the AH. With sufficient chilled water storage, the facility can operate with no rise in air temperature for an extended period. This would allow for false starts of both the generator and chiller. As discussed in Chap. Redundancy allows pieces of equipment or systems to fail without disrupting operation of the data center.
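The ride-through afforded by stored chilled water can be sketched with a simple sensible-heat estimate; the storage volume, allowable water temperature rise, and load below are illustrative assumptions:

```python
def chw_ride_through_minutes(storage_m3, delta_t_c, q_it_kw,
                             rho=1000.0, cp=4186.0):
    """Minutes of full-load cooling available from stored chilled water,
    assuming the water may warm by delta_t_c before it stops being useful."""
    energy_j = storage_m3 * rho * cp * delta_t_c   # sensible heat capacity
    return energy_j / (q_it_kw * 1000.0) / 60.0

# Illustrative: 40 m^3 of storage, an 8 C allowable rise, a 500 kW load
minutes = chw_ride_through_minutes(40.0, 8.0, 500.0)   # about 45 min
```

A window of this size comfortably covers generator start, chiller restart sequences, and even a false start or two, which is the design intent described above.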
This chapter has provided an introduction to the energy and thermal management of data centers. Trends such as sharp increases in chip and cabinet-level power density are discussed, along with computing paradigms such as virtualization and cloud computing. Early energy usage benchmarking studies for data centers, and scenarios for their sustainable growth, are identified.
The energy flow chain from the grid to the data center facility, which is key to reducing energy usage, is introduced. Environmental guidelines for the operation of data center facilities, which dictate the cooling requirements for data centers, are discussed. During the past decade, significant advances in energy usage have been achieved by the industry.
Best practices based on these experiences are briefly discussed. Thermal management of data centers requires a multi-scale approach. This is illustrated through the adoption of an open approach that allows continued growth of a facility. In order to optimize thermal management, it is essential to have efficient and accurate thermal modeling approaches.
An introduction to reduced order modeling is provided. Finally, the dynamic nature of data center operation is explored through facility provisioning for mitigation of power failures. The following 12 chapters explore many of these ideas in greater depth. Chapter 2 explores the current state-of-the-art of the mechanical and thermal design of data centers, with a focus on airflow management. In Chap. Chapter 4 focuses on emerging trends in the IT component of the overall power consumption. We discuss in detail how emerging usage models of computing are profoundly impacting it.
Monitoring of facility and IT data and its use in control of data centers is discussed in Chap. The path to sustainable data center growth requires quantifying and improving the meaningful metrics for energy efficiency.
Due to the multitude of energy flow and loss streams, this requires careful consideration, as explained in Chap. The use of sensing in concert with modeling offers great promise in improving energy efficiency of data centers. Chapter 7 focuses on metrology tools and their use in data centers, in concert with modeling. Exergy analysis provides a promising avenue to identify and reduce the thermodynamic inefficiencies in data center facilities. This is the focus of Chap.
Computational simulations for design and optimization of data centers must be able to provide rapid responses. Reduced order modeling to achieve this, and its use in improved designs of data centers, are discussed in Chap. Handling the large amount of data gathered from data center facilities, and using it in a meaningful fashion through statistical techniques, is the focus of Chap. The entire incoming electrical energy into a data center facility is ultimately converted into waste heat. Recent work on this topic is presented in Chap.
Cooling technology advances impacting energy efficiency in data centers are being rapidly explored worldwide.

Patterson MK, Fenwick D. The state of data center cooling: a review of current air and liquid cooling solutions.
Brocade Communications Systems Inc.
United States Environmental Protection Agency. Energy Star Program report to Congress on server and data center energy efficiency, Public Law.
Mitchell-Jackson J. Energy needs in an internet economy: a closer look at data centers. Master's thesis, University of California, Berkeley.
Garday D. Reducing data center energy consumption with wet side economizers.
Somani A. Advanced thermal management strategies for energy efficient data centers.
Patterson MK. The effect of data center temperature on energy efficiency. ITherm conference, Orlando, Florida, 28 May to 1 June.
Sullivan RF. Alternating cold and hot aisles provides more reliable cooling for server farms.
Beaverton Data Center World.
Data Center Dynamics.
Data Center Alliance.
Hewlett-Packard White Paper.
HPL. Eighth workshop on computer architecture evaluation using commercial workloads, February.
Nathuji, A.
Somani, K. Schwan, and Y. Joshi. HotPower Conference.
In: IEEE Transactions on Components and Packaging Technologies, vol 31, no.
Maximo Asset Management.
Intel White Paper, January.
Intel Technol J 12(1).
Wei J, Suzuki M. Thermal management of Fujitsu high-end UNIX servers.
Microsoft. Best practices for energy efficiency in Microsoft data center operations.
Microsoft Fact Sheet, February.
An introduction to the design of warehouse-scale machines.
Millpress, Rotterdam.
Wei XJ, Joshi Y. Experimental and numerical study of a stacked microchannel heat sink for liquid cooling of microelectronic devices.
Joshi Y, Wei X-J. Micro and meso scale compact heat exchangers in electronics thermal management: review. Electronics Cooling Magazine.
Coggins, D.
Gerlach, Y. Joshi, and A. Compact low temperature refrigeration of microprocessors.
Herrlin MK. Rack cooling effectiveness in data centers: a look at the mathematics. Energy and Power Management, May, pp 20–21.
Rambo J, Joshi Y. Modeling of data center airflow and heat transfer: state of the art and future trends. Distributed and Parallel Databases, special issue on high density data centers, vol 21.
Schmidt R, Iyengar M. Effect of data center layout on rack inlet air temperatures.
Optimization of data center room layout to minimize rack inlet air temperature.
Somani A, Gupta T, Joshi Y. Scalable pods based cabinet arrangement and air delivery for energy efficient data center.
Bar-Cohen A, Krueger W. Thermal characterization of chip packages: evolutionary design of compact models. In: IEEE Transactions on Components, Packaging, and Manufacturing Technology, Part A, vol 20.
Shapiro B. Creating reduced-order models for electronic systems: an overview and suggested use of existing model reduction and experimental system identification tools.
Millpress, Rotterdam.
The U.
Golson J. How Rackspace Really Went Down.
Shields S. Dynamic thermal response of the data center to cooling loss during facility power failure, M.
Industry standard tier classifications define site infrastructure performance. The Uptime Institute Inc.

Abstract: Airflow management is probably the most important aspect of data center thermal management. It is an intricate and challenging process, influenced by many factors.
As such, it provides a foundation necessary for understanding the remaining topics discussed in the book. The chapter begins by introducing the concept of system pressure drop and its influence on the computer room air conditioning CRAC unit performance. Various factors contributing to the overall pressure drop, such as plenum design, perforated tile open area, and aisle layouts, are described. Some of the key aspects of room and rack airflows are also discussed. The second part of the chapter highlights the importance of temperature and humidity control in data centers.
The basic concepts of psychrometrics are introduced. The concepts of airside and waterside economizers for data center cooling are also introduced. The third and final part of the chapter describes an ensemble COP model for assessing the overall thermal efficiency and performance of the data center. Use of the material presented in the chapter for data center design or for other practical applications is offered as suggestions only.
The authors do not assume responsibility for the performance or design issues which may arise due to implementation of the suggestions. Kumar and Y. Joshi. The term data center refers to a facility where information technology (IT) equipment and the associated supporting infrastructure, such as the power and cooling needed to operate the IT equipment, are located. The success of a data center lies in retrieving information, either stored or processed in the IT equipment, and making it readily available on demand to the end user.
The concept of data centers came into existence almost with the birth of the mainframe computer ENIAC at the University of Pennsylvania. Early data centers were mainly limited to space technology, defense, and other government establishments, and had highly restricted access. Only in the past decade or so have data centers become mainstream, due to the rapid evolution of IT and telecommunications products and technologies.
Figure 2. Two factors contribute heavily to the successful operation of a data center: reliable power and reliable cooling. For many decades, reliable power was a primary concern, and as such emphasis was placed on improving the power quality, availability, and reliability. Today, reliable power is widely available. Although electrical power is also required to operate the cooling hardware, reliable power still does not ensure reliable cooling, and hence adequately provisioned cooling continues to be a major concern for data center operation.
Most of these data centers were co-located with common office spaces. Since the average heat densities were low, the room air conditioners, which were primarily meant for human comfort, could handle the extra heat of the computing equipment. This made cooling a minor concern.
Over the years, the increase in demand for IT, coupled with the shrinking form factors resulted in unprecedented levels of power and heat densities in data centers. The heat load of IT equipment alone routinely exceeded the entire building heat load. Large chillers and air handlers had to be installed to combat the increase in heat densities of the IT equipment. The noise generated by the fans in the air handlers made the environment inhospitable for human comfort.
This led to the isolation of office spaces from computer rooms, thus marking the emergence of stand-alone computer rooms or data centers. This facility, host to IBM mainframes, was designed to handle large databases, remote computing, and high-throughput multiprogramming. Today, modern data centers are warehouse-size independent facilities, solely dedicated to housing digital IT equipment and associated cooling and power infrastructure. Poor thermal management can have a number of adverse implications, such as premature failure of servers due to inefficient airflow distribution, increased downtime, and poor reliability, all of which result in a significant increase in operating cost.
Thermal management is particularly critical to data centers housing financial information. Recent legislation in the USA, such as the Sarbanes–Oxley Act, makes it mandatory for financial companies to disclose information on demand, which makes it imperative that companies not risk downtime or lose data that could jeopardize the timely availability of information. Successful thermal management can significantly reduce operating costs as well. According to a number of recent studies e.g. These statistics suggest that improving the cooling efficiency is a step in the right direction.
What makes data center thermal management challenging is the dynamic and thus unpredictable nature of the IT equipment heat load. In addition, most data centers house racks of various airflow configurations. These nonuniform airflow directions, coupled with the large diversity in equipment from different vendors, present a hurdle in optimizing the air distribution. Unlike power, which is self-tuning and easily deliverable through well-defined paths such as cables or overhead bus ways, the absence of well-defined or confined paths for air often makes it difficult to deliver adequate quantities of cold air at the inlet of each and every server inside the data center.
As an added challenge, provisioning more cooling to one rack could affect the cooling of neighboring racks. While these guidelines provide an envelope for safe operation, methods of operating within the safe envelope are application dependent and left to the end user. Presently, most data centers are managed based either on intuition or on accumulated experience. The success of data center thermal management lies in providing the right airflow rate, at the right temperature, to all the IT equipment, under all operating conditions. The primary source of thermodynamic inefficiency in an air-cooled data center is the intermingling of hot and cold streams.
As explained in Chap. In this arrangement, the racks housing the IT equipment act as physical barriers, isolating the hot and cold airstreams. Once the physical separation is achieved, the next major task is the supply of cold air and extraction of hot air in the most effective manner, expending minimum energy.
There are a number of methods of supplying cold air to the IT equipment, classified as room-, row-, and rack-based methods. In the case of room-based cooling, cold air is distributed via an under-floor plenum or using a system of overhead ducts, as illustrated in Fig. Both these methods are widely used in modern data centers. In both these systems, large air handlers employing fan-coil systems, known as computer room air conditioning (CRAC) units, are used to force air through the overhead or under-floor plenum.
The cold air supplied by the CRAC units in the cold aisle at the inlet of the racks is heated as it passes through the IT equipment and discharged into the hot aisle. In case of an under-floor air distribution UFAD system, the hot air collected in the hot aisle is returned to the CRAC units either through the room or by a dedicated ceiling return plenum as illustrated in Fig.
The water circulating through the heat exchangers in the CRAC unit is cooled through a secondary loop connected to a chiller or a refrigeration unit, which eventually rejects heat to the ambient environment. The primary air-cooling loop and the secondary chilled-water loop are illustrated in Fig. From Fig. It is the complex interdependency of these factors which makes data center thermal management a challenging task. As stated earlier, there is no universal thermal management recipe. A solution that may be very effective in one case may prove to be ineffective, or worse, detrimental in another.
Presently, most data centers are managed by following a set of best practices suggested by ASHRAE or other sources. This chapter elucidates some of the key scientific concepts underlying the best practices. This will help the user develop an insight into the fundamental aspects of data center thermal management. The chapter is divided into three sections.
The first section highlights the importance of airflow distribution in a data center. Since the majority of data centers today employ a raised-floor air distribution system, emphasis is laid on the factors affecting air distribution in these facilities. The overhead air distribution system is also briefly discussed. The second section focuses on temperature distribution and on humidity control in data centers.
Basic psychrometric concepts are used to analyze the various cooling processes. The last section describes a methodology for assessing the data center thermal efficiency using an ensemble COP model. We know that the movement of air is influenced by static pressure variation. Therefore, the first step towards successful air management is understanding the static pressure distribution within the data center.
To characterize the static pressure distribution, the entire data center, comprising the IT equipment and the cooling infrastructure, has to be treated as a single unified system. The following terminology, used in the context of data center air distribution, is introduced. Static pressure varies locally along the flow and is obtained either experimentally, using a pitot-static probe, or computationally, by solving the fluid flow equations.
The CRAC unit must overcome the total static pressure imposed by the system in order to move air through it. The dynamic pressure is used to calculate air velocity along the path of flow. Static pressure and dynamic pressure can change due to acceleration or deceleration of the flow as the air flows through the under-floor plenum (or ducts, in the case of overhead distribution), perforated tile vents, rack doors, etc.
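These definitions can be made concrete with the standard pitot-static relations; the air density and pressure readings below are illustrative:

```python
import math

RHO_AIR = 1.2  # kg/m^3, a typical supply-air density (assumed)

def dynamic_pressure(velocity):
    """Dynamic pressure q = 0.5 * rho * v**2, in Pa."""
    return 0.5 * RHO_AIR * velocity ** 2

def velocity_from_pitot(p_total, p_static):
    """Air speed recovered from a pitot-static probe: the difference
    between total and static pressure is the dynamic pressure."""
    return math.sqrt(2.0 * (p_total - p_static) / RHO_AIR)

q = dynamic_pressure(3.0)                      # a 3 m/s stream carries 5.4 Pa
v = velocity_from_pitot(101330.4, 101325.0)    # and 5.4 Pa recovers 3 m/s
```

This is exactly the calculation a pitot-static probe survey in a plenum or aisle performs at each measurement point.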
The total pressure is conserved unless lost due to friction. The facility consists of a CRAC unit comprising a fan-coil system, a raised-floor plenum for distributing the air from the CRAC unit to the cold aisle, a set of perforated tiles for delivering air to the IT equipment, and a ceiling return for diverting the hot air back to the CRAC unit.
The associated pressure drop across each of the devices is shown alongside in Fig. The corresponding changes in the airstream static pressure from the point of entry into the CRAC unit are illustrated in Fig. This static pressure profile of the data center is useful in illustrating the necessary static pressure rise across the CRAC unit. The CRAC fans raise the air static pressure from the room-return value so that the pressure at the perforated tile exits is slightly above atmospheric. System resistance is an obstruction to flow and is used in the context of static pressure loss.
In the present discussion, we refer to the entire data center, comprising the IT equipment and the air-delivery portion of the cooling infrastructure, as a system. The system resistance is the sum of static pressure losses across all the components participating in air delivery in the data center.
It is a function of the flow rate, and is represented as a system resistance curve, as illustrated in Fig. From the basic principles of fluid mechanics, the system resistance varies with the square of the volumetric flow rate in turbulent flow. The system resistance has a major role in determining the performance and efficiency of air distribution.
Any change in the data center operating conditions will result in a change in the characteristic system resistance curve. For example, if the CRAC filters are clogged, new racks are added, or containment systems are introduced, the system resistance would increase. As a result, the system resistance curve will shift upwards as indicated in Fig. Alternatively, if racks are removed, or if the perforated tiles are replaced with higher open area tiles, the system resistance would decrease, resulting in a downward shift in system resistance curve as illustrated in Fig.
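The quadratic system curve and its shifts can be sketched as follows; the loss coefficients are illustrative, with a 30% rise standing in for clogged filters or added containment:

```python
def system_resistance(k, q):
    """System resistance curve: static pressure loss (Pa) at volumetric
    flow q (m^3/s); k lumps all the component loss coefficients."""
    return k * q ** 2

# Baseline curve vs. a curve with k raised 30%: at the same flow, the
# modified system demands more static pressure, i.e. the curve shifts up.
k_base, k_clogged = 12.0, 15.6
dp_base = system_resistance(k_base, 5.0)       # 300 Pa
dp_clogged = system_resistance(k_clogged, 5.0) # 390 Pa
```

Lowering k (removing racks, opening up tile free area) shifts the curve down in the same way.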
The system resistance curve provides information on the static pressure head the fans in the CRAC must overcome in order to deliver the airflow rate required by the IT equipment. A basic knowledge of the laws governing fan operation is necessary to understand the effect of system resistance on the volume of air delivered by the fans in the CRAC unit.
These are introduced in the following section. All fans operate under a set of laws concerning speed, power, and pressure drop. The fan speed, expressed in revolutions per minute (rpm), is one of the most important operating variables. The volume of air displaced by the fan blades depends on the rotational speed.
The airflow rate increases in direct proportion to the fan speed. Thus, changes in the demand for airflow in a data center can be efficiently met by altering the fan speed within the safe operating range specified by the manufacturer. The second fan law relates the static pressure rise across the fan to the fan speed: for two speeds N1 and N2, Δp2/Δp1 = (N2/N1)². The airstream moving through the fan experiences a static pressure rise due to the mechanical energy expended by the rotating fan blades. Note that the static pressure rise across the fan rises rapidly with increase in fan operating speed.
Fan speed variation also affects the fan power consumption. The fan power is proportional to the cube of the fan speed, P2/P1 = (N2/N1)^3, as stated by the third fan law. Note that the above fan laws are valid for geometrically similar fans. The relationship between the volume of air delivered by the fan and the corresponding static pressure rise, for various flow rates at a specific speed, is specified by the manufacturer in the form of a fan performance curve.
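The three fan laws can be expressed as simple ratio relations. A minimal sketch, with illustrative speeds and baseline values:

```python
# The fan affinity laws for geometrically similar fans:
# flow Q scales with speed N, pressure dp with N^2, power P with N^3.
# Baseline values below are illustrative only.

def scale_flow(q1, n1, n2):
    return q1 * (n2 / n1)

def scale_pressure(dp1, n1, n2):
    return dp1 * (n2 / n1) ** 2

def scale_power(p1, n1, n2):
    return p1 * (n2 / n1) ** 3

# Example: raising fan speed from 1000 to 1200 rpm (a 20 % increase)
q2 = scale_flow(10.0, 1000, 1200)        # flow rises 20 %  -> 12.0 m^3/s
dp2 = scale_pressure(500.0, 1000, 1200)  # pressure rises 44 % -> 720 Pa
p2 = scale_power(5.0, 1000, 1200)        # power rises ~73 % -> 8.64 kW
```

The disproportionate growth of the power term is why variable-speed operation, discussed later in the chapter, is such an effective efficiency lever.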
The system resistance curve presented in Fig. This data is specific to each CRAC and is based on the fan geometry and operational speed. Different operating speeds will yield different characteristic curves. With this information, decisions can be made regarding the volumetric flow rate of the CRAC unit, the fan operating speed, and the pressure drop of the system. This, however, is only an ideal condition; in reality, data centers rarely operate at a fixed operating point.
As described previously, the system resistance can change due to a number of normal operational factors, such as clogging of the CRAC filters and relocation, addition, or removal of IT equipment. Small changes in the airflow required by the IT equipment due to variation in server fan speed can also impact the system resistance. The extent of the variation in airflow delivered by the CRAC depends on the characteristic fan curve and the system resistance curve. Some of these effects are highlighted in Fig. Major changes such as the introduction of containment systems or of new IT equipment would increase the system resistance, causing the system resistance curve to shift upwards.
This increased airflow resistance will shift the operating point to B as illustrated in Fig. Alternatively, if racks are removed or if the perforated tiles are replaced with higher open area tiles, the system resistance would decrease, resulting in a downward shift in the system resistance curve as illustrated in Fig. Many fans exhibit unstable behavior if the static pressure exceeds a certain limit. Some of the factors listed above that contribute to a change in the system curve are considered normal, and CRAC fans are designed to accommodate these changes without appreciably impacting the airflow rate.
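The shift of the operating point when the system resistance changes can be sketched numerically. The quadratic fan curve below is a hypothetical stand-in for a manufacturer's performance curve, and all coefficients are illustrative:

```python
import math

# The operating point is where a fan curve meets the system curve.
# Assume a hypothetical quadratic fan curve dp_fan(Q) = dp_max*(1 - (Q/q_max)^2)
# and a system curve dp_sys(Q) = R*Q^2; the intersection has a closed form.

def operating_point(dp_max, q_max, resistance):
    q = math.sqrt(dp_max / (resistance + dp_max / q_max ** 2))
    return q, resistance * q ** 2

# Baseline system (point A) vs. increased resistance (point B)
q_a, dp_a = operating_point(dp_max=600.0, q_max=6.0, resistance=50.0)
q_b, dp_b = operating_point(dp_max=600.0, q_max=6.0, resistance=100.0)

# Higher resistance: less air delivered at a higher static pressure.
assert q_b < q_a and dp_b > dp_a
```

With these numbers, doubling the resistance drops the delivered flow from 3.0 to about 2.27 m^3/s while the static pressure rises, mirroring the A-to-B shift in the figure.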
The flow rate from the CRAC is significantly impacted by a change in room or aisle layout, variation in plenum height, obstruction in the air delivery path, or implementation of an improperly designed containment system. Variable speed operation of the CRAC involves controlling the speed of the fan to efficiently meet the dynamic cooling air requirements of the data center.
CRAC fan performance can be predicted at different speeds using the fan laws. Since the power input to the fan varies as the cube of the airflow rate, varying the fan speed is the most efficient form of capacity control. If cost is a driving factor in a facility with multiple CRAC units, some units may be fitted with variable frequency drives (VFDs) to accommodate capacity variations. The benefit of variable fan speed can be impaired by the efficiency of the VFD and the associated control system; this should be accounted for in the analysis of power consumption. In many cases, increasing the CRAC fan speed within the manufacturer-recommended limits can mitigate the reduction in airflow caused by an increase in system resistance.
From the first fan law in 2. , the flow rate delivered by the fan scales directly with its speed. These observations are valid provided there is no change in system resistance. However, if the system resistance decreases, for example due to installation of high flow tiles, the fan speed could be decreased to maintain identical flow conditions. The economic benefits of operating at higher fan speed must be carefully considered because, according to the third fan law 2. , the fan power increases with the cube of the speed.
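The cost of "speeding up to compensate" can be estimated. The sketch below assumes a hypothetical quadratic fan curve whose pressure at flow Q and speed ratio r = N2/N1 is dp_fan = dp_max*(r^2 - (Q/q_max)^2); all numbers are illustrative, not from the text.

```python
import math

# How much faster must the fan spin to restore the original flow q0
# after the system resistance rises to new_resistance? Solve
#   dp_max*(r^2 - (q0/q_max)^2) = new_resistance * q0^2   for r.

def speed_ratio_for_constant_flow(q0, new_resistance, dp_max, q_max):
    """Required speed ratio N2/N1 to hold flow q0 against the new system curve."""
    return math.sqrt((new_resistance * q0 ** 2 + dp_max * q0 ** 2 / q_max ** 2) / dp_max)

# Illustrative case: resistance doubles from 50 to 100 Pa/(m^3/s)^2.
r = speed_ratio_for_constant_flow(q0=3.0, new_resistance=100.0,
                                  dp_max=600.0, q_max=6.0)  # ~1.32x speed
power_ratio = r ** 3  # third fan law: ~2.3x fan power to hold the same flow
```

A roughly 32 % speed increase restores the flow but more than doubles the fan power, which is the economic trade-off the text warns about.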
The airflow requirement of a data center is based on the cooling demands of the IT equipment, which is either specified by the equipment manufacturer or is determined from supply temperatures and the equipment heat loads. Precise determination of airflow and outlet pressure at the perforated tile, or at the ceiling grille in the case of an overhead system, is important in sizing the CRAC unit.
Detailed CFD analysis, as discussed further in Chap. The analysis should account for the pressure drop across the plenum, obstructions due to blockages, and the pressure drops across CRAC filters, perforated tiles, and rack grills. The CRAC unit should be selected only after the system flow and pressure requirements are determined. Since the system pressure drop is difficult to predict accurately, a conservative approach with large safety margins is often adopted.
This leads to substantial oversizing of the CRAC units, which operate at flow rates well below their design values, resulting in poor efficiencies. The subsequent sections of the chapter focus on factors affecting the pressure drop across various components and their influence on air distribution in the data center. Referring back to Fig. Under normal operating conditions the pressure drop across the CRAC is largely unaffected by static pressure variations across the plenum, perforated tiles, or the room. The room pressures (in the space above the plenum, housing the IT equipment) are relatively close to the atmospheric pressure.
Since the volume of the air above the raised floor is large compared to the plenum volume, variations in room pressure are usually insignificant, compared to the pressure variations across the perforated tiles and the subfloor. On the basis of this assumption, one can neglect the room-level pressure variations relative to the subfloor. This not only simplifies the understanding of flow physics but also provides the flexibility to independently analyze the raised-floor plenum, perforated tiles, and room.
Large variations in horizontal velocities are incurred in the vicinity of the CRAC unit, which set up large pressure gradients, giving rise to highly nonuniform flow across the perforated tiles. The decrease in the volume of air due to partial venting through the perforated tiles results in a gradual decrease in the horizontal velocities. This is accompanied by a corresponding increase in static pressure. The gradual increase in static pressure downstream of the CRAC unit in effect increases the net flow of air from the tiles placed downstream of the CRAC unit, as illustrated in Fig.
In some cases, the static pressure near the CRAC exhaust can fall below the atmospheric pressure, giving rise to negative (reversed) flows. The low pressure regions originate due to the sudden expansion of the impinging jets of air on the plenum floor; this is sometimes referred to as the Venturi effect.
The variation in the air discharged from the perforated tiles is influenced by the static pressure distribution in the plenum. The static pressure in the plenum is influenced by many design variables such as plenum height, location of the CRAC units, blockages under the perforated floor and open area of the perforated tile. The influence of these factors is discussed in the following sections.
Changes in local plenum pressure are caused by variations in velocities in the plenum. The magnitude of variation in the average horizontal velocity depends on the cross-sectional area of the plenum, A_plenum, which is normal to the flow as shown in Fig. For identical CRAC flow rates, shallow plenum depths will result in higher plenum velocities compared to deeper plenums.
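The effect of plenum depth on the average horizontal velocity follows directly from continuity, V = Q / A_plenum. A minimal sketch, with illustrative dimensions and flow rate:

```python
# Average horizontal plenum velocity near the CRAC discharge for two
# plenum depths. All values are illustrative; A_plenum = depth * width
# is the cross-section normal to the flow.

def plenum_velocity(q_m3_s, depth_m, width_m):
    return q_m3_s / (depth_m * width_m)

q, width = 5.0, 3.0  # m^3/s and m, hypothetical
v_shallow = plenum_velocity(q, 0.3, width)  # ~5.6 m/s in a 30 cm plenum
v_deep = plenum_velocity(q, 0.9, width)     # ~1.9 m/s in a 90 cm plenum

# Since pressure variations scale with V^2, the shallow plenum sees
# roughly (0.9/0.3)^2 = 9x larger pressure nonuniformity.
```

This is why deeper plenums produce the more uniform tile flow described below.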
Lower plenum velocities lead to higher plenum pressures. The degree of flow maldistribution is significant only if the variation in plenum pressure is comparable to the pressure drop across the perforated tile. Hence, both the perforated tiles and the plenum height play an important role in plenum pressure distribution. The effect of plenum height on perforated tile flow is illustrated in Fig. The results presented in Fig. The data center facility consisted of 30 tiles arranged in two rows to form the cold aisle in front of the CRAC unit as shown in Fig.
As observed from Fig. The authors also found that the intensity of reversed flow in the perforated tiles placed close to the CRAC unit considerably reduces with increase in plenum depth. The pressure variations in the horizontal plane under the plenum are more pronounced for shallow plenums with large negative pressure zones near the CRAC unit, which is mainly responsible for reversed flows. The pressure variations decrease with increase in plenum depths, resulting in more uniform pressure distributions. For the layout illustrated in Fig. The flow through the perforated tile is influenced by the open area.
The discharge velocity from the tile is related to the plenum pressure under the tile via the relationship Δp_perf,tile = (1/2) ρ K_perf,tile V_tile^2, so that V_tile = sqrt(2 Δp_perf,tile / (ρ K_perf,tile)). The perforated tile model presented in the following sections is based on the work reported in [10-13]. Δp_perf,tile is the difference between the static plenum pressure and the room pressure above the raised floor and is defined as Δp_perf,tile = p_plenum - p_room. The flow resistance factor is related to the loss coefficient for the tile and is defined as R_perf,tile = ρ K_perf,tile / (2 A_tile^2), so that Δp_perf,tile = R_perf,tile Q_tile^2.
K is empirically evaluated using the relation reported in . The flow direction is determined by the sign of Δp_perf,tile. Negative Δp_perf,tile values indicate reverse flows, where the plenum pressure is lower than the room pressure. For the air to flow from the plenum to the room, the plenum pressure has to be greater than the room pressure.
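The velocity-head form of the tile model, including the sign convention for reverse flow, can be sketched as follows. The K and pressure values are illustrative, not data from the text:

```python
import math

# Perforated-tile discharge model in loss-coefficient form:
#   dp = 0.5 * rho * K * V^2, so V = sqrt(2*dp / (rho*K)).
# K counts how many velocity heads are lost across the tile.

RHO_AIR = 1.2  # kg/m^3, approximate density of air at room conditions

def tile_velocity(dp_pa, k_tile):
    """Discharge velocity (m/s) for a plenum-to-room pressure difference.

    A negative dp means the plenum pressure is below room pressure,
    which the model reports as a reversed (negative) velocity.
    """
    sign = 1.0 if dp_pa >= 0 else -1.0
    return sign * math.sqrt(2.0 * abs(dp_pa) / (RHO_AIR * k_tile))

v = tile_velocity(30.0, 10.0)       # ~2.2 m/s out of the tile
v_rev = tile_velocity(-30.0, 10.0)  # same magnitude, room into plenum
```

A larger K (a more restrictive tile) lowers the discharge velocity for the same plenum pressure, which is the lever used later for flow balancing.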
The plenum air has to overcome the pressure drop in the tile. In a physical sense, the K_perf,tile factor indicates how many velocity heads are lost due to the pressure drop across the tile. When the air jets discharged from the tile vents spread into the open space, a pressure change of one velocity head is produced. Since this is small compared to the pressure drop across the perforated tile, the pressure variations in the space above the raised floor can be neglected.
Under this assumption, the room is considered to be at uniform atmospheric pressure throughout. The only other regions where static pressure variations are significant are near the rack inlets and the CRAC unit. The effect of pressure variation at the rack inlet is investigated later in the chapter. The pressure drop calculated using the above expression in 2. The tile open area plays a major role in maintaining the plenum static pressure. In general, as the tile open area decreases, the flow across the perforated tiles tends to become more uniform.
This is because the variations in the horizontal pressure gradients in the plenum under the perforated tile become less significant compared to the pressure drop across the perforated tile. However, the decrease in the perforated tile area is marked by an associated increase in plenum pressure. The increase in plenum pressure leads to loss of air through gaps between perforated tiles and openings in the raised floor provisioned for bringing in electrical cables. As the open area of the perforated tiles decreases, the leakage area becomes comparable to the total perforated tile open area.
The flow resistance of the perforated tiles now becomes comparable to the flow resistance of the openings in the raised floor, leading to an increase in plenum air leakage. The effect of plenum air leakage is discussed in the following section. Blockages impede the plenum flow, causing nonuniformity in plenum pressures. This results in maldistribution of flow across the perforated tiles.
To quantitatively ascertain the extent of flow maldistribution, the obstructions have to be included in the CFD model for the plenum. The impact of the flow maldistribution due to under-floor blockages using CFD tools is presented in Chap. The obstructions reduce the effective area available for the flow of air. The variation in plenum area available for flow causes variation in horizontal velocities in the plenum. Since Δp ∝ V^2, a more significant variation in plenum pressure is observed. Due to retardation of the flow on the upstream side of the blockage, the static pressure increases, resulting in higher perforated tile flow rates upstream compared to the downstream side of the blockage, where the static pressure is low.
In some cases the static pressure can drop below ambient pressure, which would result in reversed flows. Significant variation in flow is usually observed when the dimension of the obstruction normal to the flow is comparable to the plenum height. For example, a cm diameter circular pipe would hardly affect the flow in a cm high plenum.
However, this would significantly affect the flow if the plenum height were reduced to 30 cm (12 in.).
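The reasoning above, that the same obstruction matters far more in a shallow plenum, can be sketched with a simple area ratio. Dimensions below are illustrative and assume a full-width blockage:

```python
# Effect of an under-floor blockage: the obstruction reduces the local
# plenum cross-section, the velocity rises by the area ratio, and since
# dp scales with V^2 the pressure disturbance grows quadratically.

def velocity_ratio(depth_m, blockage_height_m):
    """Local/upstream velocity ratio past a full-width blockage."""
    return depth_m / (depth_m - blockage_height_m)

# A 15 cm obstruction in a 90 cm plenum barely matters...
r_deep = velocity_ratio(0.9, 0.15)     # 1.2x velocity, ~1.44x dp variation
# ...but dominates a 30 cm plenum:
r_shallow = velocity_ratio(0.3, 0.15)  # 2x velocity, 4x dp variation
```

The quadratic dependence on velocity is why a pipe comparable in size to the plenum height produces such pronounced tile-flow maldistribution.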
The cm diameter pipe is placed parallel to the CRAC unit between tile 5 and tile 6, leaving a clearance of 15 cm above the pipe for the air to flow. Referring to Fig. The pressure variations in turn affect the perforated tile flow rates as illustrated in Fig. Note that the increase in plenum pressure in the upstream region of the pipe results in higher flow rates in the perforated tiles placed near the CRAC unit.
In general, the obstructions alter the plenum pressure distribution, resulting in flow maldistribution across the perforated tiles. The extent of flow maldistribution depends not only on the size of the obstruction but also on its location. Further details on the impact of blockages on perforated tile flow rates for various cases are presented in Chap. Air leakage from the plenum is a major issue. The lost air represents wastage of the precious cooling resource, as it does not contribute to cooling of the compute equipment.
If the source of leakage is located in the hot aisle, mixing of the cold and warm airstreams will drop the average hot air return temperature to the CRAC unit. This not only impacts the blower energy use but also impairs the CRAC cooling performance. Air often leaks from the plenum through minute gaps between the perforated tiles and cutouts provisioned for routing the electrical and network cables housed on the raised floor.
The air loss through the panel gaps in the perimeter of the tile, between the raised floor and the tiles, is illustrated in Fig. The leakage area can be in the range of 0. It is estimated that in a data center operating with 10 cooling units, the airflow from one of the CRAC units is consumed making up for the air lost through floor-tile gap leakage.
During data center design, distribution losses are accounted for as fixed losses, generally based on rules of thumb or limited experimental measurements. In reality, however, these losses are related to variations in plenum pressure, and they increase significantly at higher plenum pressures. A decrease in perforated tile area increases the plenum pressure, resulting in an increase in distribution loss. A model developed in the literature  for these losses is described in the following section.
The expression for the pressure drop across the tile, represented by 2. , applies equally to the gap openings. Since the plenum pressure is identical, we can express the gap air velocity in terms of the tile velocity as V_gap = V_tile sqrt(K_perf,tile / K_gap). The total quantity of air discharged from the gap and tile is then Q_total = V_tile A_tile + V_gap A_gap. Using 2. The above expression is valid under the following assumptions: (a) the plenum pressure is uniform and not affected by the leakages or blockages.
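The leakage model described above follows from equating the pressure drop across tile and gap: 0.5 ρ K_tile V_tile^2 = 0.5 ρ K_gap V_gap^2. A sketch with illustrative K and area values (not data from the text):

```python
import math

# Distributed leakage model: tile and gap see the same plenum pressure,
# so V_gap = V_tile * sqrt(K_tile / K_gap). The fraction of plenum air
# lost through the gaps follows from the area-weighted flows.

def leakage_fraction(a_tile, k_tile, a_gap, k_gap):
    """Fraction of total plenum airflow escaping through the gaps."""
    v_ratio = math.sqrt(k_tile / k_gap)  # V_gap / V_tile
    q_gap = v_ratio * a_gap              # gap flow per unit tile velocity
    return q_gap / (a_tile + q_gap)

# More restrictive tiles (higher K_tile) raise the plenum pressure and
# push a larger share of the air through the gaps:
f_open = leakage_fraction(a_tile=10.0, k_tile=5.0, a_gap=1.0, k_gap=20.0)
f_restrictive = leakage_fraction(a_tile=10.0, k_tile=50.0, a_gap=1.0, k_gap=20.0)
```

With these numbers the leakage fraction roughly triples when the tile loss coefficient rises tenfold, consistent with the text's point that low-open-area tiles increase distribution losses.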
The pressure drop under the floor becomes more uniform as the number of perforated tiles decreases or if more restrictive tiles are used. As discussed earlier, increasing the plenum height can also result in a more uniform pressure distribution. The above equation 2.
The strength of the above expression is exhibited in Fig. Sealing the openings in the perforated tiles and plenum walls would only partially increase the available air for cooling. Typically, the distribution losses are high in large data center facilities operating at partial capacity, because the area occupied by the perforated tiles is small compared to the total floor area. The distribution leakage also increases with facility age, due to wear in the tile seals.
Depending on the wear in the tile seal, the corresponding distributed leakage area can vary between 0. Achieving uniformity in flow rates plays a major role in data center floor design. Successful floor layouts exhibit uniform air distribution across all the perforated tiles. Owing to design constraints, such as blockages due to obstructions from chilled water pipes, cable trays, and cables, it is difficult to achieve this in practice.
Furthermore, in many cases the perforated tile flow rate may need to be tuned to the rack airflow requirements. Hence, the solution for tuning airflow has to be flexible, without involving infrastructural changes such as increasing the raised floor height or installing new CRAC units. There are many ways to balance the flow across the perforated tiles. The two most commonly used methods are discussed below. (a) Variable open area perforated tiles: The nonuniformity in the flow across various perforated tiles is primarily due to the variation in plenum static pressure.
As explained in Sect. From 2. , the velocity of the air discharged from the tile depends on the open area of the tile. In general, installing perforated tiles with a larger open area will reduce the pressure drop across the tile, which will increase the flow rate from the tile.

As data centers look to reduce their environmental impact on local communities, the reduction or complete elimination of water use has been an important focus.
Additionally, the ongoing cost of water in areas where it may be a relatively scarce commodity and the recurring cost of water treatment continue to increase operating costs. Finally, the sheer volume of water consumed, estimated at 6. These factors, as well as many other advantages, are part of why water-free cooling systems have grown in popularity in recent years.
By utilizing pumped refrigerant technology in place of energy-intensive mechanical cooling through parts of the year, industry-leading systems can deliver annual mechanical power usage effectiveness (PUE) ranging between 1. Pumped refrigerant systems use a refrigerant pump in place of compressors to save energy in low to moderate ambient conditions. The systems rely on outdoor ambient temperatures and IT load to optimize operation instead of defined outdoor temperature set points, allowing the operator to maximize their potential economization hours. Such systems deliver a consistent data center environment through physical separation of indoor and outdoor air streams, preventing cross-contamination or transfer of humidity.
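As a side note on the metric: PUE is the ratio of total facility power to IT power, and a cooling-only ("mechanical") partial PUE counts just the cooling overhead. A minimal sketch with illustrative loads, not figures from the text:

```python
# Power usage effectiveness arithmetic. PUE = total facility power / IT
# power; a value of 1.0 would mean zero overhead. Loads are hypothetical.

def pue(it_kw, cooling_kw, other_kw=0.0):
    return (it_kw + cooling_kw + other_kw) / it_kw

# 1 MW of IT load with 50 kW of cooling overhead:
mech_pue = pue(1000.0, 50.0)           # mechanical (cooling-only) PUE of 1.05
full_pue = pue(1000.0, 50.0, 150.0)    # 1.20 once power distribution, lighting,
                                       # and other overheads are included
```

Economized cooling modes improve the mechanical term directly, which is why the annualized figure depends so strongly on climate and economization hours.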
The systems can be arranged in a variety of capacities and configurations and are available as split systems or as package systems. In large deployments, units are purchased at high volumes, so pumped refrigerant systems also save costs by providing added capacity without the need for large, centralized chiller plants, pumps or cooling towers. Alternative cooling solutions such as direct evaporative cooling systems are also being deployed, particularly in regions of moderate temperatures and low levels of humidity.
These systems take advantage of cool external air for free cooling. During times of higher external temperatures, the system uses a wetted media pad to cool the air as it is drawn in. Such a system offers advantages such as lower peak power requirements compared to traditional compressor-based systems and lower overall energy consumption, while consuming minimal water throughout the year; however, it requires a wider allowable temperature and humidity operating range to truly capture these savings. Additionally, how to maintain cooling for the data center in the event outside air is not available must be considered, and this may drive the requirement for a full mechanical system to be in place for such an event.
Cooling infrastructure is also becoming more flexible. Slab floor construction, in tandem with hot-aisle containment, is becoming the norm to drive down building costs and increase speed of deployment. This changes the cooling profile, as airflow can no longer be altered by simply moving floor tiles. Advanced thermal controls may be used to integrate rack sensors with cooling units to ensure the system is working properly and efficiently.
In addition to pumped refrigerant or evaporative cooling, there are several other cooling enhancements or alternatives that can be incorporated into raised floor and non-raised floor environments to improve efficiency or solve specific challenges, such as expanding capacity or supporting higher density racks. When hot and cold air are allowed to mix, the temperature of the air returning to the cooling units is lower, which reduces their efficiency. With containment and ducting, the heated air is instead returned to the computer room air conditioning unit for recirculation. While this maximizes server space, it could limit future flexibility, as racks are essentially fixed in position by the ducting.
Rear door heat exchangers can be installed on the back of racks and use the server fans to push air through the heat exchanger (passive), or fans can be added to assist in drawing air through the coil (active). Keeping data and compute nearer to the customer is the idea behind the growth of the edge. The Internet of Things (IoT) and 5G networks are two factors driving this growth, but there are many more. Small data centers, or even converted storage spaces or closets, make up this powerful market.