
Optimizing data centers for high-density computing
Technology brief

Contents

Abstract
Introduction
Power and heat load
    Power consumption
    Heat load
    Cooling requirements
Optimizing the effectiveness of cooling resources
    Raised floors
        Perforated tiles
        Air supply plenum
    Racks
        Cooling footprint
        Internal airflow
        Hot and cold aisles
        Rack geometry
    Computer room air conditioners (CRAC)
        Capacity of CRAC units
        Placement of CRAC units
        Discharge velocity
Optimized cooling configurations for high-density data centers
    Ceiling return air plenum
    Dual supply air plenums
Thermal management techniques
    Static Smart Cooling
The Need for Planning
Conclusion
For more information
Call to action


Abstract

This paper describes factors causing the increase in power consumption and heat generation of computing hardware. It identifies methods to optimize the effectiveness of cooling resources both in data centers that are beginning to deploy high-density equipment and in data centers that are fully populated with high-density equipment. The intended audience for this paper includes IT managers, IT administrators, facility planners, and operations staff.

Introduction

From generation to generation, the power consumption and heat loads of computing, storage, and networking hardware in the data center have drastically increased. The ability of data centers to meet increasing power and cooling demands is constrained by their current designs. Most data centers were designed using average (per unit area) or "rule of thumb" criteria, which assume that power and cooling requirements are uniform across the facility. In actuality, power and heat load within data centers are asymmetric due to the heterogeneous mix of hardware and the varying workload on computing, storage, and networking hardware. These factors can create "hot spots" that cause problems related to overheating (equipment failures, reduced performance, and shortened equipment life) and drastically increase operating costs.

Due to the dynamic nature of data center infrastructures, air distribution problems cannot always be solved by installing more cooling capacity or localized cooling technologies. A more sophisticated scientific method can help to find the most effective solutions. Research at HP Laboratories has found that proper data center layout and improved computer room air conditioner (CRAC) utilization can prevent hot spots and yield substantial energy savings.

This paper is intended to raise the level of awareness about the present and future challenges facing data centers beginning to deploy high-density equipment and data centers fully populated with high-density equipment. This paper describes power consumption and heat load, recommends methods to optimize the effectiveness of data center cooling resources, and introduces thermal management methods for high-density data centers.

Power and heat load

In the past, when data centers mainly housed large mainframe computers, power and cooling design criteria were specified as average wattage per unit area (W/ft² or W/m²) and British Thermal Units per hour (BTU/hr), respectively. These design criteria assumed that power and cooling requirements were uniform across the entire data center. Today, IT managers are populating data centers with a heterogeneous mix of high-density hardware as they try to extend the life of their existing space. This high-density hardware requires enormous amounts of electricity and produces previously unimaginable amounts of heat.

For example, IT infrastructures are now using 1U dual-processor and 4U quad-processor ProLiant blade servers that can be installed together in a rack-mounted enclosure, interconnected, and easily managed. This high-density server technology lowers the operating cost per CPU by reducing management expenses and floor space requirements. Despite speculation that high-density server technology increases power consumption and heat load, a closer server-to-server comparison reveals that HP p-Class blades consume less power and generate less heat.

Power consumption

HP provides online power calculators to estimate power consumption for each ProLiant server. The power calculators provide information based on actual system measurements, which are more accurate than nameplate ratings. Figure 1 shows a screen shot of a power calculator, which is a macro-driven Microsoft Excel spreadsheet. Power calculators for all current HP ProLiant servers can be found at http://www.hp.com/configurator/calc/Power Calculator Catalog.xls.

Figure 1. Screen shot of ProLiant ML530 G2 server power calculator


From generation to generation, the power consumption of high-density servers is increasing due to the extra power needed for faster, higher-capacity internal components. For example, the power required by a ProLiant DL360 G3 featuring a 3.0-GHz Intel® Xeon™ processor is 58 percent higher than that of its predecessor, which featured a 1.4-GHz Pentium III processor (see Table 1).

Table 1. Increase in power consumption from generation to generation of ProLiant DL servers

Rack Unit (CPUs/Memory/Drives/Adapters) | Previous Generation                        | Present Generation                          | Power Increase
1U (2P, 4 GB, 2 HDD, 1 PCI)             | DL360 G2: 246 W, 1.2 A @ 208V, 848 BTU/hr  | DL360 G3: 389 W, 1.6 A @ 208V, 1109 BTU/hr  | 58%
2U (2P, 4 GB, 6 HDD, 2 PCI)             | DL380 G2: 362 W, 1.8 A @ 208V, 1233 BTU/hr | DL380 G3: 581 W, 2.1 A @ 208V, 1450 BTU/hr  | 60%
4U (4P, 8 GB, 4 HDD, 3 PCI)             | DL580 G1: 456 W, 2.3 A @ 208V, 1608 BTU/hr | DL580 G2: 754 W, 4.0 A @ 208V, 2770 BTU/hr  | 65%

The table above compares the power consumption of individual servers; however, standard racks can house several of these servers. Estimating the power consumption of a rack of servers is more difficult because several variables contribute to the amount of power consumed (number of servers per rack, type and number of components in each server, etc.). For racks, a very useful metric is power density, or power consumption per rack unit (W/U). Power density captures all of the key variables that contribute to rack densification. Figure 2 illustrates the overall power density trend from generation to generation of HP ProLiant BL and DL servers.
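To make the W/U metric concrete, here is a minimal Python sketch that computes power density from the per-server wattages and U heights listed in Table 1. It is only an illustration of the metric, not an HP tool, and the variable names are ours.

# Illustrative sketch: power density (watts per rack unit) from Table 1 values.
servers = {
    # model: (rack units, measured power in watts)
    "DL360 G2": (1, 246), "DL360 G3": (1, 389),
    "DL380 G2": (2, 362), "DL380 G3": (2, 581),
    "DL580 G1": (4, 456), "DL580 G2": (4, 754),
}

for model, (u_height, watts) in servers.items():
    density = watts / u_height          # power density in W/U
    print(f"{model}: {density:.0f} W/U")

Run against the Table 1 values, the present-generation servers land roughly in the 190 to 390 W/U range, which is the densification trend Figure 2 plots.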

Figure 2. Power density of HP ProLiant BL and DL servers (Watts/U)


Heat load

Virtually all power consumed by a computer is converted to heat. The heat generated by the computer is typically expressed in BTU/hr, where 1 W equals 3.413 BTU/hr. Therefore, once the power consumption of a computer or a rack of computers is known, its heat load can be calculated as follows:

Heat Load = Power [W] × 3.413 BTU/hr per watt

For example, the heat load for a DL360 G3 server is 325 W × 3.413 BTU/hr per watt = 1109 BTU/hr. The heat load of a 42U rack of DL360 G3 servers is almost 47,000 BTU/hr, which is as much as a typical one-story house. Table 2 lists the power requirements and heat loads of racks of density-optimized ProLiant DL and BL class servers. The table shows the trend toward higher power and heat load with rack densification.
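The same conversion can be written as a short Python sketch; the 325 W per-server figure and the 42-server rack are the values used in the example above, and the helper name is ours.

# Sketch of the heat-load conversion used above.
BTU_HR_PER_WATT = 3.413     # 1 W of electrical power becomes 3.413 BTU/hr of heat

def heat_load_btu_hr(power_watts: float) -> float:
    return power_watts * BTU_HR_PER_WATT

server_watts = 325           # DL360 G3 figure from the example above
print(heat_load_btu_hr(server_watts))        # ~1109 BTU/hr per server
print(heat_load_btu_hr(server_watts) * 42)   # ~46,600 BTU/hr for a 42U rack
# (the text rounds per server first: 42 x 1109 = 46,578 BTU/hr)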

Table 2. Power and heat loads of fully-configured, density-optimized ProLiant servers*

ProLiant Server  | DL580 G2            | DL380 G3             | DL360 G3              | BL20p G2
Servers per Rack | 10                  | 21                   | 42                    | 48
Power            | 10 × 790 W = 7.9 kW | 21 × 500 W = 10.5 kW | 42 × 325 W = 13.65 kW | 6 enclosures × 2458 W = 14.75 kW
Heat load        | 26,940 BTU/hr       | 35,840 BTU/hr        | 46,578 BTU/hr         | 50,374 BTU/hr

* These calculations are based on the product nameplate values for fully configured racks, which may be higher than the actual power consumption and heat load.

IT equipment manufacturers typically provide power and heat load information in their product specifications. HP provides a Rack/Site Installation Preparation Utility to assist customers in approximating the power and heat load per rack for facilities planning (Figure 3). The Site Installation Preparation Utility uses the individual platform power calculators and allows customers to calculate the full environmental impact of racks with varying configurations and loads. This utility can be downloaded from http://www.hp.com/configurator/calc/Power Calculator Catalog.xls.


Figure 3. The ProLiant Class, Rack/Site Installation Preparation Utility available on the HP website


Cooling requirements

Experts cannot agree on how many kilowatts can be cooled with the resources that exist in today's data centers. The answer goes beyond adding up the heat loads of all the equipment in a given facility. Experts do agree that the cooling demands will continue to increase and that the designs of data centers will have to take a more holistic approach that examines cooling from the chip level to the data center level.

The main challenge for today's data centers is getting the needed cooling to each piece of equipment and successfully extracting the heat generated with the cooling resources already in place. The following section investigates several methods to optimize the effectiveness of cooling resources that are common in today's data centers.

Optimizing the effectiveness of cooling resources

This section recommends methods to optimize the effectiveness of cooling resources in raised floor infrastructures, a common configuration used in today's data centers.

Raised floors

Most data centers use a down draft airflow pattern in which air currents are cooled and heated in a continuous convection cycle. The down draft airflow pattern requires a raised floor configuration that forms an air supply plenum beneath the raised floor (Figure 4). The CRAC unit draws in warm air from the top, cools the air, and discharges it into the supply plenum beneath the floor. Raised floors typically measure 18 inches (46 cm) to 36 inches (91 cm) from the building floor to the top of the floor tiles, which are supported by a grounded grid structure. The static pressure in the supply plenum pushes the air up through perforated floor tiles to cool the racks. Most equipment draws in the cold supply air and exhausts warm air out the rear of the racks. Ideally, the warm exhaust air rises to the ceiling and returns along the ceiling back to the top of the CRAC units to repeat the cycle. Many traditional data centers arrange rows of racks in the front-to-back layout shown in Figure 4. While this layout can handle lower power densities, as the power density increases, the equipment inlet temperatures will begin to rise (shown in the figure) and overheat critical resources. This constant mixing of cold and hot air is very inefficient, and it wastes valuable cooling resources and energy.

Figure 4. Traditional raised floor configuration with high-density racks


Perforated tiles

Floor tiles range from 18 inches (46 cm) to 24 inches (61 cm) square. The percentage and placement of perforated floor tiles are major factors in maintaining static pressure. Perforated tiles should be placed in front of at least every other rack. In higher density environments, perforated tiles may be necessary in front of each rack. Perforated tiles are classified by their open area, which may vary from 25 percent (the most common) to 56 percent (for high airflow). A 25 percent perforated tile provides approximately 500 cubic feet per minute (cfm) at a 5 percent static pressure drop, while a 56 percent perforated tile provides approximately 2000 cfm.

Air supply plenum

The air supply plenum must be a totally enclosed space to achieve pressurization for efficient air distribution. The integrity of the subfloor perimeter (walls) is critical to moisture retention and the maintenance of supply plenum pressure. This means that openings in the plenum perimeter and raised floor must be filled or sealed. Subfloor plenum dividers should be constructed in areas with large openings or that lack subfloor perimeter walls.

The plenum is also used to route piping, conduit, and cables that bring power and network connections to the racks. In some data centers, cables are simply laid on the floor in the plenum, where they can become badly tangled (Figure 5). This can result in cable dams that block airflow or cause turbulence that reduces airflow and creates hot spots above the floor. U-shaped “basket” cable trays or cable hangers can be used to manage cable paths, prevent blockage of airflow, and provide a path for future cable additions. Another option is to use overhead cable trays to route network and data cables so that only power cables remain in the floor plenum.

Electrical and network cables from devices in the racks pass through cutouts in the tile floor to wireways and cable trays beneath the floor. Oversized or unsealed cable cutouts allow supply air to escape from the plenum, thereby reducing the static pressure. Self-sealing cable cutouts are required to maintain the static pressure in the plenum (Figure 6).

Figure 5. Unorganized cables (left) and organized cables (right) beneath a raised floor.

Figure 6. Self-sealing cable cutout in raised floor


Racks

Racks (cabinets) are a critical part of the overall cooling infrastructure. HP enterprise-class cabinets provide 65 percent open ventilation using perforated front and rear door assemblies (Figure 7). To support the newer high-performance equipment, glass doors must be removed from older HP racks and from any third-party racks.

Figure 7. HP enterprise-class cabinets (10000 Series)

Cooling footprint

The floor area that each rack requires must include an unobstructed area to draw in and discharge air. Almost all HP equipment cools from front to rear so that it can be placed in racks positioned side-by-side. The cooling footprint (Figure 8) includes width and depth of the rack plus the area in front for drawing in cool air and the area in back for exhausting hot air.

Equipment that draws in air from the bottom or side or that exhausts air from the side or top will have a different cooling footprint. The total physical space required for the data center includes the cooling footprint of all the racks plus free space for aisles, ramps, and air distribution. Typically, a width of two floor tiles is needed in front of the rack, and a width of at least one unobstructed floor tile is needed behind the rack to facilitate cable routing.

Figure 8. Cooling footprint

Internal airflow

Front and rear cabinet doors that are 65 percent open to incoming airflow also present a 35 percent restriction to air discharged by the equipment in the rack. Servers draw in air from the path of least resistance; therefore, they will access the higher-pressure discharge air flowing inside the cabinet more easily than the cooling air coming through the front of the cabinet. Some configurations, such as those with extreme cable or server density, may create a backpressure situation that forces heated exhaust air around the side of a server and back into its inlet. In addition, air from the cold aisle or hot aisle can flow straight through a rack with open "U" spaces. Gaskets or blanking panels must be installed in any open spaces in the front of the rack to support the front-to-back airflow design and prevent these negative effects (Figure 9).

Figure 9. Airflow in rack without blanking panels (top) and with blanking panels (bottom)

Hot and cold aisles

The front-to-rear airflow through HP equipment allows racks to be arranged in rows front-to-front and back-to-back to form alternating hot and cold aisles. The equipment draws in the cold supply air and exhausts warm air out the rear of the rack into hot aisles (Figure 10). The amount of space between rows of racks is determined as follows:

• Cold aisle spacing should be 48 inches (two full tiles), and hot aisle spacing should be at least 24 inches (one full tile). This spacing is required for equipment installation and removal and for access beneath the floor.

• Cold aisles should be a minimum of 14 feet apart center-to-center, or seven full tiles.

Figure 10. Airflow pattern for raised floor configuration with hot aisles and cold aisles.


Rack geometry

Designing the data center layout to form hot and cold aisles is one step in the cooling optimization process. Also critical is the geometry of the rack layout. Research by HP Laboratories has revealed that minor changes in rack placement can change the fluid mechanics inside a data center, leading to inefficient utilization of CRAC units. See the "Static Smart Cooling" section for more information.

Computer room air conditioners (CRAC)

A common question with respect to cooling resources is how many kilowatts a particular CRAC unit can cool. Assuming a fixed heat load from the equipment in its airflow pattern, the answer depends largely on the capacity of the CRAC unit, its placement in the facility, and its discharge velocity.

Capacity of CRAC units

The heat load of equipment is normally specified in BTU/hr. However, in the U.S., CRAC unit capacity is often expressed in "tons" of refrigeration, where one ton corresponds to a heat absorption rate of 12,000 BTU/hr. The "tons" capacity rating is measured at 80°F; however, the recommended operating conditions for CRAC units are 70° to 72°F and 50 percent relative humidity (RH). At 72°F, the CRAC unit output capacity is considerably reduced. Furthermore, the tons rating is very subjective because it is based on total cooling, which comprises "sensible cooling" and "latent cooling."

Computer equipment produces sensible heat only; therefore, the sensible cooling capacity of a CRAC unit is the most useful value. For this reason, CRAC unit manufacturers typically provide cooling capacities as "total BTU/hr" and "sensible BTU/hr" at various temperatures and RH values. Customers should review the manufacturer's specifications and then divide the sensible cooling capacity (at the desired operating temperature and humidity) by 12,000 BTU/hr per ton to calculate the useable capacity of a given CRAC unit, expressed in tons of cooling.
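As a hedged illustration of that arithmetic, the Python sketch below converts a sensible-capacity rating into usable tons; the 240,000 BTU/hr figure is an assumed example rating, not the specification of any particular CRAC unit.

# Sketch: usable CRAC capacity (tons) from the sensible cooling rating.
BTU_HR_PER_TON = 12_000      # one ton of refrigeration

def usable_tons(sensible_btu_hr: float) -> float:
    return sensible_btu_hr / BTU_HR_PER_TON

# Assumed example: a unit rated at 240,000 BTU/hr sensible at 72 deg F / 50% RH.
print(usable_tons(240_000))  # 20.0 usable tons at that operating point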

Cooling capacity is also expressed in volume as cubic feet per minute (cfm). The volume of air required is related to the moisture content of the air and the temperature difference between the supply air and return air (∆T):

Cubic feet per minute = BTU/hr ÷ (1.08 × ∆T)
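A brief sketch of this relationship, using the 42U DL360 G3 rack heat load from Table 2 and an assumed 20°F supply-to-return temperature difference; it also estimates how many 25-percent perforated tiles (roughly 500 cfm each, per the earlier section) that airflow would imply.

# Sketch: required airflow from heat load and supply/return temperature difference.
import math

def required_cfm(heat_load_btu_hr: float, delta_t_f: float) -> float:
    return heat_load_btu_hr / (1.08 * delta_t_f)

rack_heat_load = 46_578      # BTU/hr for a 42U rack of DL360 G3 servers (Table 2)
delta_t = 20                 # assumed temperature rise across the equipment, deg F

cfm = required_cfm(rack_heat_load, delta_t)
tiles = math.ceil(cfm / 500) # 25-percent-open tiles pass roughly 500 cfm each
print(f"{cfm:.0f} cfm, or about {tiles} perforated tiles for this rack")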

The cooling capacity calculations presented here are theoretical, so other factors must be considered to determine the effective range of a particular CRAC unit. The effective cooling range is determined by the capacity of the CRAC unit and the heat load of the equipment in its airflow pattern. Typically, the most effective cooling begins about 8 feet (2.4 m) from the CRAC unit. Although units with capacities greater than 20 tons are available, the increased heat density of today's servers limits the cooling range to approximately 30 feet or 9.1 m (Figure 11).

Figure 11. Cooling ranges of CRAC units


Placement of CRAC units

The geometry of the room and the heat load distribution of the equipment determine the best placement of the CRAC units. CRAC units can be placed inside or outside the data center walls. Customers should consider placing liquid-cooled units outside the data center to avoid damage to electrical equipment that could be caused by coolant leaks.

CRAC units should be placed perpendicular to the rows of equipment and aligned with the hot aisles, discharging air into the supply plenum in the same direction (Figure 12). This configuration provides the shortest possible distance for the hot air to return to the CRAC units. Discharging in the same direction eliminates dead zones that can occur beneath the floor when blowers oppose each other. Rooms that are long and narrow may be cooled effectively by placing CRAC units around the perimeter. Large, square rooms may require CRAC units to be placed around the perimeter and through the center of the room.

Figure 12. CRAC units should be placed perpendicular to hot aisles so that they discharge cool air beneath the floor in the same direction.

Discharge velocity

To force air from beneath the raised floor through the perforated tiles, the static pressure in the supply air plenum must be greater than the pressure above the raised floor. Typically, the plenum pressure should be at least 5 percent greater than the pressure above the floor.

Excessive discharge velocity from the CRAC unit reduces the static pressure through the perforated tiles nearest the unit, causing inadequate airflow (Figure 13). The static pressure increases as the high-velocity discharge moves away from the unit, thereby increasing the airflow through the perforated tiles. To counter this situation, airfoils under the raised floor can be used to divert air through the perforated tiles.[1] Another option is to use a fan-assisted perforated tile to increase the supply air circulation to a particular rack or hot spot. Fan-assisted tiles can provide 200 to 1500 cfm of supply air.

[1] W. Pitt Turner IV, P.E., and Edward C. Koplin, P.E., "Changing Cooling Requirements Leave Many Data Centers at Risk," ComputerSite Engineering, Inc.


Figure 13. Plenum static pressure greater than pressure above the floor (left). High-velocity discharge reduces static pressure closest to the unit (right).

Optimized cooling configurations for high-density data centers

To achieve an optimum down draft airflow pattern, warm exhaust air must be returned to the CRAC unit with minimal obstruction or redirection. Ideally, the warm exhaust air will rise to the ceiling and return to the CRAC unit intake. In reality, only the warm air closest to the intake may be captured; the rest may mix with the supply air. Mixing occurs if exhaust air goes into the cold aisles, if cold air goes into the hot aisles, or if there is insufficient ceiling height to allow for separation of the cold and warm air zones (Figure 14). When warm exhaust air mixes with supply air, two things can happen:

• The temperature of the exhaust air decreases, thereby lowering the useable capacity of the CRAC unit.

• The temperature of the supply air increases, which causes warmer air to be recirculated through computer equipment.

Figure 14. Mixing of supply air and exhaust air.


Ceiling return air plenum

In recent years, raised floor computer rooms with very high heat density loads have begun to use a ceiling return air plenum to direct exhaust air back to the CRAC intake. As shown on the right of Figure 15, the ceiling return air plenum removes heat while abating the mixing of cold air and exhaust air. Once the heated air is in the return air plenum, it can travel to the nearest CRAC unit intake. The return air grilles in the ceiling can be relocated if the layout of computer equipment changes.

Figure 15. Ceiling return air plenum.

Dual supply air plenums

As power and heat densities climb, a single supply air plenum under the raised floor may be insufficient to remove the heat that will be generated. High-density solutions may require dual supply air plenums, one above and one below (see Figure 16). In this configuration, additional supply air is forced downward into the cold aisle from the overhead plenum.

Figure 16. Dual air supply plenum configuration for high-density solutions


Thermal management techniques

The heat load within a data center varies due to the heterogeneous mix of hardware types and models, changing compute workloads, and the addition or removal of racks over time. The variation in heat load is too complex to predict intuitively or to solve simply by adding cooling capacity.

HP Laboratories has devised a thermal analysis approach known as Static Smart Cooling,[2] which models heat distribution throughout a data center using computational fluid dynamics (CFD). CFD modeling predicts the changes in heat extraction of each CRAC unit when the rack layout and equipment heat load are varied. The heat extraction of each CRAC unit is compared to its rated capacity to determine how efficiently (or inefficiently) the CRAC unit is being used, or "provisioned." The provisioning of each unit in the data center is presented as a positive or negative percentage, as follows (a short numeric sketch appears after the list):

• An under-provisioned CRAC unit (positive percentage) indicates that the cooling load is higher than the capacity of the unit.

• A properly provisioned CRAC unit (small negative percentage) signifies that the cooling load is less than but reasonably close to the capacity of the unit, leading to efficient use of energy resources.

• An over-provisioned CRAC unit (large negative percentage) operates significantly below the capacity of the unit. This results in wasted energy if the unit's operation cannot be adjusted to match the lower cooling load.
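One plausible way to express this provisioning percentage is sketched below in Python; the heat-extraction and capacity figures are assumed values for illustration, not output from the HP Laboratories model.

# Sketch: provisioning of a CRAC unit as a signed percentage.
# Positive = under-provisioned, small negative = properly provisioned,
# large negative = over-provisioned.

def provisioning_pct(heat_extracted_kw: float, rated_capacity_kw: float) -> float:
    return (heat_extracted_kw - rated_capacity_kw) / rated_capacity_kw * 100

# Assumed example: three units, each rated for 100 kW of heat extraction.
for extracted in (110, 90, 40):
    print(f"{provisioning_pct(extracted, 100):+.0f}%")   # +10%, -10%, -60%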

Static Smart Cooling

Static Smart Cooling uses CFD modeling to determine the best layout and provisioning of cooling resources based on fixed heat loads from data center equipment. For example, Figure 17 shows the row-wise distribution of heat loads (41 kW to 182 kW) for a combination of compute, storage, and networking equipment in a typical raised floor data center with four CRAC units. The CFD model shows that the provisioning of the CRAC units is completely out of balance.

Figure 17. Poorly provisioned CRAC units

[2] For more information, please read Thermal Considerations in Cooling Large Scale High Compute Density Data Centers at http://www.hpl.hp.com/research/papers/2002/thermal_may02.pdf.


In Figure 18, the 102-kW row and the 182-kW row are swapped to better distribute the heat load. This CFD model shows that the CRAC units are now provisioned within 15 percent of their capacity.

Figure 18. Statically provisioned CRAC units

The Need for Planning

The lack of sufficient installation planning is one of the major factors affecting the deployment of today’s high-density compute platforms. In contrast, customers that deploy larger mainframe enterprise solutions are typically provided with detailed “Site Preparation Guides” to ensure that the customer’s environment is prepared for installation of the equipment. These guides facilitate the engagement of all parties concerned with the data center, from Facilities Engineering to Network Operations. Consequently, larger data center products seldom create thermal and power management problems after they are installed.

High-density platforms such as the ProLiant BL and DL lines require enterprise-level power and produce enterprise-level heat loads. This is driving more and more organizations to begin the planning process earlier in the procurement cycle. Site Preparation Guides for these products will be available in the near future. In the meantime, planning should begin as early as possible in the procurement cycle, and it should engage all facility resources to ensure trouble-free installations.

Conclusion

This paper described some of the challenges that must be addressed during the design of a new or retrofitted data center. The trends in power consumption of computing components necessitate modular data center designs with sufficient headroom to handle increasing power and cooling requirements. To determine these requirements, several factors should be considered, including the capacity and placement of the CRAC units, and the geometry of the room. In addition, high-density data centers require special attention to factors that affect airflow distribution, such as supply plenum static pressure, airflow blockages beneath raised floors, and configurations that result in airflow mixing in the data center.

HP is a leader in the thermal modeling of data centers. HP Professional Services can work directly with customers to optimize existing data centers for more efficient cooling and energy consumption. The modeling services can also be used to confirm new data center designs or predict what will happen in a room when certain equipment fails. As long as the data center has the power and cooling resources to support the expected loads, Static Smart Cooling can rectify cooling problems as well as enhance the overall efficiency of air conditioning resources. In most cases, the energy savings alone may pay for the cost of the service in a relatively short period.


For more information

For additional information, refer to the resources detailed below.

Resource description | Web address
Thermal Considerations in Cooling Large Scale High Compute Density Data Centers white paper | http://www.hpl.hp.com/research/papers/2002/thermal_may02.pdf
HP Rack/Site Installation Preparation Utility | http://www.hp.com/configurator/calc/Power Calculator Catalog.xls
Power calculators | http://www.hp.com/configurator/calc/Power Calculator Catalog.xls
Static Smart Cooling Services | http://cscewlkginet.lkg.cpqcorp.net/cscma/weec_web/smartcooling.html

Call to action

To help us better understand and meet your needs for ISS technology information, please send comments about this paper to: [email protected].

© Copyright 2004 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

TC040202TB, 02/2004