

Data Center Cabinet Dynamics
Understanding Server Cabinet Thermal, Power and Cable Management

By Brian Mordick, RCDD
Senior Product Manager, Hoffman

Management strategies for:

• Thermal Management

• Cable Management

• Power Management


Summary

Today’s IT professionals face many challenges in running an efficient data center, whether it is maintaining current installations or planning for future applications. They must protect the productivity of their company’s network end-to-end and research the latest technologies as networking requirements evolve. To ensure the proper IT systems environment, it is essential to consider thermal, power and cable management in today’s server cabinets.

IT professionals put significant emphasis on protecting communications equipment from potential outside threats. Meanwhile, increasing thermal densities, power shortages and fluctuations, and poor cable management may be compromising system operations or destroying the equipment from the inside.

In a recent survey, data center managers identified their top concerns, summarized in Chart 1 below.


Chart 1: Top Concerns of Data Center Managers. Reference: Data Center User's Group Conference, The Adaptive Data Center: Managing Dynamic Technologies. Used with permission.


Securing Your Network Against the Dangers of Overheating

IT professionals take all the necessary precautions to ensure that computer networks and communications equipment are secure and protected. Locks, firewalls, passwords and other protection protocols are in place—but an invisible enemy lurks within and could wreak havoc on the carefully configured and guarded systems.

As equipment heats up, performance slows and productivity drops. It can happen at any time and can be directly attributed to heat buildup in and around electronic equipment. Many companies don’t realize that excessive heat shortens the life of electronic equipment and can even shut it down permanently. Heat may be invisible, but its effects are devastating and costly. According to the Uptime Institute, for every 18 degrees Fahrenheit (10 degrees Celsius) that internal cabinet temperatures rise above normal room temperature, the life expectancy of the enclosed electronics drops by 50 percent.
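The quoted rule of thumb lends itself to a quick back-of-the-envelope model. The Python sketch below assumes a hypothetical 72°F "normal" room temperature (the halving rule itself is the Uptime Institute figure above) and estimates the fraction of expected equipment life remaining at a given cabinet temperature:

    def life_expectancy_factor(cabinet_temp_f, room_temp_f=72.0):
        """Fraction of expected electronics life remaining, per the rule of
        thumb that life halves for every 18 F (10 C) rise above normal room
        temperature."""
        rise = max(0.0, cabinet_temp_f - room_temp_f)
        return 0.5 ** (rise / 18.0)

    # A cabinet running 36 F above the room keeps 0.5 ** 2 = 25 percent
    # of the expected equipment life.
    print(life_expectancy_factor(108.0))  # -> 0.25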

Advances in technology allow equipment to become faster and more compact, but there is a consequence: increased thermal density. Some industry executives predict that at the current growth rate, heat densities could reach nuclear proportions within a decade if left unchecked. Understanding how to temper those densities is becoming increasingly critical to ensuring system reliability and availability.

Chart: Heat load per product footprint (watts/ft² and watts/m²) versus year of first product shipment/announcement, for servers and disk storage systems (1.8-2.2 m tall), communication equipment (frames), tape storage systems and standalone workstations. © 2000-2006 The Uptime Institute, Inc. Version 1.2.

Blade Servers' Impact

Blade servers are the latest in high-density network equipment. They use a common chassis that provides slots for individual "blades" to be installed. These new levels of power density dramatically increase thermal loads. A single blade server chassis with all slots filled and running at capacity can produce more than 3 kilowatts of heat. Theoretically, a cabinet filled with blade servers (seven or eight chassis) can produce 21 to 24 kilowatts of heat.

Although blade servers represent less than 10 percent of overall server sales, they are growing rapidly and likely to become the industry norm within the next few years. This presents significant challenges for thermal and power management. "How am I going to get that much power to my servers, and how will I get rid of all the heat?" is a common sentiment among data center managers.


Current Practices May Not Be Working

When it comes to protecting data center servers, IT professionals should think inside the box and select data cabinets that are not only well built but also help manage heat buildup. Thermal management is a growing concern, because many existing data centers weren’t built to handle the thermal densities of next-generation blade servers and networking equipment.

Many organizations believe the answer is simple: cool the ambient air to lower the inside cabinet temperature. While this approach seems logical, it is problematic. Remaining issues include:

• Continued hot spots and overheating.

• Massive increases in energy costs.

• Recirculating airflows are not addressed.

• Using very cold air flows can cause condensation, leading to corrosion, equipment failure, poor or intermittent contacts, thermal expansion or contraction failures, etc.

The best way to measure the amount of heat produced in a cabinet is to measure the power being consumed: nearly every watt of power consumed becomes a watt of heat produced. The key to keeping equipment cool is channeling or ducting cool air into the equipment and providing a path for the heated air to escape the cabinet.

Power Consumption Considerations Are Significant

Power management is equally as important as thermal management. As power density requirements continue to climb, data center managers are increasingly asking, "How do I get power to the cabinet, and how do I distribute it within?" In addition, there is a direct relationship between power used and heat generated.

Power, defined as voltage x current, is expressed in watts (W) or kilowatts (kW; 1,000 watts). Watts is also the unit used when discussing cooling capacity. The connection is simple: power in = heat out.

Figure 1: This computer-generated image illustrates heat buildup in the upper portions of a data cabinet.

For every 18 degrees Fahrenheit (10 degrees Celsius) that internal cabinet temperatures rise above normal room temperature, the life expectancy of the enclosed electronics drops by 50 percent.

—Uptime Institute


Power In = Heat Out

Power in = voltage x current (amps). Example: 208 VAC x 30 A = 6,240 watts, or 6.24 kW.

The amount of power required bears a direct relationship to the amount of heat generated and, in turn, to the cooling capacity required.

In the design stage, before the cabinet is put into place and power can be measured, the amount of power required and the amount of heat generated can be estimated by taking a percentage of the "Name Plate" power stated on the equipment. UL and other agencies require network equipment to list its power requirements. Since this rating reflects the maximum power the power supply can consume, only a percentage of it should be used; power supplies are typically designed to provide many times the power that the network equipment actually needs. Using 50 to 75 percent of the "Name Plate" power provides a good estimate of the amount of heat the cabinet will produce.

It should be noted that it takes more power to cool than to heat. Network equipment converts its power usage to heat nearly one-for-one (5,000 watts of power in produces roughly 5,000 watts of heat), but cooling systems do not: 5,000 watts of cooling could require 10,000 watts or more of power.
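As a design-stage illustration, the nameplate guidance above reduces to a few lines of code. This is a minimal Python sketch; the server count and 800 W nameplate rating are hypothetical figures, while the 50 to 75 percent derating range comes from the text:

    def estimated_heat_watts(nameplate_watts, derating=0.65):
        """Design-stage heat estimate from the UL "Name Plate" rating.
        The text suggests 50-75 percent of nameplate; 0.65 is a midpoint."""
        if not 0.50 <= derating <= 0.75:
            raise ValueError("derating outside the suggested 50-75 percent range")
        return nameplate_watts * derating

    # Hypothetical cabinet: ten servers, each with an 800 W nameplate rating.
    cabinet_heat = sum(estimated_heat_watts(800) for _ in range(10))
    print(f"Estimated heat load: {cabinet_heat:.0f} W")  # -> 5200 W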

What Causes the Rapid Increase in Power and Thermal Loads?

When a cabinet is filled with blade servers, its average power consumption can increase from 1,500 watts to more than 20,000 watts (20 kilowatts). This increase in power, and the resulting increase in heat, affects a data center's capacity to serve customers. This level of power demand also changes the way power is distributed inside the cabinet: where a basic 15 A power strip with multiple outlets once sufficed, a three-phase 208 VAC power distribution unit (PDU) capable of more than 16.6 kilowatts is now needed to handle the greater demand.
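The jump from a simple power strip to a three-phase PDU follows from the standard power formulas. A minimal Python sketch; the 120 V/15 A strip and the 50 A PDU rating are illustrative assumptions, not figures from the text, and unity power factor is assumed:

    import math

    def single_phase_watts(volts, amps):
        """Power in = voltage x current, as defined earlier in the paper."""
        return volts * amps

    def three_phase_watts(volts_line_to_line, amps):
        """Standard three-phase formula (unity power factor assumed):
        P = sqrt(3) x line-to-line voltage x current."""
        return math.sqrt(3) * volts_line_to_line * amps

    print(single_phase_watts(120, 15))   # basic power strip: 1,800 W
    print(three_phase_watts(208, 50))    # ~18,013 W, above the 16.6 kW cited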

The solution seems simple: ensure that the data center can provide 20 kilowatts or more of redundant power and cooling to every enclosure. While that may sound easy, it is not always economical, practical or even technically possible, because of the up-front infrastructure capital cost and the ongoing operational costs over the life of the data center. The capital cost of providing this level of thermal and power service is typically beyond the reach of many companies; even though they depend on their data centers and the services those centers provide, budget realities force compromises.

A Brief Look at Thermal Basics

Network equipment relies on a continual stream of cool air, moved by convection, to keep running. There are only two components a data center manager can manipulate to dissipate the heat generated inside the cabinet: the quantity of air and the data center temperature. The very best designed data center can typically provide air temperatures around 55 degrees Fahrenheit, and thus a ∆T of about 45 degrees Fahrenheit.


As cooling strategies become more complex, the number of components grows, and the failure of any one of them can produce a rapid temperature rise in the cabinet in as little as 5 to 10 minutes. Choosing the best thermal and power management solution is essential to facilitating optimal component speed and processing power in your data center without sacrificing reliability and performance.

Cabinet Design's Role in Heat Dissipation

Cabinets can be designed with features that facilitate heat dissipation and can be placed within a data center to define specific thermal zones for air intake and exhaust, creating maximum cooling efficiency.

Hoffman has tested several cabinet configurations to determine how cabinet design and data center placement can maximize heat dissipation, and has established best practices for keeping electronic equipment cool and reliable.

Passive Cooling Versus Active Cooling

Passive cooling uses louvers, vents and perforated panels, along with the equipment's own fans, to exchange ambient air. Active cooling uses cabinet venting fans to exhaust hot air and can be used in conjunction with piped-in chilled air.

Critical Formulas for Thermal Management

Watts (power) = voltage x current (amperes) = Watts (heat load)

Watts (thermal convection cooling) = 0.316 x CFM x ∆T (in °F)
or CFM = Watts (cooling) / (0.316 x ∆T in °F)
or ∆T (in °F) = Watts (cooling) / (0.316 x CFM)

This equation can be rearranged to solve for any of the three variables, Watts (cooling), CFM or ∆T (in °F), and is invaluable in the design and operation of a data center.

CFM = cubic feet per minute (the volume of air moved per unit time)

∆T (in °F) = delta-T, the difference between the coolest air (55°F) and the maximum allowable temperature (95°F)

Example: a 10 kW heat load in a typical data center with a 30°F ∆T will need 1,055 CFM.

BTUs (British thermal units) = Watts (cooling) x 3.413. Example: 10 kW of cooling = 34,130 BTUs.
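These formulas translate directly into code. A minimal Python sketch that reproduces the two worked examples above:

    def watts_cooling(cfm, delta_t_f):
        """Thermal convection cooling: Watts = 0.316 x CFM x delta-T (deg F)."""
        return 0.316 * cfm * delta_t_f

    def cfm_required(watts, delta_t_f):
        """Rearranged: CFM = Watts (cooling) / (0.316 x delta-T)."""
        return watts / (0.316 * delta_t_f)

    def delta_t_needed(watts, cfm):
        """Rearranged: delta-T (deg F) = Watts (cooling) / (0.316 x CFM)."""
        return watts / (0.316 * cfm)

    def btus(watts):
        """BTUs = watts of cooling x 3.413."""
        return watts * 3.413

    print(round(cfm_required(10_000, 30)))  # 10 kW at a 30 F delta-T -> 1055 CFM
    print(btus(10_000))                     # 10 kW of cooling -> 34130.0 BTUs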


Hot Aisle/Cold Aisle Data Center Layout

A hot aisle/cold aisle data center layout has specific hot and cold areas. Computer room air conditioners (CRACs) are placed strategically to create cold aisles. The cabinets on both sides of those aisles hold network equipment that draws the cold air through the cabinet fronts and into its intakes. The equipment exhaust exits through the cabinet rear, creating hot aisles that alternate with the cold aisles. The hot air is then recirculated to the CRAC unit. This airflow management strategy addresses adverse equipment airflow, preventing one device's exhaust from being drawn into another's intake. This type of data center layout has been universally accepted and is being actively deployed in most data centers.

Three types of hot aisle/cold aisle cabinet designs are:

Hot Aisle/Cold Aisle Configuration, Passive Cooling
When hot aisle/cold aisle cabinet positioning is implemented and heat buildup is 1,500 to 2,000 watts, passive cooling can be utilized. In this configuration, cold air is pulled from the floor to cool equipment as it moves from the front to the back of the cabinet. The resulting warm air is then exhausted out the cabinet top and back.

Hot Aisle/Cold Aisle Configuration, Active Cooling
Hot aisle/cold aisle cabinet configurations used in conjunction with active cooling are the most efficient cooling solution for components with heat dissipation levels ranging from 4,000 to 6,000 watts. Cabinets with a perforated front and a rear fan door are the most efficient for this type of application.

Hot Aisle/Cold Aisle Configuration, Active Cooling with Floor Ducting
Hot aisle/cold aisle cabinet configurations used in conjunction with active cooling plus floor ducting will help manage heat buildup when heat dissipation levels reach 6,000 to 10,000 watts. The most effective cabinets for these applications have a front window door, a rear fan door and a floor-ducted base with a front plenum.


Random Data Center Layout

The random data center layout is typically associated with older or legacy data centers, where the entire room is cooled with no specific hot or cold area strategies. In many cases, data center managers do not have the capital to upgrade the data centers to more efficient designs, but they still need to increase the cabinets’ thermal density.

Two types of legacy systems are:

Random Configuration, Passive Cooling
When a data center has random cabinet positioning and a relatively low heat dissipation volume of 1,000 to 2,000 watts, passive cooling will manage heat buildup. Cabinets with a perforated front, rear and top perform most efficiently in this type of application.

Random Configuration, Active Cooling
As heat loads increase to a range of 2,000 to 3,000 watts in randomly positioned data centers, active cooling can be employed. The cabinets used in this type of application have a perforated front, a louvered lower-third rear door and a top fan. Legacy data centers typically use this configuration to increase thermal densities without incurring costly facility reconstruction.

Layout Summary

Air cooling continues to be the most economical means of dissipating heat. All commercially available servers use airflow to move heat out of the equipment, drawing cold intake air from the front while exhausting hot air out the back. Careful consideration should be given to determining the best cabinet configuration for your data center.


Data Center Design Considerations

When determining the placement of high-density cabinets into a data center, there are several practical and effective strategies.

Utilization of Load Spreading

The most popular solution for incorporating high-density equipment into today's data centers is load spreading. When the power required and heat generated by the equipment inside a cabinet exceed the cabinet's cooling capacity, installing the equipment in multiple cabinets, or spreading the load, distributes the power and cooling demands more evenly between cabinets. Many 1U servers and blade servers do not need to be installed in the same cabinet and can be spread across multiple cabinets. Load spreading can be a good option, because it may be less costly to enlarge or expand a data center than to add complex supplemental cooling systems. A careful analysis of real estate, power, technical labor force, connectivity and other costs needs to be conducted in order to make proper decisions.
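At its simplest, load spreading is a division problem. A Python sketch with hypothetical figures; the 6 kW per-cabinet cooling capacity echoes the active hot aisle/cold aisle range given earlier in this paper:

    import math

    def cabinets_needed(total_load_watts, per_cabinet_cooling_watts):
        """Minimum cabinet count so no cabinet exceeds its cooling capacity."""
        return math.ceil(total_load_watts / per_cabinet_cooling_watts)

    # Hypothetical: 24 kW of blade equipment spread across cabinets that can
    # each dissipate 6 kW of heat.
    print(cabinets_needed(24_000, 6_000))  # -> 4 cabinets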

It should be noted that spreading equipment among multiple cabinets can result in a sizable amount of unused vertical space within each cabinet. The unused space must be filled with blanking panels to prevent hot air recirculation, which reduces cooling performance. Load spreading can also cause data cabling issues. Proper cable management techniques will be discussed later in this paper.

The Borrowed Cooling Option

When borrowed cooling is utilized, cabinets containing low-heat-producing equipment are strategically placed throughout the data center next to cabinets containing high-heat-generating equipment. This enables the higher-heat-load cabinets to use, or borrow, the adjacent cabinets' unused cooling capacity. This option can reliably and predictably enable cabinets to be cooled to more than twice their average design value.

Cabinet heat capacity rules can be established, with compliance verified through power consumption monitoring. However, many IT professionals find that this cooling method requires them to enforce complex rules, occupies more floor space and limits them to about twice the design power density.

Implications of Liquid Cooling

Another solution for removing excessive heat loads from data center cabinets is liquid cooling. Liquid cooling solutions are either water or refrigerant based. Many IT professionals are hesitant to use water in data centers because of the risk of leaks. Also, moving cooling pipes, tubes or hoses requires time and money, making moves, adds and changes (MACs) a challenge.

Liquid cooling systems operate much like a heat exchanger, but supply chilled liquid, instead of cold air, to the system. The cabinet heat transfers to the liquid, which is then piped out to be reconditioned (chilled back down). The systems must be leakproof, reliable, expandable and flexible enough to allow easy reconfiguration of the data center space.

"...it may be less costly to enlarge or expand a data center than to add complex supplemental cooling systems. A careful analysis needs to be conducted to make proper decisions."

The following should be considered before installing a liquid cooling solution:

• Liquid supply lines and warm water return lines need to be installed.

– Pipe runs must not interfere with already installed connectivity or power cables.

– Future flexibility can be limited.

– Every threaded or welded fitting presents a potential leak; pipe runs need to be reviewed for condensation.

• Additional electrical circuits are required.

• Multiple independent systems are needed to provide the redundancy or backup required in most data centers.

• Future MACs can be more costly.

In applications of extreme heat, when spreading the load and increasing the size of the data center aren’t possible, liquid cooling solutions can be an alternative. However, facility design considerations must be fully understood.

Challenges of a Dedicated High-Density Area

When power density exceeds 10 kilowatts per cabinet, unpredictable airflow becomes a problem. To remedy this, the airflow path between the cooling system and the cabinet must be shortened. Creating a special high-density row or zone in a section of the data center, cooled by the center's CRAC units, is one solution. This approach is likely temporary, though, due to data center growth and change. Cabinet density must also be predictable, or known, in order to determine power and cooling requirements.

Thermal Management Best Practices:

• Avoid restricted, cascading and short circuited airflows.

• Install blanking panels in all unused rack spaces.

• Neatly route cables to prevent air restrictions.

• Take a holistic approach to the data center (raised floor, CRAC units, cabinets, etc.).

• Avoid the use of cable support arms and slide outs that may restrict airflows.

• Spread the load to the available spaces (cabinets).

• Strategically locate low and high heat loaded cabinets within the data center.

• Create special high heat zones within the data center.

• Consider the addition of a supplemental (liquid) cooling system.

• Increase the size of the data center (new addition or building).

• Adopt hot aisle / cold aisle cabinet layout.

• Avoid large temperature swings, which cause thermal expansion and condensation issues.

• Avoid temperatures below the dew point (condensation).

• Strategically place CRAC units to provide airflow to aisles.

• Position perforated tiles to uniformly provide cold air to equipment aisle.



Design Wrap-up

It is important to remember that a cabinet, no matter what the design, cannot make up for insufficient total cooling within the data center. A cabinet using fans, deflectors, blocking plates or other similar devices can never cool itself below the surrounding ambient air temperature; however, it can improve the efficiency of heat movement in the data center by controlling intake and exhaust airflows. Increased heat dissipation requires greater complexity and integration across the entire data center: raised floor, CRAC units, cabinets and so on.

Importance of Proper Cable Management

Deploying thermal and power management solutions should not be viewed as the only way to maintain an efficient data center. Checking cable performance is as important as tending to overheated equipment or increased power loads. To maintain the quality of the vital information exchanged in today's data rooms, IT professionals must properly manage cables and cords.

As unsettling as it may be for IT professionals to see a cluttered mass of cable spaghetti, effective cable management is not just about appearances.

Improper cable management can lead to serious consequences:

• Nicks, stretching and twisting can degrade a cable's signal quality and the network speed.

• Cables in the rear of a cabinet can block airflow and increase the temperature inside the cabinet.

• Sharp changes in direction can change the electrical properties of a cable by altering its size and twist rate.

• Poorly managed cables increase the time required to trace a cable during a MAC in the cabinet or rack.

Employ Cable Management Best Practices

As the number of IT components continues to increase inside a cabinet, so does the number of power and data cables. The care and attention given to cables during installation and ongoing changes are the main factors in maintaining high-quality network performance.

Consider the following checklist to ensure proper cable management:

• Run cables overhead or below whenever possible to provide easy access.

• Install proper cable management supports. (Most manufacturers have several cable management offerings.)

• Consolidate cable bundles with Velcro® straps, using low to moderate pressure. This can prevent cable damage associated with traditional metal rings.

• Keep copper and fiber-optic cables on separate runs so the weight of the copper does not impact the fiber.

• Avoid kinks and sharp bends in cables by using waterfall and cable spool devices. Spools can be especially effective with fiber for maintaining proper bend requirements and controlling slack.

• Make sure that when cables run through metal openings there are protective grommets and edging.

• Separate power, data (copper) and data (fiber) cables from each other.

“...effective cable management is not just about appearances.”



Cable Fill Rates

                                      Cross-       40% fill rate     60% fill rate     80% fill rate
Cable management type                 sectional    Cat 5e/6/6a       Cat 5e/6/6a       Cat 5e/6/6a
                                      area (in²)   (0.22/0.28/0.35 in. cable diameter)

PROLINE "PVCM"   50mm                   6.220       65 /  40 /  26    98 /  61 /  39   131 /  81 /  52
                 100mm                 12.920      136 /  84 /  54   204 / 126 /  81   272 / 168 / 107
                 X50mm                 10.870      114 /  71 /  45   172 / 106 /  68   229 / 141 /  90
                 X100mm                22.960      242 / 149 /  95   363 / 224 / 143   483 / 298 / 191
PROLINE "PVCMTD" 3.00 x 4.00           12.000      126 /  78 /  50   189 / 117 /  75   253 / 156 / 100
PROLINE "PRBTD"  50mm (1.91 x 4.00)     7.640       80 /  50 /  32   121 /  74 /  48   161 /  99 /  64
                 100mm (3.88 x 4.00)   15.520      163 / 101 /  65   245 / 151 /  97   327 / 202 / 129
PROLINE "PRBF"   50mm (1.60 x 5.25)     8.400       88 /  55 /  35   133 /  82 /  52   177 / 109 /  70
                 100mm (3.57 x 5.25)   18.700      197 / 121 /  78   295 / 182 / 117   394 / 243 / 156
Tie Wraps        Tie Wrap 8"            2.400      N/A / N/A / N/A   N/A / N/A / N/A    51 /  31 /  20
                 Tie Wrap 12"           6.000      N/A / N/A / N/A   N/A / N/A / N/A   126 /  78 /  50
D-Ring           Large                  9.440       99 /  61 /  39   149 /  92 /  59   199 / 123 /  79
                 Small                  3.500       37 /  23 /  15    55 /  34 /  22    74 /  45 /  29
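The counts in this table follow from simple geometry: usable area (channel cross section times the fill rate) divided by the cross-sectional area of one cable. A minimal Python check against the table's first entry:

    import math

    def max_cables(channel_area_sq_in, fill_rate, cable_diameter_in):
        """Cables that fit a channel: usable area (cross section x fill rate)
        divided by the cross-sectional area of one cable."""
        cable_area = math.pi * cable_diameter_in ** 2 / 4.0
        return round(channel_area_sq_in * fill_rate / cable_area)

    # PROLINE "PVCM" 50mm channel: 6.220 sq in, 40 percent fill,
    # Cat 5e cable (0.22 in diameter).
    print(max_cables(6.220, 0.40, 0.22))  # -> 65, matching the table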



Finding the Best Thermal, Power and Cable Solution for Your Data Center

As new technologies arise and the demands on data center computing equipment increase, IT professionals must constantly research best practices for managing power consumption, high heat levels and an abundance of cables.

A full range of cabinet features and designs can be combined with your facility's data center layout to effectively manage the heat generated by network equipment, its power consumption and its cabling. Thinking inside the box and finding solutions in these areas can help facilitate optimal component speed and processing power without sacrificing reliability and performance.

For more information on thermal, power and cable management, visit www.hoffmanonline.com.

About the Author
Brian L. Mordick, RCDD, Senior Product Manager, Hoffman

Brian Mordick is a Senior Product Manager at Hoffman, with special expertise in datacom, thermal and seismic issues. While developing various types of enclosures over the last 17 years, he has incorporated innovation into new enclosure designs and holds several patents. His engineering background and knowledge of the information technology industry made him an integral part of the development of the Data and Communication product platforms at Hoffman. Mordick is a graduate of the University of Wisconsin-Stout, a member of BICSI and a Registered Communications Distribution Designer (RCDD). He has frequently contributed to articles on enclosure trends and electronics and is active in the industry as a public speaker. Recent presentations include: Thermal Management, BICSI, July 2006; EMC, BICSI, May 2004; Seismic Compatibility of Network Racks & Cabinets, BICSI, May 2002; Thermal Management of Network Equipment, BICSI, January 2002; and Data Communications Racks and Cabinets, BICSI, September 2001.


Hoffman
2100 Hoffman Way
Anoka, Minnesota 55303-1745 U.S.A.
Phone: 763-421-2240
Fax: 763-422-2178
Customer Service: 763-422-2661
http://www.ehoffman.com

Canada
Hoffman
111 Grangeway Avenue, Suite 504
Scarborough, Ontario M1H 3E9
Phone: 416-289-2770
Fax: 416-289-2883
1-800-668-2500 (Canada only)

Mexico
Pentair Enclosures, S. de R. L. de C. V.
Federico T. de la Chica, No. 8 Piso-4A
Cd. Satelite, Naucalpan, Mexico C.P. 53100
Tel: (55) 5393-9005 ext. 222
Fax: (55) 5393-8827

For additional international locations see www.hoffmanonline.com/international

WP-00001 Rev. A 09/06