
ibm.com/redbooks

Implementing an IBM System x iDataPlex Solution

David Watts
Srihari Angaluri
Martin Bachmaier
Sean Donnellan
Duncan Furniss
Kevin Xu

Introduces the next generation data center solution for scale-out environments

Describes custom configurations to maximize compute density

Addresses power, cooling, and physical space requirements

Front cover


Implementing an IBM System x iDataPlex Solution

March 2009

International Technical Support Organization

SG24-7629-01


© Copyright International Business Machines Corporation 2008, 2009. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Second Edition (March 2009)

This edition applies to the IBM System x iDataPlex solution, comprising the following products:

• IBM System x iDataPlex Rack, machine type 7825
• IBM System x iDataPlex 2U Flex chassis, machine type 7831
• IBM System x iDataPlex 3U chassis, machine type 7834
• IBM System x iDataPlex dx320, machine types 7326 and 6388
• IBM System x iDataPlex dx340, machine types 7832 and 6389
• IBM System x iDataPlex dx360, machine types 7833 and 6390
• Rack management appliance, machine type 4369
• IBM System x iDataPlex Rear Door Heat eXchanger, part number 43V6048

Note: Before using this information and the product it supports, read the information in “Notices” on page vii.


Contents

Notices
  Trademarks

Preface
  The team that wrote this book
  Become a published author
  Comments welcome

Summary of changes
  March 2009, Second Edition

Chapter 1. Introduction
  1.1 Data center facts
    1.1.1 Project Green IT
    1.1.2 Next generation data center model
    1.1.3 New philosophy
  1.2 Introducing iDataPlex solution
    1.2.1 The iDataPlex rack
    1.2.2 Rear Door Heat eXchanger
    1.2.3 Flex node technology
    1.2.4 Rack management appliance
    1.2.5 Onboard manageability
  1.3 Summary

Chapter 2. Benefits of iDataPlex
  2.1 Acquisition and operating costs
    2.1.1 Acquisition
    2.1.2 Operating costs
  2.2 Integration-ready
  2.3 Modularity and flexibility
  2.4 Rack density
  2.5 Density at the data center level
  2.6 Rear Door Heat eXchanger option

Chapter 3. Positioning
  3.1 Data center-level solution
  3.2 iDataPlex customers
    3.2.1 Web 2.0 data center
    3.2.2 High performance computing clusters
  3.3 Hardware redundancy
  3.4 Scale-up versus scale-out
  3.5 Design objectives by platform
    3.5.1 BladeCenter
    3.5.2 System x
    3.5.3 iDataPlex
  3.6 System Cluster 1350 and iDataPlex
  3.7 Positioning iDataPlex solution with BladeCenter

Chapter 4. Solution overview
  4.1 The big picture of iDataPlex
  4.2 iDataPlex rack components
    4.2.1 The iDataPlex rack
    4.2.2 iDataPlex cable brackets
    4.2.3 iDataPlex cooling options
    4.2.4 Rear Door Heat eXchanger
  4.3 iDataPlex Flex chassis components
    4.3.1 2U and 3U chassis
    4.3.2 Direct Dock Power
    4.3.3 Chassis power supply, power cable, and fans
  4.4 iDataPlex servers and components
    4.4.1 Comparing the iDataPlex servers
    4.4.2 iDataPlex dx320 server
    4.4.3 iDataPlex dx340 server
    4.4.4 iDataPlex dx360 server
    4.4.5 Operator panel
    4.4.6 Storage and I/O tray
  4.5 iDataPlex options
    4.5.1 Storage and RAID options
    4.5.2 I/O options
    4.5.3 Networking options
    4.5.4 PDU options
  4.6 iDataPlex chassis configurations
    4.6.1 2U Flex chassis configurations
    4.6.2 3U chassis configurations (4A and 4B)
    4.6.3 General hardware configuration rules
  4.7 iDataPlex Management components
    4.7.1 IPMI and BMC
    4.7.2 SMBridge
    4.7.3 UpdateXpress System Packs
    4.7.4 IBM Director Update Manager
    4.7.5 Other firmware update methods
    4.7.6 Update notification subscription

Chapter 5. Application considerations
  5.1 Hardware constraints
    5.1.1 Chassis considerations
    5.1.2 iDataPlex server considerations
    5.1.3 Stateful versus stateless operation of the nodes
    5.1.4 Conclusions for operating systems and applications
  5.2 Operating system considerations
    5.2.1 Supported operating systems
    5.2.2 Location of the operating system
  5.3 Application considerations
  5.4 Redundancy considerations
  5.5 Example iDataPlex scenarios
    5.5.1 Web 2.0 example
    5.5.2 HPC example

Chapter 6. Designing your data center to include iDataPlex
  6.1 Data center design history
  6.2 Rack specifications
  6.3 Rear Door Heat eXchanger specifications
    6.3.1 Planning for the arrival
  6.4 Cabling the rack
    6.4.1 Nodes
    6.4.2 Management appliance
    6.4.3 Switches
    6.4.4 Power distribution units
    6.4.5 Cabling best practices
    6.4.6 iDataPlex overhead cable pathways
  6.5 Data center power
    6.5.1 Aggregation and reduction of data center power feeds
  6.6 Cooling
  6.7 Airflow
    6.7.1 Heat pattern analysis without the Rear Door Heat eXchanger
    6.7.2 IBM Systems Director Active Energy Manager
    6.7.3 Hot air ducts
    6.7.4 iDataPlex and standard 1U server rack placement
    6.7.5 Heat pattern analysis with the Rear Door Heat eXchanger
    6.7.6 Rear Door Heat eXchanger airflow characteristics
  6.8 Water cooling and plumbing
    6.8.1 Technical specifications
    6.8.2 Absorption rates
  6.9 Designing a heterogeneous data center
  6.10 iDataPlex servers in a standard enterprise rack
  6.11 The Portable Modular Data Center
  6.12 Possible problems in older data centers
    6.12.1 Power
    6.12.2 Cooling
    6.12.3 Static electricity
    6.12.4 RF interference
  6.13 Links to data center design information

Chapter 7. Managing iDataPlex
  7.1 The Avocent rack management appliance
  7.2 IBM power distribution units
  7.3 IBM Director and IBM Systems Director
  7.4 xCAT
  7.5 DIM
  7.6 ASU
  7.7 DSA

Chapter 8. Services offerings
  8.1 Global Technology Services
    8.1.1 High-density computing data center readiness assessment
    8.1.2 Thermal analysis for high-density computing
    8.1.3 Scalable modular data center
    8.1.4 Data center global consolidation and relocation enablement
    8.1.5 Data center energy efficiency assessment
    8.1.6 Optimized airflow assessment for cabling
    8.1.7 IBM Server Optimization and Integration Services: VMware
    8.1.8 Portable Modular Data Center
  8.2 IBM Systems and Technology Group Lab Services
    8.2.1 Data center power and cooling planning for iDataPlex
    8.2.2 iDataPlex post-implementation jumpstart
    8.2.3 iDataPlex management jumpstart
    8.2.4 Cluster enablement team services
  8.3 IBM Global Financing

Abbreviations and acronyms

Related publications
  IBM Redbooks
  IBM iDataPlex product publications
  Online resources
  How to get Redbooks
  Help from IBM

Index

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

1350™, AIX®, BladeCenter®, Cool Blue™, DB2®, DPI®, DS4000®, eServer™, GPFS™, IBM Systems Director Active Energy Manager™, IBM®, iDataPlex™, LoadLeveler®, Redbooks®, Redbooks (logo)®, System p®, System Storage™, System x®, System z®, Tivoli®, WebSphere®, xSeries®

The following terms are trademarks of other companies:

Advanced Micro Devices, AMD, the AMD Arrow logo, and combinations thereof, are trademarks of Advanced Micro Devices, Inc.

InfiniBand, and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association.

SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and other countries.

FiberRunner, PANDUIT, and the PANDUIT logo are registered trademarks of Panduit Corporation in the United States, other countries, or both.

QLogic, and the QLogic logo are registered trademarks of QLogic Corporation. SANblade is a registered trademark in the United States.

SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries.

Cell Broadband Engine and Cell/B.E. are trademarks of Sony Computer Entertainment, Inc., in the United States, other countries, or both, and are used under license therefrom.

LifeKeeper, SteelEye, and the SteelEye logo are registered trademarks of Steeleye Technology, Inc. Other brand and product names used herein are for identification purposes only and may be trademarks of their respective companies.

VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.

Java, JavaScript, MySQL, Solaris, Sun, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows Server, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.


Intel Xeon, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.


Preface

The IBM® iDataPlex™ data center solution is for Web 2.0, high performance computing (HPC) cluster, and corporate batch processing customers experiencing limitations of electrical power, cooling, physical space, or a combination of these. By taking a big-picture approach to the design, the iDataPlex solution uses innovative ways to integrate Intel®-based processing at the node, rack, and data center levels to maximize power and cooling efficiencies while providing necessary compute density.

An iDataPlex rack is built with industry-standard components to create flexible configurations of servers, chassis, and networking switches that integrate easily. Using flexible node configuration technology, an iDataPlex solution can be customized to meet specific business needs for computing power, storage intensity, and the right I/O and networking.

This IBM Redbooks® publication is for customers who want to understand and implement the IBM iDataPlex solution. It introduces the iDataPlex solution and the innovations in its design, outlines its benefits, and positions it with IBM System x and BladeCenter® servers. The book provides details of iDataPlex components and the supported configurations. It describes application considerations for an iDataPlex solution and aspects of data center design that are central to an iDataPlex solution. The book concludes by introducing the services offerings available from IBM for planning and installing an iDataPlex solution.

The team that wrote this book

This book was produced by a team of specialists from around the world working at the International Technical Support Organization (ITSO), Raleigh Center.

David Watts is a Consulting IT Specialist at the IBM ITSO Center in Raleigh. He manages residencies and produces Redbooks publications on hardware and software topics related to IBM System x® and BladeCenter servers and associated client platforms. He has authored over 80 books, papers, and technotes. He holds a Bachelor of Engineering degree from the University of Queensland (Australia) and has worked for IBM both in the United States and Australia since 1989. He is an IBM Certified IT Specialist.


Srihari Angaluri is a Technical Solution Architect in the System x Cluster 1350/iDataPlex development group in Research Triangle Park. He has several years of experience in HPC. Srihari holds a Master of Science degree in Computer Science. He previously worked as a research assistant at the NSF Engineering Research Center at Mississippi State University, and at Cornell Theory Center in Ithaca, NY, where he ported the Message Passing Interface (MPI) middleware library for Microsoft® Windows® clusters, and developed a batch job scheduling system. Prior to joining IBM, Srihari worked on developing various commercial products in the HPC area. Since joining IBM, he has worked on implementing cluster systems at various customer sites, including the Darwin cluster at MIT, and the HPC cluster at Ohio Supercomputer Center, which is ranked #76 in the November 2007 Top500 list.

Martin Bachmaier works as an IT Specialist in the IBM BladeCenter development lab in Boeblingen, Germany, working on blade systems based on the Cell Broadband Engine™ (Cell/B.E.™) processor. He has worked for IBM for more than five years. As part of the Open Systems Design and Development team, his current responsibilities include the architecture and management of Linux®-based infrastructures, and the support of early customer installations and development teams around the world. He helped to implement several IBM internal clusters, mainly with Cell/B.E., and is also involved in the development of systems using IBM next generation processors. He holds a degree in Computer Science from the University of Cooperative Education in Stuttgart, Germany, and a Bachelor of Science (Hons) from the Open University in London, UK. He is an IBM Certified Systems Expert and is CCNA certified.

Sean Donnellan is an IT Architect with IBM in Ehningen, Germany and has worked in the IT industry since 1995 when he started as an Enterprise Firewall Designer and an AIX® SP2 Cluster Specialist. He has worked for IBM for eight years. Among other IBM specialist certifications, he is an AIX Advanced Technical Expert and holds the CISSP credential. He has been involved in designing solutions for Linux on System z®, Intrusion Detections Systems design in hosting environments, designing systems management solutions, and data center and hosting design in general. He is currently architecting shared data center migrations and shared customer virtualization solutions across the entire IBM Systems family.

Duncan Furniss is a Senior Advisory IT Specialist for IBM Canada, and currently provides technical sales support for System x, BladeCenter, and System Storage™ products. He has co-authored four IBM Redbooks publications: High Availability Without Clustering, SG24-6216, IBM eServer xSeries 440 Planning and Installation Guide, SG24-6196, Using Active PCI Manager, REDP-0446, and Using Process Control, REDP-3637. He contributed to the IBM Redpaper Implementing Sun Solaris on IBM BladeCenter Servers, REDP-4269. He has designed and implemented several Linux clusters for various applications,


some of which are on the Top500 list, and some of which are in grids. He is an IBM Regional Designated Specialist for Linux, High Performance Compute Clusters, and Rack, Power and Cooling. He is an IBM Accredited IT Specialist.

Kevin Xu is a Technical Support Engineer for IBM Greater China Group. He has been part of IBM ISC ShenZhen for four years. He provides technical support and training for System x and BladeCenter, focusing on Linux, virtualization, and performance tuning. He holds the following certifications: Microsoft Certified Systems Engineer, Sun™ Certified Java™ Developer, Red Hat Certified Engineer, VMware® Certified Professional, IBM eServer™ Certified Specialist, and IBM eServer Certified Instructor.

The author team (l-r): Duncan, Sean, Srihari, David, Martin, and Kevin


The authors want to thank the following people for their contributions to this project:

From the International Technical Support Organization:

• Tamikia Barrow
• Mary Comianos
• Mike Ebbers
• Linda Robinson
• Diane Sherman

From iDataPlex Marketing:

• Randi Crawford Wood
• RJ Hixson
• Monica Martinez
• Rajesh Sukhramani

From iDataPlex Development:

• Nadia Anguiano-Wehde
• Ralph Begun
• Chu Chung
• Alan Claassen
• Rodrigo De la Parra
• Tiep Dinh
• Karl Dittus
• Eric Eckberg
• Noel Fallwell
• Kellie Francis
• Dottie Gardner
• John Gossett
• Dennis Hansen
• Tim Hiteshew
• Vinod Kamath
• Vic Mahaney
• Gregg McKnight
• Mark Megarity
• Stephen Mroz
• Whit Scott
• Roger Schmidt
• Scott Shurson
• Junjiro Sumikawa


IBM Services Offerings:

• Brian Canney
• David Franko
• Michael Karchov
• Bret Lehman
• Rey Rodriguez
• Steven Sams

IBM employees from around the world:

• Christine Kubsch, Germany
• Peter Morjan, Germany
• Ronnie Winick, Research Triangle Park, NC USA

From other companies:

• Alvin Galea, Australia Post
• Rick Korn, SMC
• Scott Lorditch, Blade Network Technologies
• Tom Boucher and team, Panduit Corp.

Become a published author

Join us for a two- to six-week residency program! Help write a book dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You will have the opportunity to team with IBM technical professionals, Business Partners, and Clients.

Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you will develop a network of contacts in IBM development labs, and increase your productivity and marketability.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html


Comments welcome

Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:

• Use the online Contact us review Redbooks form found at:

  ibm.com/redbooks

• Send your comments in an e-mail to:

  [email protected]

• Mail your comments to:

  IBM Corporation, International Technical Support Organization
  Dept. HYTD Mail Station P099
  2455 South Road
  Poughkeepsie, NY 12601-5400


Summary of changes

This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified.

Summary of Changes
for SG24-7629-01
for Implementing an IBM System x iDataPlex Solution
as created or updated on March 25, 2009.

March 2009, Second Edition

This revision reflects the addition, deletion, or modification of new and changed information described in the following sections.

New information

• Positioning iDataPlex solution with BladeCenter, page 30
• IBM iDataPlex dx320, page 52
• IBM iDataPlex dx360, page 60
• Table comparing the dx320, dx340 and dx360 servers, page 51
• Details about internal SAS connectivity, page 68
• Force10 2410CP and SMC 8024L2 switch, page 74
• Support for 1TB SATA drive, page 59
• Information about heterogeneous data center, page 155
• Information about iDataPlex installation in an enterprise rack, page 158
• Information about cabling best practices for the iDataPlex rack, page 128
• Information about cabling overhead pathways, page 131
• Frequently asked questions (FAQs) and answers throughout the book

Changed information

• Operating system support, page 103


Chapter 1. Introduction

Demand for data center compute capacity has been growing steadily over the past few years with existing and emerging technologies such as Web 2.0, online gaming, social network analysis (SNA), the real-world Web, Service Oriented Architecture (SOA), cloud computing, and virtualization. These technologies, coupled with growing user demands, are pushing the envelope when it comes to the amount of raw compute power, storage, and other associated data center infrastructure requirements. Collectively, they contribute to customers' growing energy bills for data center power and cooling, and to acquisition and on-going maintenance costs.

These demands require not only the expansion of existing data centers but also the design of new massive scale-out data centers to compete in and sustain the rapid growth of markets in these areas. As data centers expand to meet these new demands, new challenges are emerging, which can be primarily categorized as follows:

• Increased power and cooling requirements for servers and related hardware
• Constraints in floor space, which mandate more compute power in a limited amount of space
• The need for efficient and effective manageability of massive scale-out environments
• The need to minimize initial acquisition costs and subsequent maintenance costs for servers and associated infrastructure


1.1 Data center facts

A recent report from IDC, a well-known IT research firm, presents the following interesting facts about data centers:

• Data center power density is increasing by approximately 15% annually.
• Resulting power draws per rack have grown eightfold since 1996.
• Over 40% of data center customers report that power demand is outstripping supply.

IDC concludes “Data center infrastructure is challenged to keep pace.”¹

According to a survey by AFCOM and InterUnity Group, which was quoted in a SearchDataCenter.com report, “In two years, 44.5% of AFCOM members’ data centers would be incapable of supporting business requirements due to capacity constraints.”²

Professor Jonathan Koomey of Stanford University concludes in his paper “Estimating Total Power Consumption By Servers In The U.S. And The World” that for every kW of power used to drive servers, on average another kW of power is needed to drive infrastructure devices such as power and air conditioning.³

Koomey also estimates that in 2010, electricity use in data centers will be as much as 76% higher than in 2005.

1.1.1 Project Green IT

Given these alarming facts, various commercial companies, hardware manufacturers, and the governments of several nations have started to address current energy issues and to plan for future power demands. According to the IT research firm Gartner, Inc., in its report “Gartner Identifies the Top 10 Strategic Technologies for 2008,” Green IT ranks among the top 10.⁴

¹ See the IDC report “The Impact of Power and Cooling on Data Center Infrastructure”:
http://www.ibm.com/systems/z/advantages/energy/index.html#analyst_rep

² See the SearchDataCenter.com e-book The Green Data Center: Energy-Efficient Computing in the 21st century:
http://searchdatacenter.techtarget.com/general/0,295582,sid80_gci1273283,00.html

³ See “Estimating Total US & Global Server Power Consumption”:
http://enterprise.amd.com/us-en/AMD-Business/Technology-Home/Power-Management.aspx

⁴ See the Gartner press release:
http://www.gartner.com/it/page.jsp?id=530109


Green IT is the term coined for the paradigm shift toward designing data centers that are much more efficient in power consumption and cooling, and that reduce carbon emissions and potential environmental impact, compared to traditional data center designs. Gartner predicts that new government regulations have the potential to constrain how companies design data centers to meet these environmental requirements. In the wake of these challenges, and with business requirements to stay competitive, companies are actively looking to server manufacturers and other infrastructure vendors for solutions at various levels.

1.1.2 Next generation data center model

To address the new data center challenges, a new way of designing data centers, and the server infrastructure that goes into the data centers, is essential. The design must encompass data center power, cooling, management, and acquisition and operating costs as the chief design goals, as opposed to just performance, cost, and scalability at the server level.

In an effort to address various critical IT challenges, IBM has, for several decades, developed and introduced innovative technologies to the market and established itself as the industry leader. As data center computing enters the next generation, IBM has once again focused on developing technologies that will address the critical data center challenges to meet Green IT goals, and that will also revolutionize data center design and the computing model over the next decade.

As part of this grand vision, in 2007 IBM announced the Project Big Green initiative, which aims to “increase data center compute capacity by a factor of 10, while using 50 percent less power by the end of the decade, dramatically increasing performance without the need to build new data centers, conserving resources from trees to gasoline.”⁵

Project Big Green includes server designs and innovations in various related products to help increase power and cooling efficiencies, while reducing IT acquisition costs and simplifying management for massive scale-out data centers.

1.1.3 New philosophy

As part of the new thinking, a new philosophy has emerged, which promises to deliver the Project Big Green goal. The philosophy states that instead of designing servers and putting them in data centers, first design the ideal data center and then design servers and racks specifically for this design. This philosophy is the basis for the iDataPlex solution.

⁵ See the IBM press release “IBM Premiers Project Big Green in Hollywood”:
http://www.ibm.com/press/us/en/pressrelease/22891.wss


1.2 Introducing iDataPlex solution

To address the growing data center challenges as outlined previously, IBM has taken a new approach and designed an innovative family of products and technologies called the iDataPlex solution.

IBM has years of experience designing server technologies for scale-up and scale-out settings that primarily focus on performance and scalability as the fundamental requirements. However, the iDataPlex solution focuses on a different set of goals:

• Reduce the initial hardware acquisition costs and on-going maintenance costs for data center owners
• Improve efficiency in power consumption
• Eliminate data center cooling requirements
• Achieve higher server density within the same footprint as the traditional rack layout
• Simplify manageability for massive scale-out environments
• Reduce the time to deployment through pre-configuration and full integration at manufacturing

As is evident, these design goals go far beyond a single server or a single rack level; they are goals for the entire data center.

With the new philosophy and the new design, the iDataPlex solution promises to address the data center challenges at various levels:

• An innovative rack design achieves higher node density within the traditional rack footprint.
• An innovative flex node chassis and server technology are based on industry standard components.
• Shared power and cooling components improve efficiency at the node and rack level.
• An optional Rear Door Heat eXchanger virtually eliminates traditional cooling based on computer room air conditioning (CRAC) units.
• Various networking, storage, and I/O options are optimized for the rack design.
• Intelligent, centralized management is available through a management appliance.

Each of these innovations is described in the following sections. Subsequent chapters explain the technology and products in more detail.


1.2.1 The iDataPlex rack

The iDataPlex rack cabinet design offers 100 rack units (U) of space, as opposed to the traditional enterprise rack cabinet, which has 42U of space. In that sense, the iDataPlex rack is essentially two 42U racks connected together and provides additional vertical bays. The iDataPlex rack is shown in Figure 1-1.

Figure 1-1 iDataPlex 100U rack


However, the iDataPlex rack is shallower than a standard 42U server rack, as shown in Figure 1-2. The iDataPlex rack is 600 mm deep (840 mm with the Rear Door Heat eXchanger), compared to the 42U rack, which is 1050 mm deep. The shallow depth of the rack and of the iDataPlex nodes is part of the reason that the cooling efficiency of iDataPlex is higher than that of the traditional rack design: air travels a much shorter distance to cool the internals of the server than it does in a traditional rack.

Figure 1-2 Comparison of iDataPlex with two standard 42U racks (top view). One iDataPlex rack: 1200 mm wide, 600 mm deep (840 mm with the RDHx), node bays 446 mm x 520 mm. Two standard 19-inch racks: 1280 mm wide, 1050 mm deep, node bays 444 mm x 700 mm.

A single iDataPlex 100U rack enclosure provides more than twice the compute density of a standard 42U rack enclosure within the footprint of a single 42U rack. The iDataPlex rack has 84 horizontal 1U slots for server chassis and 16 vertical slots for network switches, PDUs, and other appliances.

The rack is oriented so that servers fit in side-by-side on the widest dimension. For ease of serviceability, all hard drive, planar, and I/O access is from the front of the rack. There is little need to access the rear of the iDataPlex rack for any serviceability other than to service the Rear Door Heat eXchanger, the power cords, and the power distribution units (PDUs).
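As a quick sanity check on that density claim, the footprint dimensions quoted above can be converted into servers per square meter. The following Python sketch is illustrative arithmetic only: it uses the numbers from Figure 1-2 and ignores service clearances and aisle space.

```python
# Back-of-the-envelope density check using the footprint dimensions from
# Figure 1-2. Ignores service clearances and aisle space.

def density(servers: int, width_mm: float, depth_mm: float) -> float:
    """Servers per square meter of rack footprint."""
    area_m2 = (width_mm / 1000.0) * (depth_mm / 1000.0)
    return servers / area_m2

# One iDataPlex rack: 84 horizontal 1U server slots in about 1200 mm x 600 mm.
idataplex = density(84, 1200, 600)

# Two standard 19-inch racks side by side: 2 x 42 = 84 1U servers
# in about 1280 mm x 1050 mm.
standard = density(84, 1280, 1050)

print(f"iDataPlex rack: {idataplex:.1f} servers/m^2")   # ~116.7
print(f"Standard racks: {standard:.1f} servers/m^2")    # ~62.5
```

The same 84 servers occupy about 0.72 square meters in one iDataPlex rack versus about 1.34 square meters across two standard racks, which is where the density advantage comes from.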

1.2.2 Rear Door Heat eXchanger

The Rear Door Heat eXchanger is a water-cooled door that is mounted on the rear of an IBM iDataPlex rack to cool the air that is heated and exhausted by devices inside the rack. A supply hose delivers chilled, conditioned water to the heat exchanger. A return hose delivers warmed water back from the heat exchanger.



For customers who are able to cool their data centers with water, the Rear Door Heat eXchanger can withdraw 100% or more of the heat coming from a 100,000 BTU per hour (approximately 30 kW) rack of servers, alleviating the cooling challenge that many data centers face. By selecting the correct water inlet temperature and water flow rate, you can achieve optimal heat removal.
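What the door can absorb follows from ordinary sensible-heat arithmetic: Q = ṁ × cp × ΔT, where ṁ is the water mass flow rate and ΔT is the temperature rise of the water across the coil. The following sketch illustrates that relationship; the flow rate and temperature values are illustrative assumptions, not iDataPlex specifications.

```python
# Sensible heat absorbed by the door's water loop: Q = m_dot * c_p * dT.
# The flow rate and temperatures below are illustrative assumptions,
# not iDataPlex specifications.

WATER_DENSITY = 998.0  # kg/m^3, water at about 20 C
WATER_CP = 4186.0      # J/(kg*K), specific heat of water

def heat_removed_kw(flow_lpm: float, t_in_c: float, t_out_c: float) -> float:
    """Heat carried away by the water loop, in kW."""
    m_dot = (flow_lpm / 60.0 / 1000.0) * WATER_DENSITY  # liters/minute -> kg/s
    return m_dot * WATER_CP * (t_out_c - t_in_c) / 1000.0

# Example: 38 liters/minute entering at 18 C and leaving at 29 C absorbs
# about 29 kW, roughly the 100,000 BTU/hr rack load mentioned above.
print(f"{heat_removed_kw(38, 18, 29):.1f} kW")
```

The formula also shows why both knobs matter: a colder inlet allows a larger temperature rise for the same outlet temperature, and a higher flow rate removes more heat for the same temperature rise.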

The Rear Door Heat eXchanger is shown in Figure 1-3. The inlet and outlet connectors are shown at the bottom of the photo.

Figure 1-3 Rear Door Heat eXchanger (rear view of the iDataPlex rack)


1.2.3 Flex node technology

The modular design of the iDataPlex components allows for customized server solutions to meet specific business needs. The servers for iDataPlex can be configured in numerous ways by using flex node technology, which is an innovative 2U modular chassis design. The iDataPlex 2U Flex chassis can hold one server with multiple drives or two servers in a 2U mechanical enclosure. Figure 1-4 shows two possible configurations of the iDataPlex 2U Flex chassis.

Figure 1-4 Two examples of configurations in the iDataPlex 2U Flex chassis: a storage node, and two compute nodes

For maximum computing density, customers will typically want to populate each chassis with two servers. In addition to the compute-oriented configurations, the iDataPlex solution offers a 3U storage-rich server. The 3U chassis can accommodate up to 12 hot-swap SAS or SATA drives for up to 9 TB of storage.

Racks are orderable and deliverable with any combination of configured chassis in the rack up to a total of 84 nodes in 42 chassis. This approach provides maximum flexibility for data center solutions to incorporate a combination of configurations in a rack.

1.2.4 Rack management appliance

The iDataPlex rack can include an optional rack management appliance based on the Avocent MergePoint 5300 device. The device uses the Intelligent Platform Management Interface (IPMI) Version 2.0 protocol for multiple management functions to provide an intelligent aggregated rack solution. Customers with custom solutions that depend on IPMI 2.0 will find full functionality with iDataPlex servers and be able to manage at the rack level. Figure 1-5 shows the Avocent MP5300 Rack Management appliance.

Figure 1-5 Avocent MergePoint 5300 Rack Management Appliance (front and rear)

1.2.5 Onboard manageability

Each compute node has a baseboard management controller (BMC) which provides basic service-processor environmental monitoring functions. If an environmental condition exceeds a threshold, or if a system component fails, the BMC lights LEDs to help you diagnose the problem and records the error in the error log. The BMC also provides remote server management capabilities using the IPMI 2.0 protocol.
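Because the BMC implements the standard IPMI 2.0 protocol, nodes can be scripted with stock tooling rather than proprietary agents. The following minimal sketch drives the widely available ipmitool utility from Python over its lanplus interface; the host name and credentials are placeholders, and a production script would not pass a password on the command line.

```python
# Minimal sketch: querying a node's BMC over IPMI 2.0 with the stock
# ipmitool utility. The host name and credentials are placeholders.
import subprocess

def ipmi(host: str, user: str, password: str, *args: str) -> str:
    """Run one ipmitool command against a remote BMC over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", host,
           "-U", user, "-P", password, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

BMC = ("node01-bmc.example.com", "admin", "secret")  # placeholders

# Chassis power state of the node
print(ipmi(*BMC, "chassis", "power", "status"))

# Environmental sensor readings (temperatures, fans, voltages)
print(ipmi(*BMC, "sdr", "list"))

# System event log, where the BMC records threshold and failure events
print(ipmi(*BMC, "sel", "list"))
```

Looping the same calls over every node in a rack is the basis of simple fleet-wide health checks.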

Each node also supports IBM Dynamic System Analysis (DSA) to collect and analyze system information to aid in diagnosing problems. The diagnostic programs collect the following information:

• System configuration
• Network interfaces and settings
• Installed hardware
• Service processor status and configuration
• Vital product data, firmware, and BIOS configuration
• RAID controller configuration
• Event logs for ServeRAID controllers and service processors


The diagnostic programs create a merged log that includes events from all collected logs. The information is collected into a file that can be sent to IBM service and support. Additionally, you can view the information locally through a generated HTML or text report file.

1.3 Summary

Next-generation scale-out data centers are going to be severely constrained for power, cooling, floor space, and acquisition and maintenance costs, and will be subject to various government regulations for power efficiency, carbon emissions, and environmental impact.

To address these concerns and to design the next-generation data center model, IBM announced the Project Big Green initiative. As part of the project, IBM will bring to market a series of products and technologies that address various data center-related aspects to meet the Project Big Green objectives of building cost-effective and power-efficient green data centers.

The iDataPlex solution, one such technology, defines a new server architecture based on the new iDataPlex 100U rack cabinet, flex node and chassis technology, the Rear Door Heat eXchanger, and a rack management appliance. The iDataPlex solution not only helps customers address data center power, cooling, and floor space concerns at an attractive price point, but is also delivered preconfigured and prewired for quicker deployment.

The subsequent chapters describe the individual components of the iDataPlex technology in greater detail and present various aspects related to designing and implementing highly efficient data centers using IBM iDataPlex technology.

For more information, see:

http://www.ibm.com/systems/x/hardware/idataplex/


Chapter 2. Benefits of iDataPlex

This chapter describes how the iDataPlex solution uniquely addresses data center challenges. The iDataPlex solution is designed from the ground up, from the compute facility to the rack level to the individual servers. This approach addresses costs and efficiencies from procurement, through implementation, to operation. The iDataPlex solution is a range of systems; similar to the BladeCenter systems that IBM introduced in 2002, the range will grow in numbers and evolve with technological advances.

In this chapter, we introduce the iDataPlex rack and the range of systems that are currently available. We discuss how this new product increases computing density at the rack level and at the room level. We also discuss how the iDataPlex solution is procured and delivered.

Topics in this chapter are:

- 2.1, “Acquisition and operating costs” on page 12
- 2.2, “Integration-ready” on page 14
- 2.3, “Modularity and flexibility” on page 15
- 2.4, “Rack density” on page 16
- 2.5, “Density at the data center level” on page 17
- 2.6, “Rear Door Heat eXchanger option” on page 18


2.1 Acquisition and operating costs

Paramount among the development objectives for the iDataPlex solution is to lower acquisition and operating costs and to simplify the process for customers who want to build large-scale data centers.

2.1.1 Acquisition

Currently, iDataPlex solutions are acquired through IBM direct sales channels. Members of the IBM iDataPlex sales group have the required skills, and will involve the appropriate IBM experts, to guide an iDataPlex solution through configuration, review, manufacturing, delivery, and installation.

Many options are provided by third-party vendors (including Avocent, BNT, Cisco, Force10, and SMC). Many of the internal components are sourced from high-volume manufacturers. When market conditions coincide with appropriately sized orders of iDataPlex, IBM will make spot purchases of components rather than build from existing inventory, taking advantage of opportunities to minimize the acquisition cost. These savings are then passed on to the customer.

iDataPlex solutions are custom-built at the time of the order, reflecting the customer’s specific requirements. Implementations at the iDataPlex scale are unlikely to proceed without planning on several fronts, such as:

- Computational, storage, and networking needs
- Financial considerations
- Real estate, power, cooling, and cabling provisioning

Thus, if the iDataPlex solution is built while these considerations are being addressed, it can be scheduled for delivery when the data center is ready to receive it. The exact time frame is determined when the order is placed; as a guide, expect delivery in approximately three months.

These process decisions, as well as the system engineering discussed in Chapter 4, “Solution overview” on page 33, were made to drive the acquisition cost (often referred to as capital expenditure, or CAPEX) as low as possible. Although prices of the iDataPlex components are not published on the Internet, keep in mind that the price of the hardware is substantially lower than traditional tier-one products and is comparable to white box products.


2.1.2 Operating costs

Operating costs (operational expenditure, or OPEX) include real estate (floor space), electrical power consumed in operation, energy used to maintain temperature and humidity, and manageability costs. These expenses are fundamental considerations in the design of iDataPlex solutions.

The current focus on power and cooling is indicative of two phenomena:

- Acknowledgment of just how expensive power and cooling are in the operation of a computing facility.

- Use of a computing model that provides performance and reliability through software, computing in a scalable and fault-tolerant way across multiple servers, which decreases the emphasis on wringing maximum performance and the highest levels of reliability out of individual servers.

This focus enables iDataPlex engineers to improve power efficiency in many ways and to simplify the server design, which in turn lends itself to greater reliability, lower cost, and higher efficiency. The power supply and fans in each chassis are optimized for efficiency in powering and cooling the nodes. Using 80 mm fans for all configurations provides commonality of parts, and these fans are more efficient than the larger number of 40 mm fans that would be needed to do the same work. The planars use chip sets that are balanced performers with low power consumption. The overall node design is short from front to back, and special attention is given to reducing resistance to airflow; the more easily the cooling air can flow through the server, the less energy the fans use to move that air.

Aside from moving data centers to lower-cost locations, the solution to the real estate challenge is to increase overall solution density. The iDataPlex solution addresses this challenge at the server level, rack level, and data center level.

iDataPlex nodes comply with Intelligent Platform Management Interface (IPMI) standards, enabling integration with a broad spectrum of management methods. A wide selection of networking devices is available, so in addition to integrating these devices into the overall networking environment, you can leverage your existing investment in network management expertise.


2.2 Integration-ready

In addition to lowering acquisition costs, having IBM preconfigure the iDataPlex solution at the manufacturing facility reduces CAPEX and OPEX further:

- Packaging that is not necessary after the solution is shipped is minimized, thereby reducing disposal requirements.

- Considerations of proper airflow for cooling, robust mechanical assembly, and in-rack cabling are addressed before the system arrives.

- Testing after final assembly minimizes (and ideally eliminates) servicing that has to be performed after the equipment arrives.

The iDataPlex user simply has to roll the rack into the data center, extend its levelling legs, and connect the power, network, and (optionally) cooling water, and the rack is ready to use. An example of an iDataPlex rack as shipped is shown in Figure 2-1.

Figure 2-1 A preconfigured iDataPlex rack


2.3 Modularity and flexibility

The flexible design of iDataPlex provides cost-efficient servers in configurations to meet many requirements. The node design has a common power supply and fan assembly for all models, to minimize costs and maximize the benefits of standardization. The basis for the flex nodes is an industry standard motherboard based on the Server System Infrastructure (SSI) specification.

A flexible set of 11 configurations can be created from common building blocks. The configurations are either computationally dense, I/O-rich, or storage-rich. This modular approach to server design keeps costs low and provides a wide range of node types. Three flex node variations are shown in Figure 2-2.

Figure 2-2 Three of the 12 available iDataPlex flex node configurations

Having IBM integrate the servers, power distribution, and networking at the manufacturing facility saves deployment time and ensures design integrity, both of which reduce solution cost. The IBM specialist who designs the custom iDataPlex solution for the customer takes into consideration the rack positioning of the nodes and the airflow requirements of all components, and provides instructions to manufacturing. An iDataPlex solution is built to be shipped and to arrive as a complete unit. The only packaging is the shipping crate, which protects the unit in transit, can be moved by forklift, and provides a ramp for rolling the iDataPlex rack from the crate to the floor.


2.4 Rack density

The unique iDataPlex design maximizes floor space usage, as shown in Figure 2-3. In the area of two data center floor tiles, there are two columns of 43U of 19-inch horizontal-mounting rack space. The Avocent rack management appliance, cables, or a filler panel consumes 1U at the top of each column, which is why we specify a total of 84U available in horizontal orientation.

In addition, there are eight 2U vertical pockets, which can be subdivided into 1U pockets (labelled B1–B8 and D1–D8 in Figure 2-3).

Figure 2-3 iDataPlex rack layout



The rack density offers a potential additional benefit: in certain cases, the number of power feeds can be reduced for the same number of servers when they are consolidated in one rack instead of two, as shown in Figure 2-4.

Figure 2-4 Power feed consolidation

One U.S.-based power company was charging its customers USD1500 - 2000 per month for each 8.6 kVA feed, in addition to the power consumption charges. One customer indicated that reducing the number of feeds per rack from four to three would save them about USD14 million per year.
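As a back-of-envelope check on these figures (the per-feed fee and the USD 14 million total come from the text; the implied rack count is our own inference), the following sketch works out what eliminating one feed per rack is worth:

# Eliminating one 8.6 kVA feed per rack at the quoted monthly fees:
annual_saving_total = 14_000_000          # USD per year, as quoted

for monthly_fee in (1500, 2000):          # USD per feed per month
    per_rack = monthly_fee * 12           # one feed eliminated per rack
    racks = annual_saving_total / per_rack
    print(f"USD{monthly_fee}/month -> USD{per_rack:,}/rack/year "
          f"-> about {racks:,.0f} racks")
# USD1500/month implies roughly 780 racks; USD2000/month roughly 580 racks,
# so a saving of this size corresponds to a very large installation.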

2.5 Density at the data center level

The iDataPlex design allows for increased system density in the data center. The shallow depth of the rack allows for more rows of systems in the same area. Figure 2-5 on page 18 shows possible density increases that the iDataPlex solutions enable.



Figure 2-5 Data center density improvements enabled by iDataPlex solution

In addition, with 16U of vertical bay space available for power, networking, management, or other options, most iDataPlex designs can use all of the horizontal rack space for data processing, potentially reducing the number of racks required. The power efficiency of iDataPlex means less of the data center floor has to be given to cooling equipment.

2.6 Rear Door Heat eXchanger option

The Rear Door Heat eXchanger option for iDataPlex solution was developed to extract heat at its source. A water-cooled door closes behind the iDataPlex rack and provides energy savings from not having to cool air with fans or blowers elsewhere in the computer room, as is done with conventional computer room air conditioner (CRAC) units.

Because of the design of the iDataPlex rack, the Rear Door Heat eXchanger has a large surface area for the number of servers it cools, making it very efficient. It can greatly reduce, or even eliminate the requirement for additional cooling in the server room, freeing space that is occupied by the numerous CRAC units that are usually required.

(Figure content: with standard 1U racks and air cooling occupying X sq ft, the same systems in iDataPlex racks occupy 0.79X sq ft with air cooling, or less than 0.42X sq ft with liquid cooling. Hot and cold aisles are sized by standard floor tiles, for example 400 CFM per tile.)


Any data center that uses CRAC systems already has chilled water installed, so integrating this solution is not difficult. As an added benefit, hot spots in the data center, which can be problematic to address, are less likely to occur.

Figure 2-6 shows thermal images taken of an iDataPlex rack under test in the IBM Thermal Lab, with a person standing beside it, before and after the heat exchanger was operational.

Even without water cooling, the iDataPlex solution is still at least 20% cooler than the conventional rack approach. With the optional Rear Door Heat eXchanger, an iDataPlex rack can run at room temperature, with no air conditioning required.

Figure 2-6 iDataPlex rack without (top image) and with (bottom image) the Rear Door Heat eXchanger closed and operational


Chapter 3. Positioning

This chapter explains how iDataPlex is positioned in the marketplace compared with other systems that have Intel processors. It will help you to understand the iDataPlex target audience and the types of applications for which it is intended.

Topics in this chapter are:

- 3.1, “Data center-level solution” on page 22
- 3.2, “iDataPlex customers” on page 22
- 3.3, “Hardware redundancy” on page 24
- 3.4, “Scale-up versus scale-out” on page 26
- 3.5, “Design objectives by platform” on page 27
- 3.6, “System Cluster 1350 and iDataPlex” on page 28
- 3.7, “Positioning iDataPlex solution with BladeCenter” on page 30


3.1 Data center-level solution

iDataPlex is a new approach for the data center. Unlike traditional servers, which are designed at the individual server level, the iDataPlex design considers the rack as a whole unit, taking into account the compute nodes, storage, I/O, switches, management, and power and cooling, to provide a complete customized solution. Table 3-1 compares iDataPlex to traditional server form factors.

Table 3-1 Comparing different server form factors

Server type   | Management level                          | Deployment                                   | Relative node scale
Tower servers | Tower level management (typically remote) | By server                                    | Individual servers, 1–5+
Rack servers  | Server level                              | By server (into new or existing rack)       | Individual servers, 3–10+
BladeCenter   | Chassis level                             | By blade/chassis (into new or existing rack) | BladeCenter chassis (4–14 blades per chassis)
iDataPlex     | Rack level                                | By rack (into new or existing data center)  | iDataPlex rack (24–84 nodes per rack)

3.2 iDataPlex customers

iDataPlex technology is a flexible, massive scale-out data center server solution built on industry standard components for customers who are looking for compute density and energy efficiency. Figure 3-1 on page 23 shows iDataPlex application positioning against our BladeCenter, rack, and high-end scale-up eX4 technology servers.



Figure 3-1 iDataPlex application positioning

iDataPlex is the right choice for customers who are:

- Facing power, cooling, and density challenges

- Relying on software redundancy built into their applications and comfortable with lower hardware redundancy

- Focused on lowering their capital and operating expenses

iDataPlex is targeted specifically to data-center-level customer applications that are based on stateless computing, including:

- Web 2.0
- High performance computing clusters
- Corporate batch processing
- Grid computing

3.2.1 Web 2.0 data center

The Web 2.0 data center characteristics include:

- Potentially tens of thousands of nodes in a data center
- Often located near low-cost power sources
- Easy deployment and scale-out
- Software resiliency for availability
- Pools of compute resources to balance the workload



The iDataPlex solution was designed to match these requirements. It provides an easy-to-buy and easy-to-own scale-out data center solution.

3.2.2 High performance computing clusters

High performance computing (HPC) is the use of parallel processing to run advanced application programs efficiently, reliably, and quickly. Clusters typically comprise components that could be used separately in other types of computing configurations: compute nodes, network adapters and switches, local and external storage, and systems-management software and applications.

The iDataPlex solution provides the next generation of HPC and focuses on:

- Speed
  – High-performance CPUs
  – Large memory per node
  – High-speed interconnects

- Density
  – From 2 to 3 times the density of a 1U rack-mount solution
  – Up to 2 nodes per chassis
  – Up to 84 nodes per rack

- Economy
  – Lower capital outlay for acquisition
  – Up to 40% lower power consumption than standard 1U designs (a rough cost illustration follows below)
  – Up to 75% reduction in cooling cost compared to 1U servers and traditional computer room air conditioning
  – Lower total cost of ownership

The iDataPlex solution is the fast, dense, and economical choice for HPC.
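To illustrate what the power claim can mean in operating cost, the following sketch plugs purely hypothetical numbers (an assumed node wattage and energy price) into the 40% figure quoted above:

# Illustrative only: node wattage and energy price are assumptions,
# not measured iDataPlex values; the 40% reduction is the claim above.
watts_1u = 350                        # assumed draw of a traditional 1U server
watts_idp = watts_1u * (1 - 0.40)     # 40% lower power consumption
nodes = 84                            # one fully populated iDataPlex rack
usd_per_kwh = 0.10                    # assumed energy price
hours_per_year = 24 * 365

def annual_cost(watts: float) -> float:
    return watts * nodes * hours_per_year / 1000 * usd_per_kwh

saving = annual_cost(watts_1u) - annual_cost(watts_idp)
print(f"Assumed annual energy saving per rack: USD{saving:,.0f}")
# With these assumptions the saving is about USD10,300 per rack per year,
# before any reduction in cooling energy is counted.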

3.3 Hardware redundancy

The iDataPlex solution provides hardware redundancy, although hardware redundancy is not as important to iDataPlex customers because they use networking and software to maximize availability to end users. The iDataPlex solution is used in massive scale-out clusters that have many nodes. If one node fails, the software transfers its workload to other nodes, with little effect on the cluster as a whole.


Table 3-2 shows a comparison of redundancy features between iDataPlex, System x servers, System x Enterprise servers, and BladeCenter.

Table 3-2 Comparing redundancy components

Redundancy feature        | iDataPlex               | System x servers                  | System x Enterprise servers                                      | BladeCenter
Memory                    | ECC, mirroring, sparing | ECC, Chipkill, mirroring, sparing | ECC, Chipkill, Memory ProteXion, mirroring, hot-add, hot-replace | ECC, Chipkill, mirroring, sparing
Storage RAID              | Supported (a)           | Supported                         | Supported                                                        | Supported
Dual Ethernet             | Supported               | Supported                         | Supported                                                        | Supported
Hot-swap Active PCI slots | No                      | No                                | Supported                                                        | No
Power supply              | No                      | Supported (b)                     | Supported                                                        | Supported (c)
Fans                      | No                      | Supported (b)                     | Supported                                                        | Supported (c)

a. Supports hardware or software RAID, based on server configuration.
b. Certain low-end System x servers do not have redundant fans and power supplies.
c. Feature provided by the BladeCenter chassis.

The iDataPlex dx340 server supports fully buffered, registered ECC DDR2 memory. The onboard memory chip set supports memory mirroring and memory sparing to increase the availability of the memory system.

A ServeRAID controller can be added to the iDataPlex dx340 to support hardware RAID. With the ServeRAID-MR10i SAS/SATA controller, the iDataPlex dx340 supports RAID 0, 1, 10, 5, 50, 6, and 60. The onboard SATA controller does not provide RAID functionality.

The iDataPlex dx340 supports redundant Ethernet connections with the two onboard controllers. If a problem occurs with one Ethernet connection, all Ethernet traffic associated with it is automatically switched to the other controller.

The iDataPlex dx340 monitors fans, power, temperature, and voltage, but it does not provide redundant power supplies or fans; where fault tolerance is required, it must be provided above the level of the individual node.

Many applications are available with built-in redundancy and failover features. These applications include large-scale Web serving, scale-out databases, risk analytics, streaming media (IPTV, online movies), and video rendering.



3.4 Scale-up versus scale-out

The two design methods to expand a server’s performance are scale-up and scale-out.

Scale-up means adding resources, usually processors or memory, to a single server to increase its performance in areas such as processing capacity or I/O bandwidth. Scaling up depends on hardware design. It enables simple system management, because there are fewer systems, and potentially lowers software licensing costs. The IBM System x3950 M2 is an industry-leading product that uses the scale-up design.

Scale-out means adding more servers to a cluster or a compute farm to expand the overall capacity. Each server in such a compute farm usually performs the same task and, provided the system is designed for scaling, capacity can be easily extended by adding more servers. Such a design depends on the whole architecture, both hardware and software. High performance compute clusters are a type of scale-out solution.

See Figure 3-2 for iDataPlex positioning in scale-up and scale-out.

Figure 3-2 Scale-up and scale-out position



3.5 Design objectives by platform

BladeCenter, System x, and iDataPlex offer the right solution to customers depending on the business need. This section describes their different design objectives.

3.5.1 BladeCenter

IBM BladeCenter is focused on:

- Complete infrastructure integration
- Total solution management
- Energy efficiency
- Hardware and software reliability, availability, and serviceability
- Virtualization
- Redundancy

IBM BladeCenter can provide an infrastructure simplification solution. BladeCenter delivers the ultimate in infrastructure integration. It demonstrates leadership on power utilization, high speed I/O, and server density. It was designed to provide maximum availability with industry standard components and to reduce the number of single points of failure.

IBM BladeCenter offers the industry’s best flexibility and choice in creating customized infrastructures and solutions. IBM BladeCenter Open Fabric Manager can virtualize the Ethernet and Fibre Channel I/O on BladeCenter. BladeCenter has a long life cycle and preserves system investment with compatible, proven, and field-tested platforms and chassis.

3.5.2 System x

System x is focused on:

- Scale-up solutions
- Strong hardware redundancy
- Virtualization

System x products provide scale-up solutions, especially the System x3950 M2 that uses the eX4 chip set. The eX4 technology is the fourth generation of products based on the design principle IBM began in 1997: to offer systems that are expandable and offer mainframe-style reliability, availability, and serviceability features.

The x3950 M2 can be configured into a multinode configuration supporting 16 quad-core processors and up to 1 TB of RAM. It has active memory protection, including advanced Chipkill ECC memory correction, Memory ProteXion, memory mirroring, memory hot-swap, and memory hot-add. It has hot-swap PCI adapters and hot-swap, redundant power supplies and cooling. It is designed around three major workloads:

- Database servers
- Server consolidation using virtualization
- Enterprise resource planning

3.5.3 iDataPlex

iDataPlex is focused on:

- Price/performance per watt
- Fast, large scale-out deployments
- Compute density
- Customization

iDataPlex is the price-leadership scale-out solution, and it provides ultimate power efficiency. The shallow design and the Rear Door Heat eXchanger deliver the best rack-level cooling efficiency.

iDataPlex provides leadership compute density. Using the innovative rack, iDataPlex can fit up to 84 1U quad-core servers, plus 16U of vertical slots for switches, power distribution units, and other appliances, in the same footprint as two standard racks.

iDataPlex provides a customized data center solution using Flex Node Technology, and it also provides large, low-cost localized storage for the demands of Web 2.0.

The iDataPlex provides a high-efficiency power supply and low power-consuming fans. It uses up to 40% less power than standard 1U servers.

With the Rear Door Heat eXchanger, you can have a high-density data center without increasing cooling demands. It can cool the room and reduce, or even eliminate, the need for air conditioning in the data center.

3.6 System Cluster 1350 and iDataPlex

Clusters can be configured and deployed to serve a broad range of functions, from server workload consolidation to high-performance parallel computing tasks.


Table 3-3 shows the types and typical characteristics of clusters.

Table 3-3 Characteristics of clusters

Tightly-coupled clusters
- Message passing: Extensive inter-node communication
- Interconnect: Low latency, high bandwidth, or both
- Workload: Parallelized application (a)
- Fault tolerance: Crash of a single node impacts operation of the entire cluster
- Management: SPOC (b) of hardware and software management, and of software and application distribution and maintenance

Loosely-coupled clusters
- Message passing: Little inter-node communication; shared storage common
- Interconnect: Latency is less of an issue; bandwidth for shared storage might be important
- Workload: Distributed users or tasks across nodes
- Fault tolerance: Crash of a single node might affect performance of the cluster, but operation of the cluster continues
- Management: SPOC of hardware and software management, and of software and application distribution and maintenance

Simple failover clusters
- Message passing: Two interconnected nodes; communication typically limited to a “heartbeat”
- Interconnect: Usually a cable or simple network
- Workload: Each node normally handles half the workload
- Fault tolerance: If one node crashes, the other takes over its operations, with significant performance impact
- Management: SPOC of hardware management; can support SPOC of software and application management and distribution

Single-purpose server farms
- Message passing: Minimal inter-node communication; shared storage common
- Interconnect: Latency less of an issue; bandwidth for shared storage can be important
- Workload: Nodes operate independently, but all nodes are dedicated to the same application or workload
- Fault tolerance: Crash of a node has little impact on the other components in the same farm
- Management: SPOC of hardware management only; can support SPOC of software and application management

Multi-purpose server farms
- Message passing: Minimal inter-node communication
- Interconnect: Not required
- Workload: Nodes operate independently, and nodes run two or more applications or workloads
- Fault tolerance: Crash of a node has little impact on the other servers in the same farm
- Management: SPOC of hardware management only; can support SPOC of software and application management

Standalone networked servers
- Message passing: No inter-node communication
- Interconnect: Not required
- Workload: Nodes operate independently
- Fault tolerance: Crash of a node has no impact on the other servers in the same farm
- Management: Might or might not have SPOC for hardware management

Multiple standalone servers
- Message passing: No connection between nodes
- Interconnect: Not required
- Workload: Nodes operate independently
- Fault tolerance: Crash of a node has no impact on the other servers
- Management: No SPOC

a. Operations performed on one node depend on the output of operations on one or more other nodes in the cluster.
b. SPOC: single point of control.



The System Cluster 1350™ is an integrated HPC cluster solution from IBM that includes servers, storage, and networking components. It supports Linux and Microsoft Windows Compute Cluster Server 2003. IBM delivers a factory-built and tested cluster that is ready to plug in.

The iDataPlex solution can be used in tightly-coupled clusters, loosely-coupled clusters, large numbers of simple failover clusters, single-purpose server farms, and multi-purpose server farms.

3.7 Positioning iDataPlex solution with BladeCenter

As outlined in the previous sections, each of the iDataPlex, BladeCenter, and System x technologies is suitable in different scenarios. The applicability of a specific technology depends on various factors related to the technical and practical customer requirements. In this section, we highlight several technical differences among these technologies, which can potentially influence the selection of iDataPlex over the other technologies.

One advantage of BladeCenter over iDataPlex is the shared chassis architecture. The BladeCenter chassis consists of shared components such as power supplies, switch modules, and management modules, all of which provide a more tightly integrated solution to customers.

On the other hand, such shared architecture might not be as flexible as iDataPlex or other System x rack mount server solutions, where the modular components such as switches, power supplies, and others allow more freedom in configuration, and in certain cases result in better overall architecture and integration into the customer environment.

Several other technical differences between iDataPlex and BladeCenter include:

- Better node density

You can achieve better node density with iDataPlex over blades within the same rack footprint. Using the BladeCenter H chassis, you can install up to four chassis in a standard enterprise rack, with up to 56 blade servers total. However, with a standard iDataPlex rack within the same rack footprint, you can install up to 84 iDataPlex server nodes.

Note: System Cluster 1350 delivers a choice of nodes, storage, I/O, and cluster management systems. The nodes include IBM System x servers, BladeCenter servers, System p® servers, and now iDataPlex. Storage includes the IBM DS3000 and DS4000® series. The I/O includes Ethernet, InfiniBand®, Myrinet, and Fibre Channel.


- Better control over switch oversubscription

From a networking perspective, with BladeCenter integrated switch modules there is usually an associated oversubscription, depending on the type of switch selected. For example, with a Gigabit Ethernet switch module there is a 14:6 oversubscription ratio (14 blades connected to 6 external ports), whereas with an InfiniBand switch module there is a 14:8 oversubscription, which results in a performance disadvantage compared to standalone switches that offer more flexibility in the amount of oversubscription (a short arithmetic sketch follows this list).

With a standard rack-mount InfiniBand switch, you may select the ratio for oversubscription by selecting the total number of uplinks to the core central switch to match the number of downlink connections going to the hosts.

- Inter-chassis communications

With BladeCenter, beyond 14 blades, you are going through an external switch to talk to blades in the other BladeCenter chassis, which creates an extra hop and consequently adds more latency. However, with iDataPlex and a standard rack-mount Ethernet switch you can connect up to 48 hosts directly with a single switch, minimizing latency.

- DDR InfiniBand support

Currently, no DDR InfiniBand switch module is supported in BladeCenter. You have to use the InfiniBand pass-thru module, which introduces additional cables and requires additional ports on core InfiniBand switches, so the total solution can be more expensive. iDataPlex, however, supports standard DDR InfiniBand switches.

- Local storage options

With BladeCenter, you can have only up to two local hard drives on individual blades. However, iDataPlex gives you the ability to put up to five local drives on each node with a 2U chassis, and up to twelve drives in a 3U chassis.
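The oversubscription arithmetic mentioned in the list above is simple enough to sketch. The BladeCenter ratios (14:6 and 14:8) come from the text; the 48-port standalone switch splits are generic examples rather than a specific product recommendation:

def oversubscription(downlinks: int, uplinks: int) -> float:
    """Ratio of host-facing ports to uplink ports (assuming equal link speeds)."""
    return downlinks / uplinks

print(oversubscription(14, 6))    # BladeCenter GbE switch module: ~2.33:1
print(oversubscription(14, 8))    # BladeCenter InfiniBand module: 1.75:1
# With a standalone 48-port switch, you choose the split yourself:
print(oversubscription(36, 12))   # 36 hosts, 12 uplinks -> 3:1
print(oversubscription(24, 24))   # 24 hosts, 24 uplinks -> non-blocking 1:1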


Chapter 4. Solution overview

This chapter provides information about the components of the iDataPlex solution.

Topics in this chapter are:

- 4.1, “The big picture of iDataPlex” on page 34
- 4.2, “iDataPlex rack components” on page 35
- 4.3, “iDataPlex Flex chassis components” on page 44
- 4.4, “iDataPlex servers and components” on page 50
- 4.5, “iDataPlex options” on page 67
- 4.6, “iDataPlex chassis configurations” on page 81
- 4.7, “iDataPlex Management components” on page 89


4.1 The big picture of iDataPlex

The iDataPlex solution has the following main components:

- A special iDataPlex rack
- Up to 42 2U chassis that are mounted in the rack
- Servers, storage, and I/O components that are installed in the chassis
- Switches and power distribution units that are also mounted in the rack

Figure 4-1 gives you an idea of how these components are integrated.

Figure 4-1 iDataPlex and its components

On the left side are a 2U chassis (top) and a 3U chassis (bottom). These can be equipped with several different server, storage, and I/O components. The populated chassis are mounted in an iDataPlex rack (middle). This rack was specifically designed to meet high-density data center requirements. It allows mandatory infrastructure components like switches and power distribution units (PDUs), as shown on the right, to be installed into the rack without sacrificing valuable server space. In addition, the iDataPlex solution provides management at the rack level and a water-based cooling option with the Rear Door Heat eXchanger.



This chapter uses a top-down approach to describe the iDataPlex features. It starts with the rack components, as follows:

- 4.2, “iDataPlex rack components” on page 35 describes rack components.

- 4.3, “iDataPlex Flex chassis components” on page 44 describes available chassis and chassis power supplies.

- 4.4, “iDataPlex servers and components” on page 50 explains what servers and trays you can put into the chassis.

- 4.5, “iDataPlex options” on page 67 shows all available options for the different iDataPlex servers.

- 4.6, “iDataPlex chassis configurations” on page 81 describes all possible configurations of the 2U and 3U chassis.

4.2 iDataPlex rack components

The major building block for a new data center design based on iDataPlex is the iDataPlex rack. In contrast to standard 19-inch racks, the iDataPlex rack is designed for higher density without sacrificing your existing data center layout. As with any other rack, the iDataPlex rack continues to provide cabling, power, and cooling capabilities, but it provides these capabilities in a more efficient way.

4.2.1 The iDataPlex rack

The footprint of an IBM iDataPlex rack is similar to two standard 19-inch racks. The dimensions of a rack without doors, covers, and casters are approximately 1200 mm wide, 600 mm deep, and 2093 mm high (47 in. wide, 24 in. deep, 83 in. high). The front and back casters each add approximately 120 mm (5 in.) to the depth. Figure 4-2 on page 36 shows the top view of an iDataPlex rack in comparison to two standard 19-inch racks.


Figure 4-2 Comparison of iDataPlex rack and two standard 19-inch racks

If the rack keeps its rear casters, the Rear Door Heat eXchanger adds no additional depth, because the heat exchanger is located above the casters. Only in a configuration where the rear casters are removed does the Rear Door Heat eXchanger add 120 mm (4.7 in.) to the depth of the rack.

As shown in Figure 4-3 on page 37, an iDataPlex rack provides 84U (2 columns of 42U) of horizontal mounting space for compute nodes and 16U (2 sets of 8U) of vertical mounting space (sometimes called pockets) for power distribution units and networking equipment. This makes a total of 100U usable space on two floor tiles. The iDataPlex rack has an additional 1U of space at the top of columns A and C to mount a management device, such as the Avocent MergePoint management appliance. This extra 1U space can also be used to pass through cables from the back to the front.

Note: This extra space can be relocated during the factory configuration from the top to the bottom of columns A and C.



Figure 4-3 iDataPlex rack numbering scheme

The 8U bays (columns B and D in Figure 4-3) are aligned vertically in the middle and at the right side of the rack as seen from the front. Using them as eight 1U pockets or four 2U pockets is physically possible. Depending on the exact rack configuration, ducting fillers or pocket fillers are used to ensure proper air flow.

IBM iDataPlex follows the EIA 310-D 19-inch standard for rack devices, allowing you to mount additional, existing devices horizontally or vertically, as long as they fit into the reduced depth of the iDataPlex rack. You may also have to sacrifice some of the 84 horizontal units if you plan to use multiple interconnect technologies such as Ethernet or InfiniBand in the same iDataPlex rack. Two examples are provided in 4.7, “iDataPlex Management components” on page 89. One of the examples explains why you have to sacrifice some units when you require copper-based InfiniBand.



4.2.2 iDataPlex cable brackets

An iDataPlex rack can be cabled through the bottom in a raised floor environment or from the overhead cable trays. Detailed cabling information about how you connect an iDataPlex rack to your existing cooling, power, and data infrastructure is available in 6.4, “Cabling the rack” on page 127.

The iDataPlex solution offers three types of cable management brackets. Depending on user requirements, they allow for the best alignment of power, Ethernet, or InfiniBand cables and ensure minimal interference of the cables with the airflow. Figure 4-4 on page 39 shows four types of brackets. The first three are IBM brackets. The fourth is from PANDUIT®.

FAQs:

- Why do I want a custom rack form factor in my data center?

  The iDataPlex form factor provides a more efficient cooling solution, better power delivery, and higher density at the rack level.

- Do I have to change my data center layout?

  No. You can slide one iDataPlex rack in place of two existing racks with no change to your data center and still get all the benefits of iDataPlex.

- Will the iDataPlex rack fit through my doorway?

  iDataPlex requires a minimum door height of 2108 mm (83 in.), so it fits through a typical doorway of 2133 mm (7 ft).

- What about the weight?

  An iDataPlex rack always weighs less than an equivalent number of 1U servers. However, because of the dense packaging, a fully populated 84-node iDataPlex rack can have a greater point loading than a single standard rack with 42 nodes.


Figure 4-4 Cable bracket options for iDataPlex

4.2.3 iDataPlex cooling options

The two iDataPlex rack cooling options are air and water. Air cooling relies on the traditional data center cooling system, external to the racks. The airflow is from the front to the back of an iDataPlex rack. Because the nodes are shallower than traditional servers, less air pressure is required to push air through them, which in turn helps reduce both the overall cooling effort and the energy required.

The optional IBM Rear Door Heat eXchanger allows an iDataPlex rack to be water-cooled. The high efficiency of the heat exchanger allows an iDataPlex rack to operate with no extra cooling. The fans in the chassis move the air through the Rear Door Heat eXchanger, so it does not require any additional fans. For more information about the Rear Door Heat eXchanger, see 4.2.4, “Rear Door Heat eXchanger” on page 41.

Note: Because iDataPlex racks can be configured in numerous ways, not all configuration options can be covered in this document. IBM provides detailed rack configurations on a per customer basis to ensure that the best overall solution is chosen.

The Rear Door Heat eXchanger includes two air baffles that prevent air recirculation. They are installed at the top and at the bottom of the rack and prevent hot air from bypassing the heat exchanger through gaps introduced by cables to external systems (the air baffles are sometimes referred to as mud flaps). See Figure 4-5.

Figure 4-5 Air baffle “mud flap” installed to prevent hot air escaping

Switches and other ancillary components are typically installed in the vertical bays to maximize the number of chassis and servers that are installed in the horizontal bays in the rack. These devices sometimes have cooling paths that go from one side of the unit to the other, not front to back.

Even though these devices do not use front-to-back cooling, when they are installed in the vertical bays, no additional cooling is required. If they are installed in horizontal bays, however, a special air baffle might be required to route the air from the front of the rack to the side of the device, through the device, then from the other side to the rear of the rack. Figure 4-6 on page 41 shows the airflow when using those air baffles.


Figure 4-6 Airflow using special air baffle for side-to-side cooled devices

4.2.4 Rear Door Heat eXchanger

IBM offers an alternative to existing cooling systems. The IBM Rear Door Heat eXchanger (part number 43V6048), shown in Figure 4-7 on page 42, replaces the existing rear door of an iDataPlex rack and provides an extremely efficient water-based cooling solution. The exchanger dissipates heat generated by all servers and provides the data center with cooling at the most efficient place, which is immediately behind the exhaust of the system fans. Without water, the door weighs about 80 - 90 kg (180 - 200 lb).



Figure 4-7 IBM iDataPlex rack with the Rear Door Heat eXchanger

The IBM Rear Door Heat eXchanger requires a cooling distribution unit (CDU) in your data center. It connects to your water system with two quick-connects on the bottom of the door.

Details about plumbing for a Rear Door Heat eXchanger are in 6.8, “Water cooling and plumbing” on page 152. The door swings open so that you can still access the PDUs at the rear of the rack without unmounting the heat exchanger. Service clearance is the same as for a standard rear door installation. The heat exchanger does not require electricity.

Figure 4-8 on page 43 illustrates the efficiency of the heat exchanger in a setup with a 32 kW heat load and an inlet air temperature of 24°C (75°F). A 100% heat removal means that all heat generated by iDataPlex components is removed and dissipated into the chilled water (the air leaving the heat exchanger is the same temperature as the air entering the rack at the front).

Figure 4-8 Typical performance of the heat exchanger

To help maintain optimum performance of the heat exchanger:

- Install filler panels over all unoccupied bays.

- Where possible, external cabling should exit the rack at the front.

- If cables are required to exit at the rear of the rack cabinet, make sure they enter or exit through the bottom and top air baffles (see Figure 4-5 on page 40). Bundle the cables together to minimize air gaps, taking care to avoid cable groupings that can lead to interference such as crosstalk. Make sure the air-baffle sliders are closed as far as possible. See 4.2.3, “iDataPlex cooling options” on page 39.

(Chart: percent heat removal, from 75% to 120%, as a function of water flow rate, 8 to 14 gpm, for water temperatures of 12°C, 14°C, 16°C, 18°C, and 20°C, at 24°C rack inlet air, 32 kW load, and minimum fan speed.)

Note: Water temperatures below 18°C (64°F) may only be used if the system supplying the water has the ability to measure the room dew point and automatically adjust the water temperature accordingly to prevent condensation on the servers.
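The dew-point check that such a supply system applies can be sketched as follows. This is a minimal illustration using the Magnus approximation for dew point; the 2°C safety margin is our own assumed buffer, not an IBM specification.

import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (Magnus formula, valid roughly 0-60 C)."""
    a, b = 17.27, 237.7
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

def min_safe_water_temp_c(room_c: float, rh_pct: float,
                          margin_c: float = 2.0) -> float:
    """Lowest water temperature that stays safely above the room dew point."""
    return dew_point_c(room_c, rh_pct) + margin_c

# Example: a 24 C room at 50% relative humidity.
print(f"Dew point: {dew_point_c(24, 50):.1f} C")              # about 12.9 C
print(f"Minimum water temperature: {min_safe_water_temp_c(24, 50):.1f} C")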


For more information about installing the Rear Door Heat eXchanger, see the Rear Door Heat eXchanger Installation and Maintenance Guide:

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5075220

The Rear Door Heat eXchanger is the only part of an iDataPlex solution that does not arrive pre-installed. It is shipped separately from the iDataPlex rack and must be installed by an IBM authorized supplier. It is the customer’s responsibility to ensure correct plumbing connections.

4.3 iDataPlex Flex chassis components

This section describes the chassis components: the 2U and 3U chassis, the Direct Dock Power feature, the chassis power supplies, and the chassis fan pack.

Two chassis sizes provide a variety of configuration options, depending on the customer requirements. A detailed overview of all supported configurations is available in 4.6, “iDataPlex chassis configurations” on page 81.

The 2U Flex Node chassis can be configured as:

- Two 1U compute nodes, each with a PCI Express (PCIe) slot and a drive bay, either SAS or SATA

- One compute node with several additional storage options, either SAS or SATA

- One compute node with an additional PCIe slot (for a total of two) and up to eight disk drives

The 3U chassis offers a compute node plus 12 drive bays for high-capacity disk drives, either SAS or SATA, currently supporting up to 12 TB of internal storage.

FAQ:

- What happens if I have to open the Rear Door Heat eXchanger for maintenance?

  Server cooling is not affected, because the heat exchanger cools the data center, not the servers. Because iDataPlex is all front-access (even for connecting to power), there is usually no reason to open the rear of the rack for maintenance. The heat exchanger has no fans or moving parts, so the failure rate is extremely small and it does not require regular maintenance.


These chassis and node configurations can be placed in the iDataPlex rack in any order, in two columns of 42U each, as shown in Figure 4-3 on page 37.

Topics in this section are:

- 4.3.1, “2U and 3U chassis” on page 45
- 4.3.2, “Direct Dock Power” on page 46
- 4.3.3, “Chassis power supply, power cable, and fans” on page 47

4.3.1 2U and 3U chassis

The 2U and the 3U chassis are mechanical enclosures that provide shared power and cooling to the devices mounted in them. The built-in power supply provides the chassis with sufficient power to drive the installed components. Depending on the configuration, the two available power supply options are 900 W and 375 W.

If either the power supply or the fan assembly requires servicing, the chassis must first be removed from the rack and the top cover removed from the chassis, as shown in Figure 4-9.

Figure 4-9 Chassis with shared fan assembly (left) and power supply (right), with rear cover removed

2U Flex chassis

The 2U Flex chassis is designed to hold either two servers, or one server with multiple drives, a second PCIe slot, or both. For compute-intensive applications, the chassis can be populated with two servers and up to two disks for each server. For storage- and I/O-bound applications, the chassis allows several combinations of SATA II or SAS drives and PCIe slots. For a more detailed list of possible configurations, see 4.6, “iDataPlex chassis configurations” on page 81. Figure 4-10 shows an empty 2U chassis.

Figure 4-10 An empty 2U Flex chassis

3U chassis

The 3U chassis is specifically designed to meet high storage requirements. It uses the same 900 W power supply as the 2U Flex chassis, but a different chassis fan assembly. The 3U chassis allows for up to 12 high-capacity hot-swap drives, either 3.5-inch SATA II or 3.5-inch SAS, with support for a variety of RAID levels, including RAID-5.

4.3.2 Direct Dock Power

The iDataPlex uses Direct Dock Power to power the nodes in the chassis. Industry standard power cords power each node, but the cords are attached to the rack in a fixed location. When you slide in the chassis, the power receptacle of the chassis simply connects to the power cord, which means you do not have to access the rear of the rack to attach the power cord. Figure 4-11 shows the connectors associated with powering the chassis.

Figure 4-11 Direct Dock Power allows the chassis to be inserted without having to connect power cables


With the innovation of Direct Dock Power, you can simply push the chassis into the rack, and the power supply will connect to the PDU. You do not have to access the rear of the rack when installing or working with servers.

4.3.3 Chassis power supply, power cable, and fans

Each iDataPlex chassis provides a shared high-efficiency power supply and low power-consuming fans, similar in concept to a BladeCenter chassis. In the iDataPlex 2U and 3U chassis, the compute node, I/O tray, or storage tray share one power supply and one fan unit.

Figure 4-12 shows the power supply and fan unit for the 2U Flex Node chassis.

Figure 4-12 The chassis provides efficient power and cooling to the one or two nodes installed (left: fan unit comprising four 80 mm fans, shown partially removed; right: chassis power supply)

The power supply used in each iDataPlex chassis can be more than 90% efficient, depending on the load, and consumes 40% less power when compared to a traditional 1U server.

The two power supply options for the 2U Flex chassis are listed in Table 4-1 on page 48.



Table 4-1 Power supply options

Power supply       | Feature code
900 W power supply | 1996
375 W power supply | 1997

Figure 4-13 shows the 900 W power supply. The 375 W power supply looks identical, except that it has only one outlet towards the planars (outlets not visible in the figure because they are at the top end).

Figure 4-13 900 W power supply

Using the 375 W power supply, when the configuration allows it, means that the power supply is more heavily loaded and therefore more efficient, ultimately saving energy.

The configurations that support the 375 W supply are listed in Table 4-32 on page 82. If you prefer, you may elect to use the 900 W supply in these configurations.

The 375 W power supply can be used in the 2U Flex chassis, if only one server is installed in the chassis. In addition, each CPU installed in the server must use 80 W or less (see Table 4-10 on page 57 for current processor options). If a supported video card (graphics processing unit, GPU) is used, it must also use 80 W or less. If either the CPU or GPU consume more than 80 W, then a 900 W power supply must be used.
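The selection rule just described can be summarized in a few lines. This is a sketch of the decision logic only; the 80 W thresholds and feature codes come from the text, while the function itself is our own illustration:

def chassis_power_supply_watts(servers_in_chassis: int,
                               max_cpu_watts: int,
                               gpu_watts: int = 0) -> int:
    """Return the smallest supported power supply for a 2U Flex chassis."""
    if servers_in_chassis == 1 and max_cpu_watts <= 80 and gpu_watts <= 80:
        return 375   # feature code 1997
    return 900       # feature code 1996 (always a valid choice)

print(chassis_power_supply_watts(1, 50))    # 375: one low-power server
print(chassis_power_supply_watts(1, 120))   # 900: CPU above 80 W
print(chassis_power_supply_watts(2, 50))    # 900: two servers in the chassis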


Note: At the time of writing, no video cards were supported in the PCIe slots of the server.


The two power cables available to connect the power supplies to the rack PDU are listed in Table 4-2. The first cable is a Y-cable, meaning it connects one PDU outlet to two chassis power supplies. The second cable is a straight-through cable, connecting one PDU outlet to one chassis power supply.

Table 4-2 Power cable options

Power cable                                                Feature code
1.8 m, 10 A/100-250 V, 2x C13PM to IEC 320-C14 (Y-cable)   6568
1.8 m, 10 A/100-250 V, C13PM to IEC 320-C14                6569

Figure 4-14 shows the connectivity for feature code 6568.

Figure 4-14 Y-cable for power connectivity, feature code 6568

Because the fans are shared between two nodes in a Flex Node chassis, they can be twice the diameter of those in traditional 1U servers (80 mm in iDataPlex compared to 40 mm in 1U servers). This, combined with the lower pressure required because of the shorter distance the air has to travel across the server, makes this one of the most efficient air-cooling solutions on the market. As a result, the fans consume about 70% less power than those in a traditional 1U server.




4.4 iDataPlex servers and components

The iDataPlex Flex chassis can be configured in different ways, depending on customer needs. A number of different servers can go into the Flex chassis. This section details the available server types, and also covers the storage tray and the I/O tray, which are used for storage-rich or I/O-rich chassis configurations.

This section begins by comparing the available servers, in terms of application area and major features they support, then describes the operator panel and storage and I/O tray.

Topics in this section are:

- 4.4.1, “Comparing the iDataPlex servers” on page 50
- 4.4.2, “iDataPlex dx320 server” on page 52
- 4.4.3, “iDataPlex dx340 server” on page 55
- 4.4.4, “iDataPlex dx360 server” on page 60
- 4.4.5, “Operator panel” on page 64
- 4.4.6, “Storage and I/O tray” on page 65

4.4.1 Comparing the iDataPlex servers

IBM offers three iDataPlex servers: the dx320, the dx340, and the dx360. They have been developed for different types of workloads and power demands. Table 4-3 on page 51 gives an overview of the major features of each server; details of the individual servers are discussed in the next several sections.

FAQ:

- What happens when a single fan fails?

Depending on the workload and environment, a single fan failure can have little impact, and the server continues running. The fan speed control algorithm takes fan failures into account and increases fan speed only when component temperatures hit a certain threshold. Under a high-demand workload, the system might throttle but continue operation. In certain cases, if cooling is no longer sufficient, the system performs a soft shutdown to prevent hardware damage.

Note: The iDataPlex product publications refer to the 1U server component as a system-board tray.



Table 4-3 Feature comparison of iDataPlex servers

Feature                    dx320 server               dx340 server               dx360 server
Optimized for              Highest efficiency at      Balanced power/            Highest performance
                           lowest power consumption   performance ratio
Processor sockets          Two                        Two                        Two
Processor                  Intel Xeon®                Intel Xeon                 Intel Xeon
Cores per processor        4 (QC)                     2 or 4 (DC or QC)          2 or 4 (DC or QC)
Front-side bus             1333 MHz                   1333 MHz                   1333 or 1600 MHz
DIMM sockets               6 DIMM sockets             8 DIMM sockets             16 DIMM sockets
Maximum memory             24 GB                      32 GB                      64 GB
Memory speed               533 or 667 MHz             533 or 667 MHz             667 or 800 MHz
PCIe                       x8                         x8                         x16 (Gen 2)
Supported in 2U chassis    Yes                        Yes                        Yes
Supported in 3U chassis    No                         Yes                        No
SATA II support            Yes                        Yes                        Yes
SAS support                No                         Yes (a)                    Yes (a)
Maximum internal storage   500 GB (with two           12 TB (using 3U            5 TB (using 2U chassis
per server                 servers per 2U chassis)    chassis)                   with storage tray)
3.5-inch drive support     Yes                        Yes                        Yes
2.5-inch drive support     No                         Yes                        Yes

a. Requires an optional SAS controller card.

Table 4-4 on page 52 provides an easy lookup of which server supports a given processor model.


Table 4-4 Intel Xeon CPU comparison of iDataPlex servers

Processor             FSB        Speed      dx320   dx340   dx360
Intel Xeon QC E5405   1333 MHz   2.00 GHz   No      Yes     No
Intel Xeon QC E5410   1333 MHz   2.33 GHz   No      Yes     No
Intel Xeon QC L5410   1333 MHz   2.33 GHz   No      Yes     Yes
Intel Xeon QC L5420   1333 MHz   2.50 GHz   Yes     Yes     Yes
Intel Xeon QC E5420   1333 MHz   2.50 GHz   No      No      Yes
Intel Xeon QC E5430   1333 MHz   2.66 GHz   No      Yes     Yes
Intel Xeon QC E5440   1333 MHz   2.83 GHz   No      Yes     Yes
Intel Xeon QC E5450   1333 MHz   3.00 GHz   No      Yes     No
Intel Xeon DC L5240   1333 MHz   3.00 GHz   No      Yes     Yes
Intel Xeon DC X5260   1333 MHz   3.33 GHz   No      Yes     No
Intel Xeon QC E5462   1600 MHz   2.80 GHz   No      No      Yes
Intel Xeon QC E5472   1600 MHz   3.00 GHz   No      No      Yes
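For scripting configuration checks, the support matrix can be transcribed directly. The following Python dictionary simply restates Table 4-4; it is a convenience sketch of ours, not an IBM tool:

# Which servers support each Intel Xeon model (transcribed from Table 4-4).
CPU_SUPPORT = {
    "QC E5405": {"dx340"},
    "QC E5410": {"dx340"},
    "QC L5410": {"dx340", "dx360"},
    "QC L5420": {"dx320", "dx340", "dx360"},
    "QC E5420": {"dx360"},
    "QC E5430": {"dx340", "dx360"},
    "QC E5440": {"dx340", "dx360"},
    "QC E5450": {"dx340"},
    "DC L5240": {"dx340", "dx360"},
    "DC X5260": {"dx340"},
    "QC E5462": {"dx360"},
    "QC E5472": {"dx360"},
}

def servers_for(model):
    """Return the iDataPlex servers that support the given Xeon model."""
    return sorted(CPU_SUPPORT[model])

print(servers_for("QC L5420"))  # ['dx320', 'dx340', 'dx360']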

Additional performance data for each system is available at the IBM System x benchmarks page:

http://www.ibm.com/systems/x/resources/benchmarks/index.html

4.4.2 iDataPlex dx320 server

The iDataPlex dx320 server, available in machine types 7326 (one-year warranty) and 6388 (three-year warranty), is a 1U tray with a system board that fits into the 2U Flex chassis. The dx320 server features a highly efficient, low-power design that substantially reduces power consumption compared to any 1U rack server with similar compute capability. It is based on the Intel San Clemente platform.

The iDataPlex dx320 has one RS232 serial port, a VGA port, one RJ45 connector for dedicated systems management (wired to the BMC), two 1 Gbps Ethernet interfaces (based on the Broadcom BCM5722), and two USB 2.0 ports. The VGA port is wired to an onboard Aspeed Technology AST1100 video controller. Figure 4-15 on page 53 shows the front panel of the dx320.



Figure 4-15 iDataPlex dx320 front panel

Product publications are available at the IBM support site:

- IBM System x iDataPlex dx320 User's Guide

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5078019

- IBM System x iDataPlex dx320 Problem Determination and Service Guide

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5078020

Processors

The iDataPlex dx320 server is based on the Server System Infrastructure (SSI) industry standard and has two sockets for Intel Xeon quad-core processors. Table 4-5 lists the available processors.

Table 4-5 Supported Intel Xeon CPU options on the dx320 server

Processor model    Number of cores   Core speed   FSB speed   L2 cache   Power consumption   Feature code
Intel Xeon L5420   Quad core         2.50 GHz     1333 MHz    12 MB      50 W                4429

Memory

The dx320 server has six DIMM slots for registered ECC DDR2 modules and supports a maximum of 24 GB (using 4 GB DIMMs). It has two memory channels. DIMMs must be installed in matching pairs. At the time of writing, the dx320 supports only one type of DIMM, shown in Table 4-6 on page 54.




Table 4-6 Supported memory options for dx320 server

Description                                     Size   Feature code
PC2-5300 ECC DDR2 RDIMM 667 MHz (low power) (a) 4 GB   3978

a. Default operating speed is 533 MHz for power-saving optimization, but it can be set to 667 MHz by using the BIOS Setup utility.

Configuration rules are as follows:

- DIMMs must be installed in matched pairs. DIMMs in a pair must be identical.

- DIMMs can be populated in interleaved or non-interleaved mode. Table 4-7 shows the sequence for interleaved mode; Table 4-8 shows the sequence for non-interleaved mode.

Table 4-7 Installation order of DIMMs on the dx320 for interleaved mode

Pairs (ranks)                    DIMM connectors (as indicated on the planar)
First pair (group 1, group 2)    DIMM 1 and DIMM 2
Second pair (group 1, group 2)   DIMM 3 and DIMM 4
Third pair (group 1, group 2)    DIMM 5 and DIMM 6

Table 4-8 Installation order of DIMMs on the dx320 for non-interleaved mode

Pairs (ranks)           DIMM connectors (as indicated on the planar)
First pair (group 1)    DIMM 1
Second pair (group 1)   DIMM 3
Third pair (group 1)    DIMM 5

- All DIMMs must have the same speed. However, different pairs or groups of DIMMs do not have to be of the same size.

The onboard memory chip set supports memory sparing, but not memory mirroring.

Memory sparing is an optional configuration in which a hot-spare module can replace a defective DIMM module. When the error rate for one module reaches a certain threshold, the system automatically issues a copy operation that activates the standby module. After copying, the defective module is disabled and the spare module takes over. Memory sparing is activated in BIOS.



The memory set aside for the spare memory is one rank per channel. The size of the rank (and therefore the amount set aside for sparing) varies depending on the DIMMs used.
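As a rough sketch of this arithmetic (ours, not an IBM utility; the actual reserved amount depends on the installed DIMMs, and the rank count per DIMM below is an assumption):

def usable_after_sparing(total_gb, dimm_size_gb, ranks_per_dimm, channels=2):
    """Memory left after sparing reserves one rank per channel.
    Illustrative only; the dx320 has two memory channels."""
    rank_size_gb = dimm_size_gb / ranks_per_dimm
    return total_gb - channels * rank_size_gb

# Six 4 GB DIMMs on the dx320, assuming dual-rank modules:
print(usable_after_sparing(24, dimm_size_gb=4, ranks_per_dimm=2))  # 20.0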

Examples of memory working in sparing mode are available in the IBM Redbooks Technote BladeCenter HS21 XM Memory Configurations, TIPS0671:

http://www.redbooks.ibm.com/abstracts/tips0671.html

Storage

Table 4-9 lists the two drives that are supported with the dx320 server.

Table 4-9 dx320 disk drives supported

Interface type   Carrier       Disk size   Form factor   RPM    Feature code
SATA II          Simple swap   250 GB      3.5-inch      7200   5292
SATA II          Simple swap   500 GB      3.5-inch      7200   5288

For more information about internal and external storage options, see 4.5.1, “Storage and RAID options” on page 67.

Management

Each dx320 server has an onboard BMC, which is connected through a dedicated 100 Mbps systems management Ethernet port at the front. The BMC supports IPMI Version 2.0. The feature set includes:

- Remote access to system fan, voltage, and temperature values

- Remote power on, off, and reset

- Remote console by way of Serial over LAN

- Remote access to the system event log
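Because the BMC implements standard IPMI 2.0, any IPMI client can drive these functions remotely. The following Python sketch wraps the common ipmitool utility; the BMC address and credentials are placeholders, not defaults:

import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "192.0.2.10",  # example BMC address
       "-U", "admin", "-P", "secret"]                    # placeholder credentials

def ipmi(*args):
    """Run one ipmitool command against the BMC and return its output."""
    return subprocess.run(BMC + list(args), capture_output=True, text=True).stdout

print(ipmi("power", "status"))             # remote power state; on/off/reset also work
print(ipmi("sdr", "type", "Temperature"))  # fan, voltage, and temperature sensor data
print(ipmi("sel", "list"))                 # system event log
# For the remote console, activate Serial over LAN: ipmi("sol", "activate")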

For more information about system management of iDataPlex servers, see Chapter 7, “Managing iDataPlex” on page 171.

4.4.3 iDataPlex dx340 server

The iDataPlex dx340 server, available in machine types 7832 (one-year warranty) and 6389 (three-year warranty), is a 1U tray with a system board that fits into both the 2U Flex chassis and the 3U chassis. It is based on the Intel Bensley platform. Figure 4-16 on page 56 shows the dx340. It contains no power supply or fans because these are provided by the chassis.



Figure 4-16 The iDataPlex dx340 server

In Figure 4-16, the disk drive bay is on the left, memory and CPUs are in the middle section, and space for x8 PCIe cards is on the right, connected by a riser card. Depending on the configuration selected, either a one-slot or a two-slot riser card is installed.

Figure 4-17 shows the front of the dx340 server, which has (from left to right) two USB 2.0 ports, an RS232 serial port, a VGA port, and two 1 Gbps Ethernet ports.

Figure 4-17 Front view of dx340 server

Product publications are available at the IBM support site:

- IBM System x iDataPlex dx340 User's Guide

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5075221

- IBM System x iDataPlex dx340 Problem Determination and Service Guide

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5078854


Processors

The iDataPlex dx340 server is based on the Server System Infrastructure (SSI) industry standard and has two sockets for Intel Xeon dual- and quad-core processors. Table 4-10 lists the available processors. The processors installed in the two sockets must be identical.

Table 4-10 Supported Intel Xeon CPU options on the dx340 server

Processor model    Number of cores   Core speed   Front-side bus   L2 cache   Power consumption   Feature code
Intel Xeon E5405   Quad              2.00 GHz     1333 MHz         12 MB      80 W                4404
Intel Xeon E5410   Quad              2.33 GHz     1333 MHz         12 MB      80 W                4405
Intel Xeon L5410   Quad              2.33 GHz     1333 MHz         12 MB      50 W                4406
Intel Xeon E5440   Quad              2.83 GHz     1333 MHz         12 MB      80 W                4407
Intel Xeon L5420   Quad              2.50 GHz     1333 MHz         12 MB      50 W                4429
Intel Xeon E5430   Quad              2.66 GHz     1333 MHz         12 MB      80 W                7764
Intel Xeon E5450   Quad              3.00 GHz     1333 MHz         12 MB      80 W                7761
Intel Xeon L5240   Dual              3.00 GHz     1333 MHz         6 MB       40 W                7753
Intel Xeon X5260   Dual              3.33 GHz     1333 MHz         6 MB       80 W                7763

Memory

The dx340 server has eight DIMM slots for fully buffered registered ECC DDR2 modules and supports a maximum of 32 GB (using 4 GB DIMMs). It has four memory channels. DIMMs must be installed in matching pairs. The dx340 supports the DIMMs listed in Table 4-11.

Table 4-11 Supported memory options for dx340 server

Description                                    Size   Feature code
PC2-5300 ECC DDR2 FBDIMM 667 MHz               1 GB   0542
PC2-5300 ECC DDR2 FBDIMM 667 MHz               2 GB   0544
PC2-5300 ECC DDR2 FBDIMM 667 MHz               4 GB   0556
PC2-5300 ECC DDR2 FBDIMM 667 MHz (low power)   1 GB   3958
PC2-5300 ECC DDR2 FBDIMM 667 MHz (low power)   2 GB   3959
PC2-5300 ECC DDR2 FBDIMM 667 MHz (low power)   4 GB   3960



Configuration rules are as follows:

- DIMMs must be installed in matched pairs. DIMMs in a pair must be identical.

- Standard-power and low-power DIMMs cannot be mixed.

- DIMMs must be populated in the order listed in Table 4-12.

Table 4-12 Installation order of DIMMs on the dx340

Pairs (ranks)           DIMM connectors (as indicated on the planar)
First pair (group 1)    DIMM 00 and DIMM 10
Second pair (group 2)   DIMM 20 and DIMM 30
Third pair (group 1)    DIMM 01 and DIMM 11
Fourth pair (group 2)   DIMM 21 and DIMM 31

- All DIMMs must have the same speed. However, different pairs or groups of DIMMs do not have to be of the same size.

The onboard memory chip set supports memory mirroring and memory sparing.

- Memory mirroring

Memory mirroring allows a system to have a runtime replicate copy of the memory contents available. It is similar to RAID 1 in disk systems, meaning that only 50% of the available memory is usable. Because memory mirroring is implemented in hardware, it is operating system-independent.

Memory mirroring is activated in the BIOS. The first and second DIMM ranks must be populated as shown in Table 4-12. The modules in the second pair then mirror the corresponding modules in the first pair. For example, DIMM 00 and DIMM 20 are a mirror, and DIMM 10 and DIMM 30 are a mirror. The two respective DIMM members in a mirror must be identical in size, speed, and chip organization.

- Memory sparing

Memory sparing is an optional configuration in which a hot-spare module can replace a defective DIMM module. When the error rate for one module reaches a certain threshold, the system automatically issues a copy operation that activates the standby module. After copying, the defective module is disabled and the spare module takes over. Memory sparing is activated in BIOS. It cannot be enabled in combination with memory mirroring.

The memory set aside for the spare memory is one rank per channel. The size of the rank (and therefore the amount set aside for sparing) varies depending on the DIMMs used.
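As a rough illustration of the arithmetic behind these two modes (our sketch; the exact sparing amount depends on the rank sizes of the installed DIMMs):

def usable_memory_gb(installed_gb, mode, rank_gb=0, channels=4):
    """Usable dx340 memory under each RAS mode (illustrative only)."""
    if mode == "mirroring":
        return installed_gb / 2                    # runtime replicate copy: 50% usable
    if mode == "sparing":
        return installed_gb - channels * rank_gb   # one rank reserved per channel
    return installed_gb

print(usable_memory_gb(32, "mirroring"))           # 16.0
print(usable_memory_gb(32, "sparing", rank_gb=2))  # 24 (eight 4 GB dual-rank DIMMs)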



Examples of memory working in sparing mode are available in BladeCenter HS21 XM Memory Configurations, TIPS0671:

http://www.redbooks.ibm.com/abstracts/tips0671.html

Storage

Table 4-13 lists disk drives that are supported with any of the 2U dx340 configurations.

Table 4-13 dx340 2U disk drives supported

Interface type   Carrier       Disk size   Form factor   RPM    Feature code
SATA II          Simple swap   160 GB      3.5-inch      7200   5291
SATA II          Simple swap   250 GB      3.5-inch      7200   5292
SATA II          Simple swap   500 GB      3.5-inch      7200   5288
SATA II          Simple swap   750 GB      3.5-inch      7200   5531
SATA II          Simple swap   1 TB        3.5-inch      7200   5559
SAS              Hot swap      146 GB      2.5-inch      10 K   5534
SAS              Hot swap      73 GB       2.5-inch      15 K   5543
SAS              Simple swap   73 GB       3.5-inch      15 K   5178
SAS              Simple swap   146 GB      3.5-inch      15 K   5179
SAS              Simple swap   300 GB      3.5-inch      15 K   5533

Table 4-14 on page 60 lists disk drives that are supported with any 3U dx340 configuration.



Table 4-14 dx340 3U disk drives supported

Interface type   Carrier    Disk size   Form factor   RPM    Feature code
SATA II          Hot swap   160 GB      3.5-inch      7200   5150
SATA II          Hot swap   250 GB      3.5-inch      7200   5151
SATA II          Hot swap   500 GB      3.5-inch      7200   5196
SATA II          Hot swap   750 GB      3.5-inch      7200   5530
SATA II          Hot swap   1 TB        3.5-inch      7200   5560
SAS              Hot swap   146 GB      3.5-inch      15 K   5162
SAS              Hot swap   300 GB      3.5-inch      15 K   5532

For more information about internal and external storage options, see 4.5.1, “Storage and RAID options” on page 67.

Management

Each dx340 server has an integrated BMC, which supports IPMI Version 2.0. The feature set includes:

- Remote access to system fan, voltage, and temperature values

- Remote power on, off, and reset

- Remote console by way of Serial over LAN

- Remote access to the system event log

For more information about system management of iDataPlex servers, see Chapter 7, “Managing iDataPlex” on page 171.

4.4.4 iDataPlex dx360 server

The iDataPlex dx360 server, available in machine types 7833 (one-year warranty) and 6390 (three-year warranty), is a 1U tray with a system board that fits into the 2U Flex chassis. The dx360 server is a high-performance dual socket board, with support for up to 64 GB of memory and PCIe 2.0 x16 for high-speed interconnects. It is based on the Intel Stoakley platform.

The dx360 server has a serial port (RJ45 type), two 1 Gbps Ethernet interfaces, a VGA port, and two USB 2.0 ports, as shown in Figure 4-18 on page 61.



Figure 4-18 iDataPlex dx360 front panel

Product publications are available at the IBM support site:

- IBM System x iDataPlex dx360 User's Guide

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5077374

- IBM System x iDataPlex dx360 Problem Determination and Service Guide

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5077375

IBM has also published a FAQ with hints and tips about the dx360 server:

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5078400

Processors

The iDataPlex dx360 server is based on the Server System Infrastructure (SSI) industry standard and has two sockets for Intel Xeon dual- and quad-core processors. Table 4-15 on page 62 lists the available processors. The processors installed in the two sockets must be identical.


Table 4-15 Supported Intel Xeon CPU options on the dx360 server

Processor model    Front-side bus   Core speed   Number of cores   L2 cache   Power consumption   Feature code
Intel Xeon L5410   1333 MHz         2.33 GHz     Quad              12 MB      50 W                4406
Intel Xeon L5420   1333 MHz         2.50 GHz     Quad              12 MB      50 W                4429
Intel Xeon E5420   1333 MHz         2.50 GHz     Quad              12 MB      80 W                7765
Intel Xeon E5430   1333 MHz         2.66 GHz     Quad              12 MB      80 W                7764
Intel Xeon E5440   1333 MHz         2.83 GHz     Quad              12 MB      80 W                4407
Intel Xeon L5240   1333 MHz         3.00 GHz     Dual              6 MB       40 W                7753
Intel Xeon E5462   1600 MHz         2.80 GHz     Quad              12 MB      80 W                4437
Intel Xeon E5472   1600 MHz         3.00 GHz     Quad              12 MB      80 W                4436

Memory

The dx360 server has 16 DIMM slots for fully buffered registered ECC DDR2 modules and supports a maximum of 64 GB (using 4 GB DIMMs). It has four memory channels. DIMMs must be installed in matching pairs. At the time of writing, the dx360 supports the types of DIMMs listed in Table 4-16.

Table 4-16 Supported memory options for dx360 server

Description                                    Size   Feature code
PC2-5300 ECC DDR2 FBDIMM 667 MHz (low power)   2 GB   3959
PC2-5300 ECC DDR2 FBDIMM 667 MHz (low power)   4 GB   3960
PC2-6400 ECC DDR2 FBDIMM 800 MHz (low power)   2 GB   3986
PC2-6400 ECC DDR2 FBDIMM 800 MHz (low power)   4 GB   3987

Configuration rules are as follows:

- DIMMs must be installed in matched pairs across channels. DIMMs in a pair must be identical.

- DIMMs must be populated in the order listed in Table 4-17 on page 63. Note that for best performance, balance the DIMMs across the two branches. For example, an eight-DIMM solution performs better than a six-DIMM solution.



Table 4-17 Installation order of DIMMs on the dx360

Pairs (branch)            DIMM connectors (as indicated on the planar)
First pair (branch 0)     DIMM_A1 and DIMM_B1
Second pair (branch 1)    DIMM_C1 and DIMM_D1
Third pair (branch 0)     DIMM_A2 and DIMM_B2
Fourth pair (branch 1)    DIMM_C2 and DIMM_D2
Fifth pair (branch 0)     DIMM_A3 and DIMM_B3
Sixth pair (branch 1)     DIMM_C3 and DIMM_D3
Seventh pair (branch 0)   DIMM_A4 and DIMM_B4
Eighth pair (branch 1)    DIMM_C4 and DIMM_D4

- All DIMMs must have the same speed. However, different pairs of DIMMs do not have to be of the same size.

Memory sparing and memory mirroring are not supported features of the dx360 server.

Storage

Table 4-18 lists disk drives that are supported with the dx360 server.

Table 4-18 dx360 2U supported disk drives

Interface type   Carrier       Disk size   Form factor   RPM    Feature code
SATA II          Simple swap   160 GB      3.5-inch      7200   5291
SATA II          Simple swap   250 GB      3.5-inch      7200   5292
SATA II          Simple swap   500 GB      3.5-inch      7200   5288
SATA II          Simple swap   750 GB      3.5-inch      7200   5531
SATA II          Simple swap   1 TB        3.5-inch      7200   5559
SAS              Simple swap   146 GB      3.5-inch      15 K   5179
SAS              Simple swap   300 GB      3.5-inch      15 K   5533

For more information about internal and external storage options, see 4.5.1, “Storage and RAID options” on page 67.



Management

Each dx360 server has an onboard BMC, which supports IPMI Version 2.0. The feature set includes:

- Remote access to system fan, voltage, and temperature values

- Remote power on, off, and reset

- Remote console by way of Serial over LAN

- Remote access to the system event log

For more information about system management of iDataPlex servers, see Chapter 7, “Managing iDataPlex” on page 171.

Serial port

The serial port has an RJ45 connector; the pin-out configuration is listed in Table 4-19.

Table 4-19 dx360 serial port pin-out

Pin number   Description
1            RTS (request to send)
2            DTR (data terminal ready)
3            TXD (transmitted data)
4            GND (common ground)
5            RI (ring indicator)
6            RXD (received data)
7            DSR / DCD (data set ready / data carrier detect)
8            CTS (clear to send)

4.4.5 Operator panel

Each iDataPlex server has an operator panel at the front. Figure 4-19 on page 65 shows the power button and four status LEDs.

Note: Make sure to disable the Quiet Boot option in the dx360 BIOS. If Quiet Boot is enabled, you see only minimal messages during BIOS POST. For example, no messages from the SATA/SAS controller are shown and you are not able to enter LSI BIOS to configure a hardware RAID.



Figure 4-19 iDataPlex server operator panel

The LEDs are defined as follows:

- Power on (green): Indicates whether the server is powered on or off

- Activity (green): Indicates hard disk activity

- Locate (blue): Location LED controllable by the integrated BMC. This LED is useful for physically identifying a system from the management software for maintenance purposes.

- Fault (amber): Indicates that a fault has been detected by the integrated BMC

4.4.6 Storage and I/O tray

For storage and I/O-rich configurations, only one iDataPlex server is installed in a 2U chassis. To fill the space, you must have either an additional storage tray or an I/O tray to hold drives, PCIe expansion cards, or both. For more details about the supported 2U and 3U chassis configurations, see 4.6, “iDataPlex chassis configurations” on page 81.

Storage tray

The storage tray (feature code 4075) provides four bays, each with space for one 3.5-inch disk drive, either SATA or SAS. Figure 4-20 on page 66 shows an empty storage tray.

Tip: The power button has a protective collar around it to prevent accidental power-off of the server. You may remove this collar to more easily access the power button.



Figure 4-20 Storage tray

I/O tray

The I/O tray (feature code 4933) has three bays for disk drives and an I/O adapter cage, as shown in Figure 4-21. The three disk bays can host up to three 3.5-inch disk drives (SATA II or SAS), or six 2.5-inch SAS SFF disk drives (two in each bay). For more details about the supported 2U and 3U chassis configurations, see 4.6, “iDataPlex chassis configurations” on page 81.

The I/O adapter cage supports a total of two PCIe adapters (the bottom slot cover in the figure is not used).

Figure 4-21 I/O tray


4.5 iDataPlex options

This section discusses the available iDataPlex options. The first part explains internal and external storage options, and the RAID adapters. The second part discusses I/O option cards, the third part discusses Ethernet switch options, and the last part is about the iDataPlex PDU options. Topics in this section include:

- 4.5.1, “Storage and RAID options” on page 67
- 4.5.2, “I/O options” on page 72
- 4.5.3, “Networking options” on page 74
- 4.5.4, “PDU options” on page 77

4.5.1 Storage and RAID options

This section describes the available storage options for iDataPlex. Internal storage means it is within the same chassis as the iDataPlex nodes. External storage is in a separate enclosure connected through Ethernet or Fibre Channel to the iDataPlex node.

Onboard disk controllers and RAID support

The onboard controller of the dx320, the dx340, and the dx360 supports only SATA II drives and does not support RAID. See Table 4-20. If you plan to use SAS disks, or you want to use hardware-based RAID, you must install a separate PCIe disk controller, as described in the next section.

Table 4-20 Onboard disk controllers

Controller feature   dx320         dx340         dx360
SATA II support      Yes, 6-port   Yes, 6-port   Yes, 6-port
SATA port speed      3 Gbps        3 Gbps        3 Gbps
SAS support          No            No            No
Hardware RAID        No            No            No

PCI Express disk controllers and RAID support

You must install a separate disk controller in a PCI Express (PCIe) slot for any of the following reasons (the decision is sketched in code after this list):

- SAS drives are used (because the onboard controller supports SATA only)

- More than six drives are used (because the onboard controller has six SATA ports)

- Hardware RAID is required
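A minimal sketch of this decision, encoding only the three rules above (the function name and structure are ours, not part of any IBM tool):

def needs_pcie_disk_controller(drive_type, drive_count, hardware_raid):
    """True if a separate PCIe disk controller is required."""
    if drive_type.upper() == "SAS":
        return True          # onboard controller is SATA II only
    if drive_count > 6:
        return True          # onboard controller has only six SATA ports
    return hardware_raid     # onboard controller provides no RAID

print(needs_pcie_disk_controller("SATA", 5, hardware_raid=False))  # False
print(needs_pcie_disk_controller("SAS", 4, hardware_raid=False))   # True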


Table 4-21 lists the two supported RAID cards.

Table 4-21 Supported SAS/SATA RAID controller cards

Name                  Cache    Battery backup           Supported RAID levels    Feature code
IBM ServeRAID MR10i   256 MB   Optional, feature 5700   0, 1, 5, 6, 10, 50, 60   3571
IBM ServeRAID BR10i   None     None                     0 and 1                  3577

Note: For some chassis configurations, installing a separate disk controller might use the only available PCIe slot.

Table 4-26 on page 73 lists the iDataPlex servers that support the ServeRAID controller cards.

SAS connectivity

The next two figures show how each drive is cabled to the RAID controller cards. Figure 4-22 shows a 2U chassis with eight SAS small-form-factor drives, labeled SFF DRV 0 to SFF DRV 7.

Figure 4-22 SAS wiring for 2U chassis configurations

The first four drives (DRV 0 to DRV 3) are directly connected to the ServeRAID adapter with 1x SAS cables. The second half of the drives (DRV 4 to DRV 7) merges into a single 4x SAS cable using the mini-SAS connector.



Figure 4-23 shows the 3U chassis with twelve large-form-factor SAS drives, where each group of three drives merges into one 3 Gbps SAS connection. The 4x SAS cable combines four such triples, so that in total the twelve drives are connected to the controller card through 12 Gbps of bandwidth.

Figure 4-23 SAS wiring for 3U chassis configuration

For drivers and more information about the controllers, see:

- Drivers and firmware for iDataPlex RAID controllers

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5076022

- IBM ServeRAID MR10i documentation

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5074114

For detailed specifications, see ServeRAID Adapter Quick Reference, TIPS0054:

http://www.redbooks.ibm.com/abstracts/tips0054.html

Internal storage options

Each available configuration includes a certain number of local disks, as listed in Table 4-32 on page 82. IBM offers several different SATA II and SAS disks. Some of the disks are simple-swap (meaning they can be added or replaced only while the server is off); others are hot-swap (meaning they can be added or replaced while the server is running).



The following drives are used in the 2U Flex chassis:

- SATA drives are available only as 3.5-inch simple-swap drives, which means the server must be powered off before adding or replacing them.

- SAS drives are available as either 3.5-inch simple-swap drives or 2.5-inch hot-swap drives. Hot-swap drives can be replaced while the server is running.

Table 4-22 lists the drives available for the 2U Flex Node chassis.

Table 4-22 Supported storage options for the 2U chassis

Interface type   Carrier       Disk size   Form factor   RPM    Feature code
SATA II          Simple swap   160 GB      3.5-inch      7200   5291
SATA II          Simple swap   250 GB      3.5-inch      7200   5292
SATA II          Simple swap   500 GB      3.5-inch      7200   5288
SATA II          Simple swap   750 GB      3.5-inch      7200   5531
SATA II          Simple swap   1 TB        3.5-inch      7200   5559
SAS              Hot swap      146 GB      2.5-inch      10 K   5534
SAS              Hot swap      73 GB       2.5-inch      15 K   5543
SAS              Simple swap   73 GB       3.5-inch      15 K   5178
SAS              Simple swap   146 GB      3.5-inch      15 K   5179
SAS              Simple swap   300 GB      3.5-inch      15 K   5533

The 3U solutions offer either 3.5-inch SAS or 3.5-inch SATA drive options. Both SAS and SATA drives in this chassis are hot-swap drives. When replacing hot-swap drives, to ensure continued data availability, the drive must be part of a redundant array (for example, RAID 1 or RAID 5). Table 4-23 on page 71 lists the drives for the 3U chassis.



Table 4-23 Supported storage options for the 3U chassis

Interface type   Carrier    Disk size   Form factor   RPM    Feature code
SATA II          Hot swap   160 GB      3.5-inch      7200   5150
SATA II          Hot swap   250 GB      3.5-inch      7200   5151
SATA II          Hot swap   500 GB      3.5-inch      7200   5196
SATA II          Hot swap   750 GB      3.5-inch      7200   5530
SATA II          Hot swap   1 TB        3.5-inch      7200   5560
SAS              Hot swap   146 GB      3.5-inch      15 K   5162
SAS              Hot swap   300 GB      3.5-inch      15 K   5532

DS3000 support

The iDataPlex also supports the installation of IBM System Storage DS3000 devices within the iDataPlex rack. Supported models are listed in Table 4-24.

Table 4-24 Supported DS3000 models

Product name        Interface type                                     Model number
DS3300              iSCSI                                              1726-HC3
DS3400 (AC model)   Fibre Channel                                      1726-HC4
EXP3000             SAS (attached to DS3300 or DS3400 by way of SAS)   1727-HC1

All three models are 2U in height and have twelve 3.5-inch bays for hot-swappable SATA II or SAS drives. The maximum storage capacity of a single enclosure is currently 12 TB using 1 TB SATA drives, or 3.6 TB using 300 GB SAS drives.

You can attach up to three EXP3000 expansion units to a DS3000, thus serving up to 48 TB of storage. The EXP3000 units are chained to the DS3000 using the SAS interface. The DS3300 and DS3400 are available as a single- or dual-controller model, with each controller providing two host ports. The controller provides RAID capabilities with support for levels 0, 1, 3, 5, 6, and 10.
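The capacity figures follow directly from the bay count; this small sketch (ours, using the drive capacities given above) reproduces them:

DRIVE_BAYS = 12  # per DS3000 or EXP3000 enclosure

def max_capacity_tb(drive_tb, expansion_units=0):
    """Total capacity of a DS3000 plus chained EXP3000 units."""
    return (1 + expansion_units) * DRIVE_BAYS * drive_tb

print(max_capacity_tb(1.0))                     # 12.0 TB with 1 TB SATA drives
print(max_capacity_tb(0.3))                     # 3.6 TB with 300 GB SAS drives
print(max_capacity_tb(1.0, expansion_units=3))  # 48.0 TB with three EXP3000 units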


Note: You cannot close the rear door, or use the Rear Door Heat eXchanger if a DS3000 is installed in a rack. The DS3000 is slightly deeper than the iDataPlex rack.


For further information about implementing a DS3000 storage device, see IBM System Storage DS3000: Introduction and Implementation Guide, SG24-7065:

http://www.redbooks.ibm.com/abstracts/sg247065.html

4.5.2 I/O options

All iDataPlex servers except the dx320 support one or two PCIe cards through the use of a riser card. Depending on the available height in the PCIe cage (the right cage in Figure 4-21 on page 66), either a one-slot or a two-slot riser card can be installed. Two PCIe cards are supported only in I/O-rich configurations; the compute and storage-rich configurations allow for only one PCIe card. Table 4-25 summarizes the PCIe capabilities of each server.

Table 4-25 PCIe features of each iDataPlex server

Feature                         dx320   dx340         dx360
PCIe slots maximum              0       2x PCIe 1.0   1x PCIe 2.0
First card, mechanically        None    x16           x16
First card, electrically        None    x8            x16
Second card, mechanically (a)   None    x8            None
Second card, electrically (a)   None    x8            None

a. A second card requires an I/O-rich chassis configuration, that is, only one planar per chassis.

I/O option cards that are supported are listed in Table 4-26 on page 73.



Table 4-26 Supported PCIe option cards

Type                       Description                              Feature code   dx320   dx340   dx360
Fibre Channel              QLogic® 4 Gb single-port                 3567           No      Yes     No
Fibre Channel              QLogic 4 Gb double-port                  3568           No      Yes     Yes
Fibre Channel              Emulex 4 Gb single-port                  1698           No      Yes     No
Fibre Channel              Emulex 4 Gb double-port                  1699           No      Yes     No
iSCSI                      QLogic single-port HBA                   2976           No      Yes     No
iSCSI                      QLogic double-port HBA                   2977           No      Yes     No
SAS HBA                    IBM SAS HBA                              1681           No      Yes     No
Ethernet                   Intel Pro1000 4-port Ethernet            2975           No      Yes     Yes
SAS/SATA RAID controller   IBM ServeRAID MR10i                      3571           No      Yes     No
SAS/SATA RAID controller   IBM ServeRAID BR10i                      3577           No      Yes     Yes
InfiniBand                 Mellanox single-port 4x DDR HCA          5468           No      Yes     Yes
InfiniBand                 Mellanox ConnectX dual-port 4x DDR HCA   5471           No      Yes     Yes
Myrinet                    Myricom Myri-10G 2 MB with 10GBASE-CX4   2918           No      Yes     Yes

For information about the SAS/SATA controllers, see 4.5.1, “Storage and RAID options” on page 67.



4.5.3 Networking options

Several Ethernet switches are supported, as listed in Table 4-27. They cover a wide range of application scenarios.

Table 4-27 Supported Ethernet switches

Switch                  Ports   Layer      Port speed
BNT RackSwitch G8000F   48      L2/3       1 Gbps (a)
BNT RackSwitch G8100F   24      L2         10 Gbps
Cisco 2960G-24TC-L      24      L2         1 Gbps
Cisco 3750G-48TS        48      L2/3       1 Gbps
Cisco 4948-10GE         48      L2/3/4     1 Gbps (a)
Force10 S50N            48      L2/3       1 Gbps (a)
Force10 S2410CP         24      L2         10 Gbps
SMC 8024L2              24      L2 (b)     1 Gbps
SMC 8708L2              8       L2         10 Gbps
SMC 8848M               48      L2 (c)     1 Gbps (a)

a. 10 Gb uplink modules available.
b. Has L2/3/4 Quality of Service features.
c. Has limited L3 capabilities.

Table 4-28 on page 75 lists the features supported by each of the Ethernet switches.



Table 4-28 Basic features supported by Ethernet switches (switch names abbreviated from Table 4-27; a dash indicates no fixed ports)

Feature                  G8000F        G8100F     2960G        3750G        4948         S50N         S2410C    8024L2       8708L2    8848M

Ports:
# of ports               44 (2) (a)    -          20+4 (b)     48           48           44+4         20+4      24           0         44+4 (b)
Port speed               10/100/1000   -          10/100/1000  10/100/1000  10/100/1000  10/100/1000  10 Gbps   10/100/1000  -         10/100/1000
                         (1 Gbps)
Module slots             4+4 (c)       20+4 (d)   4            4            2            4            4         4            8         2
Module slot speed        1/10 Gbps (c) 10 Gbps    1 Gbps       1 Gbps       10 Gbps      10 Gbps      10 Gbps   1 Gbps       10 Gbps   10 Gbps
Stackable                No (e)        No (f)     No           Yes          No           Yes          No        No           No        Yes

VLAN support:
Number of VLANs          1024          1024       255          1005         2048         1024         1024      256          255       256
802.1Q VLAN tagging      Yes           Yes        Yes          Yes          Yes          Yes          Yes       Yes          Yes       Yes

Performance:
Link aggregation         Yes           Yes        Yes          Yes          Yes          Yes          Yes       Yes          Yes       Yes
Jumbo frames             Yes           Yes        Yes          Yes          Yes          Yes          Yes       Yes          Yes       Yes
IGMP snooping            Yes           Yes        Yes          Yes          Yes          Yes          No        Yes          Yes       Yes

High availability and redundancy:
Spanning Tree Protocol   Yes           Yes        Yes          Yes          Yes          Yes          Yes       Yes          Yes       Yes
VRRP                     Yes           Yes        No           No           Yes          Yes          No        No           No        No

Management:
Serial port              Yes           Yes        Yes          Yes          Yes          Yes          Yes       Yes          Yes       Yes
Telnet, SSH              Yes           Yes        Yes          Yes          Yes          Yes          Yes       Yes          Yes       Yes
SNMP                     Yes           Yes        Yes          Yes          Yes          Yes          Yes       Yes          Yes       Yes
CDP support              No            No         Yes          Yes          Yes          No           No        No           No        No
VTP support              No            No         Yes          Yes          Yes          No           No        No           No        No
GVRP support             No            No         No           No           No           Yes (i)      No        Yes          Yes       Yes

Security:
Port-based security      No (g)        No (g)     Yes          Yes          Yes          Yes          Yes       Yes          Yes       Yes
ACL (MAC-based)          Yes           Yes        Yes          Yes          Yes          Yes          Yes       Yes          Yes       Yes
ACL (IP-based)           Yes           Yes        No           Yes          Yes          Yes          No        No           Yes       Yes
TACACS+                  Yes           Yes        Yes          Yes          Yes          Yes          Yes       No           Yes       Yes
RADIUS                   Yes           Yes        Yes          Yes          Yes          Yes          Yes       Yes          Yes       Yes

Quality of Service (QoS):
802.1P                   Yes           Yes        Yes          Yes          Yes          Yes          Yes       Yes          Yes       Yes

Routing:
RIP                      No (h)        No         No           Yes          Yes          Yes          No        No           No        Yes
OSPF                     No (h)        No         No           Yes          Yes          Yes          No        No           No        No
BGP                      No (h)        No         No           Yes          Yes          Yes (i)      No        No           No        No

a. Two 10/100/1000 Ethernet RJ45 ports for management or stacking purposes only (indicated in brackets).
b. Four dual-purpose ports: you can either use the regular copper RJ45 port or the SFP module port. When using SFP modules, the corresponding RJ45 ports are disabled (ports 20-24 on the Cisco 2960 and ports 45-48 on the SMC 8848M).
c. Four optional 1 Gb SFP ports at the front, and up to four optional 10 Gb SFP+ or CX4 ports on the back (SFP+ and CX4 modules are available as 2-port modules only).
d. This module provides 20 CX4 ports and four slots for SFP+ modules.
e. Full stacking support in a future release.
f. Management stacking support in a future release.
g. Supported in a future release.
h. Available with optional future software release.
i. With latest FTOS only; not supported in SFTOS.

If you plan to use copper-based InfiniBand and Ethernet, mount both switches horizontally, if possible. Because of the number of cables that run along the B and D columns, the copper cables take up so much space in front of the vertical pockets that proper cabling to vertically mounted switches is no longer possible. See Figure 4-24. (This consideration applies only to copper-based InfiniBand; fiber cables take up much less space.)

Figure 4-24 Recommended cabling for Ethernet-only (left) and Ethernet plus InfiniBand (right)

4.5.4 PDU options

An iDataPlex rack can contain up to eight power distribution units. They are mounted in the vertical bays (B and D columns in Figure 4-3 on page 37) at the rear of the rack. Start populating pockets B1 and D1 with PDUs first; if more outlets are required, continue with pockets B5 and D5, then pockets B3 and D3, and finally pockets B7 and D7 (this fill order is sketched in code below).
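The recommended fill order can be expressed as a simple lookup; this is an illustrative sketch of ours, not an IBM tool:

PDU_POCKET_ORDER = ["B1", "D1", "B5", "D5", "B3", "D3", "B7", "D7"]

def pockets_for(pdu_count):
    """Return the vertical pockets to populate, in the recommended order."""
    return PDU_POCKET_ORDER[:pdu_count]

print(pockets_for(4))  # ['B1', 'D1', 'B5', 'D5']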

If you have cabled your input power to enter the rack from the top, mount the PDUs with the input connector down. If input power enters the rack from the bottom, mount the PDUs with the input connector up. This way, you ensure the best cable routing for the input power cord. You can account for the necessary bending radius without wasting additional space.

IBM offers three different IBM Distributed Power Interconnect (DPI®) PDUs for iDataPlex, as listed in Table 4-29 on page 78. The first two are available worldwide and have a detachable line cord. Amperage and voltage depend on the line cord used.

The three-phase PDU+ has a directly attached line cord and is available in the U.S. only. The line cord is rated for 17.285 kW. It requires a 208 V, 60 A circuit. The connector is an IEC 309 3P+G.



Table 4-29 Supported iDataPlex PDUs

PDU            Monitored   Detachable line cord   Power source required
DPI C13 PDU    No          Yes (a)                Varies (b)
DPI C13 PDU+   Yes         Yes (a)                Varies (b)
DPI C13 PDU+   Yes         No                     208 V, 60 A, three-phase

a. See Table 4-30 for line cord choices.
b. The amperage and voltage depend on the line cord used. See Table 4-30 for details.

Each of the three PDUs provides twelve C13 power outlets to the servers or switches.

Table 4-30 lists the modular line cords available, and the power for which each PDU or PDU+ is rated when connected with each cord.

Table 4-30 PDU line cords and resulting power ratings

Line cord                                                            Power source                      Maximum power
4.3 m, 30 A/208 V, E45d to NEMA L6-30P (U.S.)                        Single-phase 30 A, 208 V, 60 Hz   4,992 W
4.3 m, 32 A/380–415 V, E45d to IEC 309 3P+N+G (non-U.S.)             Three-phase wye 32 A, 380–415 V   19,968 W
4.3 m, 32 A/230 V, E45d to AS/NZS 3112 (Australia and New Zealand)   Single-phase 32 A, 230 V          7,360 W
4.3 m, 30 A/220 V, E45d to KSC 8305 (South Korea)                    Single-phase 30 A, 220 V          6,600 W
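As a cross-check, the U.S. single-phase rating matches the usual 80% continuous-load derating of the branch circuit; the 0.8 factor is our inference from the published figure, not a value stated by IBM:

def continuous_load_watts(volts, amps, derating=0.8):
    """Continuous-load rating: volts x amps x derating factor (inferred)."""
    return volts * amps * derating

print(continuous_load_watts(208, 30))  # 4992.0 W, the NEMA L6-30P rating above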

IBM offers bundles of PDUs and geo-specific line cords. Table 4-31 on page 79 lists the available bundles for all three PDU types.



Table 4-31 PDU and line cord bundles available for iDataPlex

PDU                            Line cord                                       Region                   Feature code
DPI C13 PDU non-monitored      4.3 m, 30 A/208 V, E45d to NEMA L6-30P          U.S.                     6022
DPI C13 PDU non-monitored      4.3 m, 32 A/380–415 V, E45d to IEC 309 3P+N+G   non-U.S.                 6026
DPI C13 PDU non-monitored      4.3 m, 32 A/230 V, E45d to AS/NZS 3112          Australia, New Zealand   6027
DPI C13 PDU non-monitored      4.3 m, 30 A/220 V, E45d to KSC 8305             South Korea              6028
DPI C13 PDU+ monitored         4.3 m, 30 A/208 V, E45d to NEMA L6-30P          U.S.                     6042
DPI C13 PDU+ monitored         4.3 m, 32 A/380–415 V, E45d to IEC 309 3P+N+G   non-U.S.                 6046
DPI C13 PDU+ monitored         4.3 m, 32 A/230 V, E45d to AS/NZS 3112          Australia, New Zealand   6047
DPI C13 PDU+ monitored         4.3 m, 30 A/220 V, E45d to KSC 8305             South Korea              6048
DPI C13 PDU+ monitored with    Fixed 4.3 m line cord                           U.S. only                6031
fixed line cord

Figure 4-25 shows the three-phase PDU+ with connection for an environmental monitoring probe (it is connected to the RJ45 console connector port, and is described after the figure).

Figure 4-25 Front view of a DPI C13 3-phase PDU+




The connectors on the PDU have the following functions:

- The RS232 serial connector is used to update the PDU firmware.

- The RJ45 LAN connector is used for remote access to the PDU through your LAN. The Ethernet connector supports 10/100 auto-sense network connections. The green LED on the left indicates a 100 Mbps connection; the amber LED on the right indicates a 10 Mbps connection.

- The RJ45 console connector serves two purposes. You can either connect your computer to configure the PDU (9600 baud, 8N1, no flow control; the default password is passw0rd, with a zero in place of the letter O), or you can connect an environmental monitoring probe. The PDU automatically detects the type of connection. A serial connection sketch follows this list.

- The reset button resets the PDU for communication purposes only. It does not affect the loads.

- The Operation mode DIP switch is used for maintenance purposes only. See the PDU documentation.

- Circuit breakers are activated if the associated power outlets exceed 20 A. Power to the outlet is then turned off. Firmly press the breaker pole to reset the circuit breaker.
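The console settings above are enough to script a basic serial session. The following sketch uses the pyserial package; the device path is an example and depends on your computer:

import serial  # pyserial

# PDU console settings from the list above: 9600 baud, 8N1, no flow control.
console = serial.Serial(
    port="/dev/ttyUSB0",            # example device path, not fixed by the PDU
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False,                  # no software flow control
    rtscts=False,                   # no hardware flow control
    timeout=1,
)
console.write(b"\r\n")              # wake the console, then log in with the
print(console.read(256))            # default password passw0rd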

For more information, download the DPI C13 and C19 PDU/PDU+ Installation and Maintenance Guide from:

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5073026

You can locally or remotely monitor the power supplied by the PDU at each output receptacle pair: current, voltage, and wattage (present, minimum, and maximum values).

The environmental monitoring probe enables remote monitoring of temperature and humidity and the status of two user-provided contact devices. It can be installed without interrupting loads connected to the PDU. The probe is connected using standard CAT5 network cabling and the probe can be located up to 20 m (65 ft) away from the PDU. You can configure the PDU to notify you if certain thresholds are exceeded or the contact status changes (through e-mail and SNMP).

For more details about PDU management, see 7.2, “IBM power distribution units” on page 172.


4.6 iDataPlex chassis configurations

The basic component of an iDataPlex rack is the chassis that houses compute and storage elements. IBM has introduced a new architecture called Flex Node Technology that allows the same 2U chassis to serve different purposes.

The chassis configurations can be grouped three ways:

- Compute-intensive

- Storage-intensive

- I/O-intensive

The chassis configurations differ in the number of available processors, disks, and PCIe slots.

Table 4-32 on page 82 gives an overview of the 11 different configurations available.

Note: When talking about iDataPlex, we refer to a node as a single compute-instance. A 2U compute chassis with two servers installed equals two nodes; a 2U chassis with a server and a PCIe tray or storage tray equals a single node.


Table 4-32 Available 2U and 3U chassis configurations

Node type             Nodes   Config   3.5-inch SATA bays   3.5-inch SAS bays   2.5-inch SAS bays   PCIe slots (total/available)   Standard power supply
Compute server (2U)   2       1A       2 simple-swap        0                   0                   1 / 1                           900 W
Compute server (2U)   2       1B       0                    2 simple-swap       0                   1 / 0 (a)                       900 W
Compute server (2U)   2       1C       0                    0                   4 hot-swap          1 / 0 (a)                       900 W
Storage server (2U)   1       2A       5 simple-swap        0                   0                   1 / 1                           375 W (b)
Storage server (2U)   1       2B       0                    4 simple-swap       0                   1 / 0 (a)                       375 W (b)
I/O server (2U)       1       3A       2 simple-swap        0                   0                   2 / 2                           375 W (b)
I/O server (2U)       1       3B       0                    2 simple-swap       0                   2 / 1 (a)                       375 W (b)
I/O server (2U)       1       3D       0                    0                   4 hot-swap          2 / 1 (a)                       375 W (b)
I/O server (2U)       1       3F       0                    0                   8 hot-swap          2 / 1 (a)                       375 W (b)
Storage server (3U)   1       4A       12 hot-swap          0                   0                   1 / 0 (a)                       900 W
Storage server (3U)   1       4B       0                    12 hot-swap         0                   1 / 0 (a)                       900 W

a. One PCIe slot must be used for a SAS/SATA RAID controller card.
b. The 375 W power supply can be used if the CPU installed uses 80 W or less (see Table 4-10 on page 57 for current processor options). If a supported video card (GPU) is used, it must also use 80 W or less. If either the CPU or the GPU consumes more than 80 W, a 900 W power supply must be used.

In addition to the 2U solutions, IBM offers a 3U chassis for storage-dense requirements. As listed in Table 4-32, the 3U storage chassis can be equipped with up to twelve 3.5-inch hot-swap SATA II or SAS drives, providing you with up to 12 TB (if using SATA drives) or 3.6 TB (if using SAS drives). The values increase as larger capacity disks are supported.



In all configurations, the 2U or 3U chassis provides a single, shared power supply and a single fan assembly (more details are in 4.3.1, “2U and 3U chassis” on page 45).

Table 4-33 gives an overview of which server is supported in which configuration.

Table 4-33 Supported configurations for each server

Node type             Configuration (see Table 4-32 on page 82)   dx320   dx340   dx360
Compute server (2U)   1A                                          Yes     Yes     Yes
Compute server (2U)   1B                                          No      Yes     Yes
Compute server (2U)   1C                                          No      Yes     No
Storage server (2U)   2A                                          No      Yes     Yes
Storage server (2U)   2B                                          No      Yes     Yes
I/O server (2U)       3A                                          No      Yes     No
I/O server (2U)       3B                                          No      Yes     No
I/O server (2U)       3D                                          No      Yes     No
I/O server (2U)       3F                                          No      Yes     No
Storage server (3U)   4A                                          No      Yes     No
Storage server (3U)   4B                                          No      Yes     No

4.6.1 2U Flex chassis configurations

This section describes the three types of chassis configurations to match the three major types of applications: compute-intensive, storage-intensive, and I/O-intensive.

All configurations of the 2U Flex chassis support either simple-swap SATA II, hot-swap SAS or simple-swap SAS drives, all at a speed of 3 Gbps. All SAS drives have full x1 bandwidth (3 Gbps) to the controller card.

Compute-intensive server (1A, 1B, and 1C)

The compute-node solution contains two identical 1U servers, each with either one 3.5-inch SATA drive (configuration 1A), one 3.5-inch SAS drive (configuration 1B), or two 2.5-inch SAS drives (configuration 1C).



Each node also has one PCIe slot. See Table 4-32 on page 82 for a comparison of the configuration types.

Figure 4-26 shows the 3.5-inch drive configuration, which can be either SAS or SATA. The 3.5-inch drives in each node must be of the same configuration (including disk size). These 3.5-inch drives use a simple-swap drive tray.

Figure 4-26 2U chassis with two servers and one 3.5-inch drive each; dx340 shown; configuration 1A or 1B

Figure 4-27 shows the configuration with two 2.5-inch hot-swap SAS disks in each node.

Figure 4-27 2U chassis with two servers and two 2.5-inch SAS drives each; dx340 shown; configuration 1C


If you use SAS drives (either 3.5-inch drives or 2.5-inch drives), you also must have a SAS disk controller in the PCIe slot.

Storage server (2A and 2B)

For storage-intensive applications, IBM offers configurations that have a larger number of high-capacity 3.5-inch disk drives. This configuration is sometimes referred to as a node with an attached storage tray.

Storage servers can have either five SATA drives (configuration 2A) or four SAS drives (configuration 2B). These drives are simple-swap (that is, non-hot-swap) and are 3.5-inch drives for maximum capacity. Figure 4-28 shows the SATA configuration (2A). See Table 4-32 on page 82 for a comparison of the configuration types.

Figure 4-28 2U chassis with one server and five SATA drives; dx340 shown; configuration 2A

You must install a separate disk controller in a PCIe slot for any of the following reasons:

- SAS drives are used (because the onboard controller supports SATA only)

- You require hardware RAID

If you do install a RAID card and you are using SATA drives, you can only connect four SATA drives, because these simple-swap drive trays are directly connected to the RAID card and there are only four connectors. Similarly, if you


use SAS drives, you can only connect four SAS drives. The lower-left drive bay remains unused in these situations.

I/O-intensive server (3A, 3B, 3D, and 3F)
For I/O-intensive applications, IBM offers configurations that can host up to two PCIe cards. This configuration is sometimes referred to as a node with an attached I/O tray.

A riser card that has two PCIe slots is inserted into the system-board. For more information about the PCIe slots and the I/O options, see 4.5.2, “I/O options” on page 72.

Figure 4-29 shows a 2U Flex chassis with two PCIe slots and two 3.5-inch disk drive bays.

Figure 4-29 2U chassis with one server and I/O tray (two 3.5-inch drives); dx340 shown

When 3.5-inch drives are used in a configuration with two PCIe slots, you can only install two drives. The space in the upper middle section of the chassis cannot be used to hold drives. This configuration is referred to as 3A and 3B in Table 4-32 on page 82.

Note: Figure 4-29 on page 86 shows three PCIe slot covers; however, only two slots are enabled in the server. The bottom slot is not used.


Figure 4-30 shows a configuration with two PCIe slots and eight 2.5-inch hot-swap drive bays. These are SAS drives; 2.5-inch SATA drives are not supported by IBM at this time.

Figure 4-30 2U chassis with one server and I/O tray (eight 2.5-inch SAS drives); dx340 shown; config 3F

The two available SAS configurations are eight drives, as shown in Figure 4-30, or four 2.5-inch SAS drives. If four are installed, they are the four left-most drives shown in the figure. The remaining four drive bays are covered with a filler panel, and the center hot-swap backplane is not installed. In Table 4-32 on page 82, the four-drive version is referred to as 3D and the eight-drive version as 3F.

The onboard 6-port SATA II disk controller of the available servers supports SATA only and does not support RAID. You have to install a separate disk controller in a PCIe slot for either of the following reasons:

� SAS drives are used (because the onboard controller supports SATA only).
� Hardware RAID is required.

Hot-swap is only supported with 2.5-inch SAS drives.

4.6.2 3U chassis configurations (4A and 4B)

For high density storage requirements, IBM offers a 3U chassis that can contain up to twelve 3.5-inch hot-swap drives, either SATA II (configuration 4A) or SAS (configuration 4B). In both configurations, the server PCIe slot must be used to install the SAS/SATA RAID controller card. Figure 4-31 on page 88 shows the 3U chassis with twelve disks.

Unlike the 3.5-inch drives available for the 2U Flex chassis, the 3.5-inch drives in the 3U chassis are hot-swap drives. These drive options differ from those supported in the 2U Flex chassis.

Note: The hot-swap backplane provides an x4 connection to the SAS controller card, which results in 12 Gbps of bandwidth for the twelve drives.


Figure 4-31 3U chassis with one server and twelve hot-swap drives; dx340 shown

4.6.3 General hardware configuration rules

This section briefly outlines the major configuration rules that apply when putting together a new iDataPlex configuration.

� For a 2U-compute solution, both servers must have matching CPUs, memory configurations, and disks (including disk size). Different node configurations within a single 2U chassis are not supported.

� You may not mix SATA and SAS in a single chassis.

� If hardware RAID is required, a SAS/SATA RAID controller card must be installed into an available PCIe slot.

� SAS configurations require a SAS controller card in the PCIe slot. The onboard disk controller of the current servers supports SATA only.


� If more than six drives are used (SATA or SAS), a SAS/SATA controller must be installed in the PCIe slot.

� For the 2U chassis, you cannot create a RAID on more than four 3.5-inch drives (SATA or SAS), because RAID requires a separate RAID controller and the supported controllers support only direct connection to four 3.5-inch simple-swap drives.

4.7 iDataPlex Management components

This section discusses the embedded management technologies in the iDataPlex nodes and the utilities to work with them. Tools that IBM provides for managing many servers are discussed in Chapter 7, “Managing iDataPlex” on page 171. Fee-based tools from IBM are not discussed here, but are available from the Tivoli® software group within IBM, and many IBM publications discuss products such as Tivoli Management Environment, Tivoli Workload Scheduler, and Tivoli Provisioning Manager.

4.7.1 IPMI and BMC

The iDataPlex nodes, and many optional third-party components, are compliant with Intelligent Platform Management Interface Version 2.0 (IPMI 2.0). IPMI operates independently of the operating system (OS) and allows administrators to manage a system remotely, even in the absence of the OS or the system management software, and even if the monitored system is not powered on.

IPMI also functions when the OS has started, and offers enhanced features when used with system management software. The nodes respond to IPMI messaging to report operational status, retrieve hardware logs, or issue requests. The nodes can alert by way of the simple network management protocol (SNMP) or platform event traps (PET). Administrators can view a text console by way of a serial-over-LAN (SOL) connection.

IPMI operates by way of a main controller called the baseboard management controller (BMC) and other satellite controllers. The satellite controllers within the same chassis are connected to the BMC by way of a system interface called the Intelligent Platform Management Bus (IPMB). IPMB is an enhanced implementation of the Inter-Integrated Circuit (I²C) bus.


Field-replaceable unit (FRU) information, an inventory (vendor ID, manufacturer, and so on) of the devices that are replaceable, is also available. A Sensor Data Records (SDR) repository provides the properties of the individual sensors present on the board. For example, sensors show CPU temperatures, fan speeds, and power supply voltages.
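Because the BMC speaks standard IPMI 2.0 over the LAN, any IPMI 2.0 utility can query this data. As a minimal sketch, the widely used open-source ipmitool utility (not an IBM-specific tool; the address and credentials shown are placeholders) could be used as follows:

ipmitool -I lanplus -H 10.10.10.42 -U USERID -P PASSW0RD fru print
ipmitool -I lanplus -H 10.10.10.42 -U USERID -P PASSW0RD sensor list
ipmitool -I lanplus -H 10.10.10.42 -U USERID -P PASSW0RD chassis power status
ipmitool -I lanplus -H 10.10.10.42 -U USERID -P PASSW0RD sol activate

The first two commands read the FRU inventory and SDR sensor values described above; the last two show the chassis power state and open a serial-over-LAN text console.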

4.7.2 SMBridge

The system management bridge (SMBridge) tool provides server management ability by way of two distinct modes of operation:

� Command-line interface mode (CLI)

� Interactive server mode (Server)

In CLI mode, SMBridge supports out-of-band access (through LAN or serial port) to a remote server. It enables SMBridge users to execute IPMI control commands in a native command line to manage the remote server.

To ease the use of this utility, SMBridge also supports interactive use, invoked by the -interactive option. In server mode, SMBridge is started as a background service or daemon after the SMBridge package is installed. You can connect to this service using any standard Telnet client, by default on TCP port 623 (the port number is configurable). After connecting, the CLI prompt appears:

SMBridge>

All commands provided in SMBridge CLI interactive mode also work in server mode, with two additional commands: console and reboot. These two commands allow administrators to view and change the BIOS settings, and to access the Linux serial console and Microsoft EMS/SAC interfaces.
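For example, a server-mode session might look like the following sketch (the management station name is a placeholder, and selecting the target node's BMC is done as described in the SMBridge documentation):

telnet mgmt-station 623
SMBridge> console
SMBridge> reboot

Here, console attaches to the remote text console of the managed node, and reboot restarts it, for example to change BIOS settings during startup.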

SMBridge must be correctly configured and running before an incoming Telnet connection can be accepted. The configuration file is a text file in which the values of runtime parameters are defined. For Windows, the default configuration file is located in the Windows installation directory. For Linux, the default configuration file is installed in the /etc directory. In most cases, modifying the default configuration is not necessary.

The latest version of SMBridge is available for download from:

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-64636


4.7.3 UpdateXpress System Packs

UpdateXpress System Packs are a tested set of online updates released together as a downloadable package when products are first made available, and quarterly thereafter. The UpdateXpress System Packs are unique to each model of server. The installer tool used to apply the updates supports Windows, Linux, and VMware ESX.

At the time of writing, Version 2.01 of UXSP is the current version, available from:

http://www.ibm.com/support/docview.wss?uid=psg1SERV-XPRESS

UXSP 2.01 contains the following features (examples of use are shown):

� A command line utility for downloading the latest firmware from the Web.

setup200.exe acquire -u -l /opt/uxsp -m 8850,7979 -o linux

� Create a report of content changes since the last UpdateXpress system pack.

setup200 acquire -m 8843 -o linux --latest --report

� Create bootable media - USB key, CD/DVD-ROM.

setup200.exe bootable --iso=ux2008.iso -l /opt/uxsp
setup200.exe bootable --usbkey=f: -l /uxsp

� Include or exclude specific updates, without customizing XML, as is required in 1.x versions.

setup200.exe update -l /opt/uxsp --include=ibm_fw_bios_bwe130a_linux_i386

� Update remote systems (a scripted loop sketch follows this list).

setup200.exe update -u -l /opt/uxsp --remote=10.0.0.1 --remote_user=admin --remote_password=passw0rd

� Support online update of x3950 M2 multi-node systems. Currently each node has to be updated independently.
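Because these updates are scriptable, the remote-update command shown above can be wrapped in a loop to service many nodes. The following sketch assumes a Linux management node and a hypothetical nodes.txt file containing one node address per line:

for ip in $(cat nodes.txt); do
    ./setup200 update -u -l /opt/uxsp --remote=$ip --remote_user=admin --remote_password=passw0rd
done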

4.7.4 IBM Director Update Manager

Update Manager can be set to download updates from the IBM Web site on a scheduled basis. IBM typically posts initial releases, suggested updates, and non-critical updates to the Web site weekly, so having Update Manager check for updates weekly is suggested. Critical updates are posted as soon as they are ready. Manual retrieval of updates can be done without interruption to the automated update schedule.


Update Manager uses profiles to determine what to download and install, and only updates matching the profile will be downloaded. IBM Director can be set up to notify the administrator when new updates have been retrieved by Update Manager.

Figure 4-32 shows the creation of an Update Manager profile.

Figure 4-32 Creation of an Update Manager profile

IBM Director Update Manager uses the same update files as UpdateXpress system packs, plus the XML files that describe them. Therefore, the updates are grouped by machine type and operating system combination.

IBM Director events are created when updates are downloaded, indicating the success or failure of the downloads. IBM Director Event Action Plans can be created based on the severity of the update downloaded.

Tip: The Update bundles referred to in the profile in Figure 4-32 are one or more separate UpdateXpress system packs.


A health check report can be generated for a specific update, to determine which systems the updates are applicable to. A health check report can also be generated for a system or group of systems to indicate applicable updates.

Figure 4-33 shows an example of a health check report.

Figure 4-33 A health check report


With the downloaded system packages, Software Distribution packages can be created by right-clicking a profile and selecting Create a Software Distribution Package. The software distribution package can be applied to a system or group of systems.

The prerequisites for Update Manager on servers include:

� For Level 0 systems (agent-less systems): IPMI device driver

� For Level 1 systems (hardware monitoring agent only):

– Director 5.20.x Core Services (Level 1 agent) contains firmware inventory enhancements added in 5.20, which were not present in 5.10.x versions.

– IPMI device driver is required for access to BMC inventory and online firmware updates.

� For Level 2 systems (Full Director agent):

– Director 5.20.x Agent: Contains firmware inventory enhancements added in 5.20, which were not present in 5.10.x versions.

– IPMI device driver is required for access to BMC inventory and online firmware updates.

Although the Update Manager packages might contain instructions to reboot the systems, here are some tips to control this behavior:

� View details of each update package in the Update Manager profile.

� Certain offline update packages cause an immediate reboot, by default. Examples include hard drive firmware, ServeRAID firmware, and CPLD updates.

� Before deploying, edit the Software Distribution Category, as follows:

a. Set and verify the reboot options.

b. If the reboot check box cannot be cleared, the package forces an immediate reboot of the endpoint (although this is infrequent).

c. If you want, drag entries to change the order of the update packages.

� Software distribution and reboots can be scheduled later by using IBM Director scheduler.

The success or failure of updates creates events, which can be the basis of IBM Director Event Action Plans. For more information about these and other IBM Director topics, see:

� Implementing IBM Director 5.20, SG24-6188
� Implementing IBM Systems Director 6.1, SG24-7694


4.7.5 Other firmware update methods

Firmware updates can be done offline, and are released in bootable diskette (.img) or CD-ROM (.iso) format. Updates are also released for Windows (.exe), and for Linux and VMware (.sh). These updates are run from the command line, are scriptable, and are provided with XML metafiles for use with IBM Director Update Manager and UpdateXpress system packs.

4.7.6 Update notification subscription

To receive weekly e-mail notifications of firmware and utility updates customized to the products that interest you, subscribe at:

http://www.ibm.com/support/subscriptions

Select the product family, system, and operating systems you want to monitor. Then, select the types of information you want and how you want to be notified, as shown in Figure 4-34 on page 96.


Figure 4-34 IBM Support Subscription: Selecting topics and notification method


Chapter 5. Application considerations

This chapter describes the applications for which iDataPlex is designed, and discusses things to consider when planning an iDataPlex solution. The chapter begins with iDataPlex hardware constraints, followed by constraints for operating systems and applications. The chapter discusses scenarios of software stacks that are well suited for iDataPlex. The chapter finishes with two example scenarios involving the use of iDataPlex.

Topics in this chapter are:

� 5.1, “Hardware constraints” on page 98
� 5.2, “Operating system considerations” on page 103
� 5.3, “Application considerations” on page 105
� 5.4, “Redundancy considerations” on page 106
� 5.5, “Example iDataPlex scenarios” on page 108


5.1 Hardware constraints

This section discusses the hardware constraints involved in the design of the iDataPlex. It provides details about the chassis (2U and 3U) and each of the available servers, the dx320, dx340, and dx360. It also discusses the differences between stateless and stateful operation, and lists consequences regarding operating systems and applications to consider when you plan an iDataPlex solution.

5.1.1 Chassis considerations

Both the 2U and the 3U chassis share one power supply for all chassis components. The power supply has only one input for a power cord.

The iDataPlex rack cabling can use Y-cables for connection from the PDU to the chassis power supply. This means one outlet on the PDU powers two chassis, which in turn power either two servers or one server and the storage subsystem. Figure 5-1 shows this chaining to chassis with either the 900 W or the 375 W power supply.

Figure 5-1 Shared power supply to the nodes (dx340 shown)


The use of Y-cables means that when you disable the PDU outlet, you shut down up to four servers in your iDataPlex rack. A single server node or storage enclosure is exposed to failures of three components:

� PDU outlet
� Y-cable
� Chassis power supply

Two PDU outlets are typically grouped and controlled by one circuit breaker. This means, in the worst case, a single failure can affect up to eight server or storage components.

The chassis fan assembly is also non-redundant and shared between the two chassis elements (that is, server-server or server-storage enclosures). The fans cannot be replaced individually because the fan assembly is a single unit. The fan assembly and power supply are replaced by first pulling the chassis out from the rack. Figure 5-2 shows the process of installing and uninstalling the fan assembly.

Figure 5-2 The fan assembly in the 2U chassis

No components in a 2U or 3U chassis are hot-swappable, except for hot-swap SATA/SAS disk drives if these are configured. Because you have to unplug the entire chassis for maintenance work, the nodes lose power. Hot-swap SATA/SAS drives can be replaced from the front without interrupting the nodes.

Note that, in a compute chassis with two servers, you can pull out one server without interrupting work on the other node in the chassis, because each is separately powered from the chassis power supply.


If one enclosure is pulled out from the chassis, shutters in front of the fan assembly flip down automatically to prevent airflow through this empty bay. Figure 5-3 shows an empty 2U chassis with both top and bottom shutters closed.

Figure 5-3 Empty 2U chassis with power supply (left) and shutters (right) at the back

5.1.2 iDataPlex server considerations

A single iDataPlex server is similar to other commodity server hardware on the market. All available models use an industry-standard SSI system-board, Intel x86-compatible processors, and DDR2 memory; have two Gigabit Ethernet interfaces onboard; and provide additional connectivity by way of one or two PCI Express (PCIe) slots. This approach allows for the vast majority of today’s applications to run in an iDataPlex environment. No porting is necessary to exploit all the advantages iDataPlex gives you.

Only the connections on the front are hot-pluggable. You cannot access or work on system components while the server is powered on. You have to unplug the server from the chassis in order to lift the top cover and access the components. Hot-swappable SAS/SATA drives are the only components you can install or uninstall without powering down the node.

The dx320, dx340, and dx360 system boards contain a SATA-only controller without RAID features. If you want RAID at the hardware level, you must use a supported PCIe-based RAID card. However, the dx320 server supports only the compute-node configuration, which means one disk drive per server. The dx340 server has only one or two PCIe slots, depending on the configuration of the chassis. If you have a compute chassis with two servers in it, your only choice is between one and two local drives per server. If you then choose SAS drives, you must have a SAS/SATA controller in the only available PCIe slot, which means you cannot have any additional PCIe cards (such as InfiniBand) in this configuration.


The dx360 server is only supported in configurations with one PCIe slot (that is, compute- and I/O-rich chassis configuration). If you then install a SAS/SATA controller card (which is needed for SAS drives or hardware RAID), it occupies the only available PCIe slot and you cannot install additional I/O adapters. See Table 4-32 on page 82 for the exact number of available PCIe slots in each of the server configurations.

5.1.3 Stateful versus stateless operation of the nodes

The terms stateful and stateless define whether a computer system (operating system, application, Web page, and so forth) remembers its previous state. A state can be considered as a set of information within the system. When looking at operating systems, you can consider a system that has a static IP to be stateful. An operating system is considered stateful if you create a file somewhere in the file system, and this file continues to be available after the next reboot. An application is considered stateful if it remembers your last opened files or Web pages. A stateless application does not remember them.

A standard HTTP server is considered stateless. It receives HTTP requests and answers them with the requested HTML page. It does not remember the pages served before. When you add interactive elements to the Web server to allow you to log in and track your visited pages, then this server becomes a stateful server.

Stateful systems are easy to understand, install, and maintain. They are the most common types in today’s IT landscape. If you are not sure whether you have stateful or stateless systems, you most likely have a stateful environment. Installations are typically done to local disk drives, and changes are applied to the specific server only.

Large stateless complexes are characterized by nodes that work independently of each other; the problem that the overall complex is trying to solve is not significantly affected if any one node goes offline. Any work lost as a result of that node going offline is simply reallocated to another node to either complete or repeat.

Stateless systems have their advantages when working with hundreds or thousands of nodes. Only one (read-only) image is required that all the nodes can access and boot. In this setup, the nodes do not require any locally-installed hard drives, eliminating a big source of failure because the probability of hard drive defects increases linearly with the number of drives in use.

Another major advantage of using stateless systems is the ease of management. You only have to maintain one installation, instead of several hundred or thousands. Suitable management tools allow easy replication in a highly efficient manner.


5.1.4 Conclusions for operating systems and applications

In summary of the iDataPlex hardware characteristics, the following conclusions can be drawn:

� A broad range of applications can run on iDataPlex because of industry standard components (standard x86-compatible platform with PCIe slots and support for Linux and Windows).

� The iDataPlex flexible node configuration can be customized to fit a designated purpose.

� The iDataPlex has limited redundancy at the hardware level, and redundancy is only possible within the storage and memory subsystems of a node (using RAID, and memory mirroring or sparing technology). These redundancy features are not as substantial as System x and BladeCenter redundancies, where environmental components such as power supply and fans are also redundant.

� The iDataPlex design assumes that redundancy is implemented at the operating system or application layer.

� The iDataPlex nodes have a small number of PCIe slots (one or two slots only, one of which might be occupied by the SAS/SATA RAID controller card).

� Depending on the type of workload, you may run both OS and applications in either a stateful or a stateless manner. The flexible node configuration and the high density of iDataPlex allows you to exploit the advantages of both approaches.

Note: Stateless operation does not imply network boot, and network boot does not imply stateless operation. Booting a stateless, read-only image from local storage (a USB stick, for example), and operating statefully in a network boot environment are both possible.


5.2 Operating system considerations

This section lists the supported operating systems and discusses the locations for storing the OS (local disk, SAN, or a shared network).

5.2.1 Supported operating systems

At the time of writing, the following operating systems are supported for the iDataPlex servers:

� dx320

– Linux

• Red Hat Enterprise Linux 4 ES for AMD64/EM64T

� dx340

– Microsoft

• Windows Server® 2003/2003 R2, Enterprise Edition
• Windows Server 2003/2003 R2, Enterprise x64 Edition
• Windows Server 2003/2003 R2, Standard Edition
• Windows Server 2003/2003 R2, Standard x64 Edition
• Windows Server 2003/2003 R2, Web Edition
• Windows Server 2003, Enterprise Edition (64-bit) with Windows Compute Cluster Services (WCCS)

– Linux

• Red Hat Enterprise Linux 4 AS for x86
• Red Hat Enterprise Linux 4 AS for AMD64/EM64T
• Red Hat Enterprise Linux 4 ES for x86
• Red Hat Enterprise Linux 4 ES for AMD64/EM64T

� dx360

– Linux

• Red Hat Enterprise Linux 5 Server for AMD64/EM64T
• SUSE® Linux Enterprise Server 10 Service Pack 1 for AMD64/EM64T

5.2.2 Location of the operating system

The decision of where to store the operating system is based on customer requirements. The iDataPlex server supports network boot by using the Preboot Execution Environment (PXE), enabling you to boot from local disk or from the network over either one of the two onboard Ethernet interfaces. You may change the boot sequence either permanently from the BIOS, or temporarily by pressing F12 during startup of the server. For detailed information about the PXE boot configuration, see the User’s Guide of the respective server. For the Web site addresses, see 4.4, “iDataPlex servers and components” on page 50.

When using network boot, the best strategy to follow depends on your global environment. For compute-intensive stateless workloads, you might want to make all clients boot the same read-only image from one server that is installed in the same iDataPlex rack. In this case, the local disk can be used just for swapping or other temporary data.
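As a minimal sketch of such a setup, a PXELINUX-based boot server (one common PXE implementation; the kernel name, NFS server address, and image path below are placeholders) could serve all compute nodes the same read-only root file system:

# /tftpboot/pxelinux.cfg/default
DEFAULT stateless
LABEL stateless
    KERNEL vmlinuz
    APPEND initrd=initrd.img root=/dev/nfs nfsroot=10.0.0.1:/images/compute ro ip=dhcp

Each node obtains its address by way of DHCP, loads the same kernel and initrd over TFTP, and mounts the shared image read-only over NFS.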

With support for selected 2U DS3000 series models, installing a SAN environment in the iDataPlex rack and booting servers entirely from SAN is possible. In this case, both the operating system and application data are stored on DS3000 drives. The supported DS3000 models are the DS3300 (iSCSI) and the DS3400 (Fibre Channel), together with the EXP3000 expansion unit. The DS3300 acts as an iSCSI target, and you must install an iSCSI HBA (that implements an iSCSI initiator in hardware) in the iDataPlex server’s PCIe slot. To be able to boot by way of iSCSI, the iSCSI HBA must have an option ROM.

If you plan to implement an iSCSI installation, keep iSCSI traffic separated from your other network traffic. Use VLANs to implement this, for example.
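For nonboot data access from a running Linux system, a software initiator can also reach the DS3300. The following sketch uses the open-iscsi administration tool found in current Linux distributions (the portal address and target IQN are placeholders):

iscsiadm -m discovery -t sendtargets -p 10.1.1.50:3260
iscsiadm -m node -T iqn.1992-01.com.example:storage.ds3300 -p 10.1.1.50:3260 --login

The first command asks the storage controller which targets it offers; the second logs in to one of them, after which the LUNs appear as ordinary SCSI disks. Booting from iSCSI still requires the hardware initiator with an option ROM described above.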

Similarly, for Fibre Channel you have to install one of the available Fibre Channel (FC) adapters to be able to access storage on the DS3400. An option ROM on the FC HBA enables you to boot from FC storage.

The following documents contain additional SAN information:

� IBM System Storage DS3000: Introduction and Implementation Guide, SG24-7065

http://www.redbooks.ibm.com/abstracts/sg247065.html

� IBM System Storage DS3000 series Interoperability Matrix

http://www.ibm.com/systems/storage/disk/ds3000/pdf/interop.pdf

� How to access iSCSI-based storage with the iSCSI driver in Red Hat Enterprise Linux 4

http://kbase.redhat.com/faq/docs/DOC-3939

Note: When you plan to boot your Windows Server installations from iSCSI, you should have at least one system with local storage to be able to install Windows locally, because Windows does not support installation to iSCSI devices directly.


� Microsoft support for iSCSI

http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/msfiSCSI.mspx

� Boot from SAN in Windows Server 2003 and Windows Server 2000 (white paper from Microsoft)

http://www.microsoft.com/windowsserversystem/wss2003/techinfo/plandeploy/bootfromsaninwindows.mspx

This document also explains how to implement boot from SAN when using Microsoft Cluster Services. It can be done with boot and shared cluster disk either on the same HBA port or on different HBA ports.

5.3 Application considerations

This section discusses the type of applications that are suitable for iDataPlex. Besides the general fact that you can put almost any kind of workload on iDataPlex servers, we focus on those types for which iDataPlex is designed.

Designed for massive scale-out data centers, the concept of iDataPlex is to provide an extremely scalable implementation of commodity hardware (similar to white boxes) and combine it with easy management. This concept is in line with how many Web startup companies run their systems. Many Web 2.0 sites started with a small number of Web, application, and database servers (some even with only one for all applications). Over time, as the number of registered users grew, more of those white boxes were added to accommodate the increased workload. Scalability is a must-have for this kind of software. Popular open source software such as the Apache HTTP Server, PostgreSQL, and MySQL™ support clustering to deal with this approach of expansion on demand. Such environments most often run on Linux or a derivative of Berkeley Software Distribution (BSD) because they provide an inexpensive alternative to commercial operating systems such as Microsoft Windows.

The term LAMP is often used in those discussions to refer to a software stack consisting of Linux, Apache, MySQL, and PHP. These are the major components used to build a Web server infrastructure. Today, the set of available applications and tools has grown significantly. It is no longer just MySQL that is used as a database backend, but also other open source software such as PostgreSQL or Apache Derby (a database that has a small memory footprint and can be embedded into Java applications). PHP is not the only application used to create dynamic Web pages. Many existing frameworks implement Web 2.0 elements, such as Asynchronous JavaScript™ with XML (AJAX), and are based on popular scripting languages like Python, PHP, Perl, or Ruby (on Rails). All those server applications and extensions have good scale-out capabilities and are a perfect fit for iDataPlex. Every data center that has a substantial number of white boxes in use with these server and software products is a candidate for consolidation using iDataPlex.

WebSphere® is also a fit for iDataPlex. An analysis of how iDataPlex is suitable to run WebSphere is made in the Harvard Research Group (HRG) paper Enhancing Business Agility - SOA, WebSphere and iDataPlex, available from:

ftp://ftp.software.ibm.com/common/ssi/rep_wh/n/XSW03015USEN/XSW03015USEN.PDF

In the arena of high-performance computing, iDataPlex might be your choice if your applications currently rely on a farm of commodity x86-compatible servers. Example areas of use are simulations or modeling in computer aided engineering (CAE) or scientific research (medical science or chemistry, for example). In many cases, those applications run on x86-compatible systems and scale well when adding more nodes.

5.4 Redundancy considerations

This section describes how to implement basic redundancy in your iDataPlex environment. It discusses the considerations when planning for an infrastructure that can continue operating with failed hardware and software components.

Several components can fail:

� PDU, PDU cable (Y-cable), chassis power supply

� Server (including system-board, CPU, and memory)

� Ethernet connectivity (broken switch or network cable)

The following actions can be taken to protect against those kind of failures:

� Redundant power sources to the iDataPlex rack

When you use more than one PDU, you have to install more than one power cable to the iDataPlex rack. Make sure that they connect to different power lines from your electric utility company or, even better, to two different, independent companies.

� Aggregate multiple physical Ethernet links to one virtual link

When cabling your iDataPlex network connections, you can also implement redundancy at the physical layer to protect against a switch breakdown or a faulty Ethernet cable. You can have a redundant connection between the server and the switch by using link aggregation techniques, sometimes also referred to as trunking, bonding, or teaming (a Linux bonding sketch follows this list).

The major open source operating systems, like Linux or the BSD derivatives, support trunking of multiple network interface cards (NICs) into one virtual interface. If one of the connected physical links goes down, the virtual link remains active and can transmit data over the remaining links. Microsoft Windows has no native support for link aggregation, but some vendors’ device drivers implement it.

Link aggregation is also supported between switch-to-switch connections. If you use switches from different vendors, make sure you do not use a proprietary aggregation protocol, but instead use the link aggregation control protocol (LACP) that is part of the IEEE 802.3ad standard. Similarly, if you use VLANs with different switch vendors, make sure you use IEEE 802.1Q as the trunking protocol and not the Cisco proprietary Inter Switch Link (ISL).

� Cabling for a redundant path: server → iDataPlex switches → external switches

You can also introduce redundancy by cabling one server to two different switches. This approach prevents the server from losing connectivity if one switch fails. To provide full redundancy, continue cabling a second path from the iDataPlex switches to the rest of your infrastructure.

With current iDataPlex switch options, it is not possible to aggregate two interfaces of one server to a combined link that ends in physical ports on two different switches. Only high-end core switches from Cisco and Nortel allow this, such as with Cisco’s virtual switching system (VSS) or Nortel’s split multi-link trunking (SMLT) technology.

� Standby server (hot or cold standby)

A standby server can be implemented in either hot or cold standby. Hot standby means the spare server is ready to take over immediately for a failing server. No major delay is noticed by users when a switchover to a hot-standby server occurs.

One requirement for allowing a quick switchover is that both machines always have the same version of the application data. An easy way to realize this, without tedious data replication to the standby machine, is with a shared disk that can be accessed from both machines. A shared disk can be implemented, for example, as a SAN or a network shared disk (using services such as SMB/CIFS, NFS, GFS, or GPFS™).

Note: When cabling for Ethernet redundancy (that is, more than one path between switches), make sure the spanning tree protocol (STP) is running on the switches. STP detects switching loops, deactivates redundant paths, and automatically reactivates them if the primary connection fails.


The High-Availability Linux Project (Linux-HA) is a popular open-source project that combines nodes to form clusters. When a node in a cluster fails, the other nodes take over the tasks that were running on that node. For more information about Linux-HA, see:

http://www.linux-ha.org

Availability monitoring can be implemented on two layers:

– You can monitor your server as a single component.

– You can monitor dedicated resources, such as a Web server or database.

Commercial products offering high-availability capabilities include IBM Tivoli System Automation for Multiplatforms and SteelEye® LifeKeeper®. Systems in cold standby usually remain either powered off or in a basic setup until they are used. Users of those services experience a significant delay, or even notice the downtime of the server, until the standby server is set up and ready to take over the workload.
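As referenced in the link aggregation item above, the following is a minimal sketch of Linux channel bonding in the Red Hat configuration style of this era (device names, addresses, and the LACP mode are illustrative; mode=802.3ad also requires matching configuration on the switch):

# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes

With this setup, the two onboard Gigabit Ethernet ports appear as one logical interface, and traffic continues over the surviving port if a link fails.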

5.5 Example iDataPlex scenarios

The following two sections illustrate how iDataPlex can be implemented when designing a solution. They focus on the main users of massive scale-out data centers: Web 2.0 companies and HPC customers. The two scenarios are meant as high-level examples to demonstrate the flexibility of a design based on iDataPlex. They are not design guides or recommendations on the use of specific hardware or software.

5.5.1 Web 2.0 example

On the surface, data centers at Web 2.0 companies look similar: a big farm of commodity x86 boxes. But when you look more closely at the applications that run on these systems, the picture differs. Although most of them run generally available open-source software, they enrich it with additional features or add specific components as proprietary pieces (these are the real assets of several companies; the most famous example is the Google search algorithm).

Note: The two versions of Linux-HA are release 1 (v1) and a redesigned release 2 (v2). For an initial setup, V1 seems easier to get started with and configure, but it has several restrictions. For more information, see:

� Version 1:

http://linux-ha.org/FactSheet

� Version 2:

http://linux-ha.org/FactSheetv2


The following example covers a generic approach of running a Web 2.0 data center with a software stack comparable to LAMP, enhanced by dynamic elements. It consists of a firewall, a Web server, an application server, and a database server farm. Other than the firewall, these are assisted by a caching server. Figure 5-4 shows the typical configuration.

Figure 5-4 Generic Web 2.0 infrastructure

The following list shows characteristic parameters for server configurations for each of the different application areas:

� Firewall: I/O-rich configuration

– 2U chassis
– Intel Pro1000 Ethernet card, providing four additional 1 Gbit Ethernet ports
– Four hot-swappable SAS drives with hardware RAID
– High availability solution, such as Linux-HA

� Web server: storage-rich configuration

– 2U chassis
– SATA drive configuration
– Aggregated Ethernet links for higher availability and performance

� Application server: storage-rich configuration

– 2U chassis
– SATA drive configuration
– CPU speed and main memory size above standard installations

� Database server: 3U storage-rich configuration

– SAS drives, 15 k RPM, hardware RAID
– CPU speed and main memory size above standard installations
– Aggregated Ethernet links for higher availability and performance


If the Web server does not have high storage demands, using the compute-node solution, which can provide two nodes per 2U, is possible, allowing for twice the density.

Figure 5-5 shows how an iDataPlex rack might be configured with these types of servers.

Figure 5-5 iDataPlex sample Web 2.0 configuration

(The figure shows the rack’s columns populated with firewalls, Web servers with caching, application servers with caching, and database servers with caching, plus dynamic spares, Ethernet switches, PDUs, a management box such as the Avocent MergePoint 5300, and 1U reserved for cable pass-through.)

The configuration assumes a data center with cabling from the ceiling. Therefore, 1U is left for cables at the top of column C. All the Ethernet switches are capable of working at ISO Layer 2 or Layer 3, which provides security features such as IP-based access control lists (ACLs). For additional security, VLANs are used to separate the different server clouds. The different Ethernet switches are connected by way of two or more aggregated links to ensure best performance and high availability. Spare servers, which can be used if other servers fail, are available in the upper-left part of the rack.

If the company is growing, more capacity is likely necessary in the data center. Increasing the computing power in the data center by adding a second iDataPlex rack is easy.

For example, if a Web 2.0 company decides to enhance its current service with social networking, the company will require more database servers (and possibly also application servers) to handle the workload. The company can install an additional iDataPlex rack with the left half (column A) populated with database servers, and the right half (column C) populated with application servers. Because the Avocent MergePoint management appliance is capable of handling up to 256 servers, the company can continue to manage its iDataPlex infrastructure from a single appliance.

5.5.2 HPC example

This example shows how to implement iDataPlex for an HPC customer who wants an expandable, but low-cost environment based on Linux. The solution is based on one rack only, but it applies equally to more. The scenario assumes an application that is CPU-bound, uses an average amount of main memory, and needs low-latency communication for data bursts at infrequent intervals.

The proposed solution consists of 76 compute nodes, each of them being one dx360 server with two Intel quad-core Xeon CPUs. This results in a configuration with 152 physical CPUs and 608 cores in one rack. Each node can be equipped with up to 64 GB of main memory running at 800 MHz, giving a total of 4,864 GB of RAM in a single rack.

All nodes work in a stateless environment, PXE-booting from one management node. The Ethernet interfaces are only used for booting and controlling the nodes and InfiniBand is used as the high-speed interconnect. Four 24-port InfiniBand switches are installed in the upper and lower region of columns A and C. See Figure 5-6 on page 112. Each node requires an InfiniBand HBA in the PCIe slot, which limits the configuration to the use of SATA drives (SATA drives do not require a separate controller). The local disk is used for swapping and temp disk purposes only.


Figure 5-6 iDataPlex sample HPC configuration

Note: The sum of all devices in either column A or C is 43. This value is correct because there is one additional unit in both columns for management and switch options.

(The figure shows 40 compute nodes in column A and 36 in column C, four 24-port InfiniBand switches (1U each), two 48-port Ethernet switches (1U each), a 3U login and management node, PDUs in the vertical bays, and 1U reserved for cable pass-through.)

Compute-node characteristics include:

� IBM dx360 server, with two Intel quad core Xeon CPUs and 64 GB RAM

� PXE performs stateless boot through Ethernet; local SATA disk is used for swap and scratch

� InfiniBand HBA for low-latency communication

In our scenario, we have dedicated a 3U chassis as the management server, providing space for up to 3.6 TB of data on SAS disks, depending on number and size of installed disks. The example configuration uses fast 300 GB SAS drives spinning at 15 k RPM. This allows for the highest speed when reading or writing data.

The storage is divided into two partitions: a small partition for the local file system of the management node, and a big partition where the shared file system for the nodes resides. The management node runs either Distributed Image Management (DIM), or xCAT and Warewulf, serving stateless images.

With DIM, about 5 - 10 GB of storage on the local file system is sufficient, including the image for all 76 nodes (the exact amount depends on the number of required software packages). Compared to the 3.6 TB of available space, only a small fraction has to be sacrificed for management. The remainder is allocated for a parallel file system, such as General Parallel File System (GPFS) or Global File System (GFS), that is mounted on all the nodes. The parallel file system is provided to the nodes as a common file system to exchange application data.

For monitoring purposes, each node can run a small Ganglia daemon to monitor the performance. On the management node, this information is collected and visualized in several graphs on a Web interface.
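As a minimal sketch of such a Ganglia setup (assuming Ganglia 3.x; the cluster name, host name, and unicast transport are illustrative choices), each compute node’s gmond.conf could send metrics directly to the management node:

/* gmond.conf on each compute node */
cluster {
  name = "idataplex-hpc"
}
udp_send_channel {
  host = mgmt-node   /* unicast metrics to the management node */
  port = 8649
}

The management node’s own gmond additionally needs udp_recv_channel and tcp_accept_channel sections on port 8649 so that the gmetad daemon can poll it and render the collected metrics on the Web interface.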

Figure 5-7 on page 114 shows Ganglia monitoring 32 busy cluster nodes in an iDataPlex rack.


Figure 5-7 An iDataPlex cluster with 32 nodes running Ganglia

Management node characteristics include:

� 3U chassis

� Twelve SAS drives, each 300 GB, 15 k RPM, and RAID-5

� One interface internal to the cluster, one interface to the production network

� Runs Linux with either DIM or xCAT and Warewulf, serving stateless images through PXE boot

� Runs a job scheduler such as IBM LoadLeveler®, or Simple Linux Utility for Resource Management (SLURM)

� Acts as a GPFS or GFS server for the nodes

� Runs Ganglia for performance monitoring of all nodes


The design uses 3U on each side for the switches. Copper-based InfiniBand switches should not be mounted in vertical bays (column B or D) because the thick cables take up too much space to route them that way (for more details about cabling constraints, see 4.5.3, “Networking options” on page 74). The 1U at the bottom of column C is used to pass uplink cables through to the existing infrastructure (assuming a raised floor data center; otherwise this additional 1U would be at the top).

The customer data center requirements can be met with the flexible iDataPlex design. For instance, IBM was able to deploy a compute-chassis iDataPlex solution, in which all 84 nodes used copper-based InfiniBand cables to exit the rack and connect directly to an external switch.


Chapter 6. Designing your data center to include iDataPlex

This chapter discusses rack details, several recommendations for moving the rack to its proper place, how the rack interfaces with the data center, and issues that data centers face, such as cooling, airflow, power, and others.

Topics in this chapter are:

� 6.1, “Data center design history” on page 118
� 6.2, “Rack specifications” on page 118
� 6.3, “Rear Door Heat eXchanger specifications” on page 120
� 6.4, “Cabling the rack” on page 127
� 6.5, “Data center power” on page 133
� 6.6, “Cooling” on page 135
� 6.7, “Airflow” on page 138
� 6.8, “Water cooling and plumbing” on page 152
� 6.9, “Designing a heterogeneous data center” on page 155
� 6.10, “iDataPlex servers in a standard enterprise rack” on page 158
� 6.11, “The Portable Modular Data Center” on page 161
� 6.12, “Possible problems in older data centers” on page 164
� 6.13, “Links to data center design information” on page 168

If you require more information or assistance, see Chapter 8, “Services offerings” on page 181, and the iDataPlex and Rear Door Heat eXchanger documentation.


6.1 Data center design history

In 1965, Gordon Moore predicted that the number of transistors that could inexpensively be placed on a chip would double approximately every two years, and this statement (known as Moore’s law) is still accurate. This increase in transistor density means that computer equipment is often replaced by either physically smaller systems or more powerful systems (or both smaller and more powerful).

A larger number of smaller units equates to higher rack densities, often resulting in a significant increase in power consumption and cooling requirements. Higher power density leads to different cooling requirements, because each additional watt of power used by IT equipment means additional power for cooling must be provided to remove the heat produced from the data center. This fact is also true of the iDataPlex, which allows 138% more servers than standard 1U servers in the same floor space.

In the 1960s, IBM introduced two technologies:

� The IBM System/360 Model 91 with water cooling
� Virtualization on the S/360 mainframe in the form of CP67

These two technologies are now used in data centers that run servers with Intel and AMD™ processors. The use of water for cooling can be up to 3500 times more efficient than air, and virtualization helps consolidate the low-utilization, energy-wasting servers that are running in data centers around the world. For more information, see:

http://searchdatacenter.techtarget.com/news/article/0,289142,sid80_gci1218484,00.html

The more we look back, the more we see that problems we face now are not new and many of the solutions have been tried and tested in one form or another before. The iDataPlex solution takes advantage of IBM knowledge in the data center to provide a best-of-breed solution for the high-density data center.

6.2 Rack specifications

This section provides rack specifications, information about the rack footprint, and the requirements for clearance, transport, and placement.

An iDataPlex rack is a 100 1U-equivalent rack with an EIA standard node-mounting of 84U nodes in two vertical columns and 16U vertical bays for switches or PDUs (1U = 44.5 mm or 1.75 in.). The vertical bays are in two rows,

118 Implementing an IBM System x iDataPlex Solution

Page 139: Implementing an IBM System x iDataPlex Solution Implementing an IBM System x iDataPlex Solution David Watts Srihari Angaluri Martin Bachmaier Sean Donnellan Duncan Furniss Kevin Xu

each having four 2U sections above one another. The dimensions are listed in Table 6-1.

Table 6-1 iDataPlex rack dimensions

  Configuration                               Width     Depth    Height
  Base rack + casters (no doors or covers)    1200 mm   840 mm   2093 mm
  Base rack (no doors, covers, or casters)    1200 mm   600 mm   2093 mm
  Rack + front doors (only)                   1200 mm   840 mm   2093 mm
  Rack + rear door (only)                     1200 mm   844 mm   2093 mm
  Rack + side covers (only)                   1235 mm   840 mm   2093 mm
  Rack + front + rear + side covers           1235 mm   844 mm   2093 mm

To move the iDataPlex rack through a doorway, first review the minimum and recommended dimensions listed in Table 6-2.

Table 6-2 Doorway dimensions

  Dimension        Minimum             Recommended
  Doorway height   2110 mm (83 in.)    2133 mm (84 in.)
  Doorway width    851 mm (33.5 in.)   900 mm (35.5 in.)

Note: The fully populated rack fits under a 2110 mm (83 in.) doorway only if rolled in on the casters (instead of remaining on the shipping crate).
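As a quick planning aid, the following minimal Python sketch (our own illustration, not part of any iDataPlex tooling) applies the logic behind Table 6-1 and Table 6-2: the rack rolls through a doorway upright on its casters, so the smaller of its width and depth must clear the door opening, and its height must clear the door height.

# Minimal planning sketch (illustrative only): check whether an iDataPlex
# rack configuration from Table 6-1 fits through a given doorway.

RACK_CONFIGS_MM = {
    # configuration: (width, depth, height), values from Table 6-1
    "base + casters": (1200, 840, 2093),
    "base, no casters": (1200, 600, 2093),
    "front + rear + side covers": (1235, 844, 2093),
}

def fits_through_doorway(config, door_width_mm, door_height_mm):
    """The rack rolls through upright, so the smaller horizontal
    dimension must pass the door opening."""
    width, depth, height = RACK_CONFIGS_MM[config]
    return min(width, depth) <= door_width_mm and height <= door_height_mm

# Recommended doorway from Table 6-2: 900 mm wide, 2133 mm (84 in.) high.
print(fits_through_doorway("base + casters", 900, 2133))  # True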

FAQ:

- How tall is the rack, is it taller than enterprise racks, and will it fit through my doorway?

The iDataPlex rack requires a minimum door height of 83 in. (2110 mm), so it fits through a typical 7 ft doorway.

Doorways and elevators in some older buildings might not be tall enough for iDataPlex racks.

There are no height-reduction features, but iDataPlex nodes are supported in an Enterprise rack for these environments.


The weight of an iDataPlex rack depends on the configuration. A typical compute-node configuration comprising the following devices would weigh 1270 kg (2800 lb):

- Rack
- 42x 2U chassis
- 2 Ethernet switches
- 4 PDUs

An empty rack weighs 250 kg (551 lb).

6.3 Rear Door Heat eXchanger specifications

The iDataPlex Rear Door Heat eXchanger (part number 43V6048) has the following specifications:

- Dimensions:

  - Width: 1190 mm (46.9 in.)
  - Depth: 120 mm (4.7 in.)
  - Height: 1995 mm (78.5 in.)

The heat exchanger adds 127 mm (5 in.) to the iDataPlex rack depth if the casters are removed.

FAQ:

- My data center cannot support the weight of an iDataPlex rack. What should I do?

The iDataPlex form factor weighs less than an equivalent number of 1U servers. One empty iDataPlex rack weighs up to 35% less than two standard racks.

A maximally configured 1U server can weigh up to 35 lb; the equivalent 1U of an iDataPlex weighs up to 23 lb. Typical weights are lower, but iDataPlex is always lighter than the equivalent 1U servers.

Most data centers have a total floor-loading limitation. In these cases, iDataPlex is always better than the equivalent 1U servers.

Data centers also have point-loading limitations. An iDataPlex rack with 84 nodes has greater point loading than a single standard rack with 42 nodes. This is normally only a problem in very old data centers that have wood-filled floor tiles rather than concrete-filled tiles.
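To illustrate the floor-loading and point-loading arithmetic, here is a minimal sketch (ours) using the typical 1270 kg (2800 lb) configuration above, the 1200 mm x 600 mm footprint, and the four casters; the even static split of weight across the casters is a simplifying assumption for a level floor.

# Illustrative floor-loading sketch using figures from this chapter.
# Compare the results against your facility's distributed and point
# load ratings.

RACK_MASS_KG = 1270.0      # typical compute configuration (see above)
FOOTPRINT_M2 = 1.2 * 0.6   # 1200 mm x 600 mm footprint
CASTERS = 4

distributed_load = RACK_MASS_KG / FOOTPRINT_M2   # kg per square metre
point_load_per_caster = RACK_MASS_KG / CASTERS   # static load per caster

print(f"Distributed load: {distributed_load:.0f} kg/m^2")             # ~1764 kg/m^2
print(f"Static point load: {point_load_per_caster:.0f} kg per caster")  # ~318 kg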


- Doorway dimensions:

  - Minimum height: 2020 mm (79.5 in.)
  - Recommended height: 2057 mm (81 in.)

- Weight:

  - 82 kg (180 lb) empty
  - 91 kg (200 lb) full, with water

Depending on conditions, the Rear Door Heat eXchanger can extract the heat produced by equipment drawing 30 kW of power (while operating at 90 - 94% of the door's maximum efficiency).

The Rear Door Heat eXchanger does not use electricity.

6.3.1 Planning for the arrival

This section provides information about what to plan for when an iDataPlex rack arrives. For more information, see the Installation and Planning Guide provided with the iDataPlex rack:

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5075216

Unpacking and acclimation

The iDataPlex must be acclimated to the data center to avoid condensation. If the temperature during transport or storage was below the dew point of the data center, condensation can form on the equipment inside the rack. If the rack was shipped or stored in a plastic shipping bag, do not remove the bag until all signs of condensation are gone.

FAQ:

- What happens if I open the Rear Door Heat eXchanger for maintenance?

iDataPlex is completely front-access, even for power, so there is no reason to go to the rear of the rack for maintenance.

Server cooling is not affected if the heat exchanger goes offline, because the heat exchanger cools the data center, not the servers.

The heat exchanger has no boost fans or moving parts, so its failure rate is extremely small and it does not require regular maintenance.

In a reasonably sized data center, a single heat exchanger failure has a negligible effect, and the time available to respond is measured in minutes to hours, not seconds.


Instructions for rolling the rack off the shipping crate are listed in the Installation and Planning Guide provided with the rack. A wooden ramp is provided with the shipping crate so that the rack can be rolled on to the floor.

Weight and transport

A fully populated rack can weigh up to 1360 kg (2998 lb) in special custom configurations. The weight of the rack is distributed over four casters, and a margin of safety should be added when moving it across floors to account for shifts in the center of gravity when pushing or pulling the rack.

Placing the rack

The rack does not require a raised floor, because the power, network, and plumbing can be brought in from above. Figure 6-1 shows an example of the intake and outlet hoses of the Rear Door Heat eXchanger coming from ceiling trays.

Figure 6-1 Rack on solid floor with flexible plumbing from ceiling (with casters)

When plumbing is through a raised floor, the flexible hosing should be routed from the front of the rack to allow movement in the hosing. See Figure 6-2 on page 123.

Important: Ensuring that the rack's final location can bear its weight is often not enough. When moving the populated rack into a data center across old raised floors, be prudent and consider using sufficient sheet metal or plywood plates to distribute the load across the floor tiles while rolling the rack to its destination.


Figure 6-2 Example routing of raised floor Rear Door Heat eXchanger flexible plumbing

Removing the casters is recommended to more easily install the plumbing for the Rear Door Heat eXchanger and to clear the floor of obstructions. See Figure 6-3.

Figure 6-3 Caster removal

(The caster units are installed in manufacturing; at the installation site, lower the leveling feet, after which the caster units can be removed.)


When ordering the system, you may specify whether the cabling and plumbing should go to the ceiling or into a raised floor. Depending on the configuration, 1U of space is left blank at either the top or the bottom of the rack to allow the network cabling to pass through the air baffles (mud flaps).

The whole rack can easily be mounted to the floor with four bolts. In areas with seismic activity, bolting the rack directly to the concrete floor might be necessary.

The four bolts on the underside of the rack are 872 mm apart from side to side, and 538 mm apart from the front to the back of the rack.

Footprint

Because of its space-saving footprint of 1200 mm x 600 mm (width x depth), the iDataPlex fits on two European floor tiles (600 mm x 600 mm, or 23.6 in. x 23.6 in., each), as shown in Figure 6-4.

Figure 6-4 Floor tile layout (600 mm x 600 mm tiles)

Note: The rack actually has 1U extra in each column (84U + 2U) so that the spacer at the bottom or top does not reduce the 84U of usable space. The management appliance, when installed, uses the other 1U of spare space.

Note: When the rack is bolted to the floor and cabling is from above, it might be necessary to slide out the bottom nodes to access the bolts, because the shipping bracket that allows for cabling is at the top in this case. When the bolts are accessible, the bottom nodes can be reinserted and reconnected.



On US floor tiles (24 in. x 24 in., or 609.6 mm x 609.6 mm), spacers should be used between racks to ensure that the racks are evenly distributed across the tiles.

The racks can also be connected together and an option is available for this when ordering the rack.

Clearance

The space required to open the Rear Door Heat eXchanger fully is 2000 mm (78.75 in.), and the minimum distance from the back of the rack to the next rack or the wall is 480 mm (18.9 in.).

The front of the rack requires 1010 mm (39.75 in.) of clearance to allow access through the front rack doors. If placed face-to-face, iDataPlex racks should be spaced 1200 mm (47.25 in.) apart to allow for the doors.

Figure 6-5 on page 126 shows the various clearances.


Figure 6-5 Clearance for rack doors

Because the rack is serviceable from the front, and only the power connectors and the PDU LAN ports are connected at the back, opening the heat exchanger is rarely necessary. The door can be added or removed with just under 600 mm (2 ft) clearance. However, someone must be able to get behind the rack to connect the quick-connect hoses to the bottom of the door.



6.4 Cabling the rack

The rack is delivered with the intra-rack Ethernet cabling in place or, in special cases, this cabling is performed at the customer’s premises. The cabling in the rack is done with cables of different lengths to avoid excess cables obstructing access to the rack. Additional cabling is required to connect the iDataPlex switches to the customer’s infrastructure. The power cables are preinstalled and only the PDUs have to be connected to the customer’s power supply.

6.4.1 Nodes

Each node has two 10/100/1000 Ethernet ports. Also, expansion cards can be added to augment the node with, for instance, InfiniBand or Fibre Channel connectivity. The nodes also offer two USB 2.0 ports, a DB15 VGA port, and a DB9 serial port.

6.4.2 Management appliance

The optional management appliance has two Ethernet ports, a DB25 parallel port, a DB15 VGA port, a DB9 serial port, two USB 2.0 ports, and PS/2-style keyboard and mouse ports. The Ethernet ports should be connected to avoid later cabling changes. Other ports can be connected as required.

6.4.3 Switches

Ethernet switches are only installed in even positions in columns B and D of the rack. With an all-Ethernet cabling configuration, a handle bracket is used to take up all of the excess cable. The excess cable is looped and placed inside the fillers that are under the handle bracket. Hook and loop straps are used to retain the cables against the handle bracket in neat bundles.

One switch in column B can be connected to nodes in columns A and C. However, a minimum of a 1U cabling channel in column C is then reserved for a cabling bracket to route the cables; that space cannot be used by a node.

Note: The LAN connections for the managed PDU and the management appliance are at the rear of the rack and have to be routed through either a raised floor under the rack, or through the top of the rack to plug into the switches in the rack. There is a small opening in the crossbar at the top of the rack through which cables may be passed.


Generally, the switches are placed and grouped near the center of the rack to allow cables to be routed up and down.

InfiniBand and other special cabling requirements are normally performed onsite.

6.4.4 Power distribution units

PDUs are installed in the back to supply power to nodes, switches, and other equipment. They are only installed in odd positions in columns B and D. The handle brackets are used in the even numbered positions that are next to or below a PDU to manage all the power cords.

PDU sharing is possible across the columns: a PDU in column B can power nodes in columns A and C. However, this is only done when there is no switch next to that PDU, because that space is needed to stow the excess cables.

6.4.5 Cabling best practices

Given the non-traditional rack and node design of iDataPlex, the cabling practices used in iDataPlex solutions must differ from traditional cabling practices. IBM has worked closely with a professional physical-layer infrastructure company, Panduit Corp., to evaluate the best cabling strategies for iDataPlex solutions against the important cable-management criteria. These criteria include signal integrity, system reliability, minimal downtime from equipment failure, and allowance for proper airflow. Also consider the various practical aspects related to the data center design.

In this section, we highlight several important considerations from PANDUIT and from the IBM mechanical engineering team.

The goal of cable management in the iDataPlex system is to develop and lay out a routing strategy that prevents cabling congestion in any area of the rack. We recommend distributing the cabling infrastructure between the top, middle, and bottom of the rack while using two vertical channel spaces (at the middle and at the right side of the rack) for vertical cable runs.

The successful management of copper cables throughout the iDataPlex system rests on the considered deployment of Category 5e Ethernet cables and InfiniBand assemblies. Category 5e cables should be used for Ethernet links, because the larger diameter of Category 6 and 6A cables can introduce density challenges and potential airflow problems.

In all scenarios involving placement of switches and PDUs in the iDataPlex rack, observe the following rules. See Figure 6-6 on page 130 for examples.


- In the front:

  - Switches should be installed only in the even-numbered positions in columns B and D.

  - When there is no switch, a flat filler should be used in the even-numbered positions in columns B and D.

- In the rear, PDUs should be installed only in the odd-numbered positions in columns B and D.

- Use the central vertical channel space to route the Ethernet links from the servers to the switches; choosing patch cords in one-foot increments deploys the shortest workable cable lengths and thereby reduces slack. (A simple length-selection sketch follows this list.)
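The following sketch shows how one-foot-increment patch cord selection might work for a vertical run between node and switch positions. It is our own illustration; the 300 mm slack allowance for bend radius and connector dressing is an assumed value, not an IBM or PANDUIT figure.

import math

RACK_UNIT_MM = 44.45   # 1U per the EIA-310 standard
MM_PER_FOOT = 304.8

def patch_cord_feet(node_u, switch_u, slack_mm=300.0):
    """Pick the shortest one-foot-increment patch cord that spans the
    vertical run between a node and its switch, plus a slack allowance
    (the allowance is our assumption, not a vendor figure)."""
    run_mm = abs(node_u - switch_u) * RACK_UNIT_MM + slack_mm
    return max(1, math.ceil(run_mm / MM_PER_FOOT))

# Example: node in position 3, switch in position 20 of the same column:
# 17U x 44.45 mm + 300 mm slack = ~1056 mm, so a 4 ft cord.
print(patch_cord_feet(3, 20))  # 4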


Figure 6-6 iDataPlex Rack with Ethernet Switches

- In the iDataPlex system, LC Duplex uplinks from each Ethernet switch should be routed out of the rack to an aggregation layer of the network outside of the iDataPlex rack.

PANDUIT recommends using a structured cabling approach where all fiber connections route back to a fiber enclosure either inside or outside of the rack. This location is where installers would have the ability to quickly perform moves, adds, and changes to existing fiber infrastructure to better utilize the existing data center cabling plant. From this enclosure, you can then run



distribution cables overhead or under-floor through the cable routing systems (such as the overhead system shown in Figure 6-7), which then routes and protects this horizontal cable with little or no changes ever made to it. These pathways can also be used to route CAT5e and InfiniBand cables between iDataPlex racks within the data center environment.

Figure 6-7 iDataPlex overhead cable routing with pathway spill-outs for cable drops (PANDUIT)

- With an all-Ethernet cabling configuration, the handle bracket can be used to take up all excess cable. This cable can be looped and placed inside the fillers that are under the handle bracket. Hook and loop straps are used to retain the cables against the handle bracket in neat bundles.

6.4.6 iDataPlex overhead cable pathways

PANDUIT provides innovative solutions for network cable management in data centers. Although most of these solutions are generic in nature, they could be effectively employed in custom scenarios as well. IBM has tested the PANDUIT FiberRunner® Overhead Cable Pathways as a cable management solution for iDataPlex. In this section, we provide details of this solution from PANDUIT. Customers who are interested in using this solution in their data centers, for


better cable management for iDataPlex, can consult their IBM iDataPlex sales representative who will engage the PANDUIT engineering team to assist with the solution.

PANDUIT Overhead Cable Pathways can be either integrated directly into the iDataPlex solution or be independent of the racks. Either option can be easily implemented based on each customer’s individual requirements.

An example of an integrated pathway solution is shown in Figure 6-8. This example shows the top view floor layout, consisting of two IBM Enterprise racks and two IBM iDataPlex racks. The two Enterprise racks reside within the middle of this four rack lineup with one iDataPlex rack on either side of the Enterprise racks.

Figure 6-8 PANDUIT Overhead Pathways (top view)

Two product kits allow for fast deployment of the overhead pathways. These kits minimize the time necessary for the facilities group to install the pathways because the pathways bolt directly to the racks and are specifically sized to fit a single rack solution. These pre-kitted solutions eliminate traditional field fabrication of pathway system components, thereby decreasing the time to deploy the overall system. One kit is specific to the Enterprise rack and the other is specific to the iDataPlex rack solution.

Table 6-3 lists the PANDUIT part numbers for the two Overhead Pathway kits.

Table 6-3 PANDUIT Overhead Pathway kits

  PANDUIT part number   Description
  FRIBMKT01BL           IBM iDataPlex Rack Overhead Pathway Kit
  FRIBMKT02BL           IBM Enterprise Rack Overhead Pathway Kit

Figure 6-7 on page 131 shows the PANDUIT overhead cable pathway system deployed with an iDataPlex solution.


6.5 Data center power

The data center uses power to operate servers, provide cooling, provide lighting, provide uninterruptible power, and so on. This section discusses the power requirements of just the IT equipment (specifically the iDataPlex solution) as opposed to total data center power requirements.

When discussing power in the data center, values for Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE) are often used to indicate how efficient the data center is. One part of the equation is the IT equipment power. These values are calculated as shown in Figure 6-9:

  PUE  = Total Facility Power / IT Equipment Power

  DCiE = (IT Equipment Power / Total Facility Power) x 100% = (1 / PUE) x 100%

A data center with high power efficiency is one with a DCiE of more than 60%.

Figure 6-9 Power Usage Effectiveness and Data Center Infrastructure Efficiency1
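These two metrics translate directly into code. A minimal sketch, using the typical 45% IT-equipment share cited below as an example input:

def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw, it_equipment_kw):
    """Data Center Infrastructure Efficiency, as a percentage (100 / PUE)."""
    return it_equipment_kw / total_facility_kw * 100.0

# Example: IT equipment drawing 45% of a 1000 kW facility gives a PUE of
# about 2.22 and a DCiE of 45%, below the 60% "high efficiency" mark.
print(round(pue(1000.0, 450.0), 2))   # 2.22
print(dcie(1000.0, 450.0))            # 45.0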

Several service offerings in Chapter 8, “Services offerings” on page 181 can help you better understand how to improve the PUE and DCiE of your data center. Although the iDataPlex racks are part of the solution, note that IT equipment power normally accounts for only 45% of the total facility power.

IT equipment that uses less power requires less energy for cooling which frees up more energy for further IT equipment and can also free up floor space used by cooling equipment.

6.5.1 Aggregation and reduction of data center power feeds

As we mentioned in a previous chapter, one U.S.-based power company was charging its customers USD1500 - 2000 per month for each 8.6 kVA feed, in addition to the power consumption charges. The customer indicated that reducing the number of feeds per rack from four to three (an iDataPlex rack only requires three 8.6 kVA feeds versus four feeds with traditional enterprise racks)

1 For more information, see:http://www.thegreengrid.org/sitecore/content/Global/Content/white-papers/The-Green-Grid-Data-Center-Power-Efficiency-Metrics-PUE-and-DCiE.aspx


would save them about USD14 million per year. This is a good example of how cost saving can be achieved by repackaging.

Figure 6-10 Power feed cost example - Two enterprise racks versus one iDataPlex rack

The two standard racks in Figure 6-10 are each 42U high and filled with 1U servers, which makes 84 systems in total. The total power feed costs are approximately USD6000 - 8000 per month, and the time and manpower charged to deploy is for two racks. Because each rack consumes only 10 kVA, a total of 14.4 kVA of unused (stranded) power capacity across the two racks is still billed to the customer, which effectively raises the price per watt of power for those racks.

The iDataPlex rack contains 84 systems sharing power. The total power feed costs are USD3000 - 4000 less per month compared with the 1U servers. The time and manpower to deploy is typically charged by the rack, so iDataPlex is potentially half the charge. The three feeds leave approximately 5.8 kVA of unused (stranded) power capacity, which is less than half the 14.4 kVA for the standard racks.

Another important note is that the two 42U racks have a total of 84U and the iDataPlex rack has 100U total space, which provides a further advantage to the customer.
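The stranded-capacity arithmetic above can be recomputed as follows (our own restatement of the example's numbers):

# Stranded power = provisioned feed capacity minus actual consumption.
FEED_KVA = 8.6

def stranded_kva(feeds, consumed_kva):
    """Provisioned feed capacity minus actual consumption, in kVA."""
    return feeds * FEED_KVA - consumed_kva

# Two standard racks: four feeds in total, 10 kVA consumed per rack.
print(stranded_kva(feeds=4, consumed_kva=20.0))  # 14.4 kVA stranded

# One iDataPlex rack: three feeds, the same 84 servers at 20 kVA.
print(stranded_kva(feeds=3, consumed_kva=20.0))  # 5.8 kVA stranded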

(Figure 6-10 compares two standard racks, each with two 8.6 kVA feeds and a 10 kVA load at USD3,000 - 4,000 per rack per month, against one 20 kVA iDataPlex rack with three 8.6 kVA feeds at USD4,500 - 6,000 per month.)

6.6 Cooling

The traditional data center caters for a mix of machines of different sizes and power densities, and cooling is traditionally laid out to cool a wide range of rack densities in the same room. Many data centers are 10 - 15 years old, however, and their cooling facilities were designed to remove the heat generated by racks consuming 2 - 3 kW of power, not necessarily the levels generated by 20 - 30 kW per rack.

Using the traditional N+1 perimeter CRAC air-conditioning unit design means that the locations of the units and the venting have to be carefully planned to reduce the size and number of hot spots. A common method with the perimeter design is to run the entire computer room colder than it has to be. This practice can lead to the CRACs operating under the dew point, which dehumidifies the air. Humidifiers are then required to reintroduce humidity into the air coming from the CRACs. This approach wastes energy and requires additional power for equipment to re-condition the conditioned air. It can also lead to static build-up if the air in the data center is too dry.

In-row cooling brings active airflow components into the rows of racks and thereby allows hot spots to be dealt with on a row-by-row basis. A whole range of different solutions have been devised over the years and most are air-flow based. For instance, the scalable modular data center, described in 8.1.3, “Scalable modular data center” on page 184, uses in-row cooling.

The Rear Door Heat eXchanger brings cooling granularity down to the rack level. However, instead of air as the conduit, water is used to extract the heat, which is significantly more energy-efficient. In fact, depending on the water temperature and flow rate, the iDataPlex Rear Door Heat eXchanger can actually cool the

FAQ:

- My data center can only support up to 15 kW of power per rack. Do I have to only partially populate the iDataPlex rack?

No. The iDataPlex rack is equal to two standard racks. When you compare the iDataPlex 1U servers to the same number of typical 1U servers, iDataPlex draws up to 40% less power.

The innovative rack design also strands less power. A typical customer workload of 12 kW per rack would require two 30 A three-phase feeds per rack, or four feeds for 84 servers at 24 kW. With iDataPlex, the same number of servers can be powered with three 30 A three-phase feeds for 24 kW per rack. (A sketch of the feed arithmetic follows.)
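A minimal sketch of the feed arithmetic; the 208 V line-to-line voltage and 80% continuous-load derating are our assumptions for a North American installation, chosen because they reproduce the 8.6 kVA per-feed figure used throughout this chapter:

import math

def feed_kva(line_to_line_v, breaker_a, derating=0.8):
    """Apparent power of a three-phase feed: sqrt(3) x V_LL x I x derating.

    The 208 V and 80% derating values are our assumptions, not an
    iDataPlex specification."""
    return math.sqrt(3) * line_to_line_v * breaker_a * derating / 1000.0

print(round(feed_kva(208, 30), 1))  # ~8.6 kVA per 30 A three-phase feed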


egress air to below the ingress temperature, thereby actually cooling the air around it and reducing the need for traditional CRAC units.

As the Rear Door Heat eXchanger operates above the dew point in normal operation, it also cuts the need for re-humidification. With the Cool Blue™ Rear Door Heat eXchangers for standard 19-inch racks and the iDataPlex Rear Door Heat eXchanger, a large degree of flexibility is still available within the rack and standard equipment can be installed. Additionally, cooling at the source of the heat load actually lowers your incremental energy consumption and can help reduce the floor space required for air conditioning.

Figure 6-11 shows how typical data center air conditioning removes heat energy (the red arrows) from IT equipment (the orange box).

Figure 6-11 Example data center cooling loop—from server to the outside air

With a Rear Door Heat eXchanger, the CRAC unit (the dotted grey box in Figure 6-11) is exchanged for a cooling distribution unit (CDU), whose heat exchanger and pumps are depicted in Figure 6-12 on page 137. The CDU measures the room's humidity and temperature and regulates the temperature of the water in the closed loop flowing to the Rear Door Heat eXchanger, keeping it above the dew point.

(Figure 6-11 traces the cooling chain: CRAC and server fans, the building chilled water loop, the refrigeration chiller loop, the cooling tower water loop, and the cooling tower air loop, with electrically powered pumps and a compressor moving heat between the loops.)

Note: Three Rear Door Heat eXchangers plus one CDU is approximately equivalent to one conventional CRAC.


Figure 6-12 Closed loop to Rear Door Heat eXchanger and interface to building chilled water supply

A further addition to data center cooling solutions is the Data Center Stored Cooling Solution (also known as the Cool Battery), which is an IBM-patented

(Figure 6-12 shows the CDU's loop heat exchanger, redundant pumps, expansion tank, and pressure relief valve; distribution manifolds with shutoff and circuit-setter flow control valves; and flexible hoses of up to 15.24 meters (50 feet) with quick-connect couplings running to the rear door heat exchangers. A flow control valve on the building chilled water side and secondary-side temperature feedback hold the loop water temperature to specification.)

FAQ:

- I can only cool xx kW per rack, but iDataPlex is 37 kW?

The iDataPlex rack can draw up to 37 kW, but most configurations draw significantly less.

Data center cooling should be sized to the workload, not to the maximum rated power; otherwise you waste cooling energy that could be allocated to computing.

The iDataPlex total power is less than that of the equivalent number of rack units in two standard racks. The iDataPlex form factor is rotated 90 degrees relative to a standard rack, so it has the same frontal area and access to floor tiles as two standard racks.


thermal storage solution designed to provide energy cost savings of up to 30% of peak energy usage by shifting energy usage to off-peak hours.

The system works with phase change material that goes from liquid to a semi-solid state in order to store cooling capacity for use during peak hours. The Data Center Stored Cooling Solution has the added advantage of effectively using the environment to augment the data center cooling capacity by using night and day or summer and winter temperature differences when the system is installed outside the building. This approach adds cooling capacity and enables growth while also augmenting the data center’s power backup systems.

For more details, see:

http://www.ibm.com/press/us/en/pressrelease/21524.wss

6.7 Airflow

This section describes some different approaches to using the data center air to cool components.

In the traditional perimeter design2, the air in the data center is cooled and forced into the raised floor. Perforated tiles in front or under racks allow the air to be circulated through the equipment. Commonly, two rows of racks are placed so that the hot air from two adjacent rows is collected between the rows in a so called hot aisle. The aisles which provide the cold air for the rows of racks are referred to as cold aisles.

The hot and cold aisle design allows much better airflow management than randomly spaced and placed racks as practiced in some older data centers.

2 See ASHRAE publication Thermal Guidelines for Data Processing Environments for further background information: http://tc99.ashraetcs.org/

Tip: To maximize airflow efficiency, a clear air path has to be established under the raised floor and on the return back to the CRAC units. Also, perforated tiles should be placed in cold aisles only. No perforated tiles should ever be placed in a hot aisle.


6.7.1 Heat pattern analysis without the Rear Door Heat eXchanger

In this section, we use thermal imagery to show the heat patterns generated by 1U servers in traditional enterprise racks and the heat generated by iDataPlex racks. We look at dense, very dense, and extremely dense configurations as well as iDataPlex configurations with and without the Rear Door Heat eXchanger units installed.

We describe the heat patterns on configurations with the Rear Door Heat eXchanger in 6.7.5, “Heat pattern analysis with the Rear Door Heat eXchanger” on page 148.

We define the terms dense, very dense, and extremely dense in Table 6-4, where the tile pitch is the number of 600 mm x 600 mm floor tiles from the left edge of one cold aisle to the left edge of the next cold aisle. For example, Figure 6-13 on page 141 shows an 8-tile pitch.

Table 6-4 Density definitions

  Data center density   Tile pitch
  Dense                 8-tile pitch
  Very dense            7-tile pitch
  Extremely dense       6-tile pitch (or less)

FAQ:

- iDataPlex has non-redundant cooling, so what happens when a fan fails?

iDataPlex does not have redundant cooling. However, depending on the workload and environment, a fan failure has little impact and the server keeps running.

The iDataPlex fan speed control algorithm accounts for fan failures in the most efficient way possible: it logs a fault and continues operating as normal, monitoring component temperatures and increasing fan speed only when necessary. (An illustrative sketch of this behavior follows.)

If workload demand is high, the system might throttle, but it continues operating until a repair action can be scheduled. In some cases after a fan failure, the system does not have sufficient cooling and performs a soft shutdown to prevent damage to components.
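The following sketch illustrates the failure behavior described in this FAQ. It is our own simplification with invented thresholds, not the actual iDataPlex firmware algorithm:

WARN_C = 75.0       # raise fan speed above this component temperature (assumed)
CRITICAL_C = 95.0   # request a soft shutdown above this temperature (assumed)

def control_step(temps_c, fans_ok, speed_pct):
    """One pass of a simplified fan control loop: log fan faults, step the
    speed up only when the hottest component demands it, and fall back to
    a soft shutdown when cooling is insufficient."""
    if not all(fans_ok):
        print("fault logged: fan failure; continuing operation")
    hottest = max(temps_c)
    if hottest >= CRITICAL_C:
        return speed_pct, "soft-shutdown"
    if hottest >= WARN_C:
        return min(100.0, speed_pct + 10.0), "run"
    return speed_pct, "run"

# One fan failed, hottest component at 81.5 C: speed steps from 50% to 60%.
print(control_step(temps_c=[68.0, 81.5], fans_ok=[True, False], speed_pct=50.0))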


Data center with 1U servers in enterprise racks: 8-tile pitch

Figure 6-13 on page 141 is a thermal map of a dense data center with 4500 1U servers, with each rack operating at 18.9 kW. The airflow is 1680 cfm (cubic feet per minute). Total power consumption is just over 2000 kW. The hot aisle is 1 m (40 in.) wide and the cold aisle is 1.8 m (72 in.) wide, equating to an 8-tile pitch.

The result is that 100% of the racks are operating at or below 25°C (77°F).
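As a rough cross-check (our own arithmetic, not data taken from the figures), the exhaust temperature rise implied by these numbers follows from the air-side heat balance Q = rho x c_p x V x dT, assuming the stated cfm values are per rack and using standard air properties:

# Rough sanity check (ours): air temperature rise across a rack.
RHO_AIR = 1.2       # kg/m^3, near sea level (assumed)
CP_AIR = 1005.0     # J/(kg K), specific heat of air
M3S_PER_CFM = 0.000471947

def exhaust_rise_c(rack_kw, airflow_cfm):
    """Air temperature rise across a rack, in degrees Celsius."""
    flow_m3s = airflow_cfm * M3S_PER_CFM
    return rack_kw * 1000.0 / (RHO_AIR * CP_AIR * flow_m3s)

# 18.9 kW enterprise rack at 1680 cfm vs 33.6 kW iDataPlex rack at 2352 cfm
# (the iDataPlex figures appear later in this section):
print(round(exhaust_rise_c(18.9, 1680), 1))  # ~19.8 C rise
print(round(exhaust_rise_c(33.6, 2352), 1))  # ~25.1 C rise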

FAQ:

- Do I get efficient cooling without the Rear Door Heat eXchanger?

Yes. Server cooling does not depend on the Rear Door Heat eXchanger.

Server cooling comes from the low-impedance server and rack design, the shared fans, and the fan speed algorithm, all of which work the same with or without the Rear Door Heat eXchanger.


Figure 6-13 Thermal map of a dense data center with hot/cold aisle design (temperatures are in °F)



Data center with 1U servers in enterprise racks: 7-tile pitch

Figure 6-14 shows 9400 1U servers (about double the number in Figure 6-13 on page 141) in a very dense data center, with each rack operating at 18.9 kW. The airflow is 1680 cfm. Total power consumption is 4211 kW. The hot aisle is 1 m (40 in.) wide and the cold aisle is 1.21 m (48 in.) wide, equating to a 7-tile pitch.

The result is that only 12% of racks are now operating at or below 25°C (77°F).

Figure 6-14 Thermal map of a very dense data center with hot/cold aisle design (temperatures in °F)

The thermal map of the very dense data center in Figure 6-14 shows that the traditional perimeter design is unable to adequately cool the number of servers installed. There is too much power and not enough cooling, and this configuration is limited in how far it can scale.

Standard 1U configurations are limited by:

- 1U fan and power efficiencies
- CRAC cooling and airflow capacity


Data center with iDataPlex racks: 8-tile pitch

The thermal map in Figure 6-15 shows approximately 5000 iDataPlex nodes in a dense data center, with each rack operating at 33.6 kW. In this configuration, the iDataPlex racks do not have Rear Door Heat eXchanger units installed.

The airflow is 2352 cfm. Total power consumption is 2016 kW. The hot aisle is 1.82 m (72 in.) wide and the cold aisle is also 1.82 m (72 in.) wide, equating to a dense 8-tile pitch.

The result is that 100% of the racks are operating at or below 25°C (77°F).

Figure 6-15 iDataPlex servers in a dense data center with traditional hot/cold aisle design (°F)


Data center with iDataPlex racks: 7-tile pitch

Figure 6-16 shows 9,408 iDataPlex servers in a very dense data center, with each rack operating at 33.6 kW. The airflow is 2352 cfm. Total power consumption is 3763 kW. The hot aisle is 1.21 m (48 in.) wide and the cold aisle is 1.82 m (72 in.) wide, equating to a 7-tile pitch. In this configuration, the iDataPlex racks do not have Rear Door Heat eXchanger units installed.

The result is that 45% of the racks are operating at or below 25°C (77°F).

Figure 6-16 iDataPlex servers in a very dense 5000 sq. foot data center with traditional hot/cold aisle design (°F).

The very dense (7-tile pitch) iDataPlex data center is somewhat limited but is still able to adequately cool almost half of the servers (45%), which shows that iDataPlex scales out better than standard 1U servers, of which only 12% could be cooled adequately.

The 45% is most likely still unacceptable to most customers because the remaining 55% are potentially running above acceptable temperature ranges for


the devices, which indicates that there are limits to what traditional perimeter cooling designs can achieve.

The previous iDataPlex configurations are limited by:

- CRAC cooling and airflow capacity
- Room cooling

Heat removal is more efficient and more effective with the use of Rear Door Heat eXchanger units installed on the iDataPlex racks. We discuss scenarios with the Rear Door Heat eXchanger units installed in 6.7.5, “Heat pattern analysis with the Rear Door Heat eXchanger” on page 148.

6.7.2 IBM Systems Director Active Energy Manager

In the previous scenarios, where heat management has the potential of causing problems for the systems in the data center, it is possible to use IBM Systems Director Active Energy Manager™ to implement power capping on certain nodes to keep the entire data center operational. Even though power capping reduces the number of available CPU cycles, it can be an alternate way of reducing the heat load in the hot spots of the data center.

While power capping might not be the tool of choice in a high-performance cluster or a hosting environment, it can be useful in emergency situations, for batch processing, or for scheduled workloads that are not time-critical. However, power capping cannot overcome all cooling problems and therefore does not replace a solid data center design.

For more information about IBM Active Energy Manager, see Going Green with IBM Systems Director Active Energy Manager, REDP-4361:

http://www.redbooks.ibm.com/abstracts/redp4361.html

For more information about IBM Director, see Implementing IBM Systems Director 6.1, SG24-7694:

http://www.redbooks.ibm.com/abstracts/sg247694.html

6.7.3 Hot air ducts

Ducts can be used to prevent recirculation of air between the hot and cold aisles. This is a logical progression from the hot-cold aisle configuration and allows the CRACs to operate at higher efficiencies by increasing the temperature of the return air. Essentially, this optimized form of the hot and cold aisle design allows a much more dense data center. Similar concepts use covered aisles.


One advantage that the ducting can have is that the use of a raised floor can be avoided in certain circumstances. Unless implemented in a new data center, this type of design can mean a complete redesign of existing data centers and is therefore limited in its use.

Figure 6-17 shows an example of hot-air ducting with iDataPlex racks.

Figure 6-17 Data center scenario with vertical ducts (overhead air supply and return, 12 in. hot plenum, 22 in. vertical duct, 36 in. cold aisle)

Figure 6-18 on page 147 shows a thermal representation of the separation provided by the hot air ducts. The iDataPlex provides a further advantage over standard enterprise racks in this scenario because accessing the rear of an iDataPlex rack is rarely necessary; nearly all cabling is done from the front and the tile pitch can therefore be further reduced.


Figure 6-18 Example Computational Fluid Dynamic representation of hot aisle ducts

6.7.4 iDataPlex and standard 1U server rack placement

The volume of air in the cold aisle can be reduced significantly when Rear Door Heat eXchanger units are installed on iDataPlex racks, because the racks no longer heat the room. The thermal buffering, or airflow separation, of the cold aisle air volume is no longer required. Put another way, the recirculation, or mixing, of hot and cold air no longer happens on the same scale as in traditional designs, so the cold aisle temperature is not raised by recirculating air from the hot aisles, which are no longer as hot.

The placement of mixed rows with iDataPlex racks requires special consideration when Rear Door Heat eXchanger units are used, because the temperature of mixed (iDataPlex and enterprise rack) hot aisles is lower, which reduces the efficiency of the CRAC units servicing those rows. Although the Rear Door Heat eXchanger reduces the load on the CRAC units, it might still be sensible to separate the rows with Rear Door Heat eXchangers from those without, to allow the CRACs to operate more efficiently.

Figure 6-19 on page 148 shows the different cold aisle areas recommended for the comparable rack configurations. The iDataPlex requires less cold aisle volume than a row of enterprise racks with 1U servers, while the iDataPlex with Rear Door Heat eXchanger requires the least cold aisle volume.


Figure 6-19 Narrow-depth rack improves data center density with no change to data center layout (sized by standard floor tiles at 400 CFM per tile: X sq. ft of air cooling for standard racks of 1U servers versus 0.79X sq. ft of air cooling, or under 0.42X sq. ft of liquid cooling, for iDataPlex; 2.4x server density, or 5x compute density with quad-core processors)

Note that the iDataPlex allows two times the nodes in a similar footprint, with over two times the EIA spaces3.

6.7.5 Heat pattern analysis with the Rear Door Heat eXchanger

In this section, we use thermal imagery to show the heat patterns generated by iDataPlex racks with a Rear Door Heat eXchanger. We look at extremely dense (6-tile pitch) configurations and configurations that have not been previously viable.

Extremely dense data center design with Rear Door Heat eXchangers

The thermal map shown in Figure 6-20 on page 149 is a configuration of more than 10,000 servers in iDataPlex racks in an extremely dense data center. Each rack operates at 33.6 kW, and the airflow is 2352 cfm. Total power consumption is 4301 kW. The hot aisle is 91 cm (36 in.) wide and the cold aisle is 1.21 m (48 in.) wide, equating to a 6-tile pitch.


3 The 19-inch pockets, as described in Electronic Industries Alliance (EIA) Standard EIA/ECA-310


The result of using the Rear Door Heat eXchanger units is that 100% of the racks are operating at or below 25°C (77°F). The exit air temperatures are uniform across the iDataPlex rack height.

In this scenario, the Rear Door Heat eXchanger units are removing 90% of the exhaust heat while CRACs are extracting the remaining heat.
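A quick recomputation of the residual room load from the numbers above (our own arithmetic):

def crac_load_kw(total_it_kw, rdhx_removal_fraction):
    """Heat left for the perimeter CRACs after the rear doors take their share."""
    return total_it_kw * (1.0 - rdhx_removal_fraction)

# 4301 kW of IT load with the doors removing 90% of the exhaust heat:
print(round(crac_load_kw(4301.0, 0.90)))  # ~430 kW remains for the CRACs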

Figure 6-20 iDataPlex servers in an extremely dense data center with traditional hot-cold aisle design and Rear Door Heat eXchanger units (°F)


Circular airflow with Rear Door Heat eXchanger

Figure 6-21 shows a circular airflow in the data center with intentional recirculation through Rear Door Heat eXchanger units, which remove 85% of the exhaust heat. This level is achieved by combining the hot and cold aisles so that the air exhaust from one iDataPlex rack with a Rear Door Heat eXchanger enters the inlet of the adjacent rack, thereby halving the tile pitch and allowing for an efficient mix of CRACs and Rear Door Heat eXchangers. In this case, approximately 7500 nodes are installed in the data center.

Figure 6-21 Possible circular data center airflow with iDataPlex and Rear Door Heat eXchanger


Single cold aisle with Rear Door Heat eXchangers and perimeter CRACs

Figure 6-22 shows an example 3-tile pitch design that also breaks with the traditional hot and cold aisle paradigm. The Rear Door Heat eXchanger units remove about 85% of the exhaust heat while the rest is handled by a small number of traditional CRACs.

The central cold aisle is fed by the CRACs and the exhaust heat is channeled back towards the CRACs. The complementary setup allows for CRAC or Rear Door Heat eXchanger failures while providing a very dense data center design. This scenario encompasses approximately 9000 nodes as shown in Figure 6-22.

Figure 6-22 iDataPlex with Rear Door Heat eXchanger showing air exhaust from one rack going into the inlet of the adjacent rack


6.7.6 Rear Door Heat eXchanger airflow characteristics

The Rear Door Heat eXchanger behaves differently depending on the volume of air moved through it per unit of time. See Figure 6-23.

Figure 6-23 Airflow characteristics of the Rear Door Heat eXchanger (percent heat removal versus rack airflow of 2500 - 4000 cfm at water flow rates of 10.3, 12.4, and 14.8 gpm; 18°C water, 32 kW rack power, 24°C air inlet to rack)

The effectiveness of the Rear Door Heat eXchanger is actually higher when less air is moved through it. This lends itself well to the iDataPlex because the flow rate of the air through the rack is lower than in the deeper standard racks.

6.8 Water cooling and plumbing

Plumbing is required to use the Rear Door Heat eXchanger. The Rear Door Heat eXchanger is 75 - 95% more efficient than standard air cooling because it moves the thermal transfer from the CRAC unit to the back of the rack. It enables 30 kW of cooling capacity per rack, works with chilled water or evaporative liquid for lowest-risk deployment, and uses industry-standard quick-fit connectors.


The Rear Door Heat eXchanger has the following characteristics:

- Can provide 100% heat removal

- Allows passive heat extraction, with no compressors or Peltier elements

- Provides a low air-side impedance for the rack, which means less resistance to the airflow and therefore less fan power required

- Has a low water-side impedance, which means water flows more freely and requires less pumping power

- Has optimized performance through uniform node airflow, which means less turbulence at the back of the rack

Clients request water cooling for a number of reasons:

- Increased heat output from server CPUs

- Increased heat loads in data centers

- Demands for greater cooling system efficiency

In general, the benefits are that the server density can be increased without increasing cooling requirements and that temperature hot spots are more effectively controlled.

The iDataPlex Rear Door Heat eXchanger uses industry standard quick-connect hose fittings, as shown in Figure 6-24.

Figure 6-24 Quick-connect fittings at the bottom of the Rear Door Heat eXchanger

Note: The Rear Door Heat eXchanger requires professional movers to install, because it weighs approximately 82 kg (180 lb).


6.8.1 Technical specifications

The following list indicates technical specifications of the Rear Door Heat eXchanger:

- Pressure:

  - Normal operation: 137 kPa (20 psi)
  - Maximum: 689 kPa (100 psi)

- Volume: approximately 11 liters (3 gallons)

- Temperature: 18°C ± 1°C (64.4°F ± 1.8°F); see also 6.8.2, “Absorption rates” on page 154

- Required water flow rate (measured at the supply entrance):

  - Minimum: 22.7 liters (6 gallons) per minute
  - Maximum: 56.8 liters (15 gallons) per minute

See the Rear Door Heat eXchanger Installation and Maintenance Guide for more information:

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5075220
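For planning purposes, the water-side heat balance Q = m_dot x c_p x dT links these flow rates to cooling capacity. The following is a minimal sketch (ours); the 5°C water temperature rise is an assumed example value, not a door specification:

# Water-side heat balance sketch. The 6 - 15 gpm flow range comes from
# the specifications above; the temperature rise is an assumed example.
CP_WATER = 4186.0        # J/(kg K)
RHO_WATER = 1000.0       # kg/m^3
M3S_PER_GPM = 6.309e-5   # US gallons per minute to m^3/s

def water_heat_kw(flow_gpm, delta_t_c):
    """Heat carried away by the door's water loop, in kW."""
    m_dot = flow_gpm * M3S_PER_GPM * RHO_WATER   # mass flow in kg/s
    return m_dot * CP_WATER * delta_t_c / 1000.0

# At the minimum and maximum specified flow rates, with a 5 C rise:
print(round(water_heat_kw(6.0, 5.0), 1))   # ~7.9 kW
print(round(water_heat_kw(15.0, 5.0), 1))  # ~19.8 kW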

6.8.2 Absorption rates

The Rear Door Heat eXchanger can be fine-tuned to account for different rack loads and facility dependencies. The graph in Figure 6-25 on page 155 shows the absorption rates for a 32 kW rack.


Figure 6-25 Absorption rates for different water temperatures and volumes (percent heat removal versus water flow rate of 8 - 14 gpm for water temperatures of 12, 14, 16, 18, and 20°C; 24°C rack inlet air, 32 kW load, minimum fan speed)

Water temperatures under 18°C normally require a cooling distribution unit (CDU), which separates the closed water loop in the heat exchanger from the building cold-water supply and controls the water temperature of the closed loop. The CDU measures the temperature and humidity of the room and adjusts the water temperature to avoid cooling below the dew point. This approach is consistent with the ASHRAE Class 1 environmental specification, which requires a maximum dew point of 17°C (62.6°F). See also “ASHRAE” on page 168.
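For reference, the dew point that the CDU must stay above can be estimated from room temperature and relative humidity. The sketch below uses the Magnus approximation, which is our own choice of formula; the documentation does not state how the CDU computes it:

import math

# Magnus approximation coefficients for water, valid roughly 0 - 60 C.
MAGNUS_A, MAGNUS_B = 17.27, 237.7

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point for a given room temperature and humidity."""
    gamma = math.log(rel_humidity_pct / 100.0) + MAGNUS_A * temp_c / (MAGNUS_B + temp_c)
    return MAGNUS_B * gamma / (MAGNUS_A - gamma)

# A 24 C room at 50% relative humidity: the loop setpoint stays above this.
print(round(dew_point_c(24.0, 50.0), 1))  # ~12.9 C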

6.9 Designing a heterogeneous data center

The unique design, form factor, and rack layout of iDataPlex do not necessarily mandate a data center design based on iDataPlex racks and nodes alone. You can easily design a heterogeneous data center by mixing traditional servers and other equipment in standard enterprise racks with iDataPlex racks. A primary benefit of iDataPlex is that such a heterogeneous design does not incur additional overhead or introduce inefficiencies in power, cooling, or other areas.


The following figures show example heterogeneous floor layouts using iDataPlex and traditional racks in a standard data center floor space of approximately 12 m x 18 m (40 ft x 60 ft):

- Figure 6-26
- Figure 6-27 on page 157
- Figure 6-28 on page 157
- Figure 6-29 on page 158

Each layout shows the number of iDataPlex racks, traditional racks, or both, as well as the amount of usable horizontal rack-unit space available (indicated in the respective figure caption).

Figure 6-26 91 iDataPlex racks (7826 horizontal rack units)


Figure 6-27 104 standard racks (4368 horizontal rack units)

Figure 6-28 66 iDataPlex racks and 24 standard racks (6684 horizontal rack units)


Figure 6-29 19 iDataPlex racks and 66 standard racks (4406 horizontal rack units)

6.10 iDataPlex servers in a standard enterprise rack

A newly supported concept is the placement of iDataPlex chassis and servers in a standard enterprise rack. Based on early experiences with customers that prefer not to use the iDataPlex rack but instead to install the iDataPlex Flex chassis in a standard enterprise rack (EIA-310-D), IBM now supports this approach with certain caveats, as this section discusses.


Figure 6-30 iDataPlex Chassis in a Standard Enterprise Rack (top view)

Figure 6-31 on page 160 shows an example with iDataPlex chassis placed in a standard enterprise rack, with 40 iDataPlex nodes, two 48-port Ethernet switches, and a KVM console. Customers wanting to use this type of approach should be aware of the following aspects:

� The figure shows the USB Conversion Option (UCO) bundled, with the KVM dongle hanging in front of the disk drives.

� This solution is supported by iDataPlex development, but the customer should understand that disk drive access requires moving the cables.

� Cable management brackets route the cables in the front of the rack (the cold aisle).

� This solution requires removal of the front doors.

The top view in Figure 6-30 calls out the cable bundle and the front door, and notes the EIA-310-D opening width of 483.4 mm.


Figure 6-31 iDataPlex chassis cabling in a Standard Enterprise Rack

Note: Based on the specific options chosen, such as high-speed networking, storage, and others, there might be more considerations when placing iDataPlex nodes in the standard enterprise racks. The particular customer configuration and preferences should be carefully analyzed before providing an approved design based on the standard enterprise racks.


If the customer plans to use the IBM standard enterprise racks (option part number 46M2943), consider the following issues:

� Because of cables, a front door might not be able to be installed.

� Custom packaging might be required for shipping (because there is no front door).

� Custom cable-routing brackets must be used; confirm that they work and are sufficient for the configuration.

� Cables and brackets can intrude into the cold aisle (front of the rack).

� Determine whether cable egress from the rack is negotiable.

� For common egress, you might lose some rack density to route cables to the rear of the rack.

� If overhead cabling is used, determine whether cables exit out the front top of the rack.

� Determine whether cables can be routed under the rack.

� Determine whether a hole for the cable exit can be located in the front of the rack in the cold aisle.

6.11 The Portable Modular Data Center

The Portable Modular Data Center (PMDC) is an offering aimed at providing quick data center capacity to clients that might not otherwise have the infrastructure to house the equipment. The advantage lies not so much in its mobility as in the stand-alone capsule, which allows it to be placed outdoors, even next to existing buildings. The PMDC can be used to augment existing data centers, or it can provide a data center for a customer without a building having to be put in place. The PMDC uses standard shipping containers of approximately 6 m and 12 m (20 ft and 40 ft).

Figure 6-32 on page 162 shows a standard 40-ft container with its outside dimensions.


Figure 6-32 40-ft Portable Modular Data Center Container with dimensions

The PMDC is also available with iDataPlex as an option. Three example configurations are:

� A 12 m (40 ft) container with nine iDataPlex racks in a central row
� A 12 m (40 ft) container with 18 iDataPlex racks, shown in Figure 6-33 on page 163
� A double-wide 12 m (40 ft) container with 27 iDataPlex racks

The most densely packed PMDC is the 12 m (40 ft) container with 18 iDataPlex racks shown in Figure 6-33 on page 163. In this configuration, each rack can hold 84 compute nodes, each with two quad-core CPUs, for up to 12,096 processor cores in one container (18 racks x 84 nodes x 8 cores per node).

The outside dimensions called out in Figure 6-32 are 12,100 mm in length, 2450 mm in width, and 2878 mm in height.


Figure 6-33 An iDataPlex Portable Modular Data Center Container with 18 systems

The iDataPlex solution is ideally suited to the PMDC because the Rear Door Heat eXchanger allows for much denser packaging than standard servers and racks allow. Node density is also aided by the nodes themselves: because they are less deep than enterprise servers, two rows of iDataPlex racks fit in one container. In this configuration, the Rear Door Heat eXchanger is aided by additional fans on the container ceiling, which provide sufficient air movement (active heat extraction) in all circumstances. See Figure 6-34.

Figure 6-34 Additional ceiling fans supporting airflow in 18-rack PMDC (showing dimensions in inches)

The dimension callouts in Figure 6-34 show less than 10.8 in. of clearance between the racks and the container walls, together with dimensions of 21.7 in. and 110.2 in.


In the PMDC, the racks are placed approximately 25 cm (10 in.) from the container walls, which is less than the normally recommended minimum distance. The PMDC is a special design that was tested in the thermal lab to ensure that the airflow can be guaranteed in this configuration.

The PMDC offering also has container configurations that provide supporting functions such as uninterruptible power supply and cooling.

6.12 Possible problems in older data centers

This section describes several problems that can occur in older data centers. In general, factors to consider when placing racks in the data center include:

� Cable lengths

Be cautious about the maximum lengths for the different types of cables. For example, a standard twisted-pair Ethernet channel is limited to 100 m (328 ft), of which no more than 90 m (295 ft) should be fixed cabling between one switch and the next.

� Radio interference

Proximity to airport radar installations or commercial radio stations might cause interference in the rack or between racks. In general, strong radio frequency (RF) fields can be a problem for most computing equipment.

� Vibration

Railways, underground trains, and heavy vehicles in the vicinity can damage server equipment, particularly hard drives.

� Power circuits

Power should be clean and stable. Sharing power with large motors and other industrial equipment is not a good idea and can lead to data loss and damaged equipment.

� Cooling

Adequate cooling must be provided at all times.

� Dust or contaminants

Metal shavings and conductive contaminants should be avoided in the computer room air because they build up on the equipment and can cause damage.


6.12.1 Power

A stable power source that is free of interference is a requirement for the high density data center. In most cases three-phase power can be used in the data center itself and the load on the three phases should be balanced.

Heavy industrial equipment, welding equipment, and other large electrical loads can cause voltage sags and can also unbalance the phases of the three-phase supply. Three-phase motors can actually start to vibrate or run backwards if there is enough imbalance in a three-phase system. Certain water heaters and motors use three-phase power and can cause interference on power feeds.

Ensure that the power feed itself is sized for the load and that the return path (neutral) can cope with the load. Ground connections in general are important, and improper grounding is a common cause of problems.

Various three-phase power problems have been linked to overhead rolling doors in loading bays, elevator motors, air conditioning compressors, and similar equipment.

Normally a five-wire three-phase power distribution system is recommended for use in data centers. IBM can help clients design a suitable power distribution system for the data center.

Earthing system

In this section, we compare two of the IEC 60364 earthing systems:

� TN-S (Terra Neutral-Separate: neutral and protective earth on separate conductors)
� TN-C (Terra Neutral-Combined, also known as Protective Earth and Neutral, PEN)

In a TN-C (also known as PEN) earthing system, the earth-ground and neutral lines are combined. This can lead to significant currents on the shielding of data cables between racks and can also lead to the building itself carrying current (which is undesirable).

A TN-S system provides a low-noise earth connection, which is recommended for IT equipment with data connections.

Figure 6-35 on page 166 shows the potential problem with TN-C systems that does not occur with TN-S earthing systems. The figure shows how the absence of a separate protective earth line leads to differences in potential between two systems that are connected to the same power circuit. The difference in potential leads to current flowing across the shielding of the orange data cable. This issue can damage equipment and cause interference on data lines.


In certain cases, the lengths of cables that can be used in a data center are shortened dramatically. The farther apart two machines on the same circuit are, the higher the difference in potential is likely to be, and therefore the more interference is produced. This does not happen in a TN-S system, where the PE and N lines are separated.

Another solution to this problem is to use fiber optic cables between the racks, as is done in many data centers. Fiber optic cable does not conduct electricity and therefore can help solve this problem.

Figure 6-35 TN-C earthing system (PEN line)

Apart from the use of TN-C, and even with TN-S, further problems can arise if the protective earth framework in the data center is not properly implemented or designed. These problems become more likely the larger the area a data center encompasses, and they can be mitigated or remedied by providing a closely meshed grounding system.


6.12.2 Cooling

Cooling design considerations include:

� Placing perforated floor tiles in hot aisles lowers the hot-aisle (and therefore CRAC return) air temperature, which reduces the efficiency of the CRAC units.

� If the volume of cold air provided to a rack is too low, warm air from the back of the rack might recirculate to the front, either over the top of the rack or around the sides if the rack stands on its own or at the end of a row. This recirculation of warm air through the racks is undesirable.

� Placing perforated floor tiles too close to CRAC units has the unwanted effect of drawing air down through the tile from above: the fast-moving air under the floor near the CRAC has lower static pressure (Bernoulli’s principle, the same effect a Venturi meter exploits). A rough calculation follows this list.
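To see why, consider the static pressure deficit implied by Bernoulli’s principle: where the underfloor air moves quickly, static pressure falls by the dynamic pressure 0.5 x rho x v^2. The following minimal sketch assumes incompressible flow, standard air density, and illustrative velocities.

# Static pressure deficit near a CRAC discharge (Bernoulli's principle).

RHO_AIR = 1.2   # air density in kg/m^3 at roughly room conditions

def dynamic_pressure_pa(velocity_ms):
    """Dynamic pressure 0.5 * rho * v^2, in pascals."""
    return 0.5 * RHO_AIR * velocity_ms ** 2

# Underfloor air might move at ~10 m/s near the CRAC and ~2 m/s far away.
# The resulting ~58 Pa deficit can be enough to pull room air down
# through a nearby perforated tile instead of pushing cold air up.
print(dynamic_pressure_pa(10) - dynamic_pressure_pa(2))   # 57.6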

6.12.3 Static electricity

Static electricity can cause damage to electronic devices. To avoid the build-up of static electricity, a number of factors have to be taken into account:

� Floor coverings

Floor coverings must be antistatic, with low static-propensity ratings.

� Floor resistance

The resistance between the building structure and the floor should not exceed 2 x 10^10 ohms. The resistance between two floor points that are 1 m (3 ft) apart should not be below 150 kilohms.

� Raised floor grounding/earthing

Connections to earth should be provided at multiple points in the room and the raised floor, if fitted, should have its entire structure connected to ground in multiple places.

Two types of raised floor are common:

– Stringerless floors
– Floors with stringers

Stringers are used to form the grid system. In stringered systems there is one stringer under each edge of each floor panel. Stringered floors are recommended for grounding and provide a Signal Reference Grid when bolted together. Stringerless floors have special needs above and beyond those of stringered floors and require special attention. Also, all metal conduits should be bonded to one another and to the floor.

� Air humidity

Relative humidity should be kept at 35 - 60% to prevent objects from storing electrical charge.


6.12.4 RF interference

Radio frequency (RF) interference can potentially be a problem for computer equipment. Close proximity to airports, radio stations, television stations, mobile phone masts, and strong RF signals should be avoided or mitigated. If such a situation is suspected, then IBM can be engaged to assess the situation and recommend mitigation.

6.13 Links to data center design information

This section lists Web sites with relevant data center design information.

ASHRAE

ASHRAE is the American Society of Heating, Refrigerating and Air-Conditioning Engineers. ASHRAE Technical Committee 9.9 (TC 9.9) concentrates on mission-critical facilities, technology spaces, and electronic equipment.

ASHRAE has published a series of books about data center design:

� Thermal Guidelines for Data Processing Environments
� Datacom Equipment Power Trends and Cooling Applications
� Design Considerations for Datacom Equipment Centers
� Liquid Cooling Design Considerations for Data and Communications Equipment Centers
� Structural and Vibration Guidelines for Datacom Equipment Centers
� Best Practices for Datacom Facility Energy Efficiency
� High Density Data Center: Case Studies and Other Considerations

For more information about ASHRAE and the TC 9.9, see:

http://tc99.ashraetcs.org/

IEC 60364 Electrical installations for buildings

The International Electrotechnical Commission (IEC) is a non-profit, non-governmental international standards organization that prepares and publishes international standards for all electrical, electronic, and related technologies. For more information about the IEC, see:

http://www.iec.ch/


2008 National Electrical Code (NFPA 70)

This advisory code is published by the National Fire Protection Association (NFPA). It was approved as an American National Standard on August 15, 2007. An online version of the NEC is available from:

http://www.nfpa.org/freecodes/free_access_agreement.asp?id=7008SB


Chapter 7. Managing iDataPlex

This chapter discusses available options from IBM for managing an iDataPlex environment. It covers the Avocent rack manager appliance option and the IBM managed power distribution unit, and it looks at how various management products can be used with the system.

Topics in this chapter are:

� 7.1, “The Avocent rack management appliance” on page 172
� 7.2, “IBM power distribution units” on page 172
� 7.3, “IBM Director and IBM Systems Director” on page 176
� 7.4, “xCAT” on page 177
� 7.5, “DIM” on page 178
� 7.6, “ASU” on page 178
� 7.7, “DSA” on page 179


7.1 The Avocent rack management appliance

The Avocent MergePoint 5300 management appliance is available as an optional component of an iDataPlex solution. The MergePoint 5300 appliance provides baseboard management controller auto discovery and configuration, which in turn enables serial over LAN console access, power control, and hardware monitoring of the iDataPlex nodes.

The MergePoint 5300 has a Web browser interface and a command-line interface that uses the SMASH (Systems Management Architecture for Server Hardware) command-line protocol, a standards-based user and scripting interface defined by the Distributed Management Task Force (DMTF). These interfaces provide a method for performing actions on a group of nodes simultaneously. The MergePoint 5300 appliance provides console and activity logging. It is licensed to support 128 servers and can be upgraded to support 256 servers.
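Because the appliance works against each node’s baseboard management controller over IPMI, the same kind of group operation can also be scripted directly. The following is a minimal sketch using the open source ipmitool utility; the BMC addresses and credentials are hypothetical examples.

# Group power status across iDataPlex node BMCs with ipmitool.
import subprocess

NODES = [f"10.0.1.{i}" for i in range(1, 5)]   # example BMC addresses
USER, PASSWORD = "USERID", "PASSW0RD"          # example credentials

def power_status(bmc):
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc,
         "-U", USER, "-P", PASSWORD, "chassis", "power", "status"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

for node in NODES:
    print(node, "->", power_status(node))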

The MergePoint 5300 looks like the MergePoint 5200, shown in Figure 7-1.

Figure 7-1 Avocent MergePoint 5200

7.2 IBM power distribution units

Three power distribution unit (PDU) options are available for iDataPlex, as described in 4.7, “iDataPlex Management components” on page 89. Two have monitoring capability (called PDU+) and one does not. Each of the three PDU options has twelve C13 outlets.

The IBM PDU+ provides power monitoring for each outlet pair and environmental monitoring. This PDU is available as an option for iDataPlex. The connector side of the PDU+ is shown in Figure 7-2 on page 173.


Figure 7-2 PDU+ connections

The PDU+ has the following features:

� Sensors that are supported through environmental monitoring probe inputs

This feature requires the environmental monitoring probe, which is available from certain PDUs, as listed in Table 7-1.

Table 7-1 Availability of sensor probes on iDataPlex-supported PDUs

PDU                                                        PDU feature code   Sensor probe
DPI C13 PDU                                                6022               Not available
DPI C13 PDU+                                               6042               Included with the PDU
DPI C13 PDU+ (three-phase, 208 V, 60 A, fixed line cord)   6031               Available as an option

� Remote monitoring of connected devices or sensors by way of two contact switch terminals

� Comprehensive power management and flexible configuration through a Web browser, Telnet, SNMP, or serial console

� Configurable user-security control

� Detailed data-logging for statistical analysis and diagnostics

� Upgrade utility for easy firmware updates

� Event notification through SNMP trap or e-mail alerts

� Daily history report through e-mail

� Address-specific IP security masks to prevent unauthorized access


The callouts in Figure 7-2 identify the input voltage status LED, the RS232 serial (console) port (DB-9), the C13 power outlets with their outlet load group numbers, the breaker reset, the reset button, the operating-mode DIP switch, the input power (UTG type) connector, and the Ethernet (LAN) port.
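Because the PDU+ can be managed through SNMP (see the feature list above), readings can also be collected with standard tools. The following is a minimal sketch that shells out to the net-snmp snmpget utility; the host address and community string are examples, and the OID is only a placeholder, so take the real outlet-power OIDs from the MIB provided with the PDU+ firmware.

# Poll one PDU+ value over SNMP (the OID below is a placeholder).
import subprocess

PDU_HOST = "10.0.2.50"                  # example PDU+ address
COMMUNITY = "public"                    # example read community
OUTLET_POWER_OID = "1.3.6.1.4.1.x.y.z"  # placeholder; see the PDU+ MIB

result = subprocess.run(
    ["snmpget", "-v2c", "-c", COMMUNITY, PDU_HOST, OUTLET_POWER_OID],
    capture_output=True, text=True)
print(result.stdout or result.stderr)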


To configure the PDU+, connect a terminal or terminal emulator to the 9-pin D shell, set to 9600 bits per second, 8 data bits, no parity, and 1 stop bit (8N1), with no flow control.
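As an illustration, these serial parameters map directly onto a scripted connection. The following is a minimal sketch, assuming the open source pyserial package and a local serial device (the port name is an example):

# Open the PDU+ serial console: 9600 bps, 8 data bits, no parity,
# 1 stop bit (8N1), no flow control.
import serial

console = serial.Serial(
    port="/dev/ttyS0",              # adjust to your serial device
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False, rtscts=False,    # no flow control
    timeout=2,
)
console.write(b"\r\n")              # wake the console
print(console.read(128).decode(errors="replace"))
console.close()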

The default password for the serial shell is passw0rd (where 0 is the number zero, not the letter O). You will have to create remote access user names and passwords for browser-based access. From the browser interface, you can see real-time power statistics by outlet-pair, as shown in Figure 7-3.

Figure 7-3 Real-time power statistics


You can also see a summary of the power statistics, as shown in Figure 7-4.

Figure 7-4 Summary of power statistics


A graph of several statistics shown over time is also available. Figure 7-5 shows an example.

Figure 7-5 History log

7.3 IBM Director and IBM Systems Director

IBM Director is a systems management utility provided with all IBM servers. It is capable of managing up to 5000 machines that have IBM Director Agent or IBM Director Core Services installed, and a virtually unlimited number of devices with SNMP agents or with inherent operating system capabilities. It can also communicate by way of IPMI with the BMC on the iDataPlex servers, and by way of SNMP with networking hardware and server operating systems.

One of the most useful capabilities of IBM Director is grouping like systems and performing tasks, such as firmware updates, on all members of the group at once. IBM Director has evolved into a broadly capable utility, and an in-depth discussion is beyond the scope of this book.


For more information, see the following resources:

� For IBM Director 5.20, see Implementing IBM Director 5.20, SG24-6188:

http://www.redbooks.ibm.com/abstracts/sg246188.html

� For IBM Systems Director 6.1, see:

– The IBM Systems Director Web site

http://www.ibm.com/systems/management/director

– Implementing IBM Systems Director 6.1, SG24-7694:

http://www.redbooks.ibm.com/redpieces/abstracts/sg247694.html

7.4 xCAT

The Extreme Cluster Administration Toolkit (xCAT) is a scalable distributed computing management and provisioning tool that provides a unified interface for hardware control, discovery, and both diskful and diskless deployment (as defined at http://www.xcat.org/). It was developed by IBM Linux cluster specialists, to meet the requirements of our customers, with the goal of making simple clusters easy and complex clusters possible (as described at http://www.alphaworks.ibm.com/tech/xCAT/). The previous version, xCAT 1.3, used IBM exclusive software to control legacy systems management processors. The current version 2.x uses, among other interfaces, IPMI for hardware control and is being developed as an open source project, at:

http://sourceforge.net/projects/xcat/
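As an illustration of this unified interface, xCAT 2 commands such as nodels and rpower can be run on the management node interactively or from scripts. The following is a minimal sketch; the noderange is an example.

# Query node definitions and power state through xCAT 2 commands.
import subprocess

def xcat(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(xcat(["nodels"]))                           # list defined nodes
print(xcat(["rpower", "node01-node84", "stat"]))  # power state via IPMI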

For more information about xCAT, see:

http://www.alphaworks.ibm.com/tech/xcat/

The xCAT Web site also contains iDataPlex installation information, showing the deployment of Red Hat Enterprise Linux 5.1 on two iDataPlex racks (one of the machines is used as a management node, the remaining 167 machines are compute nodes). See the following documentation:

http://xcat.svn.sourceforge.net/svnroot/xcat/xcat-core/trunk/xCAT-client/share/doc/xCAT-iDpx.pdf

Additional information is also in the publication (from 2002) Building a Linux HPC Cluster with xCAT, SG24-6623:

http://www.redbooks.ibm.com/abstracts/sg246623.html



If you are an existing IBM Cluster Systems Management (CSM) user, the publication, xCAT 2 Guide for the CSM System Administrator, REDP-4437, might also be helpful:

http://www.redbooks.ibm.com/abstracts/redp4437.html

7.5 DIM

Distributed Image Management (DIM) for Linux Clusters is a scalable image management tool that allows servers to run a Linux distribution over the network without local disks. It was developed for use with diskless blades in IBM BladeCenters, but it can also be used with regular rack-mounted servers, such as those in iDataPlex.

Among its major advantages are its extremely small size (less than 1 MB), its ability to instantly replicate changes to thousands of nodes, and its automation of DHCP configuration by using a single XML file.

For more information about DIM, see:

http://www.alphaworks.ibm.com/tech/dim/

7.6 ASU

The IBM Advanced Settings Utility (ASU) allows you to modify BIOS settings without having to restart the system and enter the BIOS menus. ASU is supported in Linux, Windows, WinPE, Solaris™, and DOS environments. ASU supports scripting environments through its batch-processing mode.
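The following is a minimal sketch of driving ASU from a script. The show and set command forms follow common ASU usage; the setting name is a placeholder, so list the real names on your system first.

# Script ASU to read and change firmware settings without a reboot.
import subprocess

# List all settings and their current values.
subprocess.run(["asu", "show", "all"], check=True)

# Change one setting in place (SETTING_NAME is a placeholder; use a
# real name reported by "asu show all").
subprocess.run(["asu", "set", "SETTING_NAME", "Enabled"], check=True)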

The ASU utility and documentation are available from:

http://www.ibm.com/support/docview.wss?uid=psg1MIGR-55021

Note: By using the ASU utility, you may generate a BIOS definition file from an existing system. A standard definition file is not provided on the IBM support site.


7.7 DSA

The iDataPlex nodes ship with a pre-boot version of Dynamic System Analysis (DSA), which collects and analyzes system information to aid in diagnosing system problems. DSA collects information about the following aspects of a system:

� System configuration

� Installed applications and hot fixes

� Device drivers and system services

� Network interfaces and settings

� Performance data and running process details

� Hardware inventory, including PCI information

� Vital product data and firmware information

� SCSI device sense data

� Application, system, security, ServeRAID, and service processor system event logs

An operating system-installable version of DSA is also available. It creates a merged log that you can use to identify cause-and-effect relationships across the different log sources in the system. DSA can also compare device driver and firmware levels on the system to the versions available on an UpdateXpress CD and provide a summary of the differences.

Also, a DSA portable edition is available. It runs from the command line without altering any system files or system settings. It expands to temporary space on the target system, runs, and deletes all intermediate files after processing completes. DSA can collect system information in sensitive customer environments with only temporary use of system resources. DSA is supported in both Windows and Linux environments.


Chapter 8. Services offerings

To aid in the successful implementation of an iDataPlex deployment, IBM offers a variety of planning and installation services. These offerings and what they provide are described in this chapter.

Topics in this chapter are:

� 8.1, “Global Technology Services” on page 182
� 8.2, “IBM Systems and Technology Group Lab Services” on page 188
� 8.3, “IBM Global Financing” on page 192


8.1 Global Technology Services

Global Technology Services (GTS) offers a number of services that are relevant to iDataPlex and to site and facility services in general. The following list is a selection of services that evaluate space and thermal requirements, perform airflow analysis, simulate application scenarios, and design the power and cooling for data centers.

The GTS offerings described in this section are:

� 8.1.1, “High-density computing data center readiness assessment” on page 182

� 8.1.2, “Thermal analysis for high-density computing” on page 183
� 8.1.3, “Scalable modular data center” on page 184
� 8.1.4, “Data center global consolidation and relocation enablement” on page 184
� 8.1.5, “Data center energy efficiency assessment” on page 185
� 8.1.6, “Optimized airflow assessment for cabling” on page 185
� 8.1.7, “IBM Server Optimization and Integration Services: VMware” on page 186
� 8.1.8, “Portable Modular Data Center” on page 186

For more information, see the site and facilities Web site:

http://www.ibm.com/services/us/index.wss/itservice/igs/a1026000

8.1.1 High-density computing data center readiness assessment

The high-density computing data center readiness assessment helps clients benefit from high-density computing by assessing the following information:

� The capacity and capability of their existing data centers

� The gaps that could jeopardize continuous operations

� The actions to resolve identified concerns

This readiness assessment helps clients gauge their capacity to support high-density IT infrastructure components in their data center facilities. By determining the existing facility’s power-supply and heat-removal capabilities and comparing them to the power and cooling requirements demanded by the new technology, the readiness assessment can identify potential gaps that could jeopardize continuous operations, and can provide an input for problem resolution.


The readiness assessment consulting methodology combines IBM technology insight and decades of experience in data center facility design and operations to help CIOs and other IT decision-makers determine a data center facility’s capacity to accommodate high-density technology and to operate it reliably.

The offering includes:

� An assessment of a facility’s as-built power supply and heat-removal capabilities

� A comparison of existing capacity against power and cooling requirements demanded by high-density technology, both existing and planned

� An examination of the entire interdependent power and heat-removal infrastructure

� The identification of gaps potentially jeopardizing continuous operations

� A comprehensive evaluation that forms the basis for recommendations for any necessary remedial measures

The potential benefits to a client include:

� Professional guidance in managing the growth and expense challenges associated with high-density technology

� Key technology insight to identify design shortfalls and capacity limitations

� Continuous operations

� Adoption of powerful computing capabilities to meet escalating customer demands

For more information, see:

http://www.ibm.com/in/gts/datacentre/hdcas.html

8.1.2 Thermal analysis for high-density computing

Thermal analysis for high-density computing assists clients in identifying and resolving heat-related problems within existing data centers and provides recommendations for cost savings and future expansions.

Computational fluid dynamics software tools are used to create predictive thermal models that enable clients to plan new data centers designed to satisfy current equipment cooling requirements, to support additional equipment for future IT expansion, and to isolate thermal problem areas in the data center.


The potential benefits of this offering include:

� Lower data center costs by helping to improve cooling efficiency and thereby reducing related power consumption

� Increased system uptime by helping to reduce server outages caused by high-heat conditions

� Provision of information you can use to better understand how to manage data center growth

� Effective consolidation of data center facilities

� Improved reliability of data center facilities

For more information, see:

http://www.ibm.com/in/gts/datacentre/tahdc.html

8.1.3 Scalable modular data center

Although this offering does not, at the time of writing, use iDataPlex components, it is for clients who want a data center to be up and running quickly and would potentially harmonize well with an iDataPlex solution.

The scalable modular data center offering allows clients to quickly make use of a turnkey solution with power, cooling, security, and monitoring. Because of its modularity, it is quick to set up in nearly any client facility. In-row cooling and other innovative solutions make this an attractive offering.

The scalable modular data center offering helps clients to deploy ready-made data centers in less time and at lower cost than traditionally designed data centers. IBM experience, server technology knowledge, and facilities infrastructure expertise help clients more easily define their server and facilities requirements.

Visit the following Web site for more information:

http://www.ibm.com/in/gts/datacentre/datacntr.html

8.1.4 Data center global consolidation and relocation enablement

This offering provides a consistent, repeatable, phased management approach to help clients implement a data center relocation or consolidation by leveraging local IBM site and facilities expertise around the globe.


Examples of recently completed projects include:

� A client in China achieved a USD180 million reduction in annual operating expenses by consolidating 38 data centers to 2, while also improving business resilience.

� A client in Germany achieved USD7.2 million in annual operational savings by consolidating 4 data centers into one 353 m² (3,800 square-foot) data center.

8.1.5 Data center energy efficiency assessment

The data center energy efficiency assessment provides a comprehensive assessment of the client data center and supporting physical infrastructure to identify operational cost savings and assists clients with utility rebates or LEED (Leadership in Energy and Environmental Design) certification.

This offering provides:

� A comprehensive, fact-based analysis of data center energy efficiency

� An evaluation of cooling system components, electrical systems, and other building systems

� A baseline metric for the data center’s energy use

� A road map of cost-justified recommendations

8.1.6 Optimized airflow assessment for cabling

This offering is a solution to the issue of excess and obsolete cabling in a raised-floor air-delivery plenum, which identifies and removes unused cabling for the client, resulting in an energy optimized air delivery system.

This offering has the following features:

� A comprehensive review of existing cabling infrastructure

� A plan for improvements to the data center that can help increase availability and maintain a security-rich environment

� An expert analysis of the overall cabling design required to help improve data center airflow for optimized cooling

� Specific recommendations for cabling system improvements

� A report on how the new structured cabling design can help maximize airflow for cooling, which can improve efficiency and reduce power consumption


Several potential benefits are:

� Increased availability, improved energy efficiency, and reduced overall data center management costs

� Reduced potential for downtime through fiber connector inspection and verification

� Simpler change and growth within data centers through the documentation of cooling systems, which also helps companies organize and manage cables and trunking

8.1.7 IBM Server Optimization and Integration Services: VMware

The Server Optimization and Integration Services help clients get the most out of virtualization and include solution framing, planning, and implementation. Optional post-installation support can also be purchased. This service is designed to help clients avoid the pitfalls of virtualization and to select the systems that are best suited to it. The structured approach, combined with the expert knowledge of the team and the tools they use, helps clients get the best out of virtualization.

For more information, see:

http://www.ibm.com/services/us/index.wss/offering/its/a1029383

8.1.8 Portable Modular Data Center

The Portable Modular Data Center (PMDC) is a way of rapidly deploying a reproducible data center environment to nearly any worldwide location.

The PMDC is container-based, as opposed to the Scalable Modular Data Center, which requires a building and is not mobile. See 8.1.3, “Scalable modular data center” on page 184 for details of that offering.

The design, build, and drop-ship of an IBM Portable Modular Data Center can be completed in as little as 12 - 14 weeks, including shipping. The PMDC solution can be designed in a few different configurations; each container is either 6 m (20 ft) or 12 m (40 ft) long, as follows:

� IT container only (contains IT equipment racks with the required cooling units, power distribution, remote monitoring and fire detection and suppression)

� IT container plus services container (contains uninterruptible power supply, batteries, power distribution, electrical switchboard, chiller unit, and fire detection and suppression)

� All-in-one (single container with both IT equipment and services equipment)


The PMDC can be built with redundant N+1 infrastructure. It is suited to both permanent and temporary installations. Configurations are available with and without iDataPlex components.

Figure 8-1 Example multi-container PMDC

The solution features include:

� A complete PMDC design to meet client requirements

� Collaboration with the client’s teams to determine power and chilled-water requirements, space allocations, installation requirements, and so forth

� Project management of the entire engagement, which provides a single point of contact for the client

� A fit-up of the implementation site

� Managed installation of electrical and water connections, placement of containers, PMDC access, and so forth.

� Installation of engine generators, chillers, electrical feeds, and so forth, as required

� Integration, connection, and the start-up of IT technology into the PMDC.

The solution can also include:

� An optional sourcing of new IT technology beyond the iDataPlex offering
� The relocation of existing IT technology
� Provisioning for connectivity requirements for communications and networks
� Start-up and test of the complete PMDC


For more information, see the press announcement, which also contains an introductory video:

http://www.ibm.com/press/us/en/pressrelease/24395.wss

8.2 IBM Systems and Technology Group Lab Services

IBM Systems and Technology Group (STG) Lab Services provides an in-depth capability for data center design and analysis that is not limited to the planning of racks, servers, chiller doors, and power. Lab Services experts have a broad scope and a deep knowledge of every aspect of the data center.

A number of services are tailored directly for the iDataPlex, and a number of further services cover the data center as a whole. The STG Lab Services team is ideally positioned to help clients roll out or come up to speed with an iDataPlex installation because they have full access to the developers of the systems and experts who are deeply involved in equipment and software tuning, providing detailed recommendations and reports, and setting worldwide industry standards.

For an overview of the STG Lab Services offerings, see:

� 8.2.1, “Data center power and cooling planning for iDataPlex” on page 189
� 8.2.2, “iDataPlex post-implementation jumpstart” on page 189
� 8.2.3, “iDataPlex management jumpstart” on page 190
� 8.2.4, “Cluster enablement team services” on page 190

Several solutions offered by STG include:

� Power, cooling, and I/O data center best practices

� Power sizing in the data center

� Use of virtualization to help reduce the demand for energy

� Information about power and heat load values for specifically configured server systems

� Comparisons of IBM products with competitors based on performance per watt, performance and SWaP (space, watts, and performance) metrics

� Power trends for IBM product lines

� Integration of IBM products into data centers with limitations (for example, 5 kW per rack)

� Reducing carbon emissions in line with European laws

� Water cooling in new data centers


� Planning of raised floor heights, ceiling heights, positions of CRACs, positions of cable trays, and more

� Using IBM system and storage device capabilities to manage power demand and thermal load

IBM STG Lab Services is a world-wide team based in the following locations:

� Austin, TX, U.S.
� Bangalore, India
� Beaverton, OR, U.S.
� Beijing, China
� Kirkland, WA, U.S.
� LaGaude, France
� Mainz, Germany
� Poughkeepsie, NY, U.S.
� Raleigh, NC, U.S.
� Rochester, MN, U.S.
� Taipei, China

The Web site for IBM STG Lab Services is:

http://www.ibm.com/systems/services/labservices/solutions/labservices_datacenter.html

8.2.1 Data center power and cooling planning for iDataPlex

This offering helps clients assess air conditioning and distribution in preparation for installing one or more iDataPlex racks. Guidance for cooling is provided, as is an evaluation of the requirement for a Rear Door Heat eXchanger. iDataPlex power specifications are reviewed in light of the customer’s PDU and hardware configuration. Electrical requirements are provided based on the findings.

The service also helps clients evaluate the need for a data center thermal analysis or measurement and optimization study and provides appropriate direction based on the sum of findings from all the performed evaluations.

8.2.2 iDataPlex post-implementation jumpstart

This offering helps clients install and configure the iDataPlex in their own environment by providing a comprehensive overview of the system and the best practices related to it. The custom-tailored hardware and software are demonstrated and then restored to out-of-the-box settings so that the client can customize the software under supervision and with support from lab experts. This process effectively transfers the required knowledge.


Several of the topics covered are: planning sessions, cluster configuration, management node configuration, BIOS/RAID/firmware verifications and updates, operating system and cluster manager installation, network and storage configuration, cluster tests, power up/down tests, skills transfer, and assistance with the acceptance tests.

8.2.3 iDataPlex management jumpstart

This services offering takes the same approach as the other jumpstart services by first demonstrating the products and then helping the client configure them from an unconfigured state.

This offering includes such items as:

� Planning the use and installation of the appliance
� Overviews of the different command-line utilities
� Authentication methods
� Monitoring and management functions
� Integration into SNMP-based management networks
� Integration into IBM Director-based installations
� Demonstrations of the appliance showing chassis and node provisioning
� User management
� Remote and power management
� CLI for batch and interactive use
� Configuring the alert handling

8.2.4 Cluster enablement team services

The cluster enablement team provides clients and partners with access to IBM experts who are skilled in HPC cluster implementation, which includes hardware, software, and complex high-performance computing knowledge.

The service categories are divided into four components:

� Cluster Implementation Services

The Cluster Enablement Team provides cluster implementation services for:

– Standard IBM 1350 clusters
– Microsoft Windows Compute Cluster Server
– DB2® Balanced Configuration Unit
– DB2 Information Integrator
– SAP® Search Engine


The Cluster Enablement Team provides design-your-own cluster services for:

– Basic cluster installation and setup
– Design SAP Business Intelligence Accelerator and SAP Search Engine
– Myrinet and InfiniBand network implementation
– GPFS implementation

� Cluster Support Services

Cluster Support Services has three support subscription options, each renewable yearly:

– Remote support

• Provide basic remote support by way of e-mail, phone, Web, and so on

• Cluster 1350 Service provides level 1 and 2 remote support as needed

– On-call administration support

• Dedicated Cluster 1350 resource to provide remote cluster administration and assistance as needed (24 x 5, 4-hour response window, pager and e-mail notifications)

• Optional 24 x 7 support for mission-critical customers paying a higher premium

– On-site health-check support

• Provide periodic maintenance support to ensure health of systems, perform necessary upgrades, and so on

• Fixed number of visits per year to the customer site

• Available for design-your-own clusters and standard 1350 clusters

� Advanced Cluster Support Services

Advanced Cluster Support Services provides customized account-specific services that include:

– Hardware problem troubleshooting
– Cluster benchmarking assistance
– Software implementation and tuning
– Cluster administration

� Cluster Training Services

The Cluster Training Services offer training on cluster-related topics and have a number of standard offerings that include:

– General cluster training
– xCAT
– GPFS

Customized training is also available.


8.3 IBM Global Financing

IBM Global Financing can assist clients with competitive customized financing structures for their System x iDataPlex equipment. Leasing structures can lower total cost of ownership, minimize risk, improve accountability, and enable clients to focus on their core business strategies while still preserving their ability to make flexible equipment decisions throughout the entire technology life cycle.

For financing information, see:

http://www.ibm.com/financing
http://www.ibm.com/financing/us/recovery/gogreenwithibm/

IBM Global Financing offers competitive financing to credit-qualified customers and IBM Business Partners to assist them in acquiring IT solutions. The offerings include financing for IT acquisition, including hardware, software, and services, from both IBM and other manufacturers or vendors, as well as commercial financing (revolving lines of credit, term loans, acquisition facilities, and inventory financing credit lines) for Business Partners. Offerings for all customer segments (small, medium, and large enterprise), rates, terms, and availability can vary by country.


Abbreviations and acronyms

AFCOM Association for Computer Operation Managers

AJAX Asynchronous JavaScript and XML

AMD Advanced Micro Devices™

AS Australian Standards


ASHRAE American Society of Heating, Refrigerating and Air-Conditioning Engineers

ASU Advanced Settings Utility

BIOS basic input output system

BMC Baseboard Management Controller

BNT Blade Network Technologies

BSD Berkeley Software Distribution

BTU British Thermal Unit

CAE computer aided engineering

CD compact disc

CD-ROM compact disc read only memory

CDU cooling distribution unit

CIO chief information officer

CISSP Certified Information Systems Security Professional

CLI command-line interface

CPLD complex programmable logic device

CPU central processing unit

CTS clear to send

DB database


DCD data carrier detect

DCiE Data Center Infrastructure Efficiency

DHCP Dynamic Host Configuration Protocol

DIM Distributed Image Management

DIMM dual inline memory module

DMTF Distributed Management Task Force

DOS disk operating system

DPI Distributed Power Interconnect

DSA Dynamic System Analysis

DSR data set ready

DTR data terminal ready

DW data warehousing

ECC error checking and correcting

EIA Electronic Industries Alliance

FC Fibre Channel

FRU field replaceable unit

GA general availability

GB gigabyte

GFS Global File System

GND common ground

GPFS General Parallel File System

GTS Global Technology Services

HBA host bus adapter

HPC high performance computing

HTML Hypertext Markup Language

HTTP Hypertext Transfer Protocol

I/O input/output


IBM International Business Machines

IDC International Data Corporation

IEC International Electrotechnical Commission

IEEE Institute of Electrical and Electronics Engineers

IP Internet Protocol

IPMB Intelligent Platform Management Bus

IPMI Intelligent Platform Management Interface

IPTV Internet Protocol Television

ISL Inter-Switch Link

IT information technology

ITSO International Technical Support Organization

kW kilowatt

LACP Link Aggregation Control Protocol

LAMP Linux Apache MySQL PHP

LAN local area network

LED light emitting diode

LEED Leadership in Energy and Environmental Design

MB megabyte

MIT Massachusetts Institute of Technology

MPI Message Passing Interface

NEC National Electrical Code

NFPA National Fire Protection Association

NFS network file system

NSF Notes Storage File

OS operating system

PCI Peripheral Component Interconnect

PCIe PCI Express

PDU power distribution unit

PEN Protective Earth and Neutral

PET Platform Event Trap

PMDC portable modular data center

PUE Power Usage Effectiveness

PXE Preboot eXecution Environment

RAID redundant array of independent disks

RAM random access memory

RF radio frequency

RHDX Rear Door Heat eXchanger

RI ring indicator

ROM read-only memory

RPM revolutions per minute

RSA Remote Supervisor Adapter

RTS request to send

RXD received data

SAN storage area network

SAS Serial Attached SCSI

SATA Serial ATA

SCSI Small Computer System Interface

SDR Single Data Rate

SLURM Simple Linux Utility for Resource Management

SMASH Systems Management Architecture for Server Hardware

SMLT split multi-link trunking

SNA systems network architecture

SNMP Simple Network Management Protocol

SOA Service Oriented Architecture

SOL Serial over LAN

SSI Server System Infrastructure

STG Systems and Technology Group


TB terabyte

TCO total cost of ownership

TX transmit

TXD transmitted data

UDP user datagram protocol

UPS uninterruptible power supply

URL Uniform Resource Locator

USB universal serial bus

USD U.S. dollar ($)

VGA video graphics array

VLAN virtual LAN

VSS virtual switching system

WCCS Windows Compute Cluster Services

XM extended memory

XML Extensible Markup Language


Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.

IBM Redbooks

For information about ordering these publications, see “How to get Redbooks” on page 201. Note that some documents might be available in softcopy only.

Related IBM Redbooks and Redpaper publications:

� Building an Efficient Data Center with IBM iDataPlex, REDP-4418

http://www.redbooks.ibm.com/abstracts/redp4418.html

� BladeCenter HS21 XM Memory Configurations, TIPS0671

http://www.redbooks.ibm.com/abstracts/tips0671.html

� ServeRAID Adapter Quick Reference, TIPS0054

http://www.redbooks.ibm.com/abstracts/tips0054.html

� Implementing IBM Director 5.20, SG24-6188

http://www.redbooks.ibm.com/abstracts/sg246188.html

� Building a Linux HPC Cluster with xCAT, SG24-6623

http://www.redbooks.ibm.com/abstracts/sg246623.html

� IBM System Storage DS3000: Introduction and Implementation Guide, SG24-7065

http://www.redbooks.ibm.com/abstracts/sg247065.html


IBM iDataPlex product publications

These publications are also relevant as further information sources:

- IBM iDataPlex Information Center

  http://publib.boulder.ibm.com/infocenter/systems/scope/idataplex

- IBM iDataPlex Rack Installation and User's Guide

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5075216

- Rear Door Heat eXchanger Installation and Maintenance Guide

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5075220

- IBM System x iDataPlex dx320 User's Guide

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5078019

- IBM System x iDataPlex dx320 Problem Determination and Service Guide

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5078020

- IBM iDataPlex dx340 User's Guide

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5075221

- IBM iDataPlex dx340 Problem Determination and Service Guide

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5078854

- IBM System x iDataPlex dx360 User's Guide

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5077374

- IBM System x iDataPlex dx360 Problem Determination and Service Guide

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5077375

- IBM PDU+ Installation and Maintenance Guide

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5073026

- IBM ServeRAID MR10i publications

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5074114

- IBM ServeRAID BR10i Installation and User’s Guide

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5076481


Online resources

The Web sites listed in this section are also relevant as further information sources.

IBM Web sites

- IBM System x iDataPlex home page

  http://www.ibm.com/systems/x/hardware/idataplex

- IDC report: The Impact of Power and Cooling on Data Center Infrastructure

  http://www-03.ibm.com/systems/z/advantages/energy/index.html#analyst_rep

- IBM press release: IBM Premiers Project Big Green in Hollywood

  http://www.ibm.com/press/us/en/pressrelease/22891.wss

- IBM press release: IBM Unveils Plan to Combat Data Center Energy Crisis; Allocates $1 Billion to Advance “Green” Technology and Services

  http://www.ibm.com/press/us/en/pressrelease/21524.wss

- IBM press release: IBM Project Big Green Tackles Global Energy Crisis

  http://www.ibm.com/press/us/en/pressrelease/24395.wss

- Global Technology Services SalesOne site (for IBM employees only)

  http://w3.ibm.com/services/salesone

- IBM Systems and Technology Group Lab Services

  http://www.ibm.com/systems/services/labservices/solutions/labservices_datacenter.html

- IBM Global Financing, IT energy efficiency financing

  http://www.ibm.com/financing/us/recovery/gogreenwithibm/

- IT Services - Site and facilities

  http://www-935.ibm.com/services/us/index.wss/itservice/igs/a1026000

- iDataPlex Drivers, Firmware, and Software - ServeRAID

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-5076022

- System Management Bridge BMC CLI and Remote Console Utility

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-64636

- Advanced Settings Utility (ASU)

  http://www.ibm.com/support/docview.wss?uid=psg1MIGR-55021

- Support subscriptions

  http://www.ibm.com/support/subscriptions


- IBM ServerProven

  http://www.ibm.com/servers/eserver/serverproven/compat/us/

- IBM System Storage DS3000 series Interoperability Matrix

  http://www.ibm.com/systems/storage/disk/ds3000/pdf/interop.pdf

- xCAT download

  http://www.alphaworks.ibm.com/tech/xCAT/

- Distributed Image Management for Linux Clusters (DIM) download

  http://www.alphaworks.ibm.com/tech/dim/

- Scalable modular data center

  http://www.ibm.com/in/gts/datacentre/datacntr.html

- Thermal analysis for high density computing

  http://www.ibm.com/in/gts/datacentre/tahdc.html

- High-density computing assessment services

  http://www.ibm.com/in/gts/datacentre/hdcas.html

- Server Optimization and Integration Services - VMware server virtualization

  http://www-935.ibm.com/services/us/index.wss/offering/its/a1029383

- Site and facilities

  http://www-935.ibm.com/services/us/index.wss/itservice/igs/a1026000

Other information not from IBM

- The green data center: Energy-efficient computing in the 21st century

  http://searchdatacenter.techtarget.com/general/0,295582,sid80_gci1273283,00.html

- Gartner press release “Gartner Identifies the Top 10 Strategic Technologies for 2008”

  http://www.gartner.com/it/page.jsp?id=530109

- Estimating Total Power Consumption By Servers In The U.S. And The World by Jonathan Koomey

  http://enterprise.amd.com/us-en/AMD-Business/Technology-Home/Power-Management.aspx

- The High Availability Linux Project

  http://www.linux-ha.org


- Configuring Red Hat Enterprise Linux 3 or 4 to access iSCSI storage

  http://kbase.redhat.com/faq/docs/DOC-3939

- Microsoft Support for iSCSI

  http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/msfiSCSI.mspx

- Boot from SAN in Windows Server 2003 and Windows 2000 Server

  http://www.microsoft.com/windowsserversystem/wss2003/techinfo/plandeploy/bootfromsaninwindows.mspx

- The Green Grid Data Center Power Efficiency Metrics: PUE and DCiE

  http://www.thegreengrid.org/sitecore/content/Global/Content/white-papers/The-Green-Grid-Data-Center-Power-Efficiency-Metrics-PUE-and-DCiE.aspx

- International Electrotechnical Commission

  http://www.iec.ch/

- ASHRAE Technical Committee 9.9: Mission Critical Facilities, Technology Spaces and Electronic Equipment

  http://tc99.ashraetcs.org/

- US National Electrical Code

  http://www.nfpa.org/freecodes/free_access_agreement.asp?id=7008SB

- Review of liquid cooling book by Don Beaty from ASHRAE TC 9.9

  http://searchdatacenter.techtarget.com/news/article/0,289142,sid80_gci1218484,00.html

- xCAT

  http://www.xcat.org/
  http://sourceforge.net/projects/xcat/

How to get Redbooks

You can search for, view, or download Redbooks, Redpapers, Technotes, draft publications and Additional materials, as well as order hardcopy Redbooks publications, at this Web site:

ibm.com/redbooks


Help from IBM

IBM Support and downloads

ibm.com/support

IBM Global Services

ibm.com/services


Index

Numerics
11 configurations 81
1350 28
2U chassis 99
2U Flex chassis 8
    configurations 83
3U chassis 45, 82–83, 87

A
acquisition 12
Active Energy Manager 145
Advanced Settings Utility 178
air baffles 40, 124
airflow 138
airflow assessment 185
AJAX 105
Apache 105
appliance, management 8
applications 105
ASU 178
Avocent management appliance 8, 172

B
baffles 124
best practices, cabling 128
big picture 34
BladeCenter 27
    compare with iDataPlex 30
BMC 9, 89
BNT switches 74
bolts 124
boot from SAN 103
brackets 38

C
cable lengths 164
cable management brackets 38
cabling 127
    best practices 128
    overhead cabling 131
capital expense 12
casters 119, 123
challenges 1
circuit breakers 99
circular air flow 150
Cisco switches 74
clearance 125
Cluster 1350 28
cluster enablement team services 190
comparing the servers 50
comparison with BladeCenter 30
components 34–35
compute node 83
    configurations 82–83
concrete floor 124
condensation 121
configuration
    HPC 111
    Web 2.0 108
configurations 81
    flex chassis 83
    rules 88
consolidation services offering 184
container 161
contaminants 164
Cool Battery 137
cooling 118, 135
    FAQ 139
    Rear Door Heat eXchanger 135
Cooling Distribution Unit 136
cooling options 39
coverings, floor 167
CPU options
    dx320 server 53
    dx340 server 57
    dx360 server 61
CRAC units 135
customer types 22

D
data center
    facts 2
    issues 164
Data Center Infrastructure Efficiency 133
DDR InfiniBand support 31
density 17
Derby 105
dew point 121
DIM 178
dimensions 119
direct-dock power 46
disk drive bays 82
disk drive options 70
Distributed Image Management 113, 178
doorway access 119
DPI PDUs 77
DS3000 104
DS3000 support 71
DS3300 support 104
DS3400 support 104
DSA 179
ducts 145
dust 164
dx320 server 52–55
    configurations 83
    DIMM install order 54
    disk drive options 55
    front panel 52
    IPMI 55
    LEDs 64
    memory 53
    operating system support 103
    PCI Express card support 72
    ports 52
    power button 64
    processor options 53
    publications 53
    RAID controller options 67
    SAS and SATA drives 55
    storage options 55, 69
dx340 server 55–60
    configurations 83
    DIMM install order 58
    disk drive options 59
    IPMI 60
    layout 56
    LEDs 64
    memory 57–58
    operating system support 103
    PCI Express card support 72
    ports 56
    power button 64
    processor options 57
    RAID controller options 67
    SAS and SATA drives 59
    storage options 59, 69
dx360 server 60–64
    configurations 83
    DIMM install order 62
    IPMI 64
    LEDs 64
    memory 62
    operating system support 103
    PCI Express card support 72
    power button 64
    processor options 61
    RAID controller options 67
    serial port 64
    storage options 63, 69
Dynamic System Analysis 9, 179

E
earthing systems 165
e-mail notification 95
energy efficiency assessment 185
enterprise rack 6
enterprise rack, use of 158
environmental monitoring probe 80, 173
Ethernet redundancy 25, 106
Ethernet switches 74, 127
European floor tiles 124
example scenarios 108
EXP3000 support 71
expense
    capital 12
    operating 13
external storage options 71

F
factory assembled 14
fan assembly 99–100
fan failure 50
fan redundancy 25
fans 47
feature comparison 50
FiberRunner 131
filler panels 16, 43
financing 192
firmware updates 95
flex node
    configurations 83
    introduction 4
Flex Node Technology 8, 44
flexible design 15
floor coverings 167
footprint 124
Force 10 switches 74
formula, power 133
fully buffered memory 25, 57

G
Ganglia 113
Gartner 2
Global Financing 192
Global Technology Services 182
GPFS 113
Green IT 3
grounding systems 165

H
hardware redundancy 24
heat exchanger
    See Rear Door Heat eXchanger
heat pattern analysis
    no Rear Door Heat eXchanger 139
    Rear Door Heat eXchanger 148
heterogeneous data center 155
history 118
hose couplings 153
hot-air ducts 145
hot-swap drives 69–70
HPC 24
    example 111
humidifiers 135
humidity 167
humidity probe 80

I
I/O server configurations 82–83, 86
I/O tray 66
IBM Director 91, 176
    Active Energy Manager 145
IBM Dynamic System Analysis 9
IBM PDU+ 172
iDataPlex dx320
    See dx320 server
iDataPlex dx340
    See dx340 server
iDataPlex dx360
    See dx360 server
iDataPlex rack
    See rack
InfiniBand 111
InfiniBand support 31
in-row cooling 135
Intel Xeon processors 57
internal storage 69
introduction 1
IPMI 8, 89
    dx320 server 55
    dx340 server 60
    dx360 server 64
iSCSI HBA 104
issues 164

J
Java 105
jumpstart 189

K
Koomey, Jonathan 2

L
Lab Services 188
LAMP 105
LEDs 64
LEED 185
line cords 78
Linux distributions 103
Linux-HA 108
low power memory 57

M
management 89
management appliance 8, 172
management jumpstart 190
memory
    dx320 server 53
    dx340 server 58
    dx360 server 62
    features 25
    options 57
MergePoint 5300 appliance 36, 172
Microsoft operating systems 103
mixed rows 147
monitoring probe 80
Moore 118
mud flaps 40, 124
MySQL 105

N
network boot 103
network redundancy 107
networking 74
node density 30
nodes
    See dx320 server
    See dx340 server
    See dx360 server

O
operating expense 13
operating systems 103
overhead cable pathways 131
overview 34

P
packaging materials 14
PANDUIT
    cabling best practices 128
    fingers bracket 38
    overhead cable pathways 131
PCI Express card support 72
PCI Express slots 82, 86
PCI Express tray 66
PCI slot redundancy 25
PDU cables 98
PDU options 77
PDU+ 172
PDUs 128
PEN line 165
perforated floor tiles 167
perimeter design 135
PET 89
PHP 105
planning 121
planning services 189
plenum 185
plumbing 122, 152
pockets 16, 36
Portable Modular Data Center 161, 186
ports 56
PostgreSQL 105
post-implementation jumpstart 189
power 133
power button 64
power circuits 164
power cords to chassis 46
power distribution units 77, 172
power feed consolidation 17
power source 165
power statistics 176
power supply 47, 82
    fault tolerance 98
    redundancy 25
Power Usage Effectiveness 133
probes 173
problems 164
processor options
    comparison 51
    dx320 server 53
    dx340 server 57
    dx360 server 61
Project Big Green 3
Project Green IT 2
PXE boot 104

Q
Quality of Service, Ethernet 76

R
rack 5, 35
    capacity 6
    comparison to enterprise rack 6
    depth 6
    design 16
    enterprise rack, use of 158
    management appliance 8
    servicing the back 126
radio interference 164
RAID controller options 67
raised floor 122
raised floor grounding 167
readiness assessment 182
Rear Door Heat eXchanger 41
    air baffles 40
    airflow characteristics 152
    characteristics 153
    clearance 125
    cooling capacity 152
    couplings 153
    DS3000 installation 71
    efficiency 42
    FAQ 44, 121
    graph 43
    heat absorption 154
    heat extraction 18
    introduction 6
    plumbing 152
    specifications 120, 154
    thermal maps 148
    thermal photos 19
recirculation of air 145
Redbooks Web site 201
    Contact us xvi
redundancy 24, 106
relocation services offering 184
resistance, floor 167
RF interference 164
Ruby 105
rules
    configurations 88
    dx340 memory 58

S
San Clemente 52
SAN connectivity 103
SAS connectivity 68
SAS drives 70, 84
    dx320 server 55
    dx340 server 59
    dx360 server 63
SATA controller 100
SATA drives 63, 70, 84, 111
    dx320 server 55
    dx340 server 59
    dx360 server 63
scalable modular data center 184
scale up versus scale out 26
scale-out data centers 105
scenarios 108
Security, Ethernet 76
seismic activity 124
ServeRAID BR10i 68
ServeRAID MR10i 68
services offerings 181
shared fans 47
shared power supply 47
shipping crate 122
shutters 100
simple-swap drives 69
single cold aisle 151
SMBridge 90
SMC switches 74
SNMP 89, 173
Software Distribution 94
SOL 89
spanning tree protocol 107
specifications
    rack 118
    Rear Door Heat eXchanger 120
standard power supply 82
standby server 107
stateful versus stateless 101
static electricity 167
statistics, power 176
storage 69
storage redundancy 25
storage server configurations 82–83, 85, 87
storage tray 65
Stored Cooling Solution 137
structured cabling 130
subscription 95
switch oversubscription 31
switches 127
    Ethernet 74
    locations of 128
System Cluster 1350 28
System Packs 91
System x 27

T
temperature probes 80, 173
thermal analysis services 183
thermal imagery 19
    no Rear Door Heat eXchanger 139
    Rear Door Heat eXchanger 148
three-phase power 77, 165
tile pitch 139
tile pitch comparison 148
Tivoli 89
TN-S and TN-C 165
transistor density 118
trays
    I/O tray 66
    storage tray 65

U
unpacking 121
Update Manager 91
UpdateXpress System Packs 91
US floor tiles 125

V
vertical pockets 16
vibration 164
VLAN 111
    support 75

W
warm air recirculation 167
water cooling 42, 118, 152
water temperature 155
Web 2.0 23, 105
    scenario 108
WebSphere 106
weight 120, 122

X
xCAT 177
Xeon processors 57

Y
Y-cables 98


SG24-7629-01 ISBN 0738432520

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


Implementing an IBM System x iDataPlex Solution

Introduces the next generation data center solution for scale-out environments

Describes custom configurations to maximize compute density

Addresses power, cooling, and physical space requirements

The IBM iDataPlex data center solution is for Web 2.0, high performance computing (HPC) cluster, and corporate batch processing customers experiencing limitations of electrical power, cooling, or physical space. By taking a big-picture approach to design, the iDataPlex solution uses innovative ways to integrate Intel-based processing at the node, rack, and data center levels to maximize power and cooling efficiencies while providing the necessary compute density.

An iDataPlex rack is built with industry-standard components to create flexible configurations of servers, chassis, and networking switches that integrate easily. Through its flexible node configurations, the iDataPlex solution can be customized for applications to meet business needs for computing power, storage intensity, and the right I/O and networking.

This IBM Redbooks publication is for customers who want to understand and implement the iDataPlex solution. It introduces the iDataPlex solution and design innovations, outlines benefits, and positions it with IBM System x and BladeCenter servers. The book details iDataPlex components and supported configurations, describes application considerations and aspects of data center design that are central to an iDataPlex solution, and introduces IBM services offerings for planning and installing an iDataPlex solution.
