
NeXtScale HPC seminar



Presentation from the HPC event at IBM Denmark - September 2013, Copenhagen


Page 1: NeXtScale HPC seminar

HPC Seminar – September 2013


Scale Out Computing With NeXtScale Systems

Karl Hansen, HPC and Technical Computing, IBM systemX, Nordic

IBM Confidential – Presented under NDA

Page 2: NeXtScale HPC seminar


Journey Started in 2008 – iDataPlex

Flexible computing optimized for Data Center serviceability

• Race car design
  – Performance-centric approach
  – Cost efficient
  – Energy conscious

• All-front access
  – Reduces time behind the rack
  – Reduces cabling errors
  – Highly energy efficient

• Low cost, flexible chassis
  – Support for servers, GPUs, and storage
  – Easy to install and service
  – Greater density than traditional 1U systems

• Optimized for Top of Rack (TOR) switching
  – No expensive midplane
  – Latency optimized
  – Open ecosystem

Page 3: NeXtScale HPC seminar


IBM iDataPlex dx360 M4 Refresh

WHAT'S NEW:

• Intel Xeon E5-2600 v2 product family
• Intel Xeon Phi 7120P coprocessor
• New 1866MHz and 1.35V RDIMMs

Higher performance:

• Intel Xeon E5-2600 v2 processors provide up to 12 cores, 30MB cache, and 1866MHz maximum memory speed to deliver more performance in the same power envelope
• The Intel Xeon Phi coprocessor delivers over 1 teraflop of double-precision peak performance, providing up to 4x more performance per watt than processors alone (see the sketch below)
• Increased memory performance with 1866MHz DIMMs and new energy-efficient 1.35V RDIMM options, ideal for HPC workloads

Learn More: http://www-03.ibm.com/systems/x/hardware/rack/dx360m4/index.html
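As a rough sanity check on the "over 1 teraflop" figure, the back-of-envelope peak calculation below assumes the publicly quoted Xeon Phi 7120P parameters (61 cores, 1.238 GHz, 16 double-precision FLOPs per core per cycle) and an illustrative dual 12-core 2.7 GHz host; none of these exact numbers come from this deck.

```python
# Back-of-envelope peak double-precision FLOPS.
# The 7120P and host CPU figures are assumed from public specs, not from this deck.

def peak_dp_gflops(cores: int, ghz: float, dp_flops_per_cycle: int) -> float:
    """Theoretical peak DP GFLOPS = cores * clock (GHz) * FLOPs per cycle."""
    return cores * ghz * dp_flops_per_cycle

# Xeon Phi 7120P: 61 cores, 1.238 GHz, 512-bit vectors (8 doubles) with FMA -> 16 FLOPs/cycle
phi = peak_dp_gflops(cores=61, ghz=1.238, dp_flops_per_cycle=16)

# Two 12-core 2.7 GHz E5-2600 v2 CPUs, AVX without FMA -> 8 DP FLOPs/cycle per core
host = 2 * peak_dp_gflops(cores=12, ghz=2.7, dp_flops_per_cycle=8)

print(f"Phi 7120P peak: {phi:.0f} GFLOPS (~{phi / 1000:.2f} TFLOPS)")  # ~1.21 TFLOPS
print(f"Dual 12-core host peak: {host:.0f} GFLOPS")                    # ~518 GFLOPS
```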

Page 4: NeXtScale HPC seminar


2013 – Introducing IBM NeXtScale
A superior building block approach for scale-out computing

Building blocks: Compute → Chassis → Standard Rack
Primary target workloads: Public Cloud, High Performance Computing, Private Cloud
Node types: Compute, Storage, Acceleration – more coming

• Better data center density and flexibility
• Compatible with standard racks
• Optimized for Top of Rack switching
• Top-bin E5-2600 v2 processors
• Designed for solution redundancy
• The best of iDataPlex
• Very powerful roadmap

Page 5: NeXtScale HPC seminar


IBM NeXtScale: Elegant Simplicity
One architecture optimized for many use cases

One simple, light chassis – the IBM NeXtScale n1200 – in an IBM rack or a client rack.

Compute – IBM NeXtScale nx360 M4
• Dense compute
• Top performance
• Energy efficient
• I/O flexibility

Storage – nx360 M4 + Storage NeX
• Swappable
• Add RAID card + cable
• Dense 32TB in 1U
• Simple direct connect
• No trade-offs in base
• Mix and match

PCI (GPU / Phi) – nx360 M4 + PCI NeX
• Add PCI riser + GPUs
• 2 x 300W GPUs in 1U
• Full x16 Gen3 connect
• No trade-offs in base
• Mix and match

Page 6: NeXtScale HPC seminar


Deep dive into the NeXtScale n1200 enclosure (new MT: 5456)
The ultimate high-density server, designed for your Technical, Grid, and Cloud computing workloads. Twice the density of regular 1U servers.

n1200 enclosure:
Form factor: 6U tall, standard rack
Number of bays: 12
Power supplies: 6 hot-swap, 900W, 80 PLUS Platinum high energy efficiency; non-redundant, N+1, or N+N redundant
Fans: 10 hot-swap

Dense chassis – the foundation:

• 6U tall with 12 half-wide bays
• Mix and match compute, storage, or GPU nodes within the chassis
  – Each system is individually serviceable
  – No left- or right-specific parts, meaning a system can be put in any slot
• Up to 7 chassis (up to 84 servers) in a standard 19" rack (see the density sketch below)
• No in-chassis networking integration
  – Systems connect to TOR switches
  – No need to manage the chassis via FSM, IMM, etc.
• Shared power and cooling
  – 6 hot-swap power supplies (non-redundant, N+1, or N+N) keep business-critical applications up and running
  – 10 hot-swap fans
• Front-access cabling – no need to go to the rear of the rack or chassis
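For intuition on the density claims, here is a minimal sketch of servers per rack, assuming a 42U standard rack; reserving rack space for switching is an illustrative assumption.

```python
# Rough rack-density comparison; the 42U rack height and the idea of
# reserving a few U for switches are illustrative assumptions.

RACK_U = 42

def nextscale_nodes(rack_u: int = RACK_U, chassis_u: int = 6, nodes_per_chassis: int = 12) -> int:
    """Half-wide nodes that fit when the rack is filled with n1200 chassis."""
    return (rack_u // chassis_u) * nodes_per_chassis

def one_u_servers(rack_u: int = RACK_U) -> int:
    """Conventional 1U servers in the same rack."""
    return rack_u

print(nextscale_nodes())    # 7 chassis * 12 nodes = 84 servers
print(one_u_servers())      # 42 servers
print(nextscale_nodes(36))  # leave 6U for switches/PDUs -> 72 nodes
```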

Page 7: NeXtScale HPC seminar


The Dense Chassis – IBM NeXtScale n1200 enclosure
System infrastructure: optimized shared infrastructure

• 6U chassis, 12 bays
• ½-wide component support
• Up to 6 x 900W power supplies, N+N or N+1 configurations
• Up to 10 hot-swap fans
• Fan and Power Controller
• Mix and match compute, storage, or GPU nodes
• No built-in networking
• No chassis management required

[Figure: front view of the IBM NeXtScale n1200 enclosure shown with 12 compute nodes installed (bays 1–12); rear view showing two banks of 3 power supplies, two banks of 5 x 80mm fans, and the Fan and Power Controller]

Page 8: NeXtScale HPC seminar


n1200 Chassis Details

• Height: 262.7 mm (6U)
• Rear: 10 x 80mm fans and 6 x hot-swap 80 PLUS Platinum 900W power supplies
• Power design supports non-redundant, N+1, and N+N power (see the power-budget sketch below)

[Figure: front and rear views of the chassis]
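A minimal sketch of what each redundancy policy leaves as committable chassis power, assuming six 900W supplies and ignoring derating and conversion losses:

```python
# Usable chassis power under different PSU redundancy policies.
# 6 x 900W supplies as in the n1200; ignoring derating and conversion
# losses, which is an illustrative simplification.

PSU_WATTS = 900
PSU_COUNT = 6

def usable_power(policy: str, psu_watts: int = PSU_WATTS, psu_count: int = PSU_COUNT) -> int:
    """Power the chassis can commit while still surviving the policy's failure case."""
    if policy == "non-redundant":
        survivors = psu_count        # no failure tolerated
    elif policy == "N+1":
        survivors = psu_count - 1    # any single supply may fail
    elif policy == "N+N":
        survivors = psu_count // 2   # an entire feed (half the supplies) may fail
    else:
        raise ValueError(policy)
    return survivors * psu_watts

for p in ("non-redundant", "N+1", "N+N"):
    print(p, usable_power(p), "W")
# non-redundant 5400 W, N+1 4500 W, N+N 2700 W
```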

Page 9: NeXtScale HPC seminar


View of Chassis

[Figure: exploded front and rear views – front: 12 node bays, fan and system LEDs; rear: access cover, power distribution board, 10 x 80 mm fans, 6 power supplies, fan/power control card]

Page 10: NeXtScale HPC seminar


The Compute Node – IBM NeXtScale nx360 M4 hyperscale server
Simple architecture, shared system infrastructure

• New ½-wide, 1U, 2-socket server
• Next-generation Intel processors (Ivy Bridge EP)
• Flexible slot-less I/O design
• Generous PCIe capability
• Open design, works with existing x86 tools
• Versatile design with flexible Native Expansion options
  – 32TB local storage (Nov)
  – GPU/Phi adapters (2014)

[Figure: node layout with callouts – power button and information LED, x24 PCIe 3.0 slot, dual-port mezzanine card (IB/Ethernet) on an x8 mezzanine connector, KVM connector, labeling tag, 1 GbE ports, IMM management port, CPU #1 and CPU #2 each with 2x + 2x DIMMs, power connector, drive bay(s)]

Page 11: NeXtScale HPC seminar


½ Wide Node Details

• Node dimensions: 41 ± 0.5 mm tall, 216 ± 0.5 mm (8.5 inches) wide
• Storage choice: 1 x 3.5" HDD, 2 x 2.5" HDD/SSD, or 4 x 1.8" SSD
• All external cable connectors exit the front of the server for easy access on the cool aisle

[Figure: node callouts – power interposer card, motherboard, IMM v2 (IPMI/SoL-compliant BMC), 2 x 1Gb Intel NIC, 8 DIMMs @ 1866MHz, full-height half-length PCIe adapter, mezzanine card (I/O – IB, 10Gb), power button and information LEDs, 2 x top-bin processors]

Page 12: NeXtScale HPC seminar


nx360 M4 Node

The essentials:
• Dedicated or shared 1Gb port for management
• Two production 1Gb Intel NICs and one additional port for the IMM
• Standard PCI card support
• Flexible LOM/mezzanine for I/O expansion
• Power, basic Light Path, and KVM crash-cart access
• Simple pull-out asset tag for naming or RFID
• Intel Node Manager 2.0 power metering/management (see the IPMI sketch below)

The first silver System x server:
• Clean, simple, and lower cost
• Blade-like weight and size – rack-like individuality and control
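The Node Manager data surfaced through the IMM can typically be read out-of-band over IPMI; the sketch below uses ipmitool's DCMI power reading. The host name and credentials are placeholders, and the output field it parses may vary by firmware.

```python
# Hedged sketch: read instantaneous node power over IPMI (DCMI) via ipmitool.
# Host, user, and password are placeholders; field names in the output can
# vary by BMC firmware, so the parsing is deliberately loose.
import re
import subprocess

def read_power_watts(host: str, user: str, password: str) -> int | None:
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
         "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", out)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    # Placeholder host and credentials for illustration only.
    print(read_power_watts("imm-node01.example.com", "USERID", "PASSW0RD"))
```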

Page 13: NeXtScale HPC seminar


IBM NeXtScale: Elegant Simplicity
NeXtScale will keep you in front (of the rack, that is)

• Cold-aisle accessibility to most components
• Tool-less access to servers
• Server removal without unplugging power
• Front access to networking cables and switches
• Simple cable routing (front or traditional rear switching)
• Power and LEDs all front facing

The cold aisle sits at 65–80°F; the hot aisle can exceed 100°F. Which aisle would you rather be working in? Know what cable you are pulling – service NeXtScale from the front of the rack.

Page 14: NeXtScale HPC seminar


nx360 M4 Block diagram:


Page 15: NeXtScale HPC seminar


nx360 M4 is optimized for HPC and Grid

• Full CPU lineup support, up to 130W
• 8 DIMM slots
  – Optimized for maximum speed at 1 DIMM per channel (1866MHz)
  – Optimized for HPC workloads: 2–4GB/core across 24 cores fits nicely into the 16GB cost sweet spot (see the sizing sketch below)
  – Optimized for cost (board reduced to 8 layers; HP uses 12)
  – Optimized for efficiency (greater processor spread to reduce preheating)
• InfiniBand FDR mezzanine – optimized for performance and cost
• Chassis capable of non-redundant or N+1 power to reduce cost
  – HPC typically deploys non-redundant power (software resiliency)
  – Option for N+1 to protect 12 nodes from throttling on a PSU failure, for minimal added cost
• Flexible integrated storage for boot and scratch
  – 1 x 3.5" (or stateless, no HDD) is common for HPC
  – 2 x 2.5" is used in some grid applications
  – 4 x 1.8" SSD for low power and additional flexibility
• Enabled for GPU and storage trays – pre-positioned PCIe slots (1 in front, 1 in back)
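A small sizing sketch of that memory-per-core point, assuming 8 DIMM slots populated one per channel, two 12-core processors, and illustrative DIMM capacities:

```python
# Memory sizing for a 2-socket, 24-core nx360 M4 with 8 DIMM slots
# (one DIMM per channel). DIMM capacities are illustrative assumptions.

CORES = 2 * 12
DIMM_SLOTS = 8

# What the 2-4 GB/core HPC guideline implies in total memory:
for gb_per_core in (2, 4):
    print(f"{gb_per_core} GB/core -> {gb_per_core * CORES} GB total")

# What common RDIMM sizes deliver across the 8 slots:
for dimm_gb in (8, 16):
    total = DIMM_SLOTS * dimm_gb
    print(f"8 x {dimm_gb}GB = {total}GB -> {total / CORES:.1f} GB/core")
# 2 GB/core -> 48 GB; 4 GB/core -> 96 GB
# 8 x 8GB = 64GB -> 2.7 GB/core; 8 x 16GB = 128GB -> 5.3 GB/core
```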

Page 16: NeXtScale HPC seminar


iDataPlex and NeXtScale – Complementary Offerings

• iDataPlex is being refreshed with Intel Xeon E5-2600 v2 processors – full stack
  – Will ship thousands of iDataPlex nodes in 3Q
  – Expect continued sales through 2015
  – Clients with proven iDataPlex solutions can continue to purchase
• iDataPlex provides several functions that Gen 1 NeXtScale will not
  – Water cooling – stay with iDataPlex for direct water cooling until the next generation
  – 16 DIMM slots – for users that need 256GB or more of memory, iDataPlex is the better choice until the next-generation NeXtScale offering
  – Short term, iDataPlex remains our vehicle for GPU/GPGPU support
• Key point – NeXtScale is not a near-term replacement for iDataPlex
• NeXtScale will be our architecture of choice for HPC, Cloud, Grid, IPDC, and Analytics
  – More flexible architecture with a stronger roadmap
  – As NeXtScale continues to add functionality, iDataPlex will no longer be needed – outlook 2015

Page 17: NeXtScale HPC seminar


NeXtScale Improves on an Already Great iDataPlex Platform

• iDataPlex requires a unique rack to achieve density – most customers prefer standard racks. NeXtScale fits in any standard rack.
• 84 servers per rack is difficult to utilize and configure – InfiniBand fits into multiples of 18 or 24, creating a mismatch with 84 servers. A single NeXtScale rack allows all InfiniBand and Ethernet switching with 72 servers – the perfect multiple (see the port-count sketch below).
• iDataPlex clusters are difficult to configure, with unused switch ports at maximum density. NeXtScale offers 72 nodes per rack plus infrastructure, making configuration straightforward.
• The wide iDataPlex rack drives longer, higher-cost cables. NeXtScale is optimized for the 19" rack, reducing rack-to-rack cable length and cost.
• Other servers and storage in clusters force the addition of standard racks to the layout, which eliminates the iDataPlex data center advantage. NeXtScale, System x, and storage use the same rack – easy to optimize and deploy.

[Figure: iDataPlex rack with filler panels and three longer optical cables vs. NeXtScale rack with three shorter copper cables]
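To see why 72 is called the perfect multiple, the sketch below assumes 36-port FDR leaf switches wired 18 ports down / 18 ports up (a non-blocking fat-tree split); the switch size and topology are assumptions for illustration.

```python
# Leaf-switch utilisation for InfiniBand fabrics built from 36-port switches
# with 18 node-facing ports each (non-blocking fat-tree split); the switch
# size and split are illustrative assumptions.
import math

PORTS_DOWN = 18  # node-facing ports per leaf switch

def leaf_usage(nodes: int, ports_down: int = PORTS_DOWN) -> tuple[int, int]:
    """Return (leaf switches needed, unused node-facing ports)."""
    leaves = math.ceil(nodes / ports_down)
    return leaves, leaves * ports_down - nodes

for nodes in (72, 84):
    leaves, wasted = leaf_usage(nodes)
    print(f"{nodes} nodes -> {leaves} leaf switches, {wasted} unused downlinks")
# 72 nodes -> 4 leaf switches, 0 unused downlinks
# 84 nodes -> 5 leaf switches, 6 unused downlinks
```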

Page 18: NeXtScale HPC seminar


NeXtScale Product Timeline

• Shipping Oct 2013: 6U dense chassis and 1U-tall, ½-wide compute node. The 6U chassis supports mix-and-match nodes.
• Shipping Nov 2013: Storage Native Expansion (Storage NeX) – 1U (2U-tall, ½-wide node), up to 32TB total capacity.
• Shipping 1H 2014: PCI Native Expansion (PCI NeX) – 1U (2U-tall, ½-wide node), GPU or Xeon Phi support.
• A lot more coming: more storage, more I/O options, next-generation processors, microservers.

Page 19: NeXtScale HPC seminar


Storage NeX


Page 20: NeXtScale HPC seminar


Storage NeX


Page 21: NeXtScale HPC seminar


Storage NeX – Internals

• Seven LFF (3.5") drives internal to the Storage NeX, plus one additional drive on the nx360 M4
• Cable-attached to a SAS or SATA RAID adapter or HBA on the nx360 M4
• Drives are not hot-swap
• Initial capacity is up to 4TB per drive (see the capacity sketch below)

[Figure: internal drive layout, drives numbered 0–7]
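A trivial check of the 32TB-in-1U figure quoted earlier, assuming all eight drive positions (seven in the Storage NeX plus one in the node) hold 4TB drives:

```python
# Raw capacity of a fully populated nx360 M4 + Storage NeX pairing.
# 7 drives in the Storage NeX + 1 in the node, 4TB each (the current maximum per drive).

DRIVES_IN_NEX = 7
DRIVES_IN_NODE = 1
TB_PER_DRIVE = 4

raw_tb = (DRIVES_IN_NEX + DRIVES_IN_NODE) * TB_PER_DRIVE
print(f"{raw_tb} TB raw")  # 32 TB raw, before any RAID overhead
```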

Page 22: NeXtScale HPC seminar


PCI NeX


Page 23: NeXtScale HPC seminar


PCI NeX

• Supports 2 full-height, full-length, double-wide adapters at up to 300W
• Provides 2 x16 slots
• Requires 1300W power supplies in the chassis
• Will support Intel Xeon Phi and NVIDIA GPUs
• Expected availability: 1H 2014

Page 24: NeXtScale HPC seminar


PCI NeX – Slots

• Two PCIe Gen3 x16 GPU slots (GPU 0 and GPU 1), one per 1U level (lower and upper)
• Note: all PCIe slots and server GPU slots have separate SMBus connections

[Figure: block diagram – Proc 0 and Proc 1 with DRAM, linked by QPI; x24 and x16 planar connectors, x8 mezzanine connector for the IB/10GbE mezzanine card, and an FHHL slot]

Page 25: NeXtScale HPC seminar


How the PCI NeX attaches

[Figure: PCIe connector locations on the nx360 M4]

Page 26: NeXtScale HPC seminar


Dense chassis – flexibility for the future
Room to scale – future-proof flexibility. The investment platform for HPC.

• Dense compute: 2-socket, ½-wide – 12 compute nodes
• Dense storage: 2-socket, ½-wide with 8 x 3.5" HDD – 6 compute nodes
• GPU / accelerator: 2-socket – 6 compute + 2 GPU
• Full-wide GPU / accelerator with I/O: 2-socket – 4 compute + 4 GPU
• More coming: ultra-dense microservers (3U), dense hot-swap storage (4U), memory- or HDD-rich 1–2 socket full-wide with 6 compute, and more