
The Big Crunch: The Future of Data Centres?


Page 1: The Big Crunch: The Future of Data Centres?

The Big Crunch: The Future of Data Centres?
12th March 2015 – Peter Hopton BEng, MBCS, FRSA – Founder and Chief Visionary Officer, Iceotope

Page 2: The Big Crunch: The Future of Data Centres?

Denser Is Coming

• Processor Efficiency Has Doubled Every 18 Months

• Data Transmission Efficiency Has Not

• IT Hardware Has Been Getting Denser, Data Centres Have Struggled to Keep Up

Page 3: The Big Crunch: The Future of Data Centres?

IT Hardware is Getting Denser

• Past – Packing a rack to 5 kW in 2000 was unheard of.

• Present – Off-the-shelf air-cooled servers in a 48U cabinet reach up to 50 kW – uncoolable with air.

• Future – In 3-5 years the densest motherboard will be 3" x 5" and will draw 350 W (a quick density sketch follows below).
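For scale, a rough Python sketch of what those numbers imply. The board figures come from the slide above; the two-boards-per-U packing factor is an illustrative assumption, not a figure from the talk:

```python
# Power density of the projected 3" x 5", 350 W board (figures from the slide).
board_area_in2 = 3 * 5            # 15 square inches
board_power_w = 350
print(board_power_w / board_area_in2)   # ~23.3 W per square inch

# Hypothetical packing: if, say, 2 such boards fit per U of a 48U cabinet
# (an assumption for illustration only), rack power becomes:
boards_per_u = 2
rack_power_kw = 48 * boards_per_u * board_power_w / 1000
print(rack_power_kw)              # 33.6 kW - well into "uncoolable with air" territory
```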

Page 4: The Big Crunch: The Future of Data Centres?

But DCs Struggle to Keep Up
Average Density is Very Low in Colocation

• Source: Uptime Institute Survey 2013

• Growth In Average Density – Evident But Much Slower than Server Growth

• Explanation: Colocation Business Model, Low Bandwidth Interconnect, Air Cooling & Energy Efficiency Drive

• Maybe DCs are Underpopulated?

Page 5: The Big Crunch: The Future of Data Centres?

What Does Average Density Look Like?

• Literally this: 2 x 2U high-density units

• Imagine that in a 48U cabinet!

Page 6: The Big Crunch: The Future of Data Centres?

Why?
Koomey's Law

• IT Doubles in Efficiency Every 18 Months

• Remains Unbroken
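A minimal sketch of what an 18-month doubling means when compounded – pure arithmetic from the stated doubling period, no other assumptions:

```python
# Koomey's law: computations per joule roughly double every 18 months.
def efficiency_multiplier(years, doubling_period_years=1.5):
    """How many times more efficient IT becomes after `years`."""
    return 2 ** (years / doubling_period_years)

print(efficiency_multiplier(3))    # 4.0  -> 4x more efficient in 3 years
print(efficiency_multiplier(10))   # ~101.6x more efficient in a decade
```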

Page 7: The Big Crunch: The Future of Data Centres?

And Interconnect?
Struggling to Keep Up

• Copper interconnect is "maxed out" – laws of physics.
• Optical interconnect is expensive, bulky, and only breaks even on energy at 50 cm (a toy break-even model follows after this list).
• Energy per bit (picojoules per bit) has increased from DDR3 to DDR4, rather than decreased.
• IT hardware vendors are moving RAM and storage closer to the CPU, increasing density.
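A toy Python model of that 50 cm crossover. All three per-bit energy constants below are illustrative assumptions chosen only to reproduce the stated break-even; real figures vary by technology and generation:

```python
# Toy model: copper energy/bit grows with link length, while optics pays a
# fixed electrical-optical conversion overhead but scales slowly with length.
COPPER_PJ_PER_BIT_PER_CM = 0.2      # assumed
OPTICAL_FIXED_PJ_PER_BIT = 9.5      # assumed E/O + O/E conversion overhead
OPTICAL_PJ_PER_BIT_PER_CM = 0.01    # assumed

def copper_energy(cm):
    return COPPER_PJ_PER_BIT_PER_CM * cm

def optical_energy(cm):
    return OPTICAL_FIXED_PJ_PER_BIT + OPTICAL_PJ_PER_BIT_PER_CM * cm

# Break-even length: fixed overhead / difference in per-cm slopes.
breakeven_cm = OPTICAL_FIXED_PJ_PER_BIT / (
    COPPER_PJ_PER_BIT_PER_CM - OPTICAL_PJ_PER_BIT_PER_CM)
print(breakeven_cm)  # 50.0 - below this length, copper wins on energy
```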

Page 8: The Big Crunch: The Future of Data Centres?

Remember the 3" x 5"
Denser is Coming

Page 9: The Big Crunch: The Future of Data Centres?

But Can We Keep on Spreading the Heat Out?
10 little chips in a big 48U cabinet?

• Simply Put, Yes, But It Will Be Expensive
• The High-Speed Interconnects Will All Have to Be Optical, NOT Copper
• Active Interconnects Will Use More Power and Cost More
• Cost Economics of Dense Liquid-Cooled Facilities Will Win Big

Page 10: The Big Crunch: The Future of Data Centres?

The Commercial Opportunity In Density
Denser IT = More in the Same Facility

• Liquid Cooling Designs Already Offer 60 kW Densities
• Total Liquid Cooling Offers Reduced Infrastructure Costs
• Total Liquid Cooling Offers Cooling PUEs of 1.02
• Moving Servers Closer Together Reduces Interconnect Costs

Page 11: The Big Crunch: The Future of Data Centres?

Where’s This Going to Happen?

• HPC: Interconnect Speeds and Processing Intensities in HPC are Driving Density

• Cloud: Cloud and Virtualised Environments Will Follow – Why? Because, like HPC, they have high utilisations, and their high-speed connectivity is increasing.

• Liquid Cooling is Already Key in New HPC Installs Today

Page 12: The Big Crunch: The Future of Data Centres?

Who Are Iceotope?

• British Company based in Sheffield, UK
• Backed By:
  • Schneider Electric
  • Solvay Specialist Plastics
• Additional partnerships with:
  • Intel
  • 3M
• >10,000,000 CPU Core-Hours of Use

Page 13: The Big Crunch: The Future of Data Centres?

Real World Results

• Up to 60 kW Cabinet Density – 72 Blades Per Cabinet
• Cooling Overhead (internal): as low as 0.33%
• Cooling Overhead (external): 1.7%-4% (pPUE 1.017-1.04 @ 30 kW/cab – worked through below)
  • Poznan National Supercomputing Centre indicated 1.7% (2% overall) average
  • Romonet indicated 4% cooling overhead in a 2N model in Houston, Texas
• IT Energy Reduction vs Fan-Cooled: 8%
  • Based on measurements at Leeds University (Chi et al., 2013)
• Overall performance-per-watt benefit vs equivalent servers in a high-density, close-coupled cooling environment: 40.8%
  • Based on studies at Leeds University (Chi et al., 2013)
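To connect the overhead percentages to the pPUE figures above: partial PUE for cooling is total power over IT power, so a cooling-overhead fraction f gives pPUE = 1 + f, matching the slide's numbers:

```latex
\mathrm{pPUE} = \frac{P_\mathrm{IT} + P_\mathrm{cooling}}{P_\mathrm{IT}} = 1 + f
\qquad
f = 1.7\% \;\Rightarrow\; \mathrm{pPUE} = 1.017
\qquad
f = 4\% \;\Rightarrow\; \mathrm{pPUE} = 1.04
```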

Page 14: The Big Crunch: The Future of Data Centres?

Our Ethos: The Elimination Of Waste

1. Waste infrastructure required to support servers
2. Waste power consumed in supporting the electronics – and running the same electronics with greater speed and efficiency
3. Waste product – producing a viable waste product in the form of hot water

Page 15: The Big Crunch: The Future of Data Centres?

Iceotope's Vision

[Diagram: a conventional air-cooled data centre – air conditioners, raised floor, back-up batteries ("UPS"), generators, external coolers, power distribution, and IT cabinets ("racks") each holding 10-20 servers – compared against a Total Liquid Cooled (TLC) facility; the per-component quantities from the original graphic are not recoverable.]

1. TLC needs no A/C
2. No airflow, no raised floor
3. TLC servers have no fans – they use less power
4. With no A/C, average and peak power consumption are reduced
5. At least 2x as many servers per cabinet

Page 16: The Big Crunch: The Future of Data Centres?

PetaGen Blades (Common Blade Ecosystem)

Page 17: The Big Crunch: The Future of Data Centres?

The Iceotope Blade is sealed; the electronics are immersed.

Exotic Coolant:
• High thermal expansivity
• Electrical insulator
• Certified non-flammable
• Non ozone-depleting
• Clean & safe

Page 18: The Big Crunch: The Future of Data Centres?

PetaGen – 72 Blade Cabinet

• Standard 800 mm x 1200 mm footprint
• Takes 72 Iceotope Blades
• Zero Airflow
• Deploy in "grey space" not "white space"
• Just add (hot) coolant loop and power
• Liquid Cooled 415 V-48 V Power Supplies
• Fully redundant at 30 kW
• Maximum Capacity 60 kW (per-blade arithmetic below)
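A quick consistency check on those cabinet figures – pure arithmetic from the numbers above, no added assumptions:

```python
# Per-blade power budget implied by the PetaGen cabinet figures above.
blades = 72
redundant_kw = 30      # fully redundant capacity
max_kw = 60            # maximum capacity

print(redundant_kw * 1000 / blades)  # ~417 W per blade with full redundancy
print(max_kw * 1000 / blades)        # ~833 W per blade at maximum capacity
```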

Page 19: The Big Crunch: The Future of Data Centres?

Thanks For Listening
@petehopton
@Iceotope