Welcome To The 34th HPC User Forum Meeting
October 2009
Thank You To:
HLRS/University of Stuttgart
For Hosting The Meeting!
Thank You To Our Sponsors!
Altair Engineering
Bull
IBM
Microsoft
Introduction: Logistics
We have a very tight agenda (as usual). Please help us keep on time!
Review the agenda times, and please take advantage of breaks and free time to network with attendees.
Note: We will post most of the presentations on the web site.
HPC User Forum Mission
To improve the health of the high-performance computing industry
through open discussions, information-sharing and initiatives involving HPC
users in industry, government and academia, along with HPC vendors and
other interested parties.
HPC User Forum Goals
• Assist HPC users in solving their ongoing computing, technical and business problems
• Provide a forum for exchanging information, identifying areas of common interest, and developing unified positions on requirements, by working with users in other sectors and with vendors, to help direct and push vendors to build better products, which should also help vendors become more successful
• Provide members with a continual supply of information on: uses of high-end computers, new technologies, high-end best practices, market dynamics, computer systems and tools, benchmark results, and vendor activities and strategies
• Provide members with a channel to present their achievements and requirements to interested parties
1Q 2009 HPC Market Update
Q109 HPC Market Result – Down 16.8%
Workgroup (under $100K): $282M
Departmental ($100K - $250K): $754M
Divisional ($250K - $500K): $237M
Supercomputers (over $500K): $802M
HPC Servers total: $2.1B
Source: IDC, 2009
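As a quick sanity check (my arithmetic, not IDC's), the four segment figures above should roughly reproduce the reported $2.1B total:

```python
# Q1 2009 HPC server revenues by competitive segment, in $M,
# taken from the slide above (IDC figures).
segments = {
    "Workgroup (under $100K)": 282,
    "Departmental ($100K-$250K)": 754,
    "Divisional ($250K-$500K)": 237,
    "Supercomputers (over $500K)": 802,
}

# The segments should sum to roughly the reported $2.1B total.
total_m = sum(segments.values())
print(f"Total: ${total_m}M (~${total_m / 1000:.1f}B)")  # Total: $2075M (~$2.1B)
```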
Q109 Vendor Share in Revenue
HP: 28.9%
IBM: 25.3%
Dell: 12.0%
Other: 11.7%
NEC: 9.4%
Sun: 3.6%
Fujitsu: 3.4%
Cray: 2.9%
SGI: 1.5%
Bull: 0.6%
Appro: 0.4%
Dawning: 0.3%
Q109 Cluster Vendor Shares
HP: 30.8%
Dell: 21.7%
Other: 21.6%
IBM: 12.3%
Sun: 5.7%
Fujitsu: 2.6%
NEC: 1.8%
Bull: 1.1%
SGI: 1.1%
Appro: 0.7%
Dawning: 0.6%
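Each of the two vendor breakdowns above should account for the full market; a small check of the slide's figures (again, my arithmetic on the published percentages):

```python
# Vendor shares from the two Q109 pie charts above, in percent.
revenue_shares = [28.9, 25.3, 12.0, 11.7, 9.4, 3.6, 3.4, 2.9, 1.5, 0.6, 0.4, 0.3]
cluster_shares = [30.8, 21.7, 21.6, 12.3, 5.7, 2.6, 1.8, 1.1, 1.1, 0.7, 0.6]

# Each breakdown should cover (essentially) 100% of its market.
for name, shares in [("revenue", revenue_shares), ("clusters", cluster_shares)]:
    total = sum(shares)
    assert abs(total - 100.0) < 0.05, (name, total)
    print(f"{name}: {total:.1f}%")
```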
HPC Compared To IDC Server Numbers
HPC Qview Tie To Server Tracker: 1Q 2009 Data
All WW servers as reported in the IDC Server Tracker: $9.9B
HPC Qview compute node revenues: ~$1.05B*
Revenue beyond base nodes: ~$576M (HPC system revenues beyond the base compute nodes: interconnects and switches, inbuilt storage, scratch disks, OS, middleware, warranties, installation fees, service nodes, special cooling features, etc.)
HPC special revenue recognition services: ~$474M (systems sold through custom engineering, R&D offsets, or paid for over multiple quarters)
* This number ties the two data sets on an apples-to-apples basis
Tracker QST data focus: compute nodes
HPC Qview data focus: the complete system, "everything needed to turn it on"
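The three Qview components above should tie back to the ~$2.1B whole-system HPC total; a minimal reconciliation sketch (figures from the slide, in $M):

```python
# IDC HPC Qview components from the slide above, in $M.
compute_nodes = 1050       # ~$1.05B, the apples-to-apples tie to the Server Tracker
beyond_base_nodes = 576    # interconnects, storage, software, warranties, etc.
special_recognition = 474  # custom engineering, R&D offsets, multi-quarter deals

qview_total_m = compute_nodes + beyond_base_nodes + special_recognition
print(f"Qview total: ${qview_total_m}M")  # $2100M, i.e. the $2.1B HPC server total
```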
2010 IDC HPC Research Areas
• Quarterly HPC Forecast Updates (until the world economy recovers)
• New HPC End-user Based Reports: clusters, processors, accelerators, storage, interconnects, system software, and applications; the evolution of government HPC budgets; China and Russia HPC trends
• Power and Cooling Research
• Developing a Market Model for Middleware and Management Software
• Extreme Computing
• Data Center Assessment and Benchmarking
• Tracking Petascale and Exascale Initiatives
Agenda: Day One
12:45 HPC User Forum Welcome/Introductions: Steve Finn (Chair, HPC User Forum) and Earl Joseph (IDC)
13:00 HLRS Welcome/Introductions: Michael Resch, HLRS
13:15 Michael Resch, HLRS, A European View of HPC
13:45 Robert Singleterry, NASA, HPC Directions, Issues and Concerns
14:15 Tom Sterling, Trends and New Directions in HPC
14:45 ISC Update
15:00 Break
15:45 Jim Kasdorf, Pittsburgh Supercomputing Center, National Science Foundation Directions
16:15 Erich Schelkle, ASCS/Porsche, End User HPC Site Update
16:45 Vijay K. Agarwala, Developing a Coherent Cyberinfrastructure from Local Campus to National Facilities
17:00 Thomas Eickermann, Juelich Research Center, PRACE Program Update
17:30 Networking Get-together
18:30 End of first day
Welcome To Day 2 Of The HPC User Forum Meeting
Agenda: Day Two
9:10 Welcome/Logistics: Earl Joseph and Steve Finn, BAE Systems
9:15 Jack Collins, National Cancer Institute Update, Directions and Concerns
9:45 Marie-Christine Sawley, ETH Zurich, CERN group, Data Taking and Analysis at Unprecedented Scale: The Example of CMS
10:15 Paul Muzio, HPC Directions at the City University of New York
10:45 Bull Technology Update, Jean-Marc Denis
11:30 Break
11:45 Lutz Schubert, HLRS, Workflow Management
12:15 New Software Technology Directions at Microsoft, Wolfgang Dreyer
12:30 Wrap-up and plans for future HPC User Forum meetings, Michael Resch, Earl Joseph and Steve Finn
12:35 Farewell and Lunch
Important Dates For Your Calendar
FUTURE HPC USER FORUM MEETINGS:
October 2009 International HPC User Forum Meetings:
HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)
US Meetings:
April 12 to 14, 2010, Dearborn, Michigan, at the Dearborn Inn
September 13 to 15, 2010, Seattle, Washington
Thank You For Attending The 34th HPC User Forum Meeting
Welcome To The 35th HPC User Forum Meeting
October 2009
Thank You To:
Ecole Polytechnique Fédérale de Lausanne (EPFL)
For Hosting The Meeting!
Thank You To Our Sponsors!
Altair Engineering
Bull
IBM
Microsoft
Agenda: Day One
14:00 HPC User Forum Welcome/Introductions, Steve Finn and Earl Joseph
14:15 EPFL Welcome/Introductions, Henry Markram, EPFL, and Giorgio Margaritondo, VP, EPFL
14:30 Neil Stringfellow, CSCS/ETHZ, HPC Strategy in Switzerland, Swiss National Supercomputing Centre
15:00 Henry Markram, Felix Schuermann, EPFL, and David Turek, IBM, "Blue Brain Project Update"
15:30 IBM Research Partnerships, Dave Turek
15:45 Altair Technology Update, Paolo Masera
16:00 Jack Collins, National Cancer Institute Update, Directions and Concerns
16:30 Break
16:45 Markus Schulz, CERN, High-throughput Computing
17:15 Robert Singleterry, NASA
18:00 End of First Day
Welcome To Day 2 Of The HPC User Forum Meeting
Agenda: Day Two
9:00 Welcome/Logistics: Earl Joseph and Steve Finn, BAE Systems, Summarizing the September '09 User Forum
9:00 Victor Reis, US Department of Energy
9:30 Alan Gray, EPCC End User Site Update, University of Edinburgh
10:00 Jim Kasdorf, Pittsburgh Supercomputing Center, "National Science Foundation Directions"
10:30 Thomas Eickermann, Juelich Supercomputing Centre, PRACE Project Update
11:00 Break
11:15 Panel on Using HPC to Advance Science-Based Simulation
Panel Moderators: Henry Markram and Steve Finn
Panel Members: Jack Collins, Thomas Eickermann, Victor Reis, Felix Schuermann, Markus Schulz and Neil Stringfellow
12:15 New Software Technology Directions at Microsoft
12:30 Wrap-up and plans for future HPC User Forum meetings, Henry Markram, Earl Joseph and Steve Finn
12:45 Farewell and Lunch
Important Dates For Your Calendar
FUTURE HPC USER FORUM MEETINGS:
October 2009 International HPC User Forum Meetings:
HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday)
EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday)
US Meetings:
April 12 to 14, 2010, Dearborn, Michigan, at the Dearborn Inn
September 13 to 15, 2010, Seattle, Washington
Thank You For Attending The 35th HPC User Forum Meeting
OEM Mix Of HPC Special Revenue Recognition Services
Non-SEC reported product revenues = $474M
IBM: 43%
HP: 30%
Dell: 12%
Other: 11%
Sun: 4%
Notes:
• Includes product sales that are not reported by OEMs as product revenue in a given quarter; sometimes HPC systems are paid for across a number of quarters or even years
• Includes NRE, if required for a specific system
• Includes custom engineering sales
• Some examples: Earth Simulator, ASCI Red, ASCI Red Storm, DARPA systems, and many small and medium HPC systems that are sold through a custom engineering or services group because they need extra things added
Areas Of HPC “Uplift” Revenues
How the $576M “uplift” revenues are distributed:
Computer hardware (in cabinet): 45%
Software: 16%
External interconnects: 12%
External storage: 12%
Bundled warranties: 8%
Misc. items: 7%
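Applying the percentages above to the ~$576M uplift total gives rough dollar figures per area (my arithmetic; the slide itself reports only percentages):

```python
# "Uplift" revenue split from the slide above: each area's share
# of the ~$576M of revenue beyond the base compute nodes.
uplift_total_m = 576
shares = {
    "Computer hardware (in cabinet)": 0.45,
    "Software": 0.16,
    "External interconnects": 0.12,
    "External storage": 0.12,
    "Bundled warranties": 0.08,
    "Misc. items": 0.07,
}

assert abs(sum(shares.values()) - 1.0) < 1e-9  # the shares cover the whole pie
for area, share in shares.items():
    print(f"{area}: ~${uplift_total_m * share:.0f}M")
```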
Areas Of HPC “Uplift” Revenues
Notes:
* Computer hardware (in cabinet): hybrid nodes, service nodes, accelerators, GPGPUs, FPGAs, internal interconnects, in-built disks, in-built switches, special cabinet doors, special signal processing parts, etc.
* External interconnects: switches, cables, extra cabinets to hold them, etc.
* External storage: scratch disks, interconnects to them, cabinets to hold them, etc. (this excludes user file storage devices)
* Software: includes both bundled and separately charged software if sold by the OEM or on the purchase contract; covers the operating system, license fees, the entire middleware stack, compilers, job schedulers, etc. (it excludes all ISV applications unless sold by the OEM and in the purchase contract)
* Bundled warranties
* Misc. items: since the HPC taxonomy includes everything required to turn on the system and make it operational, this covers items like bundled installation services, special features and other add-on hardware, and even a special paint job if required
Special Paint Jobs Are Back …
http://www.afrl.hpc.mil/consolidated/hardware.php