Vendor Update: Penguin Computing
Arend Dittmer, Penguin Computing
Copyright © 2009 Penguin Computing, Inc. All rights reserved
100% Linux focus
Founded 1998 – initial focus on ‘reliable’ Linux systems
Shift to HPC in 2003
Over 2500 Customers - Enterprise, Academia, Government, Web Companies
Penguin Computing History
Penguin Computing is a global specialist in the technical computing market delivering
solutions and services from the workstation to the cloud with a focus on ease of use, cutting edge
technology and delivering superior customer value
Scyld ClusterWare™
Scyld TaskMaster™
Enterprise Linux
Compilers & Tools
HPC Cluster Solutions
Professional Services and Engineering
Cluster Management Software
Linux Systems
Storage
Workstations
Servers
HPC Clusters
HPC as a Service - Penguin on Demand
Optimized for Linux | Ease of Management | Elastic Computing | Linux and Cluster Expertise
Intel® & AMD® Rackmount Servers
Storage
Networking
Infrastructure
GPGPUs
Professional Workstations
Factory Integration
Onsite Installation
Training
Product Support
Software Support
Customized Service Engagements
On-demand environment
HPC ‘optimized’
Tiered ‘Pay-as-you-go’ pricing
Premium set-up and support services
Dual-processor server
Tested and qualified for ‘super-clocked’ AMD processors
Highest clock speeds available in any current AMD server
Standard 1U form factor
Best price/performance in $/GHz
Hardware Update: Altus 1750
High Performance Computing Today - Challenges
Variable demand
Expertise required to provide quality service
Technology obsolescence
Access on-demand
Dynamically scalable
‘Pay-as-you-go’
Expertise included
Environment setup
Optimization
Applications
On-going support
Ideal as ‘spill over’ for ‘bursty’ workload
HPC as a Service – The future of HPC
Sample of Current POD Usage
Gene sequencing - SOLiDBioScope.com
Digital Content encoding
Web data processing for 3D mapping
Fluid dynamics analysis – heart modeling
Transient finite element analysis – safety systems
Real time rendering – mental images Reality Server
Customer Case Studies
Backup Slides
Customer Case Studies
Replace in-house cluster with POD compute and storage solution.
Integrated POD with Amazon’s EC2 to optimize Earthmine’s workflow. Combined with POD’s Disk2Storage service, Earthmine is able to ship and process data from world-wide collection systems completely on-demand.
POD’s Cloud Management Platform processes up to 50,000 Earthmine jobs at a time.
HPC cloud cycles and storage to determine Ozzy Osbourne's genetic sequence.
Fast genome sequencing using Life Technologies’ BioScope SaaS offering, “SOLiDBioScope.com”. Project completed in 1 week using POD’s HPC cluster.
Recently featured in Bio-ITWorld.com and the St. Louis Business Journal
Part of an ongoing $400,000 POD services contract with Life Technologies.
“Penguin Computing's Disk2Server Solution allows us to ship terabytes of data from our collection vehicles deployed across the world directly to the POD facility for processing with very minimal downtime and almost no hassle.”
- John Ristevski, CTO and Co-founder, Earthmine
Solution Provided
“Matt Dyer used cloud resources offered by Penguin Computing (Penguin Computing On Demand). After Cofactor uploaded the sequence data … Dyer logged on and took over with immediate access to thousands of compute nodes.”
- Bio-IT World.com, October 30, 2010
Initial capital costs ($100K and up)
Ongoing operating costs (power, cooling, support, service contracts, administration)
High cost per core hour (peak can be 3 to 4x average)
Technology obsolescence (current for 12 months)
Time to results (high utilization = delays in job execution)
Fixed capacity (demand is variable)
Difficult to collaborate outside of the firewall (data collaboration requirements are growing)
High Performance Computing Today - Challenges
[Chart: Total Core Hour Demand Per Month – percent utilization (0–100%) by day of month (1–30)]
Capacity:
Xeon E5540/E5430 quad-core processors
GigE or Infiniband interconnect, 4GB/core
Panasas high-speed storage, 10GigE
NetApp direct-attached file storage, GigE
1TB scratch (local) storage per node
NVIDIA Tesla GPUs
150Mb average bandwidth (burstable to 1Gb)
Additional capacity:
Higher core-count servers (24 cores per node)
Larger memory (128GB plus)
HPC Capabilities on POD
“Fire and forget” compute
Persistent, secure login and compute environment with compute and storage proximity
Scales from 1 to 640 cores per user (soft limit)
Maximum of 50% average utilization (headroom = minimum wait times)
Standard interface: ssh, rcp, scp
Typical directory structure: /home, /data, /scratch
Easy data movement: 50Mb Internet connection per user; “helping hands” for POD Caddy data transfer (>50GB); server-to-server drive transfer
POD Features
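The data-movement choice above (the 50Mb per-user link vs. shipping disks via POD Caddy for sets over 50GB) is simple arithmetic. A minimal sketch using only the figures on this slide; the helper functions are illustrative, not Penguin tools:

```python
# Rough helper for choosing between the data-movement options above:
# scp/rsync over the 50 Mb/s per-user link, or the POD Caddy disk
# service for large (>50 GB) data sets. Figures come from the slide;
# the functions themselves are hypothetical examples.

LINK_MBPS = 50           # per-user Internet connection
CADDY_THRESHOLD_GB = 50  # slide's cutoff for shipping disks

def transfer_hours(size_gb, mbps=LINK_MBPS):
    """Hours to move size_gb over an mbps link, ignoring overhead."""
    return size_gb * 8 * 1024 / mbps / 3600

def suggest_method(size_gb):
    if size_gb > CADDY_THRESHOLD_GB:
        return "POD Caddy (ship disks)"
    return "scp/rsync over the Internet"

print(suggest_method(10), f"~{transfer_hours(10):.1f} h")  # small data set
print(suggest_method(500))                                 # large data set
```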
POD Unit – 640 Cores
POD Architecture (User’s view)
Virtual Login Node
POD Login Node (VM):
• all software and middleware typical of a Head Node
• submit jobs, look at results, compile, etc.
• remote mount to data shares
• TORQUE client (points to Head Node)
POD Head Node
POD Compute Node (×6)
ssh / scp / rsync
Job scheduler (fair-share policy)
/scratch/[username]
/home /panasas/[username] /podstore/[username]
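Since the login node runs a TORQUE client, a user's job takes the form of a standard PBS script submitted with qsub. A minimal sketch of that workflow; the job name, resource counts, and solver command are hypothetical:

```python
# Build the text of a minimal TORQUE/PBS job script like one a POD user
# would submit with `qsub` from the virtual login node. Job name, node
# counts, walltime, and command are invented examples.

def make_pbs_script(job_name, nodes, ppn, walltime, command):
    """Return a minimal TORQUE job script as a string."""
    return "\n".join([
        "#!/bin/sh",
        f"#PBS -N {job_name}",
        f"#PBS -l nodes={nodes}:ppn={ppn}",
        f"#PBS -l walltime={walltime}",
        "cd $PBS_O_WORKDIR",  # run from the directory qsub was called in
        command,
    ])

script = make_pbs_script("heart_model", nodes=8, ppn=8,
                         walltime="02:00:00",
                         command="mpirun ./solver input.dat")
print(script)

# From the login node, the workflow would then be roughly:
#   stage data in with scp/rsync -> qsub job.pbs ->
#   fetch results from /scratch/[username] with scp or rsync
```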
Example POD Customers
Bio / Life Sciences
Enterprise / ISV
Research
Sample of Current POD Usage
Gene sequencing and bioinformatics - SOLiDBioScope.com
Movie post-processing
Web data processing for mapping
Fluid dynamics analysis – heart modeling
Transient finite element analysis – safety systems
Real time rendering – mental images Reality Server
Developing encoding IP
HDTV, Blu-ray, etc.
Scalable compute intensive algorithms
Iterative development process
Faster turnaround of encoding jobs
Increased development efficiency
Massive amounts of ‘raw’ data as encoding input
Limited ‘in-house’ resources and admin capabilities
Uneven workload over time
Case Study: Dolby Laboratories
Migration of scalable algorithms to POD
Scale out of up to 100 cores per job
Data management
‘Raw’ data upload through POD’s caddy service
Processed encoded data downloaded
Reduced job runtime from 2 days to 2 hours
“The use of POD is contributing significantly to making our development process more efficient”
Gopi Lakshminarayanan, Director of Software Development
The POD Solution
Specialized in 3D street level imagery
Library of 3D street level data/APIs for web applications
Allows for application integration of interactive street level maps
Images spatially accurate
Eight 2D images per location need to be processed to create accurate 3D ‘image’
Image processing computationally expensive
Uneven workload dependent on # of data collection vehicles
Limited in-house resources for data processing
Case Study: Earthmine Inc.
“Overall, the experience has been great. POD is fast, reliable, and works as described. In particular, POD has provided excellent support and Penguin has gone out of its way many times to accommodate our requests for technical help and to meet our fast-changing needs.”
- John Ristevski, CTO, Earthmine
The POD Solution
POD provides ‘overflow’ capacity
Submission of up to 40,000 jobs per day
Integration with Amazon EC2 hosting Earthmine cloud services
Penguin’s Ecosystem
Application / ISV Partners
Operating System & Compiler Partners
Hardware Partners
Accessible from anywhere
Highly-secure, state-of-the-art facility protects account information, data and job execution
Expertise included: environment setup, optimization, on-going support
Pay as you go
On-demand access, dynamically scalable
1280 compute cores per POD - each compute node has dual 2.67GHz Xeon processors, 8 cores each, with 32GB DDR2-667 memory per server (4GB per core) and 1TB local disk scratch space
GigE or Infiniband (10Gb) interconnect between compute nodes
Direct attached storage on a 10Gb network, with the option of Panasas high-speed parallel file system storage
Web servers directly attached to the compute infrastructure for hosting customer-facing Web sites
HPC as a Service
“Overall, the experience has been great. POD is fast, reliable, and works as described. In particular, POD has provided excellent support and Penguin has gone out of its way many times to accommodate our requests for technical help and to meet our fast-changing needs as far as software and schedule.”
- John Ristevski, Earthmine Inc.
“Using the POD on-demand computing system for processing high-end genetics data was an extremely positive experience in high performance cloud computing … The on-demand availability of powerful servers coupled with exemplary technical support makes POD the superior choice for cutting-edge medical research.”
- Milan Radovich, Indiana School of Medicine
"The computational resources to process next-gen sequencing data keeps growing as sequencers increase the amount of DNA generated. Penguin Computing's POD solution extends our computational capacity on demand. In that way we can process unpredicted picks of load we may have. PC's support people is highly efficient and the infrastructure delivers the power our computations require."
- David Rio Deiros, Baylor College of Medicine
POD Testimonials
Penguin on Demand: ~$0.25/core hour - no monthly commitment
Academic pricing available
Cost Comparison
Usage Rates
Service / Fee:
Core hour rates: $0.27 to $0.20 per core hour (depends on usage)
Dedicated servers: $1,400 to $1,650 per month ($0.16 to $0.19 pch)
On-demand storage and fast-access scratch storage: $200 per TB-month
Combination Login/Storage Node (48TB raw): $2,200/month (diskless)
Persistent Login Node: Virtual Machine - $165/month; Physical Server - $290/month
Bandwidth (over first TB per month): $0.25 per GB in; $0.15 per GB out
Support: included with 2,000 core hour per month users; per-incident support available at $350
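The rate card above turns into a quick monthly estimate. A back-of-the-envelope sketch using only the published numbers; the example workload is invented:

```python
# Back-of-the-envelope monthly POD bill from the rate card above.
# Rates are the slide's published figures; the workload is made up.

CORE_HOUR_HIGH = 0.27       # $/core-hour at low usage
CORE_HOUR_LOW = 0.20        # $/core-hour at high usage
STORAGE_PER_TB = 200        # $/TB-month, on-demand storage
GB_IN, GB_OUT = 0.25, 0.15  # $/GB beyond the first TB each month

def monthly_cost(core_hours, storage_tb, gb_in=0, gb_out=0,
                 rate=CORE_HOUR_HIGH):
    """Estimate one month's bill for a given workload."""
    return (core_hours * rate
            + storage_tb * STORAGE_PER_TB
            + gb_in * GB_IN
            + gb_out * GB_OUT)

# 10,000 core-hours, 2 TB stored, 100 GB in / 200 GB out over the free TB:
print(f"${monthly_cost(10_000, 2, 100, 200):,.2f}")  # $3,155.00
```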
Storage Options (capacity – cost):
PODStore (R4724): 48TB each, on demand – $200/TB-month
Panasas (Series 8): 40TB total, on demand (fast scratch) – $650/TB-month
R1701 (4 drive bays): 4–8 TB dedicated, long-term – $60–120/TB-month (requires annual commitment)
R4724 (24 drive bays): 24–48 TB dedicated, long-term – $50–100/TB-month (requires annual commitment)
POD Storage Options
• Dedicated Storage Node also serves as a physical Login Node
• User data transferred to POD on disks can be immediately available
• User can have complete control over data transferred (software RAID, encryption, etc.)
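For data that lives on POD long-term, the dedicated tiers above undercut on-demand storage quickly. A quick comparison using the table's rates; the 10 TB working set is an arbitrary example:

```python
# Annual cost of keeping data on POD under the storage tiers above.
# Rates come from the table; the 10 TB working set is an example.

ON_DEMAND = 200       # $/TB-month, PODStore on demand
DEDICATED_HIGH = 100  # $/TB-month, R4724 annual commitment (upper bound)
DEDICATED_LOW = 50    # $/TB-month, R4724 annual commitment (lower bound)

def annual_cost(tb, rate_per_tb_month, months=12):
    """Total storage cost for tb terabytes held for `months` months."""
    return tb * rate_per_tb_month * months

print(annual_cost(10, ON_DEMAND))       # 24000
print(annual_cost(10, DEDICATED_HIGH))  # 12000
print(annual_cost(10, DEDICATED_LOW))   # 6000
# On-demand only wins when data stays on POD for a few months or less.
```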
1 year, 2048 cores
Fixed capacity
$0.15 PCH
$2.7M
Hybrid Usage – Annual Commitment Plus On-Demand
1 year, 1024 cores (base utilization)
1024 cores on demand (33% utilization)
$0.17 PCH (average)
$2.1M
Same peak capacity
Annual commitment, fixed capacity
Pure on-demand scalability
[Chart: $ per core hour – annual commitment vs. hybrid (partial annual commitment plus on-demand)]
Using on-demand to satisfy peaks reduces overall cost even though the $ per core hour rate is higher.
[Charts: Total Core Hour Demand Per Month – percent utilization (0–100%) by day of month (1–30)]
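The slide's fixed-vs-hybrid numbers can be checked directly. A sketch of that arithmetic, assuming an 8,760-hour year and the slide's blended $0.17 rate; the result lands close to the quoted $2.1M:

```python
# Check the slide's comparison: a fixed 2048-core annual commitment at
# $0.15 per core-hour vs. a hybrid of 1024 committed cores plus 1024
# on-demand cores at 33% utilization, billed at the slide's blended
# average of $0.17 per core-hour.

HOURS_PER_YEAR = 24 * 365  # 8760

fixed_cost = 2048 * HOURS_PER_YEAR * 0.15             # ~$2.7M

hybrid_core_hours = (1024 * HOURS_PER_YEAR            # committed base load
                     + 1024 * HOURS_PER_YEAR * 0.33)  # on-demand peaks
hybrid_cost = hybrid_core_hours * 0.17                # ~$2.0M (slide: $2.1M)

print(f"fixed : ${fixed_cost / 1e6:.2f}M")
print(f"hybrid: ${hybrid_cost / 1e6:.2f}M")
```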
Founded 1998 – One of HPC industry’s longest track records of success
Over 2500 Customers in Enterprise, Academia, Government and Leading Web Companies
Donald Becker, CTO – inventor of the Beowulf architecture and a primary contributor to the Linux kernel
Penguin Vision and Focus
Penguin Computing is fueling the next generation of growth in the HPC computing market by delivering unprecedented ease of use, scalability and utility to the industry’s most demanding users.
2,500 Tier 1 Customers
Government / Defense
Life Sciences
Research & Education
Enterprise / Manufacturing
Scyld ClusterWare™
Scyld TaskMaster™
Enterprise Linux
Compilers & Tools
Penguin Solutions Offer Flexible Computing Options
Professional Services and Engineering
Cluster Management Software
Linux Systems
Storage
Workstations
Servers
HPC Clusters
Cloud/On-Demand Hosting Services
Optimized for Linux | Ease of Management | Elastic Computing | Linux and Cluster Experts
Intel® & AMD® Rackmount Servers
Professional Workstations
Storage
Networking
Datacenter Infrastructure
Factory Integration
Onsite Installation
Training
Product Support
Software Support
Customized Service Engagements
On-demand HPC environment
Simple data services with tiered pricing
Premium set-up and support services
Penguin’s Ecosystem
Application / ISV Partners
Operating System & Compiler Partners
Hardware Partners
Initial capital costs ($100K and up)
Ongoing operating costs (power, cooling, support, service contracts, administration)
High cost per core hour (peak can be 3 to 4x average)
Technology obsolescence (current for 12 months)
Time to results (high utilization = delays in job execution)
Fixed capacity (demand is variable)
Difficult to collaborate outside of the firewall (data collaboration requirements are growing)
High Performance Computing Today - Challenges
[Chart: Total Core Hour Demand Per Month – percent utilization (0–100%) by day of month (1–30)]
HPC in the cloud (rather than cloud computing)
Highly-secure, state-of-the-art facility protects account information, data and job execution.
Expertise included: environment setup, optimization, on-going support
Accessible from anywhere in the world
Pay as you go
On-demand access, dynamically scalable
1280 compute cores per POD - each compute node has dual 2.67GHz Xeon processors, 8 cores each, with 32GB DDR2-667 memory per server (4GB per core) and 1TB local disk scratch space
GigE or Infiniband (10Gb) interconnect between compute nodes
Direct attached storage on a 10Gb network, with the option of Panasas high-speed parallel file system storage
Web servers directly attached to the compute infrastructure for hosting customer-facing Web sites
HPC as a Service – A Future of HPC
“Fire and forget” compute
Persistent, secure login and compute environment with compute and storage proximity
Scales from 1 to 640 cores per user (soft limit)
Maximum of 50% average utilization (headroom = minimum wait times)
Standard interface: ssh, rcp, scp
Typical directory structure: /home, /data, /scratch
Easy data movement: 50Mb Internet connection per user; “helping hands” for POD Caddy data transfer (>50GB); server-to-server drive transfer
HPC expertise and fanatical on-line support
High-speed Internet connection (50Mbps per user)
POD Features
POD Unit – 1280 Cores
Internet (50Mb, burstable to 1Gb)
x 2
POD Architecture (User’s view)
Virtual Login Node
POD Login Node (VM):
• all software and middleware typical of a Head Node
• submit jobs, look at results, compile, etc.
• remote mount to data shares
• TORQUE client (points to Head Node)
POD Head Node
POD Compute Node (×6)
ssh / scp / rsync
Job scheduler (fair-share policy)
/scratch/[username]
/home /panasas/[username] /podstore/[username]
Storage Options (capacity – cost):
PODStore (R4724): 48TB each, on demand – $200/TB-month
Panasas (Series 8): 40TB total, on demand (fast scratch) – $650/TB-month
R1701 (4 drive bays): 4–8 TB dedicated, long-term – $60–120/TB-month (requires annual commitment)
R4724 (24 drive bays): 24–48 TB dedicated, long-term – $50–100/TB-month (requires annual commitment)
POD Storage Options
• Dedicated Storage Node also serves as a physical Login Node
• User data transferred to POD on disks can be immediately available
• User can have complete control over data transferred (software RAID, encryption, etc.)
“Overall, the experience has been great. POD is fast, reliable, and works as described. In particular, POD has provided excellent support and Penguin has gone out of its way many times to accommodate our requests for technical help and to meet our fast-changing needs as far as software and schedule.”
- John Ristevski, Earthmine Inc.
“Using the POD on-demand computing system for processing high-end genetics data was an extremely positive experience in high performance cloud computing … The on-demand availability of powerful servers coupled with exemplary technical support makes POD the superior choice for cutting-edge medical research.”
- Milan Radovich, Indiana School of Medicine
"The computational resources to process next-gen sequencing data keeps growing as sequencers increase the amount of DNA generated. Penguin Computing's POD solution extends our computational capacity on demand. In that way we can process unpredicted picks of load we may have. PC's support people is highly efficient and the infrastructure delivers the power our computations require."
- David Rio Deiros, Baylor College of Medicine
POD Testimonials
Penguin on Demand: ~$0.25/core hour - no monthly commitment
Academic pricing available
Cost Comparison
Usage Rates
Service / Fee:
Core hour rates: $0.27 to $0.20 per core hour (depends on usage)
Dedicated servers: $1,400 to $1,650 per month ($0.16 to $0.19 pch)
On-demand storage and fast-access scratch storage: $200 per TB-month
Combination Login/Storage Node (48TB raw): $2,200/month (diskless)
Persistent Login Node: Virtual Machine - $165/month; Physical Server - $290/month
Bandwidth (over first TB per month): $0.25 per GB in; $0.15 per GB out
Support: included with 2,000 core hour per month users; per-incident support available at $350
1 year, 2048 cores
Fixed capacity
$0.15 PCH
$2.7M
Hybrid Usage – Annual Commitment Plus On-Demand
1 year, 1024 cores (base utilization)
1024 cores on demand (33% utilization)
$0.17 PCH (average)
$2.1M
Same peak capacity
Annual commitment, fixed capacity
Pure on-demand scalability
[Chart: $ per core hour – annual commitment vs. hybrid (partial annual commitment plus on-demand)]
Using on-demand to satisfy peaks reduces overall cost even though the $ per core hour rate is higher.
[Charts: Total Core Hour Demand Per Month – percent utilization (0–100%) by day of month (1–30)]
Example POD Customers
Bio / Life Sciences
Enterprise / ISV
Research
Sample of Current POD Usage
Gene sequencing and bioinformatics (SOLiDBioScope.com)
Film post-processing
Batch data analysis (financial)
Web data processing (mapping)
FEA as a service – heart modeling
Fluid dynamics analysis
General purpose transient finite element analysis
State-Of-The-Art Data Center
MPLS Network Capable for VPN
Virtual Private Cluster technology in Q3, 2010
Acceptable Use Policies
Dedicated Storage: not shared; cheaper than on-demand storage
ITAR Compliance: US citizens; data and application security (dedicated Login Node)
“Server to Server” Data Transfer: user sets security level; full encryption capability
POD Security
Programmatic interface (Web services)
Options for increased security – “DSC” (dynamically scalable cluster): created on-demand at the switch level; VPN and MPLS capable; full access to compute resources (e.g. IPMI)
Options for increased flexibility: root access to physical head node; subnet management; OS provisioning; dedicated compute resources
Design Goals for POD2
Virtual Private Cluster
Internet (150Mb, burstable to 1Gb)
x 2 Virtual Cluster Manager
DSC Management:
Resource availability
Cluster reservation
Cluster creation, scaling, tear-down
Subnet management
Reporting/Administration
Storage management
Scyld for the Cloud (Q3, 2010)
Scyld ClusterWare Functionality
> Job scheduling (TORQUE/SGE/TaskMaster)
> Node monitoring and control (IPMI/Beostatus)
> Asset Monitoring (synchronized with Support Portal)
> Hybrid provisioning (node grouping)
Integrated Management Framework
POD
Internal clusters
Internet orMPLS network
LAN or VPN
Development Schedule – April 2010
State-Of-The-Art Data Center
MPLS Network Capable for VPN
Virtual Private Cluster technology in Q3, 2010
Acceptable Use Policies
Dedicated Storage: not shared; cheaper than on-demand storage
ITAR Compliance: US citizens; data and application security (dedicated Login Node)
“Server to Server” Data Transfer: user sets security level; full encryption capability
POD Security
Programmatic interface (Web services)
Options for increased security – “DSC” (dynamically scalable cluster): created on-demand at the switch level; VPN and MPLS capable; full access to compute resources (e.g. IPMI)
Options for increased flexibility: root access to physical head node; subnet management; OS provisioning; dedicated compute resources
Design Goals for POD2
Virtual Private Cluster
Internet (150Mb, burstable to 1Gb)
x 2 Virtual Cluster Manager
DSC Management:
Resource availability
Cluster reservation
Cluster creation, scaling, tear-down
Subnet management
Reporting/Administration
Storage management
Scyld for the Cloud (Q3, 2010)
Scyld ClusterWare Functionality
> Job scheduling (TORQUE/SGE/TaskMaster)
> Node monitoring and control (IPMI/Beostatus)
> Asset Monitoring (synchronized with Support Portal)
> Hybrid provisioning (node grouping)
Integrated Management Framework
POD
Internal clusters
Internet orMPLS network
LAN or VPN