www.nasa.gov
Project Status Report
High End Computing Capability
Strategic Capabilities Assets Program
Dr. Rupak Biswas – Project Manager NASA Ames Research Center, Moffett Field, CA [email protected] (650) 604-4411
November 10, 2016
HECC Facilities Team Completes MSF Infrastructure Bring-Up and Testing
• Working with NASA Ames Code J and SGI/CommScope staff, the HECC Facilities team connected the Modular Supercomputing Facility (MSF) to site electrical utilities, a 2.5-megawatt transformer, water supply, drainage, and emergency services (such as fire and physical security).
• The MSF’s first system, Electra, is now operational and consumes 440 kilowatts of power with a LINPACK benchmark load. The initial Power Usage Effectiveness (PUE) during the LINPACK run (slide 4) was between 1.02–1.03.
• The HECC team received training on the Electra module and has taken over responsibility for its operation. The MSF control systems are fully functional, and environmental testing is in progress.
• Electra is still on schedule to go into production in December 2016.
Mission Impact: By working closely with NASA Ames Center Operations and SGI/CommScope staff, the HECC Modular Supercomputing Facility (MSF) project team ensured the successful installation of the first MSF module.
Left: The Modular Supercomputing Facility’s Programmable Logic Controller (PLC) user interface. Right: The MSF construction site, located across from the NASA Advanced Supercomputing facility at NASA’s Ames Research Center.
POCs: William Thigpen, [email protected], (650) 604-1061, NASA Advanced Supercomputing (NAS) Division; Chris Tanner, [email protected], (650) 604-6754, NAS Division, CSRA LLC
Successful LINPACK and HPCG Benchmark Runs Completed on Pleiades
• During dedicated time, HECC and SGI engineers ran the LINPACK and High Performance Conjugate Gradient (HPCG) benchmarks on the Pleiades supercomputer.
• Pleiades achieved 5.95 petaflops (PF) on the LINPACK benchmark, with 83.7% efficiency, using the majority of the system; this result would have placed the system 9th on the June 2016 TOP500 list. Pleiades has a theoretical peak performance of 7.25 PF (see the efficiency note after this list).
• Pleiades achieved 175 teraflops on the HPCG benchmark. This result would have moved the system up to 7th place on the June 2016 HPCG list.
• The benchmarks provide a good stress test and help to quickly identify marginal hardware; over 60 sub-optimal components were identified and replaced during this run.
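For context, a minimal sketch (not from the report): LINPACK efficiency is conventionally the measured rate divided by the theoretical peak of the hardware actually used in the run. Because only a majority of Pleiades participated, the 83.7% figure implies a peak for the participating nodes somewhat below the full-system 7.25 PF:

```latex
\mathrm{efficiency} = \frac{R_{\max}}{R_{\mathrm{peak}}},
\qquad
R_{\mathrm{peak,\,used}} \approx \frac{5.95\ \mathrm{PF}}{0.837} \approx 7.1\ \mathrm{PF}.
```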
Mission Impact: Running the LINPACK and HPCG benchmarks across a large portion of Pleiades provides HECC with a good method to identify and address system issues, thereby improving overall reliability for users.
The LINPACK and HPCG benchmarks are widely used to evaluate the performance of different high-performance computers, and provide two complementary viewpoints on performance on different workloads.
POCs: Bob Ciotti, [email protected], (650) 604-4408, NASA Advanced Supercomputing (NAS) Division; Davin Chan, [email protected], (650) 604-3613, NAS Division, CSRA LLC
Successful LINPACK and HPCG Benchmark Runs Completed on Electra
• SGI completed running the LINPACK and HPCG benchmarks on the Electra supercomputer as part of a stress test.
• Electra achieved 1.096 petaflops (PF) on the LINPACK benchmark, with 88.5% efficiency (95.7% efficiency based on the Advanced Vector Extensions, or AVX, frequency). This result would have placed the system 80th on the June 2016 TOP500 list. Electra has a theoretical peak performance of 1.2 PF.
• Electra achieved 25.188 teraflops on the HPCG benchmark. This result would have placed the system 38th on the June 2016 HPCG list.
• The Power Usage Effectiveness (PUE) measurement, which is the total facility power divided by the IT equipment power, fluctuated between 1.02–1.03 during the LINPACK benchmark. A PUE of 1.0 would indicate a facility that uses no additional power for cooling or other overhead.
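A minimal worked example using the figures reported above (an illustration, not an official measurement): with Electra's roughly 440 kW IT load under LINPACK and a PUE of 1.02, the implied total facility power and cooling/overhead power are:

```latex
\mathrm{PUE} = \frac{P_{\mathrm{total\ facility}}}{P_{\mathrm{IT}}}
\;\Rightarrow\;
P_{\mathrm{total}} \approx 1.02 \times 440\ \mathrm{kW} \approx 449\ \mathrm{kW},
\qquad
P_{\mathrm{overhead}} \approx 9\ \mathrm{kW}.
```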
Mission Impact: HECC’s new Electra system demonstrates that a high-performance computer can be power-efficient while still providing the resources to meet NASA’s growing computational requirements.
The Electra system, consisting of 1,152 Broadwell nodes, is able to run efficiently while running demanding benchmarks like LINPACK and HPCG.
POCs: Bob Ciotti, [email protected], (650) 604-4408, NASA Advanced Supercomputing (NAS) Division; Davin Chan, [email protected], (650) 604-3613, NAS Division, CSRA LLC
HECC Networks Team Leads Agency Design on DNS Service
• The agency’s Domain Name System (DNS) service was centralized to run out of Marshall Space Flight Center by the NASA Integrated Communications Services (NICS) contract under the Discrete Data Input/Internet Protocol Address Management (DDI/IPAM) project. HECC provided key requirements at the start of the project that the agency later adopted as a more reliable, standard model. – The HECC Networks team provided requirements for high availability across DNS servers in a virtual cluster pair: if one DNS server goes down, another takes over its role with no downtime (an illustrative failover check is sketched after this list).
– HECC’s NASLAN network was the first to implement this design; five other agency sites are now set up with a similar architecture.
– The HECC requirement for mirrored, redundant array of independent disks (RAID) storage to protect against drive failure is now standard practice across all DNS servers, increasing availability.
• NASLAN remains the only network site in the agency whose DNS servers support native IPv6, in compliance with agency IPv6 requirements. We will continue working with NICS as the IPv6 portion of this effort moves forward.
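A minimal sketch of the kind of failover check this design enables, assuming the third-party dnspython package and hypothetical server addresses and record names (these are not the actual NASLAN servers or tools):

```python
# Query the same record against each DNS server in a redundant pair and
# report whether each one answers, illustrating the high-availability goal:
# if one server is down, the other should still resolve the name.
import dns.resolver  # third-party package: dnspython

NAME = "example.nasa.gov"             # hypothetical record to test
SERVERS = ["10.0.0.53", "10.0.1.53"]  # hypothetical virtual-cluster pair

def server_answers(server_ip: str, name: str) -> bool:
    """Return True if the given DNS server resolves the name."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server_ip]
    resolver.lifetime = 3.0  # seconds before giving up on this server
    try:
        resolver.resolve(name, "A")  # dnspython >= 2.0; use .query() on 1.x
        return True
    except Exception:
        return False

if __name__ == "__main__":
    results = {ip: server_answers(ip, NAME) for ip in SERVERS}
    for ip, ok in results.items():
        print(f"{ip}: {'answered' if ok else 'no answer'}")
    # Service is considered available as long as at least one server answers.
    print("DNS service available:", any(results.values()))
```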
Mission Impact: HECC network architecture designs are a model for the agency, and lead to reduced downtime and higher availability of DNS service to all of NASA.
This diagram shows the high-availability Domain Name System (DNS) architecture that HECC network engineers helped the NASA Integrated Communications Services team design.
POC: Nichole Boscia, [email protected], (650) 604-0891, NASA Advanced Supercomputing Division, CSRA LLC
Assessing Liquid Rocket Engine Feed Line Flow for the Space Launch System *
Mission Impact: HECC supercomputing resources enabled modeling of the full-scale geometry with fine details, providing designers of the SLS liquid propulsion system with information that can help ensure its components can withstand the complex flow environments generated by the propulsion system.
A planar cut through the simulation model of the Space Launch System feed line, showing the average axial velocity of the liquid propellant near the pump inlet for the straight duct with the RS-25 low pressure fuel pump.
POC: Andrew Mulder, [email protected], (256) 544-6750, NASA Marshall Space Flight Center
* HECC provided supercomputing resources and services in support of this work
• Researchers at NASA Marshall Space Flight Center ran simulations on Pleiades to help designers better understand how fuel will flow within the Space Launch System (SLS) liquid propulsion system. – The section of the SLS propulsion system that was modeled included a liquid hydrogen (LH) duct bend, a ball-strut tie rod joint, a gimbal joint, and the RS-25 engine’s low pressure fuel pump (LPFP).
– Three distinct cases were modeled: the LH feed line with no angulation and no inducer; the LH feed line with no angulation, coupled to the RS-25 LPFP; and the LH feed line with a 4.5-degree gimbal angle, coupled to the RS-25 LPFP.
– The simulations assessed the flow environment in the LH feed line upstream of the LPFP, at normal RS-25 operating conditions during SLS ascent.
– Flow uniformity, a key factor in LPFP performance and operation, was calculated and compared for the three cases (an illustrative uniformity metric is sketched after this list). The various sources of flow variation were identified, and their relative contributions to the flow field were determined.
• Results showed that the current configuration appears to have an acceptable flow environment.
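The report does not define the uniformity metric used; below is a minimal sketch of one common choice (a velocity-based uniformity index over the pump-inlet cross-section), with synthetic data standing in for the CFD results — not the team's actual method.

```python
# Illustrative flow-uniformity index: 1 - (mean absolute deviation / mean)
# of the axial velocity sampled over a cross-section. A value of 1.0 means
# perfectly uniform flow; lower values mean larger spatial variation.
import numpy as np

def uniformity_index(axial_velocity: np.ndarray) -> float:
    """Compute a simple uniformity index for axial-velocity samples."""
    v = np.asarray(axial_velocity, dtype=float)
    mean_v = v.mean()
    return 1.0 - np.abs(v - mean_v).mean() / mean_v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for velocity samples at the LPFP inlet, two cases.
    straight_duct = 30.0 + rng.normal(0.0, 0.5, size=10_000)   # m/s
    gimbaled_duct = 30.0 + rng.normal(0.0, 1.5, size=10_000)   # m/s
    print("straight duct :", round(uniformity_index(straight_duct), 4))
    print("gimbaled duct :", round(uniformity_index(gimbaled_duct), 4))
```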
Enabling NASA Earth Exchange (NEX) Petabyte Data Production Systems *
• To support a growing number of projects that are developing new, large-scale science data products, the NEX team developed a hybrid HPC-cloud data production system during the past year.
• The NEX system represents both the production rules and their execution as a directed graph, from which users can analyze and reproduce scientific results (see the sketch after this list).
• The system makes it possible for small science teams to execute projects that would not have been possible previously. Examples include: – Production pipeline for Landsat and NASA Global
Imagery and Browse Service – 37 steps, 7 petabytes of data, and 670,000 processor-hours.
– NEX Downscaled Climate Projections (NEX-DCP30) and NEX Global Daily Downscaled Projections (NEX-GDDP) pipeline – 40 terabytes (TB) of data and 320,000 processor-hours.
– Estimating biomass by counting trees across the United States using machine learning and computer vision – 40 TB of data and 100,000 processor-hours.
• Over the last year, NEX projects used more than 9 million processor-hours on Pleiades and processed more than 3 petabytes of data.
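A minimal sketch of representing production rules and their recorded execution as a directed graph, in the spirit of the system described above (hypothetical step names; not the actual NEX implementation; requires Python 3.9+ for graphlib):

```python
# Represent a production pipeline as a directed graph (step -> its inputs),
# then derive a valid execution order with a topological sort. Keeping both
# the rules and the execution as a graph is what lets a data product be
# traced back through its inputs and reproduced.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical rules: each step maps to the set of steps it depends on.
RULES = {
    "ingest_landsat": set(),
    "cloud_mask": {"ingest_landsat"},
    "surface_reflectance": {"cloud_mask"},
    "mosaic_tiles": {"surface_reflectance"},
    "publish_browse_imagery": {"mosaic_tiles"},
}

def execution_order(rules: dict[str, set[str]]) -> list[str]:
    """Return an order in which every step runs after its inputs."""
    return list(TopologicalSorter(rules).static_order())

def provenance(rules: dict[str, set[str]], product: str) -> set[str]:
    """Return every upstream step a product depends on (reproducibility)."""
    seen, stack = set(), [product]
    while stack:
        for dep in rules[stack.pop()]:
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

if __name__ == "__main__":
    print("run order :", execution_order(RULES))
    print("provenance:", provenance(RULES, "publish_browse_imagery"))
```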
Mission Impact: The Pleiades supercomputer, combined with HECC’s massive data storage and high-speed networks, enables the NASA Earth Exchange (NEX) team to engage large scientific communities and provide large-scale modeling and data analysis capabilities not previously available to most scientists.
NEX production pipeline example: Preliminary national forest disturbance map from the North American Forest Dynamics (NAFD) team, developed using a combination of 25 years of Landsat data (1985–2010) and over 200,000 compute hours. The product is a wall-to-wall forest disturbance map for the entire United States at 30-meter spatial resolution.
POCs: Petr Votava, [email protected], (650) 604-4675, Andrew Michaelis, [email protected], (650) 604-5413, NASA Ames Research Center
* HECC provided supercomputing resources and services in support of this work
HECC Facility Hosts Several Visitors and Tours in October 2016 • HECC hosted 9 tour groups in October;
guests learned about the agency-wide missions being supported by HECC assets, and some groups also viewed the D-Wave 2X quantum computer system. Visitors this month included: – Peter Hughes, Center Chief Technologist for NASA Goddard Space Flight Center. – A group from the United Nations interested in technological partnership opportunities with NASA Ames Research Center in planetary sustainability/Earth science and space exploration.
– Technical staff from the Office of Naval Research. – A group from the California Council on Science and
Technology. – A large group of students from Mt. Diablo High School in
Oakland, part of an at-risk-youth program. – Members of the 2016 Haas School of Business,
Berkeley Innovation Fair. – Students from the Ames Fall Internship program.
NAS Division Chief Piyush Mehrotra presents computational fluid dynamics results on the NAS facility hyperwall to members of the 2016 Haas School of Business, Berkeley Innovation Fair.
POC: Gina Morello, [email protected], (650) 604-4462, NASA Advanced Supercomputing Division
Papers
• “Extension of the MURaM Radiative MHD Code for Coronal Simulations,” M. Rempel, arXiv:1609.09818 [astro-ph.SR], September 30, 2016. * https://arxiv.org/abs/1609.09818
• “Cosmological Hydrodynamic Simulations of Preferential Accretion in the SMBH of Milky Way Size Galaxies,” N. Sanchez, et al., arXiv:1610.01155 [astro-ph.GA], October 4, 2016. * https://arxiv.org/abs/1610.01155
• “The Influence of Daily Meteorology on Boreal Fire Emissions and Regional Trace Gas Variability,” E. Wiggins, et al., Journal of Geophysical Research: Biogeosciences, October 6, 2016. * http://onlinelibrary.wiley.com/doi/10.1002/2016JG003434/full
• “Properties of Dark Matter Halos as a Function of Local Environment Density,” C. Lee, et al., arXiv:1610.02108 [astro-ph.GA], October 7, 2016. * https://arxiv.org/abs/1610.02108
• “Colors, Star Formation Rates, and Environments of Star Forming and Quiescent Galaxies at the Cosmic Noon,” R. Feldmann, et al., arXiv:1610.02411 [astro-ph.GA], October 7, 2016. * https://arxiv.org/abs/1610.02411
* HECC provided supercomputing resources and services in support of this work
Papers (cont.)
• “Optimization of Flexible Wings and Distributed Flaps at Off-Design Conditions,” D. Rodriguez, M. Aftosmis, M. Nemec, G. Anderson, Journal of Aircraft (AIAA), Published online October 7, 2016. * http://arc.aiaa.org/doi/abs/10.2514/1.C033535
• “The Role of Baryons in Creating Statistically Significant Planes of Satellites Around Milky Way-Mass Galaxies,” S. Ahmed, A. Brooks, C. Christensen, arXiv:1610.03077 [astro-ph.GA], October 10, 2016. * https://arxiv.org/abs/1610.03077
• “Why Do High-Redshift Galaxies Show Diverse Gas-Phase Metallicity Gradients?,” X. Ma, et al., arXiv:1610.03498 [astro-ph.GA], October 11, 2016. * https://arxiv.org/abs/1610.03498
• “Effect of Thermal Nonequilibrium on Ignition in Scramjet Combustors,” R. Fievet, et al., Proceedings of the Combustion Institute, Published online October 13, 2016. * http://www.sciencedirect.com/science/article/pii/S1540748916304552
• “Incompressible Modes Excited by Supersonic Shear in Boundary Layers: Acoustic CFS Instability,” M. Belyaev, arXiv:1610.04289 [astro-ph.HE], October 13, 2016. * https://arxiv.org/abs/1610.04289
• “SatCNN: Satellite Image Dataset Classification Using Agile Convolutional Neural Networks,” Y. Zhong, et al., Remote Sensing Letters, Vol. 8, Issue 2, October 25, 2016. http://www.tandfonline.com/doi/full/10.1080/2150704X.2016.1235299
* HECC provided supercomputing resources and services in support of this work
Papers (cont.)
• “Shocks and Angular Momentum Flips: A Different Path to Feeding the Nuclear Regions of Merging Galaxies,” P. Capelo, M. Dotti, arXiv:1610.08507 [astro-ph.GA], October 26, 2016. * https://arxiv.org/abs/1610.08507
• “Conversion of Internal Gravity Waves into Magnetic Waves,” D. Lecoanet, et al., arXiv:1610.08506 [astro-ph.SR], October 26, 2016. * https://arxiv.org/abs/1610.08506
• “Quantifying Supernovae-Driven Multiphase Outflows,” M. Li, et al., arXiv:1610.08971 [astro-ph.GA], October 27, 2016. * https://arxiv.org/abs/1610.08971
• “A NASA Perspective on Quantum Computing: Opportunities and Challenges,” R. Biswas, et al., Parallel Computing Journal, accepted for publication, October 27, 2016. http://www.sciencedirect.com/science/article/pii/S0167819116301326
* HECC provided supercomputing resources and services in support of this work
News and Events
• Looking to the Stars, The La Grande Observer, October 21, 2016—Adam Moreno, a forest ecologist, has been awarded a fellowship from NASA to do research at its Ames Research Center with assistance from the Pleiades supercomputer and data from the space agency’s satellite system. He will use these tools to develop a model for forecasting when a forest is reaching the verge of catastrophic collapse. “I want to develop an early warning system,” Moreno said. http://www.lagrandeobserver.com/news/local/4777546-151/looking-to-the-stars
HECC Utilization
[Bar chart: October 2016 utilization (0–100%) for Pleiades, Endeavour, Merope, and Production, broken down by state: Used, Boot, Degraded, Down, Dedicated, No Jobs, Not Schedulable, Non-Chargable, Queue Not Schedulable, Held, Insufficient CPUs, Unused Devel Queue, Limits Exceeded, Dedtime Drain, Job Drain, and Share Limit.]
HECC Utilization Normalized to 30-Day Month
[Stacked bar chart: Standard Billing Units per month (0–22,000,000), normalized to a 30-day month, by mission (ARMD, HEOMD, SMD, NESC, NLCS, NAS), with the allocation to organizations shown.]
HECC Utilization Normalized to 30-Day Month
[Line charts: Standard Billing Units (in millions), normalized to a 30-day month, for ARMD, HEOMD+NESC, and SMD, each plotted against its allocation (with agency reserve where applicable). Numbered chart annotations: 1) 16 Westmere racks retired from Pleiades; 2) 14 Haswell racks added to Pleiades; 3) 7 Nehalem ½ racks retired from Merope; 4) 7 Westmere ½ racks added to Pleiades; 5) 16 Westmere racks retired from Pleiades; 6) 10 Broadwell racks added to Pleiades; 7) 4 Broadwell racks added to Pleiades; 8) 14 (all) Westmere racks retired from Pleiades; 9) 14 Broadwell racks added to Pleiades (10.5 dedicated to ARMD).]
Tape Archive Status
[Bar chart: October 2016 tape archive status, in petabytes (0–600): Unique File Data, Unique Tape Data, Total Tape Data, Tape Capacity, and Tape Library Capacity, with capacity and used amounts broken down by ARMD, HEOMD, SMD, NESC, NLCS, NAS, non-mission-specific, and HECC.]
Tape Archive Status
[Chart: tape archive data and capacity, in petabytes (0–600): Tape Library Capacity, Tape Capacity, Total Tape Data, and Unique Tape Data.]
Pleiades: SBUs Reported, Normalized to 30-Day Month
[Stacked bar chart: Pleiades Standard Billing Units reported per month (0–22,000,000), normalized to a 30-day month, by mission (ARMD, HEOMD, SMD, NESC, NLCS, NAS), with the allocation to organizations shown.]
Pleiades: Devel Queue Utilization
[Stacked bar chart: Pleiades devel queue utilization in Standard Billing Units (0–2,000,000) by mission (ARMD, HEOMD, SMD, NESC, NLCS, NAS), with the devel queue allocation shown.]
Pleiades: Monthly Utilization by Job Length
[Bar chart: October 2016 Pleiades utilization in Standard Billing Units by job run time (0–1, >1–4, >4–8, >8–24, >24–48, >48–72, >72–96, >96–120, and >120 hours).]
Pleiades: Monthly Utilization by Size and Mission
[Stacked bar chart: October 2016 Pleiades utilization in Standard Billing Units by job size (1–32 through 65,537–262,144 cores) and mission (ARMD, HEOMD, SMD, NESC, NLCS, NAS).]
Pleiades: Monthly Utilization by Size and Length
[Stacked bar chart: October 2016 Pleiades utilization in Standard Billing Units by job size (1–32 through 65,537–262,144 cores) and job run time (0–1 through >120 hours).]
Pleiades: Average Time to Clear All Jobs
[Line chart: Pleiades average time to clear all jobs, in hours, November 2015 through October 2016, for ARMD, HEOMD/NESC, and SMD.]
Pleiades: Average Expansion Factor
[Line chart: Pleiades average expansion factor, November 2015 through October 2016, for ARMD, HEOMD, and SMD.]
Endeavour: SBUs Reported, Normalized to 30-Day Month
[Stacked bar chart: Endeavour Standard Billing Units reported per month (0–100,000), normalized to a 30-day month, by mission (ARMD, HEOMD, SMD, NESC, NLCS, NAS), with the allocation to organizations shown.]
Endeavour: Monthly Utilization by Job Length
[Bar chart: October 2016 Endeavour utilization in Standard Billing Units by job run time (0–1 through >120 hours).]
Endeavour: Monthly Utilization by Size and Mission
[Stacked bar chart: October 2016 Endeavour utilization in Standard Billing Units by job size (1–32 through 513–1,024 cores) and mission (ARMD, HEOMD, SMD, NESC, NAS).]
Endeavour: Monthly Utilization by Size and Length
[Stacked bar chart: October 2016 Endeavour utilization in Standard Billing Units by job size (1–32 through 513–1,024 cores) and job run time (0–1 through >120 hours).]
Endeavour: Average Time to Clear All Jobs
[Line chart: Endeavour average time to clear all jobs, in hours, November 2015 through October 2016, for ARMD, HEOMD/NESC, and SMD.]
Endeavour: Average Expansion Factor
[Line chart: Endeavour average expansion factor, November 2015 through October 2016, for ARMD, HEOMD, and SMD.]
Merope: SBUs Reported, Normalized to 30-Day Month
[Stacked bar chart: Merope Standard Billing Units reported per month (0–700,000), normalized to a 30-day month, by mission (ARMD, HEOMD, SMD, NESC, NLCS, NAS), with the allocation to organizations shown.]
Merope: Monthly Utilization by Job Length
[Bar chart: October 2016 Merope utilization in Standard Billing Units by job run time (0–1 through >120 hours).]
Merope: Monthly Utilization by Size and Mission
[Stacked bar chart: October 2016 Merope utilization in Standard Billing Units by job size (1–32 through 8,193–16,384 cores) and mission (ARMD, HEOMD, SMD, NESC, NLCS, NAS).]
Merope: Monthly Utilization by Size and Length
[Stacked bar chart: October 2016 Merope utilization in Standard Billing Units by job size (1–32 through 8,193–16,384 cores) and job run time (0–1 through >120 hours).]
Merope: Average Time to Clear All Jobs
[Line chart: Merope average time to clear all jobs, in hours, November 2015 through October 2016, for ARMD, HEOMD/NESC, and SMD.]
Merope: Average Expansion Factor
[Line chart: Merope average expansion factor, November 2015 through October 2016, for ARMD, HEOMD, and SMD.]