WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS
Hank Childs, Lawrence Berkeley Lab & UC Davis, 11/2/10
P. Navratil (TACC), D. Pugmire (ORNL)


Page 1: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Hank Childs

Lawrence Berkeley Lab & UC Davis, 11/2/10

P. Navratil (TACC), D. Pugmire (ORNL)

Page 2: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Supercomputing 101

Why simulation? Simulations are sometimes more cost effective than experiments.
The new model for science has three legs: theory, experiment, and simulation.
What is the "petascale" / "exascale"?

1 FLOPS = 1 FLoating point OPeration per Second
1 GigaFLOPS = 1 billion FLOPS; 1 TeraFLOPS = 1,000 GigaFLOPS
1 PetaFLOPS = 1,000,000 GigaFLOPS; 1 ExaFLOPS = a billion billion FLOPS
PetaFLOPS + petabytes on disk + petabytes of memory -> petascale
ExaFLOPS + exabytes on disk + petabytes of memory -> exascale
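A quick sanity check on these scales (plain powers of ten, nothing machine-specific), sketched in a few lines of Python:

```python
# Scale sanity check: the prefixes are plain powers of ten.
GIGA, PETA, EXA = 1e9, 1e15, 1e18

print(PETA / GIGA)  # 1 PetaFLOPS = 1,000,000 GigaFLOPS
print(EXA / GIGA)   # 1 ExaFLOPS = 1e9 GigaFLOPS, i.e. a billion billion FLOPS
print(EXA / PETA)   # an exascale machine is 1000x a petascale machine
```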

Why petascale / exascale? More compute cycles, more memory, etc., lead to faster and/or more accurate simulations.

Page 3: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Petascale computing is here.

Existing petascale machines

LANL RoadRunner, ORNL Jaguar, Jülich JUGene, UTK Kraken

Page 4: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Supercomputing is not slowing down.

LLNL Sequoia, NCSA Blue Waters

Page 5: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Exascale machine requirements:
  Timeline: 2018-2021
  Total cost: < $200M
  Total power consumption: < 20 MW
  Accelerators a certainty
  FLASH drives to stage data will change I/O patterns (very important for vis!)

Page 6: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

How does the petascale/exascale affect visualization?

Large # of time steps

Large ensembles

Large scale

Large # of variables

Page 7: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Why is petascale/exascale visualization going to change the rules?

Michael Strayer (U.S. DoE Office of Science): "petascale is not business as usual." This is especially true for visualization and analysis!
Large scale data creates two incredible challenges: scale and complexity.
Scale is not "business as usual":
  The supercomputing landscape is changing.
  Solution: we will need "smart" techniques in production environments.
More resolution leads to more and more complexity:
  Will the "business as usual" techniques still suffice?

Outline

Page 8: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Production visualization tools use “pure parallelism” to process data.

[Diagram: the parallel simulation code writes pieces of data P0-P9 to disk; in the parallelized visualization data flow network, each processor (Processor 0, 1, 2, ...) reads its share of the pieces and runs Read -> Process -> Render on them.]
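A minimal sketch of this layout, assuming an MPI job (via mpi4py) and hypothetical piece files written by the simulation; the file names and the "process" step are placeholders, not any particular tool's API:

```python
# Sketch of pure parallelism: every MPI rank reads, processes, and renders
# its own share of the pieces the simulation wrote to disk.
# Run with e.g.: mpirun -n 512 python pure_parallelism.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

pieces = [f"piece_{i:04d}.npy" for i in range(1024)]  # hypothetical piece files
my_pieces = pieces[rank::nprocs]                      # static round-robin assignment

local_max = -np.inf
for path in my_pieces:
    data = np.load(path)                           # Read: the I/O-dominated step
    local_max = max(local_max, float(data.max()))  # Process: stand-in for contouring, etc.
    # Render would rasterize the local geometry here (omitted in this sketch).

# Combine per-rank partial results (a real tool composites images instead).
global_max = comm.reduce(local_max, op=MPI.MAX, root=0)
if rank == 0:
    print("global max =", global_max)
```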

Page 9: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Pure parallelism: pros and cons
Pros:
  Easy to implement
Cons:
  Requires large amount of primary memory
  Requires large I/O capabilities -> requires big machines

Page 10: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Pure parallelism performance is based on # bytes to process and I/O rates.

Amount of data to visualize is typically O(total memory).
Vis is almost always > 50% I/O and sometimes 98% I/O.
[Chart: FLOPs, memory, and I/O compared for today's machine vs. tomorrow's machine.]
Two big factors: (1) how much data you have to read, and (2) how fast you can read it.
Relative I/O (the ratio of total memory to I/O rate) is key.

Page 11: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Anecdotal evidence: relative I/O is getting slower.

Machine name | Main memory | I/O rate | Time to write memory to disk
ASC Purple   | 49.0 TB     | 140 GB/s | 5.8 min
BGL-init     | 32.0 TB     | 24 GB/s  | 22.2 min
BGL-cur      | 69.0 TB     | 30 GB/s  | 38.3 min
Sequoia      | ??          | ??       | >> 40 min
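The last column is just main memory divided by I/O rate; a few lines of Python reproduce it:

```python
# Time to write all of main memory to disk = memory / I/O rate.
machines = {                 # (main memory in TB, I/O rate in GB/s), from the table above
    "ASC Purple": (49.0, 140.0),
    "BGL-init":   (32.0, 24.0),
    "BGL-cur":    (69.0, 30.0),
}
for name, (mem_tb, io_gbs) in machines.items():
    minutes = mem_tb * 1000 / io_gbs / 60      # decimal TB -> GB
    print(f"{name}: {minutes:.1f} min")        # 5.8, 22.2, 38.3
```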

Page 12: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Why is relative I/O getting slower?

“I/O doesn’t pay the bills.” And I/O is becoming a dominant cost in the overall supercomputer procurement.
Simulation codes aren’t as exposed, and will be even less exposed with proposed future architectures.

Page 13: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Recent runs of trillion cell data sets provide further evidence that I/O dominates

Weak scaling study: ~62.5M cells/core

# cores   | Problem size | Type      | Machine
8K        | 0.5 TZ       | AIX       | Purple
16K       | 1 TZ         | Sun Linux | Ranger
16K       | 1 TZ         | Linux     | Juno
32K       | 2 TZ         | Cray XT5  | JaguarPF
64K       | 4 TZ         | BG/P      | Dawn
16K, 32K  | 1 TZ, 2 TZ   | Cray XT4  | Franklin

2T cells, 32K procs on Jaguar and on Franklin:
  Approx I/O time: 2-5 minutes
  Approx processing time: 10 seconds
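Those measurements imply that I/O is well over 90% of the end-to-end time; a quick check:

```python
# Share of end-to-end time spent in I/O for the 2T-cell runs above.
processing_s = 10.0
for io_minutes in (2.0, 5.0):
    io_s = io_minutes * 60
    print(f"I/O = {io_minutes:g} min -> {100 * io_s / (io_s + processing_s):.0f}% of total")
# -> 92% and 97%
```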

Page 14: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Pure parallelism is not well suited for the petascale.
Emerging problem:
  Pure parallelism emphasizes I/O and memory.
  And: pure parallelism is the dominant processing paradigm for production visualization software.
Solution? ... there are "smart techniques" that de-emphasize memory and I/O:
  Data subsetting
  Multi-resolution
  Out of core
  In situ

Page 15: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Data subsetting eliminates pieces that don’t contribute to the final picture.

[Diagram: same pipeline as before (parallel simulation code -> pieces P0-P9 on disk -> Read -> Process -> Render on each processor), but only the pieces that contribute to the final picture are read and processed.]
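A minimal sketch of the idea for contouring, assuming each piece carries min/max metadata for the variable of interest (the metadata layout here is made up, not a specific tool's format):

```python
# Sketch of data subsetting: skip pieces whose value range cannot contain
# the isovalue, so those pieces are never read from disk at all.
def pieces_to_read(piece_metadata, isovalue):
    """piece_metadata: list of (piece_name, min_value, max_value)."""
    return [name for name, lo, hi in piece_metadata if lo <= isovalue <= hi]

# Hypothetical per-piece metadata gathered when the data was written:
metadata = [("piece_0000", 0.0, 0.4),
            ("piece_0001", 0.3, 0.9),
            ("piece_0002", 1.2, 2.0)]
print(pieces_to_read(metadata, isovalue=0.5))   # -> ['piece_0001']
```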

Page 16: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Data subsetting: pros and cons
Pros:
  Less data to process (less I/O, less memory)
Cons:
  Extent of optimization is data dependent
  Only applicable to some algorithms

Page 17: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Multi-resolution techniques use coarse representations then refine.

[Diagram: same pipeline, but the visualization first reads a coarse representation of the data (e.g., only pieces P2 and P4) and refines where needed.]
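A minimal sketch of the selection step, assuming an octree-style hierarchy in which each coarser level has roughly 8x fewer cells (the numbers are illustrative, not from the talk):

```python
# Sketch of multi-resolution: pick the coarsest level that fits the memory
# budget, then refine only where the user looks more closely.
def level_that_fits(finest_cells, bytes_per_cell, memory_budget_bytes):
    level, cells = 0, finest_cells
    while cells * bytes_per_cell > memory_budget_bytes:
        level += 1        # go one level coarser
        cells //= 8       # 3D hierarchy: ~8x fewer cells per level
    return level, cells

level, cells = level_that_fits(
    finest_cells=10**12,               # a trillion-cell data set
    bytes_per_cell=8,                  # one double per cell
    memory_budget_bytes=512 * 10**9,   # 512 GB available for visualization
)
print(level, cells)   # -> level 2, ~15.6 billion cells
```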

Page 18: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Multi-resolution: pros and cons
Pros:
  Avoids I/O & memory requirements
Cons:
  Is it meaningful to process a simplified version of the data?

Page 19: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Out-of-core iterates pieces of data through the pipeline one at a time.

[Diagram: same pipeline, but each processor streams its pieces through Read -> Process -> Render one at a time instead of holding them all in memory.]
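A minimal sketch of the streaming loop, again with hypothetical piece files; only one piece is resident in memory at a time, so even a modest machine can process a huge data set (slowly):

```python
# Sketch of out-of-core processing: stream pieces through the pipeline one
# at a time, so primary memory only ever holds a single piece.
import numpy as np

def stream_pieces(piece_paths, isovalue):
    cells_above = 0
    for path in piece_paths:
        data = np.load(path)                         # read one piece (still pays full I/O)
        cells_above += int((data > isovalue).sum())  # process it
        del data                                     # drop it before reading the next piece
    return cells_above

# Hypothetical piece files written by the simulation:
# stream_pieces([f"piece_{i:04d}.npy" for i in range(1024)], isovalue=0.5)
```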

Page 20: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Out-of-core: pros and cons
Pros:
  Lower requirement for primary memory
  Doesn't require big machines
Cons:
  Still paying large I/O costs (slow!)

Page 21: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

In situ processing does visualization as part of the simulation.

[Diagram: the traditional pipeline shown for contrast: the parallel simulation code writes pieces P0-P9 to disk, and a separate visualization job reads, processes, and renders them.]

Page 22: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

In situ processing does visualization as part of the simulation.

[Diagram: the in situ pipeline: every processor of the parallel simulation code (Processor 0 through Processor 9) runs GetAccessToData -> Process -> Render directly on the in-memory data; nothing is staged through disk.]
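A minimal sketch of the coupling, with a toy simulation loop standing in for the real code; the vis hook runs on the simulation's own in-memory arrays every few time steps instead of writing them to disk (real couplings go through libraries such as VisIt's libsim or ParaView Catalyst, whose APIs are not shown here):

```python
# Sketch of in situ processing: the simulation calls a visualization hook on
# its own in-memory field; nothing is written to disk for visualization.
import numpy as np

def vis_hook(step, field):
    """Stand-in for the real vis pipeline (contour, render, save an image)."""
    print(f"step {step:4d}: min={field.min():.3f} max={field.max():.3f}")

def run_simulation(n_steps=100, vis_every=10):
    field = np.random.rand(64, 64, 64)            # stand-in for the simulation state
    for step in range(n_steps):
        field = 0.5 * (field + np.roll(field, 1, axis=0))  # fake "advance one time step"
        if step % vis_every == 0:
            vis_hook(step, field)                 # shares memory with the simulation

run_simulation()
```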

Page 23: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

In situ: pros and cons
Pros:
  No I/O!
  Lots of compute power available
Cons:
  Very memory constrained
  Many operations not possible
  Once the simulation has advanced, you cannot go back and analyze it
  User must know what to look for a priori
  Expensive resource to hold hostage!

Page 24: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Summary of Techniques and Strategies
Pure parallelism can be used for anything, but it takes a lot of resources.
Smart techniques can only be used situationally.
Strategy #1 (do nothing):
  Stick with pure parallelism and live with high machine costs & I/O wait times.
Other strategies? Assumptions:
  We can't afford massive dedicated clusters for visualization.
  We can fall back on the supercomputer, but only rarely.

Page 25: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Now we know the tools … what problem are we trying to solve?
Three primary use cases:
  Exploration (examples: scientific discovery, debugging)
  Confirmation (examples: data analysis, images / movies, comparison)
  Communication (examples: data analysis, images / movies)

Page 26: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Notional decision process

Need all data at full resolution?
  No  -> Multi-resolution (debugging & scientific discovery)
  Yes -> Do operations require all the data?
    No  -> Data subsetting (comparison & data analysis)
    Yes -> Do you know what you want to do a priori?
      Yes -> In situ (data analysis & images / movies)
      No  -> Do algorithms require all data in memory?
        No, and interactivity is not required -> Out-of-core (data analysis & images / movies)
        Otherwise (all data must be in memory, or interactivity is required) -> Pure parallelism (anything & esp. comparison)

(Use cases spanned: Exploration, Confirmation, Communication.)
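The same decision process written out as a small function, as a direct transcription of the chart above (not a recommendation beyond it):

```python
# The notional decision process above, as straight-line code.
def choose_technique(need_full_resolution, ops_need_all_data,
                     know_goal_a_priori, algorithms_need_all_data_in_memory,
                     interactivity_required):
    if not need_full_resolution:
        return "multi-resolution (debugging & scientific discovery)"
    if not ops_need_all_data:
        return "data subsetting (comparison & data analysis)"
    if know_goal_a_priori:
        return "in situ (data analysis & images / movies)"
    if not algorithms_need_all_data_in_memory and not interactivity_required:
        return "out-of-core (data analysis & images / movies)"
    return "pure parallelism (anything & esp. comparison)"

print(choose_technique(True, True, False, True, True))   # -> pure parallelism
```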

Page 27: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Alternate strategy: smart techniques

[Diagram: all visualization and analysis work is divided among the smart techniques (multi-resolution, in situ, out-of-core, data subsetting); the remaining ~5% is done on the supercomputer (SC).]

Page 28: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Difficult conversations in the future… Multi-resolution:
  Do you understand what a multi-resolution hierarchy should look like for your data?
  Who do you trust to generate it?
  Are you comfortable with your I/O routines generating these hierarchies while they write?
  How much overhead are you willing to tolerate on your dumps? 33+%?
  Willing to accept that your visualizations are not the “real” data?

Page 29: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

Difficult conversations in the future… In situ:

How much memory are you willing to give up for visualization?

Will you be angry if the vis algorithms crash?

Do you know what you want to generate a priori?
Can you re-run simulations if necessary?

Page 30: WHY THE RULES ARE CHANGING FOR LARGE DATA VISUALIZATION AND ANALYSIS

How Supercomputing Trends Will Change the Rules for Vis & Analysis
Future machines will not be well suited for pure parallelism, because of its high I/O and memory costs.
We won't be able to use pure parallelism alone any more.
We will need algorithms to work in multiple processing paradigms.