Slide 1: DoD Sensor Processing: Applications and Supporting Software Technology

Dr. Jeremy Kepner, MIT Lincoln Laboratory
SC2002 Tutorial

This work is sponsored by the High Performance Computing Modernization Office under Air Force Contract F19628-00-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government.

Slide 2: Preamble: Existing Standards

[Diagram: a parallel embedded processor (processors P0-P3 with a node controller) uses data communication (MPI, MPI/RT, DRI) and computation (VSIPL) standards; a system controller connects to other computers and consoles via control communication (CORBA, HP-CORBA, SCA).]

Definitions:
• VSIPL = Vector, Signal, and Image Processing Library
• MPI = Message-Passing Interface
• MPI/RT = MPI Real-Time
• DRI = Data Reorganization Interface
• CORBA = Common Object Request Broker Architecture
• HP-CORBA = High Performance CORBA

• A variety of software standards support existing DoD signal processing systems

Slide 3: Preamble: Next Generation Standards

• Software Initiative Goal: transition research into commercial standards

[Diagram: the HPEC Software Initiative cycle of applied research, development, and demonstration, built on object-oriented open standards that are interoperable and scalable, targeting Performance (1.5x), Portability (3x), and Productivity (3x).]

Portability ≡ lines-of-code changed to port/scale to new system
Productivity ≡ lines-of-code added to add new functionality
Performance ≡ computation and communication benchmarks

Slide 4: HPEC-SI: VSIPL++ and Parallel VSIPL

[Diagram: functionality vs. time across three phases.]

Phase 1: Demonstration of existing standards (VSIPL, MPI); development of object-oriented standards (VSIPL++ prototype); applied research on a unified comp/comm lib (Parallel VSIPL++ prototype). Demonstrate insertions into fielded systems (e.g., CIP); demonstrate 3x portability.

Phase 2: Demonstration of object-oriented standards (VSIPL++); development of the unified comp/comm lib (Parallel VSIPL++); applied research on fault tolerance. High-level code abstraction; reduce code size 3x.

Phase 3: Demonstration of the unified comp/comm lib (Parallel VSIPL++); development of fault tolerance; applied research on self-optimization. Unified embedded computation/communication standard; demonstrate scalability.

Slide 5: Preamble: The Links

High Performance Embedded Computing Workshop: http://www.ll.mit.edu/HPEC
High Performance Embedded Computing Software Initiative: http://www.hpec-si.org/
Vector, Signal, and Image Processing Library: http://www.vsipl.org/
MPI Software Technologies, Inc.: http://www.mpi-softtech.com/
Data Reorganization Initiative: http://www.data-re.org/
CodeSourcery, LLC: http://www.codesourcery.com/
MatlabMPI: http://www.ll.mit.edu/MatlabMPI

Slide 6: Outline

• Introduction
  – DoD Needs
  – Parallel Stream Computing
  – Basic Pipeline Processing
• Processing Algorithms
• Parallel System Analysis
• Software Frameworks
• Summary

Slide 7: Why Is DoD Concerned with Embedded Software?

[Chart: estimated DoD expenditures for embedded signal and image processing hardware and software, $0.0B-$3.0B per year, from FY98 onward. Source: "HPEC Market Study," March 2001.]

• COTS acquisition practices have shifted the burden from "point design" hardware to "point design" software (i.e., COTS HW requires COTS SW)
• Software costs for embedded systems could be reduced by one-third with improved programming models, methodologies, and standards

Slide 8: Embedded Stream Processing

Embedded stream processing requires high performance computing and networking.

[Chart: peak bisection bandwidth (GB/s, 0.1 to 10000) vs. peak processor power (Gflop/s, 1 to 100000) for video, medical, wireless, sonar, radar, scientific, and encoding applications. Today's COTS systems sit below the desired region of performance; Moore's Law and faster networks point toward the goal.]

Slide 9: Military Embedded Processing

• Signal processing drives computing requirements
• Rapid technology insertion is critical for sensor dominance

Requirements are increasing by an order of magnitude every 5 years; embedded processing requirements will exceed 10 TFLOPS in the 2005-2010 time frame.

Slide 10: Military Query Processing

[Diagram: sensors (wide area imaging, hyperspectral imaging, SAR/GMTI) feed parallel computing over high speed networks (BoSSNET) to support missions such as targeting, force location, and infrastructure assessment, tied together by parallel distributed software and multi-sensor algorithms.]

• Highly distributed computing
• Fewer, very large data movements

Slide 11: Parallel Pipeline

Signal processing algorithm: Filter (XOUT = FIR(XIN)) → Beamform (XOUT = w*XIN) → Detect (XOUT = |XIN| > c), mapped onto a parallel computer.

• Data parallel within stages
• Task/pipeline parallel across stages

Slide 12: Filtering

XOUT = FIR(XIN, h)

[Diagram: Xin is Nchannel x Nsamples; Xout is Nchannel x (Nsamples / Ndecimation).]

• Fundamental signal processing operation
• Converts data from wideband to narrowband via filtering: O(Nsamples Nchannel Nh / Ndecimation)
• Degrees of parallelism: Nchannel
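The per-channel kernel above can be sketched in a few lines. This is an illustrative example, not code from the tutorial; the function name and signature are assumptions. It keeps every d-th output of the convolution, so the per-channel cost is O(Nsamples · Nh / Ndecimation), and the Nchannel channels are independent, which is exactly the parallelism the slide identifies.

```cpp
#include <cstddef>
#include <vector>

// Sketch of FIR filtering with decimation for one channel.
// x: Nsamples of input, h: Nh filter coefficients, d: decimation factor.
// Output length is roughly Nsamples/d; cost is O(Nsamples * Nh / d).
std::vector<double> fir_decimate(const std::vector<double>& x,
                                 const std::vector<double>& h,
                                 std::size_t d) {
    std::vector<double> y;
    for (std::size_t n = 0; n < x.size(); n += d) {  // keep every d-th output
        double acc = 0.0;
        for (std::size_t k = 0; k < h.size() && k <= n; ++k)
            acc += h[k] * x[n - k];                  // convolution sum
        y.push_back(acc);
    }
    return y;
}
```

Running the filter over each channel in a loop (or one channel per processor) reproduces the Nchannel-way data parallelism of this stage.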

Slide 13: Beamforming

XOUT = w*XIN

[Diagram: Xin is Nchannel x Nsamples; Xout is Nbeams x Nsamples.]

• Fundamental operation for all multi-channel receiver systems
• Converts data from channels to beams via matrix multiply: O(Nsamples Nchannel Nbeams)
• Key: the weight matrix can be computed in advance
• Degrees of parallelism: Nsamples
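The matrix-multiply view of beamforming above can be sketched directly. This is an illustrative example, not tutorial code; the layout (samples x channels in, channels x beams weights) is an assumption consistent with the dimensions given on the surrounding slides.

```cpp
#include <complex>
#include <cstddef>
#include <vector>

using cdouble = std::complex<double>;
using Matrix  = std::vector<std::vector<cdouble>>;

// Sketch of the beamforming step: a plain matrix multiply converting
// channel data (Nsamples x Nchannels) into beam data (Nsamples x Nbeams)
// using precomputed steering weights (Nchannels x Nbeams).
// Cost: O(Nsamples * Nchannels * Nbeams).
Matrix beamform(const Matrix& x, const Matrix& w) {
    std::size_t ns = x.size(), nc = w.size(), nb = w[0].size();
    Matrix y(ns, std::vector<cdouble>(nb, 0.0));
    for (std::size_t s = 0; s < ns; ++s)      // each sample is independent:
        for (std::size_t b = 0; b < nb; ++b)  // degrees of parallelism = Nsamples
            for (std::size_t c = 0; c < nc; ++c)
                y[s][b] += x[s][c] * w[c][b];
    return y;
}
```

Because the weights are fixed in advance, any block of samples can be beamformed on any processor with no inter-sample communication.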

Slide 14: Detection

XOUT = |XIN| > c

[Diagram: Xin is Nbeams x Nsamples; Xout is a list of Ndetects detections.]

• Fundamental operation for all processing chains
• Converts data from a stream to a list of detections via thresholding: O(Nsamples Nbeams)
• The number of detections is data dependent
• Degrees of parallelism: Nbeams, Nchannels, or Ndetects
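The stream-to-list conversion above is worth seeing concretely, since its data-dependent output size is what later motivates dynamic load balancing. This is an illustrative sketch, not tutorial code; the names are assumptions.

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Sketch of the thresholding step: scan beam data and emit a detection
// list (beam, sample) wherever |x| exceeds the constant c. The output
// length is data dependent, as the slide notes.
struct Detection { std::size_t beam, sample; };

std::vector<Detection> detect(
        const std::vector<std::vector<std::complex<double>>>& x,
        double c) {
    std::vector<Detection> out;
    for (std::size_t b = 0; b < x.size(); ++b)       // parallel over beams
        for (std::size_t s = 0; s < x[b].size(); ++s)
            if (std::abs(x[b][s]) > c)
                out.push_back({b, s});
    return out;
}
```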

Slide 15: Types of Parallelism

[Diagram: an input feeds a bank of FIR filters, two beamformers, and two detectors coordinated by a scheduler, illustrating four types of parallelism: task parallel, pipeline, round robin, and data parallel.]

Slide 16: Outline

• Introduction
• Processing Algorithms
  – Filtering
  – Beamforming
  – Detection
• Parallel System Analysis
• Software Frameworks
• Summary

Slide 17: FIR Overview

• Uses: pulse compression, equalization, …
• Formulation: y = h o x
  – y = filtered data [#samples]
  – x = unfiltered data [#samples]
  – h = filter [#coefficients]
  – o = convolution operator
• Algorithm parameters: #channels, #samples, #coefficients, #decimation
• Implementation parameters: direct sum or FFT based

Slide 18: Basic Filtering via FFT

• The Fourier transform (FFT) allows specific frequencies to be selected: O(N log N)

[Diagram: a time-domain signal and its frequency-domain representation (DC at the left), related by the FFT in both directions.]

Slide 19: Basic Filtering via FIR

• A Finite Impulse Response (FIR) filter allows a range of frequencies to be selected: O(N Nh)

(Example: band-pass filter)

[Diagram: y = FIR(x, h) passes only the power between frequencies f1 and f2 out of a signal with power at any frequency; the filter is a tapped delay line whose coefficients h1, h2, h3, …, hL multiply delayed samples that are summed to produce y.]

Slide 20: Multi-Channel Parallel FIR Filter

[Diagram: channels 1-4, each processed by its own FIR filter.]

• Parallel mapping constraints:
  – #channels MOD #processors = 0
  – 1st: parallelize across channels
  – 2nd: parallelize within a channel based on #samples and #coefficients
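The first mapping constraint above (#channels MOD #processors = 0) implies each processor owns an equal, contiguous block of channels. A hypothetical helper, not from the tutorial, makes the rule concrete:

```cpp
#include <cstddef>
#include <vector>

// Sketch of the channel-to-processor mapping rule: with
// nchannels % nprocs == 0, processor p owns an equal contiguous block
// of channels and can run its FIR filters with no communication.
// Returns an empty list if the constraint is violated.
std::vector<std::size_t> channels_for_proc(std::size_t nchannels,
                                           std::size_t nprocs,
                                           std::size_t p) {
    std::vector<std::size_t> mine;
    if (nprocs == 0 || nchannels % nprocs != 0) return mine;  // constraint violated
    std::size_t block = nchannels / nprocs;
    for (std::size_t c = p * block; c < (p + 1) * block; ++c)
        mine.push_back(c);
    return mine;
}
```

Only when there are more processors than channels does the second rule kick in, splitting a single channel's work by samples and coefficients.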

Slide 21: Outline

• Introduction
• Processing Algorithms
  – Filtering
  – Beamforming
  – Detection
• Parallel System Analysis
• Software Frameworks
• Summary

Slide 22: Beamforming Overview

• Uses: angle estimation
• Formulation: y = w^H x
  – y = beamformed data [#samples x #beams]
  – x = channel data [#samples x #channels]
  – w = (tapered) steering vectors [#channels x #beams]
• Algorithm parameters: #channels, #samples, #beams, (tapered) steering vectors

Slide 23: Basic Beamforming Physics

[Diagram: wavefronts from a source arrive at an array at angle θ; the received phasefront appears across the elements as e^(j1φ(θ)), e^(j2φ(θ)), …, e^(j7φ(θ)).]

• The received phasefront creates a complex exponential across the array, with frequency directly related to the direction of propagation
• Estimating the frequency of the impinging phasefront indicates the direction of propagation
• The direction of propagation is also known as the angle-of-arrival (AOA) or direction-of-arrival (DOA)

Slide 24: Parallel Beamformer

[Diagram: data segments 1-4, each processed by its own beamformer.]

• Parallel mapping constraints:
  – #segments MOD #processors = 0
  – 1st: parallelize across segments
  – 2nd: parallelize across beams

Slide 25: Outline

• Introduction
• Processing Algorithms
  – Filtering
  – Beamforming
  – Detection
• Parallel System Analysis
• Software Frameworks
• Summary

Slide 26: CFAR Detection Overview

• Constant False Alarm Rate (CFAR) detection
• Formulation: x[n] > T[n]
  – x[n] = cell under test
  – T[n] = Sum(x_i)/2M, summed over Nguard < |i - n| <= M + Nguard
  – Angle estimate: take the ratio of beams; do a lookup
• Algorithm parameters: #samples, #beams, steering vectors, #noise samples, #max detects
• Implementation parameters: greatest-of, censored greatest-of, ordered statistics, …; averaging vs. sorting
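The cell-averaging rule above can be sketched as a sliding window. This is an illustrative example, not the tutorial's implementation: the noise estimate for cell n averages the 2M training cells with Nguard < |i - n| <= M + Nguard, skipping the guard cells around the cell under test; edge cells are simply skipped here, where a real system would fold or truncate the window.

```cpp
#include <cstddef>
#include <vector>

// Sketch of cell-averaging CFAR: report indices whose value exceeds
// T times the local noise estimate (mean of 2M training cells,
// excluding Nguard guard cells on each side of the cell under test).
std::vector<std::size_t> cfar_detect(const std::vector<double>& x,
                                     std::size_t M, std::size_t Nguard,
                                     double T) {
    std::vector<std::size_t> hits;
    std::size_t half = M + Nguard;                   // window half-width
    for (std::size_t n = half; n + half < x.size(); ++n) {
        double noise = 0.0;
        for (std::size_t k = Nguard + 1; k <= half; ++k)  // Nguard < |i-n| <= M+Nguard
            noise += x[n - k] + x[n + k];
        noise /= 2.0 * M;
        if (x[n] > T * noise) hits.push_back(n);
    }
    return hits;
}
```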

Slide 27: Two-Pass Greatest-Of Excision CFAR (First Pass)

[Diagram: a window slides over the input data x[i]: M leading training cells (L), guard cells (G), the cell under test, guard cells, and M trailing training cells (T). Each training window is averaged (1/M), and the greater of the two averages feeds the noise estimate buffer b[i].]

x[i] = cell under test
z_T[n], n = 1, …, M = trailing training cells
z_L[n], n = 1, …, M = leading training cells

First-pass excision rule (cells exceeding the threshold are excised, the rest retained):

  x[i] ≷ T1² · max( (1/M) Σ_{n=1..M} z_T[n], (1/M) Σ_{n=1..M} z_L[n] )

Reference: S. L. Wilson, "Analysis of NRL's two-pass greatest-of excision CFAR," Internal Memorandum, MIT Lincoln Laboratory, October 5, 1998.

Slide 28: Two-Pass Greatest-Of Excision CFAR (Second Pass)

[Diagram: the same sliding window (leading training cells L, guard cells G, cell under test, guard cells, trailing training cells T) applied over the input data x[i], now using the first-pass noise estimate buffer b[i].]

Second-pass detection rule (cells exceeding the threshold are declared targets, the rest noise):

  x[i] ≷ T2² · max( (1/M) Σ_{n=1..M} z_T[n], (1/M) Σ_{n=1..M} z_L[n] )

where T2 = f(M, T1, P_FA).

Slide 29: Parallel CFAR Detection

[Diagram: data segments 1-4, each processed by its own CFAR detector.]

• Parallel mapping constraints:
  – #segments MOD #processors = 0
  – 1st: parallelize across segments
  – 2nd: parallelize across beams

Slide 30: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
  – Latency vs. Throughput
  – Corner Turn
  – Dynamic Load Balancing
• Software Frameworks
• Summary

Slide 31: Latency and Throughput

[Diagram: the Filter (XOUT = FIR(XIN)), Beamform (XOUT = w*XIN), and Detect (XOUT = |XIN| > c) stages, and the conduits between them, run on a parallel computer with stage/conduit times of 0.5, 0.5, 1.0, 0.3, and 0.8 seconds.]

Latency = 0.5 + 0.5 + 1.0 + 0.3 + 0.8 = 3.1 seconds
Throughput = 1 / max(0.5, 0.5, 1.0, 0.3, 0.8) = 1 frame/second

• Latency: total processing + communication time for one frame of data (sum of times)
• Throughput: rate at which frames can be input (set by the max of the times)
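The two metrics above reduce to a sum and a max over the stage and conduit times. An illustrative sketch (not tutorial code):

```cpp
#include <algorithm>
#include <vector>

// Sketch of the pipeline metrics: latency is the sum of the per-stage
// and per-conduit times (one frame traverses every stage), while
// throughput is set by the slowest stage.
double latency(const std::vector<double>& times) {
    double sum = 0.0;
    for (double t : times) sum += t;
    return sum;
}

double throughput(const std::vector<double>& times) {
    return 1.0 / *std::max_element(times.begin(), times.end());  // frames/s
}
```

With the slide's times {0.5, 0.5, 1.0, 0.3, 0.8}, latency comes out to 3.1 seconds and throughput to 1 frame/second.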

Slide 32: Example: Optimum System Latency

• Simple two-component system: a filter (latency = 2/N) followed by a beamformer (latency = 1/N)
• A local optimum fails to satisfy the global constraints
• A system view is needed to find the global optimum

[Charts: component latency vs. hardware units N, and filter vs. beamform hardware allocation, under the constraints latency < 8 and hardware < 32; the per-component local optimum differs from the system's global optimum.]

Slide 33: System Graph

[Diagram: Filter → Beamform → Detect. A node is a unique parallel mapping of a computation task; an edge is the conduit between a pair of parallel mappings.]

• The system graph can store the hardware resource usage of every possible task and conduit

Slide 34: Optimal Mapping of Complex Algorithms

Application (signal flow): Input (XIN) → Low Pass Filter (FIR1 with weights W1, FIR2 with weights W2) → Beamform (multiply by weights W3) → Matched Filter (FFT, weights W4, IFFT) → XOUT

Hardware: workstations, PowerPC clusters, Intel clusters, embedded multi-computers, and embedded boards each yield different optimal maps.

• Need to automate the process of mapping algorithms to hardware

Slide 35: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
  – Latency vs. Throughput
  – Corner Turn
  – Dynamic Load Balancing
• Software Frameworks
• Summary

Slide 36: Channel Space -> Beam Space

[Diagram: input channels 1 through N are weighted and combined to form beams 1 through M.]

• Data enters the system via different channels
• Filtering is performed in a channel-parallel fashion
• Beamforming requires combining data from multiple channels

Slide 37: Corner Turn Operation

[Diagram: the original data matrix (channels x samples) is distributed across processors by channel for the filter stage; the corner-turned data matrix is distributed by sample for the beamform stage.]

• Each processor sends data to every other processor
• Half the data moves across the bisection of the machine

Slide 38: Corner Turn for Signal Processing

A corner turn changes the matrix distribution (across the channel, sample, and pulse dimensions) to exploit parallelism in successive pipeline stages, moving data from P1 processors to P2 processors.

Corner turn model:

  T_CT = P1 · P2 · (α + B/β) / Q

  B = bytes per message
  Q = parallel paths
  α = message startup cost
  β = link bandwidth

This is all-to-all communication in which each of P1 processors sends a message of size B to each of P2 processors; the total data cube size is P1 · P2 · B.
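The corner-turn cost model above is a one-line function. This sketch is illustrative, not from the tutorial, and it assumes the Q parallel paths carry messages concurrently, which is why the P1·P2 message costs are divided by Q:

```cpp
// Sketch of the corner-turn cost model: P1*P2 messages, each paying a
// startup cost alpha plus B/beta transfer time, spread over Q parallel
// paths (assumed concurrent).
double corner_turn_time(double P1, double P2, double B,
                        double alpha, double beta, double Q) {
    return P1 * P2 * (alpha + B / beta) / Q;
}
```

The model makes the familiar trade visible: small B makes the startup term alpha dominate, while more parallel paths Q directly cut the corner-turn time.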

Slide 39: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
  – Latency vs. Throughput
  – Corner Turn
  – Dynamic Load Balancing
• Software Frameworks
• Summary

Slide 40: Dynamic Load Balancing

Image processing pipeline: a detection stage (work ∝ pixels, static) feeds an estimation stage (work ∝ detections, dynamic).

In a static parallel implementation, the pixel-based detection load is balanced, but the detection-based estimation load is unbalanced (e.g., per-processor work fractions of 0.13, 0.15, 0.24, 0.97, 0.08, 0.11, 0.30, 0.10).

• Static parallel implementations lead to unbalanced loads
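Under a static mapping, the step time is set by the most loaded processor, so parallel efficiency is the mean load divided by the maximum load. An illustrative sketch (not tutorial code):

```cpp
#include <algorithm>
#include <vector>

// Sketch of static-mapping efficiency: mean per-processor load divided
// by the maximum per-processor load (the processor everyone waits for).
double static_efficiency(const std::vector<double>& load) {
    double sum = 0.0, mx = 0.0;
    for (double w : load) { sum += w; mx = std::max(mx, w); }
    return (sum / load.size()) / mx;
}
```

Applied to the work fractions on this slide, the busy 0.97 processor drags efficiency down to roughly 27%, even though the average load is only 0.26.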

Slide 41: Static Parallelism and Poisson's Wall (i.e., "Balls into Bins")

[Chart: parallel speedup vs. number of processors for M = # units of work and allowed failure rate f; efficiency drops from roughly 50% to roughly 15% as the processor count grows.]

• Random fluctuations bound performance
• Much worse if targets are correlated
• Sets the maximum number of targets in nearly every system

Slide 42: Static Derivation

  speedup ≡ N_d / N_f

  N_d ≡ total detections
  N_f ≡ allowed detections with failure rate f
  N_p ≡ number of processors
  λ ≡ N_d / N_p

N_f is defined by  P_λ(N_f)^N_p = 1 − f,  where  P_λ(N) = Σ_{n=0..N} λ^n e^{−λ} / n!
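The derivation above is easy to evaluate numerically. This is an illustrative sketch, not tutorial code: poisson_cdf computes P_λ(N), and static_speedup searches for the smallest N_f (taken to be at least 1) such that all N_p processors stay within their allotment with probability 1 − f, then returns N_d / N_f.

```cpp
#include <cmath>
#include <cstddef>

// Poisson CDF: P_lambda(N) = sum_{n=0..N} lambda^n e^{-lambda} / n!
double poisson_cdf(double lambda, std::size_t N) {
    double term = std::exp(-lambda);   // n = 0 term
    double sum  = term;
    for (std::size_t n = 1; n <= N; ++n) {
        term *= lambda / n;            // recurrence for lambda^n e^{-lambda} / n!
        sum  += term;
    }
    return sum;
}

// Smallest Nf (>= 1) with P_lambda(Nf)^Np >= 1 - f, then speedup = Nd/Nf.
double static_speedup(double Nd, double Np, double f) {
    double lambda = Nd / Np;
    std::size_t Nf = 1;
    while (std::pow(poisson_cdf(lambda, Nf), Np) < 1.0 - f) ++Nf;
    return Nd / Nf;
}
```

For example, 100 detections spread statically over 100 processors with f = 0.01 requires provisioning N_f = 6 slots per processor, for a speedup near 17 (about 17% efficiency), the same regime as the roughly 15% figure plotted on the surrounding slides.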

Slide 43: Dynamic Parallelism

[Chart: parallel speedup vs. number of processors for M = # units of work and allowed failure rate f; efficiency stays near 94% where the static approach achieves 50%.]

• Assign work to processors as needed
• Large improvement even in the worst case

Slide 44: Dynamic Derivation

  worst-case speedup = N_d / (λ + g·N_d) = N_d / (N_d/N_p + g·N_d) = N_p / (1 + g·N_p)

  N_d ≡ total detections
  N_p ≡ number of processors
  g ≡ granularity of work
  λ ≡ N_d / N_p
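The final form of the bound above depends only on N_p and the work granularity g, so it is a one-liner. An illustrative sketch (not tutorial code):

```cpp
// Sketch of the worst-case dynamic bound: with detections handed out on
// demand and a scheduling overhead of g units of work per detection,
// speedup approaches Np / (1 + g*Np), independent of Nd.
double dynamic_speedup(double Np, double g) {
    return Np / (1.0 + g * Np);
}
```

The product g·N_p controls the loss: for g·N_p = 1 the worst case is still 50% efficient, and smaller granularity pushes efficiency toward the roughly 94% shown on the adjacent charts.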

Slide 45: Static vs Dynamic Parallelism

[Chart: parallel speedup vs. number of processors (1 to 1024) for linear, dynamic, and static parallelism; dynamic remains roughly 94% efficient while static falls from roughly 50% to roughly 15%.]

• Dynamic parallelism delivers good performance even in the worst case
• Static parallelism is limited by random fluctuations (up to 85% of processors are idle)

Slide 46: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
• Software Frameworks
  – PVL
  – PETE
  – S3P
  – MatlabMPI
• Summary

Slide 47: Current Standards for Parallel Coding

[Diagram: progression from vendor-supplied libraries to current industry standards to parallel object-oriented standards.]

• Industry standards (e.g., VSIPL, MPI) represent a significant improvement over coding with vendor-specific libraries
• The next generation of object-oriented standards will provide enough support to write truly portable, scalable applications

Slide 48: Goal: Write Once / Run Anywhere / Any Size

Develop code on a workstation (MATLAB-like):

  A = B + C;
  D = FFT(A);

Demo real-time with a cluster (no code changes; roll-on/roll-off), then deploy on an embedded system (no code changes).

Scalable/portable code provides high productivity.

Slide 49: Current Approach to Parallel Code

Code = algorithm + mapping. With stage 1 on processors 1-2 and stage 2 on processors 3-4:

  while (!done) {
    if (rank() == 1 || rank() == 2)
      stage1();
    else if (rank() == 3 || rank() == 4)
      stage2();
  }

Scaling stage 2 to processors 3-6 requires changing the code:

  while (!done) {
    if (rank() == 1 || rank() == 2)
      stage1();
    else if (rank() == 3 || rank() == 4 ||
             rank() == 5 || rank() == 6)
      stage2();
  }

• Algorithm and hardware mapping are linked
• The resulting code is non-scalable and non-portable

Slide 50: Scalable Approach

The same code computes A = B + C under either a single-processor or a multi-processor mapping:

  #include <Vector.h>
  #include <AddPvl.h>

  void addVectors(aMap, bMap, cMap) {
    Vector< Complex<Float> > a('a', aMap, LENGTH);
    Vector< Complex<Float> > b('b', bMap, LENGTH);
    Vector< Complex<Float> > c('c', cMap, LENGTH);
    b = 1;
    c = 2;
    a = b + c;
  }

• Single-processor and multi-processor code are the same
• Maps can be changed without changing software
• High-level code is compact

Page 51: DoD Sensor Processing: Applications and Supporting ... · Software Technology Dr. Jeremy Kepner MIT Lincoln Laboratory This work is sponsored by the High Performance Computing Modernization

MIT Lincoln LaboratorySlide-51

SC2002 Tutorial

PVL Evolution

[Timeline, 1988 to 2000: library evolution from scientific (non-real-time) computing toward real-time signal processing. Single-processor libraries: LAPACK (Fortran, object-based), VSIPL (C, object-based). Parallel communications: MPI (C, object-based), MPI/RT (C, object-based). Parallel processing libraries: ScaLAPACK (Fortran), STAPL (C, object-based), PETE (C++, object-oriented), PVL (C++, object-oriented, broad applicability)]

• Transition technology from scientific computing to real-time
• Moving from procedural (Fortran) to object-oriented (C++)

Slide 52: Anatomy of a PVL Map

[Figure: Task, Vector/Matrix, Computation, and Conduit objects each contain a Map; a Map holds a grid, a list of nodes (e.g. {0,2,4,6,8,10}), a distribution, and an overlap]

• All PVL objects contain maps
• PVL maps contain:
  – Grid
  – List of nodes
  – Distribution
  – Overlap

Slide 53: Library Components

Signal Processing & Control:
• Computation: performs signal/image processing functions on matrices/vectors (e.g. FFT, FIR, QR). Parallelism: data & task
• Vector/Matrix: used to perform matrix/vector algebra on data spanning multiple processors. Parallelism: data
• Conduit: supports data movement between tasks (i.e. the arrows on a signal flow diagram). Parallelism: task & pipeline
• Task: supports algorithm decomposition (i.e. the boxes in a signal flow diagram). Parallelism: task & pipeline

Mapping:
• Grid: organizes processors into a 2D layout
• Map: specifies how Tasks, Vectors/Matrices, and Computations are distributed on processors. Parallelism: data, task & pipeline

• Simple mappable components support data, task and pipeline parallelism

Slide 54: PVL Layered Architecture

[Figure: layered architecture. Application layer (input, analysis, output) provides productivity; Parallel Vector Library layer (Task, Conduit, Vector/Matrix, Computation, Map, Grid, Distribution) provides portability through the user interface; the math kernel (VSIPL) and messaging kernel (MPI) provide performance through the hardware interface, running on hardware from workstations and embedded boards to embedded multi-computers, PowerPC clusters, and Intel clusters]

• Layers enable simple interfaces between the application, the library, and the hardware

Slide 55: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
• Software Frameworks
  – PVL
  – PETE
  – S3P
  – MatlabMPI
• Summary

Slide 56: C++ Expression Templates and PETE

Expression: A = B + C*D

Expression type:

    BinaryNode<OpAssign, Vector,
      BinaryNode<OpAdd, Vector,
        BinaryNode<OpMultiply, Vector, Vector> > >

[Figure: the parse tree for B + C*D is built and passed by reference from operator+ through operator= in main]

1. Pass B and C references to operator+
2. Create expression parse tree
3. Return expression parse tree
4. Pass expression tree reference to operator=
5. Calculate result and perform assignment

Parse trees, not vectors, are created.

• Expression templates enhance performance by allowing temporary variables to be avoided

Slide 57: Experimental Platform

• Network of 8 Linux workstations
  – 800 MHz Pentium III processors
• Communication
  – Gigabit Ethernet, 8-port switch
  – Isolated network
• Software
  – Linux kernel release 2.2.14
  – GNU C++ compiler
  – MPICH communication library over TCP/IP

Slide 58: Experiment 1: Single Processor

[Figure: three panels, one per expression (A=B+C, A=B+C*D, A=B+C*D/E+fft(F)), plotting relative execution time (roughly 0.6 to 1.3) against vector length (32 to 131072) for VSIPL, PVL/VSIPL, and PVL/PETE]

• PVL with VSIPL has a small overhead
• PVL with PETE can surpass VSIPL

Slide 59: Experiment 2: Multi-Processor (simple communication)

[Figure: three panels, one per expression (A=B+C, A=B+C*D, A=B+C*D/E+fft(F)), plotting relative execution time (roughly 0.6 to 1.5) against vector length (32 to 131072) for C, C++/VSIPL, and C++/PETE]

• PVL with VSIPL has a small overhead
• PVL with PETE can surpass VSIPL

Slide 60: Experiment 3: Multi-Processor (complex communication)

[Figure: three panels, one per expression (A=B+C, A=B+C*D, A=B+C*D/E+fft(F)), plotting relative execution time (roughly 0.8 to 1.1) against vector length (32 to 131072) for C, C++/VSIPL, and C++/PETE]

• Communication dominates performance

Slide 61: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
• Software Frameworks
  – PVL
  – PETE
  – S3P
  – MatlabMPI
• Summary

Slide 62: S3P Framework Requirements

[Figure: signal flow graph decomposed into Tasks (boxes) connected by Conduits (arrows): Filter (XOUT = FIR(XIN)), Beamform (XOUT = w*XIN), Detect (XOUT = |XIN| > c)]

The framework requires that an application be:
• Decomposable into Tasks (computation) and Conduits (communication)
• Mappable to different sets of hardware
• Measurable in the resource usage of each mapping

• Each compute stage can be mapped to different sets of hardware and timed

Slide 63: S3P Engine

[Figure: the S3P Engine takes an application program together with algorithm information, hardware information, and system constraints, and produces the "best" system mapping; internally it chains a Map Generator, a Map Timer, and a Map Selector]

• Map Generator constructs the system graph for all candidate mappings
• Map Timer times each node and edge of the system graph
• Map Selector searches the system graph for the optimal set of maps

Slide 64: Test Case: Min(#CPU | Throughput)

• Vary number of processors used on each stage
• Time each computation stage and communication conduit
• Find path with minimum bottleneck

Input -> Low Pass Filter -> Beamform -> Matched Filter

[Table: measured times for each stage and conduit at 1, 2, 3, and 4 CPUs per stage, for throughput targets of 33 frames/sec (1.6 MHz BW) and 66 frames/sec (3.2 MHz BW)]

Slide 65: Dynamic Programming

• Graph construct is very general
• Widely used for optimization problems
• Many efficient techniques for choosing the "best" path (under constraints), such as dynamic programming

N  = total hardware units
M  = number of tasks
Pi = number of mappings for task i

    t = M
    pathTable[M][N] = all infinite weight paths
    for ( j : 1..M ) {
      for ( k : 1..Pj ) {
        for ( i : j+1..N-t+1 ) {
          if ( i - size[k] >= j ) {
            if ( j > 1 ) {
              w = weight[pathTable[j-1][i-size[k]]] +
                  weight[k] +
                  weight[edge[last[pathTable[j-1][i-size[k]]], k]]
              p = addVertex[pathTable[j-1][i-size[k]], k]
            } else {
              w = weight[k]
              p = makePath[k]
            }
            if ( weight[pathTable[j][i]] > w ) {
              pathTable[j][i] = p
            }
          }
        }
      }
      t = t - 1
    }

Slide 66: Predicted and Achieved Latency and Throughput

• Find:
  – Min(latency | #CPU)
  – Max(throughput | #CPU)
• S3P selects correct optimal mapping
• Excellent agreement between S3P predicted and achieved latencies and throughputs

[Figure: two panels plotting predicted vs. achieved latency (seconds) and throughput (frames/sec) against #CPU (4 to 8) for small (48x4K) and large (48x128K) problem sizes; each point is labeled with its selected per-stage mapping, from 1-1-1-1 up to 2-2-2-2]

Slide 67: Outline

• Introduction
• Processing Algorithms
• Parallel System Analysis
• Software Frameworks
  – PVL
  – PETE
  – S3P
  – MatlabMPI
• Summary

Slide 68: Modern Parallel Software Layers

[Figure: layered architecture with generic kernels: an application (input, analysis, output) over a parallel library (Task, Conduit, Vector/Matrix, Computation) over a math kernel and a messaging kernel, running on workstations, PowerPC clusters, and Intel clusters]

• Can build any parallel application/library on top of a few basic messaging capabilities
• MatlabMPI provides this messaging kernel

Slide 69: MatlabMPI "Core Lite"

• Parallel computing requires eight capabilities
  – MPI_Run launches a Matlab script on multiple processors
  – MPI_Comm_size returns the number of processors
  – MPI_Comm_rank returns the id of each processor
  – MPI_Send sends Matlab variable(s) to another processor
  – MPI_Recv receives Matlab variable(s) from another processor
  – MPI_Init called at beginning of program
  – MPI_Finalize called at end of program

Slide 70: MatlabMPI: Point-to-point Communication

    MPI_Send(dest, tag, comm, variable);
    variable = MPI_Recv(source, tag, comm);

[Figure: the sender saves the variable to a data file on a shared file system, then creates a lock file; the receiver detects the lock file, then loads the data file]

• Sender saves variable in data file, then creates lock file
• Receiver detects lock file, then loads data file

Slide 71: Example: Basic Send and Receive

Steps: initialize, get processor ranks, execute send, execute receive, finalize, exit.

    MPI_Init;                               % Initialize MPI.
    comm = MPI_COMM_WORLD;                  % Create communicator.
    comm_size = MPI_Comm_size(comm);        % Get size.
    my_rank = MPI_Comm_rank(comm);          % Get rank.
    source = 0;                             % Set source.
    dest = 1;                               % Set destination.
    tag = 1;                                % Set message tag.

    if (comm_size == 2)                     % Check size.
      if (my_rank == source)                % If source.
        data = 1:10;                        % Create data.
        MPI_Send(dest, tag, comm, data);    % Send data.
      end
      if (my_rank == dest)                  % If destination.
        data = MPI_Recv(source, tag, comm); % Receive data.
      end
    end

    MPI_Finalize;                           % Finalize MatlabMPI.
    exit;                                   % Exit Matlab.

• Uses standard message passing techniques
• Will run anywhere Matlab runs
• Only requires a common file system

Slide 72: MatlabMPI vs MPI Bandwidth

[Figure: bandwidth (1.E+05 to 1.E+08 bytes/sec, SGI Origin2000) vs. message size (1 KB to 32 MB) for C MPI and MatlabMPI]

• Bandwidth matches native C MPI at large message sizes
• Primary difference is latency (35 milliseconds vs. 30 microseconds)

Slide 73: Image Filtering Parallel Performance

[Figure: left panel: speedup vs. number of processors (1 to 64) on a fixed problem size (SGI O2000), MatlabMPI vs. linear; right panel: gigaflops vs. number of processors (1 to 1000) on a scaled problem size (IBM SP2), MatlabMPI vs. linear]

• Achieved "classic" super-linear speedup on fixed problem
• Achieved speedup of ~300 on 304 processors on scaled problem

Slide 74: Productivity vs. Performance

[Figure: lines of code (10 to 1000) vs. peak performance (0.1 to 1000) for Matlab, MatlabMPI, ParallelMatlab*, C, C++, VSIPL, VSIPL/OpenMP, VSIPL/MPI, and PVL, grouped into single processor, shared memory, and distributed memory; current practice vs. current research]

• Programmed image filtering several ways:
  – Matlab
  – VSIPL
  – VSIPL/OpenMP
  – VSIPL/MPI
  – PVL
  – MatlabMPI
• MatlabMPI provides:
  – high productivity
  – high performance

Slide 75: Summary

• Exploiting parallel processing for streaming applications presents unique software challenges.

• The community is developing software libraries to address many of these challenges:
  – Exploits C++ to easily express data/task parallelism
  – Separates parallel hardware dependencies from software
  – Allows a variety of strategies for implementing dynamic applications (e.g. for fault tolerance)
  – Delivers high performance execution comparable to or better than standard approaches

• Our future efforts will focus on adding to and exploiting the features of this technology to:
  – Exploit dynamic parallelism
  – Integrate high performance parallel software underneath mainstream programming environments (e.g. Matlab, IDL, …)
  – Use self-optimizing techniques to maintain performance