Transcript
Page 1:

Chapter 4 - Data-Level Parallelism in Vector, SIMD, and GPU Architectures

CSCI/EENG – 641 - W01, Computer Architecture 1

Dr. Babak Beheshti

Slides based on the PowerPoint Presentations created by David Patterson as part of the Instructor Resources for the textbook by Hennessy & Patterson

Page 2:

Taxonomy

• Flynn’s Taxonomy
– Classify by instruction stream and data stream
– SISD: Single Instruction, Single Data

• Conventional processor

– SIMD: Single Instruction, Multiple Data
• One instruction stream
• Multiple data items
• Several examples produced

– MISD: Multiple Instruction, Single Data
• Systolic arrays

– MIMD: Multiple Instruction, Multiple Data
• Multiple threads of execution
• General parallel processors

Page 3:

SIMD - Single Instruction Multiple Data

• Originally thought to be the ultimate massively parallel machine!

• Some machines built
– Illiac IV
– Thinking Machines CM-2
– MasPar
– Vector processors (a special category!)

Page 4:

SIMD - Single Instruction Multiple Data

• Each PE is a simple ALU (1 bit in the CM-1, a small processor in some machines)

• The control processor issues the same instruction to each PE in each cycle

• Each PE has different data

Page 5:

SIMD

• SIMD performance depends on
– Mapping the problem → the processor architecture
– Image processing

• Maps naturally to a 2D processor array
• Calculations on individual pixels are trivial; combining data is the problem! (see the sketch below)

– Some matrix operations also
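
To make the contrast concrete, here is a minimal C sketch (our own illustration, not from the slides; the function names are hypothetical). The per-pixel threshold is trivially parallel, one pixel per PE, while the global sum is a reduction that forces data to be combined across PEs:

    #include <stdint.h>

    /* Per-pixel work: each PE can process its own pixel independently. */
    void threshold(uint8_t *img, int npixels, uint8_t t) {
        for (int i = 0; i < npixels; i++)      /* one pixel per PE */
            img[i] = (img[i] > t) ? 255 : 0;
    }

    /* Global sum: a reduction -- partial sums must be combined across
       PEs, which is where the communication cost shows up. */
    long pixel_sum(const uint8_t *img, int npixels) {
        long s = 0;
        for (int i = 0; i < npixels; i++)
            s += img[i];
        return s;
    }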

Page 6:

SIMD

• Matrix multiplication
– Each PE performs:

• * (multiply), then
• + (accumulate)

– PE_ij → C_ij

Note: the B matrix is transposed!
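
As a rough C sketch of the computation each PE performs (our illustration, assuming the B matrix is stored transposed as BT), the two outer loops stand in for PEs running in lock-step, and each PE does a multiply then an accumulate per step:

    #define N 4   /* illustrative array size */

    /* PE(i,j) accumulates C[i][j]; with B transposed, both operands
       stream through the PE with stride 1. */
    void pe_array_matmul(double C[N][N], const double A[N][N],
                         const double BT[N][N]) {
        for (int i = 0; i < N; i++)            /* all PEs ... */
            for (int j = 0; j < N; j++) {      /* ... conceptually in parallel */
                C[i][j] = 0.0;
                for (int k = 0; k < N; k++)    /* N lock-step steps */
                    C[i][j] += A[i][k] * BT[j][k];   /* * then + */
            }
    }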

Page 7:

Parallel Processing

• Communication patterns
– If the system provides the “correct” data paths, then good performance is obtained even with slow PEs

– Without effective communication bandwidth, even fast PEs are starved of data!

– In a multiple-PE system, we have
• Raw communication bandwidth
– Equivalent processor ↔ memory bandwidth

• Communication patterns
– Imagine the matrix-multiplication problem if the matrices are not already transposed!

• Network topology

Page 8:

Vector Processors - The Supercomputers

• Optimized for vector & matrix operations

“Conventional” scalar processor section not shown

Page 9:

Vector Processors - Vector operations

• Example
– Dot product: y = A · B
or, in terms of the elements, y = Σ_k a_k × b_k

• Fetch each element of each vector in turn
– Stride
• “Distance” between successive elements of a vector
– 1 in the dot-product case
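
A small C sketch of strided fetching (our own illustration, not from the slides): successive elements sit a fixed distance apart in memory, and the dot product is the stride-1 case.

    /* y = sum over k of a[k]*b[k], fetching each element in turn.
       stride = 1 for the dot product; a column of a row-major
       n x n matrix would be walked with stride n. */
    double dot_strided(const double *a, int stride_a,
                       const double *b, int stride_b, int n) {
        double y = 0.0;
        for (int k = 0; k < n; k++)
            y += a[k * stride_a] * b[k * stride_b];
        return y;
    }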

Page 10:

Vector Processors - Vector operations

• Example
– Matrix multiply: C = A × B
or, in terms of the elements, c_ij = Σ_k a_ik × b_kj

Page 11:

Vector Operations

• Fetch data into a vector register
– The Address Generation Unit (AGU) manages the stride

Very high effective bandwidth to memory: long “burst” accesses, with the AGU managing addresses

Page 12:


SIMD

• SIMD architectures can exploit significant data-level parallelism for:
– matrix-oriented scientific computing
– media-oriented image and sound processors

• SIMD is more energy-efficient than MIMD
– Only needs to fetch one instruction per data operation
– Makes SIMD attractive for personal mobile devices

• SIMD allows programmer to continue to think sequentially


Page 13:


SIMD Parallelism

• Vector architectures
• SIMD extensions
• Graphics Processor Units (GPUs)

• For x86 processors:
– Expect two additional cores per chip per year
– SIMD width to double every four years
– Potential speedup from SIMD to be twice that from MIMD!


Page 14:


Vector Architectures

• Basic idea:
– Read sets of data elements into “vector registers”
– Operate on those registers
– Disperse the results back into memory

• Registers are controlled by the compiler
– Used to hide memory latency
– Leverage memory bandwidth


Page 15:


VMIPS
• Example architecture: VMIPS

– Loosely based on the Cray-1
– Vector registers
• Each register holds a 64-element, 64 bits/element vector
• Register file has 16 read ports and 8 write ports
– Vector functional units
• Fully pipelined
• Data and control hazards are detected
– Vector load-store unit
• Fully pipelined
• One word per clock cycle after initial latency
– Scalar registers
• 32 general-purpose registers
• 32 floating-point registers


Page 16:


VMIPS Instructions
• ADDVV.D: add two vectors
• ADDVS.D: add a vector to a scalar
• LV/SV: vector load and vector store from an address

• Example: DAXPY

    L.D      F0,a       ; load scalar a
    LV       V1,Rx      ; load vector X
    MULVS.D  V2,V1,F0   ; vector-scalar multiply
    LV       V3,Ry      ; load vector Y
    ADDVV.D  V4,V2,V3   ; add
    SV       Ry,V4      ; store the result

• Requires 6 instructions vs. almost 600 for MIPS
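
To see where the factor of roughly 100 comes from: a scalar MIPS loop for DAXPY needs about nine instructions per element (two loads, a multiply, an add, a store, two pointer increments, a bound computation, and a branch), so 64 elements come to nearly 600 instructions versus the 6 vector instructions above.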


Page 17:


Vector Execution Time

• Execution time depends on three factors:
– Length of the operand vectors
– Structural hazards
– Data dependencies

• VMIPS functional units consume one element per clock cycle
– Execution time is approximately the vector length

• Convoy
– Set of vector instructions that could potentially execute together


Page 18:


Chimes

• Sequences with read-after-write dependency hazards can be in the same convoy via chaining

• Chaining
– Allows a vector operation to start as soon as the individual elements of its vector source operand become available

• Chime
– Unit of time to execute one convoy
– m convoys execute in m chimes
– For a vector length of n, requires m × n clock cycles


Page 19:


Example

    LV       V1,Rx     ; load vector X
    MULVS.D  V2,V1,F0  ; vector-scalar multiply
    LV       V3,Ry     ; load vector Y
    ADDVV.D  V4,V2,V3  ; add two vectors
    SV       Ry,V4     ; store the sum

Convoys:
1 LV MULVS.D
2 LV ADDVV.D
3 SV

3 chimes, 2 FP ops per result, cycles per FLOP = 1.5
For 64-element vectors, requires 64 × 3 = 192 clock cycles


Page 20:


Challenges
• Start-up time
– Latency of the vector functional unit
– Assume the same as the Cray-1:

• Floating-point add => 6 clock cycles
• Floating-point multiply => 7 clock cycles
• Floating-point divide => 20 clock cycles
• Vector load => 12 clock cycles

• Improvements:
– > 1 element per clock cycle
– Non-64-wide vectors
– IF statements in vector code
– Memory system optimizations to support vector processors
– Multidimensional matrices
– Sparse matrices
– Programming a vector computer


Page 21:


Multiple Lanes
• Element n of vector register A is “hardwired” to element n of vector register B
– Allows for multiple hardware lanes (see the sketch below)
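
A minimal C sketch of the lane idea (our illustration, assuming 4 lanes): lane l owns elements l, l+4, l+8, ... of every vector register, so an element-wise ADDVV advances 4 elements per clock with no communication between lanes.

    #define LANES 4

    /* V4 <- V2 + V3, element-wise; vl is the vector length.
       The outer loop models lanes that run in parallel hardware;
       each lane touches only its own slice of the register file. */
    void addvv_by_lane(double *v4, const double *v2,
                       const double *v3, int vl) {
        for (int lane = 0; lane < LANES; lane++)   /* parallel in hardware */
            for (int e = lane; e < vl; e += LANES)
                v4[e] = v2[e] + v3[e];
    }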


Page 22:


Vector Length Register

• Vector length not known at compile time?
• Use the Vector Length Register (VLR)
• Use strip mining for vectors over the maximum length:

    low = 0;
    VL = (n % MVL);                      /* find odd-size piece using modulo op % */
    for (j = 0; j <= (n/MVL); j = j+1) { /* outer loop */
        for (i = low; i < (low+VL); i = i+1) /* runs for length VL */
            Y[i] = a * X[i] + Y[i];      /* main operation */
        low = low + VL;                  /* start of next vector */
        VL = MVL;                        /* reset the length to maximum vector length */
    }
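
For example, with n = 200 and MVL = 64, the first pass runs with VL = 200 % 64 = 8, and the three remaining passes each run with VL = 64, covering 8 + 64 + 64 + 64 = 200 elements.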


Page 23:


Vector Mask Registers

• Consider:

    for (i = 0; i < 64; i = i+1)
        if (X[i] != 0)
            X[i] = X[i] - Y[i];

• Use the vector mask register to “disable” elements:

    LV       V1,Rx     ; load vector X into V1
    LV       V2,Ry     ; load vector Y
    L.D      F0,#0     ; load FP zero into F0
    SNEVS.D  V1,F0     ; sets VM(i) to 1 if V1(i) != F0
    SUBVV.D  V1,V1,V2  ; subtract under vector mask
    SV       Rx,V1     ; store the result in X

• GFLOPS rate decreases, since masked-off element slots still consume execution time!


Page 24:


Programming Vec. Architectures

• Compilers can provide feedback to programmers
• Programmers can provide hints to the compiler


Page 25:


SIMD Implementations

• Implementations:
– Intel MMX (1996)

• Eight 8-bit integer ops or four 16-bit integer ops

– Streaming SIMD Extensions (SSE) (1999)
• Eight 16-bit integer ops
• Four 32-bit integer/fp ops or two 64-bit integer/fp ops

– Advanced Vector Extensions (AVX) (2010)
• Four 64-bit integer/fp ops

– Operands must be consecutive and aligned memory locations


Page 26:


Example SIMD Code
• Example DAXPY:

    L.D     F0,a        ; load scalar a
    MOV     F1,F0       ; copy a into F1 for SIMD MUL
    MOV     F2,F0       ; copy a into F2 for SIMD MUL
    MOV     F3,F0       ; copy a into F3 for SIMD MUL
    DADDIU  R4,Rx,#512  ; last address to load
    Loop:
    L.4D    F4,0[Rx]    ; load X[i], X[i+1], X[i+2], X[i+3]
    MUL.4D  F4,F4,F0    ; a×X[i], a×X[i+1], a×X[i+2], a×X[i+3]
    L.4D    F8,0[Ry]    ; load Y[i], Y[i+1], Y[i+2], Y[i+3]
    ADD.4D  F8,F8,F4    ; a×X[i]+Y[i], ..., a×X[i+3]+Y[i+3]
    S.4D    0[Ry],F8    ; store into Y[i], Y[i+1], Y[i+2], Y[i+3]
    DADDIU  Rx,Rx,#32   ; increment index to X
    DADDIU  Ry,Ry,#32   ; increment index to Y
    DSUBU   R20,R4,Rx   ; compute bound
    BNEZ    R20,Loop    ; check if done
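
Each iteration now handles four doubles (hence the 32-byte pointer increments), so the 512 bytes (64 elements) set up by the first DADDIU take 16 loop iterations instead of 64, at the cost of the extra MOVs that replicate the scalar a.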


Page 27:


Roofline Performance Model

• Basic idea:

– Plot peak floating-point throughput as a function of arithmetic intensity

– Ties together floating-point performance and memory performance for a target machine

• Arithmetic intensity

– Floating-point operations per byte read


Page 28:


Examples

• Attainable GFLOPs/sec = Min(Peak Memory BW × Arithmetic Intensity, Peak Floating-Point Perf.)
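
A small C sketch of this bound (our illustration; the function and parameter names are hypothetical): the attainable rate is the lower of the memory roof and the compute roof.

    /* Roofline bound: min(peak memory BW x arithmetic intensity,
       peak floating-point performance). */
    double attainable_gflops(double peak_bw_gbs,      /* GB/s */
                             double intensity,        /* FLOPs per byte */
                             double peak_fp_gflops) { /* GFLOP/s */
        double memory_roof = peak_bw_gbs * intensity;
        return (memory_roof < peak_fp_gflops) ? memory_roof : peak_fp_gflops;
    }

For instance, at 16 GB/s and 0.25 FLOPs/byte, the memory roof of 4 GFLOP/s binds long before a 32 GFLOP/s compute peak.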


Page 29:


Loop-Level Parallelism

• Focuses on determining whether data accesses in later iterations are dependent on data values produced in earlier iterations
– Loop-carried dependence

• Example 1:

    for (i = 999; i >= 0; i = i-1)
        x[i] = x[i] + s;

• No loop-carried dependence


Page 30:


Loop-Level Parallelism

• Example 2:

    for (i = 0; i < 100; i = i+1) {
        A[i+1] = A[i] + C[i];      /* S1 */
        B[i+1] = B[i] + A[i+1];    /* S2 */
    }

• S1 and S2 use values computed by S1 in previous iteration

• S2 uses value computed by S1 in same iteration


Page 31:


Loop-Level Parallelism

• Example 3:

    for (i = 0; i < 100; i = i+1) {
        A[i] = A[i] + B[i];      /* S1 */
        B[i+1] = C[i] + D[i];    /* S2 */
    }

• S1 uses the value computed by S2 in the previous iteration, but the dependence is not circular, so the loop is parallel
• Transform to:

    A[0] = A[0] + B[0];
    for (i = 0; i < 99; i = i+1) {
        B[i+1] = C[i] + D[i];
        A[i+1] = A[i+1] + B[i+1];
    }
    B[100] = C[99] + D[99];


Page 32:


Loop-Level Parallelism

• Example 4:

    for (i = 0; i < 100; i = i+1) {
        A[i] = B[i] + C[i];
        D[i] = A[i] * E[i];
    }

– The dependence of D[i] on A[i] is within the same iteration, not loop-carried, so the loop is parallel

• Example 5:

    for (i = 1; i < 100; i = i+1) {
        Y[i] = Y[i-1] + Y[i];
    }

– A true loop-carried recurrence: each iteration needs the result of the previous one


Page 33:

VECTOR PROCESSING - EXAMPLE

• Consider the following vector-multiplication problem: X * Y = Z, where X, Y, and Z are 100-value vectors (arrays of size 100).
• In C (to help visualize the connection to the vector and MIPS pseudo-code), this would be written as:

    for (i = 0; i < 100; i++)
        Z[i] = X[i] * Y[i];

Page 34:

Example (Cont’d)

• Were this to be implemented on a MIPS machine, each multiplication would take 4 clock cycles, and the entire loop would take in excess of 400 cycles.

• Were this to be implemented in a Vector Processing machine, first, a number of elements from X and a number from Y would be loaded into separate vector registers (can be done simultaneously).

Page 35:

Example (Cont’d)

• Next, the multiply pipeline would begin taking in elements from X and Y. After a single clock-cycle, another set of elements would be fed into this pipeline. After 4 clock-cycles the first result would be completed and stored in vector register Z. The second result would be completed in clock-cycle 5, and so on.

• Finally, once all this is complete, the values are taken from vector register Z and stored in main memory.

• The time the multiplication itself takes is a mere 103 clock cycles: the first result appears after the 4-cycle pipeline latency, and the remaining 99 results complete at one per cycle (4 + 99 = 103).

Page 36:

PSEUDO CODE - VECTOR PROCESSING

    VLOAD  X VR1        // load X into VR1, a vector register
    VLOAD  Y VR2        // load Y into VR2, a vector register
    VMULT  VR1 VR2 VR3  // vector-multiply VR1 by VR2, storing the results in VR3
    VSTORE VR3 Z        // store vector register VR3 into main memory as Z

Page 37:

PSEUDO CODE – MIPS

    LW     X[i], $a0       // load the first element of X into a register
    LW     Y[i], $a1       // load the first element of Y into a register
    "MULT" $a2, $a0, $a1   // multiply $a0 and $a1 and store the result in $a2
    SW     $a2, Z[i]       // store $a2 into memory
    // Repeat 100 times

Page 38:

SUMMARY

• The Vector machine is faster at performing mathematical operations on larger vectors than is the MIPS machine.

• The Vector processing computer’s vector register architecture makes it better able to compute vast amounts of data quickly.

