Chapter 3
Arithmetic for Computers
CprE 381 Computer Organization and Assembly Level Programming, Fall 2013
Zhao Zhang, Iowa State University
Revised from original slides provided by MKP
Chapter 3 — Arithmetic for Computers — 2
Week 7 Overview
Lecture: integer division, the IEEE floating point standard, floating-point arithmetic
Lab Mini-Project A: Arithmetic Unit (1-bit ALU, 32-bit ALU, barrel shifter)
Quiz resumes this week
Chapter 3 — Arithmetic for Computers — 3
Division
Check for 0 divisor
Long division approach:
  If divisor ≤ dividend bits: 1 bit in quotient, subtract
  Otherwise: 0 bit in quotient, bring down the next dividend bit
Restoring division: do the subtract, and if the remainder goes < 0, add the divisor back
Signed division: divide using absolute values, then adjust the signs of the quotient and remainder as required
Example: dividend 1001010₂ ÷ divisor 1000₂ = quotient 1001₂, remainder 10₂
n-bit operands yield an n-bit quotient and an n-bit remainder
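The restoring-division loop described above can be sketched in C. This is an illustrative sketch, not the MIPS hardware; the function name `restoring_div` is ours:

```c
#include <stdint.h>

/* Restoring division, one quotient bit per step (sketch).
 * Caller must check for a zero divisor, as on MIPS.
 * Returns the quotient; *rem receives the remainder. */
uint32_t restoring_div(uint32_t dividend, uint32_t divisor, uint32_t *rem)
{
    uint64_t r = 0;      /* partial remainder */
    uint32_t q = 0;
    for (int i = 31; i >= 0; i--) {
        r = (r << 1) | ((dividend >> i) & 1); /* bring down next dividend bit */
        if (r >= divisor) {   /* divisor fits: 1 bit in quotient, subtract */
            r -= divisor;
            q |= 1u << i;
        }                     /* otherwise: 0 bit in quotient */
    }
    *rem = (uint32_t)r;
    return q;
}
```

For the slide's example, 1001010₂ ÷ 1000₂ (74 ÷ 8) gives quotient 1001₂ (9) and remainder 10₂ (2).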
§3.4 Division
Chapter 3 — Arithmetic for Computers — 4
Division Hardware
(Figure: Remainder register initially holds the dividend; Divisor register initially holds the divisor in its left half)
Chapter 3 — Arithmetic for Computers — 5
Optimized Divider
One cycle per partial-remainder subtraction
Looks a lot like a multiplier! The same hardware can be used for both
Chapter 3 — Arithmetic for Computers — 6
Faster Division
Can't use parallel hardware as in the multiplier: subtraction is conditional on the sign of the remainder
Faster dividers (e.g., SRT division) generate multiple quotient bits per step, but still require multiple steps
Chapter 3 — Arithmetic for Computers — 7
MIPS Division
Use HI/LO registers for the result: HI = 32-bit remainder, LO = 32-bit quotient
Instructions: div rs, rt / divu rs, rt
No overflow or divide-by-0 checking; software must perform checks if required
Use mfhi, mflo to access the result
Chapter 3 — Arithmetic for Computers — 8
Floating Point
Representation for non-integral numbers, including very small and very large numbers
Like scientific notation:
  –2.34 × 10^56 (normalized)
  +0.002 × 10^–4 (not normalized)
  +987.02 × 10^9 (not normalized)
In binary: ±1.xxxxxxx₂ × 2^yyyy
Types float and double in C
§3.5 Floating Point
Chapter 3 — Arithmetic for Computers — 9
Floating Point Standard
Defined by IEEE Std 754-1985
Developed in response to divergence of representations, which caused portability issues for scientific code
Now almost universally adopted
Two representations: single precision (32-bit) and double precision (64-bit)
Chapter 3 — Arithmetic for Computers — 10
IEEE Floating-Point Format
S: sign bit (0 = non-negative, 1 = negative)
Normalize significand: 1.0 ≤ |significand| < 2.0
Always has a leading pre-binary-point 1 bit, so no need to represent it explicitly (hidden bit); the significand is the Fraction with the "1." restored
Exponent: excess representation: actual exponent + Bias; ensures the stored exponent is unsigned
Single: Bias = 127; Double: Bias = 1023
Layout: S | Exponent (single: 8 bits, double: 11 bits) | Fraction (single: 23 bits, double: 52 bits)
x = (–1)^S × (1 + Fraction) × 2^(Exponent – Bias)
Chapter 3 — Arithmetic for Computers — 11
Single-Precision Range
Exponents 00000000 and 11111111 reserved
Smallest value: exponent 00000001, so actual exponent = 1 – 127 = –126; fraction 000…00, so significand = 1.0; ±1.0 × 2^–126 ≈ ±1.2 × 10^–38
Largest value: exponent 11111110, so actual exponent = 254 – 127 = +127; fraction 111…11, so significand ≈ 2.0; ±2.0 × 2^+127 ≈ ±3.4 × 10^+38
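The field decoding just described can be checked with a few lines of C. A sketch for normalized values only (zeros, denormals, infinities and NaNs are ignored); the name `decode_float` is ours:

```c
#include <stdint.h>
#include <math.h>

/* Decode a normalized single-precision bit pattern (sketch). */
double decode_float(uint32_t bits)
{
    int      sign = (bits >> 31) & 1;
    int      exp  = (bits >> 23) & 0xFF;     /* biased exponent, 8 bits */
    uint32_t frac = bits & 0x7FFFFF;         /* 23 fraction bits */

    double significand = 1.0 + frac / 8388608.0;  /* restore "1.": 2^23 = 8388608 */
    double value = ldexp(significand, exp - 127); /* x 2^(exp - Bias) */
    return sign ? -value : value;
}
```

For example, the pattern 1 10000001 01000…00 (0xC0A00000) decodes to –1.25 × 2^2 = –5.0.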
Chapter 3 — Arithmetic for Computers — 12
Double-Precision Range
Exponents 0000…00 and 1111…11 reserved
Smallest value: exponent 00000000001, so actual exponent = 1 – 1023 = –1022; fraction 000…00, so significand = 1.0; ±1.0 × 2^–1022 ≈ ±2.2 × 10^–308
Largest value: exponent 11111111110, so actual exponent = 2046 – 1023 = +1023; fraction 111…11, so significand ≈ 2.0; ±2.0 × 2^+1023 ≈ ±1.8 × 10^+308
Chapter 3 — Arithmetic for Computers — 13
Floating-Point Precision
Relative precision: all fraction bits are significant
Single: approx 2^–23, equivalent to 23 × log₁₀2 ≈ 23 × 0.3 ≈ 6 decimal digits of precision
Double: approx 2^–52, equivalent to 52 × log₁₀2 ≈ 52 × 0.3 ≈ 16 decimal digits of precision
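These relative precisions correspond to the machine epsilons exposed in <float.h>. A quick C check, assuming IEEE 754 float and double (true on virtually all current hardware):

```c
#include <float.h>
#include <math.h>

/* Machine epsilon matches the relative precision quoted above. */
int float_precision_ok(void)
{
    return FLT_EPSILON == ldexp(1.0, -23)        /* single: 2^-23 */
        && DBL_EPSILON == ldexp(1.0, -52)        /* double: 2^-52 */
        && (float)(1.0f + FLT_EPSILON / 2) == 1.0f; /* half an ulp is lost */
}
```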
Chapter 3 — Arithmetic for Computers — 14
Floating-Point Example
Represent –0.75: –0.75 = (–1)^1 × 1.1₂ × 2^–1
S = 1
Fraction = 1000…00₂
Exponent = –1 + Bias
  Single: –1 + 127 = 126 = 01111110₂
  Double: –1 + 1023 = 1022 = 01111111110₂
Single: 1 01111110 1000…00
Double: 1 01111111110 1000…00
Chapter 3 — Arithmetic for Computers — 15
Floating-Point Example
What number is represented by the single-precision float 1 10000001 01000…00?
S = 1
Fraction = 01000…00₂
Exponent = 10000001₂ = 129
x = (–1)^1 × (1 + 0.01₂) × 2^(129 – 127)
  = (–1) × 1.25 × 2^2
  = –5.0
Chapter 3 — Arithmetic for Computers — 16
Denormal Numbers
Exponent = 000…0, so the hidden bit is 0: x = (–1)^S × (0 + Fraction) × 2^–Bias
Smaller than normal numbers allow for gradual underflow, with diminishing precision
Denormal with Fraction = 000…0: x = (–1)^S × (0 + 0) × 2^–Bias = ±0.0
Two representations of 0.0!
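Gradual underflow and the two zeros can be observed directly in C. A sketch, assuming IEEE 754 floats and no flush-to-zero mode:

```c
#include <float.h>

/* Below FLT_MIN (1.0 x 2^-126) values become denormal rather than
 * flushing straight to zero, and +0.0 and -0.0 compare equal. */
int denormal_demo(void)
{
    float smallest_normal = FLT_MIN;        /* 1.0 x 2^-126 */
    float denorm = smallest_normal / 2.0f;  /* denormal: 0.1 x 2^-126 */
    return denorm > 0.0f && denorm < smallest_normal
        && 0.0f == -0.0f;                   /* two representations of 0.0 */
}
```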
Chapter 3 — Arithmetic for Computers — 17
Infinities and NaNs
Exponent = 111…1, Fraction = 000…0: ±Infinity; can be used in subsequent calculations, avoiding the need for overflow checks
Exponent = 111…1, Fraction ≠ 000…0: Not-a-Number (NaN); indicates an illegal or undefined result, e.g., 0.0 / 0.0; can be used in subsequent calculations
Chapter 3 — Arithmetic for Computers — 18
Floating-Point Addition
Consider a 4-digit decimal example: 9.999 × 10^1 + 1.610 × 10^–1
1. Align decimal points (shift the number with the smaller exponent): 9.999 × 10^1 + 0.016 × 10^1
2. Add significands: 9.999 × 10^1 + 0.016 × 10^1 = 10.015 × 10^1
3. Normalize result & check for over/underflow: 1.0015 × 10^2
4. Round and renormalize if necessary: 1.002 × 10^2
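The four steps can be mimicked with integer significands scaled by 1000 (so 9999 stands for 9.999). This is a toy model of the decimal example, not real FP hardware; the names are ours, and rare carry-out from rounding is ignored:

```c
/* The four addition steps on toy (significand, exponent) pairs,
 * keeping 4 significant decimal digits as in the example above. */
void fp_add_decimal(int sig_a, int exp_a, int sig_b, int exp_b,
                    int *sig_out, int *exp_out)
{
    while (exp_b < exp_a) { sig_b /= 10; exp_b++; }  /* 1. align points */
    while (exp_a < exp_b) { sig_a /= 10; exp_a++; }
    int sig = sig_a + sig_b;                         /* 2. add significands */
    if (sig >= 10000) {                              /* 3. normalize */
        int lost = sig % 10;
        sig /= 10;
        exp_a++;
        if (lost >= 5) sig++;                        /* 4. round (half up) */
    }
    *sig_out = sig;
    *exp_out = exp_a;
}
```

Feeding in 9.999 × 10^1 and 1.610 × 10^–1 reproduces the slide's result, 1.002 × 10^2.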
Chapter 3 — Arithmetic for Computers — 19
Floating-Point Addition
Now consider a 4-digit binary example: 1.000₂ × 2^–1 + –1.110₂ × 2^–2 (0.5 + –0.4375)
1. Align binary points (shift the number with the smaller exponent): 1.000₂ × 2^–1 + –0.111₂ × 2^–1
2. Add significands: 1.000₂ × 2^–1 + –0.111₂ × 2^–1 = 0.001₂ × 2^–1
3. Normalize result & check for over/underflow: 1.000₂ × 2^–4, with no over/underflow
4. Round and renormalize if necessary: 1.000₂ × 2^–4 (no change) = 0.0625
Chapter 3 — Arithmetic for Computers — 20
FP Adder Hardware
Much more complex than an integer adder
Doing it in one clock cycle would take too long: much longer than integer operations, and a slower clock would penalize all instructions
FP adder usually takes several cycles; can be pipelined
Chapter 3 — Arithmetic for Computers — 21
FP Adder Hardware
(Figure: four-step datapath, with muxes labeled A, B, C, D, E)
FP Adder Hardware
Assume the inputs are X and Y, with X.exp ≥ Y.exp
Steps 1 and 2:
  The small ALU outputs the exponent difference
  Mux A selects X.exp, the bigger exponent
  Mux B selects Y.frac, which will be shifted right
  Mux C selects X.frac
  The big ALU adds X.frac and the shifted Y.frac
  Muxes D and E select X.exp and the big ALU output (the added fraction)
Steps 3 and 4, if active:
  Muxes D and E select the exponent and fraction from the rounding hardware output
The control unit generates control signals for all muxes and other components
Chapter 3 — Arithmetic for Computers — 22
Chapter 3 — Arithmetic for Computers — 23
Floating-Point Multiplication
Consider a 4-digit decimal example: 1.110 × 10^10 × 9.200 × 10^–5
1. Add exponents: new exponent = 10 + –5 = 5 (for biased exponents, subtract the bias from the sum)
2. Multiply significands: 1.110 × 9.200 = 10.212, giving 10.212 × 10^5
3. Normalize result & check for over/underflow: 1.0212 × 10^6
4. Round and renormalize if necessary: 1.021 × 10^6
5. Determine sign of result from signs of operands: +1.021 × 10^6
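The five steps can be replayed in the same toy 4-digit format (significand 1000..9999 standing for 1.000..9.999; the function name is ours, and rare carry-out from rounding is ignored):

```c
/* The five multiplication steps on toy decimal operands. */
void fp_mul_decimal(int sign_a, int sig_a, int exp_a,
                    int sign_b, int sig_b, int exp_b,
                    int *sign_out, int *sig_out, int *exp_out)
{
    int exp = exp_a + exp_b;                    /* 1. add exponents */
    long sig = (long)sig_a * sig_b;             /* 2. multiply significands */
    /* d.ddd x d.ddd is now scaled by 10^6 */
    if (sig >= 10000000L) { sig /= 10; exp++; } /* 3. normalize */
    long lost = sig % 1000;                     /* 4. round back to 4 digits */
    sig /= 1000;
    if (lost >= 500) sig++;
    *sign_out = sign_a ^ sign_b;                /* 5. sign from operand signs */
    *sig_out = (int)sig;
    *exp_out = exp;
}
```

Feeding in 1.110 × 10^10 and 9.200 × 10^–5 reproduces the slide's result, +1.021 × 10^6.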
Chapter 3 — Arithmetic for Computers — 24
Floating-Point Multiplication
Now consider a 4-digit binary example: 1.000₂ × 2^–1 × –1.110₂ × 2^–2 (0.5 × –0.4375)
1. Add exponents: unbiased: –1 + –2 = –3; biased: (–1 + 127) + (–2 + 127) = –3 + 254 – 127 = –3 + 127
2. Multiply significands: 1.000₂ × 1.110₂ = 1.110₂, giving 1.110₂ × 2^–3
3. Normalize result & check for over/underflow: 1.110₂ × 2^–3 (no change), with no over/underflow
4. Round and renormalize if necessary: 1.110₂ × 2^–3 (no change)
5. Determine sign: +ve × –ve gives –ve: –1.110₂ × 2^–3 = –0.21875
Chapter 3 — Arithmetic for Computers — 25
FP Arithmetic Hardware
The FP multiplier is of similar complexity to the FP adder, but uses a multiplier for significands instead of an adder
FP arithmetic hardware usually does addition, subtraction, multiplication, division, reciprocal, square root, and FP ↔ integer conversion
Operations usually take several cycles; can be pipelined
Chapter 3 — Arithmetic for Computers — 26
FP Instructions in MIPS
FP hardware is coprocessor 1: an adjunct processor that extends the ISA
Separate FP registers: 32 single-precision: $f0, $f1, …, $f31; paired for double precision: $f0/$f1, $f2/$f3, …
Release 2 of the MIPS ISA supports 32 × 64-bit FP registers
FP instructions operate only on FP registers: programs generally don't do integer ops on FP data, or vice versa; more registers with minimal code-size impact
FP load and store instructions: lwc1, ldc1, swc1, sdc1; e.g., ldc1 $f8, 32($sp)
Chapter 3 — Arithmetic for Computers — 27
FP Instructions in MIPS
Single-precision arithmetic: add.s, sub.s, mul.s, div.s; e.g., add.s $f0, $f1, $f6
Double-precision arithmetic: add.d, sub.d, mul.d, div.d; e.g., mul.d $f4, $f4, $f6
Single- and double-precision comparison: c.xx.s, c.xx.d (xx is eq, lt, le, …); sets or clears the FP condition-code bit; e.g., c.lt.s $f3, $f4
Branch on FP condition code true or false: bc1t, bc1f; e.g., bc1t TargetLabel
Chapter 3 — Arithmetic for Computers — 28
Accurate Arithmetic
IEEE Std 754 specifies additional rounding control: extra bits of precision (guard, round, sticky) and a choice of rounding modes
Allows the programmer to fine-tune the numerical behavior of a computation
Not all FP units implement all options; most programming languages and FP libraries just use the defaults
Trade-off between hardware complexity, performance, and market requirements
Chapter 3 — Arithmetic for Computers — 29
Associativity
Parallel programs may interleave operations in unexpected orders, so assumptions of associativity may fail
§3.6 Parallelism and Computer Arithmetic: Associativity
Example (single precision): x = –1.50E+38, y = 1.50E+38, z = 1.0
  (x+y)+z = 0.00E+00 + 1.0 = 1.00E+00
  x+(y+z) = –1.50E+38 + 1.50E+38 = 0.00E+00 (because y + z rounds to 1.50E+38)
Need to validate parallel programs under varying degrees of parallelism
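The non-associativity above is easy to reproduce in C: the huge value absorbs 1.0 in one grouping but not the other.

```c
/* FP addition is not associative: grouping changes the result. */
float assoc_left(float x, float y, float z)  { return (x + y) + z; }
float assoc_right(float x, float y, float z) { return x + (y + z); }
```

With x = –1.5e38f, y = 1.5e38f, z = 1.0f, the left grouping yields 1.0 while the right grouping yields 0.0, because y + z rounds back to y (1.0 is far below an ulp of 1.5e38).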
Chapter 3 — Arithmetic for Computers — 30
x86 FP Architecture
Originally based on the 8087 FP coprocessor: 8 × 80-bit extended-precision registers, used as a push-down stack; registers indexed from the top of stack: ST(0), ST(1), …
FP values are 32-bit or 64-bit in memory; converted on load/store of a memory operand; integer operands can also be converted on load/store
Very difficult to generate and optimize code; result: poor FP performance
§3.7 Real Stuff: Floating Point in the x86
Chapter 3 — Arithmetic for Computers — 31
Streaming SIMD Extension 2 (SSE2)
Adds 8 × 128-bit XMM registers, extended to 16 registers in AMD64/EM64T
Can be used for multiple FP operands: 2 × 64-bit double precision or 4 × 32-bit single precision
Instructions operate on them simultaneously: Single-Instruction Multiple-Data (SIMD)
Chapter 3 — Arithmetic for Computers — 32
Right Shift and Division
Left shift by i places multiplies an integer by 2^i
Right shift divides by 2^i? Only for unsigned integers
For signed integers, arithmetic right shift replicates the sign bit; e.g., –5 / 4:
  11111011₂ >> 2 = 11111110₂ = –2, which rounds toward –∞
  c.f. logical shift: 11111011₂ >>> 2 = 00111110₂ = +62
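In C the two shifts fall out of the operand's signedness. A sketch with 8-bit values as above (note: strictly, right shift of a negative signed value is implementation-defined in C, but common compilers implement it as an arithmetic shift):

```c
#include <stdint.h>

/* Arithmetic right shift: sign bit is replicated (common-compiler behavior). */
int8_t asr8(int8_t x, int n)
{
    return (int8_t)(x >> n);
}

/* Logical right shift: unsigned operand, zeros shifted in. */
uint8_t lsr8(uint8_t x, int n)
{
    return (uint8_t)(x >> n);
}
```

For the slide's example, asr8(-5, 2) gives -2 (rounding toward –∞), while lsr8(0xFB, 2) gives 0x3E = 62.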
§3.8 Fallacies and Pitfalls
Chapter 3 — Arithmetic for Computers — 33
Who Cares About FP Accuracy?
Important for scientific code; but for everyday consumer use? "My bank balance is out by 0.0002¢!"
The Intel Pentium FDIV bug: the market expects accuracy; see Colwell, The Pentium Chronicles
Chapter 3 — Arithmetic for Computers — 34
Concluding Remarks
ISAs support arithmetic: signed and unsigned integers, and floating-point approximations to reals
Bounded range and precision: operations can overflow and underflow
MIPS ISA: the 54 most frequently used core instructions cover 100% of SPECINT and 97% of SPECFP; other instructions are less frequent
§3.9 Concluding Remarks
Extra Slides
Chapter 1 — Computer Abstractions and Technology — 35
Chapter 3 — Arithmetic for Computers — 36
x86 FP Instructions
Optional variations: I = integer operand, P = pop operand from stack, R = reverse operand order (but not all combinations are allowed)
Data transfer: FILD mem/ST(i), FISTP mem/ST(i), FLDPI, FLD1, FLDZ
Arithmetic: FIADDP mem/ST(i), FISUBRP mem/ST(i), FIMULP mem/ST(i), FIDIVRP mem/ST(i), FSQRT, FABS, FRNDINT
Compare: FICOMP, FIUCOMP, FSTSW AX/mem
Transcendental: FPATAN, F2XM1, FCOS, FPTAN, FPREM, FPSIN, FYL2X
Exam 1 Review: Problem 1 (true/false)
___ Performance is defined as 1/(Execution Time).
___ CPU Time = Instruction Count × CPI × Clock Rate
___ Processor dynamic power consumption is proportional to the voltage (of power supply)
___ Improving the algorithm of a program may reduce the clock cycles of the execution
___ Compiler optimization may reduce the number of instructions being executed
Chapter 1 — Computer Abstractions and Technology — 37
Problem 2
For a given benchmark program, a processor spends 40% of time in integer instructions (including branches and jumps) and 60% of time in floating-point instructions. Design team A may improve the integer performance by 2.0 times, and design team B may improve the floating-point performance by 1.5 times, for the same cost. Which design team may improve the overall performance more than the other?
Chapter 1 — Computer Abstractions and Technology — 38
Problem 2
Use Amdahl’s Law
Overall speedup = 1/(1-f+f/s)
A: f = 40%, s = 2.0,
speedup = 1/(0.6+0.4/2) = 1.25 times.
B: f = 60%, s = 1.5,
speedup = 1/(0.6/1.5+0.4) = 1.25 times.
The speedups are the same, so neither team is better than the other.
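The two computations can be replayed with a one-line Amdahl's Law helper (the function name `amdahl` is ours):

```c
/* Overall speedup when fraction f of execution time is sped up s times:
 * speedup = 1 / ((1 - f) + f/s). */
double amdahl(double f, double s)
{
    return 1.0 / ((1.0 - f) + f / s);
}
```

Both amdahl(0.4, 2.0) and amdahl(0.6, 1.5) evaluate to 1.25, confirming the tie.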
Chapter 1 — Computer Abstractions and Technology — 39
Problem 3
f = f & 0x00FFFF00;
lw   $t0, 200($gp)     # load f
lui  $t1, 0x00FF       # $t1 = 0x00FF0000
ori  $t1, $t1, 0xFF00  # $t1 = 0x00FFFF00
and  $t0, $t0, $t1
sw   $t0, 200($gp)
Chapter 1 — Computer Abstractions and Technology — 40
Problem 3
g = (g >> 16) | (g << 16);
lw  $t0, 204($gp)
srl $t1, $t0, 16
sll $t2, $t0, 16
or  $t0, $t1, $t2
sw  $t0, 204($gp)
Chapter 1 — Computer Abstractions and Technology — 41
Problem 4
void swap(int X[], int i, int j)
{
  if (i != j) {
    int tmp1 = X[i];
    int tmp2 = X[j];
    X[i] = tmp2;
    X[j] = tmp1;
  }
}
Chapter 1 — Computer Abstractions and Technology — 42
Problem 4
This question tests MIPS call convention, leaf function, if statement, and array access. Call convention: X in $a0, i in $a1, j in $a2 Leaf function: Prefer t-regs for new values If statement: branch out if condition is false Array access: &X[i] = X + i*sizeof(X[i])
Three instructions: SLL, ADD, LW
Chapter 1 — Computer Abstractions and Technology — 43
Problem 4
swap:  beq  $a1, $a2, endif  # i == j, skip
       sll  $t0, $a1, 2      # $t0 = i*4
       add  $t0, $a0, $t0    # $t0 = &X[i]
       lw   $t1, 0($t0)      # $t1 = X[i]
       sll  $t2, $a2, 2      # $t2 = j*4
       add  $t2, $a0, $t2    # $t2 = &X[j]
       lw   $t3, 0($t2)      # $t3 = X[j]
       sw   $t3, 0($t0)      # X[i] = $t3
       sw   $t1, 0($t2)      # X[j] = $t1
endif: jr   $ra
Chapter 1 — Computer Abstractions and Technology — 44
Problem 4
What about this C code? How would a compiler generate the code?
void swap(int X[], int i, int j)
{
  if (i != j) {
    char tmp1 = X[i];
    char tmp2 = X[j];
    X[i] = tmp2;
    X[j] = tmp1;
  }
}
Chapter 1 — Computer Abstractions and Technology — 45
Problem 4
swap:  beq  $a1, $a2, endif  # i == j, skip
       sll  $t0, $a1, 2      # $t0 = i*4
       add  $t0, $a0, $t0    # $t0 = &X[i]
       lb   $t1, 0($t0)      # $t1 = (char)X[i]
       sll  $t2, $a2, 2      # $t2 = j*4
       add  $t2, $a0, $t2    # $t2 = &X[j]
       lb   $t3, 0($t2)      # $t3 = (char)X[j]
       sw   $t3, 0($t0)      # X[i] = $t3
       sw   $t1, 0($t2)      # X[j] = $t1
endif: jr   $ra
Chapter 1 — Computer Abstractions and Technology — 46
Problem 5
// Prototype of the floor function
extern float floor(float x);

// Floor all elements of array X. N is the size of the array.
void array_floor(float X[], int N)
{
  for (int i = 0; i < N; i++)
    X[i] = floor(X[i]);
}
Chapter 1 — Computer Abstractions and Technology — 47
Problem 5
This question tests the MIPS call convention, non-leaf functions, the FP part of the call convention, for loops, FP load/store, and function calls.
Call convention with FP:
  void array_floor(float X[], int N): X in $a0, N in $a1
  float floor(float x): x in $f12, return value in $f0
Non-leaf function: prefer s-regs for new values; must have a stack frame
For loop: follow the for-loop template
FP load/store: lwc1/swc1 for the float type
Function call for "X[i] = floor(X[i])": load X[i] into $f12, make the call, save $f0 back to X[i]
Chapter 1 — Computer Abstractions and Technology — 48
Problem 5
Function prologue: save $ra and all s-regs in use
  $s0 for X, $s1 for N, $s2 for i, $s3 for &X[i]
  You may write the function body first, before the prologue
array_floor:
      addi $sp, $sp, -20
      sw   $ra, 16($sp)
      sw   $s3, 12($sp)
      sw   $s2, 8($sp)
      sw   $s1, 4($sp)
      sw   $s0, 0($sp)
Chapter 1 — Computer Abstractions and Technology — 49
Problem 5
      # Move X, N to $s0, $s1; clear i
      add  $s0, $a0, $zero   # $s0 = X
      add  $s1, $a1, $zero   # $s1 = N
      add  $s2, $zero, $zero # i = 0
      j    for_cond
Chapter 1 — Computer Abstractions and Technology — 50
Problem 5
for_loop:
      sll  $s3, $s2, 2       # $s3 = 4*i
      add  $s3, $s0, $s3     # $s3 = &X[i]
      lwc1 $f12, 0($s3)      # $f12 = X[i]
      jal  floor             # floor(X[i])
      swc1 $f0, 0($s3)       # save X[i]
      addi $s2, $s2, 1       # i++
for_cond:
      slt  $t0, $s2, $s1     # i < N
      bne  $t0, $zero, for_loop
Chapter 1 — Computer Abstractions and Technology — 51
Problem 5
      # Function epilogue: restore $s0-$s3,
      # $ra and $sp, then return
ret:  lw   $s0, 0($sp)
      lw   $s1, 4($sp)
      lw   $s2, 8($sp)
      lw   $s3, 12($sp)
      lw   $ra, 16($sp)
      addi $sp, $sp, 20
      jr   $ra
Chapter 1 — Computer Abstractions and Technology — 52
In-Class Exercise
What single-precision FP number does 0xC0F00000 represent?
Procedure:
1. Convert it to 32-bit binary
2. Split into 1-bit sign, 8-bit exponent, 23-bit fraction
3. Get the actual exponent
4. Get the significand
5. Put everything together
Chapter 1 — Computer Abstractions and Technology — 53
Convert Hex to FP
Binary: 1100 0000 1111 0000 0000 0000 0000 0000
Split the bit fields:
  Sign bit: 1
  Exponent: 10000001₂ = 129₁₀
  Fraction: 11100…0
Actual exponent = 129 – 127 = 2
Significand = 1.11100…0₂ = 1.111₂
Putting it together: –1.111₂ × 2^2 = –111.1₂ = –7.5₁₀
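The hand calculation can be confirmed by reinterpreting the bit pattern in C (assumes the platform's float is IEEE 754 single precision, true essentially everywhere; the helper name is ours):

```c
#include <stdint.h>
#include <string.h>

/* Reinterpret a 32-bit IEEE pattern as a float. */
float bits_to_float(uint32_t bits)
{
    float f;
    memcpy(&f, &bits, sizeof f);  /* well-defined, unlike pointer casting */
    return f;
}
```

bits_to_float(0xC0F00000u) returns -7.5f, matching the result above.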
Chapter 1 — Computer Abstractions and Technology — 54
Convert FP to Hex
What is the binary representation of the single-precision FP number +4.2?
Procedure:
1. Convert it to a binary floating-point number: sign, significand, exponent
2. Get the biased exponent, in binary form
3. Get the fraction, in binary form
4. Put everything together
Chapter 1 — Computer Abstractions and Technology — 55
Convert FP to Hex
Binary floating point: +4.2₁₀
4.0 = 100₂
0.2 = ???
Chapter 1 — Computer Abstractions and Technology — 56
Convert FP to Hex
Use a multiplication table:
  0.2 × 2 = 0.4 → 0
  0.4 × 2 = 0.8 → 0
  0.8 × 2 = 1.6 → 1
  0.6 × 2 = 1.2 → 1
  0.2 × 2 = 0.4 → 0
  0.4 × 2 = 0.8 → 0
  0.8 × 2 = 1.6 → 1
  0.6 × 2 = 1.2 → 1
  Repeat …
0.2₁₀ = 0.0011001100110011…₂
Note 0.2 = 1/8 + 1/16 + 1/128 + 1/256 + …
Chapter 1 — Computer Abstractions and Technology — 57
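The multiplication table is mechanical enough to automate. A small C helper (name is ours) that generates fraction bits by the same repeated-doubling method; note 0.2 is itself stored rounded in a double, so only the leading bits are trustworthy:

```c
/* Write the first n binary fraction bits of x (0 <= x < 1) into out,
 * using the multiply-by-2 method from the table above. */
void frac_bits(double x, int n, char *out)
{
    for (int i = 0; i < n; i++) {
        x *= 2.0;                 /* the integer part is the next bit */
        if (x >= 1.0) {
            out[i] = '1';
            x -= 1.0;             /* keep only the fraction */
        } else {
            out[i] = '0';
        }
    }
    out[n] = '\0';
}
```

For x = 0.2 the first bits come out 00110011…, matching the table.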
Convert FP to Hex
Binary floating point: +4.2₁₀ = +100.00110011…₂
Normalize: +4.2₁₀ = +1.0000110011…₂ × 2^2
Get the components:
  Sign bit = 0
  Actual exponent = 2₁₀
  Significand = 1.0000110011…₂
  Biased exponent = 2 + 127 = 129 = 1000_0001₂
  Fraction = 0000110011…₂ = 000_0110_0110_0110_0110_0110₂ (23 bits)
Putting it together: 0100_0000_1000_0110_0110_0110_0110_0110 = 0x40866666
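Going the other way confirms the hand encoding, under the same IEEE assumption as before (helper name is ours):

```c
#include <stdint.h>
#include <string.h>

/* Recover the bit pattern of a float. */
uint32_t float_to_bits(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return bits;
}
```

float_to_bits(4.2f) returns 0x40866666. (The compiler rounds the infinite fraction when it creates 4.2f; here the next bit beyond 23 is 0, so truncation and rounding agree.)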
Chapter 1 — Computer Abstractions and Technology — 58
Why Study It, If It Sounds Tedious…
The Pentium FDIV bug (1994):
  Intel used a look-up table to speed up FP division; a few entries in the table were missing
  A math professor noticed the bug and proved it
  Intel initially said it was not serious (the probability of error is one in every 9 billion FDIVs), but scientific programs execute billions and trillions of FP ops!
  It cost Intel $475 million to replace those processors
http://en.wikipedia.org/wiki/Pentium_FDIV_bug
Chapter 1 — Computer Abstractions and Technology — 59