Precision: Going back to constant prop, in what cases would we lose precision?


  • Slide 1
  • Precision Going back to constant prop, in what cases would we lose precision?
  • Slide 2
  • Precision: Going back to constant prop, in what cases would we lose precision? Example 1: if (p) { x := 5; } else { x := 4; } ... if (p) { y := x + 1; } else { y := x + 2; } ... y ... Example 2: if (...) { x := -1; } else { x := 1; } y := x * x; ... y ... Example 3: x := 5; if (B) { x := 6; } ... x ..., where the branch condition B is equivalent to false
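The first example is a branch-correlation case: both conditionals test the same predicate p, so every feasible path sets y to 6, yet neither x nor y is a single constant after the merges. A small Python sketch (an editorial illustration, not from the slides) of what the path-insensitive merge gives up:

```python
# Feasible executions of the correlated-branch example: both ifs test the
# same predicate p, so only two of the four branch combinations can happen.
feasible_y = set()
for p in (True, False):
    x = 5 if p else 4
    y = x + 1 if p else x + 2
    feasible_y.add(y)

# A path-insensitive analysis merges x to {4, 5} before the second if, so it
# effectively pairs every merged x with every branch of the second if.
merged_y = {x + d for x in (4, 5) for d in (1, 2)}

print(feasible_y)  # {6}        -> y really is a constant
print(merged_y)    # {5, 6, 7}  -> the merge loses the correlation
```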
  • Slide 3
  • Precision. The first problem is unreachable code; the solution is to run unreachable code removal before the analysis. The unreachable code removal will do its best, but may not remove all unreachable code. The other two problems are path-sensitivity issues: branch correlations (some paths are infeasible) and path merging (which can lead to loss of precision).
  • Slide 4
  • MOP: meet over all paths. The information computed at a given point is the meet of the information computed along each path to that program point. Example: if (...) { x := -1; } else { x := 1; } y := x * x; ... y ...
  • Slide 5
  • MOP. For a path p, which is a sequence of statements [s1, ..., sn], define: F_p(in) = F_sn(... F_s1(in) ...). In other words, F_p = F_sn ∘ ... ∘ F_s1. Given an edge e, let paths-to(e) be the (possibly infinite) set of paths that lead to e. Then MOP(e) = ⊓ { F_p(init) | p ∈ paths-to(e) }. For us, it should be called JOP (i.e. join, not meet), since we combine information with ⊔ at merges.
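To make the definitions concrete, here is a minimal Python sketch of F_p and MOP for a toy one-operator constant-propagation lattice; the statement encoding, the join, and the flow function are illustrative assumptions, not the slides' formulation:

```python
from functools import reduce

TOP = "Top"   # "not a constant" / unknown

def join(a, b):
    """Pointwise join of two maps from variable to constant-or-TOP."""
    out = {}
    for v in set(a) | set(b):
        same = v in a and v in b and a[v] == b[v]
        out[v] = a[v] if same else TOP
    return out

def flow(stmt):
    """Flow function of one statement.  ('x', 5) means x := 5, and
    ('y', ('*', 'x', 'x')) means y := x * x (only '*' is handled here)."""
    var, rhs = stmt
    def f(facts):
        facts = dict(facts)
        if isinstance(rhs, int):
            facts[var] = rhs
        else:
            _, a, b = rhs
            va, vb = facts.get(a, TOP), facts.get(b, TOP)
            facts[var] = va * vb if TOP not in (va, vb) else TOP
        return facts
    return f

def F_path(path):
    """F_p = F_sn o ... o F_s1: push the input through the path's statements."""
    return lambda facts: reduce(lambda acc, s: flow(s)(acc), path, facts)

def MOP(paths, init):
    """Join of F_p(init) over every path reaching the edge (JOP, in our terms)."""
    return reduce(join, (F_path(p)(init) for p in paths))

# The two paths reaching the edge just after "y := x * x":
paths = [
    [("x", -1), ("y", ("*", "x", "x"))],
    [("x",  1), ("y", ("*", "x", "x"))],
]
print(MOP(paths, {}))   # {'x': 'Top', 'y': 1} -- MOP still knows y = 1
```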
  • Slide 6
  • MOP vs. dataflow. MOP is the best possible answer, given a fixed set of flow functions. This means that MOP ⊑ dataflow at each edge in the CFG. In general, MOP is not computable (because there can be infinitely many paths), whereas dataflow is generally computable (if the flow functions are monotonic and the height of the lattice is finite). And as we saw in our example, in general MOP ≠ dataflow.
  • Slide 7
  • MOP vs. dataflow. However, it would be great if, by imposing some restrictions on the flow functions, we could guarantee that dataflow is the same as MOP. What would this restriction be? The picture on the slide contrasts the two on the example: MOP pushes x := -1 and x := 1 separately through y := x * x and merges afterwards, while dataflow merges the two assignments to x first and then applies y := x * x once.
  • Slide 8
  • MOP vs. dataflow. However, it would be great if, by imposing some restrictions on the flow functions, we could guarantee that dataflow is the same as MOP. What would this restriction be? Distributive problems: a problem is distributive if ∀ a, b. F(a ⊔ b) = F(a) ⊔ F(b). If the flow functions are distributive, then MOP = dataflow.
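Constant propagation is the classic non-distributive problem. Reusing join() and flow() from the sketch above, F(a ⊔ b) for y := x * x is strictly less precise than F(a) ⊔ F(b):

```python
# Reusing join() and flow() from the MOP sketch above.
f = flow(("y", ("*", "x", "x")))       # flow function for y := x * x
a, b = {"x": -1}, {"x": 1}

print(f(join(a, b)))       # {'x': 'Top', 'y': 'Top'}  -- merge first (dataflow)
print(join(f(a), f(b)))    # {'x': 'Top', 'y': 1}      -- merge last (MOP)
# F(a ⊔ b) != F(a) ⊔ F(b): constant propagation is not distributive.
```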
  • Slide 9
  • Summary of precision. Dataflow is the basic algorithm. To basic dataflow, we can add path separation: we get MOP, which is the same as dataflow for distributive problems; there has been a variety of research efforts to get closer to MOP for non-distributive problems. To basic dataflow, we can add path pruning: we get branch correlation. To basic dataflow, we can add both: meet over all feasible paths.
  • Slide 10
  • Program Representations
  • Slide 11
  • Representing programs Goals
  • Slide 12
  • Representing programs. Primary goals: analysis is easy and effective (just a few cases to handle, directly link related things); transformations are easy to perform; general across input languages and target machines. Additional goals: compact in memory; easy to translate to and from; tracks info from source through to binary, for source-level debugging, profiling, typed binaries; extensible (new opts, targets, language features); displayable.
  • Slide 13
  • Option 1: high-level, syntax-based IR. Represent source-level structures and expressions directly. Example: Abstract Syntax Tree.
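As a rough illustration of this option, a few hypothetical Python node classes for the earlier x := 5; if (...) { x := 6 } snippet (the class and field names are made up for the sketch, and p stands in for the unnamed condition):

```python
from dataclasses import dataclass, field

# Hypothetical node classes for a high-level, syntax-based IR.
@dataclass
class Num:
    value: int

@dataclass
class Var:
    name: str

@dataclass
class Assign:
    target: str
    expr: object

@dataclass
class If:
    cond: object
    then: list
    els: list = field(default_factory=list)

# x := 5; if (p) { x := 6 }
prog = [
    Assign("x", Num(5)),
    If(Var("p"), [Assign("x", Num(6))]),
]
```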
  • Slide 14
  • Option 2: low-level IR. Translate input programs into low-level primitive chunks, often close to the target machine. Examples: assembly code, virtual machine code (e.g. stack machines), three-address code, register-transfer language (RTL). Standard RTL instrs: ...
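For contrast with the AST above, a sketch of a low-level, three-address-style encoding of y := (a + b) * c; the flat tuple format is an assumption chosen so that defs and uses fall straight out of the instruction shape:

```python
# Hypothetical three-address / RTL-style encoding: each instruction is a flat
# tuple (opcode, destination, operands...), so an analysis has few cases to handle.
# Source: y := (a + b) * c
lowered = [
    ("add", "t1", "a", "b"),    # t1 := a + b
    ("mul", "y",  "t1", "c"),   # y  := t1 * c
]

def defs_and_uses(instr):
    """In this encoding, every instruction defines its destination and uses its operands."""
    op, dest, *srcs = instr
    return dest, srcs

for instr in lowered:
    print(instr, "->", defs_and_uses(instr))
```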
  • Slide 15
  • Option 2: low-level IR
  • Slide 16
  • Comparison
  • Slide 17
  • Advantages of high-level rep: analysis can exploit high-level knowledge of constructs; easy to map to source code (debugging, profiling). Advantages of low-level rep: can do low-level, machine-specific reasoning; can be language-independent. Can mix multiple reps in the same compiler.
  • Slide 18
  • Components of representation. Control dependencies: sequencing of operations; evaluation of if & then; side-effects of statements occur in the right order. Data dependencies: flow of definitions from defs to uses; operands computed before operations. Ideal: represent just those dependencies that matter; dependencies constrain transformations; fewest dependences ⇒ flexibility in implementation.
  • Slide 19
  • Control dependencies. Option 1: high-level representation; control is implicit in the semantics of AST nodes. Option 2: control flow graph (CFG); nodes are individual instructions, edges represent control flow between instructions. Option 2b: CFG with basic blocks. A basic block is a sequence of instructions that doesn't have any branches and that has a single entry point. BBs can make analysis more efficient: compute the flow function for an entire BB before the start of the analysis.
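A minimal sketch of a CFG-with-basic-blocks representation (names and fields are hypothetical); block_flow shows the efficiency idea of composing a block's flow functions once, and the instruction encoding is the one from the earlier constant-propagation sketch:

```python
from dataclasses import dataclass, field

@dataclass
class BasicBlock:
    name: str
    instrs: list                                  # straight-line, no internal branches
    succs: list = field(default_factory=list)     # names of successor blocks

    def block_flow(self, flow_of_instr):
        """Compose the per-instruction flow functions once, ahead of the
        fixpoint iteration, so the analysis can work block-at-a-time."""
        def f(facts):
            for instr in self.instrs:
                facts = flow_of_instr(instr)(facts)
            return facts
        return f

# CFG for: if (...) { x := -1 } else { x := 1 }; y := x * x
cfg = {
    "entry": BasicBlock("entry", [], ["then", "else"]),
    "then":  BasicBlock("then",  [("x", -1)], ["join"]),
    "else":  BasicBlock("else",  [("x", 1)], ["join"]),
    "join":  BasicBlock("join",  [("y", ("*", "x", "x"))], []),
}
```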
  • Slide 20
  • Control dependencies. The CFG does not capture loops very well. Some fancier options include the Control Dependence Graph and the Program Dependence Graph. More on this later; let's first look at data dependencies.
  • Slide 21
  • Data dependencies Simplest way to represent data dependencies: def/use chains
  • Slide 22
  • Def/use chains. Directly capture dataflow; work well for things like constant prop. But... they ignore control flow: they miss some optimization opportunities, since they conservatively consider all paths; they are not executable by themselves (for example, you need to keep the CFG around); they are not appropriate for code motion transformations. They must be updated after each transformation. They are space consuming.
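A small sketch of computing def/use chains, restricted to straight-line code as an assumption (real chains come from a reaching-definitions analysis over the CFG); the instruction encoding is the tuple form used above:

```python
def def_use_chains(instrs):
    """Map each definition (index of defining instruction, variable name)
    to the indices of the instructions that use that definition.
    Straight-line code only: the most recent def of a variable reaches a use."""
    last_def = {}   # variable -> index of its most recent definition
    chains = {}     # (def index, variable) -> list of use indices
    for i, (op, dest, *srcs) in enumerate(instrs):
        for v in srcs:
            if v in last_def:                       # v is a variable with a def
                chains.setdefault((last_def[v], v), []).append(i)
        last_def[dest] = i
        chains.setdefault((i, dest), [])
    return chains

# x := 5; y := x + 1; z := x * y
code = [("li", "x", 5), ("add", "y", "x", 1), ("mul", "z", "x", "y")]
print(def_use_chains(code))
# {(0, 'x'): [1, 2], (1, 'y'): [2], (2, 'z'): []}
```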
  • Slide 23
  • SSA: Static Single Assignment. Invariant: each use of a variable has only one def.
  • Slide 24
  • Slide 25
  • SSA. Create a new variable for each def. Insert φ pseudo-assignments at merge points. Adjust uses to refer to the appropriate new names. Question: how can one figure out where to insert φ nodes? Using a liveness analysis and a reaching defns analysis.
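A before/after sketch of the conversion on the running x := -1 / x := 1 example; the x1, x2, x3, y1 names and the φ placement follow the usual convention rather than anything taken verbatim from the slides:

```python
# Before SSA:                       After SSA (hypothetical x1/x2/x3 names):
#   if (p) { x := -1 }                if (p) { x1 := -1 }
#   else   { x :=  1 }                else   { x2 :=  1 }
#   y := x * x                        x3 := phi(x1, x2)    # inserted at the merge
#                                     y1 := x3 * x3
join_block = [
    ("phi", "x3", "x1", "x2"),    # x3 := phi(x1, x2)
    ("mul", "y1", "x3", "x3"),    # y1 := x3 * x3
]
```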
  • Slide 26
  • Converting back from SSA. Semantics of x3 := φ(x1, x2): set x3 to xi if execution came from the ith predecessor. How to implement φ nodes?
  • Slide 27
  • Converting back from SSA. Semantics of x3 := φ(x1, x2): set x3 to xi if execution came from the ith predecessor. How to implement φ nodes? Insert the assignment x3 := x1 along the 1st predecessor and the assignment x3 := x2 along the 2nd predecessor. If the register allocator assigns x1, x2 and x3 to the same register, these moves can be removed. x1 .. xn usually have non-overlapping lifetimes, so this kind of register assignment is legal.
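Continuing the sketch above, going out of SSA turns the φ into a copy on each incoming edge (again with hypothetical names and encodings):

```python
# x3 := phi(x1, x2) at the join becomes one copy per incoming edge:
then_block = [("li", "x1", -1), ("mov", "x3", "x1")]   # x1 := -1 ; x3 := x1
else_block = [("li", "x2",  1), ("mov", "x3", "x2")]   # x2 :=  1 ; x3 := x2
# If the register allocator puts x1, x2 and x3 in the same register,
# both mov instructions disappear.
```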
  • Slide 28
  • Flow functions. For a statement X := Y op Z (with input fact in and output fact out): F_{X := Y op Z}(in) = in − { X → * } − { * → ... X ... } ∪ { X → Y op Z | X ≠ Y ∧ X ≠ Z }. For a statement X := Y: F_{X := Y}(in) = in − { X → * } − { * → ... X ... } ∪ { X → E | (Y → E) ∈ in }.
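A minimal sketch of these two flow functions, treating the dataflow facts as a set of (variable, expression) pairs; the tuple encoding, and the extra guard against E mentioning X in the copy case, are assumptions added here rather than something visible in the slide text:

```python
def mentions(expr, var):
    """Does the expression mention the variable?  Expressions are either a
    variable name, a constant, or a tuple like ('op', a, b)."""
    if isinstance(expr, tuple):
        return any(mentions(e, var) for e in expr[1:])
    return expr == var

def kill(facts, x):
    """Remove { X -> * } and { * -> ...X... }: every fact that defines X
    or whose right-hand side mentions X."""
    return {(v, e) for (v, e) in facts if v != x and not mentions(e, x)}

def flow_assign_op(facts, x, y, op, z):
    """F_{X := Y op Z}(in) = in - {X->*} - {*->...X...} u {X -> Y op Z | X != Y, X != Z}"""
    out = kill(facts, x)
    if x != y and x != z:
        out.add((x, (op, y, z)))
    return out

def flow_copy(facts, x, y):
    """F_{X := Y}(in) = in - {X->*} - {*->...X...} u {X -> E | (Y -> E) in in},
    with a defensive extra check that E does not mention X."""
    out = kill(facts, x)
    out |= {(x, e) for (v, e) in facts if v == y and not mentions(e, x)}
    return out

facts = {("y", ("+", "a", "b"))}
print(flow_assign_op(facts, "x", "a", "+", "b"))   # adds ('x', ('+', 'a', 'b'))
print(flow_copy(facts, "z", "y"))                  # z now maps to y's expression too
```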