Program Verification and Synthesis using Templates
over Predicate Abstraction
Saurabh Srivastava #
Sumit Gulwani *
Jeffrey S. Foster #
# University of Maryland, College Park
* Microsoft Research, Redmond
On-leave advisor: Mike Hicks
Verification: Proving Programs Correct
Selection Sort:
[Animation: index j scans A[i..n-1] to find the minimum, which is swapped into position i; the sorted prefix grows step by step.]
Verification: Proving Programs Correct
Selection Sort:
for i = 0..n {
  for j = i..n {
    find min index
  }
  if (min != i) swap A[i] with A[min]
}
Prove the following:
• Sortedness
  – Output array is non-decreasing
    ∀k1,k2 : 0 ≤ k1 < k2 < n ⇒ A[k1] ≤ A[k2]
• Permutation
  – Output array contains all elements of the input
    ∀k∃j : 0 ≤ k < n ⇒ Aold[k] = A[j] ∧ 0 ≤ j < n
  – Output array contains only elements from the input
    ∀k∃j : 0 ≤ k < n ⇒ A[k] = Aold[j] ∧ 0 ≤ j < n
  – Number of copies of each element is the same
    ∀v : v ∈ A ⇒ count(A, v) = count(Aold, v)
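As a concrete reference, here is a minimal Python sketch of the selection sort pseudocode together with runtime checks of the two properties. This is an illustration only, not the talk's tool; the helper names `selection_sort`, `is_sorted`, and `is_permutation` are mine.

```python
from collections import Counter

def selection_sort(A):
    # for i = 0..n: find the index of the minimum of A[i..n-1], swap it in
    n = len(A)
    for i in range(n):
        mn = i
        for j in range(i, n):
            if A[j] < A[mn]:
                mn = j
        if mn != i:
            A[i], A[mn] = A[mn], A[i]
    return A

def is_sorted(A):
    # Sortedness: forall k1,k2 : 0 <= k1 < k2 < n  =>  A[k1] <= A[k2]
    n = len(A)
    return all(A[k1] <= A[k2] for k1 in range(n) for k2 in range(k1 + 1, n))

def is_permutation(A_old, A):
    # Permutation: no elements gained or lost, multiplicities preserved;
    # equivalently, the two arrays are equal as multisets.
    return Counter(A_old) == Counter(A)
```

On `[3, 1, 2, 2]` the sort yields `[1, 2, 2, 3]`, which passes both checks against the original input.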
Verification: Proving Programs Correct
Selection Sort:
[Control-flow graph: i := 0; outer loop test i < n (invariant Iouter); j := i; inner loop test j < n (invariant Iinner); find min index; if (min != i) swap A[i], A[min]; on exit, output array. Annotations: Input state (true) at entry; Output state (Sortedness, Permutation) at exit.]
We know from work in the ’70s (Hoare, Floyd, Dijkstra) that it is easy to reason logically about simple paths.
The difficult task is program state (invariant) inference: Iinner, Iouter, Input state, Output state.
Iouter: ∀k1,k2 : 0 ≤ k1 < k2 < n ∧ k1 < i ⇒ A[k1] ≤ A[k2]
Iinner: i < j ∧ i ≤ min < n
        ∧ ∀k : 0 ≤ k ∧ i ≤ k < j ⇒ A[min] ≤ A[k]
        ∧ ∀k1,k2 : 0 ≤ k1 < k2 < n ∧ k1 < i ⇒ A[k1] ≤ A[k2]
Output state: ∀k1,k2 : 0 ≤ k1 < k2 < n ⇒ A[k1] ≤ A[k2]
Inferring program state (invariants)
[Tradeoff plot: x-axis, user input, ranging from fully specified invariants to no input; y-axis, program properties that we can verify/prove, up to full functional correctness.]
No input with full functional correctness would be mind-blowing; fully specified invariants are undesirable; in between, we live with current technology.
Program properties, in increasing expressiveness: quantifier-free facts; quantified facts (∀); alternating quantification (∀∃); arbitrary quantification and arbitrary abstraction (undecidable in general).
Existing inference techniques:
• Abstract interpretation over quantifier-free domains (Cousot etc.)
• Constraint-based inference over quantifier-free domains (Gulwani, Beyer etc.)
• Quantifier-free domains lifted to ∀ (Gulwani etc.)
• Shape analysis (SLAyer etc.)
• Best-effort inference for alternating quantification (∀∃) (Kovacs etc.)
Existing validation techniques:
• Validating program paths using non-interactive theorem provers/SMT solvers (HAVOC, Spec#, etc.)
• Validation using interactive theorem proving (Jahob)
No single magical technique covers arbitrary properties with no user input. ✗
Instead: template-based invariant inference, which needs only small, intuitive input yet targets full functional correctness.
User input: invariant templates and predicates
My task in this talk: convince you that these small hints go a long way in what we can achieve with verification.
Example templates:
• To prove sortedness (each element is at most the later ones): ∀k1,k2 : (−) ⇒ (−)
• To prove permutation (for each element there exists a matching one): ∀k1∃k2 : (−) ⇒ (−)
Example predicates: take all variables and array elements in the program; enumerate all pairs related by less than, greater than, equality, etc.
Key facilitators: (1) Templates
• Intuitive
  – The form follows from the property being proved; straightforward to write
• Simple
  – Limited to the quantification and disjunctive structure
  – No complicated error reporting or iterative refinement required
  – Very different from the full invariants some techniques require as input
• Helps: we don't have to define intricate joins, etc.
  – Previous approaches: given a domain D for the holes, construct a lifted domain D∀
  – This approach: work with only D
  – Catch: holes have different status, e.g. in ∀k1,k2 : (−) ⇒ (−) and ∀k1∃k2 : (−) ⇒ (−) the (−) are unknown holes with differing polarities under the quantifiers
Key facilitators: (2) Predicates
• Predicate abstraction
  – Simple: the small-fact hypothesis says that, nested deep under the quantifiers, the facts filling the holes are simple
    • E.g., in ∀k1,k2 : 0 ≤ k1 < k2 < n ⇒ A[k1] ≤ A[k2], the predicates 0≤k1, k1<k2, k2<n, and A[k1]≤A[k2] are individually very simple
  – Intuitive: only program variables, array elements, and relational operators
  – Example: enumerate predicates of the form x opr (y opa c), where opr is a relational operator (<, ≤, >, ≥, …), opa an arithmetic operator (+, −, …), x and y program variables or array elements, and c drawn from limited constants
    • E.g., {i<j, i>j, i≤j, i≥j, i<j-1, i>j+1, …}
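A sketch of this enumeration in Python; `enumerate_predicates` is my own illustrative helper, not the tool's actual predicate generator, and it emits predicates as strings only.

```python
from itertools import product

# Enumerate candidate predicates of the form  x op (y + c)  over the
# program variables, using relational operators and limited constants.
def enumerate_predicates(variables, consts=(0, 1)):
    rel_ops = ["<", "<=", ">", ">=", "=="]
    # limited constants, closed under negation: e.g. {-1, 0, 1}
    offsets = sorted({c for c in consts} | {-c for c in consts})
    preds = []
    for x, y in product(variables, repeat=2):
        if x == y:
            continue  # skip trivial self-comparisons
        for op, c in product(rel_ops, offsets):
            suffix = "" if c == 0 else (f"+{c}" if c > 0 else str(c))
            preds.append(f"{x} {op} {y}{suffix}")
    return preds
```

For `variables = ["i", "j"]` this yields exactly the kind of set the slide shows: `i < j`, `i > j+1`, `i <= j-1`, and so on.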
VS3
• VS3: Verification and Synthesis using SMT Solvers
• User input: templates and predicates
• A set of techniques, and a tool implementing them
• Infers loop invariants, weakest preconditions, and strongest postconditions
What VS3 will let you do!
A. Infer invariants with arbitrary quantification and boolean structure
  – ∀: e.g. sortedness (∀k1,k2 : the element at k1 is ≤ the element at k2)
  – ∀∃: e.g. permutation (∀k∃j : Anew[k] matches Aold[j])
[Diagram: the selection sort control-flow graph annotated with these invariants.]
What VS3 will let you do!
B. Infer weakest preconditions
  – What are the weakest conditions on the input that force a given behavior of selection sort, e.g. its worst case behavior: swap every time it can?
Improves the state of the art
• Can infer very expressive invariants
  – Quantification, e.g. ∀k∃j : (k<n) ⇒ (A[k]=B[j] ∧ j<n)
  – Disjunction, e.g. ∀k : (k<0 ∨ k>n ∨ A[k]=0)
• Can infer weakest preconditions
  – Good for debugging: can discover worst case inputs
  – Good for analysis: can infer missing precondition facts
Verification part of the talk
• Three fixed-point inference algorithms
  – Iterative fixed-point: Least Fixed-Point (LFP) and Greatest Fixed-Point (GFP)
  – Constraint-based (CFP)
• Weakest precondition generation
• Experimental evaluation
Analysis Setup
[Diagram: the selection sort CFG with precondition pre, outer-loop invariant I1, inner-loop invariant I2, and postcondition post.]
Loop headers (with invariants) split the program into simple paths. Simple paths induce program constraints (verification conditions): vc(pre,I1), vc(I1,I2), vc(I2,I2), vc(I2,I1), vc(I1,post).
E.g. for selection sort:
  true ∧ i=0 ⇒ I1
  I1 ∧ i<n ∧ j=i ⇒ I2
  I1 ∧ i≥n ⇒ ∀k1,k2 : 0 ≤ k1 < k2 < n ⇒ A[k1] ≤ A[k2]   (sorted array)
These are simple FOL formulae over I1, I2! We will exploit this.
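A candidate invariant can be sanity-checked against such a VC by brute-force enumeration over small models. This is a testing sketch of my own, not the SMT-based validation the talk uses, with I1 fixed to the sortedness-prefix invariant from the earlier slide.

```python
from itertools import product

# Sanity-check the exit VC  "I1 ∧ i ≥ n ⇒ sorted"  for selection sort
# by searching all small models for a counterexample.

def I1(A, i, n):
    # Outer invariant: forall k1,k2 : 0<=k1<k2<n ∧ k1<i => A[k1] <= A[k2]
    return all(A[k1] <= A[k2]
               for k1 in range(n) for k2 in range(k1 + 1, n)
               if k1 < i)

def post(A, n):
    # Sortedness: forall k1,k2 : 0<=k1<k2<n => A[k1] <= A[k2]
    return all(A[k1] <= A[k2]
               for k1 in range(n) for k2 in range(k1 + 1, n))

def exit_vc_holds(max_n=3, vals=range(3)):
    for n in range(max_n + 1):
        for A in product(vals, repeat=n):
            for i in range(n + 2):  # include the i >= n exit cases
                if I1(A, i, n) and i >= n and not post(A, n):
                    return False    # countermodel found: VC invalid
    return True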
Iterative Fixed-point: Overview
Maintain a set of candidate solutions: each candidate assigns values <I1,I2> to the invariants, paired with the VCs it does not satisfy, e.g. <x,y> with unsatisfied { vc(pre,I1), vc(I1,I2) }.
Improve a candidate:
• Pick an unsatisfied VC
• Use the theorem prover to compute an optimal change
• This results in new candidates
Repeat until some candidate satisfies all VCs.
Forward Iterative (LFP)
Forward iteration:
• Pick the destination invariant of an unsatisfied constraint to improve
• Start with <⊥, …, ⊥>
[Animation: starting from <⊥,⊥>, successively weaken the destination invariant of each unsatisfied VC, e.g. ⊥ → a → b.]
How do we compute these improvements? An `optimal change' procedure.
Positive and Negative unknowns
• Given a formula Φ, its unknowns have polarities: positive or negative
  – Making a positive unknown stronger makes Φ stronger
  – Making a negative unknown stronger makes Φ weaker
• Optimal soln to Φ: maximally strong positives, maximally weak negatives
E.g. ∀x : ∃y : Φ = (¬u1 ∨ u2) ∧ u3, where u1 is negative and u2, u3 are positive.
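Polarity can be read off the formula syntactically: negation flips it, and so does the antecedent of an implication. A small sketch over tuple-encoded formulas; the encoding and the helper `polarities` are my own, not the tool's representation.

```python
# Polarity of each unknown in a formula: +1 (positive) or -1 (negative).
# Formulas are nested tuples: ("and", f, g), ("or", f, g), ("not", f),
# ("implies", f, g), ("unknown", name).
def polarities(f, pol=+1, out=None):
    out = {} if out is None else out
    op, *args = f
    if op == "unknown":
        out[args[0]] = pol                  # record this unknown's polarity
    elif op == "not":
        polarities(args[0], -pol, out)      # negation flips polarity
    elif op == "implies":
        polarities(args[0], -pol, out)      # antecedent occurs negatively
        polarities(args[1], pol, out)
    else:                                   # "and" / "or": polarity preserved
        for a in args:
            polarities(a, pol, out)
    return out
```

For Φ = (¬u1 ∨ u2) ∧ u3 this yields u1 negative and u2, u3 positive, matching the slide.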
Optimal Change
• One approach (old way): work in a lifted domain D∀ over the quantifier-free domain D. Across a statement x1:=e1, transform ∀k1,k2 : 0 ≤ k1 < k2 < n ∧ k1 < i ⇒ A[k1] ≤ A[k2] into ∀k1,k2 : ?? ⇒ ?? by under-approximating the antecedent and over-approximating the consequent. Very difficult transfer, join, and widen functions are required to define the domain D∀.
Optimal Change
• Our approach (new way): suppose each ei does not refer to xi. Then, instead of transforming invariants across the assignments x1:=e1, x2:=e2, x3:=e3, conjoin the corresponding equalities and solve the implication directly:

  ∀k1,k2 : 0 ≤ k1 < k2 < n ∧ k1 < i ⇒ A[k1] ≤ A[k2]
  ∧ x1=e1 ∧ x2=e2 ∧ x3=e3
    ⇒ ∀k1,k2 : ??1 ⇒ ??2 and ∀k1,k2 : ??3 ⇒ ??4

Find optimal solutions to the unknowns ??1, ??2, ??3, ??4.
• Unknowns have polarities, and therefore obey a monotonicity property.
• A trivial search involves k × 2^|Predicates| queries to the SMT solver.
• Key idea: instantiate some unknowns and query the SMT solver a polynomial number of times.
• Algorithmic novelty: this polynomial-sized information can be aggregated efficiently to compute the optimal soln.
[Histogram: number of SMT solver queries vs. query time in milliseconds (log scale, 1 to 1000+ ms); bars labeled 413 queries and 30 queries.]
Efficient in practice
Backward Iterative Greatest Fixed-Point (GFP)
Backward: same as LFP except
• Pick the source invariant of an unsatisfied constraint to improve
• Start with <⊤, …, ⊤>
Constraint-based over Predicate Abstraction
Remember: the VCs vc(pre,I1), vc(I1,I2), vc(I2,I2), vc(I2,I1), vc(I1,post) are FOL formulae over I1, I2.
pred(I1): instantiate I1's template ∀k : (−) ⇒ (−) over the predicates. Each unknown hole draws from the predicate set p1, p2, …, pr, guarded by boolean indicator variables b1¹, b2¹, …, br¹ for unknown 1 and b1², b2², …, br² for unknown 2.
If we find assignments bⱼⁱ = T/F, then we are done!
pred(A): map A to its unknown-predicate indicator variables.
boolc: map each program constraint (VC) to a boolean constraint over the predicate indicators: vc(pre,I1) to boolc(pred(I1)), vc(I1,I2) to boolc(pred(I1), pred(I2)), vc(I2,I2) to boolc(pred(I2)), and vc(I2,I1) to boolc(pred(I2), pred(I1)).
Program constraint to boolean constraint: local reasoning. Conjoin the resulting SAT formulae over the predicate indicators and hand them to a SAT solver; a satisfying soln of the boolean constraint is an invariant soln. The fixed-point computation happens inside the SAT solving.
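A toy end-to-end sketch of this idea: invariants are conjunctions of predicates chosen by boolean indicators, and a brute-force search over indicator assignments stands in for the SAT solver. The example program, the predicate set, and all names here are mine, and the VCs are checked by small-model enumeration rather than a theorem prover.

```python
from itertools import product, chain, combinations

# Toy constraint-based inference for
#     i := 0; while (i < n) i := i + 1;     with postcondition  i == n.
PREDS = {
    "i>=0": lambda i, n: i >= 0,
    "i<=n": lambda i, n: i <= n,
    "i==n": lambda i, n: i == n,
}

def holds(inv, i, n):
    # The invariant is the conjunction of the selected predicates.
    return all(PREDS[p](i, n) for p in inv)

def vcs_ok(inv, lo=-2, hi=5):
    # Check the three VCs on all small integer models of (i, n).
    for i, n in product(range(lo, hi), repeat=2):
        if n >= 0 and i == 0 and not holds(inv, i, n):               # init
            return False
        if holds(inv, i, n) and i < n and not holds(inv, i + 1, n):  # preserve
            return False
        if holds(inv, i, n) and i >= n and i != n:                   # exit
            return False
    return True

# All 2^|PREDS| indicator assignments; keep those satisfying every VC.
candidates = chain.from_iterable(
    combinations(PREDS, r) for r in range(len(PREDS) + 1))
solutions = [set(c) for c in candidates if vcs_ok(c)]
```

The search finds {i<=n} (and its strengthening with i>=0) as the invariants making all VCs hold; the empty conjunction fails the exit VC, which is the local reasoning the boolean constraints encode.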
Computing maximally weak preconditions
• Iterative (backwards algorithm, GFP):
  – Computes maximally-weak invariants in each step
  – Precondition generation: output the disjunction of the entry fixed-points over all candidates
• Constraint-based:
  – Generate one solution for the precondition
  – Assert a constraint that ensures a strictly weaker precondition
  – Iterate till unsat; the last precondition generated is maximally weak
Experimental evaluation
• Why experiment?
  – The problem space is undecidable: if not the worst case, is the average, `difficult' case efficiently solvable?
  – We use SAT/SMT solvers, hoping that our instances will not incur their worst-case exponential solving time
Implementation
[Pipeline: a C program goes through Microsoft's Phoenix Compiler to a CFG, which together with the templates and predicate set yields verification conditions. These feed either the iterative fixed-point (GFP/LFP, maintaining candidate solutions) or the constraint-based fixed-point (via a boolean constraint); both query the Z3 SMT solver and output invariants and preconditions.]
Verifying Sorting Algorithms
• Benchmarks: considered difficult to verify; require invariants with quantifiers; sorting programs are ideal benchmarks
• 5 major sorting algorithms: Insertion Sort, Bubble Sort (n² version and termination-checking version), Selection Sort, Quick Sort, and Merge Sort
• Properties:
  – Sort elements: ∀k : 0 ≤ k < n-1 ⇒ A[k] ≤ A[k+1] (output array is non-decreasing)
  – Maintain permutation, when elements are distinct:
    • ∀k∃j : 0 ≤ k < n ⇒ A[k] = Aold[j] ∧ 0 ≤ j < n (no elements gained)
    • ∀k∃j : 0 ≤ k < n ⇒ Aold[k] = A[j] ∧ 0 ≤ j < n (no elements lost)
Runtimes: Sortedness
[Bar chart: LFP, GFP, and CFP runtimes in seconds (0-16s) for Selection Sort, Insertion Sort, Bubble Sort (n²), Bubble Sort (flag), Quick Sort, and Merge Sort.]
The tool can prove sortedness for all sorting algorithms!
Runtimes: Permutation
[Bar chart: LFP, GFP, and CFP runtimes in seconds (0-40s) for the same six sorting algorithms; some runs diverge (∞), and one takes 94.42s.]
…Permutations too!
Inferring Preconditions
• Given a property (worst case runtime or functional correctness), what input is required for the property to hold?
• The tool automatically infers non-trivial inputs/preconditions
• Worst case input (precondition) for sorting:
  – Selection Sort: sorted array, except the last element is the smallest
  – Insertion Sort, Quick Sort, Bubble Sort (flag): reverse-sorted array
• Inputs (preconditions) for functional correctness:
  – Binary search requires a sorted input array
  – Merge in Merge Sort requires sorted inputs
  – Missing initializers identified as required in various other programs
Runtimes (GFP): Inferring worst case inputs for sorting
[Bar chart: GFP runtimes in seconds (0-45s) for the six sorting algorithms.]
The tool infers worst case inputs for all sorting algorithms! For one there is nothing to infer, as all inputs yield the same behavior.
Inferring program properties
[Recap of the tradeoff plot: template-based invariant inference sits near full functional correctness with small, intuitive input, namely invariant templates and predicates.]
Toward no input: enumerate all possible predicate sets and incrementally increase sophistication, at the cost of more burden on the theorem prover and a loss of efficiency.
Verification using Templates: Summary
• Powerful invariant inference over predicate abstraction: can infer quantified invariants
• Three algorithms with different strengths: LFP, GFP, CFP
• Extends to maximally-weak precondition inference: worst case inputs and preconditions for functional correctness
• Techniques build on SMT solvers, so they exploit their power
• Successfully verified, and inferred preconditions for, all major sorting algorithms and other difficult benchmarks
Verification: Summary
[Diagram: the annotated CFG with Input state, Iouter, Iinner, and Output state; strongest postcondition inference (LFP) works forward, weakest precondition inference (GFP) works backward.]
Synthesis: Problem Statement
[Diagram: the same CFG skeleton, but now the statements themselves are unknowns (??) alongside Iouter, Iinner, Input state, and Output state.]
Given: the looping structure and resources.
Resource limits: stack space and computational limits.
Constraint Setup
Verification: with known statements,
  ∀k1,k2 : 0 ≤ k1 < k2 < n ∧ k1 < i ⇒ A[k1] ≤ A[k2]
  ∧ x1=e1 ∧ x2=e2 ∧ x3=e3
    ⇒ ∀k1,k2 : ??1 ⇒ ??2 and ∀k1,k2 : ??3 ⇒ ??4
find optimal solutions to the unknowns ??1, ??2, ??3, ??4.
Synthesis: the statements become unknowns too, and sometimes (for stack space and computation limits) so does the known fact:
  ∀k1,k2 : ??9 ⇒ ??10
  ∧ x1=??5 ∧ x2=??6 ∧ x3=??7
    ⇒ ∀k1,k2 : ??1 ⇒ ??2 and ∀k1,k2 : ??3 ⇒ ??4
find optimal solutions to ??1, ??2, ??3, ??4, and ??5, ??6, ??7, ??8, and sometimes ??9, ??10.
Constraint Solving
Synthesis: find solutions to all the unknowns ??1 … ??10 at once.
• GFP (from the postcondition) and LFP (from the precondition): too inefficient
• Constraint-based: ideal, as it does a piecewise reduction; the SAT-solving fixed-point uses information from both the backward and forward directions
Example: Strassen's Matrix Mult.
• Recursive block matrix multiplication
  – The trivial solution needs 8 multiplications per 2x2 block; hence O(n^lg 8) = O(n³)
• There exists a way to compute the same result in 7 multiplications, giving O(n^lg 7) = O(n^2.81)
Looping structure: acyclic
Resources: 7 multiplications; 7 variables to hold the intermediate results
Block product:
  ( c1 c2 ; c3 c4 ) = ( a1 a2 ; a3 a4 ) ( b1 b2 ; b3 b4 )
Strassen's soln:
  v1 = (a1+a4)(b1+b4)   v2 = (a3+a4)b1
  v3 = a1(b2-b4)        v4 = a4(b3-b1)
  v5 = (a1+a2)b4        v6 = (a3-a1)(b1+b2)
  v7 = (a2-a4)(b3+b4)
  c1 = v1+v4-v5+v7      c2 = v3+v5
  c3 = v2+v4            c4 = v1+v3-v2+v6
Our tool synthesizes Strassen's soln along with thousands of others:
  - 7 multiplications decides the exponent
  - The constant factor is decided by the number of adds/subs
  - Adding another resource limitation, < 19 adds/subs, leaves around 5-10 solns
Our tool can synthesize sorting algorithms too!!!
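Strassen's identities above can be checked mechanically against the direct block product. This is a verification sketch of mine, not the synthesis procedure itself; it exhaustively tests the equations on a grid of small integers.

```python
from itertools import product

# Strassen's seven products and four recombinations for a 2x2 block.
def strassen_2x2(a1, a2, a3, a4, b1, b2, b3, b4):
    v1 = (a1 + a4) * (b1 + b4)
    v2 = (a3 + a4) * b1
    v3 = a1 * (b2 - b4)
    v4 = a4 * (b3 - b1)
    v5 = (a1 + a2) * b4
    v6 = (a3 - a1) * (b1 + b2)
    v7 = (a2 - a4) * (b3 + b4)
    return (v1 + v4 - v5 + v7,   # c1
            v3 + v5,             # c2
            v2 + v4,             # c3
            v1 + v3 - v2 + v6)   # c4

# Direct 2x2 block multiplication: 8 multiplications.
def direct_2x2(a1, a2, a3, a4, b1, b2, b3, b4):
    return (a1*b1 + a2*b3, a1*b2 + a2*b4,
            a3*b1 + a4*b3, a3*b2 + a4*b4)

# 3^8 = 6561 input combinations; the identities agree on all of them.
assert all(strassen_2x2(*m) == direct_2x2(*m)
           for m in product((-1, 0, 1), repeat=8))
```

Since both sides are polynomials of degree 1 in each of the 8 entries, agreement on this grid already forces the identities to hold for all inputs.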
Synthesis: Result
• If you can give me a verification tool that handles multiple positive and negative unknowns
• And that additionally generates maximally-weak solutions to the negative unknowns
Then
• It can be converted into a synthesis tool!
Conclusion
[Diagram: verification (strongest postcondition inference via LFP, weakest precondition inference via GFP, over Input state, Iouter, Iinner, Output state) extends, with statement unknowns (??), to program synthesis.]
VS3: http://www.cs.umd.edu/~saurabhs/pacs
Discuss: [email protected]
…me after the talk! I'm here till Tuesday afternoon.
Outline
• Three fixed-point inference algorithms
  – Iterative fixed-point: Greatest Fixed-Point (GFP) and Least Fixed-Point (LFP)
  – Constraint-based (CFP)
• Optimal solutions: built over a clean theorem-proving interface
• Weakest precondition generation
• Experimental evaluation
Optimal Solutions
• Key: polarity of unknowns in formula Φ, positive or negative
  – Making a positive unknown stronger makes Φ stronger
  – Making a negative unknown stronger makes Φ weaker
• Optimal soln: maximally strong positives, maximally weak negatives
• Assume a theorem prover interface OptNegSol: optimal solutions for a formula with only negative unknowns, built using a lattice search by querying the SMT solver
E.g. ∀x : ∃y : Φ = (¬u1 ∨ u2) ∧ u3, where u1 is negative and u2, u3 are positive.
Optimal Solutions using OptNegSol
Formula Φ contains unknowns u1…uP positive and u1…uN negative.
• Enumerate each P-tuple (a1…aP) that assigns a single predicate to each positive unknown (P × |predicate set| instantiations); substitution yields Φ[a1…aP], which contains only negative unknowns.
• Apply OptNegSol to each, yielding tuples (a1…aP, S1…SN), where each Si is a soln for a negative unknown.
• Merge the positive tuples of the resulting optimal solutions (Opt, Opt', Opt'', …); repeat until the set is stable.
The result: the optimal solutions for formula Φ.
Backwards Iterative (GFP)
Backward:
• Always pick the source invariant of an unsatisfied constraint to improve
• Start with <⊤, …, ⊤>
Each step optimally strengthens so that vc(pre,I1) stays satisfiable, unless there is no soln; a step may produce multiple orthogonal optimal solns.
Candidate sols <I1,I2> and their unsatisfied constraints:
  <⊤, ⊤>     { vc(I1,post) }
  <a1, ⊤>    { vc(I2,I1) }
  <a1,b1>    { vc(I2,I2) }    <a1,b2>    { vc(I2,I2), vc(I1,I2) }
  <a1,b1>    { vc(I2,I2) }    <a1,b'2>   { vc(I1,I2) }
  <a1,b1>    { vc(I2,I2) }    <a'1,b'2>  none
<a'1,b'2> is the GFP solution.
Forward Iterative (LFP)
Forward:
• Pick the destination invariant of an unsatisfied constraint to improve
• Start with <⊥, ⊥>
Each step optimally weakens so that vc(I1,post) stays satisfiable, unless there is no soln; a step may produce multiple orthogonal optimal solns.
Candidate sols <I1,I2> and their unsatisfied constraints:
  <⊥,⊥>     vc(pre,I1)
  <a1,⊥>    vc(I1,I2)
  <a1,b1>   vc(I2,I2)    <a1,b2>   vc(I2,I1)    <a1,b3>   vc(I2,I2), vc(I2,I1)
  <a1,b2>   vc(I2,I1)    <a1,b'3>  vc(I2,I1)    (<a1,b1> subsumed by <a1,b'3>)
  <a'1,b'3> none
<a'1,b'3> is the LFP solution.