Program Derivation of Matrix Operations in GF

DESCRIPTION

The original version of my undergraduate research presentation, the one I was graded on. (I got an A, but this version is certainly inferior to the later version of the presentation, by which time I also had better insight into my results.)

1. Program Derivation of Matrix Operations in GF
Charles Southerland
Dr. Anita Walker
East Central University

2. The Galois Field
A finite field is a finite set together with two operators (analogous to addition and multiplication) over which certain properties hold. An important finding in Abstract Algebra is that all finite fields of the same order are isomorphic. A finite field is also called a Galois Field in honor of Évariste Galois, a significant French mathematician in the area of Abstract Algebra who died at age 20.

3. History of Program Derivation
Hoare's 1969 paper "An Axiomatic Basis for Computer Programming" essentially created the field of Formal Methods in CS. Dijkstra's paper "Guarded Commands, Nondeterminacy and Formal Derivation of Programs" introduced the idea of program derivation. Gries' book "The Science of Programming" brings Dijkstra's paper to a level undergraduate CS and Math majors can understand.

4. Guarded Command Language
This is part of the language that Dijkstra defined:
    S1; S2          Perform S1, and then perform S2.
    x := e          Assign the value of e to the variable x.
    if [b1 → S1] [b2 → S2] ... fi
                    Execute exactly one of the guarded commands (i.e. S1, S2, ...) whose corresponding guard (i.e. b1, b2, ...) is true, if any.
    do [b1 → S1] [b2 → S2] ... od
                    Execute the command if [b1 → S1] [b2 → S2] ... fi repeatedly until none of the guards are true.

5. The Weakest Precondition Predicate Transformer wp
Consider the mapping wp : P × L → L, where P is the set of all finite-length programs and L is the set of all logical statements about the state of a computer. For S ∈ P and R ∈ L, wp(S, R) yields the weakest Q ∈ L such that execution of S from within any state satisfying Q yields a state satisfying R. With regard to this definition, we say a statement A is weaker than a statement B if and only if the set of states satisfying B is a proper subset of the set of states satisfying A.

6. Some Notable Properties of wp
    wp([S1; S2], R) = wp(S1, wp(S2, R))
    wp([x := e], R) = R, substituting e for x
    wp([if [b1 → S1] [b2 → S2] fi], R) = (b1 ∨ b2) ∧ (b1 ⇒ wp(S1, R)) ∧ (b2 ⇒ wp(S2, R))
    wp([do [b1 → S1] [b2 → S2] od], R) = (R ∧ ¬b1 ∧ ¬b2) ∨ wp([if [b1 → S1] [b2 → S2] fi], R) ∨ wp([if], wp([if], R)) ∨ wp([if], wp([if], wp([if], R))) ∨ ... (for finitely many recursions, where [if] abbreviates the if ... fi statement above)

7. The Program Derivation Process
For precondition Q ∈ L and postcondition R ∈ L, find S ∈ P such that Q = wp(S, R).
    Gather as much information as possible about the precondition and postcondition.
    Reduce the problem to previously solved ones whenever possible.
    Look for a loop invariant that gives clues on how to implement the program.
    If you are stuck, consider alternative representations of the data.

8. Conditions and Background for the Multiplicative Inverse in GF
The precondition is that a and b be coprime natural numbers. The postcondition is that x be the multiplicative inverse of a modulo b. Since the greatest common divisor of a and b is 1, Bézout's Identity yields ax + by = 1, where x is the multiplicative inverse of a. Recall that gcd(a, b) = gcd(a - b, b) = gcd(a, b - a).

9. Analyzing Properties of the Multiplicative Inverse in GF
Combining Bézout's Identity and the given property of gcd, we get
    ax + by = gcd(a, b)
            = gcd(a, b - a)
            = au + (b - a)v
            = au + bv - av
            = a(u - v) + bv
Since ax differs from a(u - v) by a constant multiple of b, we get x ≡ (u - v) mod b. Solving for u, we see u ≡ (x + v) mod b, which leads us to wonder whether u and v may be linear combinations of x and y.

10. Towards a Loop Invariant for the Multiplicative Inverse in GF
Rewriting Bézout's Identity using this, we get
    ax + by = a(1·x + 0·y) + b(0·x + 1·y)
            = a((1·x + 0·y) + y - y) + b(0·x + 1·y)
            = a(x + y - y) + by
            = a(x + y) - ay + by
            = a(x + y) + (b - a)y
            = au + (b - a)y    (so we deduce that v = y)
Note that assigning c := b - a and z := x + y would yield ax + by = az + cy.

11. Finding the Loop Invariant for the Multiplicative Inverse in GF
Remembering that u and v are linear combinations of x and y, we see that reducing the values of a and b as in the Euclidean Algorithm gives
    a1·u1 + b1·v1 = a1·(c_a1x·x + c_a1y·y) + b1·(c_b1x·x + c_b1y·y)
                  = a2·(c_a1x·x + c_a1y·y) + b1·((c_b1x - c_a1x)·x + (c_b1y - c_a1y)·y)
                  = ...
After the completion of the Euclidean Algorithm, we will have gcd(a, b)·(c_xf·x + c_yf·y) = 1.

12. Algorithm for the Multiplicative Inverse in GF
    multinv(a, b) {
        x := 1; y := 0
        do [a > b → a := a - b; x := x + y]
           [b > a → b := b - a; y := y + x]
        od
        return x
    }

13. C Implementation of the Multiplicative Inverse in GF
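The following is a minimal C sketch of the multinv algorithm from slide 12, not necessarily the listing the original slide showed; the do ... od loop is rendered as a while statement whose body applies whichever guard holds, and the unsigned type and the requirement that a and b be nonzero are assumptions of this sketch.

    /* Multiplicative inverse of a modulo b, assuming gcd(a, b) = 1 and
     * a, b > 0.  A direct transcription of the guarded-command multinv:
     * each iteration applies whichever guard (a > b or b > a) is true,
     * and the loop stops when neither holds, i.e. when a == b. */
    unsigned multinv(unsigned a, unsigned b)
    {
        unsigned x = 1, y = 0;
        while (a != b) {
            if (a > b) {        /* a > b  ->  a := a - b; x := x + y */
                a -= b;
                x += y;
            } else {            /* b > a  ->  b := b - a; y := y + x */
                b -= a;
                y += x;
            }
        }
        return x;
    }

For example, multinv(3, 7) evaluates to 5, and indeed 3 · 5 = 15 ≡ 1 (mod 7).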
14. Conditions and Background of the Matrix Product in GF
The precondition is that the number of columns of A and the number of rows of B are equal. The postcondition is that C is the matrix product of A and B. The definition of the matrix product allows the elements of C to be built one at a time, which seems to be a particularly straightforward approach to the problem.

15. Loop Invariant of the Matrix Product in GF
A good loop invariant would be that all elements of C which either have a row index less than i, or else have a row index equal to i and a column index less than or equal to j, have the correct value. The loop clearly heads toward termination given that C is filled from left to right and from top to bottom (which will occur if the value of j is increased modulo the number of columns after every calculation, increasing i by 1 every time j returns to 0).

16. C Implementation of the Matrix Product in GF (a sketch in this spirit follows after slide 18 below)

17. Conditions and Background of the Determinant of a Matrix in GF
The precondition is that the number of rows and the number of columns of A are equal. The postcondition is that d is the determinant of A. The naive approach to the problem is not very efficient, but it is much easier to explain and produces cleaner code.

18. The Loop Invariant of the Determinant of a Matrix in GF
The loop invariant of the naive determinant algorithm is that d is equal to the sum for all k
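Here is the sketch promised under slide 16: a minimal C rendering of the matrix product over GF(P), filling C element by element in the order the loop invariant of slide 15 describes. The modulus P = 7, the fixed dimensions, the example matrices, and the name matprod_gf are illustrative assumptions, not the original listing.

    #include <stdio.h>

    #define P 7          /* assumed prime modulus, so arithmetic is in GF(7) */
    #define ROWS_A 2
    #define COLS_A 3     /* equals the number of rows of B (the precondition) */
    #define COLS_B 2

    /* C := A * B over GF(P).  C is filled left to right, top to bottom,
     * matching the invariant: every element before position (i, j) already
     * holds its final value. */
    void matprod_gf(unsigned A[ROWS_A][COLS_A],
                    unsigned B[COLS_A][COLS_B],
                    unsigned C[ROWS_A][COLS_B])
    {
        unsigned i = 0, j = 0;
        while (i < ROWS_A) {
            unsigned sum = 0;
            for (unsigned k = 0; k < COLS_A; k++)
                sum = (sum + A[i][k] * B[k][j]) % P;
            C[i][j] = sum;
            j = (j + 1) % COLS_B;   /* advance j modulo the number of columns... */
            if (j == 0)
                i++;                /* ...and move to the next row when j wraps to 0 */
        }
    }

    int main(void)
    {
        unsigned A[ROWS_A][COLS_A] = { {1, 2, 3}, {4, 5, 6} };
        unsigned B[COLS_A][COLS_B] = { {1, 0}, {0, 1}, {2, 2} };
        unsigned C[ROWS_A][COLS_B];

        matprod_gf(A, B, C);
        for (unsigned i = 0; i < ROWS_A; i++)
            printf("%u %u\n", C[i][0], C[i][1]);   /* prints 0 1 and then 2 3 */
        return 0;
    }

The (j + 1) % COLS_B update, with i stepping forward each time j wraps to 0, is the termination argument sketched in slide 15.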