
Slides Dimension 06 Dimension Linear Functions II


Page 1: Linear function invertibility, revisited

Kernel-Image Theorem: For any linear function f : V → W,

    dim Ker f + dim Im f = dim V

Linear-Function Invertibility Theorem: Let f : V → W be a linear function. Then f is invertible iff dim Ker f = 0 and dim V = dim W.

Proof: We saw before that f

▶ is one-to-one iff dim Ker f = 0
▶ is onto iff dim Im f = dim W

Therefore f is invertible iff dim Ker f = 0 and dim Im f = dim W.

The Kernel-Image Theorem states dim Ker f + dim Im f = dim V.

Therefore

    dim Ker f = 0 and dim Im f = dim W
iff
    dim Ker f = 0 and dim V = dim W.   QED
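
As a quick numeric illustration (not from the slides), the sketch below checks the two conditions of the theorem for functions of the form f(x) = Ax; the matrices and the helper `is_invertible_map` are assumptions made up for the example, and NumPy is assumed to be available.

```python
# Minimal sketch (assumes NumPy): checking the two conditions of the
# Linear-Function Invertibility Theorem for maps f(x) = A x.
import numpy as np

def is_invertible_map(A):
    """f(x) = A x is invertible iff dim Ker f = 0 and dim V = dim W."""
    m, n = A.shape                             # f maps R^n (V) to R^m (W)
    dim_ker = n - np.linalg.matrix_rank(A)     # by the Kernel-Image Theorem
    return dim_ker == 0 and n == m

# A wide matrix: dim V = 3 but dim W = 2, so f cannot be invertible.
A_wide = np.array([[1., 0., 1.],
                   [0., 1., 1.]])
print(is_invertible_map(A_wide))               # False (kernel is nontrivial too)

# A square matrix with trivial kernel: both conditions hold.
A_square = np.array([[1., 2.],
                     [3., 4.]])
print(is_invertible_map(A_square))             # True
```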

Page 2: Rank-Nullity Theorem

Kernel-Image Theorem: For any linear function f : V → W,

    dim Ker f + dim Im f = dim V

Apply the Kernel-Image Theorem to the function f(x) = Ax:

▶ Ker f = Null A
▶ dim Im f = dim Col A = rank A

Definition: The nullity of a matrix A is dim Null A.

Rank-Nullity Theorem: For any n-column matrix A,

    nullity A + rank A = n
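
A small numeric sanity check of the theorem (my addition, assuming NumPy and SciPy are available): compute a basis of Null A and confirm that nullity A + rank A = n for an example matrix.

```python
# Sketch: numerically verifying nullity A + rank A = n (assumes NumPy/SciPy).
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 1., 2., 0.],
              [2., 1., 3., 0.],
              [3., 1., 4., 0.]])        # 3 x 4, with clearly dependent columns

n = A.shape[1]                          # number of columns
rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]        # dimension of Null A

print(rank, nullity, n)                 # 2 2 4
assert rank + nullity == n              # Rank-Nullity Theorem
```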

Page 3: Checksum problem revisited

Checksum function maps n-vectors over GF(2) to 64-vectors over GF(2):

    x ↦ [a1 · x, ..., a64 · x]

Original "file" p, transmission error e, so corrupted file is p + e.

If the error is chosen according to the uniform distribution,

    Probability(p + e has same checksum as p) = 2^(dim V) / 2^n

where V is the null space of the matrix

    A = [ a1  ]
        [ ... ]
        [ a64 ]

Fact: Can easily choose a1, ..., a64 so that rank A = 64.
(Randomly chosen vectors will probably work.)

Rank-Nullity Theorem ⇒ rank A + nullity A = n, so 64 + dim V = n, i.e. dim V = n - 64.

Therefore

    Probability = 2^(n-64) / 2^n = 1/2^64,

a very tiny chance that the change is undetected.
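
As a hedged illustration of this argument, the sketch below picks 64 random rows over GF(2), computes rank A by Gaussian elimination mod 2, and evaluates the probability 2^(dim V)/2^n. The helper `rank_gf2` and the choice n = 1000 are assumptions made for the example, not part of the course material.

```python
# Sketch of the checksum argument over GF(2) (helper names are my own).
import random

def rank_gf2(rows):
    """Rank over GF(2) of a list of 0/1 lists, via Gaussian elimination mod 2."""
    rows = [r[:] for r in rows]
    rank, n = 0, len(rows[0])
    for col in range(n):
        # find a row (at or below position 'rank') with a 1 in this column
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

n = 1000                                           # length of the file
A = [[random.randint(0, 1) for _ in range(n)] for _ in range(64)]

r = rank_gf2(A)                                    # almost surely 64
nullity = n - r                                    # dim V, by Rank-Nullity
print(r, nullity)
# Probability a uniformly random error lies in Null A (same checksum):
print(2.0 ** (nullity - n))                        # = 2**-64 when r == 64
```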

Page 4: Matrix invertibility

Rank-Nullity Theorem: For any n-column matrix A,

    nullity A + rank A = n

Corollary: Let A be an R × C matrix. Then A is invertible if and only if |R| = |C| and the columns of A are linearly independent.

Proof: Let F be the field. Define f : F^C → F^R by f(x) = Ax. Then A is an invertible matrix if and only if f is an invertible function.

The function f is invertible
    iff dim Ker f = 0 and dim F^C = dim F^R
    iff nullity A = 0 and |C| = |R|.

nullity A = 0
    iff dim Null A = 0
    iff Null A = {0}
    iff the only vector x such that Ax = 0 is x = 0
    iff the columns of A are linearly independent.   QED
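
A hedged NumPy sketch of the corollary as a test: square and columns linearly independent (i.e. rank equals the number of columns). The helper name `is_invertible` and the example matrix are my own.

```python
# Sketch (assumes NumPy): the corollary as a test on a matrix.
import numpy as np

def is_invertible(A):
    """Square, and columns linearly independent (rank equals #columns)."""
    m, n = A.shape
    return m == n and np.linalg.matrix_rank(A) == n

A = np.array([[1., 2.],
              [3., 4.]])
if is_invertible(A):
    print(np.linalg.inv(A))      # safe to invert: columns are independent
```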

Page 5: Matrix invertibility examples

[ 1 2 3 ]
[ 4 5 6 ]   is not square so cannot be invertible.

[ 1 2 ]
[ 3 4 ]   is square and its columns are linearly independent so it is invertible.

[ 1 1 2 ]
[ 2 1 3 ]
[ 3 1 4 ]   is square but its columns are not linearly independent so it is not invertible.
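
The same test applied to the three example matrices above, as a short NumPy check of my own:

```python
# Sketch: confirming the three examples with NumPy's matrix_rank.
import numpy as np

examples = [
    np.array([[1., 2., 3.], [4., 5., 6.]]),                # not square
    np.array([[1., 2.], [3., 4.]]),                        # invertible
    np.array([[1., 1., 2.], [2., 1., 3.], [3., 1., 4.]]),  # dependent columns
]
for A in examples:
    m, n = A.shape
    print(A.shape, m == n and np.linalg.matrix_rank(A) == n)
# (2, 3) False   (2, 2) True   (3, 3) False
```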

Page 6: Transpose of invertible matrix is invertible

Theorem: The transpose of an invertible matrix is invertible.

    A = [ v1 ... vn ] = [ a1  ]          A^T = [ a1 ... an ]
                        [ ... ]
                        [ an  ]

(The columns of A are v1, ..., vn and its rows are a1, ..., an; the rows of A become the columns of A^T.)

Proof: Suppose A is an invertible matrix. Then A is square and its columns are linearly independent. Let n be the number of columns. Then rank A = n.

Because A is square, it has n rows. By the Rank Theorem, the row rank of A equals its column rank, which is n, so the n rows of A are linearly independent.

The columns of the transpose A^T are the rows of A, so the columns of A^T are linearly independent.

Since A^T is square and its columns are linearly independent, we conclude that A^T is invertible. QED
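
A quick numeric check consistent with the theorem (my addition, assuming NumPy): the transpose of an invertible example matrix has full rank as well, and NumPy can invert it.

```python
# Sketch: rank A equals rank of A^T, and A^T is invertible when A is.
import numpy as np

A = np.array([[2., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])            # invertible: rank 3

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))   # 3 3
B = np.linalg.inv(A.T)                  # succeeds because A^T is invertible
print(np.allclose(A.T @ B, np.eye(3)))  # True
```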

Page 7: More matrix invertibility

Earlier we proved: If A has an inverse A^-1, then AA^-1 is the identity matrix.
Converse: If BA is an identity matrix, then A and B are inverses? Not always true.

Theorem: Suppose A and B are square matrices such that BA is an identity matrix 1. Then A and B are inverses of each other.

Proof: To show that A is invertible, we need to show its columns are linearly independent.

Let u be any vector such that Au = 0. Then B(Au) = B 0 = 0. On the other hand, B(Au) = (BA)u = 1 u = u, so u = 0. Thus the only solution of Ax = 0 is the zero vector, so the columns of A are linearly independent.

This shows A has an inverse A^-1. Now we must show B = A^-1. We know AA^-1 is an identity matrix.

           BA = 1
    (BA) A^-1 = 1 A^-1      multiplying on the right by A^-1
    (BA) A^-1 = A^-1
    B (AA^-1) = A^-1        by associativity of matrix-matrix multiplication
          B 1 = A^-1
            B = A^-1        QED
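
Two short NumPy checks of my own, illustrative only: a non-square pair where BA is an identity matrix but AB is not (so the converse fails in general), and a square example consistent with the theorem.

```python
# Sketch: why "square" matters in the theorem (assumes NumPy).
import numpy as np

# Non-square counterexample to the converse: BA = 1 but AB != 1.
A = np.array([[1.], [0.]])              # 2 x 1
B = np.array([[1., 0.]])                # 1 x 2
print(B @ A)                            # [[1.]]  -- the 1 x 1 identity
print(A @ B)                            # [[1., 0.], [0., 0.]]  -- not identity

# Square case: BA close to the identity goes hand in hand with AB close to it.
A = np.array([[1., 2.],
              [3., 4.]])
B = np.linalg.inv(A)                    # a B with B A = 1
print(np.allclose(B @ A, np.eye(2)))    # True
print(np.allclose(A @ B, np.eye(2)))    # True, as the theorem guarantees
```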