
Linear Algebra

    Dr. Suresh Kumar, Department of Mathematics, BITS-Pilani, Pilani Campus

Note: Some concepts of Linear Algebra are briefly described here just to help the students. Therefore, the following study material is expected to be useful but not exhaustive for the Mathematics-II course. For detailed study, the students are advised to attend the lecture/tutorial classes regularly, and consult the text book prescribed in the hand out of the course.

    Chapter 2 (2.1-2.4)

    Elementary row operations

There are three elementary row operations:
(1) Interchanging two rows Ri and Rj (symbolically written as Ri ↔ Rj)
(2) Multiplying a row Ri by a non-zero number k (symbolically written as Ri → kRi)
(3) Adding a constant k multiple of a row Rj to a row Ri (symbolically written as Ri → Ri + kRj)

    To see how row transformations are applied, consider the matrix

A =
[ 4  8  10 ]
[ 1  2   3 ]
[ 3  5   6 ].

Applying R1 ↔ R2, we obtain

A ~
[ 1  2   3 ]
[ 4  8  10 ]
[ 3  5   6 ].

Applying R2 → (1/2)R2, we obtain

A ~
[ 1  2  3 ]
[ 2  4  5 ]
[ 3  5  6 ].

Applying R2 → R2 - 2R1 and R3 → R3 - 3R1, we get

A ~
[ 1   2   3 ]
[ 0   0  -1 ]
[ 0  -1  -3 ].

Note. The matrices resulting from the row transformation(s) are known as row equivalent matrices. That is why the sign of equivalence is used after applying row transformations. So we can write

A =
[ 4  8  10 ]
[ 1  2   3 ]
[ 3  5   6 ]
~
[ 1  2   3 ]
[ 4  8  10 ]
[ 3  5   6 ]
~
[ 1  2  3 ]
[ 2  4  5 ]
[ 3  5  6 ]
~
[ 1   2   3 ]
[ 0   0  -1 ]
[ 0  -1  -3 ].

Notice that row equivalent matrices can be obtained from each other by applying suitable row transformation(s), but row equivalent matrices need not be equal.
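Note (computational aside): The three row operations are easy to mimic numerically. The following is a minimal sketch, assuming the Python library NumPy is available; it reproduces the row operations applied to the matrix A above.

    import numpy as np

    A = np.array([[4., 8., 10.],
                  [1., 2., 3.],
                  [3., 5., 6.]])

    A[[0, 1]] = A[[1, 0]]    # R1 <-> R2: interchange two rows
    A[1] = 0.5 * A[1]        # R2 -> (1/2) R2: multiply a row by a non-zero number
    A[1] = A[1] - 2 * A[0]   # R2 -> R2 - 2 R1
    A[2] = A[2] - 3 * A[0]   # R3 -> R3 - 3 R1
    print(A)                 # the row equivalent matrix obtained above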


    Row Echelon Form

A matrix is said to be in row echelon form (REF) if
(i) all the zero rows, if any, lie at the bottom, and
(ii) the leading entry (the first non-zero entry) in any row is 1, and its column lies to the right of the column of the leading entry of the preceding row.
(Dictionary meaning of echelon: a formation of troops in which each unit is positioned successively to the left or right of the rear unit to form an oblique or steplike line.)

In addition, if all the entries above the leading entries are 0, then the matrix is said to be in reduced row echelon form (RREF).

    Ex.

[ 1  1  3 ]
[ 0  1  5 ]
[ 0  0  1 ],

[ 1  1  3 ]
[ 0  0  1 ]
[ 0  0  0 ],

[ 1  1  3 ]
[ 0  0  1 ]

all are in REF.

    Ex.

[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  1 ],

[ 1  3  0 ]
[ 0  0  1 ]
[ 0  0  0 ],

[ 1  0  3 ]
[ 0  1  5 ]
[ 0  0  0 ]

all are in RREF.

The following example illustrates how we find the REF and RREF of a given matrix.

    Ex. Find RREF of the matrix

A =
[ 2  4  5 ]
[ 1  2  3 ]
[ 3  5  6 ].

    Sol. We have

A =
[ 2  4  5 ]
[ 1  2  3 ]
[ 3  5  6 ].

Applying R1 ↔ R2, we obtain

A ~
[ 1  2  3 ]
[ 2  4  5 ]
[ 3  5  6 ].

Applying R2 → R2 - 2R1 and R3 → R3 - 3R1, we get

A ~
[ 1   2   3 ]
[ 0   0  -1 ]
[ 0  -1  -3 ].

Applying R2 ↔ R3, we obtain

A ~
[ 1   2   3 ]
[ 0  -1  -3 ]
[ 0   0  -1 ].

Applying R2 → -R2 and R3 → -R3, we get

A ~
[ 1  2  3 ]
[ 0  1  3 ]
[ 0  0  1 ].

Notice that it is the REF of A.



Applying R2 → R2 - 3R3 and R1 → R1 - 3R3, we get

A ~
[ 1  2  0 ]
[ 0  1  0 ]
[ 0  0  1 ].

Finally, applying R1 → R1 - 2R2, we get

A ~
[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  1 ],

the RREF of A.

Useful Tip: From the above example, one may notice that for getting the RREF of a matrix we make use of the first row to make zeros in the first column, the second row to make zeros in the second column, and so on.

Note: The REF of a matrix is not unique, but the RREF is unique.
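Note (computational aside): The RREF computed above can be cross-checked with a computer algebra system. A minimal sketch, assuming the Python library SymPy is available:

    from sympy import Matrix

    A = Matrix([[2, 4, 5],
                [1, 2, 3],
                [3, 5, 6]])

    rref_A, pivot_columns = A.rref()   # RREF together with the pivot column indices
    print(rref_A)                      # the 3 x 3 identity matrix, as derived above
    print(pivot_columns)               # (0, 1, 2)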

    Inverse of a Matrix

Let A be a matrix with m rows, say R1, R2, ...., Rm, and B be a matrix with n columns, say C1, C2, ....., Cn. Then the product matrix AB is of order m × n, and

A·B =
[ R1 ]
[ R2 ]
[ ... ]
[ Rm ]
[ C1  C2  ...  Cn ]
=
[ R1C1  R1C2  ...  R1Cn ]
[ R2C1  R2C2  ...  R2Cn ]
[  ...    ...   ...   ... ]
[ RmC1  RmC2  ...  RmCn ]
= AB.

If we interchange two rows in A, say R1 ↔ R2, then the first two rows of AB also get interchanged. Similarly, it is easy to see that applying any of the other two row operations in A is equivalent to applying the same row operation in AB. Thus, we conclude that applying any elementary row operation in the matrix A is equivalent to applying the same elementary row operation in the matrix AB. Hence, if R is any row operation, then R(AB) = R(A)B. Note that the matrix B is left unchanged. We make use of this fact to find the inverse of a matrix.

Let A be a given non-singular matrix of order n × n. To find the inverse of A, first we write A = InA. In this identity, we apply the elementary row operations on the left hand side matrix A in such a way that it transforms to In, the RREF of A. As discussed above, the same row operations apply to the first matrix In on the right hand side, and suppose it transforms to a matrix B. Then, we have In = BA. Therefore, A^(-1) = B. This method for obtaining the inverse of a matrix is called the Gauss-Jordan method.

Note: One may use elementary column operations (Ci ↔ Cj, Ci → kCi and Ci → Ci + kCj) also to find A^(-1). In this case, we write A = AIn and apply elementary column operations to obtain In = AB so that A^(-1) = B. It may be noted that we can not apply row and column operations together for finding A^(-1) in the Gauss-Jordan method.

    Ex. Use Gauss-Jordan method to find inverse of the matrix

A =
[ 2  4  5 ]
[ 1  2  3 ]
[ 3  5  6 ].


Sol. We write A = I3A, and therefore

[ 2  4  5 ]   [ 1  0  0 ]
[ 1  2  3 ] = [ 0  1  0 ] A.
[ 3  5  6 ]   [ 0  0  1 ]

Applying R1 ↔ R2, we obtain

[ 1  2  3 ]   [ 0  1  0 ]
[ 2  4  5 ] = [ 1  0  0 ] A.
[ 3  5  6 ]   [ 0  0  1 ]

Applying R2 → R2 - 2R1 and R3 → R3 - 3R1, we get

[ 1   2   3 ]   [ 0   1  0 ]
[ 0   0  -1 ] = [ 1  -2  0 ] A.
[ 0  -1  -3 ]   [ 0  -3  1 ]

Applying R2 ↔ R3, we obtain

[ 1   2   3 ]   [ 0   1  0 ]
[ 0  -1  -3 ] = [ 0  -3  1 ] A.
[ 0   0  -1 ]   [ 1  -2  0 ]

Applying R2 → -R2 and R3 → -R3, we get

[ 1  2  3 ]   [  0   1   0 ]
[ 0  1  3 ] = [  0   3  -1 ] A.
[ 0  0  1 ]   [ -1   2   0 ]

Applying R2 → R2 - 3R3 and R1 → R1 - 3R3, we get

[ 1  2  0 ]   [  3  -5   0 ]
[ 0  1  0 ] = [  3  -3  -1 ] A.
[ 0  0  1 ]   [ -1   2   0 ]

Finally, applying R1 → R1 - 2R2, we get

[ 1  0  0 ]   [ -3   1   2 ]
[ 0  1  0 ] = [  3  -3  -1 ] A.
[ 0  0  1 ]   [ -1   2   0 ]

Therefore,

A^(-1) =
[ -3   1   2 ]
[  3  -3  -1 ]
[ -1   2   0 ].

Useful Tip: To find the inverse of A, first write A = InA. Then change the left hand side matrix A to its RREF by applying suitable row transformations so that In = BA and A^(-1) = B.

Note: You might be familiar that the inverse of a square matrix A exists if and only if A is non-singular, that is, |A| ≠ 0. Note that the RREF of a non-singular matrix is always a unit matrix.
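Note (computational aside): The Gauss-Jordan computation of A^(-1) amounts to row reducing the augmented matrix [A : I3]; the right-hand block of the RREF is then the inverse. A minimal sketch, assuming SymPy is available:

    from sympy import Matrix, eye

    A = Matrix([[2, 4, 5],
                [1, 2, 3],
                [3, 5, 6]])

    augmented = A.row_join(eye(3))   # the augmented matrix [A : I3]
    rref_aug, _ = augmented.rref()   # left block becomes I3
    A_inv = rref_aug[:, 3:]          # right block is A^(-1)
    print(A_inv)                     # [[-3, 1, 2], [3, -3, -1], [-1, 2, 0]]
    print(A_inv == A.inv())          # True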

Note: You know that when a system of n linear equations in n variables is represented in the matrix form AX = B, then A is the n × n matrix of the coefficients of the variables, and the solution of the system reads as X = A^(-1)B provided A^(-1) exists. If the number of equations is not equal to the number of variables, then A is not a square matrix, and therefore A^(-1) is not defined. In what follows, we present a general strategy for solving a system of linear equations. First we introduce the concept of the rank of a matrix.


    Rank of a Matrix

Let A be a matrix of order m × n. Then the rank of A, denoted by rank(A), is defined as the number of non-zero rows in the REF of the matrix A.

Ex. Find the rank of the matrix

A =
[ 2  4  5 ]
[ 1  2  3 ]
[ 3  5  6 ].

    Sol. Applying suitable row transformations, we obtain

A =
[ 2  4  5 ]   [ 1  2  3 ]
[ 1  2  3 ] ~ [ 0  1  3 ]
[ 3  5  6 ]   [ 0  0  1 ].

We see that the REF of A carries three non-zero rows. So the rank of A is 3.

Useful Tip: To find the rank of a matrix A, find the REF of A. Then the rank of A is the number of non-zero rows in the REF of A.

Additional Information: The number of non-zero rows in the REF of A is, in fact, defined as the row rank of A. Similarly, the number of non-zero rows in the REF of A^T (the transpose of A) is defined as the column rank of A. Further, it can be established that the row rank is always equal to the column rank. It follows that the rank of A is less than or equal to the minimum of m (the number of rows in A) and n (the number of columns in A).
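Note (computational aside): Rank computations of this kind are quick to verify. A minimal sketch, assuming SymPy is available (Matrix.rank uses row reduction, matching the definition above):

    from sympy import Matrix

    A = Matrix([[2, 4, 5],
                [1, 2, 3],
                [3, 5, 6]])

    print(A.rank())     # 3, the number of non-zero rows in the REF of A
    print(A.T.rank())   # also 3: the column rank equals the row rank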

    Solution of a System of Linear Equations

A system of m linear equations in n unknown variables x1, x2, ...., xn is given by

a11x1 + a12x2 + ........... + a1nxn = b1,
a21x1 + a22x2 + ........... + a2nxn = b2,
..........................
am1x1 + am2x2 + ........... + amnxn = bm.

Here aij and bi (i = 1, 2, ..., m and j = 1, 2, ..., n) are constants. The matrix form of this system of linear equations reads as

    AX=B,

    where

A =
[ a11  a12  ...  a1n ]
[ a21  a22  ...  a2n ]
[ ...   ...  ...  ... ]
[ am1  am2  ...  amn ],

X =
[ x1 ]
[ x2 ]
[ ... ]
[ xn ]

and B =
[ b1 ]
[ b2 ]
[ ... ]
[ bm ].

The system AX = B is said to be non-homogeneous provided B ≠ O. Further, AX = O is called a homogeneous system. The matrix

[ a11  a12  ...  a1n  b1 ]
[ a21  a22  ...  a2n  b2 ]
[ ...   ...  ...  ...  ... ]
[ am1  am2  ...  amn  bm ],


which is formed by inserting the column of the matrix B next to the columns of A, is known as the augmented matrix of the matrices A and B. We shall denote it by [A : B]. The following theorem tells us about the consistency of the system AX = B.

Theorem: The system AX = B of linear equations has a
(i) unique solution if rank(A) = rank([A : B]) = n,
(ii) infinitely many solutions if rank(A) = rank([A : B]) < n,
(iii) no solution if rank(A) ≠ rank([A : B]).

From this theorem, we deduce the following.

If B = O, then obviously rank(A) = rank([A : B]). It implies that the homogeneous system AX = O always has at least one solution. Further, it has the unique trivial solution X = O if rank(A) = n, and infinitely many solutions if rank(A) < n.

To find rank([A : B]), we find the REF of the augmented matrix [A : B]. From the REF of [A : B], we can immediately write rank(A). Then using the above theorem, we decide about the nature of the solution of the system. In case the solution exists, it can be derived using the REF of the matrix [A : B] as illustrated in the following example.

Ex. Test the consistency of the following system of equations and find the solution, if it exists.

    2x + 3y+ 4z = 11,

    x + 5y+ 7z = 15,

    3x + 11y+ 13z = 25.

Sol. Considering the matrix form AX = B of the given system, we have

X =
[ x ]
[ y ]
[ z ]

and the augmented matrix

[A : B] =
[ 2   3   4  11 ]
[ 1   5   7  15 ]
[ 3  11  13  25 ].

Applying R1 ↔ R2, we obtain

[A : B] ~
[ 1   5   7  15 ]
[ 2   3   4  11 ]
[ 3  11  13  25 ].

Applying R2 → R2 + (-2)R1 and R3 → R3 + (-3)R1, we have

[A : B] ~
[ 1   5    7   15 ]
[ 0  -7  -10  -19 ]
[ 0  -4   -8  -20 ].

Applying R2 → (-1/7)R2, we have

[A : B] ~
[ 1   5    7     15 ]
[ 0   1  10/7  19/7 ]
[ 0  -4   -8    -20 ].

Applying R3 → R3 + 4R2, we have

[A : B] ~
[ 1  5    7      15  ]
[ 0  1  10/7   19/7  ]
[ 0  0  -16/7  -64/7 ].


Applying R3 → (-7/16)R3, we have

[A : B] ~
[ 1  5    7    15  ]
[ 0  1  10/7  19/7 ]
[ 0  0    1     4  ].

This is the REF of [A : B], which contains three non-zero rows. So rank([A : B]) = 3. Also, we see that the REF of the matrix A contains three non-zero rows. So rank(A) = 3. Further, there are three variables in the given system. So rank(A) = rank([A : B]) = 3. Hence, the given system of equations is consistent and has a unique solution.

From the REF of [A : B], the given system of equations is equivalent to

x + 5y + 7z = 15,

y + (10/7)z = 19/7,

z = 4.

From the third equation, we have z = 4. Inserting z = 4 into the second equation, we obtain y = -3. Finally, plugging z = 4 and y = -3 into the first equation, we get x = 2. Hence, the solution of the given system is x = 2, y = -3 and z = 4.

Note: In the above example, first we have found the REF of the matrix [A : B]. Then we have written the reduced system of equations and found the solution using back substitution. This approach is called the Gauss Elimination Method. If we use the RREF of [A : B] to obtain the solution, then this approach is called the Gauss-Jordan Method. For illustration of this method, we start with the REF of the matrix [A : B] as obtained above. We have

[A : B] ~
[ 1  5    7    15  ]
[ 0  1  10/7  19/7 ]
[ 0  0    1     4  ].

Applying R2 → R2 + (-10/7)R3 and R1 → R1 + (-7)R3, we get

[A : B] ~
[ 1  5  0  -13 ]
[ 0  1  0   -3 ]
[ 0  0  1    4 ].

Applying R1 → R1 - 5R2, we get

[A : B] ~
[ 1  0  0   2 ]
[ 0  1  0  -3 ]
[ 0  0  1   4 ].

The RREF of [A : B] yields x = 2, y = -3 and z = 4.
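Note (computational aside): The unique solution obtained above can be cross-checked numerically. A minimal sketch, assuming NumPy is available:

    import numpy as np

    A = np.array([[2., 3., 4.],
                  [1., 5., 7.],
                  [3., 11., 13.]])
    B = np.array([11., 15., 25.])

    X = np.linalg.solve(A, B)   # solves AX = B for a square non-singular A
    print(X)                    # [ 2. -3.  4.], i.e. x = 2, y = -3, z = 4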

    Ex. Test the consistency of the following system of equations

x + y + 2z + w = 5,

2x + 3y - z - 2w = 2,

4x + 5y + 3z = 7.

    Sol. Here the augmented matrix is

[A : B] =
[ 1  1   2   1  5 ]
[ 2  3  -1  -2  2 ]
[ 4  5   3   0  7 ].


    Using suitable row transformations (you can do it), we find

[A : B] ~
[ 1  0   7   5  0 ]
[ 0  1  -5  -4  0 ]
[ 0  0   0   0  1 ].

    We see that the rank([A : B]) = 3, but rank(A) = 2. So the given system of equations is inconsistent,

    that is, it has no solution.

Note: From the above example, you can understand why there does not exist a solution when rank(A) ≠ rank([A : B]). For, look at the reduced system of equations, which reads as

x + 7z + 5w = 0,

y - 5z - 4w = 0,

0 = 1.

Obviously, the third equation is absurd.

Also, notice that initially we are given three equations in four variables. An initial guess says that there would exist infinitely many solutions of the system. But we find that there does not exist even a single solution. So you should keep in mind that a system involving more variables than equations need not possess a solution.
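Note (computational aside): The rank test for consistency can be carried out mechanically. A minimal sketch for the inconsistent system above, assuming SymPy is available:

    from sympy import Matrix

    A = Matrix([[1, 1, 2, 1],
                [2, 3, -1, -2],
                [4, 5, 3, 0]])
    B = Matrix([5, 2, 7])

    augmented = A.row_join(B)           # the augmented matrix [A : B]
    print(A.rank(), augmented.rank())   # 2 3 -> the ranks differ, so no solution exists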

Ex. Test the consistency of the following system of equations:

x + 2y + z = 1,

3x - y + 2z = 1,

y + λz = 1,

where λ is a constant.

Sol. Here the augmented matrix is

[A : B] =
[ 1   2  1  1 ]
[ 3  -1  2  1 ]
[ 0   1  λ  1 ].

Using suitable row transformations (you can do it), we find

[A : B] ~
[ 1  2    1     1  ]
[ 0  1    1    4/5 ]
[ 0  0  λ - 1  1/5 ].

We see that rank([A : B]) = 3 irrespective of the value of λ, but rank(A) = 2 if λ = 1 and rank(A) = 3 if λ ≠ 1. So the given system of equations is consistent with rank([A : B]) = 3 = rank(A) when λ ≠ 1, and possesses a unique solution in terms of λ. The reduced system of equations reads as

x + 2y + z = 1,

y + z = 4/5,

(λ - 1)z = 1/5,

which gives

z = 1/(5(λ - 1)),   y = 4/5 - 1/(5(λ - 1)),   x = -3/5 + 1/(5(λ - 1)).


Ex. Test the consistency of the following system of equations and find the solution, if it exists.

6x1 - 12x2 - 5x3 + 16x4 - 2x5 = -53,

-3x1 + 6x2 + 3x3 - 9x4 + x5 = 29,

-4x1 + 8x2 + 3x3 - 10x4 + x5 = 33.

Sol. Here the augmented matrix is

[A : B] =
[  6  -12  -5   16  -2  -53 ]
[ -3    6   3   -9   1   29 ]
[ -4    8   3  -10   1   33 ].

Using suitable row transformations (you can do it), we find

[A : B] ~
[ 1  -2  0   1  0  -4 ]
[ 0   0  1  -2  0   5 ]
[ 0   0  0   0  1   2 ].

We see that rank([A : B]) = 3 = rank(A) < 5 (the number of variables in the system). So the given system of equations has infinitely many solutions. The reduced system of equations is

x1 - 2x2 + x4 = -4,

x3 - 2x4 = 5,

x5 = 2.

The second and fourth columns in the RREF of [A : B] do not carry the leading entries, and correspond to the variables x2 and x4, which we consider as independent variables. Let x2 = b and x4 = d. So from the reduced system of equations, we get

x1 = 2b - d - 4,  x2 = b,  x3 = 2d + 5,  x4 = d,  x5 = 2.

Hence, the complete solution set is

{(x1, x2, x3, x4, x5) = (2b - d - 4, b, 2d + 5, d, 2) : b, d ∈ R}.
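Note (computational aside): For a system with infinitely many solutions, a computer algebra system returns the same parametric family. A minimal sketch, assuming SymPy is available; the free symbols x2 and x4 play the role of b and d above:

    from sympy import Matrix, linsolve, symbols

    x1, x2, x3, x4, x5 = symbols('x1 x2 x3 x4 x5')
    A = Matrix([[6, -12, -5, 16, -2],
                [-3, 6, 3, -9, 1],
                [-4, 8, 3, -10, 1]])
    B = Matrix([-53, 29, 33])

    print(linsolve((A, B), x1, x2, x3, x4, x5))
    # {(2*x2 - x4 - 4, x2, 2*x4 + 5, x4, 2)}, the same family as above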

    Row vectors, linear combinations and row space

    The rows of a matrix are treated as row vectors. For example, consider the matrix

A =
[ 2   3   4 ]
[ 1   5   7 ]
[ 3  11  13 ].

Then there are three row vectors given by [2, 3, 4], [1, 5, 7] and [3, 11, 13].

A sum of scalar multiples of the row vectors is called a linear combination, while the set of all linear combinations of the row vectors is called the row space of the matrix. For example, if a, b and c are any three real numbers, then the expression

a[2, 3, 4] + b[1, 5, 7] + c[3, 11, 13] = [2a + b + 3c, 3a + 5b + 11c, 4a + 7b + 13c]

is a linear combination of the vectors [2, 3, 4], [1, 5, 7] and [3, 11, 13] while the set

{[2a + b + 3c, 3a + 5b + 11c, 4a + 7b + 13c] : a, b, c ∈ R}


of all linear combinations of the row vectors of A is the row space of A.

Recall that two matrices are row equivalent if one can be derived from the other by applying suitable row transformation(s). The row spaces of two row equivalent matrices are always the same. After all, row spaces are nothing but the linear combinations of the row vectors of the matrices, and the interplay of row operations gives rise to the same sets of linear combinations of row vectors, and hence the same row spaces. This fact is quite useful for determining a simple form of the row space of a matrix. We know the RREF of a matrix is row equivalent to it, and is unique as well. So we shall prefer to use the RREF of the given matrix to write its row space. For example, consider the matrix

A =
[ 2   3   4 ]
[ 1   5   7 ]
[ 3  11  13 ].

Its RREF is

[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  1 ].

So the row space of A reads as

{a[1, 0, 0] + b[0, 1, 0] + c[0, 0, 1] = [a, b, c] : a, b, c ∈ R}.

Ex. Determine whether the row vector [5, 17, -20] is in the row space of the matrix

P =
[  3  1  -2 ]
[  4  0   1 ]
[ -2  4  -3 ].

Sol. We need to check whether there exist three real numbers a, b and c such that

[5, 17, -20] = a[3, 1, -2] + b[4, 0, 1] + c[-2, 4, -3].

This gives the following system of linear equations:

3a + 4b - 2c = 5,

a + 4c = 17,

-2a + b - 3c = -20.

Here the augmented matrix is

[A : B] =
[  3  4  -2    5 ]   [ 1  0  0   5 ]
[  1  0   4   17 ] ~ [ 0  1  0  -1 ]
[ -2  1  -3  -20 ]   [ 0  0  1   3 ].

So we get a = 5, b = -1, c = 3, and

[5, 17, -20] = 5[3, 1, -2] - [4, 0, 1] + 3[-2, 4, -3].

Thus, [5, 17, -20] is a linear combination of the row vectors of P, and hence is in the row space of P.
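Note (computational aside): Membership in the row space reduces to solving the small linear system above for a, b, c. A minimal sketch, assuming SymPy is available:

    from sympy import Matrix, linsolve, symbols

    a, b, c = symbols('a b c')
    P = Matrix([[3, 1, -2],
                [4, 0, 1],
                [-2, 4, -3]])
    v = Matrix([5, 17, -20])

    # a*row1 + b*row2 + c*row3 = v is the same as P.T * [a, b, c]^T = v
    print(linsolve((P.T, v), a, b, c))   # {(5, -1, 3)}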


    Linearly independent row vectors

The rows or row vectors of a matrix are said to be linearly independent (LI) if no row vector belongs to the row space of the remaining row vectors, that is, if no row vector is a linear combination of the remaining row vectors. Equivalently, the row vectors are LI if their linear combination is the zero vector only when all the scalars are 0. Row vectors which are not LI are said to be linearly dependent (LD).

To determine the linear independence of the row vectors of a matrix, put the linear combination of the row vectors equal to the zero vector, and determine the values of the scalars. If all the scalars are necessarily 0, the vectors are LI.

    Ex. Test the linear independence of the row vectors of the matrix

P =
[  3  1  -2 ]
[  4  0   1 ]
[ -2  4  -3 ].

    Sol. We need to find three real numbers a, b and c such that

a[3, 1, -2] + b[4, 0, 1] + c[-2, 4, -3] = [0, 0, 0].

This gives the following system of linear equations:

3a + 4b - 2c = 0,

a + 4c = 0,

-2a + b - 3c = 0.

Here the augmented matrix is

[A : B] =
[  3  4  -2  0 ]   [ 1  0  0  0 ]
[  1  0   4  0 ] ~ [ 0  1  0  0 ]
[ -2  1  -3  0 ]   [ 0  0  1  0 ].

So we get a = 0, b = 0, c = 0. Thus, all the scalars are 0, which shows that the row vectors of the matrix P are LI.

    Ex. Test the linear independence of the row vectors of the matrix

P =
[ 3  1  -2 ]
[ 4  0   1 ]
[ 7  1  -1 ].

    Sol. We need to find three real numbers a, b and c such that

a[3, 1, -2] + b[4, 0, 1] + c[7, 1, -1] = [0, 0, 0].

This gives the following system of linear equations:

3a + 4b + 7c = 0,

a + c = 0,

-2a + b - c = 0.

Here the augmented matrix is

[A : B] =
[  3  4   7  0 ]   [ 1  0  1  0 ]
[  1  0   1  0 ] ~ [ 0  1  1  0 ]
[ -2  1  -1  0 ]   [ 0  0  0  0 ].


So we get a = -c and b = -c, where c is arbitrary; for example, a = -1, b = -1, c = 1 is a non-trivial solution. Thus, the row vectors of the matrix P are not LI. Notice that the third row in P is the sum of the first two rows. That is why we got the linear dependence of the row vectors of P.

Note: We can talk about the linear independence of the rows of a matrix by looking at the rank of the matrix as well. If the rank of a matrix is equal to the number of its rows, then the rows of the matrix are LI. At this stage, you should understand the interplay of row operations, RREF, rank, linear systems of equations and linear independence of rows.
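Note (computational aside): The rank criterion for linear independence is easy to apply numerically. A minimal sketch, assuming NumPy is available, for the two matrices P considered above:

    import numpy as np

    P1 = np.array([[3, 1, -2], [4, 0, 1], [-2, 4, -3]])
    P2 = np.array([[3, 1, -2], [4, 0, 1], [7, 1, -1]])

    print(np.linalg.matrix_rank(P1))   # 3 = number of rows, so the rows of P1 are LI
    print(np.linalg.matrix_rank(P2))   # 2 < 3, so the rows of P2 are LD (row3 = row1 + row2)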

    Homework: Do the problems of exercises 2.1 to 2.4 from the textbook.


    Chapter 3 (3.4)

    Eigenvalues and Eigenvectors

A real number λ is an eigenvalue of an n-square matrix A iff there exists a non-zero n-vector X such that AX = λX or (A - λIn)X = 0. The non-zero vector X is called an eigenvector of A corresponding to the eigenvalue λ. Since the non-zero vector X is a non-trivial solution of the homogeneous system (A - λIn)X = 0, we must have |A - λIn| = 0. This equation, known as the characteristic equation of A, yields the eigenvalues of A. So to find the eigenvalues of A, we solve the equation |A - λIn| = 0.

The eigenvectors of A corresponding to λ are the non-trivial solutions (X ≠ 0) of the homogeneous system (A - λIn)X = 0.

The set Eλ = {X : AX = λX} is known as the eigenspace of λ. Note that Eλ contains all the eigenvectors of A corresponding to the eigenvalue λ in addition to the vector X = 0, since A0 = 0 = λ0. Of course, by definition X = 0 is not an eigenvector of A.

Ex. Find the eigenvalues and eigenvectors of A =
[ 12  -51 ]
[  2  -11 ].

Sol. Here, the characteristic equation of A, that is, |A - λI2| = 0, reads as

| 12 - λ    -51    |
|   2     -11 - λ  | = 0.

This leads to a quadratic equation in λ given by

λ² - λ - 30 = 0.

Its roots are λ = 6, -5, the eigenvalues of A.

Now, the eigenvectors corresponding to λ = 6 are the non-trivial solutions X of the homogeneous system (A - 6I2)X = 0. So to find the eigenvectors of A corresponding to the eigenvalue λ = 6, we need to solve the homogeneous system:

[ 6  -51 ] [ x1 ]   [ 0 ]
[ 2  -17 ] [ x2 ] = [ 0 ].

Applying R1 → (1/6)R1, we get

[ 1  -17/2 ] [ x1 ]   [ 0 ]
[ 2   -17  ] [ x2 ] = [ 0 ].

Applying R2 → R2 - 2R1, we get

[ 1  -17/2 ] [ x1 ]   [ 0 ]
[ 0    0   ] [ x2 ] = [ 0 ].

So the system reduces to x1 - (17/2)x2 = 0. Letting x2 = a, we get x1 = (17/2)a. So

[x1, x2] = [(17/2)a, a] = (1/2)a[17, 2].

So the eigenvectors corresponding to λ = 6 are non-zero multiples of the vector [17, 2]. The eigenspace corresponding to λ = 6, therefore, is E6 = {a[17, 2] : a ∈ R}.

Likewise, to find the eigenvectors corresponding to λ = -5, we solve the homogeneous system (A + 5I2)X = 0, that is,

[ 17  -51 ] [ x1 ]   [ 0 ]
[  2   -6 ] [ x2 ] = [ 0 ].


Applying R1 → (1/17)R1 and R2 → (1/2)R2, we obtain

[ 1  -3 ] [ x1 ]   [ 0 ]
[ 1  -3 ] [ x2 ] = [ 0 ].

Applying R2 → R2 - R1, we have

[ 1  -3 ] [ x1 ]   [ 0 ]
[ 0   0 ] [ x2 ] = [ 0 ].

So the system reduces to x1 - 3x2 = 0. Letting x2 = a, we get x1 = 3a. So

[x1, x2] = [3a, a] = a[3, 1].

So the eigenvectors corresponding to λ = -5 are non-zero multiples of the vector [3, 1]. The eigenspace corresponding to λ = -5, therefore, is E-5 = {a[3, 1] : a ∈ R}.
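Note (computational aside): Eigenvalues and eigenvectors found by hand can be checked numerically. A minimal sketch, assuming NumPy is available; the numerical eigenvectors are returned as unit-length columns, so they appear as scalar multiples of [17, 2] and [3, 1]:

    import numpy as np

    A = np.array([[12., -51.],
                  [2., -11.]])

    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)    # the eigenvalues 6 and -5 (in some order)
    print(eigenvectors)   # columns proportional to [17, 2] and [3, 1]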

Ex. Find the eigenvalues and eigenvectors of A =
[ -4   8  -12 ]
[  6  -6   12 ]
[  6  -8   14 ].

Sol. Here, the characteristic equation of A, that is, |A - λI3| = 0, reads as

| -4 - λ     8      -12   |
|    6    -6 - λ     12   |
|    6      -8     14 - λ | = 0.

This leads to a cubic equation in λ given by

λ³ - 4λ² + 4λ = 0.

Its roots are λ = 0, 2, 2, the eigenvalues of A.

Let us first find the eigenvectors of A corresponding to λ = 0. For this, we need to find the non-zero solutions X of the homogeneous system (A - 0I3)X = 0, that is,

[ -4   8  -12 ] [ x1 ]   [ 0 ]
[  6  -6   12 ] [ x2 ] = [ 0 ]
[  6  -8   14 ] [ x3 ]   [ 0 ].

Using suitable row operations, we find

[ 1  0   1 ] [ x1 ]   [ 0 ]
[ 0  1  -1 ] [ x2 ] = [ 0 ]
[ 0  0   0 ] [ x3 ]   [ 0 ].

So the reduced system of equations is

x1 + x3 = 0,  x2 - x3 = 0.

Letting x3 = a, we get x1 = -a and x2 = a. So

[x1, x2, x3] = [-a, a, a] = a[-1, 1, 1].

Thus, the eigenvectors of A corresponding to λ = 0 are non-zero multiples of the vector X1 = [-1, 1, 1]. The eigenspace corresponding to λ = 0, therefore, is E0 = {aX1 : a ∈ R}.


Now, let us find the eigenvectors of A corresponding to λ = 2. For this, we need to find the non-zero solutions X of the homogeneous system (A - 2I3)X = 0, that is,

[ -6   8  -12 ] [ x1 ]   [ 0 ]
[  6  -8   12 ] [ x2 ] = [ 0 ]
[  6  -8   12 ] [ x3 ]   [ 0 ].

Using suitable row operations, we find

[ 1  -4/3  2 ] [ x1 ]   [ 0 ]
[ 0    0   0 ] [ x2 ] = [ 0 ]
[ 0    0   0 ] [ x3 ]   [ 0 ].

So the system reduces to

x1 - (4/3)x2 + 2x3 = 0.

Letting x2 = a and x3 = b, we get x1 = (4/3)a - 2b. So

[x1, x2, x3] = [(4/3)a - 2b, a, b] = [(4/3)a, a, 0] + [-2b, 0, b] = (1/3)a[4, 3, 0] + b[-2, 0, 1].

Thus, the eigenvectors corresponding to λ = 2 are non-trivial linear combinations of the vectors X2 = [4, 3, 0] and X3 = [-2, 0, 1]. So E2 = {aX2 + bX3 : a, b ∈ R} is the eigenspace corresponding to λ = 2.

Note: The algebraic multiplicity of an eigenvalue is defined as the number of times it repeats. In the above example, the eigenvalue λ = 2 repeats two times. So its algebraic multiplicity is 2. Also, we get two linearly independent eigenvectors X2 = [4, 3, 0] and X3 = [-2, 0, 1] corresponding to λ = 2. The following example shows that there may not exist as many linearly independent eigenvectors as the algebraic multiplicity of an eigenvalue.

Ex. If A =
[ 0  1  0 ]
[ 0  0  1 ]
[ 0  0  0 ],

then the eigenvalues of A are λ = 0, 0, 0. The eigenvectors corresponding to λ = 0 are non-zero multiples of the vector X = [1, 0, 0]. The eigenspace corresponding to λ = 0, therefore, is E0 = {a[1, 0, 0] : a ∈ R}. Please try this example yourself. Notice that there is only one linearly independent eigenvector X = [1, 0, 0] corresponding to the repeated eigenvalue (repeating thrice) λ = 0.

Note: One can easily prove the following properties of eigenvalues.
(i) The sum of the eigenvalues of a matrix A is equal to the trace of A, that is, the sum of the diagonal elements of A.
(ii) The product of the eigenvalues of a matrix A is equal to the determinant of A. It further implies that the determinant of a matrix vanishes iff at least one eigenvalue of the matrix is 0.
(iii) If λ is an eigenvalue of A, then λ^m is an eigenvalue of A^m, where m is any positive integer; 1/λ is an eigenvalue of the inverse of A, that is, A^(-1); and λ - k is an eigenvalue of A - kI, where k is any real number.
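Note (computational aside): Properties (i) and (ii) are easy to confirm for the matrix of the previous example. A minimal sketch, assuming NumPy is available:

    import numpy as np

    A = np.array([[-4., 8., -12.],
                  [6., -6., 12.],
                  [6., -8., 14.]])

    eigenvalues = np.linalg.eigvals(A)
    print(np.sum(eigenvalues), np.trace(A))         # both 4: sum of eigenvalues = trace
    print(np.prod(eigenvalues), np.linalg.det(A))   # both 0 (up to rounding): product = determinant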

    Diagonalization

A square matrix A is said to be similar to a matrix B if there exists a non-singular matrix P such that P^(-1)AP = B. In case B is a diagonal matrix, we say that A is a diagonalizable matrix. Thus, a square matrix A is diagonalizable if there exists a non-singular matrix P such that P^(-1)AP = D, where D is a diagonal matrix.


Suppose an n-square matrix A has n linearly independent eigenvectors X1, X2, ......, Xn corresponding to the eigenvalues λ1, λ2, ......, λn. Let P = [X1 X2 .... Xn]. Then we have

AP = [AX1 AX2 .... AXn] = [λ1X1 λ2X2 .... λnXn] = [X1 X2 .... Xn]
[ λ1   0  ...   0 ]
[  0  λ2  ...   0 ]
[ ...  ... ...  ... ]
[  0   0  ...  λn ]
= PD.

This shows that if we construct P from the eigenvectors of A, then A is diagonalizable, and P^(-1)AP = D has the eigenvalues of A at the diagonal places.

Note: If A has n different eigenvalues, then it can be proved that there exist n linearly independent eigenvectors of A, and consequently A is diagonalizable. However, there may exist n linearly independent eigenvectors even if A has repeated eigenvalues, as we have seen earlier. Such a matrix is also, of course, diagonalizable. In case A does not have n linearly independent eigenvectors, it is not diagonalizable.

Ex. If A =
[ 12  -51 ]
[  2  -11 ],

then P =
[ 17  3 ]
[  2  1 ]

and P^(-1)AP =
[ 6   0 ]
[ 0  -5 ]. (Verify!)

Ex. If A =
[ -4   8  -12 ]
[  6  -6   12 ]
[  6  -8   14 ],

then P =
[ 4  -2  -1 ]
[ 3   0   1 ]
[ 0   1   1 ]

and P^(-1)AP =
[ 2  0  0 ]
[ 0  2  0 ]
[ 0  0  0 ]. (Verify!)

Note: If A is a diagonalizable matrix, that is, P^(-1)AP = D or A = PDP^(-1), then for any positive integer n, we have A^n = PD^nP^(-1). For,

A² = (PDP^(-1))² = PDP^(-1)PDP^(-1) = PD²P^(-1).

Likewise, A³ = PD³P^(-1). So in general, A^n = PD^nP^(-1). This result can be utilized to evaluate powers of a diagonalizable matrix easily.

Ex. Determine A², where A =
[ -4   8  -12 ]
[  6  -6   12 ]
[  6  -8   14 ].

Sol. We have

P =
[ 4  -2  -1 ]
[ 3   0   1 ]
[ 0   1   1 ],

P^(-1) =
[  1  -1   2 ]
[  3  -4   7 ]
[ -3   4  -6 ]

and D =
[ 2  0  0 ]
[ 0  2  0 ]
[ 0  0  0 ].

So

A² = PD²P^(-1) =
[ 4  -2  -1 ] [ 4  0  0 ] [  1  -1   2 ]   [ -8   16  -24 ]
[ 3   0   1 ] [ 0  4  0 ] [  3  -4   7 ] = [ 12  -12   24 ]
[ 0   1   1 ] [ 0  0  0 ] [ -3   4  -6 ]   [ 12  -16   28 ].
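Note (computational aside): The diagonalization and the resulting A² can be verified directly. A minimal sketch, assuming NumPy is available:

    import numpy as np

    A = np.array([[-4., 8., -12.],
                  [6., -6., 12.],
                  [6., -8., 14.]])
    P = np.array([[4., -2., -1.],
                  [3., 0., 1.],
                  [0., 1., 1.]])
    D = np.diag([2., 2., 0.])

    P_inv = np.linalg.inv(P)
    print(np.allclose(P @ D @ P_inv, A))   # True: A = P D P^(-1)
    print(P @ D @ D @ P_inv)               # A^2, matching the matrix computed above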

    Homework: Do exercise 3.4 from the textbook.


Chapter 4 (4.1-4.7)

Before going through the concept of a real vector space, you must be familiar with the following axioms (mathematical statements without proof) satisfied by the real numbers.

1. Closure property of addition: The sum of any two real numbers is a real number, that is, a, b ∈ R implies a + b ∈ R. We also say that R is closed with respect to addition, or that real numbers satisfy the closure property with respect to addition.

2. Commutative property of addition: Real numbers are commutative in addition, that is, a + b = b + a for all a, b ∈ R.

3. Associative property of addition: Real numbers are associative in addition, that is, a + (b + c) = (a + b) + c for all a, b, c ∈ R.

4. Additive identity: The real number 0 is the additive identity of real numbers, that is, a + 0 = a = 0 + a for all a ∈ R.

5. Additive inverse: An additive inverse exists for every real number. Given any a ∈ R, we have -a ∈ R such that a + (-a) = 0 = (-a) + a. So -a is the additive inverse of a.

6. Closure property of multiplication: The product of any two real numbers is a real number, that is, a, b ∈ R implies a.b ∈ R. We also say that R is closed with respect to multiplication, or that real numbers satisfy the closure property with respect to multiplication.

7. Commutative property of multiplication: Real numbers are commutative in multiplication, that is, a.b = b.a for all a, b ∈ R.

8. Associative property of multiplication: Real numbers are associative in multiplication, that is, a.(b.c) = (a.b).c for all a, b, c ∈ R.

9. Multiplicative identity: The real number 1 is the multiplicative identity of real numbers, that is, a.1 = a = 1.a for all a ∈ R.

10. Multiplicative inverse: A multiplicative inverse exists for every non-zero real number. Given any non-zero a ∈ R, we have 1/a ∈ R such that a.(1/a) = 1 = (1/a).a. So 1/a is the multiplicative inverse of a.

11. Multiplication is distributive over addition. For any a, b, c ∈ R, we have
a.(b + c) = a.b + a.c (Left Distributive Law)
(b + c).a = b.a + c.a (Right Distributive Law).

Note: In real numbers, division is only right distributive over addition. For example,

(4 + 5)/7 = 4/7 + 5/7.

But

7/(4 + 5) ≠ 7/4 + 7/5.


    Real Vector Space

A non-empty set V is said to be a real vector space if there are defined two operations, called vector addition and scalar multiplication and denoted by ⊕ and ⊙ respectively, such that for all u, v, w ∈ V and a, b ∈ R, the following properties are satisfied:

1. u ⊕ v ∈ V (Closure property)

2. u ⊕ v = v ⊕ u (Commutative property)

3. (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w) (Associative property)

4. There exists some element 0 ∈ V such that u ⊕ 0 = u = 0 ⊕ u. (Existence of additive identity)

5. There exists -u ∈ V such that u ⊕ (-u) = 0 = (-u) ⊕ u. (Existence of additive inverse)

6. a ⊙ u ∈ V

7. a ⊙ (u ⊕ v) = a ⊙ u ⊕ a ⊙ v

8. (a + b) ⊙ u = a ⊙ u ⊕ b ⊙ u

9. (ab) ⊙ u = a ⊙ (b ⊙ u)

10. 1 ⊙ u = u

Note: Elements of the vector space V are called vectors while those of R are called scalars. In what follows, a vector space shall mean a real vector space.

Note: (i) Any scalar multiplied with the zero vector gives the zero vector, that is, a ⊙ 0 = 0. For,

a ⊙ 0 = a ⊙ 0 ⊕ 0 = a ⊙ 0 ⊕ a ⊙ 0 ⊕ (-(a ⊙ 0)) = a ⊙ (0 ⊕ 0) ⊕ (-(a ⊙ 0)) = a ⊙ 0 ⊕ (-(a ⊙ 0)) = 0.

For clarity, let us use + and . symbols in place of ⊕ and ⊙ respectively. Then we have

a.0 = a.0 + 0 = a.0 + a.0 + (-a.0) = a.(0 + 0) + (-a.0) = a.0 + (-a.0) = 0.

(ii) The scalar 0 multiplied with any vector gives the zero vector, that is, 0 ⊙ u = 0. For,

0 ⊙ u = (0 - 0) ⊙ u = 0 ⊙ u - 0 ⊙ u = 0.

0.u = (0 - 0).u = 0.u - 0.u = 0.

(iii) (-1) ⊙ u gives the additive inverse -u of u. For,

(-1) ⊙ u ⊕ u = (-1) ⊙ u ⊕ 1 ⊙ u = (-1 + 1) ⊙ u = 0 ⊙ u = 0.

(-1).u + u = (-1).u + 1.u = (-1 + 1).u = 0.u = 0.

Ex. The set R of all real numbers is a vector space with respect to the following operations:

u ⊕ v = u + v, (Vector Addition)

a ⊙ u = au, (Scalar Multiplication)


for all a, u, v ∈ R.

Sol. In this case, V = R, and all the properties of a vector space are easily verifiable using the axioms satisfied by the real numbers.

Note: The set V = {0}, carrying only one real number namely 0, is a real vector space with respect to the operations mentioned in the above example. Think! It is easy!

Ex. The set R² = R × R = {[x1, x2] : x1, x2 ∈ R} of all ordered pairs of real numbers is a vector space with respect to the following operations:

[x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2], (Vector Addition)

a ⊙ [x1, x2] = [ax1, ax2], (Scalar Multiplication)

for all a ∈ R and [x1, x2], [y1, y2] ∈ R².

Sol. Let u = [x1, x2], v = [y1, y2] and w = [z1, z2] be members of V = R², and let a, b be any two real numbers. Then we have the following properties:

1. Closure Property:
u ⊕ v = [x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2] ∈ R², since x1 + y1 and x2 + y2 are real numbers.

2. Commutative Property:
u ⊕ v = [x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2],
v ⊕ u = [y1, y2] ⊕ [x1, x2] = [y1 + x1, y2 + x2].
But [x1 + y1, x2 + y2] = [y1 + x1, y2 + x2] since real numbers are commutative in addition.
∴ u ⊕ v = v ⊕ u.

3. Associative Property:
(u ⊕ v) ⊕ w = ([x1, x2] ⊕ [y1, y2]) ⊕ [z1, z2] = [x1 + y1, x2 + y2] ⊕ [z1, z2] = [(x1 + y1) + z1, (x2 + y2) + z2],
u ⊕ (v ⊕ w) = [x1, x2] ⊕ ([y1, y2] ⊕ [z1, z2]) = [x1, x2] ⊕ [y1 + z1, y2 + z2] = [x1 + (y1 + z1), x2 + (y2 + z2)].
Since real numbers are associative in addition, we have
[(x1 + y1) + z1, (x2 + y2) + z2] = [x1 + (y1 + z1), x2 + (y2 + z2)].
It implies that (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w).

4. Existence of identity:
There exists 0 = [0, 0] ∈ R² such that
u ⊕ 0 = [x1, x2] ⊕ [0, 0] = [x1 + 0, x2 + 0] = [x1, x2] = u,
0 ⊕ u = [0, 0] ⊕ [x1, x2] = [0 + x1, 0 + x2] = [x1, x2] = u.
So u ⊕ 0 = u = 0 ⊕ u. Therefore [0, 0] is the additive identity in R².

5. Existence of inverse: There exists -u = [-x1, -x2] ∈ R² such that
u ⊕ (-u) = [x1, x2] ⊕ [-x1, -x2] = [x1 - x1, x2 - x2] = [0, 0] = 0,
(-u) ⊕ u = [-x1, -x2] ⊕ [x1, x2] = [-x1 + x1, -x2 + x2] = [0, 0] = 0.
So u ⊕ (-u) = 0 = (-u) ⊕ u. This shows that -u = [-x1, -x2] is the additive inverse of u = [x1, x2] in R².

6. a ⊙ u = a ⊙ [x1, x2] = [ax1, ax2] ∈ R².


7. a ⊙ (u ⊕ v) = a ⊙ ([x1, x2] ⊕ [y1, y2]) = a ⊙ [x1 + y1, x2 + y2] = [a(x1 + y1), a(x2 + y2)] = [ax1 + ay1, ax2 + ay2],
a ⊙ u ⊕ a ⊙ v = a ⊙ [x1, x2] ⊕ a ⊙ [y1, y2] = [ax1, ax2] ⊕ [ay1, ay2] = [ax1 + ay1, ax2 + ay2].
∴ a ⊙ (u ⊕ v) = a ⊙ u ⊕ a ⊙ v.

8. (a + b) ⊙ u = (a + b) ⊙ [x1, x2] = [(a + b)x1, (a + b)x2] = [ax1 + bx1, ax2 + bx2],
a ⊙ u ⊕ b ⊙ u = a ⊙ [x1, x2] ⊕ b ⊙ [x1, x2] = [ax1, ax2] ⊕ [bx1, bx2] = [ax1 + bx1, ax2 + bx2].
∴ (a + b) ⊙ u = a ⊙ u ⊕ b ⊙ u.

9. (ab) ⊙ u = (ab) ⊙ [x1, x2] = [(ab)x1, (ab)x2],
a ⊙ (b ⊙ u) = a ⊙ (b ⊙ [x1, x2]) = a ⊙ [bx1, bx2] = [a(bx1), a(bx2)].
But [(ab)x1, (ab)x2] = [a(bx1), a(bx2)] since real numbers are associative in multiplication.
So (ab) ⊙ u = a ⊙ (b ⊙ u).

10. 1 ⊙ u = 1 ⊙ [x1, x2] = [1.x1, 1.x2] = [x1, x2] = u.

Hence R² is a real vector space.

Note: In general, the set Rⁿ = {[x1, x2, ......, xn] : xi ∈ R, i = 1, 2, ..., n} of all ordered n-tuples of real numbers is a vector space with respect to the following operations:

[x1, x2, ...., xn] ⊕ [y1, y2, ......, yn] = [x1 + y1, x2 + y2, ....., xn + yn], (Vector Addition)

a ⊙ [x1, x2, ....., xn] = [ax1, ax2, ......, axn], (Scalar Multiplication)

for all a ∈ R and [x1, x2, ....., xn], [y1, y2, ......, yn] ∈ Rⁿ.

Ex. The set Mmn = {[aij]m×n : aij ∈ R} of all m × n matrices with real entries is a vector space with respect to the following operations:

[aij]m×n ⊕ [bij]m×n = [aij + bij]m×n, (Vector Addition)

a ⊙ [aij]m×n = [aaij]m×n, (Scalar Multiplication)

for all a ∈ R and [aij]m×n, [bij]m×n ∈ Mmn. Notice that vector addition is the usual addition of matrices, and scalar multiplication is the usual scalar multiplication in matrices.

Sol. Let u = [aij]m×n, v = [bij]m×n and w = [cij]m×n be members of V = Mmn, and let a, b be any two real numbers. Then we have the following properties:

1. Closure Property:
u ⊕ v = [aij]m×n ⊕ [bij]m×n = [aij + bij]m×n ∈ Mmn.

2. Commutative Property:
u ⊕ v = [aij]m×n ⊕ [bij]m×n = [aij + bij]m×n,
v ⊕ u = [bij]m×n ⊕ [aij]m×n = [bij + aij]m×n.
Since aij + bij = bij + aij, we have u ⊕ v = v ⊕ u.

3. Associative Property:
(u ⊕ v) ⊕ w = ([aij]m×n ⊕ [bij]m×n) ⊕ [cij]m×n = [aij + bij]m×n ⊕ [cij]m×n = [(aij + bij) + cij]m×n,
u ⊕ (v ⊕ w) = [aij]m×n ⊕ ([bij]m×n ⊕ [cij]m×n) = [aij]m×n ⊕ [bij + cij]m×n = [aij + (bij + cij)]m×n.
Since (aij + bij) + cij = aij + (bij + cij), we get (u ⊕ v) ⊕ w = u ⊕ (v ⊕ w).

4. Existence of identity:
There exists 0 = [0]m×n ∈ Mmn such that
u ⊕ 0 = [aij]m×n ⊕ [0]m×n = [aij + 0]m×n = [aij]m×n = u,
0 ⊕ u = [0]m×n ⊕ [aij]m×n = [0 + aij]m×n = [aij]m×n = u.
So u ⊕ 0 = u = 0 ⊕ u. Therefore, [0]m×n, the null matrix of order m × n, is the additive identity in Mmn.


5. Existence of inverse: There exists -u = [-aij]m×n ∈ Mmn such that
u ⊕ (-u) = [aij]m×n ⊕ [-aij]m×n = [aij - aij]m×n = [0]m×n = 0,
(-u) ⊕ u = [-aij]m×n ⊕ [aij]m×n = [-aij + aij]m×n = [0]m×n = 0.
So u ⊕ (-u) = 0 = (-u) ⊕ u. This shows that -u = [-aij]m×n is the additive inverse of u = [aij]m×n in Mmn.

6. a ⊙ u = a ⊙ [aij]m×n = [aaij]m×n ∈ Mmn.

7. a ⊙ (u ⊕ v) = a ⊙ ([aij]m×n ⊕ [bij]m×n) = a ⊙ [aij + bij]m×n = [a(aij + bij)]m×n = [aaij + abij]m×n,
a ⊙ u ⊕ a ⊙ v = a ⊙ [aij]m×n ⊕ a ⊙ [bij]m×n = [aaij]m×n ⊕ [abij]m×n = [aaij + abij]m×n.
∴ a ⊙ (u ⊕ v) = a ⊙ u ⊕ a ⊙ v.

8. (a + b) ⊙ u = (a + b) ⊙ [aij]m×n = [(a + b)aij]m×n = [aaij + baij]m×n,
a ⊙ u ⊕ b ⊙ u = a ⊙ [aij]m×n ⊕ b ⊙ [aij]m×n = [aaij]m×n ⊕ [baij]m×n = [aaij + baij]m×n.
∴ (a + b) ⊙ u = a ⊙ u ⊕ b ⊙ u.

9. (ab) ⊙ u = (ab) ⊙ [aij]m×n = [(ab)aij]m×n,
a ⊙ (b ⊙ u) = a ⊙ (b ⊙ [aij]m×n) = a ⊙ [baij]m×n = [a(baij)]m×n.
But [(ab)aij]m×n = [a(baij)]m×n since real numbers are associative in multiplication.
So (ab) ⊙ u = a ⊙ (b ⊙ u).

10. 1 ⊙ u = 1 ⊙ [aij]m×n = [1.aij]m×n = [aij]m×n = u.

Hence Mmn is a real vector space.

Ex. The set Pn = {a0 + a1x + .... + anx^n : ai ∈ R} of all polynomials in x of degree at most n with real coefficients is a vector space with respect to the following operations:

(a0 + a1x + .... + anx^n) ⊕ (b0 + b1x + .... + bnx^n) = (a0 + b0) + (a1 + b1)x + .... + (an + bn)x^n, (Vector Addition)

a ⊙ (a0 + a1x + .... + anx^n) = aa0 + aa1x + .... + aanx^n, (Scalar Multiplication)

for all a ∈ R and a0 + a1x + .... + anx^n, b0 + b1x + .... + bnx^n ∈ Pn. Notice that vector addition is the usual addition of polynomials, and scalar multiplication is the usual scalar multiplication of polynomials.

Sol. Please do yourself following the procedure given in the previous example(s).

Ex. The set of all well defined real valued functions on [0, 1] is a vector space with respect to the following operations:

f ⊕ g = f + g, (Vector Addition)

a ⊙ f = af, (Scalar Multiplication)

for all a ∈ R and all functions f, g in the set.

Sol. Please do yourself following the procedure given in the previous example(s).


Ex. Show that the set of all well defined real valued functions f on [0, 1] with f(1/2) = 1 is not a vector space with respect to the following operations:

f ⊕ g = f + g, (Vector Addition)

a ⊙ f = af, (Scalar Multiplication)

for all a ∈ R and all f, g in the set.

Sol. Let f, g be members of the set. Then by the definition of the set, we have f(1/2) = 1 and g(1/2) = 1.
Next, by the definition of vector addition, we find
(f ⊕ g)(1/2) = (f + g)(1/2) = f(1/2) + g(1/2) = 1 + 1 = 2 ≠ 1.
So f ⊕ g is not in the set. It means vector addition fails here. Consequently, it is not a vector space.

Ex. The set R⁺ of positive real numbers is a vector space with respect to the operations:

u ⊕ v = uv (Vector Addition)

a ⊙ u = u^a (Scalar Multiplication)

for all a ∈ R and u, v ∈ R⁺. Here the vector addition of u and v is defined by their product, while the scalar multiplication of a and u is defined by u raised to the power a. So be careful while verifying the properties.

Sol. Let u, v and w be members of V = R⁺, and let a, b be any two real numbers. Note that u, v and w are positive real numbers while a and b are any real numbers. Then we have the following properties:

1. Closure Property:
u ⊕ v = uv ∈ R⁺.

2. Commutative Property:
u ⊕ v = uv = vu = v ⊕ u.

3. Associative Property:
(u ⊕ v) ⊕ w = (uv) ⊕ w = (uv)w = u(vw) = u ⊕ (vw) = u ⊕ (v ⊕ w).

4. Existence of identity:
1 ∈ R⁺ (denoting the positive real number 1 by 0) such that
u ⊕ 0 = u ⊕ 1 = u.1 = u,
0 ⊕ u = 1 ⊕ u = 1.u = u.
So u ⊕ 0 = u = 0 ⊕ u. Therefore, the positive real number 1 is the additive identity in R⁺.

5. Existence of inverse:
Since u is a positive real number, 1/u is also a positive real number (denoting 1/u by -u) such that
u ⊕ (-u) = u ⊕ (1/u) = u.(1/u) = 1 = 0,
(-u) ⊕ u = (1/u) ⊕ u = (1/u).u = 1 = 0.
So u ⊕ (-u) = 0 = (-u) ⊕ u. This shows that -u = 1/u is the additive inverse of u in R⁺.

6. a ⊙ u = u^a ∈ R⁺.

7. a ⊙ (u ⊕ v) = a ⊙ (uv) = (uv)^a = u^a v^a = u^a ⊕ v^a = a ⊙ u ⊕ a ⊙ v.

8. (a + b) ⊙ u = u^(a+b) = u^a u^b = u^a ⊕ u^b = a ⊙ u ⊕ b ⊙ u.

9. (ab) ⊙ u = u^(ab) = (u^b)^a = a ⊙ (u^b) = a ⊙ (b ⊙ u).


10. 1 ⊙ u = u^1 = u.

Hence R⁺ is a real vector space.
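Note (computational aside): Because the operations on R⁺ are unusual, it is instructive to test a few of the axioms on sample numbers. A minimal sketch in Python; add and smul are hypothetical helper names standing for ⊕ and ⊙:

    def add(u, v):      # vector addition on R+:  u (+) v = u * v
        return u * v

    def smul(a, u):     # scalar multiplication on R+:  a (.) u = u ** a
        return u ** a

    u, v, a, b = 3.0, 5.0, 2.0, -1.5

    print(add(u, v) == add(v, u))                                      # commutativity
    print(abs(smul(a + b, u) - add(smul(a, u), smul(b, u))) < 1e-12)   # property 8
    print(add(u, 1.0) == u)                                            # 1 acts as the zero vector
    print(add(u, 1.0 / u))                                             # 1.0: the additive inverse of u is 1/u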

Ex. Show that the set R of real numbers is a vector space with respect to the operations:

u ⊕ v = (u^5 + v^5)^(1/5) (Vector Addition)

a ⊙ u = a^(1/5) u (Scalar Multiplication)

for all a, u, v ∈ R. Further, the principal fifth root is to be considered in both the operations.

Sol. Let u, v and w be members of V = R, and let a, b be any two real numbers. Then we have the following properties:

1. Closure Property:
u ⊕ v = (u^5 + v^5)^(1/5) ∈ R.

2. Commutative Property:
u ⊕ v = (u^5 + v^5)^(1/5) = (v^5 + u^5)^(1/5) = v ⊕ u.

3. Associative Property:
(u ⊕ v) ⊕ w = (u^5 + v^5)^(1/5) ⊕ w = ((u^5 + v^5) + w^5)^(1/5) = (u^5 + (v^5 + w^5))^(1/5) = u ⊕ (v^5 + w^5)^(1/5) = u ⊕ (v ⊕ w).

4. Existence of identity:
0 ∈ R (denoting the real number 0 by 0) such that
u ⊕ 0 = (u^5 + 0^5)^(1/5) = u,
0 ⊕ u = (0^5 + u^5)^(1/5) = u.
So u ⊕ 0 = u = 0 ⊕ u. Therefore, the real number 0 is the additive identity in R.

5. Existence of inverse:
-u ∈ R such that
u ⊕ (-u) = (u^5 - u^5)^(1/5) = 0 = 0,
(-u) ⊕ u = (-u^5 + u^5)^(1/5) = 0 = 0.
So u ⊕ (-u) = 0 = (-u) ⊕ u. This shows that -u is the additive inverse of u in R.

6. a ⊙ u = a^(1/5) u ∈ R.

7. a ⊙ (u ⊕ v) = a ⊙ (u^5 + v^5)^(1/5) = a^(1/5)(u^5 + v^5)^(1/5) = (au^5 + av^5)^(1/5) = (a^(1/5)u) ⊕ (a^(1/5)v) = a ⊙ u ⊕ a ⊙ v.

8. (a + b) ⊙ u = (a + b)^(1/5) u = (au^5 + bu^5)^(1/5) = (a^(1/5)u) ⊕ (b^(1/5)u) = a ⊙ u ⊕ b ⊙ u.

9. (ab) ⊙ u = (ab)^(1/5) u = (a^(1/5) b^(1/5)) u = a^(1/5)(b^(1/5) u) = a^(1/5)(b ⊙ u) = a ⊙ (b ⊙ u).

10. 1 ⊙ u = 1^(1/5) u = u.

Hence R is a real vector space.

Ex. Show that the set R of real numbers is not a vector space with respect to the operations:

u ⊕ v = (u^5 + v^5)^(1/5) (Vector Addition)

a ⊙ u = au (Scalar Multiplication)

for all a, u, v ∈ R. Further, the principal fifth root is to be considered in the vector addition.

Sol. Property 8 is not satisfied. Please do yourself.

    Homework: Do exercise 4.1 from the textbook.


    Subspace

If V is a vector space and W is a subset of V such that W is also a vector space under the same operations as in V, then W is called a subspace of V.

Ex. The set W = {[x1, 0] : x1 ∈ R} is a vector space under the operations of vector addition and scalar multiplication as we considered earlier in R². Also, W is a subset of R². So W is a subspace of R².

Note: For checking W to be a subspace of a vector space V, we need to verify the 10 properties of a vector space in W. The following theorem suggests that verification of two properties is enough.

Theorem: A subset W of a vector space V is a subspace of V if and only if W is closed with respect to vector addition and scalar multiplication.

Proof: First consider that W is a subspace of V. Then by the definition of subspace, W is a vector space. So by the definition of vector space, W is closed with respect to vector addition and scalar multiplication.

Conversely, assume that the subset W of the vector space V is closed with respect to vector addition and scalar multiplication. So u ⊕ v ∈ W and a ⊙ u ∈ W for all u, v ∈ W and a ∈ R. Choosing a = 0, we get 0 ⊙ u ∈ W. But 0 ⊙ u = 0. So 0 ∈ W, that is, the additive identity exists in W. Again, choosing a = -1, we get (-1) ⊙ u = -u ∈ W. So the additive inverse exists in W. The commutative and associative properties with regard to vector addition, and all the properties related to scalar multiplication, follow in W for two reasons: (i) members of W are members from V, and (ii) V is a vector space.

Note: It is also easy to show that a subset W of a vector space V is a subspace of V if and only if a ⊙ u ⊕ b ⊙ v ∈ W for all a, b ∈ R and u, v ∈ W.

Ex. Show that the set W = {[x1, 0] : x1 ∈ R} is a subspace of R².

Sol. Let u = [x1, 0], v = [y1, 0] ∈ W and a ∈ R. Then we have

u ⊕ v = [x1, 0] ⊕ [y1, 0] = [x1 + y1, 0] ∈ W,

a ⊙ u = a ⊙ [x1, 0] = [ax1, 0] ∈ W.

This shows that W is closed with respect to vector addition and scalar multiplication. Hence W is a subspace of R².

Ex. The set W = {(a, b, a + 2b) : a, b ∈ R} is a subspace of R³.

    Sol. Please do yourself.

Ex. Verify whether W = {[x, y] : x - y = 0, x, y ∈ R} is a subspace of R².

Sol. Let u = [x1, x2], v = [y1, y2] ∈ W and a ∈ R. Then x1 - x2 = 0 and y1 - y2 = 0. We have

u ⊕ v = [x1, x2] ⊕ [y1, y2] = [x1 + y1, x2 + y2] ∈ W,

since (x1 + y1) - (x2 + y2) = x1 - x2 + y1 - y2 = 0.

a ⊙ u = a ⊙ [x1, x2] = [ax1, ax2] ∈ W,


since ax1 - ax2 = a(x1 - x2) = a.0 = 0. This shows that W is closed with respect to vector addition and scalar multiplication. Hence W is a subspace of R².

Ex. Verify whether W = {[x, y] : y = x², x, y ∈ R} is a subspace of R².

Sol. We have [1, 1], [2, 4] ∈ W but

[1, 1] ⊕ [2, 4] = [1 + 2, 1 + 4] = [3, 5] ∉ W,

since 5 ≠ 3². So W is not a subspace of R².

Ex. Verify whether W = {[[p, q], [r, s]] : ps - qr ≠ 0, p, q, r, s ∈ R} is a subspace of M22.

Sol. We have

u =
[ 1  0 ]
[ 0  1 ],

v =
[ -1   0 ]
[  0  -1 ]

in W, but

u ⊕ v =
[ 1  0 ]   [ -1   0 ]   [ 1 - 1  0 + 0 ]   [ 0  0 ]
[ 0  1 ] ⊕ [  0  -1 ] = [ 0 + 0  1 - 1 ] = [ 0  0 ]

∉ W, since the zero matrix is a singular matrix. So W is not a subspace of M22.

Ex. If A is an n-square matrix and λ is an eigenvalue of A, then the eigenspace Eλ = {X : AX = λX} of λ is a subspace of Rⁿ.

Sol. Let a ∈ R and X1, X2 ∈ Eλ. Then

A(X1 + X2) = AX1 + AX2 = λX1 + λX2 = λ(X1 + X2). So X1 + X2 ∈ Eλ.

Also, A(aX1) = aAX1 = aλX1 = λ(aX1). So aX1 ∈ Eλ.

Thus, Eλ is a subspace of Rⁿ.

Note: If V is any vector space, then {0} and V are its trivial subspaces. Any other subspace is called a proper subspace of V.

Ex. Let W1 and W2 be two subspaces of a vector space V.
(i) Show that W1 ∩ W2 is a subspace of V.
(ii) Give an example to show that W1 ∪ W2 need not be a subspace of V.
(iii) Show that W1 ∪ W2 is a subspace of V iff either W1 ⊆ W2 or W2 ⊆ W1.

Sol. (i) Since W1 and W2 are subspaces of V, at least the zero vector 0 lies in W1 ∩ W2. So W1 ∩ W2 ≠ ∅. Let u, v ∈ W1 ∩ W2, and a, b ∈ R. Then a ⊙ u ⊕ b ⊙ v ∈ W1 and a ⊙ u ⊕ b ⊙ v ∈ W2, since W1 and W2 both are subspaces of V. It follows that a ⊙ u ⊕ b ⊙ v ∈ W1 ∩ W2. This shows that W1 ∩ W2 is a subspace of V.

(ii) W1 = {[x1, 0] : x1 ∈ R} and W2 = {[0, x2] : x2 ∈ R} both are subspaces of R². Also, [1, 0], [0, 1] ∈ W1 ∪ W2. But [1, 0] ⊕ [0, 1] = [1 + 0, 0 + 1] = [1, 1] ∉ W1 ∪ W2. So W1 ∪ W2 is not closed with respect to vector addition, and consequently it is not a subspace of R².

(iii) If W1 ⊆ W2 or W2 ⊆ W1, then W1 ∪ W2 = W2 or W1 ∪ W2 = W1. But W1 and W2 both are subspaces of V. Thus, in both cases W1 ∪ W2 is a subspace of V.


Conversely, assume that W1 ∪ W2 is a subspace of V. We need to prove that either W1 ⊆ W2 or W2 ⊆ W1. On the contrary, assume that neither W1 ⊆ W2 nor W2 ⊆ W1. So there must exist some u ∈ W1 and v ∈ W2 such that u ∉ W2 and v ∉ W1.

Now u, v ∈ W1 ∪ W2. So u ⊕ v ∈ W1 ∪ W2, by the closure property of vector addition in W1 ∪ W2. It implies that either u ⊕ v ∈ W1 or u ⊕ v ∈ W2. Let u ⊕ v ∈ W1. Also, u ∈ W1, so -u ∈ W1. Then by the closure property of vector addition in W1, we have (-u) ⊕ (u ⊕ v) = v ∈ W1, a contradiction since v ∉ W1. The case u ⊕ v ∈ W2 similarly gives u ∈ W2, again a contradiction. Thus, either W1 ⊆ W2 or W2 ⊆ W1.

    Note: Hereafter, we shall use the symbols + and . for vector addition and scalar multiplication respectively in all vector spaces.

    Homework: Do exercise 4.2 from the textbook.

    Span of a Set

Let S be any subset of a vector space V. Then the set of all linear combinations of finite numbers of members of S is called the span of S, and is denoted by span(S) or L(S). Therefore, we have

L(S) = span(S) = {a1v1 + a2v2 + ........ + anvn : ai ∈ R, vi ∈ S, i = 1, 2, ....., n}.

Ex. If S = {[1, 0], [0, 1]}, then L(S) = {a(1, 0) + b(0, 1) : a, b ∈ R} = {(a, b) : a, b ∈ R} = R².

Note: Recall that the row space of a matrix is the set of all sums of scalar multiples of the row vectors of the matrix. So the row space of a matrix is nothing but the span of its row vectors. Also, we know that the row spaces of two row equivalent matrices are the same. This fact can be used to simplify the span of subsets of the vector spaces Rⁿ, Pn and Mmn, as illustrated in the following examples.

    Ex. Show that span of the set S= {[2, 3, 4], [1, 5, 7], [3, 11, 13]} is R3.

Sol. By definition, the span of the given set S reads as

L(S) = {a[2, 3, 4] + b[1, 5, 7] + c[3, 11, 13] : a, b, c ∈ R} = {[2a + b + 3c, 3a + 5b + 11c, 4a + 7b + 13c] : a, b, c ∈ R}.

For the simplified span, we use suitable row operations, and find

[ 2   3   4 ]   [ 1  0  0 ]
[ 1   5   7 ] ~ [ 0  1  0 ]
[ 3  11  13 ]   [ 0  0  1 ].

∴ L(S) = {a[1, 0, 0] + b[0, 1, 0] + c[0, 0, 1] : a, b, c ∈ R} = {[a, b, c] : a, b, c ∈ R} = R³.
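Note (computational aside): The row-reduction shortcut used above for simplifying a span can be reproduced as follows. A minimal sketch, assuming SymPy is available:

    from sympy import Matrix

    S = Matrix([[2, 3, 4],
                [1, 5, 7],
                [3, 11, 13]])

    rref_S, _ = S.rref()
    print(rref_S)   # the 3 x 3 identity matrix, so the span of the three row vectors is all of R^3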

Ex. Find the simplified span of the set S = {x³ - 1, x² - x, x - 1} in P3.

Sol. By definition, the span of the given set S reads as

L(S) = {a(x³ - 1) + b(x² - x) + c(x - 1) : a, b, c ∈ R} = {ax³ + bx² + (c - b)x - a - c : a, b, c ∈ R}.

For the simplified span, we write the coefficients of the polynomials as row vectors, and then use suitable row operations. We find

[ 1  0   0  -1 ]   [ 1  0  0  -1 ]
[ 0  1  -1   0 ] ~ [ 0  1  0  -1 ]
[ 0  0   1  -1 ]   [ 0  0  1  -1 ].


∴ L(S) = {a(x³ - 1) + b(x² - 1) + c(x - 1) : a, b, c ∈ R} = {ax³ + bx² + cx - a - b - c : a, b, c ∈ R}.

Ex. Find the simplified span of the set S = { [1  1; 0  0], [0  0; 1  1], [1  0; 0  1] } in M22 (rows of each matrix separated by a semicolon).

Sol. By definition, the span of the given set S reads as

L(S) = { a[1  1; 0  0] + b[0  0; 1  1] + c[1  0; 0  1] : a, b, c ∈ R } = { [a + c   a; b   b + c] : a, b, c ∈ R }.

For the simplified span, we write each matrix as a continuous row vector (writing both rows of each matrix in one row), and then use suitable row operations. We find

[ 1  1  0  0 ]   [ 1  0  0   1 ]
[ 0  0  1  1 ] ~ [ 0  1  0  -1 ]
[ 1  0  0  1 ]   [ 0  0  1   1 ].

∴ L(S) = { a[1  0; 0  1] + b[0  1; 0  -1] + c[0  0; 1  1] : a, b, c ∈ R } = { [a   b; c   a - b + c] : a, b, c ∈ R }.

Ex. The span of the empty set is defined to be the singleton set containing the zero vector, that is, span(∅) = {0}.

Theorem: Let S be a subset of a vector space V. Then prove the following:
(i) L(S) is a subset of V.
(ii) L(S) is a subspace of V.
(iii) L(S) is the minimal subspace of V containing S.

Proof: (i) Let u ∈ L(S). Then

u = c1u1 + c2u2 + ..... + cmum

for some ci ∈ R and ui ∈ S, where i = 1, 2, ..., m. Now, c1 ∈ R and u1 ∈ S ⊆ V. So c1u1 ∈ V, by property 6 (scalar multiplication) of a vector space. Likewise c2u2 ∈ V. It follows that c1u1 + c2u2 ∈ V, by property 1 (vector addition) of a vector space. Thus, repeated use of the scalar multiplication and vector addition properties of a vector space yields

u = c1u1 + c2u2 + ..... + cmum ∈ V.

Therefore, L(S) is a subset of V.

(ii) Let u, v ∈ L(S) and a ∈ R. Then u and v both are linear combinations of a finite number of members of S:

u = c1u1 + c2u2 + ..... + cmum,  v = d1v1 + d2v2 + ....... + dnvn,

for some ci, dj ∈ R and ui, vj ∈ S, where i = 1, 2, ..., m and j = 1, 2, ...., n. It follows that

u + v = c1u1 + c2u2 + ..... + cmum + d1v1 + d2v2 + ....... + dnvn ∈ L(S),

au = (ac1)u1 + (ac2)u2 + ..... + (acm)um ∈ L(S).

This shows that L(S) is closed with respect to vector addition and scalar multiplication. Also, by part (i), L(S) is a subset of V. So L(S) is a subspace of V.

    https://sites.google.com/site/sureshkumaryd/https://sites.google.com/site/sureshkumaryd/
  • 7/24/2019 new maths 2.pdf

    28/39

    Linear Algebra Dr. Suresh Kumar, BITS-Pilani 28

(iii) For any u ∈ S, we can write u = 1.u. So every member of S can be written as a linear combination of members of S. So S ⊆ L(S). In part (ii), we have shown that L(S) is a subspace of V. To show that L(S) is the minimal subspace of V containing S, it remains to prove that L(S) ⊆ W, where W is any subspace of V containing S.

Let u ∈ L(S). Then

u = c1u1 + c2u2 + ..... + cmum

for some ci ∈ R and ui ∈ S, where i = 1, 2, ..., m. Now W contains S. Also, W being a subspace of V is a vector space. So by the properties of scalar multiplication and vector addition, it follows that

u = c1u1 + c2u2 + ..... + cmum ∈ W.

Thus, L(S) ⊆ W. This completes the proof.

    Homework: Do exercise 4.3 from the textbook.

    Linear Independence

A finite non-empty subset S = {v1, v2, ......, vn} of a vector space V is said to be linearly dependent (LD) if and only if there exist real numbers a1, a2, ......, an, not all zero, such that a1v1 + a2v2 + .......... + anvn = 0.

If S is not LD, that is, a1v1 + a2v2 + .......... + anvn = 0 implies a1 = 0, a2 = 0, ......, an = 0, then S is said to be linearly independent (LI).

Ex. The set S = {[1, 0], [0, 1]} is LI in R².

Sol. Here we have two vectors v1 = [1, 0] and v2 = [0, 1]. To check LI/LD, we let

a1v1 + a2v2 = 0

⇒ a1[1, 0] + a2[0, 1] = [0, 0]

⇒ [a1, 0] + [0, a2] = [0, 0]

⇒ [a1, a2] = [0, 0]

⇒ a1 = 0, a2 = 0.

Thus, the set S = {[1, 0], [0, 1]} is LI in R².

Ex. The set S = {[1, 2], [2, 4]} is LD in R².

Sol. Here we have two vectors v1 = [1, 2] and v2 = [2, 4]. To check LI/LD, we let

a1v1 + a2v2 = 0

⇒ a1[1, 2] + a2[2, 4] = [0, 0]

⇒ [a1, 2a1] + [2a2, 4a2] = [0, 0]

⇒ [a1 + 2a2, 2a1 + 4a2] = [0, 0]

⇒ a1 + 2a2 = 0, 2a1 + 4a2 = 0

⇒ a1 + 2a2 = 0.


We see a non-trivial solution, a1 = 2, a2 = −1. Thus, the set S = {[1, 2], [2, 4]} is LD in R2.

Note: In fact, a1 + 2a2 = 0 has infinitely many non-trivial solutions. But for LD, the existence of one non-trivial solution is sufficient.

Theorem: Two vectors in a vector space are LD if and only if one vector is a scalar multiple of the other.

Proof: Let v1 and v2 be two vectors of a vector space V. Suppose v1 and v2 are LD. Then there exist real numbers a1 and a2 such that at least one of a1 and a2 is non-zero (say a1 ≠ 0), and

a1v1 + a2v2 = 0.

Since a1 ≠ 0, we have

v1 = −(a2/a1)v2.

This shows that v1 is a scalar multiple of v2.

Conversely, assume that v1 is a scalar multiple of v2, that is, v1 = αv2 for some real number α. Then we have

(1)v1 + (−α)v2 = 0.

We see that this linear combination of v1 and v2 is 0, where the scalar 1 with v1 is non-zero. So v1 and v2 are LD.

Ex. Verify whether the set S = {[3, 1, −1], [−5, −2, 2], [2, 2, −1]} is LI in R3.

Sol. Here we have three vectors v1 = [3, 1, −1], v2 = [−5, −2, 2] and v3 = [2, 2, −1]. To check LI/LD, we let

a1v1 + a2v2 + a3v3 = 0
⇒ a1[3, 1, −1] + a2[−5, −2, 2] + a3[2, 2, −1] = [0, 0, 0]
⇒ [3a1 − 5a2 + 2a3, a1 − 2a2 + 2a3, −a1 + 2a2 − a3] = [0, 0, 0].

This gives the following homogeneous system of equations:

3a1 − 5a2 + 2a3 = 0,
a1 − 2a2 + 2a3 = 0,
−a1 + 2a2 − a3 = 0.

Here the augmented matrix is

[A : B] =
[  3  -5   2 | 0 ]
[  1  -2   2 | 0 ]
[ -1   2  -1 | 0 ]

Using suitable row transformations, we find

[A : B] ~
[ 1  0  0 | 0 ]
[ 0  1  0 | 0 ]
[ 0  0  1 | 0 ]


So we get the trivial solution, a1 = 0, a2 = 0 and a3 = 0. Thus, the set S = {[3, 1, −1], [−5, −2, 2], [2, 2, −1]} is LI in R3.

Note: A row reduction approach can also be applied to test the linear independence of vectors of Rn. Just write the given vectors as the rows of a matrix, say A, and find the rank of A. If the rank of A is equal to the number of rows (vectors), then the given vectors are LI. For example, consider the set S = {[3, 1, −1], [−5, −2, 2], [2, 2, −1]}. The matrix with the vectors of S as its rows reads as

A =
[  3   1  -1 ]
[ -5  -2   2 ]
[  2   2  -1 ]

Using suitable row operations, we find

A ~
[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  1 ]

So the rank of A is 3, which equals the number of vectors. It implies that the set S = {[3, 1, −1], [−5, −2, 2], [2, 2, −1]} is LI.
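This rank test is easy to carry out computationally as well. A minimal sketch in Python, assuming SymPy is available and using the vectors of S as written above:

from sympy import Matrix

# Rows of A are the vectors of S
A = Matrix([[3, 1, -1],
            [-5, -2, 2],
            [2, 2, -1]])

r = A.rank()
print(r)              # 3
print(r == A.rows)    # True, so the three vectors are LI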

Ex. Any set containing the zero vector is always LD, while a singleton set containing a non-zero vector is LI.

Sol. Let S = {0, v1, v2,......,vn} be a set containing the zero vector. Then the expression

1·0 + 0·v1 + 0·v2 + ....... + 0·vn = 0

shows that S is LD.

Next, consider a set A = {v1} carrying a single non-zero vector. Then a1v1 = 0 gives a1 = 0 since v1 ≠ 0. So A is LI.

Theorem: A finite set S containing at least two vectors is LD iff some vector in S can be expressed as a linear combination of the other vectors in S.

Proof: Let S = {v1, v2,......,vn} be a set containing at least two vectors. Suppose S is LD. Then there exist real numbers a1, a2,......,an not all zero (say some am ≠ 0) such that

a1v1 + a2v2 + ..... + am−1vm−1 + amvm + am+1vm+1 + ..... + anvn = 0.

It can be rewritten as

vm = −(a1/am)v1 − (a2/am)v2 − .... − (am−1/am)vm−1 − (am+1/am)vm+1 − ...... − (an/am)vn.

This shows that vm is a linear combination of the remaining vectors.

Conversely, assume that some vector vm of S is a linear combination of the remaining vectors of S, that is, there exist real numbers b1, b2,.....,bm−1, bm+1,........,bn such that

vm = b1v1 + b2v2 + ....... + bm−1vm−1 + bm+1vm+1 + ...... + bnvn.

It can be rewritten as

b1v1 + b2v2 + ....... + bm−1vm−1 + (−1)vm + bm+1vm+1 + ...... + bnvn = 0.


We see that the scalar with the vector vm is −1, which is non-zero. It follows that the set S is LD.

Theorem: A non-empty finite subset S of a vector space V is LI iff every vector v ∈ L(S) can be expressed uniquely as a linear combination of the members of S.

Proof: Let S = {v1, v2,......,vn} be a subset of the vector space V. Suppose S is LI and v is any member of L(S). Then v can be expressed as a linear combination of the members of S. For uniqueness, let

a1v1 + a2v2 + ......... + anvn = v,

b1v1 + b2v2 + ......... + bnvn = v.

Subtracting the two expressions, we get

(a1 − b1)v1 + (a2 − b2)v2 + ......... + (an − bn)vn = 0.

Then the linear independence of the set S = {v1, v2,......,vn} implies that a1 − b1 = 0, a2 − b2 = 0, ...., an − bn = 0, that is, a1 = b1, a2 = b2, ....., an = bn. This proves the uniqueness.

Conversely, assume that every vector v ∈ L(S) can be expressed uniquely as a linear combination of the members of S. To prove that S = {v1, v2,......,vn} is LI, let

a1v1 + a2v2 + ......... + anvn = 0.

Also, we have

0v1 + 0v2 + ......... + 0vn = 0.

The above two expressions represent 0 ∈ L(S) as linear combinations of members of the set S. So by uniqueness, we must have a1 = 0, a2 = 0, ......., an = 0. It follows that S is LI.

Note: An infinite subset S of a vector space V is LI iff every finite subset of S is LI. For example, the set S = {1, x, x2, .......} is an infinite LI set in P (the vector space of all polynomials).

    Homework: Do exercise 4.4 from the textbook.

    Basis

A subset B of a vector space V is a basis of V if B is LI and L(B) = V. Therefore, the basis set B is LI and generates or spans the vector space V.

Ex. The set B = {[1, 0], [0, 1]} is a basis of R2, called the standard basis of R2.

For, B = {[1, 0], [0, 1]} is LI since a[1, 0] + b[0, 1] = [0, 0] yields a = 0, b = 0.

Also, L(B) = R2 since any [x1, x2] ∈ R2 can be written as

[x1, x2] = x1[1, 0] + x2[0, 1],

a linear combination of the members of the set B = {[1, 0], [0, 1]}.

Ex. The set B = {[1, 0, 0], [0, 1, 0], [0, 0, 1]} is the standard basis of R3.


Ex. The set B = {[1, 2, 1], [2, 3, 1], [1, 2, −3]} is a basis of R3.

Sol. Using suitable row transformations, we find

[ 1  2   1 ]     [ 1  0  0 ]
[ 2  3   1 ]  ~  [ 0  1  0 ]
[ 1  2  -3 ]     [ 0  0  1 ]

This shows that B is LI. Also, since row equivalent matrices have the same row space, L(B) = {a[1, 0, 0] + b[0, 1, 0] + c[0, 0, 1] : a, b, c ∈ R} = {[a, b, c] : a, b, c ∈ R} = R3. So B is a basis of R3.

Ex. The set B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} (writing [p q; r s] for the 2 × 2 matrix with rows [p q] and [r s]) is the standard basis of M22.

Ex. The set B = {1, x, x2, ........, xn} is the standard basis of Pn.

Sol. Any polynomial in x of degree at most n is, of course, a linear combination of the members of the set B = {1, x, x2, ........, xn}. So L(B) = Pn.

Also, B = {1, x, x2, ........, xn} is LI since a0·1 + a1x + ............ + anxn = 0 = 0·1 + 0·x + ............ + 0·xn gives a0 = 0, a1 = 0, ........., an = 0.

Thus B = {1, x, x2, ........, xn} is a basis of Pn, also known as the standard basis of Pn.

Ex. The empty set ∅ is a basis of the trivial vector space V = {0}.

Theorem: If B1 is a finite basis of a vector space V, and B2 is any other basis of V, then B2 has the same number of vectors as B1.

Note: A vector space may have infinitely many finite bases. However, the number of vectors in each basis would be the same, as suggested by the above theorem. This fact leads to the following definition.

    Dimension

The number of elements in a basis of a vector space is called its dimension. Further, a vector space with finite dimension is called a finite dimensional vector space.

Ex. The basis B = {[1, 0], [0, 1]} of R2 carries two vectors. So dim(R2) = 2.

Ex. The basis set B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} of M22 carries four vectors. So dim(M22) = 4.

The following theorem gives an idea of how to find a basis of a vector space from a spanning set.

Theorem: Any maximal LI subset of a spanning set of a vector space forms a basis of the vector space.

Ex. The set S = {[1, 0], [0, 1], [1, 5]} spans R2. One may verify that {[1, 0], [0, 1]}, {[1, 0], [1, 5]}, {[0, 1], [1, 5]}, all being maximal LI subsets of S, are bases of R2.

    Theorem: Every LI subset of a vector space can be extended to form a basis of the vector space.


Ex. The set S = {[1, 3, 7]} is an LI subset of R3. It is easy to verify that the extended set {[1, 3, 7], [0, 1, 0], [0, 0, 1]} is a basis of R3.

Interesting Note: If we remove one or more vectors from a basis of a vector space, it is no longer a spanning set of the vector space. Further, if we insert one or more vectors into the basis set, it is no longer LI. Thus, the number of elements in a basis does not vary.

Theorem: If W is a subspace of a vector space V, then dim(W) ≤ dim(V).

Ex. Find a basis of the subspace of R4 spanned by the set S = {[3, −5, 2, 4], [1, −2, 2, 1], [−1, 2, −1, 1]}.

Sol. We need to find a maximal LI subset of S = {[3, −5, 2, 4], [1, −2, 2, 1], [−1, 2, −1, 1]}. Using suitable row operations, we find

[  3  -5   2   4 ]     [ 1  0  0  15 ]
[  1  -2   2   1 ]  ~  [ 0  1  0   9 ]
[ -1   2  -1   1 ]     [ 0  0  1   2 ]

This shows that S = {[3, −5, 2, 4], [1, −2, 2, 1], [−1, 2, −1, 1]} is LI. So S itself is a basis of the subspace of R4 that it spans.

Note: The row vectors of the row equivalent matrix of the basis vectors also form a basis. So in the above example, the set {[1, 0, 0, 15], [0, 1, 0, 9], [0, 0, 1, 2]} is also a basis of this subspace of R4.
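The same computation can be sketched in Python, assuming SymPy is available; the spanning vectors below are the ones used in the example above:

from sympy import Matrix

# Spanning vectors of the subspace of R^4, written as rows
M = Matrix([[3, -5, 2, 4],
            [1, -2, 2, 1],
            [-1, 2, -1, 1]])

R, pivots = M.rref()
print(M.rank())   # 3, so the three vectors are LI and S itself is a basis
print(R)          # nonzero rows [1, 0, 0, 15], [0, 1, 0, 9], [0, 0, 1, 2] also form a basis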

    Homework: Do exercise 4.5 and 4.6 from the textbook.

    Coordinatization

Let B = (v1, v2, ..........., vn) be an ordered basis (an ordered n-tuple of vectors) of a vector space V. Suppose v ∈ V. Then there exist real numbers a1, a2,......,an such that v = a1v1 + a2v2 + ......... + anvn. The n-vector [v]B = [a1, a2,.....,an] is called the coordinatization of v with respect to B. We also say that v is expressed in B-coordinates.

Ex. The set B = ([1, 0], [0, 1]) is an ordered basis of R2. Then [5, 4]B = [5, 4] since [5, 4] = 5[1, 0] + 4[0, 1].

Ex. The set C = ([0, 1], [1, 0]) is an ordered basis of R2. Then [5, 4]C = [4, 5] since [5, 4] = 4[0, 1] + 5[1, 0].

Ex. Let V be the subspace of R3 spanned by the ordered basis B = ([2, −1, 3], [3, 2, 1]), and let [5, −6, 11] ∈ V. Then [5, −6, 11]B = [4, −1].

Sol. Let [5, −6, 11] = a[2, −1, 3] + b[3, 2, 1]. This yields the following system of linear equations:

2a + 3b = 5;  −a + 2b = −6;  3a + b = 11.

Writing the augmented matrix and applying suitable row transformations, we find

[  2  3 |  5 ]     [ 1  0 |  4 ]
[ -1  2 | -6 ]  ~  [ 0  1 | -1 ]
[  3  1 | 11 ]     [ 0  0 |  0 ]

This gives a = 4 and b = −1. So [5, −6, 11]B = [4, −1].
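Coordinatization amounts to solving a linear system whose coefficient columns are the basis vectors. A minimal sketch in Python, assuming SymPy is available, for the example above:

from sympy import Matrix

b1 = Matrix([2, -1, 3])    # basis vectors of the subspace, as columns
b2 = Matrix([3, 2, 1])
v  = Matrix([5, -6, 11])

# Row reduce the augmented matrix [b1 b2 | v]
aug = Matrix.hstack(b1, b2, v)
R, pivots = aug.rref()
print(R)   # last column gives the coordinates: a = 4, b = -1, so [v]_B = [4, -1]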


    Transition Matrix

Let V be a non-trivial n-dimensional vector space with ordered bases B and C. Let P be the n-square matrix whose ith column is [bi]C, where bi is the ith basis vector in B. Then P is called the transition matrix from B-coordinates to C-coordinates, or from B to C. For any vector v ∈ V, it can be proved that P[v]B = [v]C.

Ex. The sets B = ([1, 0, 0], [0, 1, 0], [0, 0, 1]) and C = ([1, 5, 1], [1, 6, −6], [1, 3, 14]) are two ordered bases of R3. Then

P =
[ -102   20    3 ]
[   67  -13   -2 ]
[   36   -7   -1 ]

For,

[ 1   1   1 | 1  0  0 ]     [ 1  0  0 | -102   20    3 ]
[ 5   6   3 | 0  1  0 ]  ~  [ 0  1  0 |   67  -13   -2 ]
[ 1  -6  14 | 0  0  1 ]     [ 0  0  1 |   36   -7   -1 ]
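Since B is the standard basis here, the ith column of P is the solution of Mc·x = ei, where Mc has the vectors of C as its columns; that is, P = Mc⁻¹. A minimal sketch in Python, assuming SymPy is available (the name Mc is illustrative):

from sympy import Matrix

# Columns of Mc are the vectors of the ordered basis C
Mc = Matrix([[1, 1, 1],
             [5, 6, 3],
             [1, -6, 14]])

P = Mc.inv()   # transition matrix from the standard basis B to C
print(P)       # [[-102, 20, 3], [67, -13, -2], [36, -7, -1]]

# Sanity check on a sample vector: rebuilding v from its C-coordinates recovers v
v = Matrix([1, 2, 3])
print(Mc * (P * v) == v)   # True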

Theorem: Suppose B, C and D are ordered bases for a non-trivial finite dimensional vector space V. Let P be the transition matrix from B to C, and Q be the transition matrix from C to D. Then QP is the transition matrix from B to D.

Theorem: If A is an n-square diagonalizable matrix, that is, there exists a non-singular matrix P such that P⁻¹AP = D, and B is an ordered basis of Rn consisting of eigenvectors of A (or column vectors of P), then for any v ∈ Rn, we have D[v]B = [Av]B.

For, if S is the standard basis of Rn, then P is the transition matrix from B to S. So P[v]B = [v]S and hence

D[v]B = P⁻¹AP[v]B = P⁻¹A[v]S = P⁻¹[Av]S = [Av]B.

    Homework: Do exercise 4.7 from the textbook.


    Chapter 5 (5.1-5.4)

    Linear Transformation

Let V and W be vector spaces. Then a function f : V → W is said to be a linear transformation (LT) iff for all v1, v2 ∈ V and c ∈ R, we have f(v1 + v2) = f(v1) + f(v2) and f(cv1) = cf(v1).

Ex. f : Mmn → Mnm given by f(A) = Aᵀ is a LT.

Sol. Let A, B ∈ Mmn and a ∈ R. Then we have f(A + B) = (A + B)ᵀ = Aᵀ + Bᵀ and f(aA) = (aA)ᵀ = aAᵀ. This shows that f is a LT.

Ex. f : R3 → R3 given by f([x1, x2, x3]) = [x1, x2, −x3] is a LT.

Sol. Let v1 = [x1, x2, x3], v2 = [y1, y2, y3] ∈ R3 and a ∈ R. Then we have
f(v1 + v2) = f([x1 + y1, x2 + y2, x3 + y3]) = [x1 + y1, x2 + y2, −(x3 + y3)],
f(v1) + f(v2) = [x1, x2, −x3] + [y1, y2, −y3] = [x1 + y1, x2 + y2, −(x3 + y3)].
Therefore, f(v1 + v2) = f(v1) + f(v2).

Also, f(av1) = f([ax1, ax2, ax3]) = [ax1, ax2, −ax3] = a[x1, x2, −x3] = af(v1). This shows that f is a LT.

Ex. Let A be an m × n matrix. Then f : Rn → Rm given by f(X) = AX is a LT.

Sol. Let X1, X2 ∈ Rn and a ∈ R. Then we have f(X1 + X2) = A(X1 + X2) = AX1 + AX2 = f(X1) + f(X2) and f(aX1) = A(aX1) = aAX1 = af(X1). This shows that f is a LT.
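Linearity of a concrete map can also be sanity checked numerically on sample vectors. Below is a minimal sketch in Python, assuming NumPy is available; the particular matrix A and the random test vectors are chosen only for illustration:

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 4.0]])   # an illustrative 2 x 3 matrix

def f(X):
    return A @ X                   # the map f(X) = AX

rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal(3), rng.standard_normal(3)
a = 2.5

print(np.allclose(f(X1 + X2), f(X1) + f(X2)))   # additivity
print(np.allclose(f(a * X1), a * f(X1)))        # homogeneity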

Note: A LT f : V → V is called a linear operator on V.

Theorem: If L : V → W is a LT, then L(0) = 0 and L(a1v1 + a2v2) = a1L(v1) + a2L(v2) for all a1, a2 ∈ R and v1, v2 ∈ V.

Proof: Let u ∈ V. Then we have
L(0) = L(u + (−u)) = L(u) + L(−u) = L(u) + L((−1)u) = L(u) − L(u) = 0.
Next, L(a1v1 + a2v2) = L(a1v1) + L(a2v2) = a1L(v1) + a2L(v2) since a1v1, a2v2 ∈ V and L is a LT.

Theorem: The composition of two LTs is also a LT, that is, if L1 : V1 → V2 and L2 : V2 → V3 are LTs, then L2∘L1 : V1 → V3 is a LT.

Proof: Let u, v ∈ V1 and a ∈ R. Given that L1 : V1 → V2 and L2 : V2 → V3 are LTs, we have
(L2∘L1)(u + v) = L2(L1(u + v)) = L2(L1(u) + L1(v)) = L2(L1(u)) + L2(L1(v)) = (L2∘L1)(u) + (L2∘L1)(v),
(L2∘L1)(au) = L2(L1(au)) = L2(aL1(u)) = aL2(L1(u)) = a(L2∘L1)(u).
This shows that L2∘L1 is a LT.

Theorem: If L : V → W is a LT, and V1 and W1 are subspaces of V and W respectively, then L(V1) = {L(v) : v ∈ V1} is a subspace of W and L⁻¹(W1) = {v : L(v) ∈ W1} is a subspace of V.

Proof: To prove that L(V1) = {L(v) : v ∈ V1} is a subspace of W, let L(u), L(v) ∈ L(V1) and a ∈ R. Then u, v ∈ V1, and we have
L(u) + L(v) = L(u + v) ∈ L(V1) since u + v ∈ V1.
Also, aL(u) = L(au) ∈ L(V1) since au ∈ V1.
This shows that L(V1) is a subspace of W.
Likewise, it is easy to prove that L⁻¹(W1) = {v : L(v) ∈ W1} is a subspace of V. (please try!)

    Homework: Do exercise 5.1 from the textbook.


    Matrix of a Linear Transformation

Let B = (v1, v2, ........, vn) and C = (w1, w2, ........., wm) be ordered bases of two non-trivial vector spaces V and W respectively. If L : V → W is a LT, then there exists a unique matrix ABC of order m × n, known as the matrix of the LT L, such that ABC[v]B = [L(v)]C for all v ∈ V. Furthermore, the ith column of ABC is [L(vi)]C.

Ex. Find the matrix of the LT L : P3 → R3 given by L(a3x3 + a2x2 + a1x + a0) = [a0 + a1, 2a2, a3 − a0] with respect to the ordered bases B = (x3, x2, x, 1) for P3 and C = ([1, 0, 0], [0, 1, 0], [0, 0, 1]) for R3. Hence compute [L(5x3 − x2 + 3x + 2)]C.

Sol. ABC = [[L(x3)]C, [L(x2)]C, [L(x)]C, [L(1)]C] =
[ 0  0  1   1 ]
[ 0  2  0   0 ]
[ 1  0  0  -1 ]

Now [5x3 − x2 + 3x + 2]B = (5, −1, 3, 2). So

[L(5x3 − x2 + 3x + 2)]C = ABC[5x3 − x2 + 3x + 2]B = [5, −2, 3].
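The columns of ABC can be assembled mechanically by applying L to each basis vector of B and coordinatizing the result with respect to C (trivial here, since C is the standard basis). A minimal sketch in Python, assuming SymPy is available; the helper L below merely encodes the formula of this example:

from sympy import Matrix

def L(coeffs):
    # coeffs = [a3, a2, a1, a0] for a3*x^3 + a2*x^2 + a1*x + a0
    a3, a2, a1, a0 = coeffs
    return Matrix([a0 + a1, 2*a2, a3 - a0])

# B-coordinates of the basis vectors x^3, x^2, x, 1
B = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
A_BC = Matrix.hstack(*[L(v) for v in B])
print(A_BC)                            # [[0, 0, 1, 1], [0, 2, 0, 0], [1, 0, 0, -1]]

# Apply it to 5x^3 - x^2 + 3x + 2, whose B-coordinates are (5, -1, 3, 2)
print(A_BC * Matrix([5, -1, 3, 2]))    # [5, -2, 3]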

Theorem: Let L : V → W be a LT where V and W are non-trivial vector spaces. Let ABC be the matrix of L with respect to the ordered bases B and C. Suppose D and E are any other ordered bases of V and W respectively. Let P be the transition matrix from B to D, and Q be the transition matrix from C to E. Then the matrix ADE of L with respect to the bases D and E is ADE = QABCP⁻¹.

Note: (i) Please do example 5 from the textbook.
(ii) If L : V → V is a linear operator on V, then ABB = P⁻¹ACCP, where B and C are two ordered bases of V, and P is the transition matrix from B to C.
(iii) If ABC and ACD are respectively the matrices of the linear transformations L1 : V1 → V2 and L2 : V2 → V3, where B, C and D are ordered bases of V1, V2 and V3 respectively, then the matrix of L2∘L1 : V1 → V3 is ACDABC.

    Homework: Do exercise 5.2 from the textbook.

    Kernel and Range

Let L : V → W be a LT. Then the sets Ker(L) = {v ∈ V : L(v) = 0} and Range(L) = {L(v) : v ∈ V} are respectively defined as the kernel and range of L.

Ex. If L : V → W is a LT, then Ker(L) = {v ∈ V : L(v) = 0} is a subspace of V and Range(L) = {L(v) : v ∈ V} is a subspace of W.

Sol. We know that 0 ∈ V and L(0) = 0. So 0 ∈ Ker(L) and L(0) ∈ Range(L). This shows that Ker(L) and Range(L) are both non-empty. To show that Ker(L) is a subspace of V, assume that u, v ∈ Ker(L) and a ∈ R. Then L(u) = 0 and L(v) = 0. Also, L being a LT, we have L(u + v) = L(u) + L(v) = 0 + 0 = 0. So u + v ∈ Ker(L). Next, L(au) = aL(u) = a0 = 0. So au ∈ Ker(L). Thus, Ker(L) is a subspace of V. Likewise, it is easy to show that Range(L) = {L(v) : v ∈ V} is a subspace of W. (please try!)

Ex. If L : R3 → R3 is given by L([x1, x2, x3]) = [x1, x2, 0], then find Ker(L) and Range(L).

Sol. Let [x1, x2, x3] ∈ Ker(L). Then by the definition of kernel, we have L([x1, x2, x3]) = [0, 0, 0] or [x1, x2, 0] = [0, 0, 0]. Thus, we get x1 = 0, x2 = 0 and x3 is any real number.

So Ker(L) = {[0, 0, x3] : x3 ∈ R}.

Given that L([x1, x2, x3]) = [x1, x2, 0], we have Range(L) = {[x1, x2, 0] : x1, x2 ∈ R}.

Ex. If L : P3 → P2 is given by L(ax3 + bx2 + cx + d) = 3ax2 + 2bx + c, then Ker(L) = {d : d ∈ R} and Range(L) = {3ax2 + 2bx + c : a, b, c ∈ R}.

For, let ax3 + bx2 + cx + d ∈ Ker(L). Then L(ax3 + bx2 + cx + d) = 0, or 3ax2 + 2bx + c = 0, gives a = b = c = 0 and d any real number.

Ex. Let L : R5 → R4 be given by L(X) = AX, where

A =
[ 8  4  16  32  0 ]
[ 4  2  10  22  4 ]
[ 2  1   5  11  7 ]
[ 6  3  15  33  7 ]

Find bases of Ker(L) and Range(L).

Sol. Here Ker(L) is the solution set of AX = 0. Also,

A ~
[ 1  1/2  0  -2  0 ]
[ 0   0   1   3  0 ]
[ 0   0   0   0  1 ]
[ 0   0   0   0  0 ]

It follows that x1 = −(1/2)x2 + 2x4, x3 = −3x4 and x5 = 0. Let x2 = a and x4 = b. Then
[x1, x2, x3, x4, x5] = [−(1/2)a + 2b, a, −3b, b, 0] = a[−1/2, 1, 0, 0, 0] + b[2, 0, −3, 1, 0].
Hence Ker(L) = {[−(1/2)a + 2b, a, −3b, b, 0] : a, b ∈ R}, and {[−1/2, 1, 0, 0, 0], [2, 0, −3, 1, 0]} is a basis of Ker(L).

The range space of L is spanned by the column vectors of A. From the row reduced form of A above, the pivots lie in the first, third and fifth columns, so the corresponding columns of A are LI and span the column space. Hence {[8, 4, 2, 6], [16, 10, 5, 15], [0, 4, 7, 7]} is a basis of Range(L).
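These bases can also be obtained directly with SymPy's nullspace and columnspace routines; the sketch below is only a cross-check of the hand computation above, assuming SymPy is available:

from sympy import Matrix

A = Matrix([[8, 4, 16, 32, 0],
            [4, 2, 10, 22, 4],
            [2, 1, 5, 11, 7],
            [6, 3, 15, 33, 7]])

ker_basis = A.nullspace()     # basis of Ker(L): solutions of AX = 0
ran_basis = A.columnspace()   # basis of Range(L): the pivot columns of A
print(ker_basis)              # vectors [-1/2, 1, 0, 0, 0] and [2, 0, -3, 1, 0]
print(ran_basis)              # columns 1, 3 and 5 of A

# Dimension theorem: dim Ker(L) + dim Range(L) = dim R^5
print(len(ker_basis) + len(ran_basis) == A.cols)   # True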

Theorem: If L : Rn → Rm is a LT with matrix A with respect to any bases of Rn and Rm, then
(i) dim(Range(L)) = Rank(A),
(ii) dim(Ker(L)) = n − Rank(A),
(iii) dim(Ker(L)) + dim(Range(L)) = n.

Verify this theorem for the previous example (important!). A generalized version of part (iii) in the above theorem is given by the dimension theorem.

Dimension Theorem: If L : V → W is a LT and V is finite dimensional, then
dim(Ker(L)) + dim(Range(L)) = dim(V).

    Homework: Do exercise 5.3 from the textbook.


    One-to-One and Onto Linear Transformation

A LT L : V → W is said to be one-to-one iff L(v1) = L(v2) ⇒ v1 = v2 for all v1, v2 ∈ V. Further, it is said to be onto iff for any w ∈ W, there exists some vector v ∈ V such that L(v) = w. So L : V → W is onto iff Range(L) = W.

Ex. Show that L : R3 → R3 given by L([x1, x2, x3]) = [x1, x2, 5x3] is both one-to-one and onto.

Sol. Let v1 = [x1, x2, x3], v2 = [y1, y2, y3] ∈ V = R3. Then
L(v1) = L(v2) ⇒ L([x1, x2, x3]) = L([y1, y2, y3]) ⇒ [x1, x2, 5x3] = [y1, y2, 5y3].
So x1 = y1, x2 = y2, x3 = y3 and v1 = v2. Hence, L is one-to-one.

Let w = [z1, z2, z3] ∈ W = R3. Then v = [z1, z2, (1/5)z3] is such that
L(v) = L([z1, z2, (1/5)z3]) = [z1, z2, z3] = w. This shows that L is onto.

Ex. L : R3 → R2 given by L([x1, x2, x3]) = [x1, x2] is onto but not one-to-one.

Sol. Here V = R3 and W = R2. Let w = [y1, y2] ∈ R2. Then for any real number y3, we have v = [y1, y2, y3] ∈ R3 such that L(v) = L([y1, y2, y3]) = [y1, y2] = w. This shows that L is onto.

Now, [1, 2, 4], [1, 2, 6] ∈ R3 are such that L([1, 2, 4]) = [1, 2] = L([1, 2, 6]), but [1, 2, 4] ≠ [1, 2, 6]. So L is not one-to-one.

Ex. L : R2 → R3 given by L([x1, x2]) = [x1 − x2, x1 + x2, x1] is one-to-one but not onto.

Sol. Here V = R2 and W = R3. Let v1 = [x1, x2], v2 = [y1, y2] ∈ R2 be such that L(v1) = L(v2), that is, L([x1, x2]) = L([y1, y2]) or [x1 − x2, x1 + x2, x1] = [y1 − y2, y1 + y2, y1]. This leads to x1 = y1 and x2 = y2. So [x1, x2] = [y1, y2] or v1 = v2. Thus, L is one-to-one.

Next, [1, 2, 3] ∈ R3. Let [x1, x2] ∈ R2 be such that L([x1, x2]) = [1, 2, 3] or [x1 − x2, x1 + x2, x1] = [1, 2, 3]. This yields the following system of equations:
x1 − x2 = 1;  x1 + x2 = 2;  x1 = 3,
which has no solution. This in turn implies that [1, 2, 3] ∈ R3 has no pre-image in R2. Hence L is not onto.

Theorem: A LT L : V → W is one-to-one iff Ker(L) = {0}.

Proof: First assume that L : V → W is one-to-one. We shall prove that Ker(L) = {0}. Let v ∈ Ker(L). Then L(v) = 0, by the definition of kernel. Also, we know that L(0) = 0. So L(v) = L(0). Given that L is one-to-one, it follows that v = 0. So Ker(L) = {0}.

Conversely, assume that Ker(L) = {0}. To prove that L is one-to-one, let v1, v2 ∈ V be such that L(v1) = L(v2). So we have L(v1) − L(v2) = 0 or L(v1 − v2) = 0 since L is a LT. Further, L(v1 − v2) = 0 implies that v1 − v2 ∈ Ker(L). But Ker(L) = {0}. So we get v1 − v2 = 0 or v1 = v2. Hence, L is one-to-one.
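For a LT given by a matrix, this criterion is easy to apply: L(X) = AX is one-to-one iff A has a trivial nullspace. A minimal sketch in Python, assuming SymPy is available, using the map L([x1, x2]) = [x1 − x2, x1 + x2, x1] from the earlier example (written via its standard matrix):

from sympy import Matrix

A = Matrix([[1, -1],
            [1, 1],
            [1, 0]])            # standard matrix of L

print(A.nullspace() == [])      # True: Ker(L) = {0}, so L is one-to-one
print(A.rank() == A.rows)       # False: rank 2 < 3, so Range(L) != R^3 and L is not onto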

Theorem: A LT L : V → W, with W finite dimensional, is onto iff dim(Range(L)) = dim(W).

Proof: By the definition of an onto LT, L : V → W is onto iff Range(L) = W. Further, Range(L) is a subspace of W, and a subspace of a finite dimensional vector space W equals W iff its dimension equals dim(W). It follows that L : V → W is onto iff dim(Range(L)) = dim(W).

Theorem: If L : V → W is one-to-one and S = {v1, v2,.....,vn} is an LI subset of V, then L(S) = {L(v1), L(v2),......,L(vn)} is LI in W.

Proof: To prove that L(S) = {L(v1), L(v2),......,L(vn)} is LI in W, let a1, a2, ......, an be real numbers (scalars) such that
a1L(v1) + a2L(v2) + .......... + anL(vn) = 0.
Since L is a LT and L(0) = 0, we have
L(a1v1 + a2v2 + .......... + anvn) = L(0).


It is given that L is one-to-one. So we get
a1v1 + a2v2 + .......... + anvn = 0.
Also, it is given that S = {v1, v2,.....,vn} is an LI subset of V. So we have a1 = 0, a2 = 0, ......, an = 0. Hence L(S) = {L(v1), L(v2),......,L(vn)} is LI in W.

Theorem: If L : V → W is onto and S spans V, then L(S) spans W.

Proof: Let S = {v1, v2,.....,vn} span V and let w ∈ W. Since L : V → W is onto, there exists some v ∈ V such that L(v) = w. Now v ∈ V and S = {v1, v2,.....,vn} spans V. So there exist real numbers (scalars) a1, a2, ......, an such that
v = a1v1 + a2v2 + ......... + anvn.
Since L is a LT, we have w = L(v) = a1L(v1) + a2L(v2) + .......... + anL(vn).
This shows that L(S) spans W.

    Homework: Do exercise 5.4 from the textbook.
