Matrix Algebra
Matrix algebra is a means of efficiently expressing large numbers of calculations to be made upon ordered sets of numbers
Often referred to as Linear Algebra
Why use it?
Matrix algebra is used primarily to facilitate mathematical expression.
Many equations would be completely intractable if scalar mathematics had to be used. It is also important to note that the scalar algebra is under there somewhere.
Why use it?
Consider the following set of equations:

    x + y = 7,
    3x - y = 5.

It is easy to show that x = 3 and y = 4.

How about solving

    2x + y + z = 7,
    2x + 4y + z = 2,
    5x + 4y + 10z = 1,
    3x + 6y + z = 5?

Matrices can help…
Definitions - Vectors
Vector - a single row or column of numbers
Each individual entry is called an element
Vectors are denoted with bold small letters
row vector:    a = [1 2 3 4]

column vector: a = [1]
                   [2]
                   [3]
                   [4]
Definitions - Matrices
A matrix is a rectangular array of numbers (called elements) arranged in orderly rows and columns
    A = [a11 a12 a13]
        [a21 a22 a23]
Subscripts denote row (i=1,…,n) and column (j=1,…,m) location of an element
Definitions - Matrices
Matrices are denoted with bold Capital letters
All matrices (and vectors) have an order (or dimension) - that is, the number of rows x the number of columns. Thus the matrix A above is referred to as a two by three (2 x 3) matrix.
Often a matrix A of dimension n x m is denoted A(n x m)
Often a vector a of dimension n (or m) is denoted a(n) (or a(m))
Definitions - Matrices
Null matrix – a matrix for which all elements are zero, i.e., aij = 0 for all i, j
Square Matrix – a matrix for which the number of rows equals the number of columns (n = m)
Symmetric Matrix – a matrix for which aij = aji for all i, j
Definitions - Matrices
Diagonal Elements – elements of a Square Matrix for which the row and column locations are equal, i.e., the elements aij with i = j
Upper Triangular Matrix – a matrix for which all elements below the diagonal are zero, i.e., aij = 0 for all i > j
Lower Triangular Matrix – a matrix for which all elements above the diagonal are zero, i.e., aij = 0 for all i < j
Matrix Equality
Two matrices are equal iff (if and only if) they are of the same order and all of their corresponding elements are identical
Note that statistical data sets are matrices (usually with observations in the rows and variables in the columns)
                  Variable 1   Variable 2   …   Variable m
    Observation 1    a11          a12       …      a1m
    Observation 2    a21          a22       …      a2m
         …            …            …        …       …
    Observation n    an1          an2       …      anm
The Transpose of a Matrix
The transpose A' of a matrix A is the matrix such that the ith row of A is the ith column of A', i.e., B is the transpose of A iff bij = aji for all i, j
This is equivalent to fixing the upper left and lower right corners and then rotating the matrix 180 degrees about the main diagonal
Transpose of a Matrix An Example
If we have

    A = [1 2 3]
        [4 5 6]

then

    A' = [1 4]
         [2 5]
         [3 6]

i.e., each row of A becomes the corresponding column of A'.
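A quick numerical check of this example, sketched in Python with NumPy (the values are the ones from the slide):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])

    print(A.T)  # [[1 4]
                #  [2 5]
                #  [3 6]]  -- the transpose A'
    print(np.array_equal(A.T.T, A))  # True: transposing twice restores A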
Sums and Differences of Matrices
Two matrices may be added (subtracted) iff they are of the same order
Simply add (subtract) elements from corresponding locations
    [a11 a12]   [b11 b12]   [c11 c12]
    [a21 a22] + [b21 b22] = [c21 c22]
    [a31 a32]   [b31 b32]   [c31 c32]

where

    a11 + b11 = c11,  a12 + b12 = c12,
    a21 + b21 = c21,  a22 + b22 = c22,
    a31 + b31 = c31,  a32 + b32 = c32
Sums and Differences An Example
If we have

    A = [1 2]          [7 10]
        [3 4]  and B = [8 11]
        [5 6]          [9 12]

then we can calculate C = A + B by

    C = [1 2]   [7 10]   [ 8 12]
        [3 4] + [8 11] = [11 15]
        [5 6]   [9 12]   [14 18]
Sums and Differences An Example
Similarly, if we have

    A = [1 2]          [7 10]
        [3 4]  and B = [8 11]
        [5 6]          [9 12]

then we can calculate C = A - B by

    C = [1 2]   [7 10]   [-6 -8]
        [3 4] - [8 11] = [-5 -7]
        [5 6]   [9 12]   [-4 -6]
Some Properties of Matrix Addition/Subtraction
Note that:
The transpose of a sum is the sum of the transposes, i.e., (A+B+C)' = A'+B'+C'
A+B = B+A (i.e., matrix addition is commutative)
Matrix addition can be extended beyond two matrices
Matrix addition is associative, i.e., A+(B+C) = (A+B)+C
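These properties can be checked numerically with NumPy, reusing A and B from the example above (a sketch, not part of the original slides):

    import numpy as np

    A = np.array([[1, 2], [3, 4], [5, 6]])
    B = np.array([[7, 10], [8, 11], [9, 12]])

    print(A + B)  # [[ 8 12], [11 15], [14 18]]
    print(A - B)  # [[-6 -8], [-5 -7], [-4 -6]]
    print(np.array_equal(A + B, B + A))          # True: addition commutes
    print(np.array_equal((A + B).T, A.T + B.T))  # True: transpose of a sum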
Products of Scalars and Matrices
To multiply a scalar times a matrix, simply multiply each element of the matrix by the scalar quantity
    bA = b [a11 a12] = [b a11  b a12]
           [a21 a22]   [b a21  b a22]
Products of Scalars & Matrices An Example
If we have

    A = [1 2]
        [3 4]  and b = 3.5
        [5 6]

then we can calculate bA by

    bA = 3.5 [1 2]   [ 3.5  7.0]
             [3 4] = [10.5 14.0]
             [5 6]   [17.5 21.0]
Note that bA = Ab if b is a scalar
Some Properties of Scalar x Matrix Multiplication
Note that if b is a scalar then bA = Ab (i.e., scalar x matrix multiplication is commutative)
Scalar x Matrix multiplication can be extended beyond two scalars
Scalar x Matrix multiplication is associative, i.e., ab(C) = a(bC)
Scalar x Matrix multiplication leads to removal of a common factor, i.e., if
    C = [b a11  b a12]
        [b a21  b a22]
        [b a31  b a32]

then C = bA where

    A = [a11 a12]
        [a21 a22]
        [a31 a32]
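The scalar example above can be reproduced in NumPy; this is only an illustrative sketch:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
    b = 3.5

    print(b * A)  # [[ 3.5  7. ], [10.5 14. ], [17.5 21. ]]
    print(np.array_equal(b * A, A * b))  # True: bA = Ab for a scalar b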
Products of Matrices
We write the multiplication of two matrices A and B as AB
This is referred to either as pre-multiplying B by A
or
post-multiplying A by B
So for matrix multiplication AB, A is referred to as the premultiplier and B is referred to as the postmultiplier
Products of Matrices
In order to multiply matrices, they must be conformable (the number of columns in the premultiplier must equal the number of rows in postmultiplier)
Note that an (m x n) x (n x p) = (m x p)
an (m x n) x (p x n) cannot be done (the inner dimensions do not match)
a (1 x n) x (n x 1) = a scalar (1 x 1)
Products of Matrices
If we have A(3x3) and B(3x2) then
    [a11 a12 a13]   [b11 b12]   [c11 c12]
    [a21 a22 a23] x [b21 b22] = [c21 c22]
    [a31 a32 a33]   [b31 b32]   [c31 c32]
          A             B           C

where

    c11 = a11 b11 + a12 b21 + a13 b31
    c12 = a11 b12 + a12 b22 + a13 b32
    c21 = a21 b11 + a22 b21 + a23 b31
    c22 = a21 b12 + a22 b22 + a23 b32
    c31 = a31 b11 + a32 b21 + a33 b31
    c32 = a31 b12 + a32 b22 + a33 b32
Products of Matrices
If we have A(3x3) and B(3x2) then
    BA = [b11 b12]   [a11 a12 a13]
         [b21 b22] x [a21 a22 a23] = undefined
         [b31 b32]   [a31 a32 a33]
i.e., matrix multiplication is not commutative (why?)
Matrix Multiplication An Example
If we have

    A = [1 4 7]          [1 4]
        [2 5 8]  and B = [2 5]
        [3 6 9]          [3 6]

then

    AB = [1 4 7]   [1 4]   [c11 c12]   [30 66]
         [2 5 8] x [2 5] = [c21 c22] = [36 81]
         [3 6 9]   [3 6]   [c31 c32]   [42 96]

where

    c11 = a11 b11 + a12 b21 + a13 b31 = 1·1 + 4·2 + 7·3 = 30
    c12 = a11 b12 + a12 b22 + a13 b32 = 1·4 + 4·5 + 7·6 = 66
    c21 = a21 b11 + a22 b21 + a23 b31 = 2·1 + 5·2 + 8·3 = 36
    c22 = a21 b12 + a22 b22 + a23 b32 = 2·4 + 5·5 + 8·6 = 81
    c31 = a31 b11 + a32 b21 + a33 b31 = 3·1 + 6·2 + 9·3 = 42
    c32 = a31 b12 + a32 b22 + a33 b32 = 3·4 + 6·5 + 9·6 = 96
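The worked example can be verified with NumPy's matrix product operator @ (a sketch; the arrays are the ones from the slide):

    import numpy as np

    A = np.array([[1, 4, 7],
                  [2, 5, 8],
                  [3, 6, 9]])
    B = np.array([[1, 4],
                  [2, 5],
                  [3, 6]])

    print(A @ B)  # [[30 66], [36 81], [42 96]] -- matches the example
    # B @ A would raise ValueError: a (3 x 2) times a (3 x 3) is not conformable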
Some Properties of Matrix Multiplication
If matrices A, B and C are conformable, then:
A(B + C) = AB + AC
(A + B)C = AC + BC
A(BC) = (AB)C
AB ≠ BA in general (matrix multiplication is not commutative)
AB = 0 does NOT necessarily imply A = 0 or B = 0
AB = AC does NOT necessarily imply B = C
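Two small NumPy illustrations of the cautionary properties; the matrices here are my own examples, not from the slides:

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[0, 1],
                  [1, 0]])
    print(np.array_equal(A @ B, B @ A))  # False: AB != BA in general

    C = np.array([[1, 0],
                  [0, 0]])
    D = np.array([[0, 0],
                  [0, 1]])
    print(C @ D)  # the 2x2 null matrix, although neither C nor D is null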
Special Uses for Matrix Multiplication
Sum Row Elements of a Matrix
Premultiply a matrix A by a conformable row vector of 1s – If
    A = [1 4 7]
        [2 5 8]
        [3 6 9]

then premultiplication by the row vector

    1' = [1 1 1]

will yield the column totals for A, i.e.,

    [1 1 1] [1 4 7]
            [2 5 8] = [6 15 24]
            [3 6 9]
Special Uses for Matrix Multiplication
Sum Column Elements of a Matrix
Postmultiply a matrix A by a conformable column vector of 1s – If
    A = [1 4 7]
        [2 5 8]
        [3 6 9]

then postmultiplication by the column vector

    1 = [1]
        [1]
        [1]

will yield the row totals for A, i.e.,

    [1 4 7] [1]   [12]
    [2 5 8] [1] = [15]
    [3 6 9] [1]   [18]
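Both totals can be reproduced in NumPy, either with vectors of 1s as above or with the built-in sum (a sketch):

    import numpy as np

    A = np.array([[1, 4, 7],
                  [2, 5, 8],
                  [3, 6, 9]])
    ones = np.ones(3)

    print(ones @ A)       # [ 6. 15. 24.] -- column totals (premultiplication)
    print(A @ ones)       # [12. 15. 18.] -- row totals (postmultiplication)
    print(A.sum(axis=0))  # [ 6 15 24] -- same column totals
    print(A.sum(axis=1))  # [12 15 18] -- same row totals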
Special Uses for Matrix Multiplication
The Dot (or Inner) Product of two Vectors
Premultiplication of a column vector a by a conformable row vector b yields a single value called the dot product or inner product - If

    a = [5]
        [2]  and b = [3 4 6]
        [8]

then ba gives us

    ba = [3 4 6] [5]
                 [2] = 3·5 + 4·2 + 6·8 = 71
                 [8]
which is the sum of products of elements in similar positions for the two vectors
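In NumPy the inner product of the example vectors is a single call (a sketch, using 1-D arrays for both vectors):

    import numpy as np

    a = np.array([5, 2, 8])
    b = np.array([3, 4, 6])

    print(np.dot(b, a))  # 71 = 3*5 + 4*2 + 6*8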
Special Uses for Matrix Multiplication
The Outer Product of two Vectors
Postmultiplication of a column vector a by a conformable row vector b yields a matrix containing the products of each pair of elements from the two vectors (called the outer product) - If

    a = [5]
        [2]  and b = [3 4 6]
        [8]

then ab gives us

    ab = [5]            [15 20 30]
         [2] [3 4 6]  = [ 6  8 12]
         [8]            [24 32 48]
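NumPy's outer routine reproduces this outer product directly (a sketch):

    import numpy as np

    a = np.array([5, 2, 8])
    b = np.array([3, 4, 6])

    print(np.outer(a, b))
    # [[15 20 30]
    #  [ 6  8 12]
    #  [24 32 48]]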
Special Uses for Matrix Multiplication
Sum the Squared Elements of a Vector
Premultiply a column vector a by its transpose - If

    a = [5]
        [2]
        [8]

then premultiplication by the row vector

    a' = [5 2 8]

will yield the sum of the squared values of the elements of a, i.e.,

    a'a = [5 2 8] [5]
                  [2] = 5² + 2² + 8² = 93
                  [8]
Special Uses for Matrix Multiplication
Postmultiply a row vector a by its transpose - If

    a = [7 10 1]

then postmultiplication by the column vector

    a' = [ 7]
         [10]
         [ 1]

will yield the sum of the squared values of the elements of a, i.e.,

    aa' = [7 10 1] [ 7]
                   [10] = 7² + 10² + 1² = 150
                   [ 1]
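Both sums of squares can be checked in NumPy; with 1-D arrays, a @ a plays the role of a'a (a sketch):

    import numpy as np

    a = np.array([5, 2, 8])
    print(a @ a)  # 93 = 5**2 + 2**2 + 8**2

    a = np.array([7, 10, 1])
    print(a @ a)  # 150 = 7**2 + 10**2 + 1**2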
Special Uses for Matrix Multiplication
Determining if two vectors are Orthogonal – Two conformable vectors a and b are orthogonal iff
a’b = 0
Example: Suppose we have

    a = [1]          [ 2]
        [2]  and b = [-1]

then

    a'b = [1 2] [ 2]
                [-1] = 1·2 + 2·(-1) = 0

so a and b are orthogonal.
Special Uses for Matrix Multiplication
Representing Systems of Simultaneous Equations – Suppose we have the following system of simultaneous equations:
px1 + qx2 + rx3 = M
dx1 + ex2 + fx3 = N
If we let

    A = [p q r]       [x1]       [M]
        [d e f],  x = [x2],  b = [N]
                      [x3]

then we can represent the system (in matrix notation) as Ax = b (why?)
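A sketch of this representation in NumPy; the numeric values below for p, q, r, d, e, f and x1, x2, x3 are arbitrary stand-ins, since the slide leaves them symbolic:

    import numpy as np

    p, q, r = 1.0, 2.0, 3.0     # hypothetical coefficients, first equation
    d, e, f = 4.0, 5.0, 6.0     # hypothetical coefficients, second equation
    x1, x2, x3 = 7.0, 8.0, 9.0  # hypothetical values for the unknowns

    A = np.array([[p, q, r],
                  [d, e, f]])   # coefficient matrix
    x = np.array([x1, x2, x3])  # vector of unknowns
    b = A @ x                   # Ax reproduces the two left-hand sides

    print(np.isclose(b[0], p*x1 + q*x2 + r*x3))  # True: first equation
    print(np.isclose(b[1], d*x1 + e*x2 + f*x3))  # True: second equation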
Special Uses for Matrix Multiplication
Linear Independence – any subset of columns (or rows) of a matrix A are said to be linearly independent if no column (row) in the subset can be expressed as a linear combination of other columns (rows) in the subset.
If such a combination exists, then the columns (rows) are said to be linearly dependent.
Special Uses for Matrix Multiplication
The Rank of a matrix is defined to be the number of linearly independent columns (or rows) of the matrix.
Nonsingular (Full Rank) Matrix – Any matrix that has no linear dependencies among its columns (rows). For a square matrix A this implies that Ax = 0 iff x = 0.
Singular (Not of Full Rank) Matrix – Any matrix that has at least one linear dependency among its columns (rows).
Special Uses for Matrix Multiplication
Example - The following matrix A
    A = [1 2  3]
        [3 4  9]
        [6 5 18]
is singular (not of full rank) because the third column is equal to three times the first column.
This result implies there is either i) no unique solution or ii) no existing solution to a system of equations Ax = b (why?).
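NumPy can confirm the rank deficiency; matrix_rank returns 2 rather than 3 for this matrix (a sketch):

    import numpy as np

    A = np.array([[1, 2, 3],
                  [3, 4, 9],
                  [6, 5, 18]])

    print(np.linalg.matrix_rank(A))  # 2: only two linearly independent columns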
Special Uses for Matrix Multiplication
Example - The following matrix A
    A = [1 2  5]
        [3 4 11]
        [6 5 16]
is singular (not of full rank) because the third column is equal to the first column plus two times the second column.
Note that the number of linearly independent rows in a matrix will always equal the number of linearly independent columns in the matrix.
Special Matrices
There are a number of special matrices. These include
Diagonal Matrices
Identity Matrices
Null Matrices
Commutative Matrices
Anti-Commutative Matrices
Periodic Matrices
Idempotent Matrices
Nilpotent Matrices
Orthogonal Matrices
Diagonal Matrices
A diagonal matrix is a square matrix that has values on the diagonal, with all off-diagonal entries being zero.

    [a11  0   0   0 ]
    [ 0  a22  0   0 ]
    [ 0   0  a33  0 ]
    [ 0   0   0  a44]
Identity Matrices
An identity matrix is a diagonal matrix where the diagonal elements all equal 1
When used as a premultiplier or postmultiplier of any conformable matrix A, the Identity Matrix will return the original matrix A, i.e.,
IA = AI = A
Why?
    I = [1 0 0 0]
        [0 1 0 0]
        [0 0 1 0]
        [0 0 0 1]
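The identity property is easy to check in NumPy with np.eye (a sketch, reusing the 3x3 matrix from earlier examples):

    import numpy as np

    A = np.array([[1, 4, 7],
                  [2, 5, 8],
                  [3, 6, 9]])
    I = np.eye(3)

    print(np.array_equal(I @ A, A))  # True: IA = A
    print(np.array_equal(A @ I, A))  # True: AI = A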
Null Matrices
A square matrix whose elements all equal 0
Usually arises as the difference between two equal square matrices, i.e.,
    A - B = 0 implies A = B

    0 = [0 0 0 0]
        [0 0 0 0]
        [0 0 0 0]
        [0 0 0 0]
Square Matrices
When m = n, A is called a "square matrix of order n" or an "n-square matrix"
Its elements a11, a22, a33, …, ann are called the diagonal elements.

    A = [a11 a12 … a1n]
        [a21 a22 … a2n]
        [ …           ]
        [an1 an2 … ann]
Triangular Matrices
A square matrix whose elements aij = 0, for i > j, is called upper triangular, i.e.,

    [a11 a12 … a1n]
    [ 0  a22 … a2n]
    [ …           ]
    [ 0   0  … ann]

A square matrix whose elements aij = 0, for i < j, is called lower triangular, i.e.,

    [a11  0  …  0 ]
    [a21 a22 …  0 ]
    [ …           ]
    [an1 an2 … ann]
Symmetric Matrices
A matrix A such that AT = A is called symmetric, i.e., aji = aij for all i and j.
A + AT must be symmetric. Why?

Example:

    A = [1 2]
        [2 3]

is symmetric.

A matrix A such that AT = -A is called skew-symmetric, i.e., aji = -aij for all i and j.
A - AT must be skew-symmetric. Why?
Commutative Matrices
Any two square matrices A and B such that AB = BA are said to commute.
Note that it is easy to show that any square matrix A commutes with both itself and with a conformable identity matrix I.
Anti-Commutative Matrices
Any two square matrices A and B such that AB = -BA are said to anti-commute.
Periodic Matrices
Any matrix A such that A^(k+1) = A is said to be of period k.
Of course any idempotent matrix (see below) is of period k for any positive integer k (why?).
Idempotent Matrices
Any matrix A such that A^2 = A is said to be idempotent.
Thus an idempotent matrix is of period k for any positive integer k.
Nilpotent Matrices
Any matrix A such that A^p = 0, where p is a positive integer, is said to be nilpotent.
Note that if p is the least positive integer such that A^p = 0, then A is said to be nilpotent of index p.
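A small NumPy illustration of an idempotent and a nilpotent matrix; both matrices are standard textbook examples, not taken from the slides:

    import numpy as np

    P = np.array([[1, 0],
                  [0, 0]])
    print(np.array_equal(P @ P, P))  # True: P is idempotent

    N = np.array([[0, 1],
                  [0, 0]])
    print(N @ N)  # the 2x2 null matrix, so N is nilpotent of index 2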
Orthogonal Matrices
A square matrix A whose rows (considered as vectors) are mutually perpendicular and of unit length is called orthogonal, i.e.,

    A'A = I

Note that A is orthogonal iff A-1 = A'. Equivalently, A is orthogonal if AAT = ATA = I, i.e., AT = A-1.

Example: prove that

    A = [cos θ  -sin θ]
        [sin θ   cos θ]

is orthogonal.

Since

    AT = [ cos θ  sin θ]
         [-sin θ  cos θ]

we have AAT = ATA = I. Can you show the details?

In fact, we'll see that an orthogonal matrix represents a rotation!
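A numerical check that the rotation matrix is orthogonal, sketched in NumPy for one arbitrary angle:

    import numpy as np

    theta = 0.3  # any angle will do
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    print(np.allclose(A.T @ A, np.eye(2)))     # True: A'A = I
    print(np.allclose(A @ A.T, np.eye(2)))     # True: AA' = I
    print(np.allclose(np.linalg.inv(A), A.T))  # True: A^-1 = A'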
Inverse Matrices
If matrices A and B are such that AB = BA = I, then B is called the inverse of A (symbol: A-1) and A is called the inverse of B (symbol: B-1).

Example: Show that B is the inverse of matrix A, where

    A = [2 1]          [ 1 -1]
        [1 1]  and B = [-1  2]

Ans: Note that AB = BA = I. Can you show the details?
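The inverse pair can be verified numerically; np.linalg.inv recovers B from A (a sketch using the example matrices above):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])
    B = np.array([[ 1.0, -1.0],
                  [-1.0,  2.0]])

    print(np.allclose(A @ B, np.eye(2)))     # True: AB = I
    print(np.allclose(B @ A, np.eye(2)))     # True: BA = I
    print(np.allclose(np.linalg.inv(A), B))  # True: B = A^-1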
Properties of the Transpose and Inverse of Matrices
(AB)-1 = B-1A-1
(A-1)-1 = A
(kA)-1 = A-1/k, for a nonzero scalar k
(AT)T = A and (kA)T = kAT
(A + B)T = AT + BT
(AB)T = BTAT
Traces of Matrices
The trace of a square matrix A is the sum of its diagonal elements
Denoted tr(A)
We have tr(A) = a11 + a22 + … + ann
For example, the trace of

    A = [1 2]
        [3 4]

is tr(A) = 1 + 4 = 5.
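np.trace computes the same sum directly (a sketch):

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])

    print(np.trace(A))  # 5 = 1 + 4, the sum of the diagonal elements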