Mathematics for Economists
Chapters 4-5: Linear Models and Matrix Algebra
Johann Carl Friedrich Gauss (1777–1855)
The Nine Chapters on the Mathematical Art (1000–200 BC)
Objectives of math for economists
To understand mathematical economics problems by stating the unknown, the data and the conditions
To plan solutions to these problems by finding a connection between the data and the unknown
To carry out your plans for solving mathematical economics problems
To examine the solutions to mathematical economics problems for general insights into current and future problems
Remember: Math econ is like love – a simple idea but it can get complicated.
4. Linear Algebra
Some history: The beginnings of matrices and determinants go back to the second century BC, although traces can be seen back to the fourth century BC. But the ideas did not make it into mainstream mathematics until the late 16th century.
The Babylonians, around 300 BC, studied problems which led to simultaneous linear equations.
The Chinese, between 200 BC and 100 BC, came much closer to matrices than the Babylonians. Indeed, the text Nine Chapters on the Mathematical Art written during the Han Dynasty gives the first known example of matrix methods.
In Europe, two-by-two determinants were considered by Cardano at the end of the 16th century and larger ones by Leibniz and, in Japan, by Seki about 100 years later.
4. What is a Matrix?
A matrix is a set of elements, organized into rows and columns:

    [a b]   <- rows
    [c d]
     columns

• a and d are the diagonal elements.
• b and c are the off-diagonal elements.
• Matrices are like plain numbers in many ways: they can be added, subtracted, and, in some cases, multiplied and inverted (divided).
4. Matrix: Details
Examples:

    A = [1 1]        b = [b d b]
        [b d]

• Dimensions of a matrix: number of rows by number of columns. The matrix A is a 2x2 matrix; b is a 1x3 matrix.
• A matrix with only one column or only one row is called a vector.
• If a matrix has an equal number of rows and columns, it is called a square matrix. Matrix A, above, is a square matrix.
• Usual Notation: Upper case letters => matrices
Lower case => vectors
4.1 Basic Operations
Addition, Subtraction, Multiplication
Addition (just add elements):

    [a b] + [e f] = [a+e b+f]
    [c d]   [g h]   [c+g d+h]

Subtraction (just subtract elements):

    [a b] - [e f] = [a-e b-f]
    [c d]   [g h]   [c-g d-h]

Multiplication (multiply each row by each column):

    [a b] [e f] = [ae+bg af+bh]
    [c d] [g h]   [ce+dg cf+dh]

Scalar multiplication (multiply each element by the scalar):

    k [a b] = [ka kb]
      [c d]   [kc kd]
4.1 Matrix multiplication: Details
Multiplication of matrices requires a conformability condition.
The conformability condition for multiplication is that the column dimension of the lead matrix A must be equal to the row dimension of the lag matrix B.
What are the dimensions of the vector, matrix, and result?

    c = aB = [a11 a12] [b11 b12 b13] = [a11b11+a12b21  a11b12+a12b22  a11b13+a12b23]
                       [b21 b22 b23]

• Dimensions: a(1x2), B(2x3) => c(1x3)
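The conformability check and the row-by-column recipe can be sketched in a few lines of Python (a minimal illustration; the numeric entries are made up, not from the slides):

```python
def matmul(A, B):
    """Multiply matrix A (n x k) by matrix B (k x m), given as lists of rows."""
    if len(A[0]) != len(B):  # conformability: columns of A must equal rows of B
        raise ValueError("not conformable")
    return [[sum(A[i][p] * B[p][j] for p in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# a is 1x2, B is 2x3 => the product c is 1x3
a = [[1, 2]]
B = [[1, 2, 3],
     [4, 5, 6]]
c = matmul(a, B)
print(c)  # [[9, 12, 15]]
```

Swapping the factors, `matmul(B, a)` fails the conformability check, mirroring the lead/lag distinction in the text.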
4.1 Basic Matrix Operations: Examples

Matrix addition:

    [2 1] + [3 1] = [5  2]
    [7 9]   [0 2]   [7 11]

Matrix subtraction:

    [2 1] - [1 0] = [1 1]
    [7 9]   [2 3]   [5 6]

Matrix multiplication:

    [2 1] [1 0] = [ 4  3]
    [7 9] [2 3]   [25 27]

Scalar multiplication:

    2 [2 1] = [ 4  2]
      [7 9]   [14 18]
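The element-wise rules are easy to verify mechanically; a small Python sketch using illustrative 2x2 matrices:

```python
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(k, A):
    return [[k * a for a in row] for row in A]

A  = [[2, 1], [7, 9]]
B1 = [[3, 1], [0, 2]]
B2 = [[1, 0], [2, 3]]
print(add(A, B1))   # [[5, 2], [7, 11]]
print(sub(A, B2))   # [[1, 1], [5, 6]]
print(scale(2, A))  # [[4, 2], [14, 18]]
```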
4.1 Laws of Matrix Addition & Multiplication

    A + B = [a11 a12] + [b11 b12] = [a11+b11 a12+b12]
            [a21 a22]   [b21 b22]   [a21+b21 a22+b22]

Commutative law of Matrix Addition: A + B = B + A

    B + A = [b11 b12] + [a11 a12] = [b11+a11 b12+a12] = A + B
            [b21 b22]   [a21 a22]   [b21+a21 b22+a22]
Matrix multiplication is distributive across addition: A(B + C) = AB + AC (assuming conformability applies).
4.1 Matrix Multiplication

Matrix multiplication is generally not commutative. That is, AB ≠ BA even if BA is conformable (because the dot products of the rows and columns of A and B differ). Example:

    A = [1 2]     B = [0 -1]
        [3 4]         [6  7]

    AB = [1x0+2x6  1x(-1)+2x7] = [12 13]
         [3x0+4x6  3x(-1)+4x7]   [24 25]

    BA = [0x1+(-1)x3  0x2+(-1)x4] = [-3 -4]
         [6x1+7x3     6x2+7x4   ]   [27 40]
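A quick computational check that the two products differ (the matrices are illustrative):

```python
def mm(X, Y):
    """Row-by-column matrix product for lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2], [3, 4]]
B = [[0, -1], [6, 7]]
print(mm(A, B))  # [[12, 13], [24, 25]]
print(mm(B, A))  # [[-3, -4], [27, 40]]  -> AB != BA
```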
4.1 Matrix multiplication
Exceptions to the non-commutative law: AB = BA if
- B is a scalar (a scalar multiple of the identity),
- B is the identity matrix I, or
- B is the inverse of A, i.e., A⁻¹.
Theorem: It is not true that AB = AC => B = C.
Counterexample (an illustrative choice of matrices):

    A = [1 1]   B = [1 0]   C = [0 0]
        [1 1]       [0 0]       [1 0]

    AB = [1 0] = AC,  but B ≠ C.
         [1 0]
Note: If AB = AC for all matrices A, then B=C.
4.1 Transpose Matrix
Example:

    A = [3 1]        A' = [3 8 9]
        [8 0]             [1 0 4]
        [9 4]
The transpose of a matrix A is another matrix AT (also written A′) created by any one of the following equivalent actions:- write the rows (columns) of A as the columns (rows) of AT - reflect A by its main diagonal to obtain AT
Formally, the (i,j) element of AT is the (j,i) element of A: [AT]ij = [A]ji.
If A is an m × n matrix => AT is an n × m matrix. (A')' = A. Conformability changes unless the matrix is square.
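Swapping rows and columns is a one-liner in Python; a sketch (the 3x2 matrix is illustrative):

```python
def transpose(A):
    """Rows of A become the columns of A' (and vice versa)."""
    return [list(col) for col in zip(*A)]

A = [[3, 1],
     [8, 0],
     [9, 4]]          # 3 x 2
print(transpose(A))   # [[3, 8, 9], [1, 0, 4]] -- a 2 x 3 matrix
assert transpose(transpose(A)) == A   # (A')' = A
```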
4.1 Inverse of a Matrix
Identity matrix: AI = A

    I = [1 0 0]
        [0 1 0]
        [0 0 1]

Notation: Ij is a jxj identity matrix.
Given A (mxn), the matrix B (nxm) is a right-inverse for A iff AB = Im.
Given A (mxn), the matrix C (nxm) is a left-inverse for A iff CA = In.
4.1 Inverse of a Matrix
Inversion is tricky: (ABC)⁻¹ = C⁻¹B⁻¹A⁻¹. More on this topic later.
Theorem: If A (mxn) has both a right-inverse B and a left-inverse C, then C = B.
Proof: AB = Im and CA = In. Thus,
    C(AB) = C Im = C   and   C(AB) = (CA)B = In B = B
    => C (nxm) = B (nxm).
Note:
- This matrix is unique.
- If A has both a right and a left inverse, it is called invertible.
4.1 Vector multiplication: Geometric interpretation
Think of a vector as a directed line segment in N dimensions (it has "length" and "direction").
Scalar multiplication "scales" the vector, i.e., changes its length. This is the source of linear dependence.
[Figure: the vector u' = [3 2] and scalar multiples such as 2u' = [6 4], plotted in the (x1, x2) plane.]
4.1 Vector Addition: Geometric interpretation
v' = [2 3], u' = [3 2], w' = v' + u' = [5 5].
Note that two vectors, plus the concepts of addition and multiplication, can create a two-dimensional space.
16
x1
x2
5
4
3
2
1
1 2 3 4 5
w
u
v
u
A vector space is a mathematical structure formed by a collection of vectors, which may be added together and multiplied by scalars. (It’s closed under multiplication and addition).
4.1 Vector Space
Given a field R and a set V of objects, on which "vector addition" (V×V → V), denoted by "+", and "scalar multiplication" (R×V → V), denoted by "·", are defined. If the following axioms are true for all objects u, v, and w in V and all scalars c and k in R, then V is called a vector space and the objects in V are called vectors.
1. u + v is in V (closed under addition).
2. u + v = v + u (vector addition is commutative).
3. θ is in V, such that u + θ = u (null element).
4. u + (v + w) = (u + v) + w (associative law of vector addition).
5. For each v, there is a -v, such that v + (-v) = θ.
6. c·u is in V (closed under scalar multiplication).
7. c·(k·u) = (c·k)·u (scalar multiplication is associative).
4.1 Vector Space
8. c·(v + u) = (c·v) + (c·u).
9. (c + k)·u = (c·u) + (k·u).
10. 1·u = u (unit element).
11. 0·u = θ (zero element).

We can write S = {V, R, +, ·} to denote an abstract vector space.
This is a general definition. If the field R represents the real numbers, then we define a real vector space.
Definition: Linear Combination. Given vectors u1, ..., uk, the vector w = c1u1 + ... + ckuk is called a linear combination of the vectors u1, ..., uk.
4.1 Vector Space
Definition: Subspace. Given the vector space V and a set of vectors W, such that W is in V. Then W is a subspace iff: u, v in W => u + v is in W, and c·u is in W for every c in R.
4.1 System of Equations: Matrices and Vectors
Assume an economic model as a system of linear equations, in which aij are parameters (i = 1, ..., n rows; j = 1, ..., m columns; n = m), xi are endogenous variables, and di are exogenous variables and constants:

    a11 x1 + a12 x2 + ... + a1m xm = d1
    a21 x1 + a22 x2 + ... + a2m xm = d2
    ...
    an1 x1 + an2 x2 + ... + anm xm = dn
The general matrix form of a system of linear equations is Ax = d, where
A = matrix of parameters,
x = column vector of endogenous variables,
d = column vector of exogenous variables and constants.
Solve for x*.
    Ax = d
    x* = A⁻¹d

    A = [a11 a12 ... a1m]    x = [x1]    d = [d1]
        [a21 a22 ... a2m]        [x2]        [d2]
        [...             ]       [..]        [..]
        [an1 an2 ... anm]        [xn]        [dn]
4.1 System of Equations: Matrices and Vectors
4.1 Solution of a General-equation System
Assume the 2x2 model:

    2x + y = 12
    4x + 2y = 24

Find x*, y*:

    y = 12 - 2x
    4x + 2(12 - 2x) = 24
    4x + 24 - 4x = 24
    0 = 0 ?

Indeterminate!
Why? 4x + 2y = 24 is just 2(2x + y) = 2(12): one equation, 2x + y = 12, with two unknowns x, y.
Conclusion: not all simultaneous equation models have solutions (not all matrices have inverses).
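The breakdown can be detected before substituting: the coefficient matrix of this system has a zero determinant, so no inverse exists. A one-line check:

```python
# Coefficient matrix of  2x + y = 12,  4x + 2y = 24
a11, a12 = 2, 1
a21, a22 = 4, 2
det = a11 * a22 - a12 * a21   # 2*2 - 1*4
print(det)  # 0 -> A is singular; the system has no unique solution
```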
4.1 Linear dependence
A set of vectors is linearly dependent if any one of them can be expressed as a linear combination of the remaining vectors; otherwise, it is linearly independent.
Formal definition (linear independence): The set {u1, ..., uk} is called a linearly independent set of vectors iff c1u1 + ... + ckuk = θ => c1 = c2 = ... = ck = 0.
Notes:
- Dependence prevents solving a system of equations: more unknowns than independent equations.
- The number of linearly independent rows or columns in a matrix is the rank of the matrix (rank(A)).
4.1 Linear dependence
Examples:

(1) v1' = [5 12] and v2' = [10 24]. Then v2' = 2v1', so

    2v1' - v2' = 0'   => {v1, v2} is linearly dependent.

(2) v1' = [2 7], v2' = [1 8], v3' = [4 5]. Then 3v1' - 2v2' = [6-2  21-16] = [4 5] = v3', so

    3v1' - 2v2' - v3' = 0'   => {v1, v2, v3} is linearly dependent.
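Linear dependence can be confirmed by forming the stated linear combinations and checking that they give the zero vector; a small sketch:

```python
def comb(coeffs, vecs):
    """Return c1*u1 + ... + ck*uk for row vectors given as lists."""
    n = len(vecs[0])
    return [sum(c * v[i] for c, v in zip(coeffs, vecs)) for i in range(n)]

# Example 1: v2' = 2 v1'
v1, v2 = [5, 12], [10, 24]
print(comb([2, -1], [v1, v2]))          # [0, 0]

# Example 2: v3' = 3 v1' - 2 v2'
v1, v2, v3 = [2, 7], [1, 8], [4, 5]
print(comb([3, -2, -1], [v1, v2, v3]))  # [0, 0]
```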
4.2 Application 1: One Commodity Market Model (2x2 matrix)
Economic Model:
1) Qd = a - bP    (a, b > 0)
2) Qs = -c + dP   (c, d > 0)
3) Qd = Qs
Find P* and Q*.

Scalar algebra form (endogenous variables :: constants):
4) 1Q + bP = a
5) 1Q - dP = -c
    P* = (a + c) / (b + d)
    Q* = (ad - bc) / (b + d)
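A quick numerical sanity check of the reduced forms, with parameter values chosen purely for illustration (a = 10, b = 2, c = 2, d = 3 are not from the slides):

```python
a, b, c, d = 10.0, 2.0, 2.0, 3.0   # demand Qd = a - bP, supply Qs = -c + dP

P_star = (a + c) / (b + d)          # equate Qd = Qs and solve for P
Q_star = (a * d - b * c) / (b + d)

print(P_star, Q_star)  # 2.4 5.2
# sanity check: both sides of the market clear at P*
assert abs((a - b * P_star) - (-c + d * P_star)) < 1e-12
```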
4.2 One Commodity Market Model (2x2 matrix)
Matrix algebra:

    Ax = d:   [1  b] [Q] = [ a]
              [1 -d] [P]   [-c]

    x* = A⁻¹d:   [Q*] = [1  b]⁻¹ [ a]
                 [P*]   [1 -d]   [-c]
4.2 Application II: Three-Equation National Income Model (3x3 matrix)
Model:
    Y = C + I0 + G0
    C = a + b(Y - T)    (a > 0, 0 < b < 1)
    T = d + tY          (d > 0, 0 < t < 1)
Endogenous variables? Exogenous variables? Constants? Parameters? Why restrictions on the parameters?
4.2 Three-Equation National Income Model
Endogenous: Y, C, T: income (GNP), consumption, and taxes.
Exogenous: I0 and G0: autonomous investment and government spending.
Parameters:
    a & d: autonomous consumption and taxes.
    t: marginal propensity to tax gross income, 0 < t < 1.
    b: marginal propensity to consume private goods and services from gross income, 0 < b < 1.
Solution:
    Y* = (a - bd + I0 + G0) / (1 - b + bt)
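The reduced form for Y* can be checked numerically; the parameter values below are my own illustrative picks within the stated restrictions:

```python
# Illustrative parameter values (not from the slides), within the restrictions
a, b, d, t = 50.0, 0.8, 10.0, 0.25
I0, G0 = 30.0, 40.0

Y = (a - b * d + I0 + G0) / (1 - b + b * t)   # reduced form for Y*
T = d + t * Y                                 # taxes at Y*
C = a + b * (Y - T)                           # consumption at Y*

# verify the solution satisfies the original income identity Y = C + I0 + G0
assert abs(Y - (C + I0 + G0)) < 1e-9
print(round(Y, 2))  # 280.0
```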
4.2 Three-Equation National Income Model

Given (model):             Rearranged (parameters & endogenous vars. :: exogenous vars. & constants):
    Y = C + I0 + G0         1Y - 1C + 0T = I0 + G0
    C = a + b(Y - T)       -bY + 1C + bT = a
    T = d + tY             -tY + 0C + 1T = d

Find Y*, C*, T*:

    Ax = d:   [ 1 -1  0] [Y]   [I0+G0]
              [-b  1  b] [C] = [  a  ]
              [-t  0  1] [T]   [  d  ]
    x* = A⁻¹d
4.2 Three-Equation National Income Model

    [Y*]   [ 1 -1  0]⁻¹ [I0+G0]
    [C*] = [-b  1  b]   [  a  ]
    [T*]   [-t  0  1]   [  d  ]
4.3 Notes on Vector Operations
An [m x 1] column vector u and a [1 x n] row vector v yield a product matrix uv of dimension [m x n]. Example:

    u = [3]  (2x1),   v = [1 4 5]  (1x3)
        [2]

    uv = [3] [1 4 5] = [3 12 15]   (2x3)
         [2]           [2  8 10]
4.3 Vector multiplication: Dot (inner) and cross product

• The dot product produces a scalar! c'z is (1x4)(4x1) = (1x1) = z'c:

    y = c'z = [c1 c2 c3 c4] [z1] = Σ (i=1 to 4) ci zi = c1z1 + c2z2 + c3z3 + c4z4
                            [z2]
                            [z3]
                            [z4]
4.3 Vectors: Dot Product

    [a b c] · [d e f]ᵀ = ad + be + cf

    ||[a b c]|| = [aa + bb + cc]^(1/2)

    u · v = ||u|| ||v|| cos(θ)
Think of the dot product as a matrix multiplication
The magnitude (length) is the square root of the dot product of a vector with itself.
The dot product is also related to the angle between the two vectors, but it does not give the angle directly.
Note: Since cos(90°) = 0, the dot product of two orthogonal vectors is zero.
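A minimal sketch of the dot product, magnitude, and angle computations (illustrative vectors):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))   # length = sqrt(u . u)

u, v = [1.0, 0.0], [0.0, 2.0]
print(dot(u, v))                  # 0.0 -> orthogonal vectors

# recover the angle from  u.v = ||u|| ||v|| cos(theta)
cos_theta = dot(u, v) / (norm(u) * norm(v))
print(math.degrees(math.acos(cos_theta)))  # 90.0
```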
4.3 Vectors: Magnitude and Phase (direction)

    v = [x1 x2 ... xn]ᵀ

    ||v|| = [Σ (i=1 to n) xi²]^(1/2)    (magnitude or "2-norm")

    If ||v|| = 1, v is a unit vector (pure direction).

[Figure: a vector v in the (x, y) plane with length ||v|| and angle θ ("phase").]
Alternate representations: polar coordinates (||v||, θ); complex numbers ||v||e^(jθ).
4.3 Vectors: Cross Product
The cross product of vectors A and B is a vector C which is perpendicular to both A and B.
The magnitude of C is proportional to the sine of the angle between A and B:

    ||a × b|| = ||a|| ||b|| sin(θ)

The direction of C follows the right-hand rule; this is why we call it a "right-handed coordinate system."
4.3 Vectors: Cross Product: Right hand rule
4.3 Vectors: Norm
• Given a vector space V, the function g: V → R is called a norm if and only if:
1) g(x) ≥ 0, for all x ∈ V
2) g(x) = 0 iff x = θ (the null vector)
3) g(αx) = |α| g(x) for all α ∈ R, x ∈ V
4) g(x + y) ≤ g(x) + g(y) ("triangle inequality") for all x, y ∈ V

The norm is a generalization of the notion of size or length of a vector.
• An infinite number of functions can be shown to qualify as norms. For vectors in Rⁿ, we have the following examples:

    g(x) = maxᵢ |xᵢ|,   g(x) = Σᵢ |xᵢ|,   g(x) = [Σᵢ (xᵢ)⁴]^(1/4)
• Given a norm on a vector space, we can define a measure of “how far apart” two vectors are using the concept of a metric.
4.3 Vectors: Metric
• Given a vector space V, the function d: V×V → R is called a metric if and only if:
1) d(x,y) ≥ 0, for all x, y ∈ V
2) d(x,y) = 0 iff x = y
3) d(x,y) = d(y,x) for all x, y ∈ V
4) d(x,y) ≤ d(x,z) + d(z,y) ("triangle inequality") for all x, y, z ∈ V

Given a norm g(.), we can define a metric by the equation d(x,y) = g(x - y).
• The metric derived from the Euclidean norm is called the Euclidean distance metric.
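The norm-to-metric construction can be sketched directly, using the Euclidean norm as the example g:

```python
def g(x):
    """Euclidean norm on R^n."""
    return sum(xi * xi for xi in x) ** 0.5

def d(x, y):
    """Metric induced by the norm: d(x, y) = g(x - y)."""
    return g([xi - yi for xi, yi in zip(x, y)])

x, y, z = [0.0, 0.0], [3.0, 4.0], [1.0, 1.0]
print(d(x, y))                          # 5.0
assert d(x, y) == d(y, x)               # symmetry
assert d(x, y) <= d(x, z) + d(z, y)     # triangle inequality
```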
4.3 Orthonormal Basis
Basis: a space is totally defined by a set of basis vectors; any point is a linear combination of the basis.
Ortho-normal: orthogonal + normal.
Orthogonal: dot product is zero.
Normal: magnitude is one.
Example: X, Y, Z (but they don't have to be!):

    x = [1 0 0]ᵀ,  y = [0 1 0]ᵀ,  z = [0 0 1]ᵀ

    x · y = 0,  x · z = 0,  y · z = 0
• X, Y, Z is an orthonormal basis. We can describe any 3D point as a linear combination of these vectors.
4.3 Orthonormal Basis
• How do we express any point as a combination of a new basis U, V, N, given X, Y, Z?
• To change a point from one coordinate system to another, compute the dot product of the coordinate row with each of the basis vectors:

    p = [a b c]  =>  (p·u, p·v, p·n) = (a u1 + b u2 + c u3,  a v1 + b v2 + c v3,  a n1 + b n2 + c n3)

(not an actual formula - just a way of thinking about it)
4.5 Identity and Null Matrices
An identity matrix is a square, diagonal matrix with 1s along the diagonal; it is similar to the scalar "1":

    [1 0]    [1 0 0]
    [0 1],   [0 1 0],  etc.
             [0 0 1]

A null matrix is one in which all elements are zero; it is similar to the scalar "0":

    [0 0 0]
    [0 0 0]
    [0 0 0]

Both are diagonal matrices.
Both are symmetric and idempotent:
    A = AT and A = A² = A³ = ...
4.6 Inverse matrix

    AA⁻¹ = I,   A⁻¹A = I

A matrix must be square to have a unique inverse.
If an inverse exists for a square matrix, it is unique.
(A')⁻¹ = (A⁻¹)'

Solving Ax = d with the inverse:
• Ax = d
• A⁻¹Ax = A⁻¹d
• Ix = A⁻¹d
• x = A⁻¹d
• The solution depends on A⁻¹ existing: linear independence; determinant test!
4.6 Inverse of a Matrix

    [a b c | 1 0 0]
    [d e f | 0 1 0]
    [g h i | 0 0 1]

1. Append the identity matrix to A.
2. Subtract multiples of the other rows from each row to reduce each diagonal element to 1 and each off-diagonal element to 0.
3. Transform the identity matrix as you go.
4. When the original matrix is the identity, the identity has become the inverse!
4.6 Determination of the Inverse (Gauss-Jordan Elimination)

1) Augmented matrix: AX = I, where A, X, and I are all (nxn) square matrices. Solving for X yields X = A⁻¹.

2) Transform the augmented matrix:

    [A I] -> [U H] -> [I K]

where the first step is Gauss elimination (U: upper triangular) and the second is further row operations (Gauss-Jordan elimination). Then IX = K, so X = A⁻¹ = K.
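The [A I] → [I K] procedure can be sketched as a small Gauss-Jordan routine (a simplified illustration with partial pivoting, not production code):

```python
def invert(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I]."""
    n = len(A)
    # build the augmented matrix [A | I]
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # partial pivoting: bring the largest entry of the column to the diagonal
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # scale the pivot row so the diagonal element becomes 1
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # eliminate the column from every other row
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]   # the right half is now A^-1

A = [[2.0, 1.0], [1.0, 1.0]]
print(invert(A))   # [[1.0, -1.0], [-1.0, 2.0]]
```

Feeding it a singular matrix (such as the coefficient matrix of the indeterminate 2x2 system earlier) raises an error, matching the determinant test.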
4.6 Determinant of a Matrix
The determinant is a number associated with any square matrix. If A is an nxn matrix, the determinant is denoted by |A| or det(A).
Determinants are used to characterize invertible matrices: a matrix is invertible (non-singular) if and only if it has a non-zero determinant. That is, if |A| ≠ 0, then A is invertible.
Determinants are used to describe the solution to a system of linear equations with Cramer's rule.
They can be found using factorials, pivots, and cofactors! More on this later.
Lots of interpretations.
4.6 Determinant of a Matrix
Used for inversion. Example: inverse of a 2x2 matrix:

    A = [a b],   det(A) = |A| = ad - bc
        [c d]

    A⁻¹ = 1/(ad - bc) [ d -b]
                      [-c  a]

The matrix [d -b; -c a] is called the adjugate of A (or adj(A)):

    A⁻¹ = adj(A)/|A|
4.6 Determinant of a Matrix (3x3)

    |a b c|
    |d e f| = aei + bfg + cdh - ceg - bdi - afh
    |g h i|

Sarrus' Rule: sum the diagonal products running left to right, then subtract the diagonal products running right to left.
Note: the determinant of an nxn matrix has n! terms.
4.6 Determinants: Laplace formula
The determinant of a matrix of arbitrary size can be defined by the Leibniz formula or the Laplace formula.
The Laplace formula (or expansion) expresses the determinant |A| as a sum of n determinants of (n-1) × (n-1) sub-matrices of A. There are 2n such expansions, one for each row and each column of A.
Define the (i,j) minor Mij (usually written as |Mij|) of A as the determinant of the (n-1) × (n-1) matrix that results from deleting the i-th row and the j-th column of A.
Pierre-Simon Laplace (1749–1827).
4.6 Determinants: Laplace formula
Define Cij, the (i,j) cofactor of A, as:

    Cij = (-1)^(i+j) |Mij|

• The cofactor matrix of A, denoted by C, is defined as the nxn matrix whose (i,j) entry is the (i,j) cofactor of A. The transpose of C is called the adjugate or adjoint of A (adj(A)).
• Theorem (Determinant as a Laplace expansion)
Suppose A = [aij] is an nxn matrix and i, j ∈ {1, 2, ..., n}. Then the determinant

    |A| = ai1Ci1 + ai2Ci2 + ... + ainCin
        = a1jC1j + a2jC2j + ... + anjCnj
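The Laplace expansion translates directly into a recursive determinant routine (a sketch expanding along the first row; fine for small matrices, but note the n! cost):

```python
def det(A):
    """Determinant via Laplace expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_1j: delete row 1 and column j
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        cofactor = (-1) ** j * det(minor)   # C_1j = (-1)^(1+j) |M_1j|
        total += A[0][j] * cofactor
    return total

print(det([[1, 2], [3, 4]]))                   # -2
print(det([[1, 2, 3], [0, 1, 0], [2, 4, 6]]))  # 0 -> singular
```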
4.6 Determinants: Laplace formula
Example:

    A = [1 2 3]
        [0 1 0]
        [2 4 6]

    |A| = 1·C11 + 2·C12 + 3·C13
        = 1·(1x6 - 0x4) + 2·(-1)·(0x6 - 0x2) + 3·(0x4 - 1x2)
        = 6 + 0 - 6 = 0

|A| is zero => the matrix is singular. (Check!)
4.6 Determinants: Properties
Interchanging the rows and columns (transposing) does not affect |A|: |A| = |A'|.
|kA| = kⁿ|A|, where k is a scalar.
|I| = 1, where I is the identity matrix.
|AB| = |A||B|.
|A⁻¹| = 1/|A|.
4.6 Matrix inversion: Note
It is not possible to divide one matrix by another; that is, we cannot write A/B. For two matrices A and B, the quotient can be written as AB⁻¹ or B⁻¹A.
In general, in matrix algebra, AB⁻¹ ≠ B⁻¹A. Thus, writing A/B does not clearly identify whether it represents AB⁻¹ or B⁻¹A. We say B⁻¹ post-multiplies A (for AB⁻¹) and B⁻¹ pre-multiplies A (for B⁻¹A).
Matrix division is matrix inversion.
4.7 Application IV: Finite Markov Chains
Markov processes are used to measure movements over time.
Employees at time 0 are distributed over two plants, A and B:

    x0' = [A0 B0] = [100 100]

The employees stay at or move between the plants with known probabilities:

    M = [pAA pAB] = [.7 .3]
        [pBA pBB]   [.4 .6]

At the end of one year, how many employees will be at each plant?

    x0' M = [100 100] [.7 .3] = [.7x100+.4x100  .3x100+.6x100] = [110 90]
                      [.4 .6]
4.7 Application IV: Finite Markov Chains
At the end of two years, how many employees will be at each plant? Apply M again (associative law of multiplication):

    x0' M² = (x0' M) M = [110 90] [.7 .3] = [.7x110+.4x90  .3x110+.6x90] = [113 87]
                                  [.4 .6]
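The two transition steps can be reproduced with a small sketch (the transition matrix and initial distribution are those of the example):

```python
def step(x, M):
    """One period of the chain: next distribution = x * M (x is a row vector)."""
    return [sum(x[i] * M[i][j] for i in range(len(x))) for j in range(len(M[0]))]

M = [[0.7, 0.3],   # stay at A / move A -> B
     [0.4, 0.6]]   # move B -> A / stay at B
x0 = [100, 100]    # employees at plants A and B at time 0

x1 = step(x0, M)                    # after one year
x2 = step(x1, M)                    # after two years
print([round(v, 6) for v in x1])    # [110.0, 90.0]
print([round(v, 6) for v in x2])    # [113.0, 87.0]
```

Iterating `step` further traces the chain toward its long-run distribution.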
Ch. 4 Linear Models & Matrix Algebra: Summary
Matrix algebra can be used:
a. to express the system of equations in a compact notation;
b. to find out whether solution to a system of equations exist; and
c. to obtain the solution if it exists. Need to invert the A matrix to find the solution for x*
    Ax = d
    x* = A⁻¹d
    A⁻¹ = adj(A)/det(A)
    x* = adj(A) d / det(A)
Ch. 4 Notation and Definitions: Summary
A (upper case letters) = matrix
b (lower case letters) = vector
nxm = n rows, m columns
rank(A) = number of linearly independent vectors of A
trace(A) = tr(A) = sum of diagonal elements of A
Null matrix = all elements equal to zero
Diagonal matrix = all non-zero elements are on the diagonal
I = identity matrix (diagonal elements: 1, off-diagonal: 0)
|A| = det(A) = determinant of A
A⁻¹ = inverse of A
A' = AT = transpose of A
A = AT: symmetric matrix
A' = A⁻¹: orthogonal matrix
|Mij| = minor of A
You know too much linear algebra when...
You look at the long row of milk cartons at Whole Foods --soy, skim, .5% low-fat, 1% low-fat, 2% low-fat, and whole-- and think: "Why so many? Aren't soy, skim, and whole a basis?"