8/11/2019 Honors Linear Algebra Main Problem Set
Honors Linear Algebra Main Problem Set
Note: Problems are usually referenced by section and number, e.g., C6. If a problem is referenced by a number alone, that means it is in the current section. Thus a reference to Problem 5 within Section DA means Problem DA5.
Section A. Matrix Operations, More Theoretical
The next few problems (1–7) ask you to prove various basic properties of matrix operations. A satisfactory proof must rely on definitions or earlier theorems, but at this point you have no theorems about matrices, so you must rely on definitions. All the properties say that two matrices are equal, so you need to apply the definition that A = B if corresponding entries are equal as numbers (e.g., that a_ij = b_ij for all i, j). If a problem is about, say, two matrix sums being equal, then you will also need to invoke the definition of A + B, which is also an entrywise definition. Thus good proofs of these results at this stage will almost surely be entrywise arguments.
1. Prove that A + B = B + A (more precisely, if A and B are the same shape, say m × n, so that both sums are defined, then both sums are equal). This fact is pretty obvious, so it is a good one for concentrating on appropriate proof style. Do what you think constitutes a good writeup and we will discuss it in class.
2. Prove that scalar multiplication distributes over addition of matrices: k(A + B) = kA + kB.
3. Prove that matrix multiplication distributes over matrix addition: C(A + B) = CA + CB.
4. Prove: k(AB) = (kA)B = A(kB). We say that scalars pass through matrix multiplication even though matrix multiplication itself does not commute.
5. Prove: a(bM) = (ab)M. This is sort of an associative law. It's not the associative law, because that involves just one sort of multiplication and this involves two (why?).
6. Prove 1M = M. That's the scalar 1, not I, so this theorem is about scalar multiplication. The theorem is rather obvious, but it does show up again later.
7. Prove the Associative Law: A(BC) = (AB)C. (More precisely: For any matrices A, B, C, if A(BC) exists, then so does (AB)C, and they are equal.)
8. If B and B′ both commute with A, for each of the matrices below prove or disprove that it must commute with A:
kB,  B + B′,  BB′,  B^2.
Note: Since the hypothesis (that B and B′ commute with A) algebraicizes into statements at the matrix level, namely AB = BA and AB′ = B′A, rather than at the entry level, it is reasonable to look for a proof entirely at the matrix level. Most earlier proofs have been at the entry level.
9. Does the high school algebra identity a^2 − b^2 = (a + b)(a − b) generalize to n × n matrices?
10. Show that
A^{-1} + B^{-1} = A^{-1}[A + B]B^{-1}.
(Just multiply out the right-hand side.) This looks pretty weird, but it does generalize a familiar high school algebra fact. What fact?
S. Maurer, Math 28P, Fall 2005
11. The transpose A^T of a matrix A is obtained by interchanging rows and columns. Thus, if A = [a_ij] is m × n, then A^T is n × m and its ji entry is a_ij. Another name for the ji entry of A^T is a^T_ji, so by definition we may write A^T = [a^T_ji], where a^T_ji = a_ij. (Why call the typical entry of A^T the ji entry when usually we use ij? Because i has already been associated with the rows of A and j with the columns of A. Since A is m × n, i ranges from 1 to m and j from 1 to n. By naming the typical entry of A^T as a^T_ji we are saying that A^T is n × m.)
Find a formula for (AB)^T in terms of A^T and B^T. Hint: It helps to think of A as made up of rows and B as made up of columns, and to picture what AB looks like then. Now, what does B^T A^T look like?
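A quick numerical spot check of the formula this problem is driving at. This is only a sketch with hand-rolled helpers and made-up matrices (the problem set itself is language-agnostic); one example is evidence, not a proof.

```python
def transpose(M):
    # Entry (j, i) of the transpose is entry (i, j) of M.
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

def matmul(A, B):
    # Ordinary row-times-column matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]            # 2 x 3
B = [[1, 0],
     [2, 1],
     [0, 3]]               # 3 x 2

lhs = transpose(matmul(A, B))              # (AB)^T
rhs = matmul(transpose(B), transpose(A))   # B^T A^T
print(lhs == rhs)  # True
```

Note that the shapes alone already rule out (AB)^T = A^T B^T here: A^T is 3 × 2 and B^T is 2 × 3, so A^T B^T is 3 × 3, the wrong size.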
12. Prove: Regardless of the shape of matrix A, both products AA^T and A^T A exist. In fact, both products have the same sort of shape; what sort?
13. a) Come up with a 2 × 2 nonzero matrix A, with all entries nonzero, and a 2 × 1 vector b, also with nonzero entries, such that Ab = 0.
b) For the same A, come up with a nonzero 2 × 2 matrix B, with all entries nonzero, such that AB = 0. Show the multiplication as a check. Note what we have shown: the claim
ab = 0 =⇒ [a = 0 or b = 0],
true and so useful for real number multiplication, is false for matrix multiplication.
c) Leave your b from above alone, and modify your A slightly to obtain a C ≠ I such that Cb = b.
14. Show that the cancellation law
AC = BC =⇒ A = B
is false for matrices, even nonzero matrices. Hint: use Prob. 13.
15. Let
P = [1 1; 1 2],  Q = [1 0; 2 1],  R = [1 0; 1 1],  S = [3 1; 2 1],  A = [0 1; 1 0]
(rows separated by semicolons). By direct calculation, verify that
PQ = RS,
but
PAQ ≠ RAS.
This shows that "mid multiplication" doesn't work (though it does work for ordinary numbers; why?).
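A machine check of the two products, using the matrices as reconstructed above (the layout was garbled in this copy, so treat the entries as assumed). It confirms that PQ and RS agree while PAQ and RAS do not.

```python
def matmul(A, B):
    # Row-times-column matrix product for lists of lists.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[1, 1], [1, 2]]
Q = [[1, 0], [2, 1]]
R = [[1, 0], [1, 1]]
S = [[3, 1], [2, 1]]
A = [[0, 1], [1, 0]]

print(matmul(P, Q) == matmul(R, S))                        # True
print(matmul(matmul(P, A), Q) == matmul(matmul(R, A), S))  # False
```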
S. Maurer Main 28P Problem Set, Fall 2005
8/11/2019 Honors Linear Algebra Main Problem Set
3/105
page 3 Section AA, Matrix Operations, Numerical Practice
16. The trace of a matrix is the sum of its main diagonal entries. Let A, B be any two square matrices of the same size. Prove that AB and BA have the same trace.
17. Suppose the columns of A are c_1, c_2, . . . , c_n and the rows of B are r_1, r_2, . . . , r_n. Show that
AB = Σ_{i=1}^n c_i r_i.
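A sketch illustrating both claims on one made-up pair of 2 × 2 matrices: the traces of AB and BA agree, and summing the outer products c_i r_i reproduces AB. Again, a spot check, not a substitute for the proofs.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def outer(c, r):
    # Outer product of column c (length m) and row r (length n): an m x n matrix.
    return [[ci * rj for rj in r] for ci in c]

def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]

# Problem 16: tr(AB) = tr(BA), even though AB and BA differ.
print(trace(matmul(A, B)) == trace(matmul(B, A)))  # True

# Problem 17: AB equals the sum of outer products c_i r_i.
cols = [[A[i][k] for i in range(2)] for k in range(2)]  # columns of A
total = madd(outer(cols[0], B[0]), outer(cols[1], B[1]))
print(total == matmul(A, B))  # True
```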
Section AA. Matrix Operations, Numerical Practice
1. Let the rows of B be r_1, r_2, r_3. You want to premultiply B to obtain the matrix with rows
r_1 + r_2,  r_2 − 2r_3,  r_1 + r_3 − 2r_2.
What is the premultiplier matrix?
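A sketch of the principle being exercised here, using a different row combination than the one the problem asks for (so the answer isn't given away) and a made-up B: each row of the premultiplier lists the coefficients of (r_1, r_2, r_3) in the corresponding target row.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

B = [[1, 2],
     [3, 4],
     [5, 6]]       # rows r1, r2, r3 (made-up entries)

# To produce rows r1 - r2, r2, r3 + 2*r1, encode each target row
# as coefficients of (r1, r2, r3):
E = [[1, -1, 0],
     [0,  1, 0],
     [2,  0, 1]]

print(matmul(E, B))  # [[-2, -2], [3, 4], [7, 10]]
```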
2. What does premultiplying by
[0 0 1; 1 0 0; 0 1 0]
do to a matrix?
3. Determine a premultiplier matrix that converts
[1 1 1; 0 0 0; 0 1 1]  to  [1 0 0; 0 1 1; 0 0 0]
by elementary row operations.
4. Determine the premultiplier matrix that does the downsweep from the first pivot in
[1 2 3; 2 3 4; 3 4 4].
5. Suppose mystery matrix A has the property that
A (1, 2, 3)^T = (1, 0)^T  and  A (4, 2, 1)^T = (0, 1)^T.
Name a right inverse of A.
6. Suppose mystery matrix A has the property that
(1, 2, 3) A = (1, 0)  and  (4, 2, 1) A = (0, 1).
Name a left inverse of A.
7. Suppose mystery matrix B satisfies
B (1, 0)^T = (2, 3)^T  and  B (0, 1)^T = (5, 2)^T.
a) Quickly find X such that
BX = [5 4; 2 6].
b) Quickly find B.
8. Suppose
C [1 3; 2 4] = [1 0; 0 1].
Quickly find x and y so that
Cx = (1, 0)^T  and  Cy = (2, 1)^T.
9. If
D (4, 5)^T = (1, 1)^T  and  D (2, 7)^T = (1, 2)^T,
find x that satisfies
Dx = (1, 0)^T.
10. State and solve a row analog of Problem AA9.
11. Assume that the following chart shows the number of grams of nutrients per ounce of food
indicated. (Actually, the numbers are made up.)
meat potato cabbage
protein 20 5 1
fat 30 3 1
carbohydrates 15 20 5
If you eat a meal consisting of 9 ounces of meat, 20 ounces of potatoes, and 5 ounces of cabbage,
how many grams of each nutrient do you get? Be sure to make clear what this problem is doing
in a linear algebra course.
12. Continuing with the data from Problem 11, suppose the Army desires to use these same delectable foods to feed new recruits a dinner providing 305 grams of protein, 365 grams of fat, and 575 grams of carbohydrates. How much of each food should be prepared for each recruit? Be sure to make clear what this problem is doing in a linear algebra course.
Section AB. Special Matrices
1. A matrix is symmetric if A^T = A. Explain why every symmetric matrix is square.
2. Prove: if A is symmetric, so is A^2. (Hint: see Problem A11.) Can you generalize?
3. Prove or disprove: the product of symmetric matrices is symmetric.
4. A matrix is antisymmetric (also called skew symmetric) if A = −A^T. Explain why an antisymmetric matrix must be square. What can you say about the main diagonal entries of an antisymmetric matrix?
5. Prove or disprove: If A is skew symmetric, then its inverse (if any) is skew symmetric.
6. If A is skew symmetric, what nice property (if any) does A^2 have?
7. Let A be a 2n × 2n matrix. Then we may write A as [P Q; R S], where each of P, Q, R, S is an n × n matrix. In this case, A is called a block matrix (or a partitioned matrix) and P, Q, R, S are the blocks. Similarly, let A′ = [P′ Q′; R′ S′]. Prove
AA′ = [PP′ + QR′  PQ′ + QS′; RP′ + SR′  RQ′ + SS′];  (1)
that is, for the purpose of doing multiplication, you can simply pretend that the blocks are real numbers and use the regular rule for matrix multiplication, obtaining the product expressed in block form.
a) Prove (1). Or rather, prove it is correct for the top left block. In other words, prove that the n × n top left corner of AA′ is PP′ + QR′. (The remaining parts of the proof are similar.)
b) There is nothing special about having all the blocks be the same size. Come up with as general a correct statement about multiplication of block matrices as you can. No proof requested.
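A numerical illustration of the top-left-block claim, with made-up 2 × 2 blocks: assemble two 4 × 4 matrices from blocks, multiply them in full, and compare the top left 2 × 2 corner against PP′ + QR′ computed blockwise.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def assemble(P, Q, R, S):
    # Build the 4x4 matrix [P Q; R S] from four 2x2 blocks.
    top = [P[i] + Q[i] for i in range(2)]
    bot = [R[i] + S[i] for i in range(2)]
    return top + bot

P, Q, R, S = [[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 0], [0, 2]], [[1, 1], [1, 1]]
P2, Q2, R2, S2 = [[1, 0], [0, 1]], [[2, 3], [4, 5]], [[1, 1], [0, 1]], [[0, 2], [2, 0]]

full = matmul(assemble(P, Q, R, S), assemble(P2, Q2, R2, S2))
top_left = [row[:2] for row in full[:2]]
print(top_left == madd(matmul(P, P2), matmul(Q, R2)))  # True
```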
8. A matrix is diagonal if the only nonzero entries (if any) are on the main diagonal.
a) Prove that all n × n diagonal matrices commute.
b) Determine, with proof, the set of all matrices that commute with all n × n diagonal matrices. (As usual, n is arbitrary but fixed, so you are looking at square matrices of some fixed size.) For instance, all the n × n diagonal matrices are themselves in this set, but maybe there are other matrices in the set, even of other shapes.
9. A square matrix is lower triangular if all the nonzero entries (if any) are on or below the main diagonal. Call L a simple lower triangular matrix if L is lower triangular and all diagonal entries are 1's.
a) Prove that the product of lower triangular matrices is lower triangular (pre- or postmultiplication approaches can help).
b) Prove that the product of simple lower triangular matrices is simple lower triangular.
Section AC. The Origins of Matrix Multiplication (Parallel Worlds)
1. The M-product of two ordered pairs, (c, d) and (a, b) in that order, is defined by
(c, d) ∗ (a, b) = (ac, bc + d).
(We use ∗ as the special symbol for this sort of product.)
a) Prove or disprove: the M-product commutes.
b) Prove or disprove: the M-product is associative.
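Before proving or disproving, it can help to experiment. A sketch with arbitrary made-up pairs; one counterexample settles commutativity, while one agreeing triple is merely a sanity check before attempting the associativity proof.

```python
def mprod(u, v):
    # (c, d) * (a, b) = (ac, bc + d), per the definition above.
    (c, d), (a, b) = u, v
    return (a * c, b * c + d)

print(mprod((2, 3), (4, 5)))  # (8, 13)
print(mprod((4, 5), (2, 3)))  # (8, 17): order matters

u, v, w = (2, 3), (4, 5), (6, 7)
print(mprod(mprod(u, v), w) == mprod(u, mprod(v, w)))  # True for this triple
```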
2. In linear algebra, a function of the form y = f(x) = ax + b is called an affine function. (A function y = ℓ(x) = ax is called linear.)
Now, if z = g(y) = cy + d is another affine function, we can compose g and f to get z as a function of x.
a) Show that the composition of two affine functions is affine.
b) Let us represent the affine function y = ax + b by the symbol A = (a, b). If we wish products of symbols to correspond to composition of affine functions (when the symbols are written in the same order as the functions are written in the composition), then how must the product of these symbols be defined? (Show the work which leads to your conclusion.)
c) Now go back and do AC1b another way.
3. Instead of associating the affine function y = ax + b with the symbol (a, b), for some strange reason let its symbol be the matrix
M = [a b; 0 1].
a) Show that with this choice, the correct product of symbols is just the matrix multiplication we have already defined!
b) Give yet another solution to AC1b.
c) It may seem strange to associate an affine function, which maps a one-dimensional line to itself, with a matrix, which maps 2-dimensional space to itself. But the correspondence makes sense if you just look at a certain one-dimensional slice (that is, line) in R^2. Let S be the slice {(x, 1)^T | x ∈ R}. Thus S consists of all the points on a line, represented as column vectors x. Confirm that the mapping x ↦ Mx maps S to S, and in just the affine way we want.
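A sketch of the slice idea in part c), with made-up coefficients a, b: applying M = [a b; 0 1] to a point (x, 1)^T of the slice yields (ax + b, 1)^T, i.e., the point stays on the slice and its first coordinate is transformed affinely.

```python
a, b = 3, 5                       # the affine map y = 3x + 5 (made-up coefficients)
M = [[a, b],
     [0, 1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

for x in [0, 1, -2]:
    out = matvec(M, [x, 1])       # apply M to the slice point (x, 1)^T
    assert out == [a * x + b, 1]  # back on the slice, at the affine image
print("slice is preserved")
```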
4. The matrix world and the function world are parallel worlds. Most years we discuss this in class. Matrices in one world correspond to multivariable linear functions (systems of equations describing linear substitutions) in the other, and matrix multiplication corresponds to function composition. Since all function composition is associative, we may conclude that matrix multiplication is associative.
Show that + in the matrix world corresponds to + in the parallel world of functions. You may work with linear functions from R^2 to R^2 and 2 × 2 matrices (just as on the handout showing how function composition and matrix multiplication are parallel), but if you can, show it for the general case (as at the end of that handout).
S. Maurer Main 28P Problem Set, Fall 2005
8/11/2019 Honors Linear Algebra Main Problem Set
7/105
page 7 Section AC, The Origins of Matrix Multiplication (Parallel Worlds)
5. For functions, scalar multiplication is defined as follows. If k is a scalar and f is a function, then kf is another function, defined by
(kf)(x) = k(f(x))
for all domain values x. E.g., if f is the sine function, then (3f)(x) = 3 sin x. If f(x) is a vector, (kf)(x) multiplies each coordinate of f(x) by k.
Show that scalar multiplication in the world of matrices corresponds to scalar multiplication in the parallel function world.
Remark: The conclusion from the previous two problems is: any fact about linear functions that uses only plus, composition, and scalar multiplication is also true in the matrix world (using plus, matrix multiplication, and scalar multiplication), and vice versa.
6. Does everything correspond perfectly between the matrix and the function worlds? Is everything true in one world true in the other? No, because the function world is much bigger. There is a perfect correspondence between the matrix world and the linear function world (which is the part of the function world you actually get to from the matrix world), but there are properties that hold for linear functions that don't hold for all functions. This problem points out one such property. It shows that a distributive property that holds for matrices does not hold for all functions.
Consider real functions, that is, functions with domain and range the real numbers. The claims in the problem work for other functions too, but let's stick to this familiar setting. This allows us to avoid a number of issues that would only divert us from the main point.
The sum of two functions f and g is defined to be the function f + g that satisfies, for all real x,
(f + g)(x) = f(x) + g(x).
For instance, if f(x) = sin x and g(x) = x^2, then (f + g)(x) = sin x + x^2. Also, f ∘ g means the composition of f and g. Thus
(f ∘ g)(x) = f(g(x)).
a) Prove: for any functions f, g, h, (f + g) ∘ h = f ∘ h + g ∘ h. That is, composition right-distributes over addition.
b) Show that there is no corresponding left distributive law. Find functions f, g, h such that
f ∘ (g + h) ≠ f ∘ g + f ∘ h.
Remember, two functions are equal if they agree no matter what input they are given. That is, f = g if for all x, f(x) = g(x). Consequently, they are unequal if the outputs are different for even one x input.
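A sketch of the kind of counterexample part b) asks for, with one possible choice of f, g, h (squaring is a natural candidate for f since it is nonlinear). A single mismatched input suffices to disprove equality of functions.

```python
def f(x): return x * x      # a nonlinear function
def g(x): return x + 1
def h(x): return 2 * x

x = 3
left = f(g(x) + h(x))       # (f o (g + h))(x) = f(10)
right = f(g(x)) + f(h(x))   # (f o g + f o h)(x) = f(4) + f(6)
print(left, right)          # 100 52: not equal, so no left distributive law
```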
Section B. Gaussian Elimination, Numerical
(Most problems in this section are adapted from Schaum's Linear Algebra, Chapter 1. The Schaum numbers are in parentheses.)
1. (1.15) Solve both in algebraic form and in matrix form, using Gaussian elimination exactly in the latter case. You may get some fractions, but nothing too hard to handle.
x − 2y + z = 7
2x − y + 4z = 17
3x − 2y + 2z = 14.
2. (1.11) Solve in algebraic form and then with Gaussian elimination. The special form of the equations actually makes the problem easier. Why?
2x − 3y + 5z − 2t = 9
5y − z + 3t = 1
7z − t = 3
2t = 8.
3. (1.36) Solve using Gaussian elimination:
x + 2y − z = 3
x + 3y + z = 5
3x + 8y + 4z = 17.
4. (1.14) Find the general solution to
x − 2y − 3z + 5s − 2t = 4
2z − 6s + 3t = 2
5t = 10.
Use Gaussian elimination, followed by the technique of moving the free (nonpivot) variables to the right-hand side.
5. (1.21) Use Gaussian elimination to row reduce the following matrix to an echelon form, and then to a reduced echelon form.
[1 2 3 0; 2 4 2 2; 3 6 4 3]
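The hand procedure used throughout this section can be sketched in a few lines of code. This is a teaching sketch of Gauss-Jordan reduction with exact rational arithmetic (not a numerically careful implementation), run on a tiny made-up matrix rather than the problems' own, so the exercises stay intact.

```python
from fractions import Fraction

def rref(M):
    # Gauss-Jordan elimination: find a pivot, switch it up, scale it to 1,
    # then sweep the rest of its column to 0.
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue                     # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]  # row switch
        M[r] = [x / M[r][c] for x in M[r]]                   # scale pivot row
        for i in range(nrows):
            if i != r and M[i][c] != 0:                      # sweep column c
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M

print(rref([[1, 1], [2, 2]]) == [[1, 1], [0, 0]])  # True
```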
6. (1.26) Reduce by Gaussian elimination to row-reduced echelon form:
[2 2 1 6 4; 4 4 1 10 13; 6 6 0 20 19]
7. (1.35) Solve by Gaussian elimination:
x + 2y − 3z − 2s + 4t = 1
2x + 5y − 8z − s + 6t = 4
x + 4y − 7z + 5s + 2t = 8.
8. Find the unique polynomial of degree at most 3 which goes through the points (1,1), (2,3), (3,6), (4,10). Use Gaussian elimination (but if you recognize these points as involving special numbers, maybe you can also guess the answer). The order in which you list your unknowns makes a big difference in how tedious the Gaussian elimination is by hand. If you have a calculator that solves matrix equations, it doesn't matter.
9. Suppose Gaussian elimination reduces A to
R = [1 2 0 3; 0 0 1 1; 0 0 0 0].
a) Name all solutions to Ax = 0.
b) But you were never told the entries of A! What theorem justifies your answer to part a)?
c) Suppose further that Gaussian elimination required no row switches to reach R. Find a vector b such that Ax = b has no solutions. Caution: If all you show about your b is that Rx = b has no solutions, you're not done.
Section BA. GE: Echelon Form
1. Give an example of a 3 × 3 lower triangular echelon matrix. Give an example of a 3 × 3 lower triangular row-reduced echelon matrix.
2. a) In a 6 × 7 rref (row-reduced echelon matrix) with five pivotal 1's, at most how many entries are nonzero?
b) At most how many entries are nonzero in an m × n rref with p pivots?
3. There are infinitely many 2 × 2 row-reduced echelon matrices. For instance, [1 ∗; 0 0] is such a matrix for every value of ∗. However, when an entry can take on any value, let us agree that different values do not give us a different reduced echelon form. All that matters is: where 1's
are, where forced 0's are (the 0's in positions where the algorithm deliberately creates a 0), and where arbitrary values are. Thus, there are exactly four 2 × 2 reduced echelon forms:
[1 0; 0 1],  [1 ∗; 0 0],  [0 1; 0 0],  [0 0; 0 0].
a) How many 3 × 3 reduced echelon forms are there? One of them is
[1 0 ∗; 0 1 ∗; 0 0 0],
where the two ∗s represent arbitrary values, not necessarily the same.
b) How many 3 × 4 reduced echelon forms are there?
There are smarter ways to answer these questions than brute force listing.
4. The solutions of an unaugmented matrix A are the solutions to the augmented system [A | 0]. For instance, the solutions to
M = [1 1; 2 2]
are the solutions to the system
x + y = 0
2x + 2y = 0,  (1)
which, by GE or by inspection, are all ordered pairs (−y, y). Equations which are all equal to 0, like (1), are called homogeneous. (Note the second e; "homogenous" means something else.)
Matrices A and B below clearly have the same solutions:
A = [1 4 7; 2 5 8; 3 6 9],  B = [2 5 8; 3 6 9; 1 4 7].
a) Why is it clear that A and B have the same solutions?
b) Nonetheless, the steps and matrices in Gaussian elimination will be very different. Carry
out both Gaussian eliminations. What is the same? Is this surprising?
5. The systems
x + y + z + w = 0
x + 2y − z + w = 0
and
w − z + 2y + x = 0
w + z + y + x = 0
clearly have the same solutions.
a) Why is it clear that the two systems have the same solutions?
b) Nonetheless, the steps and matrices in Gaussian elimination will be very different. Carry out both Gaussian eliminations. Is anything surprising?
6. Consider two m × n rref matrices M, M′ and suppose M has no pivot in the first (leftmost) column, and M′ does have a pivot in that column. Prove: M and M′ do not have the same solutions. Hint: find a specific n-tuple that is a solution to M but not to M′. Explain how you know you are right despite knowing so little about either matrix.
This problem is a special case of a general result that some of you may have already conjectured, and which will be the subject of a handout.
Section C. Matrix Inverses
1. Every real number except 0 has a multiplicative inverse. Does every square matrix which is not all 0's have an inverse? Does every square matrix which has no 0 entries have an inverse?
2. Theorem: (A^{-1})^{-1} = A. State precisely the if-then statement for which this is shorthand. Then prove it.
3. Let A, B be n × n and invertible. Show that AB is also invertible by showing that B^{-1}A^{-1} is its inverse. (The natural converse, "If AB is invertible then so are both A and B," is false, but there are some partial converses that are true; see CC6–8.)
4. Alas, the result in Problem 3 is often stated as (AB)^{-1} = B^{-1}A^{-1}. Why is this a bad statement? How might it be misinterpreted?
5. Prove or disprove: If A is symmetric, then its inverse (if any) is symmetric. Remark: Since both symmetry and inverses are defined at the matrix level, there should be a proof of this claim at the matrix level. Here's a hint: Show that, if B meets the definition of being an inverse of A, then so does B^T.
6. Prove: If A is invertible and AB = 0, then B = 0.
7. Prove: if AB = 0 and B is not an all-zeros matrix, then A is not invertible. Hint: Use proof by contradiction, that is, suppose the hypotheses are true but the conclusion is false, and reach a contradiction.
8. Let A be any 2 × 2 matrix, that is, A = [a b; c d]. Let Δ = ad − bc. Let B = [d −b; −c a]. Prove: A is invertible iff Δ ≠ 0, and when A^{-1} exists, it equals (1/Δ)B. ("Only if" is the hard part; one approach is to show that ad − bc ≠ 0 is necessary for A to be reducible to I. Another approach is to use Prob. 7.)
Note: There is a general formula for A^{-1}, but only in the case n = 2 (this problem) is the formula easy to use, or quick to state. For n = 2 you just switch the values on the main diagonal, negate the values on the other diagonal, and scale by 1/Δ.
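A sketch of the n = 2 recipe, checked on a made-up matrix with Δ ≠ 0 and on a made-up matrix with Δ = 0. (Checking the recipe on examples is not the proof the problem asks for.)

```python
from fractions import Fraction

def inverse2(A):
    # (1/Delta) [d -b; -c a] when Delta = ad - bc is nonzero, else None.
    (a, b), (c, d) = A
    delta = a * d - b * c
    if delta == 0:
        return None
    s = Fraction(1, delta)
    return [[s * d, -s * b], [-s * c, s * a]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 1], [5, 3]]                               # Delta = 1
print(matmul(A, inverse2(A)) == [[1, 0], [0, 1]])  # True
print(inverse2([[1, 2], [2, 4]]))                  # None: Delta = 0
```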
9. Suppose A and B are invertible matrices, not necessarily the same size. Show that the block matrix C = [A 0; 0 B] is invertible. (The 0s stand for blocks of 0's.) Hint: Using A^{-1} and B^{-1}, guess a block formula for C^{-1} and check that you are right. Another approach is to use your knowledge of what Gaussian elimination must do to A and B to determine what it does to C.
10. Suppose A and B are m × m and n × n invertible matrices. Show that the block matrix [A C; 0 B] is invertible, where C is any m × n matrix.
Section CA. Partial Inverses, Solutions, and Cancellation
1. a) Show that if A has a right inverse R, then for every conformable b, the system Ax = b has at least one solution, because Rb is a solution. Method: plug it in and check. We say that A satisfies the existence condition (for solutions x for all conformable b).
b) By considering the rref of A = [1 2; 3 4; 5 6], argue that for this A, Ax = b does not have a solution for every b. Can this A have even one right inverse?
2. a) Show that if A has a left inverse L, then for every conformable b, the system Ax = b has at most one solution, because Lb is the only candidate. Method: Solve Ax = b for x by left multiplying. We say that A meets the uniqueness condition (for solutions x for all conformable b).
b) By considering the rref of A = [1 3 5; 2 4 6], argue that for this A, Ax = b has more than one solution for some b's. Can this A have even one left inverse?
3. Let
L = [−1 −1 1; 5/2 −5/2 1],  A = [1 2; 3 4; 5 6],  b = (1, 1, 2)^T.
a) Check that LA = I.
b) So, to solve Ax = b, just premultiply by L:
LAx = Lb
Ix = x = Lb.
Therefore, Lb should be the solution. Well, compute Lb and plug it in to Ax = b. Does it work? If it doesn't, what went wrong? If it does, what do you know about the existence of other solutions? If it doesn't, what do you know about how many solutions Ax = b has? (Please note there weren't any powers or other nonlinear things in this problem of
the sort normally associated with extraneous roots. We solved this problem by linear operations.)
c) In fact, matrix A of this problem has infinitely many left inverses. Find them all. Here's one way, but maybe you can find others that are simpler. To solve XA = I, you can just as well solve A^T X^T = I^T = I; then to solve for the unknown matrix X^T you can solve column by column. For instance, letting x be the first column, you need to solve A^T x = (1, 0)^T. Now you have a standard system of linear equations and can solve by Gaussian elimination.
d) Let L′ be any of the left inverses found in c) other than the L given in part a). Is L′b = Lb? Is L′b a solution to Ax = b?
Note that this problem, along with Problem 1, shows that the A of this problem has infinitely many left inverses and no right inverses.
4. In Problem CA3, you showed that the existence of a left inverse L is not enough to be sure that Lb is a solution to Ax = b; we get uniqueness but not existence. Now you will show that the existence of a right inverse R is not enough to be sure that Rb is the only solution; we get existence but not uniqueness.
Let
A = [1 0 1; 0 1 0],  R = [1 0; 0 1; 0 0],  b = (p, q)^T.
a) Check that AR = I and that x = Rb = (p, q, 0)^T is a solution to Ax = b.
b) Show that x = (0, q, p)^T is another solution. Find infinitely many others.
c) There are infinitely many right inverses to A. Find them all.
c) There are infinitely many right inverses toA. Find them all.
5. Prove: If A is nonsquare and has a right inverse, then it has infinitely many right inverses, and no left inverse.
6. (Existence condition) Improve on Problem 1 by proving: Ax = b has at least one solution x for each b if and only if A has a right inverse. (In this and similar problems, it is assumed that A is a fixed but arbitrary m × n matrix, and "each b" or "every b" means each b for which the equation Ax = b makes sense, that is, all m × 1 column vectors.)
7. (Uniqueness condition) Improve on Problem 2 by proving: Ax = b has at most one solution x for each b if and only if A has a left inverse.
8. It is a theorem (hopefully we proved it already in class) that
Given matrix A, do GE on [A | I], obtaining [A_R | C]. If A_R = I, then A^{-1} = C; else A^{-1} does not exist.
a) Come up with a theorem/algorithm about computing left inverses that parallels this theorem. "Come up with" means discover, then state, then prove! "Parallel" means the theorem is of the form
Given matrix A, do GE on [A | B], obtaining [A_R | C]. If A_R . . . , then such-and-such is a left inverse; else a left inverse does not exist.
You have to figure out the right choice of B and what goes in the dot-dot-dots and what is such-and-such!
b) Apply your algorithm/theorem to
A = [1 2; 3 4; 5 6]  and  A = [1 2; 2 4; 3 6].
9. a) Parallel to Problem 8a, come up with a theorem about computing right inverses. (The procedure for computing the right inverse from C is a bit complicated; you may wish to assume at first that the pivot columns of A are the leftmost columns.)
b) Apply your algorithm/theorem to
A = [1 3 5; 2 4 6]  and  A = [1 2 3; 2 4 6].
10. In Problems 8–9 all the results were stated in terms of what A reduces to by GE. In fact, all that is necessary is that the reduction is obtained by elementary row operations. For instance, the invertibility theorem may be generalized to say:
A is invertible iff by some sequence of elementary row operations A may be reduced to I, in which case the inverse is C, where C is what is obtained on the right if the same sequence of row operations is applied to [A | I].
Explain why GE may be replaced by elementary row operations in all claims in these two problems.
11. Definition: The left cancellation property holds for matrix C if, for any matrices A and B that can postmultiply C,
CA = CB =⇒ A = B.
(In high school algebra this property is called a law because it is always true. For matrices it is at best a property because it is not always true.) Prove: The left cancellation property holds for C if and only if C has a left inverse. This claim means that
i) If C has a left inverse, then the property holds for all A, B conformable with C for multiplication; and
ii) If C does not have a left inverse, then there exist two matrices A ≠ B for which CA = CB. (However, we do not claim that for all distinct A, B, necessarily CA ≠ CB.)
Hint for ii): for each non-left-invertible C it is sufficient to find an X ≠ 0 for which CX = 0, for then let A be anything and let B = A + X. For instance, if you let A = 0, then we find that C0 = CX = 0 but 0 ≠ X. Here we follow the semi-mystical principle that problems about equations are easier if we can make them into equations equalling 0.
12. State, interpret, and prove a theorem, similar to that in the previous problem, about the right cancellation property.
13. In light of the previous two problems, for only what sort of matrices C can you rightfully go about canceling it on the left and canceling it on the right?
14. For what matrices do the right and left cancellation properties hold for addition? E.g., for which C does A + C = B + C =⇒ A = B (for all conformable A and B)?
15. Let A be m × n.
a) Prove:
1) If Ax = b has at least one solution for every b, then m ≤ n (i.e., then A has at least as many columns as rows).
b) State the contrapositive of 1). (Recall that an if-then statement and its contrapositive are both true or both false. The contrapositive form of 1) is perhaps the more useful form.)
c) State the converse of 1). Come up with a single example to show that the converse is false.
16. Let A be m × n.
a) Prove:
2) If Ax = b has at most one solution for every b, then n ≤ m (i.e., A has at least as many rows as columns).
b) State the contrapositive of 2).
c) State the converse of 2). Come up with a single example to show that the converse is false.
Section CB. Extraneous Roots
1. Consider the following attempt to solve x = 2 + √x.
x = 2 + √x          (i)
x − 2 = √x          (ii)
x^2 − 4x + 4 = x    (iii)
x^2 − 5x + 4 = 0    (iv)
(x − 4)(x − 1) = 0  (v)
x = 1, 4            (vi)
a) Is each step legitimate? That is, does each line follow correctly from the previous line?
b) Do lines (i) and (vi) have the same solutions?
c) If your answer to b) is No, which line or lines don't reverse? That is, which lines don't imply the previous line?
2. Problems with radicals in them are not the only problems in elementary algebra where extraneous roots show up. Consider
2x/(x − 1) + 1/(x − 1) = 3/(x − 1).
Solve this equation in the usual way. Which, if any, of your answer(s) satisfy this equation? What step(s) in your solution fail to reverse?
3. A certain matrix A has the property that its transpose is its inverse: AA^T = A^T A = I. For this A, and some b, we want to find all solutions x to Ax = b. Which, if any, of the following algebraic calculations amount to an argument that there is exactly one solution, x = A^T b?
a) Ax = b  =⇒  A^T(Ax) = A^T b  =⇒  (A^T A)x = A^T b  =⇒  Ix = A^T b  =⇒  x = A^T b.
b) x = A^T b  =⇒  Ax = AA^T b  =⇒  Ax = b.
c) x = A^T b  =⇒  x = A^T Ax  =⇒  x = x.
d) Ax = b  =⇒  A(A^T b) = b  =⇒  b = b.
e) Ax ?= b  =⇒  A(A^T b) ?= b  =⇒  b = b.
4. In fact, we will learn later that, for any square matrix A such that AA^T = A^T A = I, and for any b, there is a unique solution x to Ax = b. (We will prove this by an argument that doesn't involve GE or multiplying A by its inverse.) Knowing this, which of the arguments in Problem 3 are valid proofs that this unique solution is A^T b?
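A concrete instance of the setup in Problems 3–4, using one made-up matrix with the stated property (a rotation, so that AA^T = A^T A = I holds exactly with integer entries) and one made-up b. The check confirms that x = A^T b really does satisfy Ax = b in this case; which algebraic arguments legitimately establish that in general is still the question above.

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

A = [[0, 1],
     [-1, 0]]                 # a rotation; satisfies A A^T = A^T A = I
b = [3, 4]

x = matvec(transpose(A), b)   # the candidate solution x = A^T b
print(x)                      # [-4, 3]
print(matvec(A, x) == b)      # True: it really does solve Ax = b
```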
Section CC. Gaussian Elimination and Inverses
In the first section on (two-sided) inverses, Section C, the results could be proved at the matrix level using the definition of inverse. The theoretical results in this section are deeper and you will need your knowledge of Gaussian elimination to prove them. What you will mostly need is knowledge of what various rref configurations imply about solutions.
1. Use Gaussian elimination to determine if the following matrices have inverses, and to find the inverse when it exists.
a) [1 3; 2 4]
b) [1 2 3; 2 3 4; 3 4 5]
c) [1 2 3; 2 3 4; 3 4 4]
2. Recall that a simple lower triangular matrix is lower triangular with all diagonal entries 1. Prove that a simple lower triangular matrix is always invertible and its inverse is simple lower triangular. (Hint: use GE.)
3. Prove that an uppertriangular n × n matrix with no 0s on the main diagonal is invertible, and the inverse is uppertriangular. (The same theorem is true for lowertriangular, as one could prove directly, or by applying transposes to the matrices in the uppertriangular proof.)
4. Prove that if matrix M is square and has a left inverse, then it is invertible. Hint: What does the existence of a left inverse tell you about the number of solutions to Mx = b for every b? Combine this with knowledge of rrefs from GE.
5. Show: If a matrix A has a right inverse and is square, then A is invertible.
6. Show that the following is false: if AB is invertible, then so are A and B. Hint: Think simple and small. Find A, B which are obviously not invertible but AB is.
7. Show that if (AB)^{-1} exists, then at least A has a right inverse and B has a left inverse.
8. Prove: if AB is invertible and A is square, then both A and B are invertible.
9. Let's loosen up the definition of inverse matrix. For this problem, let us say that an m × n matrix A is invertible if there is a matrix B such that BA and AB are both identity matrices (but not necessarily the same size).
a) (easy) Show that B is necessarily n × m.
b) (harder) Show that A must be square anyway (thus, no additional generality is gained over the usual definition that a matrix A is invertible if it is square and there exists a B such that AB = BA = I).
Section D. Canonical Forms
1. For representing points on the plane, are cartesian coordinates (x, y) a canonical form? Are polar coordinates (r, θ) a canonical form? Explain.
2. Is y = mx+b a canonical form for straight lines in the plane? If not, do you know another
form which is? (By the meaning of canonical, this form is canonical if every equation of this
form represents a straight line in the plane, and every straight line in the plane can be written
in this form in one and only one way.)
3. Name a canonical form for circles in the plane. Explain briefly why it is canonical.
4. You know one canonical form for quadratics: y = ax^2 + bx + c with a ≠ 0. However, there is another important form for quadratics:
   y = a(x - d)^2 + e,  a ≠ 0.
Is this a canonical form? Whether it is or not, what advantage does this form have?
5. Someone asks you to describe (as an intersection of planes) the set of points (x, y, z) such that z = 2y and y = 3x. Your friend says this set is the intersection of the three planes
   3x + y - z = 0
   3x - 3y + z = 0
   6x - 2z = 0.
a) Is your friend right? (You might be able to answer this ad hoc, but do it using canonical forms.)
b) Your friend changes his mind before he hears your answer. He now claims that the third plane is
   6x - z = 0.
Is your friend right now?
6. Is the solution set to
   x + 2y + 3z + 4w = 0
   2x + 3y + 4z + 5w = 0
the set of (x, y, z, w) satisfying
   (x, y, z, w)^T = x(1, -2, 1, 0)^T + w(0, 1, -2, 1)^T ?
7. A high school algebra teacher gives a problem to solve, with radicals in it. One student gets the answer 1 - (1/2)√2. Another student gets 1/(2 + √2). A third student gets (2 + √2)/(6 + 4√2). Could they all be right? Why is this problem in this section?
Section E. Complexity of Matrix Algorithms; Middle View
Note: Restrict attention in this section to n × n matrices and n × 1 column vectors. In this section, assume that matrix multiplication takes n^3 steps, that Gaussian elimination takes n^3/3 steps on any system Ax = b, and that matrix inversion takes n^3 steps. (This is correct if we do matrix multiplication directly from the definition, if we use Gaussian elimination to compute inverses, if we count only real-number multiplications and divisions as steps, and if we count only the highest order terms. While these counts are an appropriate way to measure efficiency for large matrix computations on a computer, they are not appropriate for matrix computations on your calculator.)
1. Let A and B be n × n matrices, and let c be an n × 1 column.
a) How much work does it take to compute ABc as (AB)c? As A(Bc)? Which is less work?
b) Now suppose A and B are fixed, but c varies. In fact, you have to compute ABc for M different vectors c, where M >> n (that is, M is very much bigger than n). This is the situation in computer graphics applications.
How much work does it take to compute the answers in the form (AB)c; in the form A(Bc)? Which is less work?
c) Now suppose B and c are fixed but A varies, over M different matrices A. Compare the amount of work by the two methods.
d) Now suppose A and c are fixed but there are M different Bs. Compare the work by the two methods.
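The bookkeeping in parts a) and b) can be tabulated with the section's counting conventions hard-coded (n^3 steps for a matrix-matrix product, n^2 for a matrix-vector product); a sketch, with function names of my own choosing:

```python
def cost_AB_first(n, M=1):
    # (AB)c: one matrix-matrix product, then M matrix-vector products
    return n**3 + M * n**2

def cost_Bc_first(n, M=1):
    # A(Bc): two matrix-vector products per vector c
    return M * 2 * n**2

for n, M in [(100, 1), (100, 10**6)]:
    print(n, M, cost_AB_first(n, M), cost_Bc_first(n, M))
```

Comparing the printed totals for the two (n, M) pairs shows how the better choice changes as M grows.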
2. Which side of the identity
(AB)^{-1} = B^{-1} A^{-1}
is faster to compute?
3. Suppose you are given an invertible matrix A and asked to solve Ax = b. Which way is faster, Gaussian elimination, or finding and then premultiplying by A^{-1}?
4. Again, you are given an invertible matrix A and asked to solve Ax = b. Which way is faster on your calculator, Gaussian elimination, or finding and then premultiplying by A^{-1}? What if you don't know in advance whether A is invertible?
5. Suppose you are given invertible matrices A, B and asked to solve (AB)x = c. Which way is faster:
i) Find AB and then do Gaussian elimination to solve (AB)x = c;
ii) First use Gaussian elimination to solve Ay = c for y, and then use Gaussian elimination to solve Bx = y. (Convince yourself that the x you get here really does solve the original problem.)
6. Your boss gives you a huge square matrix A and tells you to solve Ax = b. Just as you are about to start, your fairy godmother arrives and hands you A^{-1} on a silver platter. To solve the problem most efficiently, what should you do?
7. Your boss gives you two square matrices A and B and asks you to verify that B = A^{-1}. Which is smarter, checking directly whether B meets the definition, or solving for A^{-1} from scratch using Gaussian elimination?
8. This problem is intended to show that complexity is an issue for ordinary arithmetic, not just
for matrix arithmetic; and that what should count as a time-consuming step depends on the
tools you are using.
Consider the identity a(b + c) = ab + ac for the particular values a = 3.729, b = 2.638, and c = 5.893.
a) Compute both sides of the identity by hand. Well, at least imagine carrying out the work.
Which side takes longer to compute? By a lot? What part of the work takes the most
time?
b) Compute both sides of the identity with a hand calculator. Use a memory button if that
makes things faster. Which side takes longer to compute? By a lot? What part of the
work takes the most time?
Section EA. Complexity of Matrix Algorithms; Detailed View
1. In the following parts, you are asked to consider variants of Gaussian elimination in which the operations are done in different orders. Each variant involves the same input: an n × n coefficient matrix A and an n × 1 constant column b. Thus each variant is carried out on an n × (n+1) augmented matrix [A|b]. Assume that Gaussian elimination reduces A to I (the usual case when A is square).
Each variant involves a Downsweep, an Upsweep, and a Scale, but they may have to be
somewhat different from those in regular Gaussian elimination. In every case, Scale(i) means
whatever needs to be done at that point so that the pivot entry in row i gets value 1 (and so
that the solution set is unchanged). Downsweep means whatever needs to be done so that the
entries below the pivot in column i are turned into 0s. Upsweep is whatever needs to be done
so that the entries above the pivot in column i are turned into 0s.
For each version, determine how many multiplication/division operations are required, for the A matrix, for the b column, and then jointly. In general you need only consider the highest order term (h.o.t.), but when two methods have the same h.o.t., you may wish to compute the number of steps exactly. For this you may use the formulas sum_{k=1}^{n} k = n(n+1)/2 and sum_{k=1}^{n} k^2 = n(2n+1)(n+1)/6.
Algorithm Gauss-A
for i = 1 to n
  Scale(i)
  Downsweep(i)
endfor
for i = n downto 1
  Upsweep(i)
endfor

Algorithm Gauss-B
for i = 1 to n
  Downsweep(i)
endfor
for i = 1 to n
  Scale(i)
endfor
for i = n downto 1
  Upsweep(i)
endfor

Algorithm Gauss-C
for i = 1 to n
  Downsweep(i)
endfor
for i = n downto 1
  Upsweep(i)
endfor
for i = 1 to n
  Scale(i)
endfor

Algorithm Gauss-D (Gauss-Jordan)
for i = 1 to n
  Scale(i)
  Upsweep(i)
  Downsweep(i)
endfor
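As a machine check on these hand counts, here is a sketch that instruments a plain Downsweep (my own implementation, run on a random diagonally dominant system so that no row switches arise) and counts the multiplications/divisions over the full augmented matrix:

```python
import random

def downsweep_count(M):
    """Downsweep on an n x (n+1) augmented matrix; return number of mult/div ops."""
    n = len(M)
    ops = 0
    for i in range(n):
        for r in range(i + 1, n):
            mult = M[r][i] / M[i][i]        # 1 division per row below the pivot
            ops += 1
            M[r][i] = 0.0
            for j in range(i + 1, n + 1):   # n - i multiplications per row
                M[r][j] -= mult * M[i][j]
                ops += 1
    return ops

n = 50
# diagonally dominant matrix, so every pivot is nonzero without row switches
A = [[random.random() + (n if i == j else 0) for j in range(n + 1)] for i in range(n)]
count = downsweep_count(A)
predicted = sum((n - i) * (n - i + 2) for i in range(1, n + 1))
print(count, predicted, n**3 / 3)
```

The exact count is sum_{i=1}^{n} (n-i)(n-i+2), whose highest order term is n^3/3, in line with the analysis in class.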
Notice, for instance, that in variant A the scaling is done early, and no multiplier needs to
be computed in Downsweep before subtracting. Also notice that version D has its own name, Gauss-Jordan reduction.
2. (Complexity of matrix inversion) To find A^{-1}, you do Gaussian elimination on [A|I]. Since I
consists of n b-columns, it follows from the analysis in class that the inversion algorithm takes order (4/3)n^3 steps (n^3/3 for the A matrix, and n^2 more for each of the n columns of I).
However, the columns of I are not any old b-columns. They have a lot of 0s, so maybe some
steps can be skipped.
Prove that, by skipping vacuous steps, the inversion algorithm in fact takes asymptotically
n^3 steps.
3. Many books discuss matrix inversion using Gauss-Jordan reduction instead of Gaussian elim-
ination. (The Gauss-Jordan reduction method is Algorithm D of Problem EA1.) Although
Gauss-Jordan reduction is slower for solving Ax = b, maybe it pulls ahead in the case of
matrix inversion. Devise a special version of Gauss-Jordan reduction for inverting matrices.
As with Gaussian elimination used for matrix inversion, there may be savings from replacing
computations with assignments. Analyze the efficiency of your algorithm and compare it to the
efficiency of Gaussian elimination for inversion.
4. Let A be an n × n tridiagonal matrix that can be reduced without finding any 0s where pivots should be (e.g., the matrices D_n in Problem G9).
a) Determine the number of steps to reduce A.
b) Determine the number of steps to reduce b in Ax = b.
c) Show that A^{-1} will in general have no zero entries. That is, show that reducing [A | I] is not guaranteed to leave any 0s, and demonstrate for a specific 3 × 3 matrix that A^{-1} has no zero entries.
d) We know that, for general matrices A, GE takes n^3/3 steps to solve Ax = b, whereas finding A^{-1} and then A^{-1}b takes n^3. However, we also know that, if for some reason you already have A^{-1}, then computing A^{-1}b takes only n^2 more steps.
Show that for tridiagonal matrices A, even if someone has already given you A^{-1}, you shouldn't use it to solve Ax = b. It is still faster to do GE.
Note: Tridiagonal matrices, or more generally, band matrices, are important in the
numerical solution of differential equations. See Section G.
Section EB. Asymptotics
1. Definition. To say that g(x) = f(x) + l.o.t. (lower order terms) means that g(x) ~ f(x), i.e., g is asymptotic to f, meaning
   lim_{x→∞} g(x)/f(x) = 1,  or equivalently,  lim_{x→∞} (g(x) - f(x))/f(x) = 0.
(We often use n as the variable instead of x, especially when the input values will always be positive integers.)
a) Show that if g(x) = x^n + Ax^m, where A is any constant, even a very big one, then Ax^m is a l.o.t. (to x^n), so long as m < n. Note: n and m can be any real numbers.
b) If g(n) = n^2 + n log n, determine if n log n is a lower order term.
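For 1b), a numeric sanity check (the sample points below are my choice): if g(n) = n^2 + n log n, then g(n)/n^2 = 1 + (log n)/n, so watching the ratio for growing n suggests the answer before you prove it.

```python
import math

def ratio(n):
    # g(n)/f(n) with g(n) = n^2 + n log n and f(n) = n^2
    g = n * n + n * math.log(n)
    f = n * n
    return g / f

for n in [10, 10**3, 10**6]:
    print(n, ratio(n))
```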
2. Prove that ~ is transitive, that is, if f ~ g and g ~ h, then f ~ h.
3. Show that sum_{k=1}^{n} k^m and sum_{k=1}^{n-1} k^m are the same asymptotically. The point is, a change by 1 in the bounds of a power sum (or in fact, a change by any fixed finite amount) makes no difference asymptotically.
4. Let p(k) be a polynomial whose highest power term is ak^m. Show that sum_{k=1}^{n} p(k) ~ sum_{k=1}^{n} ak^m.
5. Prove: if f1 and f2 are positive functions (i.e., f(x) > 0 for all x), and g1 ~ f1, g2 ~ f2, then (g1 + g2) ~ (f1 + f2).
6. Show that, in general, it is not true that
   (f1 ~ g1 and f2 ~ g2) ⟹ (f1 + f2 ~ g1 + g2).
Also show that, even for positive functions, it is generally not true that
   (f1 ~ g1 and f2 ~ g2) ⟹ (f1 - f2 ~ g1 - g2).
7. We say that g is little oh of f, written g = o(f), if
   lim_{x→∞} g(x)/f(x) = 0.
Another way to say this, using a previous definition, is that (f + g) ~ f.
Prove: if g = o(f) and both f ~ f' and g ~ g', then (f - g) ~ (f' - g'). (Compare this with the negative result in Problem 6.)
8. Prove: if g = o(f), f ~ f' and g ~ g', then g' = o(f').
9. It is tempting to try to prove Problem 4 by proving the more general result: if g = o(f) then sum_{k=1}^{n} g(k) = o(sum_{k=1}^{n} f(k)). Alas, this claim can be false. Give an example.
10. Look back over calculations you have done to simplify expressions while doing asymptotic
analysis on them. Are there any steps you have taken that are not justified by the results in
the problems in this section? If so, try to prove that the steps you took are correct.
Section EC. Matrix Algorithms for Hand Held Calculators
1. (How to solve Ax = b on your hand calculator) The only sort of linear equations that your hand calculator can solve directly are square invertible systems. That is, if A is invertible so that Ax = b has a unique solution for every b, then you can find it on your calculator by computing A^{-1}b. You can program your calculator to solve Ax = b in the general case (that is, you can write programs to do Gaussian elimination as we have done in class and thus determine if there are any solutions and find all of them if there are), but there are no built-in commands you can use on any calculator I know. My TI-86 can compute the reduced echelon form, but it doesn't know what to do with it. It also doesn't know about augmentation lines; that is, it doesn't know how to carry out GE only on the A part of [A|b] or [A|B], which can lead to a different rref than if [A B] is considered as a single matrix.
Figure out how to trick your calculator into solving the general system. That is, using just the built-in commands, figure out how to determine if Ax = b has any solutions, and if it does, to produce output that expresses them all. You can use built-in commands to tell you how to replace Ax = b with some other system A'X = B', with A' invertible, that tells you what you need.
I know one way to do this, but you may think of other and better ways.
Section F. LU methods
1. Let A =
   1 2 3
   2 3 2
   3 2 1
a) Find the LU factorization of A. Check that A = LU.
b) Find the LDU factorization.
2. Let
   A =
   1 2 3
   2 6 7
   1 4 3
   , b = (1, 0, 0)^T, c = (2, 0, 1)^T.
a) Suppose you know that you must solve Ax= b and Ay = c. Solve them simultaneously
(by Gaussian elimination, not the LU method).
b) Solve Ax = b again, by the LU method. Suppose you only learn that you must solve
Ay = c after you solve Ax = b. Show how you can now solve Ay = c with a limited
amount of extra work.
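A sketch of the point of part b) in code (exact arithmetic; the Doolittle-style routines are mine): factor A once, then each new right-hand side costs only a forward and a back substitution.

```python
from fractions import Fraction

def lu(A):
    """LU factorization without row switches; returns (L, U) as lists of rows."""
    n = len(A)
    U = [[Fraction(x) for x in row] for row in A]
    L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for i in range(n):
        for r in range(i + 1, n):
            L[r][i] = U[r][i] / U[i][i]                  # store the multiplier
            U[r] = [x - L[r][i] * y for x, y in zip(U[r], U[i])]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = []
    for i in range(n):                     # forward substitution: Ly = b
        y.append(Fraction(b[i]) - sum(L[i][j] * y[j] for j in range(i)))
    x = [Fraction(0)] * n
    for i in reversed(range(n)):           # back substitution: Ux = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[1, 2, 3], [2, 6, 7], [1, 4, 3]]
L, U = lu(A)                        # done once
print(lu_solve(L, U, [1, 0, 0]))    # solve Ax = b
print(lu_solve(L, U, [2, 0, 1]))    # later: solve Ay = c, reusing L and U
```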
3. When LU-Gauss is applied to a certain matrix A, the resulting matrix that stores all the information is
   2 0 1
   1 1 1
   2 3 3
a) What is the solution to Ax = (4, 1, 5)^T?
b) What is A?
4. Repeat Problem E6, but this time it so happens that yesterday you solved Ax = c by the LU method and you've saved the final matrix. Now what should you do?
5. For the A and U of Problem 3 (U of LU, not LDU), we know that the first part of GE changes A into U. Thus there is an L1 such that U = L1 A. In fact, L1 is lowertriangular.
a) Find L1. Be careful; use some method you're sure is right. Does the answer surprise you?
b) How is L1 related to the L of A = LU?
6. Two alternative approaches (to the LU method) have been suggested for avoiding wasted effort when you have to solve Ax = b at different times with different b's but the same A.
i) Find the premultiplier matrix P such that PA = I (that is, find A^{-1}) and then apply P to b;
ii) Initially solve Ax = (b1, b2, b3)^T, that is, use literals instead of numbers, and then solve Ax = b by substituting the specific constants in b for b1, b2, b3.
a) Carry out both approaches by hand for the example on the handwritten handout, that is, for
   A =
   2 1 1
   4 1 0
   2 2 1
   and b = (1, 2, 7)^T.
b) I claim that solving Ax = b with literals for b is effectively the same as row reducing [A | I]. In fact, I claim that method ii) is effectively the same as method i); for instance, applying P to b amounts to substituting the numerical b into the solution for the literal b. Explain.
c) So what is the asymptotic complexity of methods i) and ii) when solving Ax = b, where A is n × n? How does this compare to the complexity of the LU method? (As always in this part of the course, assume A reduces by GE to I without row switches or 0 pivots.)
7. The strong LU theorem says:
   If A is n × n and A reduces to U with a pivot in every column and no row switches, then there is a unique factorization A = LU. That is, there is at least one such factorization (the one in the basic theorem), and if A = L'U' is any factorization where L' is simple lowertriangular, and U' is uppertriangular with nonzeros on the main diagonal, then L' = L and U' = U.
In some sense, then, the LU factorization is like the prime factorization of numbers, which is also unique.
Prove the strong theorem using previous results as follows:
a) L and U' are invertible (a previous result, but you should be able to reconstruct the proof in your head), so premultiply LU = L'U' by L^{-1} and postmultiply by (U')^{-1}.
b) L^{-1} is a special sort of matrix; what sort? (Again you should be able to reconstruct the previous proof.) What sort of matrix is (U')^{-1}?
c) Again by previous problems, what different sorts of special matrices are L^{-1}L' and U(U')^{-1}?
d) But U(U')^{-1} = L^{-1}L'. How can this be?
e) Assuming you got, say, L^{-1}L' = I, what can you do to prove L' = L?
8. Prove the Strong LU Theorem of Problem 7 again by giving an entrywise proof that L' = L and U' = U. If you start looking at the n^2 real number equations that comprise A = LU, and look at them in the right order, you will find that it is not hard to show that the entries in A force the values in L and U. Actually, there are several right orders.
9. Use the strong LU Theorem to prove the strong LDU theorem:
   If A is n × n and A reduces to U with a pivot in every column and no row switches, then there is a unique factorization A = LDU, where L is simple lowertriangular, U is simple uppertriangular, and D is diagonal.
10. The basic LU Theorem says:
   If A is n × n and A reduces by the downsweep part of Gaussian elimination to U, with a pivot in every column and no row switches, then there is a factorization A = LU, where L is simple lowertriangular (1s on the main diagonal) and U is uppertriangular (and has nonzeros on the main diagonal).
Prove the converse:
   If A = LU where L is simple lowertriangular and U is uppertriangular with nonzeros on the main diagonal, then A reduces by the downsweep part of GE to U, with a pivot in every column and no row switches. (A, L, U are all understood to be square.)
Hint: It's easy to see what the downsweep of GE does to L. Apply the same sequence of steps to LU.
11. Prove: if A is symmetric and has an LDU factorization, then necessarily U = L^T.
12. Prove a converse to F11:
   If A = LDL^T, then A is symmetric.
Note: this is quite easy using transpose properties.
13. Find in the standard way the LDU factorization of the symmetric M =
   1 2 3
   2 2 0
   3 0 2
and verify by sight that L = U^T.
Section G. Applications
1. Suppose the economy has two goods, food and steel. Suppose the production matrix is
M =
   .6 .4
   .5 .6 ,
where food indexes the 1st row and column, and steel indexes the 2nd row and column.
a) Is this economy going to work, that is, for every nonnegative vector y, is it possible for the economy to create this y as its net production by actually producing (and partially using up in the process) another nonnegative amount x? Are there any nonnegative goods vectors y that the economy can make as its net production? In other words, are there some production plans the economy can carry out, even if it can't carry out every production plan? (You can answer both questions by using GE to find if (I - M)^{-1} exists and, if so, looking at the signs of its entries.)
b) (I - M)^{-1} = sum_{n=0}^{∞} M^n if the sum converges. (This is shown in class most years, but also follows by considering a matrix geometric series.) Compute enough powers of M above (use a computer or calculator) to get a sense whether M^n → 0, which is a necessary condition for convergence of the sum.
2. Repeat the previous problem for the only slightly different matrix
M= .6 .3
.5 .6 .
3. Repeat the previous problem for the matrix
M=
.5 .3
.4 .6
.
4. Sometimes the series sum_{n=0}^{∞} M^n converges quickly because M^n = 0 for all n starting at some value. (Such a matrix M is said to be nilpotent, and if p is the lowest integer for which M^p = 0, we say M is nilpotent of degree p.)
a) Find the degree p of nilpotency for
   M =
   0 1 1
   0 0 1
   0 0 0
For the rest of this problem, fix p at this value.
b) Verify that (I - M)^{-1} = sum_{n=0}^{p-1} M^n. Do this two ways:
i) Compute (I - M)^{-1} by GE and compute sum_{n=0}^{p-1} M^n directly.
ii) Verify by matrix algebra rules that (I - M)(sum_{n=0}^{p-1} M^n) = I. This shows that sum_{n=0}^{p-1} M^n is a right inverse of I - M, and since I - M is square, we know from a previous result that any right inverse is the inverse.
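Both a) and the identity in b) can be checked mechanically; a sketch (the matrix helpers are mine):

```python
def mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(P, Q):
    return [[a + b for a, b in zip(rp, rq)] for rp, rq in zip(P, Q)]

M = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
M2 = mul(M, M)
print(mul(M2, M))                 # M^3: test for nilpotency
IminusM = [[a - b for a, b in zip(ri, rm)] for ri, rm in zip(I, M)]
S = add(add(I, M), M2)            # I + M + M^2
print(mul(IminusM, S))            # the product (I - M) * S
```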
5. In class we may have approximated the differential equation (and initial condition)
   f'(x) = x, f(0) = 0,
with the difference equation
   (f(x_i + h) - f(x_i))/h = x_i,  f(0) = 0, h = 1/4, x_i = i/4 for i = 0, 1, 2, 3.
The idea is that the solution to the difference equation would surely approximate the solution to the differential equation. Furthermore, the difference equation is easy to solve on any finite interval (we used the interval [0,1]). First, this method is Euler's method in disguise. But more important from the 16H point of view, this system of equations reduces to a single matrix equation Mx = (1/4)b, where
M =
    1  0  0  0
   -1  1  0  0
    0 -1  1  0
    0  0 -1  1
, x = (f(1/4), f(1/2), f(3/4), f(1))^T, b = (0, 1/4, 1/2, 3/4)^T.
Note the scalar 1/4 in front of b. In class we actually had (1/(1/4))Mx = b, but the calculation is easier if you first multiply both sides by 1/4.
Solve this system and compare the answer with the actual solution of the differential equation.
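A sketch of the computation in exact arithmetic: since M is lower bidiagonal, forward substitution gives f(x_{i+1}) = f(x_i) + h x_i directly, and the exact solution of the differential equation, f(x) = x^2/2, is tabulated alongside for comparison.

```python
from fractions import Fraction

h = Fraction(1, 4)
# forward substitution: f(x_{i+1}) = f(x_i) + h * x_i, starting from f(0) = 0
f = Fraction(0)
approx = []
for i in range(4):                 # x_i = 0, 1/4, 1/2, 3/4
    f = f + h * Fraction(i, 4)
    approx.append(f)

exact = [Fraction(k, 4) ** 2 / 2 for k in range(1, 5)]   # f(x) = x^2/2
print(approx)   # approximate values at x = 1/4, 1/2, 3/4, 1
print(exact)    # exact values at the same points
```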
6. In class we showed (I hope) that it is reasonable to approximate the 2nd order differential equation (and boundary conditions)
   f''(x) = x, f(0) = f(1) = 0, (1)
with the difference equation
   (f(x_i + h) - 2f(x_i) + f(x_i - h))/h^2 = x_i,  f(0) = f(1) = 0, h = 1/4, x_i = i/4 for i = 1, 2, 3.
This system of equations reduces to a single matrix equation Mx = (1/4)^2 b, where
M =
   -2  1  0
    1 -2  1
    0  1 -2
, x = (f(1/4), f(1/2), f(3/4))^T, b = (1/4, 1/2, 3/4)^T.
Note the scalar (1/4)^2 in front of b. In class we actually had (1/(1/4)^2)Mx = b, but the calculation is easier if you first clear the complex fraction.
a) Show or verify that the exact solution to (1) is f(x) = (1/6)(x^3 - x).
b) Solve the difference equation using the matrix system Mx = (1/4)^2 b and compare the answer with the actual solution of the differential equation.
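A sketch of b) in code (the small GE solver is mine, in exact rational arithmetic), with the exact values of f(x) = (1/6)(x^3 - x) computed alongside for the comparison.

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination with back substitution (no row switches needed here)."""
    n = len(b)
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    for i in range(n):
        M[i] = [x / M[i][i] for x in M[i]]           # Scale
        for r in range(i + 1, n):                    # Downsweep
            M[r] = [x - M[r][i] * y for x, y in zip(M[r], M[i])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):                     # back substitution
        x[i] = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
    return x

M3 = [[-2, 1, 0], [1, -2, 1], [0, 1, -2]]
b = [Fraction(k, 4) / 16 for k in (1, 2, 3)]      # (1/4)^2 * (1/4, 1/2, 3/4)
approx = solve(M3, b)
exact = [(Fraction(k, 4) ** 3 - Fraction(k, 4)) / 6 for k in (1, 2, 3)]
print(approx)
print(exact)
```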
The next two problems are adapted from Strang, Linear Algebra and Its Applications, 2e.
7. Consider the 2nd order differential equation
   f''(x) + f(x) = x, f(0) = f(1) = 0.
As in class, solve approximately on the interval [0,1] with a difference equation, using step size h = 1/4. You may use a calculator or software to solve the resulting matrix equation.
Bonus: If you know how to find the exact solution, find it and compare with the approximation.
8. Consider the 2nd order differential equation
   f''(x) = -4π^2 sin 2πx, f(0) = f(1) = 0.
a) Check that a solution (and thus the solution, since by general theory there is only one for a system like this) is f(x) = sin 2πx.
b) Use the method from class, with step size h = 1/4, to approximate the solution. Compare with the actual solution.
c) Repeat b) with step size h = 1/6.
9. Let D_n be the n × n tridiagonal matrix with 2s on the main diagonal and -1s on the diagonals just above and below the main diagonal. (D is for difference; this is just the matrix we have been using for discrete approximations to the 2nd derivative, except the signs are reversed to make the main diagonal positive.)
a) Find the LDU factorization of D_4.
b) Make a conjecture about the LDU factorization of D_n.
c) Prove your conjecture.
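For experimenting with a) and b), a sketch that runs the downsweep on D_n in exact arithmetic and reports the pivots (the entries of D) and the multipliers (the subdiagonal entries of L); the helper is mine. Since D_n is tridiagonal, each pivot has only one entry below it to clear.

```python
from fractions import Fraction

def tridiag_pivots(n):
    """Downsweep on D_n (2 on the diagonal, -1 off); return pivots and multipliers."""
    U = [[Fraction(2 if i == j else (-1 if abs(i - j) == 1 else 0))
          for j in range(n)] for i in range(n)]
    mults = []
    for i in range(n - 1):
        m = U[i + 1][i] / U[i][i]          # the one multiplier for column i
        mults.append(m)
        U[i + 1] = [x - m * y for x, y in zip(U[i + 1], U[i])]
    pivots = [U[i][i] for i in range(n)]
    return pivots, mults

pivots, mults = tridiag_pivots(4)
print(pivots)   # [2, 3/2, 4/3, 5/4]
print(mults)    # [-1/2, -2/3, -3/4]
```

Running this for several n suggests the pattern to conjecture in b).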
10. In our first example of a numerical solution to a differential equation in class, we approximated the solution to
   f'(x) = x, f(0) = 0. (2)
We approximated by replacing f'(x) with
   (f(x + h) - f(x))/h.
However, it turns out that a much closer approximation to f'(x) is the symmetric secant slope
   (f(x + h) - f(x - h))/(2h).
For instance, with this latter approximation, instead of saying
   (f(1/4) - f(0))/(1/4) ≈ f'(0)    (x = 0, h = 1/4),
we say
   (f(1/4) - f(0))/(1/4) ≈ f'(1/8)    (x = 1/8, h = 1/8).
Redo the class solution to (2) using this better approximation throughout. You get the same matrix M as in class, but a different b, and hence different values for f. Compare this new approximate solution with the exact solution. That is, compare the approximate values you get for f(x) at x = 1/4, 1/2, 3/4, and 1 with the exact values.
11. Here's another approach to coming up with a finite difference to approximate f''(a) at any a. For simplicity, we will fix a = 0, but the same argument works at any point.
Here's the idea. The approximation of f'(0) by (1/h)[f(h) - f(0)] is an approximation by a linear combination of f(h) and f(0). Specifically, we are using the linear combination Af(h) + Bf(0) where A = 1/h and B = -1/h. Let's try to generalize and look for a linear combination approximation to f''(0) as well. For f'' we probably need one more point, so let's hope that some linear combination of f(0), f(h) and f(-h) is a good approximation of f''(0). In other words
   A f(-h) + B f(0) + C f(h) ≈ f''(0). (3)
Now, we have 3 unknowns, so we can insist that the approximation be exact for three functions f. Since, as you learned in calculus, the polynomials are a good basis for approximating most functions, let's insist that display (3) be an equality for the three basic polynomials f(x) = 1, x, x^2.
Write down the 3 equations obtained this way and solve for A, B, C. Compare your answer to the difference formula we have used so far for second derivatives.
Note: Finding A, B, C by GE is harder than usual, because of the presence of the literal h. You may find it easier to solve by ad hoc methods. However, GE is not actually that hard in this case.
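A sketch of the solve with the literal h replaced by the specific value h = 1/4 (my simplification, to stay in rational arithmetic; the GE routine is also mine). The three exactness conditions are A + B + C = 0 for f = 1, -Ah + Ch = 0 for f = x, and Ah^2 + Ch^2 = 2 for f = x^2.

```python
from fractions import Fraction

def solve(A, b):
    """GE with partial pivoting and back substitution, exact arithmetic."""
    n = len(b)
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))   # pick the largest pivot
        M[i], M[p] = M[p], M[i]
        M[i] = [x / M[i][i] for x in M[i]]
        for r in range(i + 1, n):
            M[r] = [x - M[r][i] * y for x, y in zip(M[r], M[i])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
    return x

h = Fraction(1, 4)
coeffs = [[1, 1, 1],        # exact for f(x) = 1
          [-h, 0, h],       # exact for f(x) = x
          [h**2, 0, h**2]]  # exact for f(x) = x^2
A_, B_, C_ = solve(coeffs, [0, 0, 2])
print(A_, B_, C_)
```

Comparing the printed values against 1/h^2, -2/h^2, 1/h^2 shows how the answer relates to the second-difference formula used so far.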
12. So far our second order differential equations have had two boundary conditions, namely f(0) = f(1) = 0. Often instead they have two initial conditions, for instance, f(0) = f'(0) = 0. (Second order differential equations always need two special conditions of some sort to uniquely specify an answer.)
So the question is, what is the right matrix setup to approximate the solution to a 2nd order equation with 2 initial conditions?
As before, let's develop a method with a specific example. Again we use the equation f''(x) = x, where we want to solve on [0,1] using the step size h = 1/4, but now the special conditions are f(0) = f'(0) = 1. Find and explain A, x, b so that the approximation is found by solving Ax = b.
Note: I can see at least two different but good answers to this question, depending on how you decide to approximate f'(0).
13. Consider the following electric circuit:
[Figure: a circuit with nodes A, B, C, D; five wire edges with resistances 1 through 5, arrows marking the positive current direction on each; a battery on a curved wire connects D to A.]
The battery, connecting D directly to A, makes the voltage at A 1 unit higher than at D. The edges are wires; the number on each edge is the resistance on that edge, and in this example can also be used to name the edge. For instance, let i3 represent the current on the vertical edge with resistance 3 units. Let the positive direction for current be as shown by the arrows. The curved wire connecting the battery is assumed to have no resistance.
a) Write down the linear equations for the current. Remember, at each (interior) node net current must be 0 (Kirchhoff's Current Law) and along each loop, the net voltage change must be 0 (Kirchhoff's Voltage Law). Finally, Ohm's law says that V = ir, where V is the voltage change along an edge (in the direction of the arrow), i is the current on that edge (in the direction of the arrow), and r is the resistance along that edge. (The battery is assumed to provide no resistance, but it does increase the voltage by 1 unit from D to A.)
b) Use a calculator or software to find the currents. (You don't want to do this one by hand.)
14. In the diagram below, something causes the voltage to be higher by amount V at point A than at C, causing current to flow from A to B to C. (Perhaps there is a battery connecting C back to A by a wire not shown, or perhaps there are a whole lot of other wires not shown allowing for current to flow.) The resistances on the two wires are R1 and R2.
[Figure: nodes A, B, C in a line; wire AB with resistance R1 carries current i1, wire BC with resistance R2 carries current i2.]
Show that, if wires AB and BC are to be replaced by a single wire AC so that the same amount
of current flows from A to C as currently, that new wire must have resistance R1 + R2. This is
the law of series circuits. Hint: Solve for i1, i2 with Kirchhoff's laws (you get one current
equation and one voltage equation) and then find R so that V = IR for the given voltage and
the total current flow I out of A.
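Written out, the hint's computation is short; the following is only a sketch of the intended argument, not a substitute for carrying it out carefully:

```latex
% Kirchhoff's current law at B forces i_1 = i_2 = I, and the
% voltage law along A-B-C adds the two drops:
\[
V = i_1 R_1 + i_2 R_2 = I(R_1 + R_2),
\]
% so V = IR holds for the replacement wire exactly when R = R_1 + R_2.
```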
S. Maurer Main 28P Problem Set, Fall 2005
page 31 Section G, Applications
15. In the diagram below, something causes the voltage to be higher by amount V at point A than
at B, causing current to flow separately over each wire from A to B. (Perhaps there is a battery
connecting B back to A by a wire not shown, or perhaps there are a whole lot of other wires
not shown allowing for current to flow.) The resistances on the two wires are R1 and R2.
[Figure: two parallel wires from A to B; one has resistance R1 and carries current i1, the other has resistance R2 and carries current i2.]
Show that, if both wires AB are to be replaced by a single wire AB so that the same total
current flows from A to B as currently, the resistance R in that new wire must satisfy
1/R = 1/R1 + 1/R2.
This is the law of parallel circuits. Hint: Solve for i1, i2 with Kirchhoff's laws (you get no
current equation and two voltage equations) and then find R so that V = IR for the given
voltage and the total current flow I out of A.
The laws in this and the previous problem allow one to compute currents in many complicated
circuits by hand, by repeatedly simplifying the circuit.
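The repeated-simplification idea is easy to mechanize. A minimal sketch (the helper names `series` and `parallel` are my own, not from the text):

```python
def series(*rs):
    """Equivalent resistance of resistors in series: R = R1 + R2 + ..."""
    return sum(rs)

def parallel(*rs):
    """Equivalent resistance in parallel: 1/R = 1/R1 + 1/R2 + ..."""
    return 1.0 / sum(1.0 / r for r in rs)

# Hypothetical circuit: a 2-ohm resistor in series with the parallel
# combination of 3 ohms and 6 ohms.
R = series(2.0, parallel(3.0, 6.0))
print(R)   # parallel(3, 6) = 2, so R = 4
```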
Section H. Vector Space Axioms
1. Prove that the following are vector spaces.
a) R^2, meaning the set of ordered pairs (x, y) of real numbers, where + is defined by
(a, b) + (c, d) = (a+c, b+d)
and scalar multiplication is defined by
a(x, y) = (ax, ay).
The proof can be very tedious if you write everything out in detail, but the key is that
coordinatewise all the vector space properties are known real number properties. Show
this carefully in one or two cases, and thereafter you can just say it. On the other hand,
when an axiom introduces a new quantity, like 0 or -u, you have to at least state what
that quantity is as an ordered pair and explain briefly why you are right.
b) The set of all real functions, that is, functions from real number inputs to real number
outputs. You need to understand what it means to add two functions, or multiply a
function by a scalar, but you have been doing these operations for a long time, as in
f(x) = 3 sin x + x^2. Again, show one or two parts in detail.
c) The set of all infinite sequences of real numbers, where addition and scalar multiplication
are defined coordinatewise.
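A numeric spot-check is no substitute for the proofs, but it shows how each axiom in part (a) reduces coordinatewise to a fact about real numbers (the helper names are mine; the test values are dyadic so the float arithmetic is exact):

```python
def add(u, v):
    """Coordinatewise addition in R^2."""
    return (u[0] + v[0], u[1] + v[1])

def smul(a, u):
    """Scalar multiplication in R^2."""
    return (a * u[0], a * u[1])

u, v = (1.5, -2.0), (3.0, 0.25)
a, b = 2.0, -0.5

assert add(u, v) == add(v, u)                             # commutativity
assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))  # distributivity
assert smul(a, smul(b, u)) == smul(a * b, u)              # a(bu) = (ab)u
assert add(u, (0.0, 0.0)) == u                            # (0, 0) is the zero vector
print("spot-checks passed")
```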
2. Prove: in any vector space V, 0 is unique. That is, if 0 and 0' can both be substituted
truthfully for □ in the statement "for each u ∈ V, u + □ = u,"
then 0' = 0. Note: the uniqueness of 0 is often included in the axioms.
3. Prove that in any vector space, 0 is unique in an even stronger sense: if for even one u, u + x = u,
then x = 0. We summarize by saying: in a vector space, any local zero is the global zero.
(Definition: For any particular u, we say x is a local zero for u if u + x = u. A global zero is
just a 0 as previously defined.)
4. Show from the axioms of a vector space that, for any u, its inverse is unique.
5. Prove: if u ≠ v then u + w ≠ v + w. That is, equals added to unequals are unequal, just as in
real number algebra. Hint: We know how to prove statements containing equalities, so can we
replace the claim to be proved with an equivalent claim involving only equalities?
6. For any vector space V, prove
a) For any a ∈ R, a0 = 0.
b) For any v ∈ V, 0v = 0.
c) For any a ∈ R and v ∈ V, (-a)v = a(-v) = -(av).
d) For any v ∈ V, (-1)v = -v.
S. Maurer Main 28P Problem Set, Fall 2005
8/11/2019 Honors Linear Algebra Main Problem Set
33/105
page 33 Section H, Vector Space Axioms
7. Comment on the following proof of Problem 6a.
Without loss of generality, we may use ordered pairs instead of n-tuples. Then for
any a we have a0 = a(0, 0) = (a0, a0) = (0, 0) = 0.
8. Prove: if u, v are vectors in any vector space, then
-(u + v) = (-u) + (-v).
Use only our axioms or theorems proved from those axioms in earlier problems in this problem
collection.
9. The operation subtraction is defined in a vector space by u - v = u + (-v).
Use this definition to prove
a) a(u - v) = au - av,
b) (-1)(u - v) = v - u,
c) (a - b)v = av - bv.
The symbol - is used in mathematics for two rather different purposes: additive inverses and
subtraction. This problem shows that no great harm comes from using the same symbol for
those two purposes.
10. Prove: If u = v then u - v = 0.
11. Scalar division: if d ≠ 0 is a scalar, define u/d to be (1/d)u. Prove from the axioms of a vector
space, and this definition, that
(u + v)/d = u/d + v/d.
12. The (vector addition) Cancellation Law states: For any vectors a, b, c,
if a + c = b + c then a = b.
a) Prove this Law for arbitrary vector spaces; that is, prove it from the axioms.
b) Suppose vectors are defined to be sets (of numbers, say) and + is defined to be set union.
For instance,
{1, 3} + {1, 2, 5} = {1, 2, 3, 5}.
Show by example that, with these definitions, the Cancellation Law is false. In light of
part a), this shows that when vectors and + are defined this way, they can't form a vector
space, no matter how scalar multiplication is defined. Therefore, at least one of the first 5
axioms on our sheet must be false for this +. Which ones?
13. Let N be the non-vector-space of H12b: the vectors are sets of numbers and + is defined to be
set union.
a) Prove that N has a unique 0 in the sense of Problem H2. (First, find a set that does
what 0 is supposed to do, and then see if the proof you used for H2 still works.)
b) Prove or disprove that the 0 of N is unique in the sense of Problem H3.
14. In class (most years) I claimed that the set of positive real numbers is a vector space with all
real numbers as scalars, if + and scalar multiplication are defined properly. Namely, if we let [x]
be the positive number x when viewed as a vector in this space (and we let a without brackets
represent the real number a when it is viewed as a scalar), then we define
[x] + [y] = [xy], a[x] = [x^a].
We proved in class that some of the axioms of a vector space hold for this system.
Go through all the axioms and verify that each one not proved in class is correct for the
definitions I set up.
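A quick numeric check of the sort this verification involves (the brackets are pure bookkeeping, so the code just works with positive floats; `vadd` and `smul` are my names, and the sample values are chosen so the identities hold exactly or to within float rounding):

```python
import math

def vadd(x, y):
    """Vector addition in this space: [x] + [y] = [xy]."""
    return x * y

def smul(a, x):
    """Scalar multiplication: a[x] = [x^a]."""
    return x ** a

x, y, a, b = 2.0, 5.0, 3.0, -1.5

assert vadd(x, y) == vadd(y, x)                    # xy = yx
assert math.isclose(smul(a, vadd(x, y)),           # (xy)^a = x^a * y^a
                    vadd(smul(a, x), smul(a, y)))
assert math.isclose(vadd(smul(a, x), smul(b, x)),  # x^a * x^b = x^(a+b)
                    smul(a + b, x))
assert vadd(x, 1.0) == x                           # the zero vector is [1]
print("axioms spot-checked")
```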
15. Here is another proposal for a bizarre vector space; call it V'. The vectors are all non-zero real
numbers, and + and scalar multiplication are defined as in Problem H14. Well, not quite. Scalar
multiplication has to be defined a little differently, because r^c is not defined for negative r and
most nonintegers c. So we define
a[x] = [sign(x) |x|^a], where
sign(x) = 1 if x > 0, and sign(x) = -1 if x < 0.
19. For real numbers, we have the theorem
ab = 0 ⟺ (a = 0 or b = 0).
We have already shown that this result is false for matrices (where?). Prove or disprove the
vector space analog:
kv = 0 ⟺ (k = 0 or v = 0).
20. Basically, a field is any set of number-like things which obey all the usual laws of high school
algebra (commutativity, distributivity, etc.), and such that when you add, subtract, multiply or
divide any pair of them (except you can't divide by 0), you get another thing in the same set.
In other words, a field has to be closed under these 4 operations. For instance, the positive
integers are not a field; they are closed under addition and multiplication, but not subtraction
and division. Sure, 4 - 7 is a number, but it is not in the set of positive integers.
The best known fields are: the rationals, the reals, and the complex numbers.
Now determine if the following are fields:
a) The complex rationals, that is, {a + bi | a, b rational numbers}.
b) The complex integers (also known as the Gaussian integers; same guy). This is the
set {p + qi | p, q integers}.
c) S = {a + b√2 | a, b rational numbers}. Note: somewhere in your solution, you will have
to use the fact that √2 is not a rational number. Where?
d) T = {p + q√2 | p, q integers}.
e) (Challenge) U = {a + b√2 + c√3 | a, b, c rational numbers}.
f) (Challenge) V = {a + b√2 + c√3 + d√6 | a, b, c, d rational numbers}.
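For part (a), the only delicate closure is division. A sketch of the key computation, done exactly with Python's `fractions` module: the inverse of a + bi is (a - bi)/(a^2 + b^2), and both components stay rational. (The sample values are arbitrary.)

```python
from fractions import Fraction

# A sample complex rational a + bi with a, b rational.
a, b = Fraction(3, 4), Fraction(-2, 5)

d = a * a + b * b            # a^2 + b^2, a nonzero rational
c_re, c_im = a / d, -b / d   # components of the claimed inverse

# Verify (a + bi)(c_re + c_im*i) = 1, by real and imaginary parts.
real = a * c_re - b * c_im
imag = a * c_im + b * c_re
assert (real, imag) == (Fraction(1), Fraction(0))
print(c_re, c_im)            # both exactly rational, so closure holds
```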
Section HA. Complex Numbers
1. The complex numbers are defined to be the set C (some books write ℂ) of all objects
z = a + bi, where a, b ∈ R and i is a special symbol defined to satisfy i^2 = -1. By definition,
two complex numbers z = a + bi and w = c + di are equal iff a = c and b = d. Addition and
multiplication are defined the only way they can be if the usual rules of real-number algebra
are to hold in C. That is, one defines these operations according to the usual rules and then
has to prove (you do it below) that all the usual rules do indeed hold for C. Indeed, you will
show that C is a field, as defined in Problem H20.
Useful Notation. If z = a + bi, with a, b ∈ R as usual, then the real and imaginary parts of
z are defined by
Re(z) = a, Im(z) = b.
a) Give the definition of addition, i.e., what does (a + bi) + (c + di) equal? It had better be
something that meets the definition to be in C.
b) Find the zero of C and show that every z has an additive inverse, called -z naturally
enough.
c) Give the definition of multiplication. Show that C has a multiplicative identity.
S. Maurer Main 28P Problem Set, Fall 2005
8/11/2019 Honors Linear Algebra Main Problem Set
36/105
page 36 Section HA, Complex Numbers
d) Show that every z ≠ 0 has a multiplicative inverse. Caution: It doesn't suffice to say that
a + bi has the inverse 1/(a + bi). To show that C is closed under multiplicative inverses you
must show that there is an inverse of the form c + di.
e) Show that addition in C is commutative.
f) Show that multiplication in C is commutative.
g) Show that multiplication in C is associative. This is sort of ugly to show unless you figure
out some particularly creative approach. (I know of some creative approaches, but none
that I am fully satisfied with.)
h) Define division in C and prove that (z + w)/u = z/u + w/u.
The facts you are asked to show above do not cover all the things that must be shown to prove
that C is a field, but they give the gist.
2. The complex conjugate of a complex number z = a + bi, denoted z̄, is defined by
z̄ = a - bi.
Prove that
(z̄)¯ = z, (z + w)¯ = z̄ + w̄, (zw)¯ = z̄ w̄, |z̄| = |z|,
where |z|, the absolute value (or modulus) of z = a + bi, is defined by |z| = √(a^2 + b^2).
3. Let z = 2 + i, w = 1 + 3i. Compute zw. Compute the magnitudes of z, w, zw directly and check
that DeMoivre's Thm holds for these magnitudes. Estimate the arguments of z, w, zw from a
sketch and check that DeMoivre's Thm seems to hold for these arguments.
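The arithmetic in this problem can be double-checked with Python's built-in complex type (this checks the numbers, not the sketch):

```python
import cmath

z, w = 2 + 1j, 1 + 3j
product = z * w
print(product)   # (2 + i)(1 + 3i) = -1 + 7i

# Magnitudes multiply:
assert abs(abs(z) * abs(w) - abs(product)) < 1e-12
# Arguments add (no wrap past pi occurs for these values):
assert abs(cmath.phase(z) + cmath.phase(w) - cmath.phase(product)) < 1e-12
```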
4. Prove the magnitude part of DeMoivre's as follows. Let z = a + bi, w = c + di. Compute |z|
and |w| from the definition. Now compute zw in terms of a, b, c, d and use your computation
to compute |zw| from the definition. Now do high school algebra to show that |z||w| = |zw|.
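The high-school algebra in question boils down to one identity, worth verifying by expanding both squares (the cross terms ±2abcd cancel):

```latex
\[
|zw|^2 = (ac - bd)^2 + (ad + bc)^2
       = a^2c^2 + b^2d^2 + a^2d^2 + b^2c^2
       = (a^2 + b^2)(c^2 + d^2) = |z|^2 |w|^2 .
\]
```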
5. Prove both parts of DeMoivre's Thm as follows. Let (r, θ) be the magnitude and argument of
a + bi. Let (r', θ') be the magnitude and argument of c + di.
a) Explain why
a + bi = r(cos θ + i sin θ) and c + di = r'(cos θ' + i sin θ')   (2)
and conversely, if you are given a complex number in the form r(cos θ + i sin θ), you know
right away that the magnitude is r and the argument is θ.
b) Multiply (a + bi)(c + di) in the trigonometric forms (2), and use some trig identities to
prove the theorem.
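For part (b), the trig identities needed are the angle-addition formulas; the multiplication takes the shape

```latex
\begin{align*}
(a+bi)(c+di)
  &= rr'\bigl[(\cos\theta\cos\theta' - \sin\theta\sin\theta')
      + i(\sin\theta\cos\theta' + \cos\theta\sin\theta')\bigr] \\
  &= rr'\bigl[\cos(\theta+\theta') + i\sin(\theta+\theta')\bigr],
\end{align*}
```

so the magnitude of the product is rr' and the argument is θ + θ'.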
Section HB. Complex Vector Spaces, Part I
A complex vector space satisfies the same axioms as a real vector space except that the set of
scalars is C. (In general, a vector space is anything that satisfies those axioms, using as the set of
scalars some field.) All the definitions we have used for real vector spaces (e.g., dependent, basis,
linear transformation) go through unchanged. Therefore, all the results we have proved go through
unchanged. The problems in this section illustrate this lack of change. Part II of this section will
come later, after we have more vector space concepts, like dimension. Then you can consider the
interesting interplay of complex and real vector spaces.
1. Let V be a complex vector space. Prove: if x ∈ V, x ≠ 0, and z, w are scalars with z ≠ w,
then zx ≠ wx.
2. Solve Ax = b, where
A = [ 1  1 ]      b = [ i ]
    [ i  1 ],         [ 1 ].
(Gaussian elimination is yuckier by hand over the complex numbers than over the reals. Can
your calculator do it?)
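A calculator or numpy handles complex Gaussian elimination directly. The entries below are as printed above; if any signs were lost in this copy, adjust A and b accordingly.

```python
import numpy as np

# The 2x2 complex system A x = b as printed.
A = np.array([[1,  1],
              [1j, 1]])
b = np.array([1j, 1])

x = np.linalg.solve(A, b)
print(x)   # for this A and b, x = (-1, 1 + i)
```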
Section I. Subspaces
1. a) Show that W = {(x, y, z) | x + y + z = 0} is a subspace of R^3.
b) Show that U = {(x, y, z) | x + y + z = 1} is not a subspace of R^3.
2. a) Let Mnn be the set of all n × n matrices (with real number entries). Show that Mnn is a
vector space.
b) Determine which of the following subsets of Mnn is a subspace:
i) The symmetric matrices
ii) Those matrices that commute with an arbitrary but fixed matrix T
iii) Those matrices for which M = M^2.
3. For any matrix A, the null space of A, denoted NA or N(A), is defined by NA = {x | Ax = 0}.
That is, if A is m × n, then NA is the set of n-tuple column vectors x that satisfy Ax = 0.
Prove that NA is a vector space.
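A numerical illustration (not the requested proof): compute a basis for NA via the SVD and check the two closure properties directly. The matrix A below is an arbitrary example.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Right singular vectors belonging to (near-)zero singular values
# span the null space of A.
_, s, Vt = np.linalg.svd(A)
rank = int((s > 1e-10).sum())
null_basis = Vt[rank:]            # here a single vector, since rank(A) = 2

v = null_basis[0]
assert np.allclose(A @ v, 0)          # v lies in NA
assert np.allclose(A @ (2.5 * v), 0)  # closed under scalar multiples
assert np.allclose(A @ (v + v), 0)    # closed under addition
print(null_basis.shape)
```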
4. Which of these sets of functions are vector spaces? Give reasons.
a) All polynomials (call the set P)
b) All polynomials of degree n (call the set P=n)
c) All polynomials of degree at most n (call the set Pn)
d) All polynomials p(x) that satisfy p(1) = 0.
e) All polynomials p(x) that satisfy p(1) = 2.
f) All polynomials p(x) that satisfy p(2) = 2p(1).
5. For each of the following, is it a vector space? Give reasons briefly for your answers.
a) The upper triangular n × n matrices, for some fixed positive integer n.
b) The simple upper triangular n × n matrices, for some fixed positive integer n.
c) All upper triangular matrices of all square sizes.
6. Function f is a real function if its domain and codomain are R. Show that the (set of) real
functions f satisfying
3f''(x) + x^2 f'(x) - f(x) = 0 for all x
are closed under addition and scalar multiplication. This is the key step to showing that these
functions form a vector space.
7. Prove or disprove: the set of odd real functions is a vector space. Recall a real function is odd
if, for all x, f(-x) = -f(x).
8. It's a pain, when proving subsets are subspaces, to have to prove two closure properties. Fortunately, we have
Theorem. A nonempty subset U of a vector space V is a subspace ⟺ for all u, v ∈ U and
all scalars c, cu + v is in U.
In other words, one combined closure property is enough. Prove this theorem.
9. Prove: if U, U' are subspaces of V, then so is U ∩ U'. Is the intersection of an arbitrary number
of subspaces of V, even infinitely many, also