

Introduction to Vector Spaces, Vector Algebras, and Vector Geometries

    Richard A. Smith

    April 4, 2011


    Preface

Vector spaces appear so frequently in both pure and applied mathematics that I felt that a work that promotes self-study of some of their more elementary appearances in a manner that emphasizes some basic algebraic and geometric concepts would be welcome. Undergraduate mathematics majors in their junior or senior year are the intended primary readers, and I hope that they and others sufficiently mathematically prepared will find its study worthwhile.

Copyright 2005-2010. Permission granted to transmit electronically. May not be sold by others for profit.

Richard A. Smith, [email protected]


    Contents

1 Fundamentals of Structure
   1.1 What is a Vector Space?
   1.2 Some Vector Space Examples
   1.3 Subspace, Linear Combination and Span
   1.4 Independent Set, Basis and Dimension
   1.5 Sum and Direct Sum of Subspaces
   1.6 Problems

2 Fundamentals of Maps
   2.1 Structure Preservation and Isomorphism
   2.2 Kernel, Level Sets and Quotient Space
   2.3 Short Exact Sequences
   2.4 Projections and Reflections on V = W ⊕ X
   2.5 Problems

3 More on Maps and Structure
   3.1 Function Spaces and Map Spaces
   3.2 Dual of a Vector Space, Dual of a Basis
   3.3 Annihilators
   3.4 Dual of a Map
   3.5 The Contragredient
   3.6 Product Spaces
   3.7 Maps Involving Products
   3.8 Problems

4 Multilinearity and Tensoring
   4.1 Multilinear Transformations
   4.2 Functional Product Spaces
   4.3 Functional Tensor Products
   4.4 Tensor Products in General
   4.5 Problems

5 Vector Algebras
   5.1 The New Element: Vector Multiplication
   5.2 Quotient Algebras
   5.3 Algebra Tensor Product
   5.4 The Tensor Algebras of a Vector Space
   5.5 The Exterior Algebra of a Vector Space
   5.6 The Symmetric Algebra of a Vector Space
   5.7 Null Pairs and Cancellation
   5.8 Problems

6 Vector Affine Geometry
   6.1 Basing a Geometry on a Vector Space
   6.2 Affine Flats
   6.3 Parallelism in Affine Structures
   6.4 Translates and Other Expressions
   6.5 Affine Span and Affine Sum
   6.6 Affine Independence and Affine Frames
   6.7 Affine Maps
   6.8 Some Affine Self-Maps
   6.9 Congruence Under the Affine Group
   6.10 Problems

7 Basic Affine Results and Methods
   7.1 Notations
   7.2 Two Axiomatic Propositions
   7.3 A Basic Configuration Theorem
   7.4 Barycenters
   7.5 A Second Basic Configuration Theorem
   7.6 Vector Ratios
   7.7 A Vector Ratio Product and Collinearity
   7.8 A Vector Ratio Product and Concurrency

8 An Enhanced Affine Environment
   8.1 Inflating a Flat
   8.2 Desargues Revisited
   8.3 Representing (Generalized) Subflats
   8.4 Homogeneous and Plücker Coordinates
   8.5 Dual Description Via Annihilators
   8.6 Direct and Dual Plücker Coordinates
   8.7 A Dual Exterior Product
   8.8 Results Concerning Zero Coordinates
   8.9 Some Examples in Space
   8.10 Factoring a Blade
   8.11 Algorithm for Sum or Intersection
   8.12 Adapted Coordinates and Products
   8.13 Problems

9 Vector Projective Geometry
   9.1 The Projective Structure on V
   9.2 Projective Frames
   9.3 Pappus Revisited
   9.4 Projective Transformations
   9.5 Projective Maps
   9.6 Central Projection
   9.7 Problems

10 Scalar Product Spaces
   10.1 Pairings and Their Included Maps
   10.2 Nondegeneracy
   10.3 Orthogonality, Scalar Products
   10.4 Reciprocal Basis
   10.5 Related Scalar Product for Dual Space
   10.6 Some Notational Conventions
   10.7 Standard Scalar Products
   10.8 Orthogonal Subspaces and Hodge Star
   10.9 Scalar Product on Exterior Powers
   10.10 Another Way to Define
   10.11 Hodge Stars for the Dual Space
   10.12 Problems


    Chapter 1

    Fundamentals of Structure

    1.1 What is a Vector Space?

A vector space is a structured set of elements called vectors. The structure required for this set of vectors is that of an Abelian group with operators from a specified field. (For us, multiplication in a field is commutative, and a number of our results will depend on this in an essential way.) If the specified field is F, we say that the vector space is over F. It is standard for the Abelian group to be written additively, and for its identity element to be denoted by 0. The symbol 0 is also used to denote the field's zero element, but using the same symbol for both things will not be confusing. The field elements will operate on the vectors by multiplication on the left, the result always being a vector. The field elements will be called scalars and be said to scale the vectors that they multiply. The scalars are required to scale the vectors in a harmonious fashion, by which is meant that the following three rules always hold for any vectors and scalars. (Here a and b denote scalars, and v and w denote vectors.)

1. Multiplication by the field's unity element is the identity operation: 1 · v = v.

2. Multiplication is associative: a · (b · v) = (ab) · v.

3. Multiplication distributes over addition: (a + b) · v = a · v + b · v and a · (v + w) = a · v + a · w.

    Four things which one might reasonably expect to be true are true.


Proposition 1 If a is a scalar and v a vector, then a · 0 = 0, 0 · v = 0, (−a) · v = −(a · v), and, given that a · v = 0, if a ≠ 0 then v = 0.

Proof. a · 0 = a · (0 + 0) = a · 0 + a · 0; adding −(a · 0) to both ends yields the first result. 0 · v = (0 + 0) · v = 0 · v + 0 · v; adding −(0 · v) to both ends yields the second result. Also, 0 = 0 · v = (a + (−a)) · v = a · v + (−a) · v; adding −(a · v) on the front of both ends yields the third result. If a · v = 0 with a ≠ 0, then a⁻¹ · (a · v) = a⁻¹ · 0, so that (a⁻¹a) · v = 0 and hence v = 0 as claimed.

Exercise 1.1.1 Given that a · v = 0, if v ≠ 0 then a = 0.

Exercise 1.1.2 −(a · v) = a · (−v).

Because of the harmonious fashion in which the scalars operate on the vectors, the elementary arithmetic of vectors contains no surprises.

    1.2 Some Vector Space Examples

There are many familiar spaces that are vector spaces, although in their usual appearances they may have additional structure as well. A familiar vector space over the real numbers R is R^n, the space of n-tuples of real numbers with component-wise addition and scalar multiplication by elements of R. Closely related is another vector space over R, the space of all translations of R^n, the general element of which is the function from R^n to itself that sends the general element (ξ_1, . . . , ξ_n) of R^n to (ξ_1 + τ_1, . . . , ξ_n + τ_n) for a fixed real n-tuple (τ_1, . . . , τ_n). Thus an element of this translation space uniformly vectors the elements of R^n to new positions, displacing each element of R^n by the same vector, and in this sense it is truly a space of vectors. In the same fashion as with R^n over R, over any field F, the space F^n of all n-tuples of elements of F constitutes a vector space. The space C^n of all n-tuples of complex numbers, in addition to being considered to be a vector space over C, may also be considered to be a (different, because it is taken over a different field) vector space over R. Another familiar algebraic structure that is a vector space, but which also has additional structure, is the space of all polynomials with coefficients in the field F. To conclude this initial collection of examples, the set consisting of the single vector 0 may be considered to be a vector space over any field; these are the trivial vector spaces.
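
As a concrete companion to these examples (not part of the original text, and using hypothetical names), here is a small Python sketch of the translation view of R^n: a fixed n-tuple determines a function that displaces every point of R^3 by the same amount.

    # Minimal sketch (not from the original text): R^n and its translations,
    # using plain tuples for vectors of R^3.

    def translation(tau):
        """Return the translation of R^n determined by the fixed tuple tau."""
        return lambda xi: tuple(x + t for x, t in zip(xi, tau))

    t = translation((1.0, -2.0, 0.5))      # an element of the space of translations
    print(t((0.0, 0.0, 0.0)))              # (1.0, -2.0, 0.5)
    print(t((3.0, 3.0, 3.0)))              # (4.0, 1.0, 3.5)

    # Translations themselves add componentwise and are scaled by reals,
    # which is what makes the set of translations a vector space over R.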


    1.3 Subspace, Linear Combination and Span

Given a vector space V over the field F, U is a subspace of V, denoted U ◁ V, if it is a subgroup of V that is itself a vector space over F. To show that a subset U of a vector space is a subspace, it suffices to show that U is closed under sum and under product by any scalar. We call V and {0} improper subspaces of the vector space V, and we call all other subspaces proper.

Exercise 1.3.1 As a vector space over itself, R has no proper subspace. The set of all integers is a subgroup, but not a subspace.

    Exercise 1.3.2 The intersection of any number of subspaces is a subspace.

A linear combination of a finite set S of vectors is any sum ∑_{s∈S} c_s · s obtained by adding together exactly one multiple, by some scalar coefficient, of each vector of S. A linear combination of an infinite set S of vectors is a linear combination of any finite subset of S. The empty sum which results in forming the linear combination of the empty set is taken to be 0, by convention. A subspace must contain the value of each of its linear combinations. If S is any subset of a vector space, by the span of S, denoted ⟨S⟩, is meant the set of the values of all linear combinations of S. ⟨S⟩ is a subspace. If U = ⟨S⟩, then we say that S spans U and that it is a spanning set for U.
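
The span of a finite set can be probed numerically. The following Python sketch (not from the original; the names are illustrative, and numerical rank tests are only reliable up to floating-point tolerance) tests whether a vector of R^n is the value of some linear combination of a given finite set by comparing matrix ranks.

    # Sketch (not from the original): testing membership in a span over R
    # by comparing ranks; subject to floating-point tolerance.
    import numpy as np

    def in_span(S, v):
        """True if v lies in <S>, where S is a list of vectors in R^n."""
        A = np.column_stack(S)
        return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

    S = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
    print(in_span(S, np.array([2.0, 3.0, 5.0])))   # True: 2*(1,0,1) + 3*(0,1,1)
    print(in_span(S, np.array([0.0, 0.0, 1.0])))   # False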

Exercise 1.3.3 For any set S of vectors, ⟨S⟩ is the intersection of all subspaces that contain S, and therefore ⟨S⟩ itself is the only subspace of ⟨S⟩ that contains S.

    1.4 Independent Set, Basis and Dimension

Within a given vector space, a dependency is said to exist in a given set of vectors if the zero vector is the value of a nontrivial (scalar coefficients not all zero) linear combination of some finite nonempty subset of the given set. A set in which a dependency exists is (linearly) dependent and otherwise is (linearly) independent. The "linearly" can safely be omitted and we will do this to simplify our writing.


For example, if v ≠ 0, then {v} is an independent set. The empty set is an independent set. Any subset of an independent set is independent. On the other hand, {0} is dependent, as is any set that contains 0.
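
For finite sets of vectors in R^n, a dependency can be detected numerically: the set is independent exactly when the matrix having the vectors as columns has rank equal to the number of vectors. A minimal numpy sketch (not from the original; subject to floating-point tolerance):

    # Sketch (not from the original): a finite set of vectors in R^n is independent
    # exactly when the matrix having them as columns has rank equal to the number
    # of vectors (up to floating-point tolerance).
    import numpy as np

    def independent(vectors):
        A = np.column_stack(vectors)
        return np.linalg.matrix_rank(A) == len(vectors)

    print(independent([np.array([1., 0.]), np.array([0., 1.])]))   # True
    print(independent([np.array([1., 2.]), np.array([2., 4.])]))   # False: a dependency exists
    print(independent([np.array([1., 0.]), np.array([0., 0.])]))   # False: contains the zero vector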

Exercise 1.4.1 A set S of vectors is independent if and only if no member is in the span of the other members: for all v ∈ S, v ∉ ⟨S ∖ {v}⟩. What would the similar statement need to be if the space were not a vector space over a field, but only the similar type of space over the ring Z of ordinary integers?

Similar ideas of dependency and independence may be introduced for vector sequences. A vector sequence is dependent if the set of its terms is a dependent set or if the sequence has any repeated terms. Otherwise the sequence is independent.

Exercise 1.4.2 The terms of the finite vector sequence v_1, . . . , v_n satisfy a linear relation a_1 · v_1 + · · · + a_n · v_n = 0 with at least one of the scalar coefficients a_i nonzero if and only if the sequence is dependent. If the sequence is dependent, then it has a term that is in the span of the set of preceding terms, and this set of preceding terms is nonempty if v_1 ≠ 0.

A maximal independent set in a vector space, i. e., an independent set that is not contained in any other independent set, is said to be a basis set, or, for short, a basis (plural: bases), for the vector space. For example, {(1)} is a basis for the vector space F^1, where F is any field. The empty set is the unique basis for a trivial vector space.

Exercise 1.4.3 Let a family of independent sets be linearly ordered by set inclusion. Then the union of all the sets in the family is an independent set.

From the result of the exercise above and Zorn's Lemma applied to independent sets which contain a given independent set, we obtain the following important result.

Theorem 2 Every vector space has a basis, and, more generally, every independent set is contained in some basis.

A basis may be characterized in various other ways besides its definition as a maximal independent set.


Proposition 3 (Independent Spanning Set) A basis for the vector space V is precisely an independent set that spans V.

Proof. In the vector space V, let B be an independent set such that ⟨B⟩ = V. Suppose that B is contained in the larger independent set C. Then, choosing any v ∈ C ∖ B, the set B′ = B ∪ {v} is independent because it is a subset of the independent set C. But then v ∉ ⟨B′ ∖ {v}⟩, i. e., v ∉ ⟨B⟩, contradicting ⟨B⟩ = V. Hence B is a basis for V.

On the other hand, let B be a basis, i. e., a maximal independent set, but suppose that B does not span V. There must then exist v ∈ V ∖ ⟨B⟩. However, B ∪ {v} would then be independent, since a nontrivial dependency relation in B ∪ {v} would have to involve v with a nonzero coefficient and this would put v ∈ ⟨B⟩. But the independence of B ∪ {v} contradicts the hypothesis that B is a maximal independent set. Hence, the basis B is an independent set that spans V.

    Corollary 4 An independent set is a basis for its own span.

Proposition 5 (Minimal Spanning Set) B is a basis for the vector space V if and only if B is a minimal spanning set for V, i. e., B spans V, but no subset of B unequal to B also spans V.

Proof. Suppose B spans V and no subset of B unequal to B also spans V. If B were not independent, there would exist b ∈ B for which b ∈ ⟨B ∖ {b}⟩. But then the span of B ∖ {b} would be the same as the span of B, contrary to what we have supposed. Hence B is independent, and by the previous proposition, B must therefore be a basis for V since B also spans V.

On the other hand, suppose B is a basis for the vector space V. By the previous proposition, B spans V. Suppose that A is a subset of B that is unequal to B. Then no element v ∈ B ∖ A can be in ⟨B ∖ {v}⟩ since B is independent, and certainly, then, v ∉ ⟨A⟩ since A ⊆ B ∖ {v}. Thus A does not span V, and B is therefore a minimal spanning set for V.

Exercise 1.4.4 Consider a finite spanning set. Among its subsets that also span, there is at least one of smallest size. Thus a basis must exist for any vector space that has a finite spanning set, independently verifying what we already know from Theorem 2.


Exercise 1.4.5 Over any field F, the set of n n-tuples that have a 1 in one position and 0 in all other positions is a basis for F^n. (This basis is called the standard basis for F^n over F.)

Proposition 6 (Unique Linear Representation) B is a basis for the vector space V if and only if B has the unique linear representation property, i. e., each vector of V has a unique linear representation as a linear combination ∑_{x∈X} a_x · x where X is some finite subset of B and all of the scalars a_x are nonzero.

Proof. Let B be a basis for V, and let v ∈ V. Since V is the span of B, v certainly is a linear combination of a finite subset of B with all scalar coefficients nonzero. Suppose that v has two different expressions of this kind. Subtracting, we find that 0 is equal to a nontrivial linear combination of the union U of the subsets of B involved in the two expressions. But U is an independent set since it is a subset of the independent set B. Hence v has the unique representation as claimed.

On the other hand, suppose that B has the unique linear representation property. Then B spans V. We complete the proof by showing that B must be an independent set. Suppose not. Then there would exist b ∈ B for which b ∈ ⟨B ∖ {b}⟩. But this b would then have two different representations with nonzero coefficients, contrary to hypothesis, since we always have 1 · b as one such representation and there would also be another one not involving b in virtue of b ∈ ⟨B ∖ {b}⟩.

Exercise 1.4.6 A finite set of vectors is a basis for a vector space if and only if each vector in the vector space has a unique representation as a linear combination of this set: {x_1, . . . , x_n} (with distinct x_i, of course) is a basis if and only if each v = a_1 · x_1 + · · · + a_n · x_n for unique scalars a_1, . . . , a_n.
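
For a basis of R^n, the unique linear representation property amounts to the solvability and uniqueness of a square linear system. A small numpy sketch (not from the original; the particular basis is just an illustration):

    # Sketch (not from the original): for a basis {x_1, ..., x_n} of R^n, the unique
    # scalars a_1, ..., a_n with v = a_1*x_1 + ... + a_n*x_n come from solving a
    # linear system whose columns are the basis vectors.
    import numpy as np

    basis = [np.array([1., 1., 0.]), np.array([0., 1., 1.]), np.array([1., 0., 1.])]
    B = np.column_stack(basis)              # invertible exactly because the set is a basis
    v = np.array([2., 3., 1.])

    a = np.linalg.solve(B, v)               # the unique coordinate tuple of v
    print(a)
    print(np.allclose(B @ a, v))            # True: the representation reproduces v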

Exercise 1.4.7 If S is a finite independent set, ∑_{s∈S} c_s · s = ∑_{s∈S} d_s · s implies c_s = d_s for all s.

Example 7 In this brief digression we now apply the preceding two propositions. Let v_0, v_1, . . . , v_n be vectors in a vector space over the field F, and suppose that v_0 is in the span V of the other v_i. Then the equation

ξ_1 · v_1 + · · · + ξ_n · v_n = v_0


for the ξ_i ∈ F has at least one solution. Renumbering the v_i if necessary, we may assume that the distinct vectors v_1, . . . , v_m form a basis set for V. If m = n the equation has a unique solution. Suppose, however, that 1 ≤ m < n. […]

1.6 Problems

5. […] dim W. When does equality hold?

6. Let W and X be subspaces of a vector space V. Then V has a basis B such that each of W and X is spanned by a subset of B.

7. Instead of elements of a field F, let the set of scalars be the integers Z, so the result is not a vector space, but what is known as a Z-module. Z_3, the integers modulo 3, is a Z-module that has no linearly independent element. Thus Z_3 has ∅ as its only maximal linearly independent set, but ⟨∅⟩ ≠ Z_3. Z itself is also a Z-module, and has {2} as a maximal linearly independent set, but ⟨{2}⟩ ≠ Z. On the other hand, {1} is also a maximal linearly independent set in Z, and ⟨{1}⟩ = Z, so we are prone to declare that {1} is a basis for Z. Every Z-module, in fact, has a maximal linearly independent set, but the span of such a set may very well not be the whole Z-module. We evidently should reject maximal linearly independent set as a definition of basis for Z-modules, but linearly independent spanning set, equivalent for vector spaces, does give us a reasonable definition for those Z-modules that are indeed spanned by an independent set. Let us adopt the latter as our definition for basis of a Z-module. Then, is it possible for a Z-module to have finite bases with differing numbers of elements? Also, is it then true that no finite Z-module has a basis?


    Chapter 2

    Fundamentals of Maps

    2.1 Structure Preservation and Isomorphism

A structure-preserving function from a structured set to another with the same kind of structure is known in general as a homomorphism. The shorter term map is often used instead, as we shall do here. When the specific kind of structure is an issue, a qualifier may also be included, and so one speaks of group maps, ring maps, etc. We shall be concerned here mainly with vector space maps, so for us a map, unqualified, is a vector space map. (A vector space map is also commonly known as a linear transformation.)

For two vector spaces to be said to have the same kind of structure, we require them to be over the same field. Then for f to be a map between them, we require f(x + y) = f(x) + f(y) and f(a · x) = a · f(x) for all vectors x, y and any scalar a, or, equivalently, that f(a · x + b · y) = a · f(x) + b · f(y) for all vectors x, y and all scalars a, b. It is easy to show that a map sends 0 to 0, and a constant map must then send everything to 0. Under a map, the image of a subspace is always a subspace, and the inverse image of a subspace is always a subspace. The identity function on a vector space is a map. Maps which compose have a map as their composite.

Exercise 2.1.1 Let V and W be vector spaces over the same field and let S be a spanning set for V. Then any map f : V → W is already completely determined by its values on S.

The values of a map may be arbitrarily assigned on a minimal spanning set, but not on any larger spanning set.


Theorem 19 Let V and W be vector spaces over the same field and let B be a basis for V. Then given any function f_0 : B → W, there is a unique map f : V → W such that f agrees with f_0 on B.

Proof. By the unique linear representation property of a basis, given v ∈ V, there is a unique subset X of B and unique nonzero scalars a_x such that v = ∑_{x∈X} a_x · x. Because a map preserves linear combinations, any map f that agrees with f_0 on B can only have the value f(v) = ∑_{x∈X} a_x · f(x) = ∑_{x∈X} a_x · f_0(x). Setting f(v) = ∑_{x∈X} a_x · f_0(x) does define a function f : V → W, and this function f clearly agrees with f_0 on B. Moreover, one immediately verifies that this f is a map.
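
Theorem 19 is what underlies the matrix description of a map between coordinate spaces: prescribing the images of the standard basis vectors of R^n determines the whole map. A minimal numpy sketch (not from the original; the names are illustrative):

    # Sketch (not from the original): a map R^n -> R^m is pinned down by its values
    # on a basis.  Here f0 prescribes images of the standard basis vectors, and the
    # resulting map is v |-> A @ v with those images as the columns of A.
    import numpy as np

    f0 = {0: np.array([1., 0.]),     # image of e_0
          1: np.array([1., 1.]),     # image of e_1
          2: np.array([0., 3.])}     # image of e_2

    A = np.column_stack([f0[i] for i in range(3)])

    def f(v):
        return A @ v                 # linear, and agrees with f0 on the basis

    print(f(np.array([1., 0., 0.])))     # equals f0[0]
    print(f(np.array([2., -1., 1.])))    # 2*f0[0] - f0[1] + f0[2]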

It is standard terminology to refer to a function that is both one-to-one (injective) and onto (surjective) as bijective, or invertible. Each invertible f : X → Y has a (unique) inverse f⁻¹ such that f⁻¹ ∘ f and f ∘ f⁻¹ are the respective identity functions on X and Y, and an f that has such an f⁻¹ is invertible. The composite of functions is bijective if and only if each individual function is bijective.

For some types of structure, the inverse of a bijective map need not always be a map itself. However, for vector spaces, inverting does always yield another map.

    Theorem 20 The inverse of a bijective map is a map.

Proof. f⁻¹(a · v + b · w) = a · f⁻¹(v) + b · f⁻¹(w) means precisely the same as f(a · f⁻¹(v) + b · f⁻¹(w)) = a · v + b · w, which is clearly true.

Corollary 21 A one-to-one map preserves independent sets, and conversely, a map that preserves independent sets is one-to-one.

Proof. A one-to-one map sends its domain bijectively onto its image and this image is a subspace of the codomain. Suppose a set in the one-to-one map's image is dependent. Then clearly the inverse image of this set is also dependent. An independent set therefore cannot be sent to a dependent set by a one-to-one map.

On the other hand, suppose that a map sends the distinct vectors u and v to the same image vector. Then it sends the nonzero vector v − u to 0, and hence it sends the independent set {v − u} to the dependent set {0}.

Because their inverses are also maps, the bijective maps are the isomorphisms of vector spaces. If there is a bijective map from the vector space V


onto the vector space W, we say that W is isomorphic to V. The notation V ≅ W is commonly used to indicate that V and W are isomorphic. Viewed as a relation between vector spaces, isomorphism is reflexive, symmetric and transitive, hence is an equivalence relation. If two spaces are isomorphic to each other we say that each is an alias of the other.

Theorem 22 Let two vector spaces be over the same field. Then they are isomorphic if and only if they have bases of the same cardinality.

Proof. Applying Theorem 19, the one-to-one correspondence between their bases extends to an isomorphism. On the other hand, an isomorphism restricts to a one-to-one correspondence between bases.

Corollary 23 Any n-dimensional vector space over the field F is isomorphic to F^n.

Corollary 24 (Fundamental Theorem of Finite-Dimensional Vector Spaces) Two finite-dimensional vector spaces over the same field are isomorphic if and only if they have the same dimension.

The following well-known and often useful result may seem somewhat surprising on first encounter.

Theorem 25 A map of a finite-dimensional vector space into an alias of itself is one-to-one if and only if it is onto.

Proof. Suppose the map is one-to-one. Since one-to-one maps preserve independent sets, the image of a basis is an independent set with the same number of elements. The image of a basis must then be a maximal independent set, and hence a basis for the codomain, since the domain and codomain have equal dimension. Because a basis and its image are in one-to-one correspondence under the map, expressing elements in terms of these bases, it is clear that each element has a preimage element.

On the other hand, suppose the map is onto. The image of a basis is a spanning set for the codomain and must contain at least as many distinct elements as the dimension of the codomain. Since the domain and codomain have equal dimension, the image of a basis must in fact then be a minimal spanning set for the codomain, and therefore a basis for it. Expressing


elements in terms of a basis and its image, we find that, due to unique representation, if the map sends u and v to the same element, then u = v.

For infinite-dimensional spaces, the preceding theorem fails to hold. It is a pleasant dividend of finite dimension, analogous to the result that a function between equinumerous finite sets is one-to-one if and only if it is onto.

    2.2 Kernel, Level Sets and Quotient Space

The kernel of a map is the inverse image of {0}. The kernel of f is a subspace of the domain of f. A level set of a map is a nonempty inverse image of a singleton set. The kernel of a map, of course, is one of its level sets, the only one that contains 0 and is a subspace. The level sets of a particular map make up a partition of the domain of the map. The following proposition internally characterizes the level sets as the cosets of the kernel.

Proposition 26 Let L be a level set and K be the kernel of the map f. Then for any v in L, L = v + K = {v + x | x ∈ K}.

Proof. Let v ∈ L and x ∈ K. Then f(v + x) = f(v) + f(x) = f(v) + 0, so that v + x ∈ L. Hence v + K ⊆ L. On the other hand, suppose that u ∈ L and write u = v + (u − v). Then since both u and v are in L, f(u) = f(v), and hence f(u − v) = 0, so that u − v ∈ K. Hence L ⊆ v + K.

    Corollary 27 A map is one-to-one if and only if its kernel is {0}.

Given any vector v in the domain of f, it must be in some level set, and now we know that this level set is v + K, independent of the f which has domain V and kernel K.

Corollary 28 Maps with the same domain and the same kernel have identical level set families. For maps with the same domain, then, the kernel uniquely determines the level sets.

The level sets of a map f are in one-to-one correspondence with the elements of the image of the map. There is a very simple way in which the level sets can be made into a vector space isomorphic to the image of the map. Just use the one-to-one correspondence between the level sets


and the image elements to give the image elements new labels according to the correspondence. Thus z gets the new label f⁻¹({z}). To see what f⁻¹({z}) + f⁻¹({w}) is, figure out what z + w is and then take f⁻¹({z + w}), and similarly to see what a · f⁻¹({z}) is, figure out what a · z is and take f⁻¹({a · z}). All that has been done here is to give different names to the elements of the image of f. This level-set alias of the image of f is the vector space quotient (or quotient space) of the domain V of f modulo the subspace K = Kernel(f) (which is the 0 of the space), and is denoted by V/K. The following result internalizes the operations in V/K.

Proposition 29 Let f be a map with domain V and kernel K. Let L = u + K and M = v + K. Then in V/K, L + M = (u + v) + K, and for any scalar a, a · L = a · u + K.

Proof. L = f⁻¹({f(u)}) and M = f⁻¹({f(v)}). Hence L + M = f⁻¹({f(u) + f(v)}) = f⁻¹({f(u + v)}) = (u + v) + K. Also, a · L = f⁻¹({a · f(u)}) = f⁻¹({f(a · u)}) = a · u + K.

    The following is immediate.

Corollary 30 V/K depends only on V and K, and not on the map f that has domain V and kernel K.

The next result tells us, for one thing, that kernel subspaces are not special, and hence the quotient space exists for any subspace K. (In the vector space V, the subspace K′ is a complementary subspace, or complement, of the subspace K if K and K′ have disjoint bases that together form a basis for V, or what is the same, V = K ⊕ K′. Every subspace K of V has at least one complement K′, because any basis A of K is contained in some basis B of V, and we may then take K′ = ⟨B ∖ A⟩.)

Proposition 31 Let K be a subspace of the vector space V and let K′ be a complementary subspace of K. Then there is a map from V into itself which has K as its kernel and has K′ as its image. Hence given any subspace K of V, the quotient space V/K exists and is an alias of any complementary subspace of K.

Proof. Let A be a basis for K, and let A′ be a basis for K′, so that A ∪ A′ is a basis for V. Define φ as the map of V into itself which sends each element


of A to 0, and each element of A′ to itself. Then φ has kernel K and image K′.

Supposing that K is the kernel of some map f : V → W, the image of f is an alias of the quotient space, as is the image of the map of the proposition above. Hence any complementary subspace of the kernel of a map is an alias of the map's image. For maps with finite-dimensional domains, we then immediately deduce the following important and useful result. (The rank of a map is the dimension of its image, and the nullity of a map is the dimension of its kernel.)

Theorem 32 (Rank Plus Nullity Theorem) If the domain of a map has finite dimension, the sum of its rank and its nullity equals the dimension of its domain.

Exercise 2.2.1 Let f : R^3 → R^2 be the map that sends (x, y, z) to (x − y, 0). Draw a picture that illustrates for this particular map the concepts of kernel and level set, and how complementary subspaces of the kernel are aliases of the image.
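
A numerical companion to this exercise (not part of the original): the map f(x, y, z) = (x − y, 0) written as a matrix, with its rank and nullity computed and a level set exhibited as a coset of the kernel.

    # Sketch (not from the original): the map of Exercise 2.2.1, f(x, y, z) = (x - y, 0),
    # as a matrix, with its rank, nullity, and a level set illustrated numerically.
    import numpy as np

    A = np.array([[1., -1., 0.],
                  [0.,  0., 0.]])          # matrix of f : R^3 -> R^2

    rank = np.linalg.matrix_rank(A)        # dimension of the image
    nullity = A.shape[1] - rank            # Rank Plus Nullity Theorem
    print(rank, nullity)                   # 1 2

    # The kernel is the plane x = y; two independent kernel vectors:
    for k in (np.array([1., 1., 0.]), np.array([0., 0., 1.])):
        print(A @ k)                       # both give the zero vector

    # The level set over (5, 0) is v + Kernel(f) for any particular preimage v:
    v = np.array([5., 0., 0.])
    print(A @ v)                           # [5. 0.]
    print(A @ (v + 3 * np.array([1., 1., 0.])))   # still [5. 0.]: same level set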

If we specify domain V and kernel K, we have the level sets turned into the vector space V/K as an alias of the image of any map with that domain and kernel. Thus V/K can be viewed as a generic image for maps with domain V and kernel K. To round out the picture, we now single out a map that sends V onto V/K.

Proposition 33 The function p : V → V/K which sends each v ∈ V to v + K is a map with kernel K.

Proof. Supposing, as we may, that K is the kernel of some map f : V → W, let the map f♭ : V → Image(f) be defined by f♭(v) = f(v) for all v ∈ V. Let Φ : Image(f) → V/K be the isomorphism from the image of f to V/K obtained through the correspondence of image elements with their level sets. Notice that Φ sends w = f(v) to v + K. Then p = Φ ∘ f♭ and is obviously a map with kernel K.

We call p the natural projection. It serves as a generic map for obtaining the generic image. For any map f : V → W with kernel K, we always have the composition


V → V/K → Image(f) → W,

where V → V/K is generic, denoting the natural projection p, and the remainder is the nongeneric specialization to f via an isomorphism and an inclusion map. Our next result details how the generic map p in fact serves in a more general way as a universal factor of maps with kernel containing K.

Theorem 34 Let p : V → V/K be the natural projection and let f : V → W. If K ⊆ Kernel(f) then there is a unique induced map f_{V/K} : V/K → W such that f_{V/K} ∘ p = f.

Proof. The prescription f_{V/K} ∘ p = f determines exactly what f_{V/K} must do: for each v ∈ V, f_{V/K}(p(v)) = f_{V/K}(v + K) = f(v). For this to unambiguously define the value of f_{V/K} at each element of V/K, it must be the case that if u + K = v + K, then f(u) = f(v). But this is true because K ⊆ Kernel(f) implies that f(k) = 0 for each k ∈ K. The unique function f_{V/K} so determined is readily shown to be a map.

Exercise 2.2.2 Referring to the theorem above, f_{V/K} is one-to-one if and only if Kernel(f) = K, and f_{V/K} is onto if and only if f is onto.

The rank of the natural map p : V → V/K is the dimension of V/K, also known as the codimension of the subspace K and denoted by codim K. Thus the Rank Plus Nullity Theorem may also be expressed as

    dim K + codim K = dim V

    when dim V is finite.

Exercise 2.2.3 Let K be a subspace of finite codimension in the infinite-dimensional vector space V. Then K ≅ V.

2.3 Short Exact Sequences

There is a special type of map sequence that is a way to view a quotient. The map sequence X → Y → Z is said to be exact at Y if the image of the map going into Y is exactly the kernel of the map coming out of Y. A short


exact sequence is a map sequence of the form 0 → K → V → W → 0 which is exact at K, V, and W. Notice that here the map K → V is one-to-one, and the map V → W is onto, in virtue of the exactness requirements and the restricted nature of maps to or from 0 = {0}. Hence if V → W corresponds to the function f, then it is easy to see that K is an alias of Kernel(f), and W is an alias of V/Kernel(f). Thus the short exact sequence captures the idea of quotient modulo any subspace. It is true that with the same K, V, and W, there are maps such that the original sequence with all its arrows reversed is also a short exact sequence, one complementary to the original, it might be said. This is due to the fact that K and W are always aliases of complementary subspaces of V, which is something special that vector space structure makes possible. Thus it is said that, for vector spaces, a short exact sequence always splits, as illustrated by the short exact sequence 0 → K → K ⊕ K′ → K′ → 0.

An example of the use of the exact sequence view of quotients stems from considering the following diagram where the rows and columns are short exact sequences. Each second map in a row or column is assumed to be an inclusion, and each third map is assumed to be a natural projection.

                 0           0
                 ↓           ↓
    0  →  K  →   H    →     H/K      →  0
                 ↓           ↓
    0  →  K  →   V    →     V/K      →  0
                 ↓           ↓
                V/H    (V/K)/(H/K)
                 ↓           ↓
                 0           0

The sequence 0 → H → V → V/K → (V/K)/(H/K) → 0 is a subdiagram. If we compose the pair of onto maps of V → V/K → (V/K)/(H/K) to yield the composite onto map V → (V/K)/(H/K), we then have the sequence 0 → H → V → (V/K)/(H/K) → 0 which is exact at H and at (V/K)/(H/K), and would also be exact at V if H were the kernel of the composite. But


it is precisely the h ∈ H that map to the elements h + K ∈ H/K ⊆ V/K, and these in turn are precisely the elements that map to H/K, which is the 0 of (V/K)/(H/K). Thus H is indeed the kernel of the composite, and we have obtained the following isomorphism theorem.

Theorem 35 Let K be a subspace of H, and let H be a subspace of V. Then V/H is isomorphic to (V/K)/(H/K).

Exercise 2.3.1 Let V be R^3, let H be the (x, y)-plane, and let K be the x-axis. Interpret the theorem above for this case.

Now consider the diagram below where X and Y are subspaces and again the rows and columns are short exact sequences and the second maps are inclusions while the third maps are natural projections.

              0         0
              ↓         ↓
    0  →    X∩Y   →     Y     →  Y/(X∩Y)  →  0
              ↓         ↓
    0  →     X    →   X + Y   →  (X+Y)/X  →  0
              ↓         ↓
          X/(X∩Y)    (X+Y)/Y
              ↓         ↓
              0         0

As a subdiagram we have the sequence 0 → X∩Y → X → X + Y → (X+Y)/Y → 0. Replacing the sequence X → X + Y → (X+Y)/Y with its composite, the result would be a short exact sequence if the composite X → (X+Y)/Y were an onto map with kernel X∩Y.

To see if the composite is onto, we must see if each element of (X+Y)/Y is the image of some element of X. Now each element of (X+Y)/Y is the image of some element w ∈ X + Y where w = u + v for some u ∈ X and some v ∈ Y. But clearly the element u by itself has the same image modulo Y. Hence the composite is onto. Also, the elements of X that map to the 0 of (X+Y)/Y, namely Y, are precisely those that then are also in Y. Hence the kernel of the composite is X∩Y.


Thus we have obtained another isomorphism theorem and an immediate corollary.

Theorem 36 Let X and Y be any subspaces of some vector space. Then X/(X∩Y) is isomorphic to (X+Y)/Y.

Corollary 37 (Grassmann's Relation) Let X and Y be finite-dimensional subspaces of a vector space. Then

dim X + dim Y = dim(X + Y) + dim(X ∩ Y).

Exercise 2.3.2 Two 2-dimensional subspaces of R^3 are either identical or intersect in a 1-dimensional subspace.
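
Grassmann's Relation can be checked numerically for concrete subspaces. The sketch below (not from the original; dimensions are computed as matrix ranks, up to floating-point tolerance) takes two planes in R^3, as in the exercise above, and recovers dim(X ∩ Y) = 1.

    # Sketch (not from the original): checking Grassmann's Relation for two planes
    # in R^3.  dim(X + Y) is the rank of the combined spanning set; dim(X ∩ Y)
    # is then forced by the relation (here it is 1, as in the exercise above).
    import numpy as np

    X = [np.array([1., 0., 0.]), np.array([0., 1., 0.])]   # the (x, y)-plane
    Y = [np.array([1., 0., 0.]), np.array([0., 0., 1.])]   # the (x, z)-plane

    dim_X = np.linalg.matrix_rank(np.column_stack(X))
    dim_Y = np.linalg.matrix_rank(np.column_stack(Y))
    dim_sum = np.linalg.matrix_rank(np.column_stack(X + Y))  # X + Y concatenates the spanning sets
    dim_int = dim_X + dim_Y - dim_sum                         # Grassmann's Relation

    print(dim_X, dim_Y, dim_sum, dim_int)                     # 2 2 3 1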

Exercise 2.3.3 Let N, N′, P, and P′ be subspaces of the same vector space and let N ◁ P and N′ ◁ P′. Set X = P ∩ P′ and Y = (P ∩ N′) + N. Verify that X ∩ Y = (P′ ∩ N) + (P ∩ N′) and X + Y = (P ∩ P′) + N, so that

((P ∩ P′) + N′) / ((P′ ∩ N) + N′)  ≅  (P ∩ P′) / ((P′ ∩ N) + (P ∩ N′))  ≅  ((P ∩ P′) + N) / ((P ∩ N′) + N).

2.4 Projections and Reflections on V = W ⊕ X

Relative to the decomposition of the vector space V by a pair W, X of direct summands, we may define some noteworthy self-maps on V. Expressing the general vector v ∈ V = W ⊕ X uniquely as v = w + x where w ∈ W and x ∈ X, two types of such are defined by

P_{W|X}(v) = w and R_{W|X}(v) = w − x.

That these really are maps is easy to establish. We call P_{W|X} the projection onto W along X and we call R_{W|X} the reflection in W along X. (The function φ in the proof of Proposition 31 was the projection onto K′ along K, so we have already employed the projection type to advantage.) Denoting the identity map on V by I, we have

P_{X|W} = I − P_{W|X}

and

R_{W|X} = P_{W|X} − P_{X|W} = I − 2P_{X|W} = 2P_{W|X} − I.
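
A concrete numpy sketch (not from the original; the particular decomposition of R^2 is just an illustration) that builds P_{W|X} and R_{W|X} for W spanned by (1, 0) and X spanned by (1, 1), and checks the idempotence, involution, and complementary-projection identities discussed here.

    # Sketch (not from the original): P_{W|X} and R_{W|X} for the decomposition
    # R^2 = W ⊕ X with W spanned by (1, 0) and X spanned by (1, 1).
    import numpy as np

    C = np.column_stack([np.array([1., 0.]),      # basis vector of W
                         np.array([1., 1.])])     # basis vector of X

    # In the (W, X)-adapted basis the projection keeps the W-part and kills the X-part.
    P = C @ np.diag([1., 0.]) @ np.linalg.inv(C)  # projection onto W along X
    R = 2 * P - np.eye(2)                         # reflection in W along X
    I = np.eye(2)

    print(np.allclose(P @ P, P))                  # True: P is idempotent
    print(np.allclose(R @ R, I))                  # True: R is involutary
    print(np.allclose(P @ (I - P), np.zeros((2, 2))))   # True: P_{W|X} ∘ P_{X|W} = 0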


It bears mention that a given W generally has many different complements, and if X and Y are two such, P_{W|X} and P_{W|Y} will generally differ.

The image of P_{W|X} is W and its kernel is X. Thus P_{W|X} is a self-map with the special property that its image and its kernel are complements. The image and kernel of R_{W|X} are also complements, but trivially, as the kernel of R_{W|X} is 0 = {0}. The kernel of R_{W|X} is 0 because w and x are equal only if they are both 0, since W ∩ X = 0 from the definition of direct sum.

A double application of P_{W|X} has the same effect as a single application: P_{W|X}² = P_{W|X} ∘ P_{W|X} = P_{W|X} (P_{W|X} is idempotent). We then have P_{W|X} ∘ P_{X|W} = 0 (the constant map on V that sends everything to 0). We also find that P_{W|X}² = P_{W|X} implies that R_{W|X}² = I (R_{W|X} is involutary). An idempotent map always turns out to be some P_{W|X}, and an involutary map is almost always some R_{W|X}, as we now explain.

Proposition 38 Let P : V → V be a map such that P² = P. Then P is the projection onto Image(P) along Kernel(P).

Proof. Let w be in Image(P), say w = P(v). Then

w = P²(v) = P(P(v)) = P(w),

so that if w is also in Kernel(P), it must be that w = 0. Hence

Image(P) ∩ Kernel(P) = 0.

Let Q = I − P so that P(v) + Q(v) = I(v) = v and thus Image(P) + Image(Q) = V. But Image(Q) = Kernel(P) since if x = Q(v) = I(v) − P(v), then P(x) = P(v) − P²(v) = 0. Therefore

V = Image(P) ⊕ Kernel(P).

Now if v = w + x, w ∈ Image(P), x ∈ Kernel(P), then P(v) = P(w + x) = P(w) + P(x) = w + 0 = w, and P is indeed the projection onto Image(P)

along Kernel(P).

When V is over a field in which 1 + 1 = 0, the only possibility for R_{W|X} is the identity I. However, when V is over such a field there can be an involutary map on V which is not the identity, for example the map of the following exercise.


Exercise 2.4.1 Over the 2-element field F = {0, 1} (wherein 1 + 1 = 0), let V be the vector space F^2 of 2-tuples of elements of F. Let F be the map on V that interchanges (0, 1) and (1, 0). Then F ∘ F is the identity map.

The F of the exercise can reasonably be called a reflection, but it is not some R_{W|X}. If we say that any involutary map is a reflection, then reflections of the form R_{W|X} do not quite cover all the possibilities. However, when 1 + 1 ≠ 0, they do. (For any function f : V → V, we denote by Fix(f) the set of all elements in V that are fixed by f, i. e., Fix(f) = {v ∈ V | f(v) = v}.)

Exercise 2.4.2 Let V be over a field such that 1 + 1 ≠ 0. Let R : V → V be a map such that R² = I. Then R = 2P − I, where P = ½(R + I) is the projection onto Fix(R) along Fix(−R).

    2.5 Problems

1. Let f : S → T be a function. Then for all subsets A, B of S,

f(A ∩ B) ⊆ f(A) ∩ f(B),

but

f(A ∩ B) = f(A) ∩ f(B)

for all subsets A, B of S if and only if f is one-to-one. What about for ∪ instead of ∩? And what about for ∩ and for ∪ when f is replaced by f⁻¹?

2. (Enhancing Theorem 19 for nontrivial codomains) Let B be a subset of the vector space V and let W ≠ {0} be a vector space over the same field. Then B is independent if and only if every function f_0 : B → W extends to at least one map f : V → W, and B spans V if and only if every function f_0 : B → W extends to at most one map f : V → W. Hence B is a basis for V if and only if every function f_0 : B → W extends uniquely to a map f : V → W.

3. Let B be a basis for the vector space V and let f : V → W be a map. Then f is one-to-one if and only if f(B) is an independent set, and f is onto if and only if f(B) spans W.


    4. Deduce Theorem 25 as a corollary of the Rank Plus Nullity Theorem.

5. Let V be a finite-dimensional vector space of dimension n over the finite field of q elements. Then the number of vectors in V is q^n, the number of nonzero vectors that are not in a given m-dimensional subspace is

(q^n − 1) − (q^m − 1) = q^n − q^m,

and the number of different basis sets that can be extracted from V is

(q^n − 1)(q^n − q) · · · (q^n − q^(n−1)) / n!,

since there are

(q^n − 1)(q^n − q) · · · (q^n − q^(n−1))

different sequences of n distinct basis vectors that can be drawn from V. (These counts are checked numerically for a small case in the sketch following this problem list.)

6. (Correspondence Theorem) Let U ◁ V. The natural projection p : V → V/U puts the subspaces of V that contain U in one-to-one correspondence with the subspaces of V/U, and every subspace of V/U is of the form W/U for some subspace W that contains U.

7. P_{W1|X1} + P_{W2|X2} = P_{W|X} if and only if P_{W1|X1} ∘ P_{W2|X2} = P_{W2|X2} ∘ P_{W1|X1} = 0, in which case W = W1 ⊕ W2 and X = X1 ∩ X2. What happens when P_{W1|X1} ∘ P_{W2|X2} = P_{W|X} is considered instead?
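
The counting formulas of Problem 5 can be confirmed by brute force for small cases. The Python sketch below (not from the original) enumerates the vectors of F_2^2 and counts its basis sets directly, matching the formula (q^n − 1)(q^n − q)/n! for q = 2, n = 2.

    # Sketch (not from the original): brute-force check of the counts in Problem 5
    # for the field of q = 2 elements and dimension n = 2.
    from itertools import product, combinations
    from math import factorial

    q, n = 2, 2
    vectors = list(product(range(q), repeat=n))          # all of F_2^2
    print(len(vectors) == q**n)                          # True: q^n vectors in all

    def dependent(vs):
        """A set of vectors over F_q is dependent if some nontrivial combination is 0."""
        for coeffs in product(range(q), repeat=len(vs)):
            if any(coeffs):
                combo = tuple(sum(c * v[i] for c, v in zip(coeffs, vs)) % q
                              for i in range(n))
                if combo == (0,) * n:
                    return True
        return False

    bases = [B for B in combinations(vectors, n) if not dependent(B)]
    formula = (q**n - 1) * (q**n - q) // factorial(n)
    print(len(bases), formula)                           # 3 3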


    Chapter 3

    More on Maps and Structure

    3.1 Function Spaces and Map Spaces

The set W^D of all functions from a set D into the vector space W may be made into a vector space by defining the operations in a pointwise fashion. That is, suppose that f and g are in W^D, and define f + g by (f + g)(x) = f(x) + g(x), and for any scalar a define a · f by (a · f)(x) = a · (f(x)). This makes W^D into a vector space, and it is this vector space that we mean whenever we refer to W^D as a function space. Considering n-tuples as functions with domain {1, . . . , n} and the field F as a vector space over itself, we see that the familiar F^n is an example of a function space.

Subspaces of a function space give us more examples of vector spaces. For instance, consider the set {V → W} of all maps from the vector space V into the vector space W. It is easy to verify that it is a subspace of the function space W^V. It is this subspace that we mean when we refer to {V → W} as a map space.

    3.2 Dual of a Vector Space, Dual of a Basis

By a linear functional on a vector space V over a field F is meant an element of the map space {V → F^1}. (Since F^1 is just F viewed as a vector space over itself, a linear functional is also commonly described as a map into F.) For a vector space V over a field F, the map space {V → F^1} will be called the dual of V and will be denoted by V^⊤.

Let the vector space V have the basis B. For each x ∈ B, let x^⊤ be the


linear functional on V that sends x to 1 and sends all other elements of B to 0. We call x^⊤ the coordinate function corresponding to the basis vector x. The set B^# of all such coordinate functions is an independent set. For if for some finite X ⊆ B we have ∑_{x∈X} a_x · x^⊤(v) = 0 for all v ∈ V, then for any v ∈ X we find that a_v = 0, so there can be no dependency in B^#.

When the elements of B^# span V^⊤, we say that B has B^# as its dual and we may then replace B^# with the more descriptive notation B^⊤. When V is finite-dimensional, B^# does span V^⊤, we claim. For the general linear functional f is defined by prescribing its value on each element of B, and given any v ∈ B we have f(v) = ∑_{x∈B} f(x) · x^⊤(v). Therefore any linear functional on V may be expressed as the linear combination f = ∑_{x∈B} f(x) · x^⊤, so that B^# spans V^⊤ as we claimed.

    We thus have the following result.

Theorem 39 Let V be a finite-dimensional vector space. Then any basis of V has a dual, and therefore V^⊤ has the same dimension as V.
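
Concretely, for a basis of R^n assembled as the columns of an invertible matrix B, the coordinate functions are realized by the rows of B^{-1}. A small numpy sketch (not from the original; the basis is just an illustration):

    # Sketch (not from the original): for a basis of R^3 (columns of B), the
    # coordinate functions x^T are given by the rows of inv(B): row i applied to a
    # vector v returns the i-th coordinate of v relative to the basis.
    import numpy as np

    B = np.column_stack([np.array([1., 1., 0.]),
                         np.array([0., 1., 1.]),
                         np.array([1., 0., 1.])])
    dual = np.linalg.inv(B)                    # row i is the i-th coordinate function

    print(np.allclose(dual @ B, np.eye(3)))    # True: x_i^T(x_j) is 1 if i = j, else 0

    v = np.array([2., 3., 1.])
    coords = dual @ v                          # values of the coordinate functions at v
    print(np.allclose(B @ coords, v))          # True: v is rebuilt from its coordinates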

When V is infinite-dimensional, B^# need not span V^⊤.

Example 40 Let N denote the natural numbers {0, 1, . . .}. Let V be the subspace of the function space R^N consisting of only those real sequences a_0, a_1, . . . that are ultimately zero, i. e., for which there exists a smallest N ∈ N (called the length of the ultimately zero sequence), such that a_n = 0 for all n > N. Then φ : V → R defined by φ(a_0, a_1, . . .) = ∑_{n∈N} π_n a_n is a linear functional given any (not necessarily ultimately zero) sequence of real coefficients π_0, π_1, . . .. (Given π_0, π_1, . . . that is not ultimately zero, it cannot be replaced by an ultimately zero sequence that produces the same functional, since there would always be sequences a_0, a_1, . . . of greater length for which the results would differ.) The B^# derived from the basis B = {(1, 0, 0, . . .), (0, 1, 0, 0, . . .), . . .} for V does not span V^⊤ because ⟨B^#⟩ only contains elements corresponding to sequences π_0, π_1, . . . that are ultimately zero. Also note that by Theorem 19, each element of V^⊤ is determined by an unrestricted choice of value at each and every element of B.

Fixing an element v ∈ V, the scalar-valued function F_v defined on V^⊤ by F_v(f) = f(v) is a linear functional on V^⊤, i. e., F_v is an element of V^⊤⊤ = (V^⊤)^⊤. The association of each such v ∈ V with the corresponding F_v ∈ V^⊤⊤ gives a map Θ from V into V^⊤⊤. Θ is one-to-one, we claim. For if v ≠ 0, then v belongs to some basis of V, and the corresponding coordinate function sends v to 1, so that F_v is not the zero functional; the kernel of Θ is therefore {0}.


3.3 Annihilators

For a subset S of the vector space V, the annihilator S^0 of S consists of the linear functionals in V^⊤ that vanish at every element of S. It is easy to see that S^0 is a subspace of V^⊤ and that S^0 = ⟨S⟩^0. Obviously, {0}^0 = V^⊤. Also, since a linear functional that is not the zero functional, i. e., which is not identically zero, must have at least one nonzero value, V^0 = {0}.

Exercise 3.3.1 Because of the natural injection of V into V^⊤⊤, the only x ∈ V such that f(x) = 0 for all f ∈ V^⊤ is x = 0.

When V is finite-dimensional, the dimension of a subspace determines that of its annihilator.

Theorem 42 Let V be an n-dimensional vector space and let U ◁ V have dimension m. Then U^0 has dimension n − m.

Proof. Let B = {x_1, . . . , x_n} be a basis for V such that {x_1, . . . , x_m} is a basis for U. We claim that {x_{m+1}^⊤, . . . , x_n^⊤} ⊆ B^⊤ spans U^0. Clearly, ⟨{x_{m+1}^⊤, . . . , x_n^⊤}⟩ ⊆ U^0. On the other hand, if f = b_1 · x_1^⊤ + · · · + b_n · x_n^⊤ ∈ U^0, then f(x) = 0 for each x ∈ {x_1, . . . , x_m}, so that b_1 = · · · = b_m = 0 and therefore U^0 ⊆ ⟨{x_{m+1}^⊤, . . . , x_n^⊤}⟩.

Exercise 3.3.2 Let V be a finite-dimensional vector space. For any U ◁ V, (U^0)^0 = U, remembering that we identify V with V^⊤⊤.

3.4 Dual of a Map

Let f : V → W be a map. Then for φ ∈ W^⊤, the prescription f^⊤(φ) = φ ∘ f, or (f^⊤(φ))(v) = φ(f(v)) for all v ∈ V, defines a function f^⊤ : W^⊤ → V^⊤. It is easy to show that f^⊤ is a map. f^⊤ is called the dual of f. The dual of an identity map is an identity map. The dual of the composite map f ∘ g is g^⊤ ∘ f^⊤. Thus, if f is a bijective map, the dual of its inverse is the inverse of its dual.

Exercise 3.4.1 Let V and W be finite-dimensional vector spaces with respective bases B and C. Let the map f : V → W satisfy f(x) = ∑_{y∈C} a_{y,x} · y for each x ∈ B. Then f^⊤ satisfies f^⊤(y^⊤) = ∑_{x∈B} a_{y,x} · x^⊤ for each y ∈ C (where x^⊤ ∈ B^⊤ and y^⊤ ∈ C^⊤ are coordinate functions corresponding respectively to x ∈ B and y ∈ C, of course).
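
Relative to the standard bases and their duals, the exercise above says that the matrix of f^⊤ is the transpose of the matrix of f. A small numpy sketch (not from the original) checks the defining property (f^⊤(φ))(v) = φ(f(v)) and also that the two matrices have equal rank, in line with the Rank Theorem further on.

    # Sketch (not from the original): with the standard bases and their duals,
    # the matrix of the dual map f^T is the transpose of the matrix (a_{y,x}) of f.
    # A functional phi on W pulls back to phi ∘ f on V.
    import numpy as np

    A = np.array([[1., 2., 0.],
                  [0., 1., 3.]])            # matrix of f : R^3 -> R^2

    phi = np.array([4., 5.])                # a functional on R^2 (row of coefficients)
    pullback = A.T @ phi                    # coefficients of f^T(phi) = phi ∘ f on R^3

    v = np.array([1., -1., 2.])
    print(np.isclose(pullback @ v, phi @ (A @ v)))                 # True: (f^T(phi))(v) = phi(f(v))
    print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T))  # True: ranks agree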

The kernel of f^⊤ is found to have a special property.


Theorem 43 The kernel of f^⊤ is the annihilator of the image of the map f : V → W.

Proof. φ is in the kernel of f^⊤ ⇔ (f^⊤(φ))(v) = 0 for all v ∈ V ⇔ φ(f(v)) = 0 for all v ∈ V ⇔ φ ∈ (f(V))^0.

When the codomain of f has finite dimension m, say, then the image of f has finite dimension r, say. Hence, the kernel of f^⊤ (which is the annihilator of the image of f) must have dimension m − r. By the Rank Plus Nullity Theorem, the rank of f^⊤ must then be r. Thus we have the following result.

Theorem 44 (Rank Theorem) When the map f has a finite-dimensional codomain, the rank of f^⊤ equals the rank of f.

Exercise 3.4.2 Let f be a map from one finite-dimensional vector space to another. Then, making the standard identifications, f^⊤⊤ = f and the kernel of f is the annihilator of the image of f^⊤.

Exercise 3.4.3 For a subset Φ of V^⊤, let Φ^0 denote the subset of V that contains all the vectors v such that φ(v) = 0 for all φ ∈ Φ. Prove analogs of Theorems 42 and 43. Finally prove that when the map f has a finite-dimensional domain, the rank of f^⊤ equals the rank of f (another Rank Theorem).

    3.5 The Contragredient

Suppose that V is a finite-dimensional vector space with basis B and dual basis B^⊤, and suppose that W is an alias of V via the isomorphism f : V → W, which then sends the basis B to the basis f(B). Then the contragredient f^{−⊤} = (f^{−1})^⊤ = (f^⊤)^{−1} maps V^⊤ isomorphically onto W^⊤ and maps B^⊤ onto the dual of the basis f(B).

Theorem 45 Let f : V → W be an isomorphism between finite-dimensional vector spaces. Let B be a basis for V. Then the dual of the basis f(B) is f^{−⊤}(B^⊤).

Proof. Let x, y ∈ B. We then have

(f^{−⊤}(x^⊤))(f(y)) = (x^⊤ ∘ f^{−1})(f(y)) = x^⊤(y).


This result is most often applied in the case when V = W and the isomorphism f (now an automorphism) is specifically being used to effect a change of basis and the contragredient is being used to effect the corresponding change of the dual basis.
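
Numerically, if an automorphism of R^n has matrix A relative to the standard basis, the contragredient has matrix (A^{-1})^⊤, and it carries the dual of the standard basis to the dual of the new basis, as in Theorem 45. A small numpy sketch (not from the original):

    # Sketch (not from the original): for an automorphism f of R^2 with matrix A,
    # the contragredient f^{-T} acts on functionals (written as row vectors r,
    # with r(v) = r @ v) by r |-> r @ inv(A).  It carries the dual of the standard
    # basis to the dual of the new basis f(B).
    import numpy as np

    A = np.array([[2., 1.],
                  [1., 1.]])                       # the automorphism / change of basis
    A_inv = np.linalg.inv(A)

    new_basis = A                                  # columns: the basis f(B)
    new_dual = np.eye(2) @ A_inv                   # rows: contragredient images of e_i^T

    print(np.allclose(new_dual @ new_basis, np.eye(2)))   # True: they form the dual of f(B)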

    3.6 Product Spaces

The function space concept introduced at the beginning of this chapter can be profitably generalized. Instead of considering functions each value of which lies in the same vector space W, we are now going to allow different vector spaces (all over the same field) for different elements of the domain D. Thus for each x ∈ D there is to be a vector space W_x, with each of these W_x being over the same field, and we form all functions f such that f(x) ∈ W_x. The set of all such f is the familiar Cartesian product of the W_x. Any such Cartesian product of vector spaces may be made into a vector space using pointwise operations in the same way that we earlier used them to make W^D into a function space, and it is this vector space that we mean whenever we refer to the Cartesian product of vector spaces as a product space. Sometimes we also refer to a product space or a Cartesian product simply as a product, when there is no chance of confusion. We use the general notation ∏_{x∈D} W_x for a product, but for the product of n vector spaces W_1, . . . , W_n we often write W_1 × · · · × W_n. For a field F, we recognize F × · · · × F (with n factors) as the familiar F^n.

When there are only a finite number of factors, a product space is the direct sum of the internalized factors. For example, W_1 × W_2 = (W_1 × {0}) ⊕ ({0} × W_2).

Exercise 3.6.1 If U_1 and U_2 are subspaces of the same vector space and have only 0 in common, then U_1 × U_2 ≅ U_1 ⊕ U_2 in a manner not requiring any choice of basis.

However, when there are an infinite number of factors, a product space is not equal to the direct sum of the internalized factors.

Exercise 3.6.2 Let N = {0, 1, 2, . . .} be the set of natural numbers and let F be a field. Then the function space F^N, which is the same thing as the product space with each factor equal to F and with one such factor for each natural number, is not equal to its subspace ⊕_{i∈N} ⟨x_i⟩, where x_i(j) is equal to 1 if i = j and is equal to 0 otherwise.


Selecting from a product space only those functions that have finitely many nonzero values gives a subspace called the weak product space. As an example, the set of all formal power series with coefficients in a field F is a product space (isomorphic to the F^N of the exercise above) for which the corresponding weak product space is the set of all polynomials with coefficients in F. The weak product space always equals the direct sum of the internalized factors, and for that reason is also called the external direct sum. We will write ⊎_{x∈D} W_x for the weak product space derived from the product ∏_{x∈D} W_x.

    3.7 Maps Involving Products

We single out some basic families of maps that involve products and their factors. The function π_x that sends f ∈ ∏_{x∈D} W_x (or ⊎_{x∈D} W_x) to f(x) ∈ W_x is called the canonical projection onto W_x. The function η_x that sends v ∈ W_x to the f ∈ ∏_{x∈D} W_x (or ⊎_{x∈D} W_x) such that f(x) = v and f(y) = 0 for y ≠ x is called the canonical injection from W_x. It is easy to verify that π_x and η_x are maps. Also, we introduce θ_x = η_x ∘ π_x, called the component projection for component x. Note that π_x ∘ η_x = id_x (identity on W_x), π_x ∘ η_y = 0 whenever x ≠ y, and θ_x ∘ θ_x = θ_x. Note also that ∑_{x∈D} θ_x(f) = f, so that ∑_{x∈D} θ_x is the identity function, where we have made the convention that the pointwise sum of any number of 0s is 0.
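The following sketch (Python, an illustrative aside; the dictionary model and the function names are choices made here, not notation from the text) models elements of a weak product as dictionaries with finitely many nonzero entries, taking each W_x to be the field itself for simplicity, and checks the identities just listed:

    # Elements of the weak product are modeled as dictionaries sending an index x
    # to a nonzero value in W_x; absent keys are read as 0.
    def proj(x, f):               # canonical projection pi_x
        return f.get(x, 0)

    def inj(x, v):                # canonical injection eta_x
        return {x: v} if v != 0 else {}

    def comp(x, f):               # component projection theta_x = eta_x o pi_x
        return inj(x, proj(x, f))

    f = {'a': 3, 'c': -1}
    assert proj('a', inj('a', 3)) == 3               # pi_x o eta_x is the identity on W_x
    assert proj('b', inj('a', 3)) == 0               # pi_y o eta_x = 0 for y != x
    assert comp('a', comp('a', f)) == comp('a', f)   # theta_x o theta_x = theta_x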

A map F : V → ∏_{x∈D} W_x from a vector space V into the product of the vector spaces W_x is uniquely determined by the map projections π_x ∘ F.

Theorem 46 There exists exactly one map F : V → ∏_{x∈D} W_x such that π_x ∘ F = φ_x, where for each x ∈ D, φ_x : V → W_x is a prescribed map.

Proof. The function F that sends v ∈ V to the f ∈ ∏_{x∈D} W_x such that f(x) = φ_x(v) is readily seen to be a map such that π_x ∘ F = φ_x. Suppose that the map G : V → ∏_{x∈D} W_x satisfies π_x ∘ G = φ_x for each x ∈ D. For v ∈ V let G(v) = g. Then g(x) = π_x(g) = π_x(G(v)) = (π_x ∘ G)(v) = φ_x(v) = f(x), and hence g = f. But then G(v) = f = F(v), so that G = F, which shows the uniqueness of F.
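In matrix terms, for finitely many finite-dimensional factors, the map F of Theorem 46 is obtained by stacking the matrices of the prescribed component maps. A hypothetical illustration (Python/NumPy, not from the text):

    import numpy as np

    phi1 = np.array([[1., 0., 2.]])          # a prescribed map V -> W_1 (dim V = 3, dim W_1 = 1)
    phi2 = np.array([[0., 1., 0.],
                     [1., 1., 1.]])          # a prescribed map V -> W_2 (dim W_2 = 2)

    F = np.vstack([phi1, phi2])              # the unique map V -> W_1 x W_2

    v = np.array([1., 2., 3.])
    print(F @ v)     # [7. 2. 6.]: first entry is phi1 @ v, the rest is phi2 @ v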

For weak products, it is the maps from them, rather than the maps into them, that have the special property.


Theorem 47 There exists exactly one map F : ⊎_{x∈D} W_x → V such that F ∘ η_x = φ_x, where for each x ∈ D, φ_x : W_x → V is a prescribed map.

Proof. The function F that sends f ∈ ⊎_{x∈D} W_x to ∑_{x∈D | f(x)≠0} φ_x(f(x)) is readily seen to be a map such that F ∘ η_x = φ_x. Suppose that the map G : ⊎_{x∈D} W_x → V satisfies G ∘ η_x = φ_x for each x ∈ D. Let f ∈ ⊎_{x∈D} W_x. Then f = ∑_{x∈D | f(x)≠0} η_x(f(x)), so that G(f) = ∑_{x∈D | f(x)≠0} G(η_x(f(x))) = ∑_{x∈D | f(x)≠0} φ_x(f(x)) = F(f), which shows the uniqueness of F.

Exercise 3.7.1 ⊎_{x∈D} W_x = ⊕_{x∈D} η_x(W_x).

Exercise 3.7.2 ⊕_{x∈D} U_x ≅ ⊎_{x∈D} U_x in a manner not requiring any choice of basis.

Exercise 3.7.3 Let W_1, . . . , W_n be finite-dimensional vector spaces over the same field. Then

dim(W_1 × ··· × W_n) = dim W_1 + ··· + dim W_n.

The last two theorems lay a foundation for two important isomorphisms. As before, with all vector spaces over the same field, let the vector space V be given, and let a vector space W_x be given for each x in some set D. Then, utilizing map spaces, we may form the pairs of vector spaces

M = ∏_{x∈D} {V → W_x},   N = {V → ∏_{x∈D} W_x}

and

M′ = ∏_{x∈D} {W_x → V},   N′ = {⊎_{x∈D} W_x → V}.

Theorem 48 M ≅ N and M′ ≅ N′.

Proof. f ∈ M means that for each x ∈ D, f(x) = φ_x for some map φ_x : V → W_x, and hence there exists exactly one map F ∈ N such that π_x ∘ F = f(x) for each x ∈ D. This association of F with f therefore constitutes a well-defined function Λ : M → N which is easily seen to be a map. The map Λ is one-to-one. For if Λ sends both f and g to F, then for each x ∈ D, π_x ∘ F = f(x) = g(x), and f and g must therefore be equal. Also, each F ∈ N is the image under Λ of the f ∈ M such that f(x) = π_x ∘ F, so that Λ is also onto. Hence Λ is an isomorphism from M onto N.

A similar argument shows that M′ ≅ N′.

Exercise 3.7.4 (⊎_{x∈D} W_x)^⊤ ≅ ∏_{x∈D} W_x^⊤. (Compare with Example 40 at the beginning of this chapter.)

Exercise 3.7.5 (W_1 × ··· × W_n)^⊤ ≅ W_1^⊤ × ··· × W_n^⊤.

We note that the isomorphisms of the two exercises and theorem above do not depend on any choice of basis. This will generally be the case for the isomorphisms we will be establishing. From now on, we will usually skip pointing out when an isomorphism does not depend on specific choices, but will endeavor to point out any opposite cases.

Up to isomorphism, the order of the factors in a product does not matter. This can be proved quite readily directly from the product definition, but the theorem above on maps into a product gives an interesting approach from another perspective. We also incorporate the obvious result that using aliases of the factors does not affect the product, up to isomorphism.

Theorem 49 ∏_{x∈D} W_x ≅ ∏_{x∈D} W′_{σ(x)}, where σ is a bijection of D onto itself, and for each x ∈ D there is an isomorphism ξ_x : W′_x → W_x.

Proof. In Theorem 46, take the product to be ∏_{x∈D} W_x, take V = ∏_{x∈D} W′_{σ(x)}, and take φ_x = ξ_x ∘ π_{σ^{-1}(x)} (here π_{σ^{-1}(x)} denotes the projection of ∏_{x∈D} W′_{σ(x)} onto its factor at index σ^{-1}(x), namely W′_x), so that there exists a map Φ from ∏_{x∈D} W′_{σ(x)} to ∏_{x∈D} W_x such that π_x ∘ Φ = ξ_x ∘ π_{σ^{-1}(x)}. Interchanging the product spaces and applying the theorem again, there exists a map Ψ from ∏_{x∈D} W_x to ∏_{x∈D} W′_{σ(x)} such that π_{σ^{-1}(x)} ∘ Ψ = ξ_x^{-1} ∘ π_x. Then π_x ∘ Φ ∘ Ψ = ξ_x ∘ π_{σ^{-1}(x)} ∘ Ψ = π_x, and π_{σ^{-1}(x)} ∘ Ψ ∘ Φ = ξ_x^{-1} ∘ π_x ∘ Φ = π_{σ^{-1}(x)}. Now applying the theorem to the two cases where V and the product space are the same, the unique map determined in each case must be the identity. Hence Φ ∘ Ψ and Ψ ∘ Φ must each be identity maps, thus making Φ and Ψ a pair of mutually inverse maps that link ∏_{x∈D} W_x and ∏_{x∈D} W′_{σ(x)} isomorphically.


    3.8 Problems

1. Let V be a vector space over the field F. Let f ∈ V^⊤. Then f^⊤ is the map that sends the element φ of (F^1)^⊤ to (φ(1)) f ∈ V^⊤.

2. Let V be a subspace of W. Let j : V → W be the inclusion map of V into W, which sends elements in V to themselves. Then j^⊤ is the corresponding restriction-of-domain operation on the functionals of W^⊤, restricting each of their domains to V from the original W. (Thus, using a frequently-seen notation, j^⊤(φ) = φ|_V for any φ ∈ W^⊤.)

3. Let f : V → W be a map. Then for all w ∈ W the equation

f(x) = w

has a unique solution for x if and only if the equation

f(x) = 0

has only the trivial solution x = 0. (This result is sometimes called the Alternative Theorem. However, see the next problem.)

4. (Alternative Theorem) Let V and W be finite-dimensional and let f : V → W be a map. Then for a given w there exists x such that

f(x) = w

if and only if

w ∈ (Kernel f^⊤)^0,

i.e., if and only if φ(w) = 0 for all φ ∈ Kernel f^⊤. (The reason this is called the Alternative Theorem is because it is usually phrased in the equivalent form: Either the equation f(x) = w can be solved for x, or there is a φ ∈ Kernel f^⊤ such that φ(w) ≠ 0.)

5. Let V and W be finite-dimensional and let f : V → W be a map. Then f is onto if and only if f^⊤ is one-to-one, and f^⊤ is onto if and only if f is one-to-one.


6. Let V and W have the respective finite dimensions m and n over the finite field F of q elements. How many maps V → W are there? If m > n how many of these are onto, if m ≤ n how many are one-to-one, and if m = n how many are isomorphisms?


    Chapter 4

    Multilinearity and Tensoring

    4.1 Multilinear Transformations

Maps, or linear transformations, are functions of a single vector variable. Also important are the multilinear transformations. These are the functions of several vector variables (i.e., functions on the product space V_1 × ··· × V_n) which are linear in each variable separately. In detail, we call the function f : V_1 × ··· × V_n → W, where the V_i and W are vector spaces over the same field F, a multilinear transformation, if for each k

f(v_1, . . . , v_{k−1}, a·u + b·v, . . . , v_n) = a·f(v_1, . . . , v_{k−1}, u, . . . , v_n) + b·f(v_1, . . . , v_{k−1}, v, . . . , v_n)

for all vectors u, v, v_1, . . . , v_n and all scalars a, b. We also say that such an f is n-linear (bilinear, trilinear when n = 2, 3). When W = F, we call a multilinear transformation a multilinear functional. In general, a multilinear transformation is not a map on V_1 × ··· × V_n. However, we will soon construct a vector space on which the linear transformations are the multilinear transformations on V_1 × ··· × V_n.

A unique multilinear transformation is determined by the arbitrary assignment of values on all tuples of basis vectors.

Theorem 50 Let V_1, . . . , V_n, W be vector spaces over the same field and for each V_i let the basis B_i be given. Then given the function f_0 : B_1 × ··· × B_n → W, there is a unique multilinear transformation f : V_1 × ··· × V_n → W such that f agrees with f_0 on B_1 × ··· × B_n.



Proof. Given v_i ∈ V_i there is a unique finite subset X_i of B_i and unique nonzero scalars a_{x_i} such that v_i = ∑_{x_i∈X_i} a_{x_i} x_i. The n-linearity of f : V_1 × ··· × V_n → W implies

f(v_1, . . . , v_n) = ∑_{x_1∈X_1} ··· ∑_{x_n∈X_n} a_{x_1} ··· a_{x_n} f(x_1, . . . , x_n).

Hence for f to agree with f_0 on B_1 × ··· × B_n, it can only have the value

f(v_1, . . . , v_n) = ∑_{x_1∈X_1} ··· ∑_{x_n∈X_n} a_{x_1} ··· a_{x_n} f_0(x_1, . . . , x_n).

On the other hand, setting

f(v_1, . . . , v_n) = ∑_{x_1∈X_1} ··· ∑_{x_n∈X_n} a_{x_1} ··· a_{x_n} f_0(x_1, . . . , x_n)

does define a function f : V_1 × ··· × V_n → W which clearly agrees with f_0 on B_1 × ··· × B_n, and this f is readily verified to be n-linear.
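For n = 2 and finite-dimensional spaces, Theorem 50 is the familiar fact that a bilinear transformation is determined by the array of its values on pairs of basis vectors; in coordinates the unique extension is f(u, v) = uᵀ A v, where A holds the prescribed values f_0(x_i, x_j). A small sketch (Python/NumPy, not part of the text):

    import numpy as np

    A = np.array([[1., 2.],
                  [0., 3.]])       # prescribed values f0(e_i, e_j) on basis pairs

    def f(u, v):
        # the unique bilinear extension of f0 to all of F^2 x F^2
        return u @ A @ v

    u, v, w = np.array([1., 2.]), np.array([3., 1.]), np.array([0., 5.])
    print(np.isclose(f(u, 2 * v + w), 2 * f(u, v) + f(u, w)))   # True: linear in the 2nd slot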

    4.2 Functional Product Spaces

Let S_1, . . . , S_n be sets of linear functionals on the respective vector spaces V_1, . . . , V_n, all over the same field F. By the functional product S_1 ··· S_n is meant the set of all products f_1 ··· f_n with each f_i ∈ S_i, where by such a product of linear functionals is meant the function f_1 ··· f_n : V_1 × ··· × V_n → F such that f_1 ··· f_n (v_1, . . . , v_n) = f_1(v_1) ··· f_n(v_n). The function-space subspace ⟨S_1 ··· S_n⟩ obtained by taking all linear combinations of finite subsets of the functional product S_1 ··· S_n is called the functional product space generated by S_1 ··· S_n.

Exercise 4.2.1 Each element of ⟨S_1 ··· S_n⟩ is a multilinear functional on V_1 × ··· × V_n.

Exercise 4.2.2 ⟨⟨S_1⟩ ··· ⟨S_n⟩⟩ = ⟨S_1 ··· S_n⟩.

Lemma 51 Let W_1, . . . , W_n be vector spaces of linear functionals, all over the same field, with respective bases B_1, . . . , B_n. Then B_1 ··· B_n is an independent set.


Proof. Induction will be employed. The result is clearly true for n = 1, and suppose that it is true for n = N − 1. Let X be a finite subset of B_1 ··· B_N such that ∑_{x∈X} a_x x = 0. Each x ∈ X has the form yz where y ∈ B_1 ··· B_{N−1} and z ∈ B_N. Collecting terms on the distinct z, ∑_{x∈X} a_x x = ∑_z (∑_{x=yz} a_x y) z. The independence of B_N implies that for each z, ∑_{x=yz} a_x y = 0, and the assumed independence of B_1 ··· B_{N−1} then implies that each a_x is zero. Hence B_1 ··· B_N is an independent set, and by induction, B_1 ··· B_n is an independent set for each n.

Proposition 52 Let W_1, . . . , W_n be vector spaces of linear functionals, all over the same field, with respective bases B_1, . . . , B_n. Then B_1 ··· B_n is a basis for ⟨W_1 ··· W_n⟩.

Proof. By the exercise above, ⟨W_1 ··· W_n⟩ = ⟨B_1 ··· B_n⟩ and hence B_1 ··· B_n spans ⟨W_1 ··· W_n⟩. By the lemma above, B_1 ··· B_n is an independent set. Thus B_1 ··· B_n is a basis for ⟨W_1 ··· W_n⟩.

    4.3 Functional Tensor Products

In the previous chapter, it was shown that the vector space V may be embedded in its double dual V^⊤⊤ independently of any choice of basis, effectively making each vector in V into a linear functional on V^⊤. Thus given the vector spaces V_1, . . . , V_n, all over the same field F, and the natural injection Θ_i that embeds each V_i in its double dual, we may form the functional product space ⟨Θ_1(V_1) ··· Θ_n(V_n)⟩, which will be called the functional tensor product of the V_i and which will be denoted by V_1 ⊗ ··· ⊗ V_n. Similarly, an element Θ_1(v_1) ··· Θ_n(v_n) of Θ_1(V_1) ··· Θ_n(V_n) will be called the functional tensor product of the v_i and will be denoted by v_1 ⊗ ··· ⊗ v_n.

If W is a vector space over the same field as the V_i, the universal multilinear transformation Ξ : V_1 × ··· × V_n → V_1 ⊗ ··· ⊗ V_n which sends (v_1, . . . , v_n) to v_1 ⊗ ··· ⊗ v_n exchanges the multilinear transformations f : V_1 × ··· × V_n → W with the linear transformations φ : V_1 ⊗ ··· ⊗ V_n → W via the composition of functions f = φ ∘ Ξ.

Theorem 53 Let V_1, . . . , V_n, W be vector spaces over the same field. Then for each multilinear transformation f : V_1 × ··· × V_n → W there exists a unique linear transformation φ : V_1 ⊗ ··· ⊗ V_n → W such that f = φ ∘ Ξ, where Ξ : V_1 × ··· × V_n → V_1 ⊗ ··· ⊗ V_n is the tensor product function that sends (v_1, . . . , v_n) to v_1 ⊗ ··· ⊗ v_n.


Proof. For each V_i, choose some basis B_i. Then by the proposition above, B = {x_1 ⊗ ··· ⊗ x_n | x_1 ∈ B_1, . . . , x_n ∈ B_n} is a basis for V_1 ⊗ ··· ⊗ V_n. Given the multilinear transformation f : V_1 × ··· × V_n → W, consider the linear transformation φ : V_1 ⊗ ··· ⊗ V_n → W such that for x_1 ∈ B_1, . . . , x_n ∈ B_n

φ(x_1 ⊗ ··· ⊗ x_n) = φ(Ξ(x_1, . . . , x_n)) = f(x_1, . . . , x_n).

Since φ ∘ Ξ is clearly n-linear, it must then equal f by the theorem above. Every φ determines an n-linear f via f = φ ∘ Ξ, but since the values of φ on B determine it completely, and the values of f on B_1 × ··· × B_n determine it completely, a different φ determines a different f.

Thus, while multilinear transformations themselves are generally not maps on V_1 × ··· × V_n, they do correspond one-to-one to the maps on the related vector space V_1 ⊗ ··· ⊗ V_n = ⟨{v_1 ⊗ ··· ⊗ v_n}⟩. In fact, we can interpret this correspondence as an isomorphism if we introduce the function space subspace {V_1 × ··· × V_n →(n-linear) W} of n-linear functions, which is then clearly isomorphic to the map space {V_1 ⊗ ··· ⊗ V_n → W}. The vector space {V_1 × ··· × V_n →(n-linear) F} of multilinear functionals is therefore isomorphic to (V_1 ⊗ ··· ⊗ V_n)^⊤. If all the V_i are finite-dimensional, we then have V_1 ⊗ ··· ⊗ V_n ≅ {V_1 × ··· × V_n →(n-linear) F}^⊤.

When the V_i are finite-dimensional, we may discard any even number of consecutive dualization operators, since we identify a finite-dimensional space with its double dual. Therefore, V_1^⊤ ⊗ ··· ⊗ V_n^⊤ is just ⟨V_1^⊤ ··· V_n^⊤⟩. Thus V_1^⊤ ⊗ ··· ⊗ V_n^⊤ is a subspace of {V_1 × ··· × V_n →(n-linear) F}, and it is easy to see that both have the same dimension, so they are actually equal. This and our observations immediately above then give us the following result.

Theorem 54 Let V_1, . . . , V_n be vector spaces over the field F. Then

(V_1 ⊗ ··· ⊗ V_n)^⊤ ≅ {V_1 × ··· × V_n →(n-linear) F}.

If the V_i are all finite-dimensional, we also have

V_1 ⊗ ··· ⊗ V_n ≅ {V_1 × ··· × V_n →(n-linear) F}^⊤

and

(V_1 ⊗ ··· ⊗ V_n)^⊤ ≅ V_1^⊤ ⊗ ··· ⊗ V_n^⊤ = {V_1 × ··· × V_n →(n-linear) F}.


    4.4 Tensor Products in General

Theorem 53 contains the essence of the tensor product concept. By making a definition out of the universal property that this theorem ascribes to V_1 ⊗ ··· ⊗ V_n and Ξ, a general tensor product concept results. Let the vector spaces V_1, . . . , V_n, each over the same field F, be given. A tensor product of V_1, . . . , V_n (in that order) is a vector space Π (a tensor product space) over F along with an n-linear function Ξ : V_1 × ··· × V_n → Π (a tensor product function) such that given any vector space W, every n-linear function f : V_1 × ··· × V_n → W is equal to φ ∘ Ξ for a unique map φ : Π → W. The functional tensor product V_1 ⊗ ··· ⊗ V_n along with the functional tensor product function are thus an example of a tensor product of V_1, . . . , V_n.

Another tensor product of the same factor spaces may be defined using any alias of a tensor product space along with the appropriate tensor product function.

Theorem 55 Let V_1, . . . , V_n be vector spaces over the same field. Then if Π along with the n-linear function Ξ : V_1 × ··· × V_n → Π is a tensor product of V_1, . . . , V_n, and Λ : Π → Π′ is an isomorphism, then Π′ along with Λ ∘ Ξ is also a tensor product of V_1, . . . , V_n.

Proof. Given the vector space W, the n-linear function f : V_1 × ··· × V_n → W is equal to φ ∘ Ξ for a unique map φ : Π → W. The map φ ∘ Λ^{-1} therefore satisfies (φ ∘ Λ^{-1}) ∘ (Λ ∘ Ξ) = f. On the other hand, if φ′ ∘ (Λ ∘ Ξ) = (φ′ ∘ Λ) ∘ Ξ = f, then by the uniqueness of φ, φ′ ∘ Λ = φ and hence φ′ = φ ∘ Λ^{-1}.

Exercise 4.4.1 Make sense of the tensor product concept in the case where n = 1, so that we have 1-linearity and the tensor product of a single factor.

If, in the manner of the previous theorem, we have isomorphic tensor product spaces with the tensor product functions related by the isomorphism as in the theorem, we say that the tensor products are linked by the isomorphism of their tensor product spaces. We now show that tensor products of the same factors always have isomorphic tensor product spaces, and in fact there is a unique isomorphism between them that links them.

Theorem 56 Let V_1, . . . , V_n be vector spaces over the same field. Then if Π along with the n-linear function Ξ : V_1 × ··· × V_n → Π, and Π′ along with the n-linear function Ξ′ : V_1 × ··· × V_n → Π′, are each tensor products of V_1, . . . , V_n, then there is a unique isomorphism Λ : Π → Π′ such that Λ ∘ Ξ = Ξ′.

Proof. In the tensor product definition above, take W = Π′ and f = Ξ′. Then there is a unique map Λ : Π → Π′ such that Ξ′ = Λ ∘ Ξ. Similarly, there is a unique map Λ′ : Π′ → Π such that Ξ = Λ′ ∘ Ξ′. Hence Ξ′ = Λ ∘ Λ′ ∘ Ξ′ and Ξ = Λ′ ∘ Λ ∘ Ξ. Applying the tensor product definition again, the unique map μ : Π → Π such that Ξ = μ ∘ Ξ must be the identity. We conclude that Λ′ ∘ Λ is the identity, and the same for Λ ∘ Λ′. Hence the unique map Λ has the inverse Λ′ and is therefore an isomorphism.

We will write any tensor product space as V_1 ⊗ ··· ⊗ V_n just as we did for the functional tensor product. Similarly we will write v_1 ⊗ ··· ⊗ v_n for Ξ(v_1, . . . , v_n), and call it the tensor product of the vectors v_1, . . . , v_n, just as we did for the functional tensor product. With isomorphic tensor product spaces, we will always assume that the tensor products are linked by the isomorphism. By assuming this, tensor products of the same vectors will then correspond among all tensor products of the same factor spaces: v_1 ⊗ ··· ⊗ v_n in one of these spaces is always an isomorph of v_1 ⊗ ··· ⊗ v_n in any of the others.

Exercise 4.4.2 A tensor product space is spanned by the image of its associated tensor product function: ⟨{v_1 ⊗ ··· ⊗ v_n | v_1 ∈ V_1, . . . , v_n ∈ V_n}⟩ = V_1 ⊗ ··· ⊗ V_n.

Up to isomorphism, the order of the factors in a tensor product does not matter. We also incorporate the obvious result that using aliases of the factors does not affect the result, up to isomorphism.

Theorem 57 Let the vector spaces V′_1, . . . , V′_n, all over the same field, be (possibly reordered) aliases of V_1, . . . , V_n, and let V_⊗ = V_1 ⊗ ··· ⊗ V_n along with the tensor product function Ξ : V_× = V_1 × ··· × V_n → V_⊗ be a tensor product, and let V′_⊗ = V′_1 ⊗ ··· ⊗ V′_n along with the tensor product function Ξ′ : V′_× = V′_1 × ··· × V′_n → V′_⊗ be a tensor product. Then V_⊗ ≅ V′_⊗.

Proof. Let Θ : V_× → V′_× denote the isomorphism that exists from the product space V_× to the product space V′_×. It is easy to see that Ξ′ ∘ Θ and Ξ ∘ Θ^{-1} are n-linear. Hence there is a (unique) map φ : V_⊗ → V′_⊗ such that φ ∘ Ξ = Ξ′ ∘ Θ, and there is a (unique) map φ′ : V′_⊗ → V_⊗ such that φ′ ∘ Ξ′ = Ξ ∘ Θ^{-1}. From this we readily deduce that φ′ ∘ φ ∘ Ξ = Ξ and φ ∘ φ′ ∘ Ξ′ = Ξ′. Hence the map φ has the inverse φ′ and is therefore an isomorphism.

Up to isomorphism, tensor multiplication of vector spaces may be performed iteratively.

Theorem 58 Let V_1, . . . , V_n be vector spaces over the same field. Then for any integer k such that 1 ≤ k < n there is an isomorphism

Λ : (V_1 ⊗ ··· ⊗ V_k) ⊗ (V_{k+1} ⊗ ··· ⊗ V_n) → V_1 ⊗ ··· ⊗ V_n

such that

Λ((v_1 ⊗ ··· ⊗ v_k) ⊗ (v_{k+1} ⊗ ··· ⊗ v_n)) = v_1 ⊗ ··· ⊗ v_n.

Proof. Set V_× = V_1 × ··· × V_n, V_⊗ = V_1 ⊗ ··· ⊗ V_n, V′_× = V_1 × ··· × V_k, V′_⊗ = V_1 ⊗ ··· ⊗ V_k, V″_× = V_{k+1} × ··· × V_n, and V″_⊗ = V_{k+1} ⊗ ··· ⊗ V_n.

For fixed v_{k+1} ⊗ ··· ⊗ v_n ∈ V″_⊗ we define the k-linear function f_{v_{k+1}⊗···⊗v_n} : V′_× → V_⊗ by f_{v_{k+1}⊗···⊗v_n}(v_1, . . . , v_k) = v_1 ⊗ ··· ⊗ v_n. Corresponding to f_{v_{k+1}⊗···⊗v_n} is the (unique) map φ_{v_{k+1}⊗···⊗v_n} : V′_⊗ → V_⊗ such that

φ_{v_{k+1}⊗···⊗v_n}(v_1 ⊗ ··· ⊗ v_k) = v_1 ⊗ ··· ⊗ v_n.

Similarly, for fixed v_1 ⊗ ··· ⊗ v_k ∈ V′_⊗ we define the (n − k)-linear function f_{v_1⊗···⊗v_k} : V″_× → V_⊗ by f_{v_1⊗···⊗v_k}(v_{k+1}, . . . , v_n) = v_1 ⊗ ··· ⊗ v_n. Corresponding to f_{v_1⊗···⊗v_k} is the (unique) map φ_{v_1⊗···⊗v_k} : V″_⊗ → V_⊗ such that

φ_{v_1⊗···⊗v_k}(v_{k+1} ⊗ ··· ⊗ v_n) = v_1 ⊗ ··· ⊗ v_n.

We then define the function f : V′_⊗ × V″_⊗ → V_⊗ by the formula

f(x, y) = ∑ a_{v_1⊗···⊗v_k} φ_{v_1⊗···⊗v_k}(y)

when x = ∑ a_{v_1⊗···⊗v_k} v_1 ⊗ ··· ⊗ v_k. We claim this formula gives the same result no matter how x is expressed as a linear combination of elements of the form v_1 ⊗ ··· ⊗ v_k. To aid in verifying this, let y be expressed as y = ∑ b_{v_{k+1}⊗···⊗v_n} v_{k+1} ⊗ ··· ⊗ v_n. The formula may then be written

f(x, y) = ∑ a_{v_1⊗···⊗v_k} ∑ b_{v_{k+1}⊗···⊗v_n} φ_{v_1⊗···⊗v_k}(v_{k+1} ⊗ ··· ⊗ v_n).

But φ_{v_1⊗···⊗v_k}(v_{k+1} ⊗ ··· ⊗ v_n) = φ_{v_{k+1}⊗···⊗v_n}(v_1 ⊗ ··· ⊗ v_k), so that the formula is equivalent to

f(x, y) = ∑ a_{v_1⊗···⊗v_k} ∑ b_{v_{k+1}⊗···⊗v_n} φ_{v_{k+1}⊗···⊗v_n}(v_1 ⊗ ··· ⊗ v_k).

But this is the same as

f(x, y) = ∑ b_{v_{k+1}⊗···⊗v_n} φ_{v_{k+1}⊗···⊗v_n}(x)

which of course has no dependence on how x is expressed as a linear combination of elements of the form v_1 ⊗ ··· ⊗ v_k.

f is clearly linear in either argument when the other argument is held fixed. Corresponding to f is the (unique) map Λ : V′_⊗ ⊗ V″_⊗ → V_⊗ such that Λ(x ⊗ y) = f(x, y), so that, in particular,

Λ((v_1 ⊗ ··· ⊗ v_k) ⊗ (v_{k+1} ⊗ ··· ⊗ v_n)) = v_1 ⊗ ··· ⊗ v_n.

We define an n-linear function f′ : V_× → V′_⊗ ⊗ V″_⊗ by setting

f′(v_1, . . . , v_n) = (v_1 ⊗ ··· ⊗ v_k) ⊗ (v_{k+1} ⊗ ··· ⊗ v_n).

Corresponding to f′ is the (unique) map Λ′ : V_⊗ → V′_⊗ ⊗ V″_⊗ such that

Λ′(v_1 ⊗ ··· ⊗ v_n) = (v_1 ⊗ ··· ⊗ v_k) ⊗ (v_{k+1} ⊗ ··· ⊗ v_n).

Thus each of Λ′ ∘ Λ and Λ ∘ Λ′ coincides with the identity on a spanning set for its domain, so each is in fact the identity map. Hence the map Λ has the inverse Λ′ and therefore is an isomorphism.

Exercise 4.4.3 (···((V_1 ⊗ V_2) ⊗ V_3) ⊗ ··· ⊗ V_{n−1}) ⊗ V_n ≅ V_1 ⊗ ··· ⊗ V_n.

Corollary 59 (V_1 ⊗ V_2) ⊗ V_3 ≅ V_1 ⊗ (V_2 ⊗ V_3) ≅ V_1 ⊗ V_2 ⊗ V_3.
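In coordinates the iterated tensor products of Corollary 59 can be modeled with the Kronecker product, for which the corresponding associativity is easy to check numerically (Python/NumPy, an illustrative aside):

    import numpy as np

    u = np.array([1., 2.])
    v = np.array([3., 0., 1.])
    w = np.array([2., 5.])

    left  = np.kron(np.kron(u, v), w)   # coordinates of (u (x) v) (x) w
    right = np.kron(u, np.kron(v, w))   # coordinates of u (x) (v (x) w)
    print(np.allclose(left, right))     # True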

The two preceding theorems form the foundation for the following general associativity result.

Theorem 60 Tensor products involving the same vector spaces are isomorphic no matter how the factors are grouped.


Proof. Complete induction will be employed. The result is trivially true for one or two spaces involved. Suppose the result is true for r spaces for every r < n. Consider a grouped (perhaps nested) tensor product T of the n > 2 vector spaces V_1, . . . , V_n in that order. At the outermost essential level of grouping, we have an expression of the form T = W_1 ⊗ ··· ⊗ W_k, where 1 < k ≤ n and the n factor spaces V_1, . . . , V_n are distributed in (perhaps nested) groupings among the W_i. Then T = (W_1 ⊗ ··· ⊗ W_{k−1}) ⊗ W_k, and by the induction hypothesis we may assume that, for some i < n, W_1 ⊗ ··· ⊗ W_{k−1} ≅ V_1 ⊗ ··· ⊗ V_i and W_k ≅ V_{i+1} ⊗ ··· ⊗ V_n. Hence T ≅ (V_1 ⊗ ··· ⊗ V_i) ⊗ (V_{i+1} ⊗ ··· ⊗ V_n) and therefore T ≅ V_1 ⊗ ··· ⊗ V_n no matter how the n factors V_1, . . . , V_n are grouped. The theorem thus holds by induction.

Not surprisingly, when we use the field F (considered as a vector space over itself) as a tensor product factor, it does not really do anything.

Theorem 61 Let V be a vector space over the field F, and let F be considered also to be a vector space over itself. Then there is an isomorphism from F ⊗ V (and another from V ⊗ F) to V sending a ⊗ v (and the other sending v ⊗ a) to a · v.

Proof. The function from F × V to V that sends (a, v) to a · v is clearly bilinear. Hence there exists a (unique) map φ : F ⊗ V → V that sends a ⊗ v to a · v. Let φ′ : V → F ⊗ V be the map that sends v to 1 ⊗ v. Then φ ∘ φ′ is clearly the identity on V. On the other hand, φ′ ∘ φ sends each element in F ⊗ V of the form a ⊗ v to itself, and so is the identity on F ⊗ V. The other part is proved similarly.


    4.5 Problems

1. Suppose that in V_1 ⊗ V_2 we have u ⊗ v = u ⊗ w for some nonzero u ∈ V_1. Is it then possible for the vectors v and w in V_2 to be different?

2. When all V_i are finite-dimensional, V_1^⊤ ⊗ ··· ⊗ V_n^⊤ ≅ (V_1 ⊗ ··· ⊗ V_n)^⊤ via the unique map Φ : V_1^⊤ ⊗ ··· ⊗ V_n^⊤ → (V_1 ⊗ ··· ⊗ V_n)^⊤ for which

Φ(φ_1 ⊗ ··· ⊗ φ_n)(v_1 ⊗ ··· ⊗ v_n) = φ_1(v_1) ··· φ_n(v_n),

or, in other words,

Φ(φ_1 ⊗ ··· ⊗ φ_n) = φ_1 ··· φ_n.

Thus, for example, if the V_i are all equal to V, and V has the basis B, then Φ(x_1^⊤ ⊗ ··· ⊗ x_n^⊤) = x_1^⊤ ··· x_n^⊤ = (x_1 ⊗ ··· ⊗ x_n)^⊤, where each x_i ∈ B.


    Chapter 5

    Vector Algebras

    5.1 The New Element: Vector Multiplication

A vector space per se has but one type of multiplication, namely multiplication (scaling) of vectors by scalars. However, additional structure may be added to a vector space by defining other multiplications. Defining a vector multiplication such that any two vectors may be multiplied to yield another vector, and such that this multiplication acts in a harmonious fashion with respect to vector addition and scaling, gives a new kind of structure when this vector multiplication is included as a structural element. The maps of such an enhanced structure, that is, the functions that preserve the entire structure, including the vector multiplication, are still vector space maps, but some vector space maps may no longer qualify due to the requirement that the vector multiplication must also be preserved. Adding a new structural element, giving whatever benefits it may, also then may give new burdens in the logical development of the theory involved if we are to exploit structural aspects through use of function images. We will see, though, that at the cost of a little added complication, some excellent constructs will be obtained as a result of including various vector multiplications.

Thus we will say that the vector space V over the field F becomes a vector algebra (over F) when there is defined on it a vector multiplication ∗ : V × V → V, written ∗(u, v) = u ∗ v, which is required to be bilinear so that it satisfies both the distributive laws

(t + u) ∗ v = t ∗ v + u ∗ v,  and  u ∗ (v + w) = u ∗ v + u ∗ w

