Technische Universität Braunschweig
CSE – Computational Sciences in Engineering
An International, Interdisciplinary, and Bilingual Master of Science Programme

Introduction to Continuum Mechanics
Vector and Tensor Calculus

Winter Semester 2002 / 2003

Franz-Joseph Barthold 1   Jörg Stieghan 2

22nd October 2003

1 Tel. ++49-(0)531-391-2240, Fax ++49-(0)531-391-2242, email [email protected]
2 Tel. ++49-(0)531-391-2247, Fax ++49-(0)531-391-2242, email [email protected]



Editor

Prof. Dr.-Ing. Franz-Joseph Barthold, M.Sc.

Organisation and Administration

Dipl.-Ing. Jörg Stieghan, SFI
CSE – Computational Sciences in Engineering
Technische Universität Braunschweig
Bültenweg 17, 38106 Braunschweig
Tel. ++49-(0)531-391-2247
Fax ++49-(0)531-391-2242
email [email protected]

© 2000 Prof. Dr.-Ing. Franz-Joseph Barthold, M.Sc. and Dipl.-Ing. Jörg Stieghan, SFI
CSE – Computational Sciences in Engineering
Technische Universität Braunschweig
Bültenweg 17, 38106 Braunschweig

All rights reserved, in particular the right of translation into foreign languages. Without the permission of the authors it is not allowed to reproduce this booklet, in whole or in part, by photomechanical means (photocopy, microscopy) or to store it in electronic media.

Abstract

Zusammenfassung


Preface

Braunschweig, 22nd October 2003 Franz-Joseph Barthold and Jörg Stieghan


Contents

Contents
List of Figures
List of Tables

1 Introduction

2 Basics on Linear Algebra
   2.1 Sets
   2.2 Mappings
   2.3 Fields
   2.4 Linear Spaces
   2.5 Metric Spaces
   2.6 Normed Spaces
   2.7 Inner Product Spaces
   2.8 Affine Vector Space and the Euclidean Vector Space
   2.9 Linear Mappings and the Vector Space of Linear Mappings
   2.10 Linear Forms and Dual Vector Spaces

3 Matrix Calculus
   3.1 Definitions
   3.2 Some Basic Identities of Matrix Calculus
   3.3 Inverse of a Square Matrix
   3.4 Linear Mappings of an Affine Vector Space
   3.5 Quadratic Forms
   3.6 Matrix Eigenvalue Problem

4 Vector and Tensor Algebra
   4.1 Index Notation and Basis
   4.2 Products of Vectors
   4.3 Tensors
   4.4 Transformations and Products of Tensors
   4.5 Special Tensors and Operators
   4.6 The Principal Axes of a Tensor
   4.7 Higher Order Tensors

5 Vector and Tensor Analysis
   5.1 Vector and Tensor Derivatives
   5.2 Derivatives and Operators of Fields
   5.3 Integral Theorems

6 Exercises
   6.1 Application of Matrix Calculus on Bars and Plane Trusses
   6.2 Calculating a Structure with the Eigenvalue Problem
   6.3 Fundamentals of Tensors in Index Notation
   6.4 Various Products of Second Order Tensors
   6.5 Deformation Mappings
   6.6 The Moving Trihedron, Derivatives and Space Curves
   6.7 Tensors, Stresses and Cylindrical Coordinates

A Formulary
   A.1 Formulary Tensor Algebra
   A.2 Formulary Tensor Analysis

B Nomenclature

References

Glossary English – German

Glossary German – English

Index

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003

List of Figures

2.1 Triangle inequality.
2.2 Hölder sum inequality.
2.3 Vector space R2.
2.4 Affine vector space R2affine.
2.5 The scalar product in a 2-dimensional Euclidean vector space.

3.1 Matrix multiplication.
3.2 Matrix multiplication for a composition of matrices.
3.3 Orthogonal transformation.

4.1 Example of co- and contravariant base vectors in E2.
4.2 Special case of a Cartesian basis.
4.3 Projection of a vector v on the direction of the vector u.
4.4 Resulting stress vector.
4.5 Resulting stress vector.
4.6 The polar decomposition.
4.7 An example of the physical components of a second order tensor.
4.8 Principal axis problem with Cartesian coordinates.

5.1 The tangent vector at a point P on a space curve.
5.2 The moving trihedron.
5.3 The covariant base vectors of a curved surface.
5.4 Curvilinear coordinates in a Cartesian coordinate system.
5.5 The natural basis of a curvilinear coordinate system.
5.6 The volume element dV with the surface dA.
5.7 The volume, the surface and the subvolumes of a body.

6.1 A simple statically determinate plane truss.
6.2 Free-body diagram for node 2.
6.3 Free-body diagrams for nodes 1 and 3.
6.4 A simple statically indeterminate plane truss.
6.5 Free-body diagrams for nodes 2 and 4.
6.6 An arbitrary bar and its local coordinate system x, y.
6.7 An arbitrary bar in a global coordinate system.


6.8 The given structure of rigid bars.
6.9 The free-body diagrams of the subsystems left of node C, and right of node D after the excursion.
6.10 The free-body diagram of the complete structure after the excursion.
6.11 Matrix multiplication.
6.12 Example of co- and contravariant base vectors in E2.
6.13 The given spiral staircase.
6.14 The winding up of the given spiral staircase.
6.15 An arbitrary line element with the forces, and moments in its sectional areas.
6.16 The free-body diagram of the loaded spiral staircase.
6.17 The given cylindrical shell.


List of Tables

2.1 Compatibility of norms.


Chapter 1

Introduction


Chapter 2

Basics on Linear Algebra

For example, about vector spaces see HALMOS [6] and ABRAHAM, MARSDEN, and RATIU [1], and in German DE BOER [3] and STEIN ET AL. [13]. In German, about linear algebra, see JÄNICH [8], FISCHER [4], FISCHER [9], and BEUTELSPACHER [2].


Chapter Table of Contents

2.1 Sets
   2.1.1 Denotations and Symbols of Sets
   2.1.2 Subset, Superset, Union and Intersection
   2.1.3 Examples of Sets

2.2 Mappings
   2.2.1 Definition of a Mapping
   2.2.2 Injective, Surjective and Bijective
   2.2.3 Definition of an Operation
   2.2.4 Examples of Operations
   2.2.5 Counter-Examples of Operations

2.3 Fields
   2.3.1 Definition of a Field
   2.3.2 Examples of Fields
   2.3.3 Counter-Examples of Fields

2.4 Linear Spaces
   2.4.1 Definition of a Linear Space
   2.4.2 Examples of Linear Spaces
   2.4.3 Linear Subspace and Linear Manifold
   2.4.4 Linear Combination and Span of a Subspace
   2.4.5 Linear Independence
   2.4.6 A Basis of a Vector Space

2.5 Metric Spaces
   2.5.1 Definition of a Metric
   2.5.2 Examples of Metrics
   2.5.3 Definition of a Metric Space
   2.5.4 Examples of a Metric Space

2.6 Normed Spaces
   2.6.1 Definition of a Norm
   2.6.2 Definition of a Normed Space
   2.6.3 Examples of Vector Norms and Normed Vector Spaces
   2.6.4 Hölder Sum Inequality and Cauchy's Inequality
   2.6.5 Matrix Norms
   2.6.6 Compatibility of Vector and Matrix Norms
   2.6.7 Vector and Matrix Norms in Eigenvalue Problems
   2.6.8 Linear Dependence and Independence

2.7 Inner Product Spaces
   2.7.1 Definition of a Scalar Product
   2.7.2 Examples of Scalar Products
   2.7.3 Definition of an Inner Product Space
   2.7.4 Examples of Inner Product Spaces
   2.7.5 Unitary Space

2.8 Affine Vector Space and the Euclidean Vector Space
   2.8.1 Definition of an Affine Vector Space
   2.8.2 The Euclidean Vector Space
   2.8.3 Linear Independence, and a Basis of the Euclidean Vector Space

2.9 Linear Mappings and the Vector Space of Linear Mappings
   2.9.1 Definition of a Linear Mapping
   2.9.2 The Vector Space of Linear Mappings
   2.9.3 The Basis of the Vector Space of Linear Mappings
   2.9.4 Definition of a Composition of Linear Mappings
   2.9.5 The Attributes of a Linear Mapping
   2.9.6 The Representation of a Linear Mapping by a Matrix
   2.9.7 The Isomorphism of Vector Spaces

2.10 Linear Forms and Dual Vector Spaces
   2.10.1 Definition of Linear Forms and Dual Vector Spaces
   2.10.2 A Basis of the Dual Vector Space


2.1 Sets

2.1.1 Denotations and Symbols of Sets

A set M is a finite or infinite collection of objects, so-called elements, in which order has no significance and multiplicity is generally also ignored. Set theory was originally founded by Cantor¹. In advance, the meanings of some often used symbols and denotations are given below.

• m1 ∈ M : m1 is an element of the set M.

• m2 ∉ M : m2 is not an element of the set M.

• {. . .} : The term(s) or element(s) included in this type of braces describe a set.

• {. . . | . . .} : The terms on the left-hand side of the vertical bar are the elements of the given set, and the terms on the right-hand side of the bar describe the characteristics of the elements included in this set.

• ∨ : An "OR"-combination of two terms or elements.

• ∧ : An "AND"-combination of two terms or elements.

• ∀ : The following condition(s) should hold for all mentioned elements.

• =⇒ : This arrow means that the term on the left-hand side implies the term on the right-hand side.

Sets can be given by . . .

• an enumeration of its elements, e.g.

M1 = {1, 2, 3} . (2.1.1)

The set M1 consists of the elements 1, 2 and 3.

N = {1, 2, 3, . . .} . (2.1.2)

The set N includes all integers larger than or equal to one, and it is also called the set of natural numbers.

• the description of the attributes of its elements, e.g.

M2 = {m | (m ∈ M1) ∨ (−m ∈ M1)} (2.1.3)
= {1, 2, 3, −1, −2, −3} .

The set M2 includes all elements m with the attribute that m is an element of the set M1, or that −m is an element of the set M1. In this example these elements are just 1, 2, 3 and −1, −2, −3.

¹Georg Cantor (1845–1918)


2.1.2 Subset, Superset, Union and Intersection

A set A is called a subset of B, if and only if² every element of A is also included in B,

A ⊆ B ⇐⇒ (∀ a ∈ A ⇒ a ∈ B) . (2.1.4)

The set B is called the superset of A,

B ⊇ A . (2.1.5)

The union C of two sets A and B is the set of all elements that are an element of at least one of the sets A and B,

C = A ∪ B = {c | (c ∈ A) ∨ (c ∈ B)} . (2.1.6)

The intersection C of two sets A and B is the set of all elements common to the sets A and B,

C = A ∩ B = {c | (c ∈ A) ∧ (c ∈ B)} . (2.1.7)
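Python's built-in set type mirrors these definitions directly; the following short sketch (element values chosen arbitrarily, not from the text) checks (2.1.4), (2.1.6) and (2.1.7) on small finite sets.

```python
A = {1, 2, 3}
B = {3, 4}

# C = A ∪ B: all elements that belong to at least one of A and B, cf. (2.1.6)
union = A | B
# C = A ∩ B: all elements common to A and B, cf. (2.1.7)
intersection = A & B
# A ⊆ B iff every element of A is also an element of B, cf. (2.1.4)
subset = {1, 2} <= A

print(union, intersection, subset)
```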

2.1.3 Examples of Sets

Example: The empty set. The empty set contains no elements and is denoted by

∅ = {} . (2.1.8)

Example: The set of natural numbers. The set of natural numbers, or just the naturals, N, sometimes also called the whole numbers, is defined by

N = {1, 2, 3, . . .} . (2.1.9)

Unfortunately, zero "0" is sometimes also included among the natural numbers; this set is then denoted by

N0 = {0, 1, 2, 3, . . .} . (2.1.10)

Example: The set of integers. The set of the integers Z is given by

Z = {z | (z = 0) ∨ (z ∈ N) ∨ (−z ∈ N)} . (2.1.11)

Example: The set of rational numbers. The set of rational numbers Q is described by

Q = {z/n | (z ∈ Z) ∧ (n ∈ N)} . (2.1.12)

Example: The set of real numbers. The set of real numbers is defined by

R = {. . .} . (2.1.13)

Example: The set of complex numbers. The set of complex numbers is given by

C = {α + β i | (α, β ∈ R) ∧ (i = √−1)} . (2.1.14)

²The expression "if and only if" is often abbreviated as "iff".


2.2 Mappings

2.2.1 Definition of a Mapping

Let A and B be sets. Then a mapping, or just a map, of A on B is a function f that assigns to every a ∈ A one unique f (a) ∈ B,

f : A −→ B , a ↦ f (a) . (2.2.1)

The set A is called the domain of the function f and the set B the range of the function f .

2.2.2 Injective, Surjective and Bijective

Let V and W be non-empty sets. A mapping f between the two sets V and W assigns to every x ∈ V a unique y ∈ W, which is also denoted by f (x) and is called the image of x (under f). The set V is the domain, and W is the range, also called the image set, of f. The usual notation of a mapping (represented by the three parts: the rule of assignment f, the domain V and the range W) is given by

f : V −→ W , x ↦ f (x) . (2.2.2)

For every mapping f : V → W with the subsets A ⊂ V and B ⊂ W the following definitions hold:

f (A) := {f (x) ∈ W : x ∈ A} , the image of A, and (2.2.3)

f−1 (B) := {x ∈ V : f (x) ∈ B} , the preimage of B. (2.2.4)

With this the following identities hold

f is called surjective, if and only if f (V) = W , (2.2.5)

f is called injective, iff f (x) = f (y) implies x = y , and (2.2.6)

f is called bijective, iff f is surjective and injective. (2.2.7)
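For finite sets these three properties can be tested exhaustively. The sketch below is our own illustration (the helper names are not from the text); a mapping f : V → W is represented as a Python dict.

```python
def is_injective(f):
    # f(x) = f(y) must imply x = y, i.e. no value in W is hit twice, cf. (2.2.6)
    images = list(f.values())
    return len(images) == len(set(images))

def is_surjective(f, W):
    # the image f(V) must be the whole set W, cf. (2.2.5)
    return set(f.values()) == set(W)

def is_bijective(f, W):
    # bijective means surjective and injective, cf. (2.2.7)
    return is_injective(f) and is_surjective(f, W)

f = {"a": 1, "b": 2, "c": 3}
print(is_bijective(f, {1, 2, 3}))   # True: f has an inverse on these sets
```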

For every injective mapping f : V→W there exists an inverse

f−1 : f (V) −→ V , f (x) ↦ x , (2.2.8)

and the compositions of f and its inverse are defined by

f−1 ◦ f = idV ; f ◦ f−1 = idW . (2.2.9)


The mappings idV : V→ V and idW : W→W are the identity mappings in V, and W, i.e.

idV (x) = x ∀x ∈ V ; idW (y) = y ∀y ∈W . (2.2.10)

Furthermore, f must be surjective in order to extend the existence of this mapping f−1 from f (V) ⊂ W to the whole set W. Then f : V → W is bijective, if and only if there exists a mapping g : W → V with g ◦ f = idV and f ◦ g = idW. In this case g = f−1 is the inverse.

2.2.3 Definition of an Operation

An operation or a combination, symbolized by ◦, over a set M is a mapping that maps two arbitrary elements of M onto one element of M,

◦ : M × M −→ M , (m, n) ↦ m ◦ n . (2.2.11)

2.2.4 Examples of Operations

Example: The addition of natural numbers. The addition over the natural numbers N is an operation, because for every m ∈ N and every n ∈ N the sum (m + n) ∈ N is again a natural number.

Example: The subtraction of integers. The subtraction over the integers Z is an operation,because for every a ∈ Z and every b ∈ Z the difference (a− b) ∈ Z is again an integer.

Example: The addition of continuous functions. Let Ck be the set of the k-times continuously differentiable functions. The addition over Ck is an operation, because for every function f (x) ∈ Ck and every function g (x) ∈ Ck the sum (f + g) (x) = f (x) + g (x) is again a k-times continuously differentiable function.

2.2.5 Counter-Examples of Operations

Counter-Example: The subtraction of natural numbers. The subtraction over the natural numbers N is not an operation, because there exist numbers a ∈ N and b ∈ N with a difference (a − b) ∉ N, e.g. the difference 3 − 7 = −4 ∉ N.

Counter-Example: The scalar multiplication of an n-tuple. The scalar multiplication of an n-tuple of real numbers in Rn with a scalar quantity a ∈ R is not an operation, because it does not map two elements of Rn onto another element of the same space, but one element of R and one element of Rn.

Counter-Example: The scalar product of two n-tuples. The scalar product of two n-tuples in Rn is not an operation, because it does not map an element of Rn onto an element of Rn, but onto an element of R.
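The common thread of these counter-examples is closure: an operation must map M × M back into M, cf. (2.2.11). A small sketch (helper name and sample values are ours) tests closure on a finite sample:

```python
def closed_under(op, sample, universe):
    # an operation must map every pair of elements back into the universe
    return all(op(m, n) in universe for m in sample for n in sample)

naturals = set(range(1, 1000))   # finite stand-in for N
sample = {1, 2, 3, 7}

# addition over N is an operation: sums of naturals are again natural
print(closed_under(lambda a, b: a + b, sample, naturals))   # True
# subtraction over N is not: e.g. 3 - 7 = -4 is not in N
print(closed_under(lambda a, b: a - b, sample, naturals))   # False
```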


2.3 Fields

2.3.1 Definition of a Field

A field F is defined as a set with an operation addition a + b and an operation multiplication ab for all a, b ∈ F. To every pair a and b of scalars there corresponds a scalar a + b, called the sum of a and b, in such a way that:

1. Axiom of Fields. The addition is associative,

a + (b + c) = (a + b) + c ∀ a, b, c ∈ F . (F1)

2. Axiom of Fields. The addition is commutative,

a + b = b + a ∀ a, b ∈ F . (F2)

3. Axiom of Fields. There exists a unique scalar 0 ∈ F, called zero or the identity element with respect to³ the addition of the field F, such that the additive identity is given by

a + 0 = a = 0 + a ∀ a ∈ F . (F3)

4. Axiom of Fields. To every scalar a ∈ F there corresponds a unique scalar −a, called the inverse w.r.t. the addition or additive inverse, such that

a + (−a) = 0 ∀ a ∈ F . (F4)

To every pair a and b of scalars there corresponds a scalar ab, called the product of a and b, in such a way that:

5. Axiom of Fields. The multiplication is associative,

a (bc) = (ab) c ∀ a, b, c ∈ F . (F5)

6. Axiom of Fields. The multiplication is commutative,

ab = ba ∀ a, b ∈ F . (F6)

7. Axiom of Fields. There exists a unique non-zero scalar 1 ∈ F, called one or the identity element w.r.t. the multiplication of the field F, such that the multiplicative identity is given by

a1 = a = 1a ∀ a ∈ F . (F7)

8. Axiom of Fields. To every non-zero scalar a ∈ F there corresponds a unique scalar a−1, or 1/a, called the inverse w.r.t. the multiplication or the multiplicative inverse, such that

a (a−1) = 1 = a (1/a) ∀ a ∈ F . (F8)

9. Axiom of Fields. The multiplication is distributive w.r.t. the addition, such that the distributive law is given by

(a + b) c = ac + bc ∀ a, b, c ∈ F . (F9)

³The expression "with respect to" is often abbreviated as "w.r.t.".


2.3.2 Examples of Fields

Example: The rational numbers. The set Q of the rational numbers together with the operations addition "+" and multiplication "·" describes a field.

Example: The real numbers. The set R of the real numbers together with the operations addition "+" and multiplication "·" describes a field.

Example: The complex numbers. The set C of the complex numbers together with the operations addition "+" and multiplication "·" describes a field.

2.3.3 Counter-Examples of Fields

Counter-Example: The natural numbers. The set N of the natural numbers together with the operations addition "+" and multiplication "·" does not describe a field! One reason for this is that there exists no inverse w.r.t. the addition in N.

Counter-Example: The integers. The set Z of the integers together with the operations addition "+" and multiplication "·" does not describe a field! For example there exists no inverse w.r.t. the multiplication in Z, except for the elements 1 and −1.
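The difference between Z and Q can be made concrete with Python's exact rational type; the snippet below is an illustrative sketch (variable names are ours), not part of the original text.

```python
from fractions import Fraction

# In Q every non-zero scalar has a multiplicative inverse (axiom F8):
a = Fraction(3, 7)
assert a * (1 / a) == 1          # 1/a is the exact rational 7/3

# In Z only 1 and -1 possess an inverse w.r.t. multiplication:
invertible = [z for z in range(-5, 6)
              if z != 0 and 1 % z == 0 and (1 // z) * z == 1]
print(invertible)                # [-1, 1]
```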


2.4 Linear Spaces

2.4.1 Definition of a Linear Space

Let F be a field. A linear space, vector space, or linear vector space V over the field F is a set with an addition defined by

+ : V × V −→ V , (x, y) ↦ x + y ∀ x, y ∈ V , (2.4.1)

a scalar multiplication given by

· : F × V −→ V , (α, x) ↦ αx ∀ α ∈ F ; ∀ x ∈ V , (2.4.2)

and satisfies the following axioms. The elements x, y, etc. of V are called vectors. To every pair x and y of vectors in the space V there corresponds a vector x + y, called the sum of x and y, in such a way that:

1. Axiom of Linear Spaces. The addition is associative,

x + (y + z) = (x + y) + z ∀ x, y, z ∈ V . (S1)

2. Axiom of Linear Spaces. The addition is commutative,

x + y = y + x ∀ x, y ∈ V . (S2)

3. Axiom of Linear Spaces. There exists a unique vector 0 ∈ V, called the zero vector or the origin of the space V, such that

x + 0 = x = 0 + x ∀ x ∈ V . (S3)

4. Axiom of Linear Spaces. To every vector x ∈ V there corresponds a unique vector −x, called the additive inverse, such that

x + (−x) = 0 ∀ x ∈ V . (S4)

To every pair α and x, where α is a scalar quantity and x a vector in V, there corresponds a vector αx, called the product of α and x, in such a way that:

5. Axiom of Linear Spaces. The multiplication by scalar quantities is associative,

α (βx) = (αβ) x ∀ α, β ∈ F ; ∀ x ∈ V . (S5)

6. Axiom of Linear Spaces. There exists a unique non-zero scalar 1 ∈ F, called identity or the identity element w.r.t. the scalar multiplication on the space V, such that the scalar multiplicative identity is given by

x1 = x = 1x ∀ x ∈ V . (S6)


7. Axiom of Linear Spaces. The scalar multiplication is distributive w.r.t. the vector addition, such that the distributive law is given by

α (x + y) = αx + αy ∀ α ∈ F ; ∀ x, y ∈ V . (S7)

8. Axiom of Linear Spaces. The multiplication by a vector is distributive w.r.t. the scalar addition, such that the distributive law is given by

(α + β) x = αx + βx ∀ α, β ∈ F ; ∀ x ∈ V . (S8)

Some simple conclusions are given by

0 · x = 0 ∀x ∈ V ; 0 ∈ F, (2.4.3)

(−1)x = −x ∀x ∈ V ; − 1 ∈ F, (2.4.4)

α · 0 = 0 ∀ α ∈ F , (2.4.5)

and if

αx = 0 , then α = 0 , or x = 0. (2.4.6)
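These axioms and conclusions can be spot-checked for R³ represented as integer tuples with componentwise operations. The helper names below are ours, and the check runs only over sampled vectors; it illustrates the axioms, it does not prove them.

```python
import random

def add(x, y):
    # componentwise vector addition, cf. (2.4.1)
    return tuple(a + b for a, b in zip(x, y))

def smul(alpha, x):
    # componentwise scalar multiplication, cf. (2.4.2)
    return tuple(alpha * a for a in x)

random.seed(0)
x = tuple(random.randint(-5, 5) for _ in range(3))
y = tuple(random.randint(-5, 5) for _ in range(3))
zero = (0, 0, 0)

assert add(x, y) == add(y, x)                             # (S2)
assert add(x, zero) == x                                  # (S3)
assert add(x, smul(-1, x)) == zero                        # (S4) and (2.4.4)
assert smul(2, add(x, y)) == add(smul(2, x), smul(2, y))  # (S7)
assert smul(0, x) == zero                                 # (2.4.3)
print("sampled axioms hold")
```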

Remarks:

• Starting from the usual 3-dimensional vector space, these axioms describe a generalized definition of a vector space as a set of arbitrary elements x ∈ V. The classic example is the usual 3-dimensional Euclidean vector space E3 with the vectors x, y.

• The definition says nothing about the character of the elements x ∈ V of the vector space.

• The definition implies only the existence of an addition of two elements of V and the existence of a scalar multiplication, both of which do not lead out of the vector space V, and that the axioms (S1)-(S8) of a vector space hold.

• The definition only implies that the vector space V is a non-empty set, but says nothing about "how large" it is.

• Here F = R, i.e. only vector spaces over the field of real numbers R are examined; vector spaces over the field of complex numbers C are not considered.

• The dimension dim V of the vector space V should be finite, i.e. dim V = n for an arbitrary n ∈ N, where N is the set of natural numbers.


2.4.2 Examples of Linear Spaces

Example: The space of n-tuples. The space Rn of dimension n with the usual addition,

x + y = [x1 + y1, . . . , xn + yn] ,

and the usual scalar multiplication,

αx = [αx1, . . . , αxn] ,

is a linear space over the field R, denoted by

Rn = {x | x = (x1, x2, . . . , xn)T , ∀ x1, x2, . . . , xn ∈ R} , (2.4.7)

and with the elements x given by

x = (x1, x2, . . . , xn)T ; ∀ x1, x2, . . . , xn ∈ R.

Example: The space of n × n-matrices. The space of square matrices Rn×n over the field R with the usual matrix addition and the usual multiplication of a matrix with a scalar quantity is a linear space over the field R, with the elements given by

A = [aij] =

a11 a12 · · · a1n
a21 a22 · · · a2n
 .    .   . .   .
an1 an2 · · · ann

; ∀ aij ∈ R , 1 ≤ i, j ≤ n , and i, j ∈ N. (2.4.8)

Example: The field. Every field F, with the definition of an addition of scalar quantities in the field and a multiplication of the scalar quantities, i.e. a scalar product, in the field, is a linear space over the field itself.

Example: The space of continuous functions. The space of continuous functions C (a, b) is given by the open interval (a, b) or the closed interval [a, b] and the complex-valued functions f (x) defined in this interval,

C (a, b) = {f (x) | f is complex-valued and continuous in [a, b]} , (2.4.9)

with the addition and scalar multiplication given by

(f + g) (x) = f (x) + g (x) ,

(αf) (x) = αf (x) .


2.4.3 Linear Subspace and Linear Manifold

Let V be a linear space over the field F. A subset W ⊆ V is called a linear subspace or a linear manifold of V, if the set is not empty, W ≠ ∅, and every linear combination is again a vector of the linear subspace,

ax + by ∈ W ∀ x, y ∈ W ; ∀ a, b ∈ F . (2.4.10)

2.4.4 Linear Combination and Span of a Subspace

Let V be a linear space over the field F with the vectors x1, x2, . . . , xm ∈ V. Every vector v ∈ V can be represented by a so-called linear combination of the x1, x2, . . . , xm and some scalar quantities a1, a2, . . . , am ∈ F,

v = a1x1 + a2x2 + . . . + amxm . (2.4.11)

Furthermore, let M = {x1, x2, . . . , xm} be a set of vectors. Then the set of all linear combinations of the vectors x1, x2, . . . , xm is called the span span (M) of the subspace M, and is defined by

span (M) = {a1x1 + a2x2 + . . . + amxm | a1, a2, . . . , am ∈ F} . (2.4.12)

2.4.5 Linear Independence

Let V be a linear space over the field F. The vectors x1, x2, . . . , xn ∈ V are called linearly independent, if and only if

a1x1 + a2x2 + . . . + anxn = 0 =⇒ a1 = a2 = . . . = an = 0 . (2.4.13)

In every other case the vectors are called linearly dependent.
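Linear independence in Rⁿ can be decided by Gaussian elimination: the vectors are independent iff the matrix whose rows are the vectors has full row rank, so that a1x1 + . . . + anxn = 0 forces all ai = 0, cf. (2.4.13). The helpers below are our own sketch, using exact rational arithmetic to avoid rounding issues.

```python
from fractions import Fraction

def rank(rows):
    # forward Gaussian elimination with exact rationals; returns the number
    # of pivot rows, i.e. the rank of the matrix
    rows = [[Fraction(v) for v in r] for r in rows]
    pivots, col, ncols = 0, 0, len(rows[0])
    while pivots < len(rows) and col < ncols:
        # find a row with a non-zero entry in the current column
        pivot = next((i for i in range(pivots, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[pivots], rows[pivot] = rows[pivot], rows[pivots]
        for i in range(pivots + 1, len(rows)):
            factor = rows[i][col] / rows[pivots][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[pivots])]
        pivots += 1
        col += 1
    return pivots

def linearly_independent(vectors):
    # independent iff no row is eliminated to zero, i.e. full row rank
    return rank(vectors) == len(vectors)

print(linearly_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True
print(linearly_independent([[1, 2, 3], [2, 4, 6]]))             # False
```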

2.4.6 A Basis of a Vector Space

A subset M = {x1, x2, . . . , xm} of a linear space or a vector space V over the field F is called a basis of the vector space V, if the vectors x1, x2, . . . , xm are linearly independent and the span equals the vector space,

span (M) = V . (2.4.14)

Every vector x ∈ V can then be represented as a linear combination of the basis vectors e1, e2, . . . , en,

x = v1e1 + v2e2 + . . . + vnen . (2.4.15)


2.5 Metric Spaces

2.5.1 Definition of a Metric

A metric ρ in a linear space V over the field F is a mapping describing a "distance" between two neighbouring points for a given set,

ρ : V × V −→ F , (x,y) 7−→ ρ (x,y) . (2.5.1)

The metric satisfies the following relations for all vectors x,y, z ∈ V:

1. Axiom of Metrics. The metric is positive,

ρ (x,y) ≥ 0 ∀x,y ∈ V . (M1)

2. Axiom of Metrics. The metric is definite,

ρ (x,y) = 0 ⇐⇒ x = y ∀x,y ∈ V . (M2)

3. Axiom of Metrics. The metric is symmetric,

ρ (x,y) = ρ (y,x) ∀x,y ∈ V . (M3)

4. Axiom of Metrics. The metric satisfies the triangle inequality,

ρ (x, z) ≤ ρ (x,y) + ρ (y, z) ∀x,y, z ∈ V . (M4)

Figure 2.1: Triangle inequality (the distances ρ (x,y), ρ (y, z), and ρ (x, z) between the points x, y, and z).


2.5.2 Examples of Metrics

Example: The distance in the Euclidean space. For two vectors x = (x1, x2)^T and y = (y1, y2)^T in the 2-dimensional Euclidean space E^2 the distance ρ between these two vectors, given by

ρ (x,y) = √((x1 − y1)^2 + (x2 − y2)^2), (2.5.2)

is a metric.

Example: Discrete metric. The mapping, called the discrete metric,

ρ (x,y) = { 0, if x = y ; 1, else } , (2.5.3)

is a metric in every linear space.
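The four axioms (M1)-(M4) are easy to verify for the discrete metric. As an illustrative sketch (Python, not part of the original notes, with freely chosen names), one can check them exhaustively on a small sample of points:

```python
def discrete_metric(x, y):
    """Discrete metric: 0 if the points coincide, 1 otherwise."""
    return 0 if x == y else 1


# check the four metric axioms (M1)-(M4) on a few sample points
pts = [(0, 0), (1, 2), (3, -1)]
for x in pts:
    for y in pts:
        assert discrete_metric(x, y) >= 0                       # (M1) positive
        assert (discrete_metric(x, y) == 0) == (x == y)         # (M2) definite
        assert discrete_metric(x, y) == discrete_metric(y, x)   # (M3) symmetric
        for z in pts:
            # (M4) triangle inequality
            assert discrete_metric(x, z) <= discrete_metric(x, y) + discrete_metric(y, z)

print("all metric axioms hold on the sample")
```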

Example: The metric. ρ (x,y) = x^T A y. (2.5.4)

Example: The metric tensor.

2.5.3 Definition of a Metric Space

A vector space V with a metric ρ is called a metric space.

2.5.4 Examples of a Metric Space

Example: The field. The field of the complex numbers C is a metric space.

Example: The vector space. The vector space Rn is a metric space, too.


2.6 Normed Spaces

2.6.1 Definition of a Norm

A norm ‖·‖ in a linear space V over the field F is a mapping

‖·‖ : V −→ F , x 7−→ ‖x‖ . (2.6.1)

The norm satisfies the following relations for all vectors x,y ∈ V and every α ∈ F:

1. Axiom of Norms. The norm is positive,

‖x‖ ≥ 0 ∀x ∈ V . (N1)

2. Axiom of Norms. The norm is definite,

‖x‖ = 0 ⇐⇒ x = 0 ∀x ∈ V . (N2)

3. Axiom of Norms. The norm is homogeneous,

‖αx‖ = |α| ‖x‖ ∀α ∈ F ; ∀x ∈ V . (N3)

4. Axiom of Norms. The norm satisfies the triangle inequality,

‖x + y‖ ≤ ‖x‖ + ‖y‖ ∀x,y ∈ V . (N4)

Some simple conclusions are given by

‖−x‖ = ‖x‖ , (2.6.2)

‖x‖ − ‖y‖ ≤ ‖x− y‖ . (2.6.3)

2.6.2 Definition of a Normed Space

A linear space V with a norm ‖·‖ is called a normed space.

2.6.3 Examples of Vector Norms and Normed Vector Spaces

The norm of a vector x is written as ‖x‖ and is called the vector norm. For a vector norm the following conditions hold, see also (N1)-(N4),

‖x‖ > 0 , with x ≠ 0, (2.6.4)

with a scalar quantity α,

‖αx‖ = |α| ‖x‖ , and ∀α ∈ R , (2.6.5)


and finally the triangle inequality,

‖x+ y‖ ≤ ‖x‖+ ‖y‖ . (2.6.6)

A vector norm is given in the most general case by

‖x‖_p = ( ∑_{i=1}^{n} |x_i|^p )^{1/p}. (2.6.7)

Example: The normed vector space. For the linear vector space R^n, with the zero vector 0, there exists a large variety of norms, e.g. the l-infinity-norm or maximum-norm,

‖x‖_∞ = max |x_i| , with 1 ≤ i ≤ n, (2.6.8)

the l1-norm,

‖x‖_1 = ∑_{i=1}^{n} |x_i| , (2.6.9)

the L1-norm,

‖x‖ = ∫_Ω |x| dΩ, (2.6.10)

the l2-norm, or Euclidean norm,

‖x‖_2 = ( ∑_{i=1}^{n} |x_i|^2 )^{1/2}, (2.6.11)

the L2-norm,

‖x‖ = ( ∫_Ω |x|^2 dΩ )^{1/2}, (2.6.12)

and the p-norm,

‖x‖_p = ( ∑_{i=1}^{n} |x_i|^p )^{1/p} , with 1 ≤ p < ∞. (2.6.13)


The maximum-norm is developed by determining a limit. With

z := max |x_i| , with i = 1, . . . , n,

the estimate

z^p ≤ ∑_{i=1}^{n} |x_i|^p ≤ n z^p

holds, and taking the p-th root yields

z ≤ ( ∑_{i=1}^{n} |x_i|^p )^{1/p} ≤ p√n · z. (2.6.14)

Since p√n → 1 for p → ∞, the p-norm tends to the maximum-norm z = ‖x‖_∞.

Example: Simple example with numbers. The various norms of a vector x differ in the most general cases. For example with the vector x^T = [−1, 3,−4]:

‖x‖_1 = 8,

‖x‖_2 = √26 ≈ 5.1,

‖x‖_∞ = 4.
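These numbers can be reproduced with a short sketch of the norms (2.6.8), (2.6.9), and (2.6.11) in Python (not part of the original notes; the helper names are chosen for illustration):

```python
def p_norm(x, p):
    """The p-norm of equation (2.6.13)."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)


def max_norm(x):
    """The maximum-norm of equation (2.6.8)."""
    return max(abs(xi) for xi in x)


x = [-1, 3, -4]
print(p_norm(x, 1))   # 8.0   (l1-norm)
print(p_norm(x, 2))   # ≈ 5.099  (l2-norm, sqrt(26))
print(max_norm(x))    # 4     (maximum-norm)
```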

2.6.4 Hölder Sum Inequality and Cauchy’s Inequality

Let p and q be two scalar quantities, and the relationship between them is defined by

1/p + 1/q = 1 , with p > 1, q > 1. (2.6.15)

In the first quadrant of a coordinate system the graph y = x^{p−1} and the straight lines x = ξ and y = η with ξ > 0 and η > 0 are displayed. The area enclosed by these two straight lines, the curve and the axes of the coordinate system is at least the area of the rectangle given by ξη,

ξη ≤ ξ^p/p + η^q/q. (2.6.16)

For the real or complex quantities x_j and y_j, which are not all equal to zero, the ξ and η could be described by

ξ = |x_j| / (∑_j |x_j|^p)^{1/p} , and η = |y_j| / (∑_j |y_j|^q)^{1/q} . (2.6.17)

Inserting the relations of equations (2.6.17) in (2.6.16), and summing the terms with the index j, implies

∑_j |x_j| |y_j| / [ (∑_j |x_j|^p)^{1/p} (∑_j |y_j|^q)^{1/q} ] ≤ ∑_j |x_j|^p / [ p (∑_j |x_j|^p) ] + ∑_j |y_j|^q / [ q (∑_j |y_j|^q) ] = 1/p + 1/q = 1. (2.6.18)


Figure 2.2: Hölder sum inequality (the graph y = x^{p−1} in the x-y-plane, with the lines x = ξ and y = η).

The result is the so-called Hölder sum inequality,

∑_j |x_j y_j| ≤ (∑_j |x_j|^p)^{1/p} (∑_j |y_j|^q)^{1/q} . (2.6.19)

For the special case with p = q = 2 the Hölder sum inequality, see equation (2.6.19), transforms into Cauchy's inequality,

∑_j |x_j y_j| ≤ (∑_j |x_j|^2)^{1/2} (∑_j |y_j|^2)^{1/2} . (2.6.20)
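Both inequalities can be spot-checked numerically. The following sketch (Python, not part of the original notes, with illustrative names) verifies (2.6.19) for a conjugate pair p, q with 1/p + 1/q = 1, including the Cauchy case p = q = 2; a small tolerance guards against floating-point round-off:

```python
def p_norm(x, p):
    """The p-norm (2.6.13)."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)


def holder_holds(x, y, p):
    """Check the Hölder sum inequality (2.6.19) for exponent p > 1."""
    q = p / (p - 1.0)          # conjugate exponent, so that 1/p + 1/q = 1
    lhs = sum(abs(a * b) for a, b in zip(x, y))
    return lhs <= p_norm(x, p) * p_norm(y, q) + 1e-12


x, y = [1.0, -2.0, 3.0], [4.0, 0.5, -1.0]
print(holder_holds(x, y, 2))   # True: Cauchy's inequality (p = q = 2)
print(holder_holds(x, y, 3))   # True: general Hölder with p = 3, q = 3/2
```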

2.6.5 Matrix Norms

In the same way as the vector norm, the norm of a matrix A is introduced. This matrix norm is written ‖A‖. The characteristics of the matrix norm are given below, starting with the zero matrix 0 and the condition A ≠ 0,

‖A‖ > 0, (2.6.21)

and with an arbitrary scalar quantity α,

‖αA‖ = |α| ‖A‖ , (2.6.22)

‖A + B‖ ≤ ‖A‖ + ‖B‖ , (2.6.23)

‖A B‖ ≤ ‖A‖ ‖B‖ . (2.6.24)

In addition, for matrix norms, and in contrast to vector norms, the last axiom (2.6.24) holds. If this condition holds, then the norm is called multiplicative. Some usual norms, which satisfy


the conditions (2.6.21)-(2.6.24), are given below. With n being the number of rows of the matrix A, the absolute norm is given by

‖A‖_M = M (A) = n · max |a_ik| . (2.6.25)

The maximum absolute row sum norm is given by

‖A‖_R = R (A) = max_i ∑_{k=1}^{n} |a_ik| . (2.6.26)

The maximum absolute column sum norm is given by

‖A‖_C = C (A) = max_k ∑_{i=1}^{n} |a_ik| . (2.6.27)

The Euclidean norm is given by

‖A‖_N = N (A) = √(tr (A^T A)) . (2.6.28)

The spectral norm is given by

‖A‖_H = H (A) = √(largest eigenvalue of (A^T A)) . (2.6.29)

2.6.6 Compatibility of Vector and Matrix Norms

Definition 2.1. A matrix norm ‖A‖ is called compatible with a given vector norm ‖x‖, iff for all matrices A and all vectors x the following inequality holds,

‖A x‖ ≤ ‖A‖ ‖x‖ . (2.6.30)

The norm of the transformed vector y = A x is thus bounded by the matrix norm associated to the vector norm, multiplied by the vector norm ‖x‖ of the starting vector x. In table (2.1) the most common vector norms are compared with their compatible matrix norms.

2.6.7 Vector and Matrix Norms in Eigenvalue Problems

The eigenvalue problem A x = λx could be rewritten with the compatibility condition, ‖A x‖ ≤ ‖A‖ ‖x‖, like this,

‖A x‖ = |λ| ‖x‖ ≤ ‖A‖ ‖x‖ . (2.6.31)

This equation implies immediately that the matrix norm is an estimate of the absolute values of the eigenvalues. A compatible matrix norm associated to a vector norm is most valuable, if in the inequality ‖A x‖ ≤ ‖A‖ ‖x‖, see also (2.6.31), both sides can become equal, i.e. there exists a vector x for which the bound is attained. This least upper bound is called the supremum and is written sup (A).
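The estimate (2.6.31) can be illustrated for a 2 × 2 matrix, whose eigenvalues follow directly from the characteristic polynomial λ^2 − (tr A) λ + det A = 0; the sketch below (Python, not part of the original notes) checks |λ| ≤ ‖A‖_R for the maximum absolute row sum norm:

```python
# a 2x2 symmetric matrix: both eigenvalues from the characteristic polynomial
A = [[2.0, 1.0], [1.0, 3.0]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = (tr * tr - 4.0 * det) ** 0.5
eigs = [(tr + disc) / 2.0, (tr - disc) / 2.0]

# maximum absolute row sum norm R(A), see equation (2.6.26)
row_sum = max(abs(A[0][0]) + abs(A[0][1]), abs(A[1][0]) + abs(A[1][1]))

for lam in eigs:
    print(abs(lam) <= row_sum)   # True: |lambda| <= ||A||_R, cf. (2.6.31)
```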


Vector norm                  Compatible matrix norm           Description
‖x‖ = max |x_i|              ‖A‖_M = M (A)                    absolute norm
                             ‖A‖_R = R (A) = sup (A)          maximum absolute row sum norm
‖x‖ = ∑ |x_i|                ‖A‖_M = M (A)                    absolute norm
                             ‖A‖_C = C (A) = sup (A)          maximum absolute column sum norm
‖x‖ = √(∑ |x_i|^2)           ‖A‖_M = M (A)                    absolute norm
                             ‖A‖_N = N (A)                    Euclidean norm
                             ‖A‖_H = H (A) = sup (A)          spectral norm

Table 2.1: Compatibility of norms.

Definition 2.2. The supremum sup (A) of a matrix A associated to the vector norm ‖x‖ is defined as the smallest scalar quantity α, such that

‖Ax‖ ≤ α ‖x‖ , (2.6.32)

for all vectors x, i.e.

sup (A) = min {α | ‖Ax‖ ≤ α ‖x‖ ∀x} , (2.6.33)

or equivalently

sup (A) = max_{x ≠ 0} (‖A x‖ / ‖x‖) . (2.6.34)

In table (2.1) above, all associated supremums are denoted.

2.6.8 Linear Dependence and Independence

The vectors a1, a2, . . . , ai, . . . , an ∈ R^n are called linearly independent, iff the only scalar quantities α1, α2, . . . , αi, . . . , αn ∈ R satisfying

∑_{i=1}^{n} α_i a_i = 0 (2.6.35)

are α1 = α2 = . . . = αn = 0, compare equation (2.4.13).

In every other case the vectors are called linearly dependent. For example three linearly independent vectors are given by

α1 [1, 0, 0]^T + α2 [0, 1, 0]^T + α3 [0, 0, 1]^T ≠ 0 , whenever the αi are not all equal to zero. (2.6.36)


The n linearly independent vectors a_i with i = 1, . . . , n span an n-dimensional vector space. This set of n linearly independent vectors could be used as a basis of this vector space, in order to describe another vector a_{n+1} in this space,

a_{n+1} = ∑_{k=1}^{n} β_k a_k , and a_{n+1} ∈ R^n. (2.6.37)


2.7 Inner Product Spaces

2.7.1 Definition of a Scalar Product

Let V be a linear space over the field of real numbers R. A scalar product⁴ or inner product is a mapping

〈 , 〉 : V × V −→ R , (x,y) 7−→ 〈x,y〉 . (2.7.1)

The scalar product satisfies the following relations for all vectors x,y, z ∈ V and all scalar quantities α, β ∈ R:

1. Axiom of Inner Products. The scalar product is bilinear,

〈αx + βy, z〉 = α〈x, z〉 + β〈y, z〉 ∀α, β ∈ R ; ∀x,y, z ∈ V . (I1)

2. Axiom of Inner Products. The scalar product is symmetric,

〈x,y〉 = 〈y,x〉 ∀x,y ∈ V . (I2)

3. Axiom of Inner Products. The scalar product is positive definite,

〈x,x〉 ≥ 0 ∀x ∈ V , and (I3)

〈x,x〉 = 0 ⇐⇒ x = 0 ∀x ∈ V , (I4)

and for two varying vectors,

〈x,y〉 = 0 ⇐⇒ { x = 0 , and an arbitrary vector y ∈ V ; or y = 0 , and an arbitrary vector x ∈ V ; or x⊥y , i.e. the vectors x and y ∈ V are orthogonal } . (2.7.2)

Theorem 2.1. The inner product induces a norm and with this a metric, too. The scalar product ‖x‖ = 〈x,x〉^{1/2} defines a scalar-valued function, which satisfies the axioms of a norm.

2.7.2 Examples of Scalar Products

Example: The usual scalar product in R^2. Let x = (x1, x2)^T ∈ R^2 and y = (y1, y2)^T ∈ R^2 be two vectors, then the mapping

〈x,y〉 = x1y1 + x2y2 (2.7.3)

is called the usual scalar product.

⁴ It is important to notice that the scalar product and the scalar multiplication are completely different mappings!


2.7.3 Definition of an Inner Product Space

A vector space V with a scalar product 〈 , 〉 is called an inner product space or Euclidean vector space⁵. The axioms (N1), (N2), and (N3) hold, too; only the axiom (N4) remains to be proved. It follows from the Schwarz inequality,

〈x,y〉 ≤ ‖x‖ ‖y‖ ,

which implies the triangle inequality,

‖x + y‖^2 = 〈x + y,x + y〉 = 〈x + y,x〉 + 〈x + y,y〉 ≤ ‖x + y‖ · ‖x‖ + ‖x + y‖ · ‖y‖ ,

and finally, after dividing by ‖x + y‖, the result that the inner product space is a normed space,

‖x + y‖ ≤ ‖x‖ + ‖y‖ .

And finally the relations between the different subspaces of a linear vector space are described by the following scheme,

V inner product space −→ V normed space −→ V metric space,

where the arrow −→ describes the necessary conditions, while the reverse directions are possible, but not necessary. Every true proposition in a metric space will be true in a normed space or in an inner product space, too. And a true proposition in a normed space is also true in an inner product space, but not necessarily vice versa.

2.7.4 Examples of Inner Product Spaces

Example: The scalar product in a linear vector space. The 3-dimensional linear vector space R^3 with the usual scalar product, defined by

〈u,v〉 = u · v = α = |u| |v| cos (∠ (u,v)) , (2.7.4)

is an inner product space.
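Equation (2.7.4) gives a direct way to compute the angle between two vectors from the usual scalar product; a minimal sketch in Python (not part of the original notes; names chosen for illustration):

```python
import math


def dot(u, v):
    """The usual scalar product, cf. (2.7.3)."""
    return sum(a * b for a, b in zip(u, v))


def angle(u, v):
    """Angle between u and v from <u,v> = |u| |v| cos(phi), cf. (2.7.4)."""
    return math.acos(dot(u, v) / (dot(u, u) ** 0.5 * dot(v, v) ** 0.5))


u, v = [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]
print(round(math.degrees(angle(u, v)), 6))  # 45.0
```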

Example: The inner product in a linear vector space. The R^n with an inner product given by the bilinear form

〈u,v〉 = u^T A v, (2.7.5)

with the associated quadratic form

〈u,u〉 = u^T A u, (2.7.6)

and in the special case A = 1 with the scalar product

〈u,u〉 = u^T u, (2.7.7)

is an inner product space.

⁵ In the mathematical literature the restriction is often mentioned that the Euclidean vector space should be of finite dimension. Here no more attention is paid to this restriction, because in most cases finite-dimensional spaces are used.


2.7.5 Unitary Space

A vector space V over the field of real numbers R with a scalar product 〈 , 〉 is called an inner product space; its complex analogue over the field of complex numbers C is called a unitary space.


2.8 Affine Vector Space and the Euclidean Vector Space

2.8.1 Definition of an Affine Vector Space

In matrix calculus an n-tuple a ∈ R^n over the field of real numbers R is studied, i.e.

a_i ∈ R , and i = 1, . . . , n. (2.8.1)

Such an n-tuple, represented by a column matrix, also called a column vector or just vector, could describe an affine vector, if a point of origin in a geometric sense and a displacement of origin are established. A set W is called an affine vector space over the vector space V ⊂ R^n, if

Figure 2.3: Vector space R^2 (the vectors ~a = −→PQ, ~b = −→QR, and ~c = −→PR).

Figure 2.4: Affine vector space R^2_affine (the points P, Q, and R).

a mapping given by

W × W −→ V , (2.8.2)

R^n_affine −→ R^n, (2.8.3)


assigns to every pair of points P and Q ∈ W ⊂ R^n_affine a vector −→PQ ∈ V. The mapping also satisfies the following conditions:

• For every constant P the assignment

ΠP : Q −→ −→PQ, (2.8.4)

is a bijective mapping, i.e. the inverse Π_P^{−1} exists.

• Every P , Q and R ∈W satisfy

−→PQ + −→QR = −→PR, (2.8.5)

and

Π_P : W −→ V , with Π_P Q = −→PQ , and Q ∈ W. (2.8.6)

For all P , Q and R ∈ W ⊂ R^n_affine the axioms of a linear space (S1)-(S4) for the addition hold,

a + b = c −→ a_i + b_i = c_i , with i = 1, . . . , n, (2.8.7)

and (S5)-(S8) for the scalar multiplication,

αa = a* −→ αa_i = a*_i. (2.8.8)

And a vector space is a normed space, as shown in section (2.6).

2.8.2 The Euclidean Vector Space

An Euclidean vector space E^n is a unitary vector space or an inner product space. In addition to the normed spaces there is an inner product defined in an Euclidean vector space. The inner product assigns to every pair of vectors u and v a scalar quantity α,

〈u,v〉 ≡ u · v = v · u = α , with u,v ∈ E^n , and α ∈ R . (2.8.9)

For example in the 2-dimensional Euclidean vector space the angle ϕ between the vectors u and v is given by

u · v = |u| · |v| cosϕ , and cosϕ = (u · v) / (|u| · |v|) . (2.8.10)

The following identities hold:

• Two normed spaces V and W over the same field are isomorphic, if and only if there exists a linear mapping f from V to W, such that the following inequality holds for two constants m and M in every point x ∈ V,

m · ‖x‖ ≤ ‖f (x)‖ ≤ M · ‖x‖ . (2.8.11)


Figure 2.5: The scalar product in a 2-dimensional Euclidean vector space (the vectors u and v enclosing the angle ϕ).

• Every two real n-dimensional normed spaces are isomorphic. For example two subspacesof the vector space Rn.

Below, in most cases the Euclidean norm with p = 2 is used to describe the relationships between the elements of the affine (normed) vector space x ∈ R^n_affine and the elements of the Euclidean vector space v ∈ E^n. With this condition the relation between a norm, like in section (2.6), and an inner product is given by

‖x‖^2 = x · x, (2.8.12)

and

‖x‖ = ‖x‖_2 = √(∑ x_i^2). (2.8.13)

In this case it is possible to define a bijective mapping between the n-dimensional affine vector space and the Euclidean vector space. In topology this bijectivity is called a homeomorphism, and the spaces are called homeomorphic. If two spaces are homeomorphic, then in both spaces the same axioms hold.

2.8.3 Linear Independence, and a Basis of the Euclidean Vector Space

The conditions for the linear dependence and the linear independence of vectors v_i in the n-dimensional Euclidean vector space E^n are given below. Furthermore a vector basis of the Euclidean vector space E^n is introduced, and the representation of an arbitrary vector with this basis is described.

• The set of vectors v1,v2, . . . ,vn is linearly dependent, if there exists a number of scalar quantities a1, a2, . . . , an, not all equal to zero, such that the following condition holds,

a1v1 + a2v2 + . . . + anvn = 0. (2.8.14)

In every other case the set of vectors v1,v2, . . . ,vn is called linearly independent. The left-hand side is called a linear combination of the vectors v1,v2, . . . ,vn.


• The set of all linear combinations of the vectors v1,v2, . . . ,vn spans a subspace. The dimension of this subspace equals the number of vectors in the largest linearly independent subset, and is at most n.

• Every n + 1 vectors v ∈ E^n of the Euclidean vector space with the dimension n must be linearly dependent, i.e. the vector v = v_{n+1} could be described by a linear combination of the vectors v1,v2, . . . ,vn,

λv + a1v1 + a2v2 + . . . + anvn = 0, (2.8.15)

v = −(1/λ) (a1v1 + a2v2 + . . . + anvn) . (2.8.16)

• The vectors z_i given by

z_i = −(1/λ) a_i v_i , with i = 1, . . . , n, (2.8.17)

are called the components of the vector v in the Euclidean vector space E^n.

• Every n linearly independent vectors v_i of dimension n in the Euclidean vector space E^n form a basis of the Euclidean vector space E^n. The vectors g_i = v_i are called the base vectors of the Euclidean vector space E^n,

v = v_1 g_1 + v_2 g_2 + . . . + v_n g_n = ∑_{i=1}^{n} v_i g_i , with v_i = −a_i/λ. (2.8.18)

The v_i g_i are called the components and the v_i are called the coordinates of the vector v w.r.t. the basis g_i. Sometimes the scalar quantities v_i are called the components of the vector v w.r.t. the basis g_i, too.


2.9 Linear Mappings and the Vector Space of Linear Mappings

2.9.1 Definition of a Linear Mapping

Let V and W be two vector spaces over the field F. A mapping f : V → W from elements of the vector space V to the elements of the vector space W is linear and called a linear mapping, if for all x,y ∈ V and for all α ∈ F the following axioms hold:

1. Axiom of Linear Mappings (Additivity w.r.t. the vector addition). The mapping f is additive w.r.t. the vector addition,

f (x + y) = f (x) + f (y) ∀x,y ∈ V . (L1)

2. Axiom of Linear Mappings (Homogeneity of linear mappings). The mapping f is homogeneous w.r.t. the scalar multiplication,

f (αx) = αf (x) ∀α ∈ F ; ∀x ∈ V . (L2)

Remarks:

• The linearity of the mapping f : V → W results from being additive (L1) and homogeneous (L2).

• Because the action of the mapping f is only defined on elements of the vector space V, it is necessary that the sum vector x + y ∈ V (for every x,y ∈ V) and the scalar multiplied vector αx ∈ V (for every α ∈ F) are elements of the vector space V, too. With this postulation the set V must be a vector space!

• With the same arguments for the ranges f (x), f (y), and f (x + y), and also for the ranges f (αx) and αf (x) in W, the set W must be a vector space, too!

• A linear mapping f : V → W is also called a linear transformation, a linear operator or a homomorphism.

2.9.2 The Vector Space of Linear Mappings

In the section before, the linear mappings f : V → W, which send elements of V to elements of W, were introduced. Because it is so convenient to work with vector spaces, it is interesting to check whether the linear mappings f : V → W form a vector space, too. In order to answer this question it is necessary to check the definitions and axioms of a linear vector space (S1)-(S8). If they hold, then the set of linear mappings is a vector space:

3. Axiom of Linear Mappings (Definition of the addition of linear mappings). In the definition of a vector space the existence of an addition "+" is claimed, such that the sum of two linear


mappings f1 : V → W and f2 : V → W should be a linear mapping (f1 + f2) : V → W, too. For an arbitrary vector x ∈ V the pointwise addition is given by

(f1 + f2) (x) := f1 (x) + f2 (x) ∀x ∈ V , (L3)

for all linear mappings f1, f2 from V to W. The sum f1 + f2 is linear, because both mappings f1 and f2 are linear, i.e. (f1 + f2) is a linear mapping, too.

4. Axiom of Linear Mappings (Definition of the scalar multiplication of linear mappings). Furthermore a product of a scalar quantity α ∈ F and a linear mapping f : V → W is defined by

(αf) (x) := αf (x) ∀α ∈ F ; ∀x ∈ V . (L4)

If the mapping f is linear, then it results immediately that the mapping (αf) is linear, too.

5. Axiom of Linear Mappings (Satisfaction of the axioms of a linear vector space). The definitions (L3) and (L4) satisfy all linear vector space axioms given by (S1)-(S8). This is easy to prove by checking the equations (S1)-(S8). If V and W are two vector spaces over the field F, then the set L of all linear mappings f : V → W from V to W,

L (V,W), is a linear vector space. (L5)

The identity element w.r.t. the addition of the vector space L (V,W) is the null mapping 0, which sends every element from V to the zero vector 0 ∈ W.

2.9.3 The Basis of the Vector Space of Linear Mappings


2.9.4 Definition of a Composition of Linear Mappings

Till now only an addition of linear mappings and a multiplication with a scalar quantity are defined. The next step is to define a "multiplication" of two linear mappings; this combination of two functions to form a new single function is called a composition. Let f2 : V → W be a linear mapping, and furthermore let f1 : X → Y be linear, too. If the image set W of the linear mapping f2 is also the domain X of the linear mapping f1, i.e. W = X, then the composition f1 ∘ f2 : V → Y is defined by

(f1 ∘ f2) (x) = f1 (f2 (x)) ∀x ∈ V . (2.9.1)

Because of the linearity of the mappings f1 and f2 the composition f1 f2 is also linear.
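For finite-dimensional spaces, composing linear mappings corresponds to multiplying their representing matrices, i.e. (f1 ∘ f2)(x) = F1 (F2 x) = (F1 F2) x. A small sketch in Python (not part of the original notes; illustrative names, plain nested lists as matrices):

```python
def matvec(A, x):
    """Apply the matrix A to the vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]


def matmul(A, B):
    """Matrix product A B, representing the composition of the mappings."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]


F1 = [[0, -1], [1, 0]]   # rotation by 90 degrees
F2 = [[2, 0], [0, 2]]    # scaling by 2
x = [1, 0]

# (f1 o f2)(x) = f1(f2(x)) corresponds to the matrix product F1 F2
print(matvec(matmul(F1, F2), x) == matvec(F1, matvec(F2, x)))  # True
```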

Remarks:

• The composition f1 f2 is also written as f1f2 and it is sometimes called the product of f1and f2.

• If these products exist (i.e. the domains and image sets of the linear mappings match like in the definition), then the following identities hold:

f1 (f2f3) = (f1f2) f3, (2.9.2)

f1 (f2 + f3) = f1f2 + f1f3, (2.9.3)

(f1 + f2) f3 = f1f3 + f2f3, (2.9.4)

α (f1f2) = (αf1) f2 = f1 (αf2) . (2.9.5)

• If all sets are equal, V = W = X = Y, then these products exist, i.e. all the linear mappings map the vector space V onto itself,

f ∈ L (V,V) =: L (V) . (2.9.6)

In this case, with f1 ∈ L (V,V) and f2 ∈ L (V,V), the composition f1 ∘ f2 ∈ L (V,V) is a linear mapping from the vector space V to itself, too.

2.9.5 The Attributes of a Linear Mapping

• Let V and W be vector spaces over the field F and L (V,W) the vector space of all linear mappings f : V → W. Because L (V,W) is a vector space, the addition and the multiplication with a scalar of elements of L, i.e. of linear mappings f : V → W, yield again a linear mapping from V to W.

• An arbitrary composition of linear mappings, if it exists, is again a linear mapping from one vector space to another vector space. If the mappings f : V → V map a vector space into itself, then every composition of these mappings exists and is again linear, i.e. the mapping is again an element of L (V,V).

• The existence of an inverse, i.e. a reverse linear mapping from W to V, denoted by f^{−1} : W → V, is discussed in the following section.


2.9.6 The Representation of a Linear Mapping by a Matrix

Let x and y be two arbitrary elements of the linear vector space V given by

x = ∑_{i=1}^{n} x_i e_i , and y = ∑_{i=1}^{n} y_i e_i. (2.9.7)

Let L be a linear mapping from V in itself, represented by the coefficients α_ij and the basis mappings ϕ_ij of L (V,V),

L = α_ij ϕ_ij. (2.9.8)

The image y of the vector x is then

y = L (x) , (2.9.9)

y_i e_i = (α_kl ϕ_kl) (x_j e_j) = α_kl ϕ_kl (x_j e_j) = α_kl x_j ϕ_kl (e_j) . (2.9.10)

With the usual choice ϕ_kl (e_j) = δ_lj e_k of basis mappings this reduces to y_k = α_kj x_j, i.e. the coefficients α_ij form the matrix representing the linear mapping L w.r.t. the basis e_i.

2.9.7 The Isomorphism of Vector Spaces

The term "bijectivity"and the attributes of a bijective linear mapping f : V → W imply thefollowing de£ntion. A bijective linear mapping f : V→W is also called an isomorphism of thevector spaces V and W). The spaces V and W are said to be isomorphic.n-tuple

x = xiei ←→

x1

...xn

←→ x =

x1

...xn

, (2.9.11)

with x ∈ V dimV = n , the space of all n-tuples, x ∈ Rn .


2.10 Linear Forms and Dual Vector Spaces

2.10.1 Definition of Linear Forms and Dual Vector Spaces

Let W ⊂ R^n be the vector space of column vectors x. In this vector space the scalar product 〈 , 〉 is defined in the usual way, i.e.

〈 , 〉 : R^n × R^n → R , and 〈x,y〉 = ∑_{i=1}^{n} x_i y_i. (2.10.1)

The relations between the continuous linear functionals f : R^n → R and the scalar products 〈 , 〉 defined in the R^n are given by the Riesz representation theorem, i.e.

Theorem 2.2 (Riesz representation theorem). Every continuous linear functional f : R^n → R could be represented by

f (x) = 〈x,u〉 ∀x ∈ R^n , (2.10.2)

where the vector u is uniquely defined by f (x).

2.10.2 A Basis of the Dual Vector Space


Chapter 3

Matrix Calculus

For example GILBERT [5] and KRAUS [10], and in German STEIN ET AL. [13] and ZURMÜHL [14].



Chapter Table of Contents

3.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.1.1 Rectangular Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.1.2 Square Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.1.3 Column Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.1.4 Row Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

3.1.5 Diagonal Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3.1.6 Identity Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3.1.7 Transpose of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3.1.8 Symmetric Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3.1.9 Antisymmetric Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3.2 Some Basic Identities of Matrix Calculus . . . . . . . . . . . . . . . . . . . 42

3.2.1 Addition of Same Order Matrices . . . . . . . . . . . . . . . . . . . . 42

3.2.2 Multiplication by a Scalar Quantity . . . . . . . . . . . . . . . . . . . 42

3.2.3 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

3.2.4 The Trace of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . 43

3.2.5 Symmetric and Antisymmetric Square Matrices . . . . . . . . . . . . . 44

3.2.6 Transpose of a Matrix Product . . . . . . . . . . . . . . . . . . . . . . 44

3.2.7 Multiplication with the Identity Matrix . . . . . . . . . . . . . . . . . 45

3.2.8 Multiplication with a Diagonal Matrix . . . . . . . . . . . . . . . . . . 45

3.2.9 Exchanging Columns and Rows of a Matrix . . . . . . . . . . . . . . . 46

3.2.10 Volumetric and Deviator Part of a Matrix . . . . . . . . . . . . . . . . 46

3.3 Inverse of a Square Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

3.3.1 Definition of the Inverse . . . . . . . . . . . . . . . . . . . . . . . . . 48

3.3.2 Important Identities of Determinants . . . . . . . . . . . . . . . . . . . 48

3.3.3 Derivation of the Elements of the Inverse of a Matrix . . . . . . . . . . 49

3.3.4 Computing the Elements of the Inverse with Determinants . . . . . . . 50

3.3.5 Inversions of Matrix Products . . . . . . . . . . . . . . . . . . . . . . 52

3.4 Linear Mappings of Affine Vector Spaces . . . . . . . . . . . . . . . . . . 54

3.4.1 Matrix Multiplication as a Linear Mapping of Vectors . . . . . . . . . 54

3.4.2 Similarity Transformation of Vectors . . . . . . . . . . . . . . . . . . 55

3.4.3 Characteristics of the Similarity Transformation . . . . . . . . . . . . . 55

3.4.4 Congruence Transformation of Vectors . . . . . . . . . . . . . . . . . 56


3.4.5 Characteristics of the Congruence Transformation . . . . . . . . . . . 57

3.4.6 Orthogonal Transformation . . . . . . . . . . . . . . . . . . . . . . . . 57

3.4.7 The Gauss Transformation . . . . . . . . . . . . . . . . . . . . . . . . 59

3.5 Quadratic Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

3.5.1 Representations and Characteristics . . . . . . . . . . . . . . . . . . . 62

3.5.2 Congruence Transformation of a Matrix . . . . . . . . . . . . . . . . . 62

3.5.3 Derivatives of a Quadratic Form . . . . . . . . . . . . . . . . . . . . . 63

3.6 Matrix Eigenvalue Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.6.1 The Special Eigenvalue Problem . . . . . . . . . . . . . . . . . . . . . 65

3.6.2 Rayleigh Quotient . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

3.6.3 The General Eigenvalue Problem . . . . . . . . . . . . . . . . . . . . 69

3.6.4 Similarity Transformation . . . . . . . . . . . . . . . . . . . . . . . . 69

3.6.5 Transformation into a Diagonal Matrix . . . . . . . . . . . . . . . . . 70

3.6.6 Cayley-Hamilton Theorem . . . . . . . . . . . . . . . . . . . . . . . . 71

3.6.7 Proof of the Cayley-Hamilton Theorem . . . . . . . . . . . . . . . . . 71


3.1 Definitions

A matrix is an array of m × n numbers,

A = [A_ik] =
[ A11 A12 · · · A1n
  A21 A22 · · · A2n
   ...  ...  . . . ...
  Am1 Am2 · · · Amn ] . (3.1.1)

The index i is the row index and k is the column index. This matrix is called an m × n-matrix. The order of a matrix is given by the number of rows and columns.

3.1.1 Rectangular Matrix

A matrix of the general form given in equation (3.1.1), with the number of rows not necessarily equal to the number of columns, is called a rectangular matrix.

3.1.2 Square Matrix

A matrix is said to be square, if the number of rows equals the number of columns. It is an n × n-matrix,

    A = [Aik] = [ A11  A12  ···  A1n
                  A21  A22  ···  A2n
                   ⋮     ⋮    ⋱    ⋮
                  An1  An2  ···  Ann ] .                    (3.1.2)

3.1.3 Column Matrix

An m × 1-matrix is called a column matrix or a column vector a, given by

    a = [ a1
          a2
          ⋮
          am ] = [ a1  a2  ···  am ]T .                     (3.1.3)

3.1.4 Row Matrix

A 1 × n-matrix is called a row matrix or a row vector a, given by

    a = [ a1  a2  ···  an ] .                               (3.1.4)

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003

3.1. De£nitions 41

3.1.5 Diagonal Matrix

The elements of a diagonal matrix are all zero, except those where the column index equals the row index,

    D = [Dik] , with Dik = 0 for i ≠ k.                     (3.1.5)

Sometimes a diagonal matrix is written like this, because there are only elements on the main diagonal of the matrix,

    D = ⌈ D11 ··· Dnn ⌋ .                                   (3.1.6)

3.1.6 Identity Matrix

The identity matrix is a diagonal matrix given by

    1 = [ 1  0  ···  0
          0  1  ···  0
          ⋮   ⋮   ⋱   ⋮
          0  0  ···  1 ] ,  i.e.  1ik = 1 , iff i = k , and 1ik = 0 , iff i ≠ k.   (3.1.7)

3.1.7 Transpose of a Matrix

The matrix transpose is the matrix obtained by exchanging the columns and rows of the matrix

A = [aik] , and AT = [aki] . (3.1.8)

3.1.8 Symmetric Matrix

A square matrix is said to be symmetric, if the following equation is satisfied,

    AT = A.                                                 (3.1.9)

It is a kind of reflection about the main diagonal.

3.1.9 Antisymmetric Matrix

A square matrix is said to be antisymmetric, if the following equation is satisfied,

    AT = −A.                                                (3.1.10)

For the elements of an antisymmetric matrix the following condition holds,

    aik = −aki.                                             (3.1.11)

For that reason an antisymmetric matrix must have zeros on its main diagonal.


3.2 Some Basic Identities of Matrix Calculus

3.2.1 Addition of Same Order Matrices

The matrices A, B and C are commutative under matrix addition

A+B = B + A = C. (3.2.1)

The same relation is given in components notation,

Aik +Bik = Cik. (3.2.2)

The matrices A, B and C are associative under matrix addition

(A+B) + C = A+ (B + C) . (3.2.3)

There exists an identity element w.r.t. matrix addition, the zero matrix 0, called the additive identity and defined by A + 0 = A. Furthermore there exists an inverse element w.r.t. matrix addition, −A, called the additive inverse and defined by A + X = 0 ⇒ X = −A.

3.2.2 Multiplication by a Scalar Quantity

The scalar multiplication of a matrix is given by

    αA = Aα = [ αA11  αA12  ···  αA1n
                αA21  αA22  ···  αA2n
                  ⋮      ⋮     ⋱     ⋮
                αAm1  αAm2  ···  αAmn ] ;  α ∈ R.           (3.2.4)

3.2.3 Matrix Multiplication

The product of two matrices A and B is defined by the matrix multiplication

    A(l×m) B(m×n) = C(l×n) ,                                (3.2.5)

    Cik = Σν=1..m Aiν Bνk .                                 (3.2.6)

It is important to notice the condition that the number of columns of the first matrix equals the number of rows of the second matrix, see the index m in equation (3.2.5). Matrix multiplication is associative

(A B)C = A (B C) , (3.2.7)

and also matrix multiplication is distributive

(A+B)C = A C +B C. (3.2.8)

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003

3.2. Some Basic Identities of Matrix Calculus 43

Figure 3.1: Matrix multiplication.

But matrix multiplication is in general not commutative,

    A B ≠ B A.                                              (3.2.9)

There is an exception, the so-called commuting matrices; for example, diagonal matrices of the same order commute with each other.
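The defining sum (3.2.6) and the non-commutativity (3.2.9) can be checked numerically. The following sketch uses NumPy (an assumption, not part of the notes); the explicit loop reproduces the built-in product:

```python
import numpy as np

# C = A B with A (2 x 3) and B (3 x 2); the inner dimension must agree, eq. (3.2.5).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 1.0]])
C = A @ B

# Explicit summation C_ik = sum_nu A_inu B_nuk, eq. (3.2.6).
C_explicit = np.array([[sum(A[i, nu] * B[nu, k] for nu in range(3))
                        for k in range(2)] for i in range(2)])

# For square matrices the product is in general not commutative, eq. (3.2.9).
P = np.array([[0.0, 1.0], [0.0, 0.0]])
Q = np.array([[1.0, 0.0], [0.0, 2.0]])
noncommutative = not np.allclose(P @ Q, Q @ P)
```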

3.2.4 The Trace of a Matrix

The trace of a square matrix is defined as the sum of its diagonal elements,

    trA = tr [Aik](n×n) = Σi=1..n Aii .                     (3.2.10)

It is possible to split the trace of a sum of matrices

tr (A+B) = trA+ trB. (3.2.11)

Computing the trace of a matrix product is commutative,

tr (A B) = tr (B A) , (3.2.12)

but still the matrix multiplication itself in general is not commutative, see equation (3.2.9),

    A B ≠ B A.                                              (3.2.13)

The trace of an identity matrix of dimension n is defined by,

tr 1(n×n) = n. (3.2.14)
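A short numerical check of the trace rules (3.2.11), (3.2.12) and (3.2.14), again sketched with NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [5.0, 2.0]])

# tr(A + B) = tr A + tr B, eq. (3.2.11).
additive = np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))

# tr(A B) = tr(B A), eq. (3.2.12), even though A B != B A in general.
cyclic = np.isclose(np.trace(A @ B), np.trace(B @ A))

# tr 1_(n x n) = n, eq. (3.2.14).
n = 5
trace_identity = np.trace(np.eye(n))
```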


3.2.5 Symmetric and Antisymmetric Square Matrices

Every square matrix M can be written as the sum of a symmetric part S and an antisymmetric part A,

M (n×n) = S(n×n) + A(n×n). (3.2.15)

The symmetric part is defined as in equation (3.1.9),

    S = ST , i.e. Sik = Ski.                                (3.2.16)

The antisymmetric part is defined as in equation (3.1.10),

    A = −AT , i.e. Aik = −Aki , and Aii = 0.                (3.2.17)

For example, an antisymmetric matrix looks like this,

    A = [  0   1   5
          −1   0  −2
          −5   2   0 ] .

The symmetric and the antisymmetric part of a square matrix are given by

    M = (1/2) (M + MT) + (1/2) (M − MT) = S + A.            (3.2.18)

The transposes of the symmetric and the antisymmetric part of a square matrix are given by,

    ST = [(1/2) (M + MT)]T = (1/2) (MT + M) = S , and       (3.2.19)

    AT = [(1/2) (M − MT)]T = (1/2) (MT − M) = −A.           (3.2.20)
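The split (3.2.18) and the properties (3.2.19)–(3.2.20) can be verified directly; a NumPy sketch:

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Symmetric and antisymmetric parts, eq. (3.2.18).
S = 0.5 * (M + M.T)
A = 0.5 * (M - M.T)

recombined = np.allclose(S + A, M)
symmetric = np.allclose(S, S.T)            # eq. (3.2.19)
antisymmetric = np.allclose(A, -A.T)       # eq. (3.2.20)
zero_diagonal = np.allclose(np.diag(A), 0.0)
```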

3.2.6 Transpose of a Matrix Product

The transpose of a matrix product of two matrices is defined by

    (A B)T = BT AT ,                                        (3.2.21)

and hence

    (AT BT)T = B A ,                                        (3.2.22)

and for more than two matrices,

    (A B C)T = CT BT AT , etc.                              (3.2.23)

The proof starts with the l × n-matrix C, which is given by the two matrices A and B,

    C(l×n) = A(l×m) B(m×n) ;  Cik = Σν=1..m Aiν Bνk .


The transpose of the matrix C is given by

    CT = [Cki] , with Cki = Σν=1..m Akν Bνi = Σν=1..m (BT)iν (AT)νk ,

and finally in symbolic notation

    CT = (A B)T = BT AT .

3.2.7 Multiplication with the Identity Matrix

The identity matrix is the multiplicative identity w.r.t. the matrix multiplication

A 1 = 1 A = A. (3.2.24)

3.2.8 Multiplication with a Diagonal Matrix

A diagonal matrix D is given by

    D = [Dik] = [ D11   0   ···   0
                   0   D22  ···   0
                   ⋮     ⋮    ⋱    ⋮
                   0    0   ···  Dnn ] (n×n) .              (3.2.25)

Because matrix multiplication is not commutative, there exist two possibilities to compute the product of a matrix with a diagonal matrix. The first possibility is the multiplication with the diagonal matrix from the left-hand side; this is called the pre-multiplication,

    D A = [ D11 a1
            D22 a2
              ⋮
            Dnn an ] ;  A = [ a1
                              a2
                               ⋮
                              an ] .                        (3.2.26)

Each row of the matrix A, described by a so-called row vector ai or a row matrix,

    ai = [ ai1  ai2  ···  ain ] ,                           (3.2.27)

is multiplied with the matching diagonal element Dii. The result is the matrix D A in equation (3.2.26). The second possibility is the multiplication with the diagonal matrix from the right-hand side; this is called the post-multiplication,

    A D = [ a1 D11  a2 D22  ···  an Dnn ] ;  A = [ a1 , a2 , . . . , an ] .   (3.2.28)


Each column of the matrix A, described by a so-called column vector ai or a column matrix,

    ai = [ a1i
           a2i
            ⋮
           ani ] ,                                          (3.2.29)

is multiplied with the matching diagonal element Dii. The result is the matrix A D in equation(3.2.28).
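Pre- and post-multiplication with a diagonal matrix, eqs. (3.2.26) and (3.2.28), amount to row and column scaling; a NumPy sketch:

```python
import numpy as np

D = np.diag([2.0, 3.0, 5.0])
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0],
              [4.0, 0.0, 1.0]])

# Pre-multiplication D A scales the rows of A with the diagonal entries D_ii,
# eq. (3.2.26); post-multiplication A D scales the columns, eq. (3.2.28).
pre = D @ A
post = A @ D

d = np.array([2.0, 3.0, 5.0])
rows_scaled = np.allclose(pre, d[:, None] * A)   # row i multiplied by D_ii
cols_scaled = np.allclose(post, A * d[None, :])  # column i multiplied by D_ii
```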

3.2.9 Exchanging Columns and Rows of a Matrix

Exchanging the i-th and the j-th row of the matrix A is realized by pre-multiplication with a permutation matrix T,

    T(n×n) A(n×n) = Ã(n×n) ,                                (3.2.30)

where T is the identity matrix with the rows i and j exchanged, i.e.

    Tii = Tjj = 0 ,  Tij = Tji = 1 ,  Tkk = 1 for k ≠ i, j ,

so that the rows a1, . . . , ai, . . . , aj, . . . , an of A are mapped onto a1, . . . , aj, . . . , ai, . . . , an. This matrix T is the same as its inverse, T = T−1. A variant of T, with Tij = −1 instead of +1, exchanges the i-th and the j-th row, too; it satisfies T = (TT)−1 instead. Furthermore the old j-th row is multiplied by −1. Finally, post-multiplication with such a matrix T exchanges the columns i and j of a matrix.
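A 3 × 3 instance of the permutation matrix described above (indices 0-based in the code, a NumPy sketch):

```python
import numpy as np

# Permutation matrix T exchanging rows 0 and 2 of a 3 x 3 matrix:
# the identity matrix with rows 0 and 2 swapped, cf. eq. (3.2.30).
T = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

rows_swapped = T @ A      # pre-multiplication exchanges rows 0 and 2
cols_swapped = A @ T      # post-multiplication exchanges columns 0 and 2
involutory = np.allclose(T @ T, np.eye(3))   # T equals its own inverse
```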

3.2.10 Volumetric and Deviator Part of a Matrix

Every symmetric matrix S can be split into a diagonal matrix (the volumetric or ball part) V and a traceless symmetric matrix (the deviator part) D,

    S(n×n) = V(n×n) + D(n×n) .                              (3.2.31)


The ball part is given by

    Vii = (1/n) Σk=1..n Skk = (1/n) trS ,  or  V = ((1/n) trS) 1 .   (3.2.32)

The deviator part is the difference between the matrix S and the volumetric part,

    Dii = Sii − Vii ,  D = S − ((1/n) trS) 1 ,              (3.2.33)

and the off-diagonal elements of the deviator are the elements of the former matrix S,

    Dik = Sik , i ≠ k , and D = DT .                        (3.2.34)

The diagonal elements of the volumetric part are all equal,

    V = [V δik] = [ V
                      V
                        ⋱
                          V ] .                             (3.2.35)
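The volumetric/deviator split, eqs. (3.2.32)–(3.2.34), in a NumPy sketch:

```python
import numpy as np

S = np.array([[4.0, 1.0, 0.0],
              [1.0, 2.0, -1.0],
              [0.0, -1.0, 3.0]])
n = S.shape[0]

# Volumetric (ball) part, eq. (3.2.32), and deviator part, eq. (3.2.33).
V = (np.trace(S) / n) * np.eye(n)
D = S - V

recombined = np.allclose(V + D, S)
traceless = np.isclose(np.trace(D), 0.0)   # the deviator carries no trace
symmetric = np.allclose(D, D.T)            # eq. (3.2.34)
```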


3.3 Inverse of a Square Matrix

3.3.1 Definition of the Inverse

A linear equation system is given by

A(n×n)x(n×1) = y(n×1) ; A = [Aik] . (3.3.1)

The inversion of this system of equations introduces the inverse of a matrix A−1 .

x = A−1y ; A−1 := X = [Xik] . (3.3.2)

The pre-multiplication of A x = y with the inverse A−1 implies

A−1A x = A−1y → x = A−1y , and A−1A = 1. (3.3.3)

Finally the inverse of a matrix is defined by the following relations between a matrix A and its inverse A−1,

    (A−1)−1 = A ,                                           (3.3.4)

    A−1 A = A A−1 = 1 ,                                     (3.3.5)

    [Aik] [Xkj] = 1 .                                       (3.3.6)

The solution of the linear equation system exists, if and only if the inverse A−1 exists.

The inverse A−1 of a square matrix A exists, if the matrix is nonsingular (invertible), i.e. detA ≠ 0. Equivalently, the difference d = n − r between the number of columns resp. rows and the rank of the matrix A must be equal to zero, i.e. the rank r of the matrix A(n×n) must be equal to the number n of columns or rows (r = n). The rank of a rectangular matrix A(m×n) is defined as the largest number of linearly independent rows (at most m) or columns (at most n); the rank is bounded by the smaller of the two values m and n.

3.3.2 Important Identities of Determinants

1. The determinant stays the same, if a row (or a column) is added to another row (or column).

2. The determinant equals zero, if in the cofactor expansion the expanded row (or column) is replaced by another row (or column). In this case two rows (or columns) are the same, i.e. these rows (or columns) are linearly dependent.

3. This is the generalization of the first and second rule. The determinant equals zero, if the rows (or columns) of the matrix are linearly dependent. In this case it is possible to produce a row (or column) with all elements equal to zero, and if the determinant is expanded about this row (or column), the determinant itself equals zero.

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003

3.3. Inverse of a Square Matrix 49

4. By exchanging two rows (or columns) the sign of the determinant changes.

5. Multiplication with a scalar quantity is defined by

    det (λ A(n×n)) = λ^n detA(n×n) ;  λ ∈ R.                (3.3.7)

6. The determinant of a product of two matrices is given by

    det (A B) = det (B A) = detA detB.                      (3.3.8)
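Rules 1, 5 and 6 can be spot-checked numerically (a NumPy sketch):

```python
import numpy as np

A = np.array([[1.0, 4.0], [2.0, 3.0]])
B = np.array([[0.0, 1.0], [5.0, 2.0]])
lam, n = 3.0, 2

# det(lambda A) = lambda^n det A, eq. (3.3.7).
scaling = np.isclose(np.linalg.det(lam * A), lam**n * np.linalg.det(A))

# det(A B) = det A det B, eq. (3.3.8).
product = np.isclose(np.linalg.det(A @ B),
                     np.linalg.det(A) * np.linalg.det(B))

# Adding one row to another leaves the determinant unchanged (rule 1).
A_mod = A.copy()
A_mod[1] += A_mod[0]
row_add = np.isclose(np.linalg.det(A_mod), np.linalg.det(A))
```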

3.3.3 Derivation of the Elements of the Inverse of a Matrix

The n column vectors (n-tuples) ak (k = 1, . . . , n) of the matrix A, with ak ∈ Rn, are linearly independent, i.e. the sum Σν=1..n αν aν vanishes only if all αν are equal to zero,

    A(n×n) = [ a1  a2  ···  ak  ···  an ] ;  ak = [ A1k
                                                    A2k
                                                     ⋮
                                                    Ank ] .   (3.3.9)

The ak span an n-dimensional vector space. Then every other vector, e.g. the (n+1)-th vector an+1 = r ∈ Rn, can be described by a unique linear combination of the vectors ak, i.e. the vector r ∈ Rn is linearly dependent on the n vectors ak ∈ Rn. For that reason the linear equation system

    A(n×n) x(n×1) = r(n×1) ;  r ≠ 0 ;  r ∈ Rn ;  x ∈ Rn     (3.3.10)

has a unique solution,

    A−1 := X ,  A X = 1.                                    (3.3.11)

To compute the inverse X from the equation A X = 1 it is necessary to solve the linear equation system n times, with the unit vector 1j (j = 1, . . . , n) on the right-hand side. The j-th equation system is given by

    [ a1  a2  ···  ak  ···  an ] [ X1j
                                   X2j
                                    ⋮
                                   Xkj
                                    ⋮
                                   Xnj ] = [ 0
                                             0
                                             ⋮
                                             1
                                             ⋮
                                             0 ] ,  i.e.  A Xj = 1j ,   (3.3.12)

with the inverse represented by its column vectors,

    X = A−1 = [ X1  X2  ···  Xj  ···  Xn ] ,                (3.3.13)


and the identity matrix also represented by its column vectors,

    1 = [ 11  12  ···  1j  ···  1n ] ,                      (3.3.14)

and finally the identity matrix itself,

    1 = [ 1  0  ···  0
          0  1  ···  0
          ⋮   ⋮   ⋱   ⋮
          0  0  ···  1 ] .                                  (3.3.15)

The solutions represented by the vectors Xj can be computed with determinants.

3.3.4 Computing the Elements of the Inverse with Determinants

The determinant detA(n×n) of a square matrix with Aik ∈ R is a real number, defined by Leibniz as

    detA(n×n) = Σ (−1)^I A1j A2k A3l · · · ,                (3.3.16)

where the indices j, k, l, . . . are rearranged in all permutations of the numbers 1, 2, . . . , n and I is the total number of inversions of each permutation. The determinant detA is established as the sum of all n! terms. There exists the same number of positive and negative terms. For example, the determinant detA of a 3 × 3-matrix,

    A(3×3) = A = [ A11  A12  A13
                   A21  A22  A23
                   A31  A32  A33 ] ,                        (3.3.17)

is computed as follows. An even permutation of the numbers 1, 2, 3 is a sequence like

    1 → 2 → 3 , or 2 → 3 → 1 , or 3 → 1 → 2,                (3.3.18)

and an odd permutation is a sequence like

    3 → 2 → 1 , or 2 → 1 → 3 , or 1 → 3 → 2.                (3.3.19)

For this example with n = 3 equation (3.3.16) becomes

    detA = A11 A22 A33 + A12 A23 A31 + A13 A21 A32
         − A31 A22 A13 − A32 A23 A11 − A33 A21 A12
         = A11 (A22 A33 − A32 A23)
         − A12 (A21 A33 − A31 A23)
         + A13 (A21 A32 − A31 A22) .                        (3.3.20)


This result is the same as the result obtained by the determinant expansion by minors; in this example the determinant is expanded about its first row. In general the determinant is expanded about the i-th row like this,

    detA(n×n) = Σj=1..n Aij (−1)^(i+j) det [*Aij] = Σj=1..n Aij Âij .   (3.3.21)

Here [*Aij] is the matrix created by eliminating the i-th row and the j-th column of A. The factor Âij = (−1)^(i+j) det [*Aij] is the so-called cofactor of the element Aij. For this factor again the determinant expansion is used.

Example: Simple 3 × 3-matrix. The matrix A,

    A = [  1  4  0
           2  1  1
          −1  0  2 ] ,

is expanded about the first row,

    detA = 1 · (−1)^(1+1) · det [ 1  1
                                  0  2 ]
         + 4 · (−1)^(1+2) · det [  2  1
                                  −1  2 ]
         + 0 · (−1)^(1+3) · det [  2  1
                                  −1  0 ] ,

and finally the result is

    detA = 1 · 2 − 4 · 5 + 0 · 1 = −18.
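The cofactor expansion (3.3.21) can be written as a short recursive function; this plain-Python sketch (not from the notes) recomputes the example above:

```python
# Determinant by cofactor expansion about the first row, cf. eq. (3.3.21).
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j, then recurse on the submatrix.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

# The 3 x 3 example from the text, expected determinant -18.
A = [[1.0, 4.0, 0.0],
     [2.0, 1.0, 1.0],
     [-1.0, 0.0, 2.0]]
result = det(A)
```

Note that this recursion costs O(n!) operations and is only meant to mirror the definition; for larger matrices a factorization-based routine is the practical choice.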

In order to compute the inverse X of a matrix, the determinant detA is calculated by expanding the i-th row of the matrix A. The matrix A(n×n) is assumed to be nonsingular, i.e. detA ≠ 0. Equation (3.3.21) implies

    Σj=1..n Aij Âij = detA = 1 · detA.                      (3.3.22)

The second rule about determinants implies that exchanging the expanded row i by the row k leads to a linearly dependent matrix,

    Σj=1..n Akj Âij = 0 = 0 · detA if i ≠ k,                (3.3.23)


or

    Σj=1..n Aij Âkj = 0 = 0 · detA if i ≠ k.                (3.3.24)

The definition of the Kronecker delta δik is given by

    δik = 1 , iff i = k , and δik = 0 , iff i ≠ k ;  [δik] = 1.   (3.3.25)

Equations (3.3.22) and (3.3.24) are rewritten with the definition of the Kronecker delta,

    Σj=1..n Aij Âkj = δik detA.                             (3.3.26)

The elements Xjk of the inverse X = A−1 are defined by

    Σj=1..n Aij Xjk = δik ;  A X = 1,                       (3.3.27)

and comparing (3.3.26) with (3.3.27) implies

    Xjk = Âkj / detA ;  [Xjk] = A−1.                        (3.3.28)

If the matrix is symmetric, i.e. A = AT, the equations (3.3.26) and (3.3.27) imply

    Xjk = Âjk / detA ;  [Xjk] = A−1,                        (3.3.29)

and finally

    A−1 = (A−1)T .                                          (3.3.30)
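The cofactor formula (3.3.28) for the inverse, sketched with NumPy; the helper `cofactor_inverse` is only an illustration (and far more expensive than standard solvers):

```python
import numpy as np

# Inverse via cofactors, eq. (3.3.28): X_jk = cof(A)_kj / det A.
def cofactor_inverse(A):
    n = A.shape[0]
    detA = np.linalg.det(A)
    cof = np.zeros_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    # The transpose of the cofactor matrix is the adjugate of A.
    return cof.T / detA

A = np.array([[1.0, 4.0, 0.0],
              [2.0, 1.0, 1.0],
              [-1.0, 0.0, 2.0]])
X = cofactor_inverse(A)
is_inverse = np.allclose(A @ X, np.eye(3))
```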

3.3.5 Inversions of Matrix Products

1. The inverse of a matrix product is given by

    (A B)−1 = B−1 A−1.                                      (3.3.31)


Proof. Start with the definition of the inverse,

    (A B)−1 (A B) = 1;

inserting the claim (3.3.31) for (A B)−1 implies

    B−1 A−1 A B = 1,

and finally

    B−1 1 B = 1.

2. The inverse of a triple matrix product is given by

    (A B C)−1 = C−1 B−1 A−1.                                (3.3.32)

3. The order of inversion and transposition can be exchanged,

    (A−1)T = (AT)−1 .                                       (3.3.33)

Proof. The inverse is defined by

    A A−1 = 1 = (A A−1)T = (A−1)T AT ,

and this finally implies

    (AT)−1 AT = 1 → (AT)−1 = (A−1)T .

4. If the matrix A is symmetric, then the inverse matrix A−1 is symmetric, too,

    A = AT → A−1 = (A−1)T .                                 (3.3.34)

5. For a diagonal matrix D the following relations hold,

    detD = Πi=1..n Dii ,                                    (3.3.35)

    D−1 = [ 1/Dii ] .                                       (3.3.36)


3.4 Linear Mappings of Affine Vector Spaces

3.4.1 Matrix Multiplication as a Linear Mapping of Vectors

The linear mapping is defined by

    y = A x , with x ∈ Rm , y ∈ Rl , and A ∈ Rl×m,          (3.4.1)

and its components are

    yi = Σj=1..m Aij xj  ⇔  y1 = A11 x1 + . . . + A1j xj + . . . + A1m xm
                             ⋮
                            yi = Ai1 x1 + . . . + Aij xj + . . . + Aim xm
                             ⋮
                            yl = Al1 x1 + . . . + Alj xj + . . . + Alm xm .   (3.4.2)

This linear function describes a mapping of the m-tuple (vector) x onto the l-tuple (vector) y with the matrix A. Furthermore, the vector x ∈ Rm is described by a linear mapping with a matrix B and a vector z ∈ Rn,

    x = B z , with x ∈ Rm , z ∈ Rn , and B ∈ Rm×n,          (3.4.3)

with the components

    xj = Σk=1..n Bjk zk .                                   (3.4.4)

Inserting equation (3.4.4) in equation (3.4.2) implies

    yi = Σj=1..m Aij xj = Σj=1..m ( Aij Σk=1..n Bjk zk )
       = Σk=1..n ( Σj=1..m Aij Bjk ) zk = Σk=1..n Cik zk .   (3.4.5)

With this relation the matrix multiplication is defined by

    A B = C,                                                (3.4.6)

with the components Cik given by

    Σj=1..m Aij Bjk = Cik .                                 (3.4.7)

The matrix multiplication is the combination or the composition of two linear mappings,

    y = A x ,  x = B z  ⇒  y = A (B z) = (A B) z = C z.     (3.4.8)


Figure 3.2: Matrix multiplication for a composition of matrices.

3.4.2 Similarity Transformation of Vectors

For a square matrix A a linear mapping is defined by

    y = A x , with x, y ∈ Rn , and A ∈ Rn×n.                (3.4.9)

The two vectors x and y are transformed with the same nonsingular square matrix T into the vectors x̄ and ȳ. The vectors are called similar, because they are transformed in the same way,

    x = T x̄ , with x, x̄ ∈ Rn , T ∈ Rn×n , and detT ≠ 0,    (3.4.10)

    y = T ȳ , with y, ȳ ∈ Rn , T ∈ Rn×n , and detT ≠ 0.    (3.4.11)

Inserting these relations in equation (3.4.9) implies

    T ȳ = A T x̄  |  pre-multiplication with T−1 ,          (3.4.12)

    ȳ = T−1 A T x̄ ,                                        (3.4.13)

and finally

    Ā = T−1 A T .                                           (3.4.14)

The matrix Ā = T−1 A T is the result of the so-called similarity transformation of the matrix A with the nonsingular transformation matrix T. The matrices A and Ā are said to be similar matrices.

3.4.3 Characteristics of the Similarity Transformation

Similar matrices A and Ā have some typical characteristics:


• the determinants are equal,

    det Ā = det (T−1 A T) = detT−1 detA detT , and detT−1 = 1/detT ,

    det Ā = detA.                                           (3.4.15)

• the traces are equal; with

    tr (A B) = tr (B A) ,                                   (3.4.16)

it follows that

    tr Ā = tr (T−1 A T) = tr (A T T−1) ,

    tr Ā = trA.                                             (3.4.17)

• similar matrices have the same eigenvalues,

• and the same characteristic polynomial.
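A numerical check of the invariants (3.4.15) and (3.4.17), with an arbitrarily chosen nonsingular T (NumPy sketch):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])
T = np.array([[1.0, 1.0], [1.0, 2.0]])    # det T = 1, nonsingular

# Similarity transformation A_bar = T^(-1) A T, eq. (3.4.14).
A_bar = np.linalg.inv(T) @ A @ T

same_det = np.isclose(np.linalg.det(A_bar), np.linalg.det(A))    # (3.4.15)
same_trace = np.isclose(np.trace(A_bar), np.trace(A))            # (3.4.17)
same_eigs = np.allclose(np.sort(np.linalg.eigvals(A_bar)),
                        np.sort(np.linalg.eigvals(A)))
```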

3.4.4 Congruence Transformation of Vectors

Let y = A x be a linear mapping,

    y = A x , with x, y ∈ Rn , and A ∈ Rn×n,                (3.4.18)

with a square matrix A. The vectors x and y are transformed in opposite ways (contragrediently) with the nonsingular square matrix T into the vectors x̄ and ȳ,

    x = T x̄ , with x, x̄ ∈ Rn , T ∈ Rn×n , and detT ≠ 0.    (3.4.19)

The vector ȳ is the result of the multiplication of the transpose of the matrix T with the vector y,

    ȳ = TT y , with y, ȳ ∈ Rn , T ∈ Rn×n , and detT ≠ 0.   (3.4.20)

Inserting equation (3.4.19) in equation (3.4.18) and pre-multiplying with TT implies

    TT y = TT A T x̄ ,

and comparing this with equation (3.4.20) implies

    ȳ = Ā x̄ ,                                              (3.4.21)

    Ā = TT A T .                                            (3.4.22)

The matrix product Ā = TT A T is called the congruence transformation of the matrix A. The matrices A and Ā are called congruent matrices.


3.4.5 Characteristics of the Congruence Transformation

Congruent matrices A and Ā have some typical characteristics:

• The congruence transformation keeps the matrix A symmetric:

    condition: A = AT ,                                     (3.4.23)

    assumption: Ā = ĀT ,                                    (3.4.24)

    proof: ĀT = (TT A T)T = TT AT T = TT A T = Ā.           (3.4.25)

• The product P = xT y = xT A x is an invariant scalar quantity:

    assumption: P = xT y = P̄ = x̄T ȳ ,                     (3.4.26)

    proof: x = T x̄ ⇒ x̄ = T−1 x , ȳ = TT y , with detT ≠ 0,   (3.4.27)

    x̄T ȳ = (T−1 x)T TT y = xT (T−1)T TT y = xT (TT)−1 TT y = xT y.

The scalar product P = xT y = xT A x is also called the quadratic form of the vector x. The quantity P could describe a mechanical work, if the elements of the vector x describe a displacement and the components of the vector y describe the assigned forces of a static system. The invariance of this work under congruence transformations is important for numerical mechanics, e.g. for the finite element method.

3.4.6 Orthogonal Transformation

Let the square matrix A describe a linear mapping,

    y = A x , with x, y ∈ Rn , and A ∈ Rn×n.                (3.4.28)

The vectors x and y are transformed in the similar way and in the congruent way with the so-called orthogonal matrix T = Q, detQ ≠ 0,

    x = Q x̄ ,  y = Q ȳ  ⇒  ȳ = Q−1 y  →  similarity transformation,   (3.4.29)

    ȳ = QT y  →  congruence transformation.                 (3.4.30)

For the orthogonal transformation both agree, i.e. the transformation matrices are called orthogonal, if they fulfill the relations

    Q−1 = QT or Q QT = 1.                                   (3.4.31)

For orthogonal matrices the following identities hold:


• If a matrix is orthogonal, its inverse equals the transpose of the matrix.

• The determinant of an orthogonal matrix has the value +1 or −1,

    detQ = ±1.                                              (3.4.32)

• The product of orthogonal matrices is again orthogonal.

• An orthogonal matrix Q with detQ = +1 is called a rotation matrix.

Figure 3.3: Orthogonal transformation — rotation of the axes (y1, y2) into (ȳ1, ȳ2) by the angle α.

The most important usage of these rotation matrices is the rotation transformation of coordinates. For example the rotation transformation in R2 is given by

    y = Q ȳ ,                                               (3.4.33)

    [ y1 ] = [ cosα  −sinα ] [ ȳ1 ]
    [ y2 ]   [ sinα   cosα ] [ ȳ2 ]  ⇒  y = Q ȳ ,           (3.4.34)

and

    ȳ = QT y ,                                              (3.4.35)

    [ ȳ1 ] = [  cosα  sinα ] [ y1 ]
    [ ȳ2 ]   [ −sinα  cosα ] [ y2 ]  ⇒  ȳ = QT y.           (3.4.36)

The inversion of equation (3.4.33) with the aid of determinants implies

    ȳ = Q−1 y ;  Q−1 := X ;  Xik = Q̂ki / detQ ,            (3.4.37)

with the cofactors Q̂ki. Solving these equations step by step, starting with computing the determinant,

    detQ = cos²α + sin²α = 1,                               (3.4.38)

the general form of the equation to compute the elements of the inverse of the matrix,

    Q̂ki = (−1)^(k+i) det *Qki ,                            (3.4.39)

the different elements,

    X11 = Q̂11 = Q22 = cosα,                                (3.4.40)

    X12 = Q̂21 = (−1)³ Q12 = (−1)³ (−sinα) = +sinα,         (3.4.41)

    X21 = Q̂12 = (−1)³ Q21 = (−1)³ (sinα) = −sinα,          (3.4.42)

    X22 = Q̂22 = Q11 = cosα,                                (3.4.43)

and finally

    X = Q−1 = [  cosα  sinα ]
              [ −sinα  cosα ] .                             (3.4.44)

Comparing this result with equation (3.4.35) leads to

    Q−1 = QT .                                              (3.4.45)
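The rotation matrix of eq. (3.4.34) satisfies the orthogonality relations; a NumPy sketch with an arbitrary angle:

```python
import numpy as np

alpha = 0.3
Q = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])   # rotation matrix, eq. (3.4.34)

orthogonal = np.allclose(Q @ Q.T, np.eye(2))                 # Q Q^T = 1, eq. (3.4.31)
inverse_is_transpose = np.allclose(np.linalg.inv(Q), Q.T)    # eq. (3.4.45)
det_plus_one = np.isclose(np.linalg.det(Q), 1.0)             # rotation: det Q = +1
```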

3.4.7 The Gauss Transformation

Let A(m×n) be a real-valued matrix, Aik ∈ R, with m > n and linearly independent columns, i.e. the matrix A has full column rank. The Gauss transformation is defined by

    B = AT A , with B ∈ Rn×n , AT ∈ Rn×m , and A ∈ Rm×n.    (3.4.46)


The matrix B is symmetric,

    B = BT ,                                                (3.4.47)

because

    BT = (AT A)T = AT A = B.                                (3.4.48)

If the columns of A are linearly independent, then the matrix B is nonsingular, i.e. the determinant is not equal to zero,

    detB ≠ 0.                                               (3.4.49)

This product was introduced by Gauss in order to compute the so-called normal equations. The matrix A is given by its column vectors,

    A = [ a1  a2  ···  an ] ,  ai = i-th column vector of A,   (3.4.50)

and the matrix B is computed by

    B = AT A = [ a1T
                 a2T
                  ⋮
                 anT ] [ a1  a2  ···  an ]                  (3.4.51)

      = [ a1T a1  a1T a2  ···  a1T an
          a2T a1  a2T a2  ···  a2T an
            ⋮       ⋮      ⋱      ⋮
          anT a1  anT a2  ···  anT an ] (n×n) .             (3.4.52)

An element Bik of the product matrix is the scalar product of the i-th column vector with the k-th column vector of A,

    Bik = aiT ak .                                          (3.4.53)

The diagonal elements are the squared norms of the column vectors, and these values are always positive (for ai ≠ 0). The trace of the product AT A, i.e. the sum of all Aik², is the squared value of a matrix norm, the so-called Euclidean matrix norm N(A),

    N(A) = √( tr (AT A) ) = √( Σi,k Aik² ) .                (3.4.54)


For example,

    A = [  2  1
           0  3
          −3  2 ] ;  AT A = [ 13  −4
                              −4  14 ] ,

    N(A) = √(2² + 1² + 0² + 3² + (−3)² + 2²) = √27 = 3√3 = √( Σi,k Aik² ) ,

         = √(13 + 14) = 3√3 = √( tr (AT A) ) .

The matrix B = AT A is positive definite.
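The example above, recomputed (NumPy sketch):

```python
import numpy as np

# The example from the text: A is 3 x 2 with linearly independent columns.
A = np.array([[2.0, 1.0],
              [0.0, 3.0],
              [-3.0, 2.0]])

B = A.T @ A                            # Gauss transformation, eq. (3.4.46)
symmetric = np.allclose(B, B.T)        # eq. (3.4.47)
nonsingular = not np.isclose(np.linalg.det(B), 0.0)

# Euclidean (Frobenius) matrix norm, eq. (3.4.54).
N = np.sqrt(np.trace(B))
```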


3.5 Quadratic Forms

3.5.1 Representations and Characteristics

Let y = A x be a linear mapping,

    y = A x , with x ∈ Rn , y ∈ Rn , and Aik ∈ R,           (3.5.1)

with a nonsingular, square and symmetric matrix A, i.e.

    detA ≠ 0 , and A = AT .                                 (3.5.2)

The product

    α := xT y = xT A x                                      (3.5.3)

is a real number, α ∈ R, and is called the quadratic form of A. The following conditions hold,

    α = αT , scalar quantities are invariant w.r.t. transposition, and   (3.5.4)

    α = xT A x = αT = xT AT x , because A = AT ,            (3.5.5)

i.e. the matrix A must be symmetric. The scalar quantity α, and with it the matrix A, is called positive definite (or negative definite), if the following conditions hold,

    α = xT A x > 0 (resp. < 0) for every x ≠ 0 , and α = 0 iff x = 0.   (3.5.6)

It is necessary that the determinant does not equal zero, detA ≠ 0, i.e. the matrix A must be nonsingular. If there exists a vector x ≠ 0, such that α = 0, then the form α = xT A x is called semidefinite. In this case the matrix A is singular, i.e. detA = 0, and the homogeneous system of equations,

    A x = 0  ( → xT A x = 0 , with x ≠ 0 , and detA = 0 ),  (3.5.7)

or resp.

    a1 x1 + a2 x2 + . . . + an xn = 0,                      (3.5.8)

has nontrivial solutions, because of the linear dependence of the columns of the matrix A. The condition xT A x = 0 can only hold for a vector x ≠ 0, if the determinant of the matrix A equals zero, detA = 0.

3.5.2 Congruence Transformation of a Matrix

Let α = xT A x be a quadratic form, given by

    α = xT A x , and AT = A.                                (3.5.9)

The vector x is transformed with the nonsingular transformation T,

    x = T y ,                                               (3.5.10)

    ⇒ α = yT TT A T y ,

    α = yT B y.                                             (3.5.11)

The matrix A transforms like

    B = TT A T ,                                            (3.5.12)

where T is a real nonsingular matrix. Then the matrices B and A are called congruent to each other,

    A ∼c B.                                                 (3.5.13)

The congruence transformation preserves the symmetry of the matrix A, because the following equation holds,

    B = BT .                                                (3.5.14)

3.5.3 Derivatives of a Quadratic Form

The quadratic form

    α = xT A x , and AT = A,                                (3.5.15)

shall be partially derived w.r.t. the components of the vector x. The result forms the column matrix ∂α/∂x. With

    ∂x/∂xi = [ 0  0  ···  1  ···  0 ]T = ei , the i-th unit vector,   (3.5.16)

and

    ∂xT/∂xi = [ 0  0  ···  1  ···  0 ] = eiT ,              (3.5.17)

the derivative of the quadratic form is given by

    ∂α/∂xi = eiT A x + xT A ei .                            (3.5.18)

With the symmetry of the matrix A,

    A = AT ,                                                (3.5.19)

the second part of equation (3.5.18) is rewritten as

    (xT A ei)T = eiT AT x = eiT A x ,                       (3.5.20)

and finally

    ∂α/∂xi = 2 eiT A x.                                     (3.5.21)

The quantity eiT A x is the i-th component of the vector A x. Furthermore the n derivatives ∂α/∂xi are combined in a column matrix,

    ∂α/∂x = [ ∂α/∂x1
              ∂α/∂x2
                ⋮
              ∂α/∂xn ] = 2 · 1 · A x ,

    ∂α/∂x = 2 A x.                                          (3.5.22)
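The result (3.5.22) can be checked against central finite differences (NumPy sketch; the step size h is an arbitrary choice):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric matrix
x = np.array([1.0, -2.0])

alpha = x @ A @ x                 # quadratic form, eq. (3.5.3)
grad = 2.0 * A @ x                # analytic derivative, eq. (3.5.22)

# Central finite-difference approximation of d(alpha)/dx_i.
h = 1e-6
fd = np.zeros(2)
for i in range(2):
    e = np.zeros(2)
    e[i] = h
    fd[i] = ((x + e) @ A @ (x + e) - (x - e) @ A @ (x - e)) / (2.0 * h)

matches = np.allclose(fd, grad, atol=1e-5)
```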


3.6 Matrix Eigenvalue Problem

3.6.1 The Special Eigenvalue Problem

For a given linear mapping

    y(n×1) = A(n×n) x(n×1) , with x, y ∈ Rn , xi, yi, Aik ∈ F , and detA ≠ 0,   (3.6.1)

find the vectors x0 whose direction coincides with the direction of the vectors y0,

    y0 = λ x0 , and λ ∈ F.                                  (3.6.2)

The direction associated with an eigenvector x0 is called a principal axis. The whole task is described as the principal axes problem of the matrix A. The scalar quantity λ is called the eigenvalue; because of this definition the whole problem is also called the eigenvalue problem. Equation (3.6.2) can be rewritten like this,

    y0 = λ 1 x0 ,                                           (3.6.3)

and inserting this in equation (3.6.1) implies

    y0 = A x0 = λ 1 x0 ,                                    (3.6.4)

and finally the special eigenvalue problem,

    (A − λ 1) x0 = 0.                                       (3.6.5)

The so-called special eigenvalue problem is characterized by the eigenvalues λ appearing only on the main diagonal. The homogeneous linear equation system for x0 always has the trivial solution x0 = 0. A nontrivial solution exists only if the condition

    det (A − λ 1) = 0                                       (3.6.6)

is fulfilled. This equation is called the characteristic equation, and the left-hand side det (A − λ 1) is called the characteristic polynomial. The components of the vector x0 are yet unknown; since the principal axes are sought, the vector x0 is determined only up to its norm. Expanding the determinant implies for a matrix with n rows a polynomial of n-th degree. The roots, sometimes also called the zeros, of this equation or polynomial are the eigenvalues,

    p (λ) = det (λ 1 − A) = λ^n + a(n−1) λ^(n−1) + . . . + a1 λ + a0 .   (3.6.7)

The second-highest and the lowest coefficient of the polynomial are given by

    a(n−1) = − trA , and                                    (3.6.8)

    a0 = (−1)^n detA.                                       (3.6.9)


With the polynomial factorization the equation p (λ) or the polynomial (3.6.7) could be describedby

p (λ) = (λ− λ1) (λ− λ2) · . . . · (λ− λn) . (3.6.10)

Comparing this with Newton’s relation for a symmetric polynomial the equations (3.6.8) and(3.6.9) could be rewritten like this,

trA = λ1 + λ2 + . . .+ λn, and (3.6.11)

detA = λ1 · λ2 · . . . · λn. (3.6.12)

Because the associated eigenvectors to each eigenvalue could not be computed explict, the wholeequation is normed

y_0i = A x_0i = λ_i x_0i , with i = 1, 2, 3, . . . , n, (3.6.13)

and the eigenvectors x_0i. If for example the matrix (A − λ1) has the reduction of rank d = 1, and the vector x_0i^(1) is an arbitrary solution of the eigenvalue problem, then the equation,

x_0i = c x_0i^(1), (3.6.14)

with the parameter c, represents the general solution of the eigenvalue problem. If the reduction of rank of the matrix is larger than 1, then there exist d > 1 linearly independent eigenvectors x_i. As a rule of thumb,

eigenvectors of different eigenvalues are linearly independent.

For a symmetric matrix A the following identities hold.

3.6.1.0.15 1st Rule. The eigenvectors x_0i of a nonsingular and symmetric matrix A are orthogonal to each other.

3.6.1.0.16 Proof. Let the vectors x_0i = x_i, in the following x_1 and x_2, be eigenvectors,

(A − λ_1 1) x_1 = 0, (3.6.15)

multiplied with the vector x_2 from the left-hand side,

x_2^T (A − λ_1 1) x_1 = 0, (3.6.16)

and also

x_1^T (A − λ_2 1) x_2 = 0, (3.6.17)

and finally, equation (3.6.16) subtracted from equation (3.6.17),

x_1^T A x_2 − λ_2 x_1^T x_2 − x_2^T A x_1 + λ_1 x_2^T x_1 = 0. (3.6.18)


3.6. Matrix Eigenvalue Problem 67

With A being symmetric

α = x_1^T A x_2 = (x_1^T A x_2)^T = x_2^T A^T x_1 = x_2^T A x_1 , with A = A^T, (3.6.19)

results

(λ_1 − λ_2) x_1^T x_2 = 0, (3.6.20)

and because (λ_1 − λ_2) ≠ 0, the scalar product x_1^T x_2 must equal zero. This means that the vectors are orthogonal,

x_1 ⊥ x_2 , iff λ_1 ≠ λ_2. (3.6.21)
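The orthogonality statement (3.6.21) can be checked numerically. A minimal sketch, assuming numpy is available:

```python
import numpy as np

# Symmetric test matrix with distinct eigenvalues (illustrative values).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# For a symmetric matrix, eigh returns real eigenvalues and an
# orthonormal set of eigenvectors (the columns of X).
lam, X = np.linalg.eigh(A)

assert np.all(np.isreal(lam))            # real eigenvalues
assert np.allclose(X.T @ X, np.eye(3))   # x_i^T x_j = delta_ij
```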

Furthermore the following rule holds.

3.6.1.0.17 2nd Rule. A real, nonsingular and symmetric square matrix with n rows has exactly n real eigenvalues λ_i, being the roots of its characteristic equation.

3.6.1.0.18 Proof. Let the eigenvalues be complex numbers, given by

λ1 = β + iγ , and λ2 = β − iγ, (3.6.22)

then the eigenvectors are given by

x1 = b+ ic , and x2 = b− ic. (3.6.23)

Inserting these relations in the above orthogonality condition,

(λ_1 − λ_2) x_1^T x_2 = 0, (3.6.24)

implies

(λ_1 − λ_2) (b^T + i c^T) (b − i c) = 0, (3.6.25)

and finally

2 i γ (b^T b + c^T c) = 0. (3.6.26)

This equation implies γ = 0, because the term (b^T b + c^T c) ≠ 0 is nonzero, i.e. the eigenvalues are real numbers.

3.6.2 Rayleigh Quotient

The largest eigenvalue λ_1 of a symmetric matrix could be estimated with the Rayleigh quotient. The special eigenvalue problem y = A x = λ x, or

(A − λ1) x = 0 , with A = A^T , detA ≠ 0 , A_ij ∈ R, (3.6.27)


has the n eigenvalues, in order of magnitude,

|λ_1| ≥ |λ_2| ≥ |λ_3| ≥ . . . ≥ |λ_n| , with λ ∈ R. (3.6.28)

For large matrices A the setting-up and the solution of the characteristic equation is very complicated. Furthermore, for some problems it is sufficient to know only the largest and/or the smallest eigenvalue; e.g. for a stability problem only the smallest eigenvalue is of interest, because it corresponds to the critical load. Therefore the so-called direct method by von Mises to compute the approximated eigenvalue λ_1 is interesting. In order to determine the smallest eigenvalue, i.e. the critical load case, with this method, it is necessary to compute the inverse of the matrix beforehand. This so-called von Mises iteration is given by

z_ν = A z_{ν−1} = A^ν z_0. (3.6.29)

In this iterative process the vector z_ν converges to the direction of x_1, i.e. to the eigenvector associated with the eigenvalue λ_1 of largest absolute value. The starting vector z_0 is represented by the linearly independent eigenvectors x_i,

z_0 = C_1 x_1 + C_2 x_2 + . . . + C_n x_n ≠ 0, (3.6.30)

and the ν-th iterated vector is then given by

z_ν = λ_1^ν C_1 x_1 + λ_2^ν C_2 x_2 + . . . + λ_n^ν C_n x_n ≠ 0. (3.6.31)

If the condition |λ_1| > |λ_2| ≥ |λ_3| ≥ . . . ≥ |λ_n| holds, then with increasing value of ν the vector z_ν converges to the eigenvector x_1 multiplied with a constant,

z_ν → λ_1^ν C_1 x_1, (3.6.32)

z_{ν+1} → λ_1 z_ν . (3.6.33)

Componentwise, the ratio of successive iterates converges to the eigenvalue,

q_i^(ν) = z_i^(ν) / z_i^(ν−1) → λ_1. (3.6.34)

The convergence improves as the ratio |λ_1| / |λ_2| increases. A very good approximation Λ_1 for the dominant (largest) eigenvalue λ_1 is established with the so-called Rayleigh quotient,

Λ_1 = R [z_ν] = (z_ν^T z_{ν+1}) / (z_ν^T z_ν) = (z_ν^T A z_ν) / (z_ν^T z_ν) , with Λ_1 ≤ λ_1. (3.6.35)

The numerator and the denominator of the Rayleigh quotient include scalar products of the approximated vectors. For this reason the information of all components q_i^(ν) is used in this approximation.
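The von Mises iteration (3.6.29) together with the Rayleigh quotient (3.6.35) can be sketched as follows; `von_mises` is a hypothetical helper, numpy is assumed, and the iterate is normalized in each step to avoid overflow, which does not change its direction:

```python
import numpy as np

def von_mises(A, z0, steps=50):
    """Power iteration z_nu = A z_(nu-1); returns the Rayleigh-quotient
    estimate of the dominant eigenvalue lambda_1."""
    z = z0 / np.linalg.norm(z0)
    for _ in range(steps):
        z = A @ z
        z /= np.linalg.norm(z)
    # Rayleigh quotient R[z] = (z^T A z) / (z^T z), equation (3.6.35).
    return (z @ A @ z) / (z @ z)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam1 = von_mises(A, np.array([1.0, 1.0]))
assert np.isclose(lam1, np.max(np.linalg.eigvals(A)))
```

For the smallest eigenvalue the same iteration would be applied to the inverse of A, as mentioned above.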


3.6.3 The General Eigenvalue Problem

The general eigenvalue problem is de£ned by

A x = λB x, (3.6.36)

(A− λB)x = 0, (3.6.37)

with the matrices A and B being nonsingular. The eigenvalues λ are multiplied with an arbitrary matrix B and not with the identity matrix 1. This problem is reduced to the special eigenvalue problem by multiplication with the inverse of matrix B from the left-hand side,

(B^{−1} A − λ1) x = 0, (3.6.38)

(C − λ1) x = 0 , with C = B^{−1} A. (3.6.39)

Even if the matrices A and B are symmetric, the matrix C = B^{−1} A is in general a nonsymmetric matrix, because the matrix multiplication is noncommutative.
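A small numerical sketch of this reduction, with numpy assumed and illustrative matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = np.array([[3.0, 1.0],
              [1.0, 2.0]])   # nonsingular

# Reduce A x = lambda B x to the special problem (C - lambda 1) x = 0.
C = np.linalg.solve(B, A)    # C = B^-1 A, in general nonsymmetric
lam, X = np.linalg.eig(C)

# Each eigenpair satisfies the defining equation A x = lambda B x.
for l, x in zip(lam, X.T):
    assert np.allclose(A @ x, l * (B @ x))
```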

3.6.4 Similarity Transformation

In the special eigenvalue problem

A x = y = λx = λ1 x, (3.6.40)

the vectors are transformed as in a similarity transformation,

x = T x̄ , and y = T ȳ , with ȳ = λ1 x̄. (3.6.41)

The transformation matrix T is nonsingular, i.e. detT ≠ 0, and T_ik ∈ R. This implies

A T x̄ = λ T x̄,

T^{−1} A T x̄ = λ x̄,

(T^{−1} A T − λ1) x̄ = 0, and (3.6.42)

(Ā − λ1) x̄ = 0 , with Ā = T^{−1} A T. (3.6.43)

The determinant of the inverse of matrix T is given by

det(T^{−1}) = 1 / detT, (3.6.44)

and the determinant of the product is split in the product of determinants,

det(T^{−1} A T) = det(T^{−1}) detA detT , (3.6.45)

det Ā = detA.


3.6.4.0.19 Rule. The eigenvalues of the matrix A do not change if the matrix is transformed into the similar matrix Ā,

det(T^{−1} A T − λ1) = det(Ā − λ1) = det (A − λ1) = 0. (3.6.46)

3.6.5 Transformation into a Diagonal Matrix

The nonsingular symmetric matrix A with n rows contains n linearly independent eigenvectors x_i, if and only if for any multiple eigenvalue λ_σ (i.e. multiple root of the characteristic polynomial) of multiplicity p_σ the reduction of rank d_σ of the characteristic matrix (A − λ1) equals the multiplicity, d_σ = p_σ, with σ = 1, 2, . . . , s. The quantity s describes the number of different eigenvalues. The n linearly independent normalized eigenvectors x_i of the matrix A are combined as column vectors to form the nonsingular eigenvector matrix,

X = [x_1, x_2, . . . , x_n] , with detX ≠ 0. (3.6.47)

The equations of eigenvalues are given by

A xi = λixi, (3.6.48)

A X = [A x1, A x2, . . . , A xn] = [λ1x1, λ2x2, . . . , λnxn] , (3.6.49)

[λ_1 x_1, . . . , λ_n x_n] = [x_1, x_2, . . . , x_n] diag(λ_1, λ_2, . . . , λ_n), (3.6.50)

[λ_1 x_1, . . . , λ_n x_n] = X Λ. (3.6.51)

Combining the results implies

A X = X Λ, (3.6.52)

and finally

X^{−1} A X = Λ , with detX ≠ 0. (3.6.53)

Therefore the diagonal matrix of eigenvalues Λ could be computed by the similarity transformation of the matrix A with the eigenvector matrix X. In the opposite direction a transformation matrix T must fulfill some conditions, in order to transform a matrix A by a similarity transformation into a diagonal matrix, i.e.

T^{−1} A T = D = diag(D_ii), (3.6.54)

or

A T = T D , with T = [t_1, . . . , t_n] , (3.6.55)

and finally

A t_i = D_ii t_i. (3.6.56)

The column vectors t_i of the transformation matrix T are the n linearly independent eigenvectors of the matrix A with the associated eigenvalues λ_i = D_ii.
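The transformation into a diagonal matrix is easy to verify numerically. A minimal sketch, assuming numpy:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])        # symmetric test matrix

lam, X = np.linalg.eigh(A)        # eigenvector matrix X with det X != 0

# Similarity transformation with X diagonalizes A, equation (3.6.53).
Lam = np.linalg.inv(X) @ A @ X
assert np.allclose(Lam, np.diag(lam))

# The reverse direction reassembles A from eigenvalues and eigenvectors.
assert np.allclose(X @ np.diag(lam) @ np.linalg.inv(X), A)
```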


3.6.6 Cayley-Hamilton Theorem

The Cayley-Hamilton theorem states that an arbitrary square matrix A satisfies its own characteristic equation. If the characteristic polynomial for the matrix A is

p (λ) = det (λ1− A) , (3.6.57)

= λ^n + a_{n−1} λ^{n−1} + . . . + a_1 λ + a_0, (3.6.58)

then the matrix A solves the Cayley-Hamilton equation

p (A) = A^n + a_{n−1} A^{n−1} + . . . + a_1 A + a_0 1 = 0, (3.6.59)

and the matrix A to the power n is given by

A^n = A A . . . A. (3.6.60)

The matrix with the exponent n, written A^n, could be described by a linear combination of the matrices with the exponents n − 1 down to 0, i.e. A^{n−1} till A^0 = 1. If the matrix A is nonsingular, then negative exponents are allowed, too, e.g.

A^{−1} p (A) = 0, (3.6.61)

A^{−1} = (− 1 / a_0) (A^{n−1} + a_{n−1} A^{n−2} + . . . + a_1 1) , a_0 ≠ 0. (3.6.62)

Furthermore the power series P (A) of a matrix A, with the eigenvalues λ_σ appearing μ_σ times in the minimal polynomial, converges, if and only if the usual power series converges for all eigenvalues λ_σ of the matrix A. For example

e^A = 1 + A + (1/2!) A^2 + (1/3!) A^3 + . . . , (3.6.63)

cos (A) = 1 − (1/2!) A^2 + (1/4!) A^4 − + . . . , (3.6.64)

sin (A) = A − (1/3!) A^3 + (1/5!) A^5 − + . . . . (3.6.65)

3.6.7 Proof of the Cayley-Hamilton Theorem

A vector z ∈ R^n is represented by a combination of the linearly independent eigenvectors x_i of the matrix A, which is assumed to be similar to a diagonal matrix with n rows,

z = c1x1 + c2x2 + . . .+ cnxn, (3.6.66)

with the c_i called the evaluation coefficients. Introducing some basic vectors and matrices, in order to establish the evaluation theorem,

X = [x_1, x_2, . . . , x_n] , (3.6.67)

c = [c_1, c_2, . . . , c_n]^T , (3.6.68)

X c = z , and (3.6.69)

c = X^{−1} z. (3.6.70)


Let z0 be an arbitrary real vector to start with, and establish some iterated vectors

z1 = A z0, (3.6.71)

z_2 = A z_1 = A^2 z_0, (3.6.72)

⋮

z_n = A z_{n−1} = A^n z_0. (3.6.73)

The n + 1 vectors z_0 till z_n are linearly dependent, because any n + 1 vectors in R^n must be linearly dependent. The characteristic polynomial of the matrix A is given by

p (λ) = det (λ1− A) , (3.6.74)

= a_0 + a_1 λ + . . . + a_{n−1} λ^{n−1} + λ^n. (3.6.75)

The relation between the starting vector z_0 and the first n iterated vectors z_i is given by the following equations, the evaluation theorem

z0 = c1x1 + c2x2 + . . .+ cnxn, (3.6.76)

and the eigenvalue problem

A x_i = λ_i x_i , and p (λ_i) = det (λ_i 1 − A) = 0. (3.6.77)

The n + 1 vectors z_0 till z_n are iterated by

z0 = z0, (3.6.78)

and

z_1 = A z_0,

z_1 = c_1 A x_1 + c_2 A x_2 + . . . + c_n A x_n,

z_1 = λ_1 c_1 x_1 + λ_2 c_2 x_2 + . . . + λ_n c_n x_n, (3.6.79)

⋮

z_n = λ_1^n c_1 x_1 + λ_2^n c_2 x_2 + . . . + λ_n^n c_n x_n, (3.6.80)

and £nally summed like this, with an = 1,

z0 = z0 | · a0+ z1 = λ1c1x1 + λ2c2x2 + . . .+ λncnxn | · a1

...

+ zn = λn1c1x1 + λn

2c2x2 + . . .+ λnncnxn| · 1

(3.6.81)


leads to the result

a_0 z_0 + a_1 z_1 + . . . + z_n = (a_0 + a_1 λ_1 + . . . + a_{n−1} λ_1^{n−1} + λ_1^n) c_1 x_1
+ (a_0 + a_1 λ_2 + . . . + a_{n−1} λ_2^{n−1} + λ_2^n) c_2 x_2
⋮
+ (a_0 + a_1 λ_n + . . . + a_{n−1} λ_n^{n−1} + λ_n^n) c_n x_n. (3.6.82)

With equations (3.6.71)-(3.6.73),

(a_0 1 + a_1 A + . . . + A^n) z_0 = p (λ_1) c_1 x_1 + p (λ_2) c_2 x_2 + . . . + p (λ_n) c_n x_n,

p (A) z_0 = 0 · c_1 x_1 + 0 · c_2 x_2 + . . . + 0 · c_n x_n,

and finally

p (A) z_0 = a_0 z_0 + a_1 z_1 + . . . + z_n = 0. (3.6.83)

Inserting the iterated vectors z_k = A^k z_0, see equations (3.6.71)-(3.6.73), in equation (3.6.83) leads to,

p (A) z_0 = a_0 z_0 + a_1 A z_0 + . . . + A^n z_0 = 0, (3.6.84)

= (a_0 1 + a_1 A + . . . + A^n) z_0 = 0, (3.6.85)

and with an arbitrary vector z0 the term in brackets must equal the zero matrix,

a_0 1 + a_1 A + . . . + A^n = 0. (3.6.86)

In other words, an arbitrary square matrix A solves its own characteristic equation. If the characteristic polynomial of the matrix A is given by equation (3.6.74), then the matrix A solves the so-called Cayley-Hamilton equation,

p (A) = a_0 1 + a_1 A + . . . + A^n = 0. (3.6.87)

The polynomial p (A) of the matrix A equals the zero matrix, and the a_i are the coefficients of the characteristic polynomial of matrix A,

p (λ) = det (λ1 − A) = λ^n + a_{n−1} λ^{n−1} + . . . + a_1 λ + a_0 = 0. (3.6.88)


Chapter 4

Vector and Tensor Algebra

See for example SIMMONDS [12], HALMOS [6], MATTHEWS [11], and ABRAHAM, MARSDEN, and RATIU [1], and in German DE BOER [3], STEIN ET AL. [13], and IBEN [7].


Chapter Table of Contents

4.1 Index Notation and Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

4.1.1 The Summation Convention . . . . . . . . . . . . . . . . . . . . . . . 78

4.1.2 The Kronecker delta . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

4.1.3 The Covariant Basis and Metric Coefficients . . . . . . . . . . . . . . 80

4.1.4 The Contravariant Basis and Metric Coefficients . . . . . . . . . . . . 81

4.1.5 Raising and Lowering of an Index . . . . . . . . . . . . . . . . . . . . 82

4.1.6 Relations between Co- and Contravariant Metric Coefficients . . . . . . 83

4.1.7 Co- and Contravariant Coordinates of a Vector . . . . . . . . . . . . . 84

4.2 Products of Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

4.2.1 The Scalar Product or Inner Product of Vectors . . . . . . . . . . . . . 85

4.2.2 Definition of the Cross Product of Base Vectors . . . . . . . . . . . . . 87

4.2.3 The Permutation Symbol in Cartesian Coordinates . . . . . . . . . . . 87

4.2.4 Definition of the Scalar Triple Product of Base Vectors . . . . . . . . . 88

4.2.5 Introduction of the Determinant with the Permutation Symbol . . . . . 89

4.2.6 Cross Product and Scalar Triple Product of Arbitrary Vectors . . . . . . 90

4.2.7 The General Components of the Permutation Symbol . . . . . . . . . . 92

4.2.8 Relations between the Permutation Symbols . . . . . . . . . . . . . . . 93

4.2.9 The Dyadic Product or the Direct Product of Vectors . . . . . . . . . . 94

4.3 Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

4.3.1 Introduction of a Second Order Tensor . . . . . . . . . . . . . . . . . . 96

4.3.2 The Definition of a Second Order Tensor . . . . . . . . . . . . . . . . 97

4.3.3 The Complete Second Order Tensor . . . . . . . . . . . . . . . . . . . 99

4.4 Transformations and Products of Tensors . . . . . . . . . . . . . . . . . . . 101

4.4.1 The Transformation of Base Vectors . . . . . . . . . . . . . . . . . . . 101

4.4.2 Collection of Transformations of Basis . . . . . . . . . . . . . . . . . 103

4.4.3 The Tensor Product of Second Order Tensors . . . . . . . . . . . . . . 105

4.4.4 The Scalar Product or Inner Product of Tensors . . . . . . . . . . . . . 110

4.5 Special Tensors and Operators . . . . . . . . . . . . . . . . . . . . . . . . . 112

4.5.1 The Determinant of a Tensor in Cartesian Coordinates . . . . . . . . . 112

4.5.2 The Trace of a Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 112

4.5.3 The Volumetric and Deviator Tensor . . . . . . . . . . . . . . . . . . . 113

4.5.4 The Transpose of a Tensor . . . . . . . . . . . . . . . . . . . . . . . . 114


4.5.5 The Symmetric and Antisymmetric (Skew) Tensor . . . . . . . . . . . 115

4.5.6 The Inverse of a Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 115

4.5.7 The Orthogonal Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 116

4.5.8 The Polar Decomposition of a Tensor . . . . . . . . . . . . . . . . . . 117

4.5.9 The Physical Components of a Tensor . . . . . . . . . . . . . . . . . . 118

4.5.10 The Isotropic Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

4.6 The Principal Axes of a Tensor . . . . . . . . . . . . . . . . . . . . . . . . . 120

4.6.1 Introduction to the Problem . . . . . . . . . . . . . . . . . . . . . . . 120

4.6.2 Components in a Cartesian Basis . . . . . . . . . . . . . . . . . . . . . 122

4.6.3 Components in a General Basis . . . . . . . . . . . . . . . . . . . . . 122

4.6.4 Characteristic Polynomial and Invariants . . . . . . . . . . . . . . . . 123

4.6.5 Principal Axes and Eigenvalues of Symmetric Tensors . . . . . . . . . 124

4.6.6 Real Eigenvalues of a Symmetric Tensors . . . . . . . . . . . . . . . . 124

4.6.7 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

4.6.8 The Eigenvalue Problem in a General Basis . . . . . . . . . . . . . . . 125

4.7 Higher Order Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

4.7.1 Review on Second Order Tensor . . . . . . . . . . . . . . . . . . . . . 127

4.7.2 Introduction of a Third Order Tensor . . . . . . . . . . . . . . . . . . . 127

4.7.3 The Complete Permutation Tensor . . . . . . . . . . . . . . . . . . . . 128

4.7.4 Introduction of a Fourth Order Tensor . . . . . . . . . . . . . . . . . . 128

4.7.5 Tensors of Various Orders . . . . . . . . . . . . . . . . . . . . . . . . 129


4.1 Index Notation and Basis

4.1.1 The Summation Convention

For a product the summation convention, introduced by Einstein, holds if one index of summation is a superscript index and the other one is a subscript index. This repeated index implies that the term is to be summed from i = 1 to i = n in general,

∑_{i=1}^{n} a_i b^i = a_1 b^1 + a_2 b^2 + . . . + a_n b^n = a_i b^i, (4.1.1)

and for the special case of n = 3 like this,

∑_{j=1}^{3} v^j g_j = v^1 g_1 + v^2 g_2 + v^3 g_3 = v^j g_j , (4.1.2)

or even for two suffixes

∑_{i=1}^{3} ∑_{k=1}^{3} g_ik u^i v^k = g_11 u^1 v^1 + g_12 u^1 v^2 + g_13 u^1 v^3
+ g_21 u^2 v^1 + g_22 u^2 v^2 + g_23 u^2 v^3
+ g_31 u^3 v^1 + g_32 u^3 v^2 + g_33 u^3 v^3 = g_ik u^i v^k. (4.1.3)

The repeated index of summation is also called the dummy index. This means that changing the index i to j or k or any other symbol does not affect the value of the sum. But it is important to notice that it is not allowed to repeat an index more than twice! Another important thing to note about index notation is the use of the free indices. The free indices in every term and on both sides of an equation must match. For that reason the addition of two vectors could be written in different ways, where a, b and c are vectors in the vector space V with the dimension n, and the a_i, b_i and c_i are their components,

a + b = c ⇔ a_i + b_i = c_i ⇔ a_1 + b_1 = c_1 , a_2 + b_2 = c_2 , . . . , a_n + b_n = c_n , ∀ a, b, c ∈ V. (4.1.4)

For the special case of Cartesian coordinates another important convention holds. In this case it is allowed to sum repeated subscript or superscript indices; in general for a Cartesian coordinate system the subscript index is preferred,

∑_{i=1}^{3} x_i e_i = x_i e_i. (4.1.5)
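The summation convention maps directly onto numpy's einsum, where a repeated index letter is summed. A minimal sketch with illustrative values (numpy assumed):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])          # components a_i
b = np.array([4.0, 5.0, 6.0])          # components b^i
g = np.array([[1.0, 0.2, 0.0],         # illustrative metric coefficients g_ik
              [0.2, 1.0, 0.1],
              [0.0, 0.1, 1.0]])

# a_i b^i, equation (4.1.1): summation over the repeated index i.
assert np.isclose(np.einsum('i,i->', a, b),
                  sum(a[i] * b[i] for i in range(3)))

# g_ik u^i v^k, equation (4.1.3): double summation.
u, v = a, b
assert np.isclose(np.einsum('ik,i,k->', g, u, v),
                  sum(g[i, k] * u[i] * v[k]
                      for i in range(3) for k in range(3)))
```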


If the indices of two terms are written in brackets, it is forbidden to sum these terms,

v^(m) g_(m) ≠ ∑_{m=1}^{3} v^m g_m. (4.1.6)

4.1.2 The Kronecker delta

The Kronecker delta is defined by

δ_ij = δ^i_j = δ_i^j = δ^ij = 1 if i = j , and 0 if i ≠ j. (4.1.7)

An index i, for example in a 3-dimensional space, is substituted with another index j by multiplication with the Kronecker delta,

δ^j_i v_j = ∑_{j=1}^{3} δ^j_i v_j = δ^1_i v_1 + δ^2_i v_2 + δ^3_i v_3 = v_i, (4.1.8)

or with a summation over two indices,

δ^j_i v^i u_j = ∑_{i=1}^{3} ∑_{j=1}^{3} δ^j_i v^i u_j
= δ^1_1 v^1 u_1 + δ^2_1 v^1 u_2 + δ^3_1 v^1 u_3
+ δ^1_2 v^2 u_1 + δ^2_2 v^2 u_2 + δ^3_2 v^2 u_3
+ δ^1_3 v^3 u_1 + δ^2_3 v^3 u_2 + δ^3_3 v^3 u_3
= 1 · v^1 u_1 + 0 · v^1 u_2 + 0 · v^1 u_3
+ 0 · v^2 u_1 + 1 · v^2 u_2 + 0 · v^2 u_3
+ 0 · v^3 u_1 + 0 · v^3 u_2 + 1 · v^3 u_3

δ^j_i v^i u_j = v^1 u_1 + v^2 u_2 + v^3 u_3 = v^i u_i ⇔ v · u, (4.1.9)

or just for a Kronecker delta with two equal indices,

δ^j_j = ∑_{j=1}^{3} δ^j_j = δ^1_1 + δ^2_2 + δ^3_3 = 3, (4.1.10)

and for the scalar product of two Kronecker deltas,

δ^j_i δ^k_j = ∑_{j=1}^{3} δ^j_i δ^k_j = δ^1_i δ^k_1 + δ^2_i δ^k_2 + δ^3_i δ^k_3 = δ^k_i . (4.1.11)


For the special case of Cartesian coordinates the Kronecker delta is identified with the unit matrix or identity matrix,

[δ_ij] = [1 0 0 ; 0 1 0 ; 0 0 1]. (4.1.12)

4.1.3 The Covariant Basis and Metric Coefficients

In an n-dimensional affine vector space R^n_aff ↔ E^n = V a vector v is given by

v = v^i g_i , with v, g_i ∈ V , and i = 1, 2, 3. (4.1.13)

The vectors g_i are chosen to be linearly independent, i.e. they form a basis. If the index i is a subscript index, the definitions

gi , covariant base vectors, (4.1.14)

and

v^i , contravariant coordinates, (4.1.15)

of v with respect to the g_i, hold. The v^1 g_1, v^2 g_2, v^3 g_3 are called the components of v. The scalar product of the base vectors g_i and g_k is defined by

gi · gk = gik (4.1.16)

= gk · gi = gki (4.1.17)

gik = gki (4.1.18)

and these coefficients are called the

g_ik = g_ki , covariant metric coefficients.

The metric coefficients are symmetric, because of the commutativity of the scalar product g_i · g_k = g_k · g_i. The determinant of the matrix of the covariant metric coefficients g_ik,

g = det [gik] (4.1.19)

is nonzero, if and only if the g_i form a basis. For the Cartesian basis the metric coefficients vanish except the ones for i = k, and the coefficient matrix becomes the identity matrix or the Kronecker delta,

e_i · e_k = δ_ik = 1 if i = k , and 0 if i ≠ k. (4.1.20)


4.1.4 The Contravariant Basis and Metric Coefficients

Assume a new basis reciprocal to the covariant base vectors g_i by introducing the

g^k , contravariant base vectors,

in the same space as the covariant base vectors. These contravariant base vectors are defined by

g_i · g^k = δ^k_i = 1 if i = k , and 0 if i ≠ k, (4.1.21)

and with the covariant coordinates v_i the vector v is given by

v = v_i g^i , with v, g^i ∈ V , and i = 1, . . . , n. (4.1.22)

For example in the 2-dimensional vector space E^2:

Figure 4.1: Example of co- and contravariant base vectors in E^2.

g^1 · g_2 = 0 ⇒ g^1 ⊥ g_2, (4.1.23)

g^2 · g_1 = 0 ⇒ g^2 ⊥ g_1, (4.1.24)

g^1 · g_1 = 1, (4.1.25)

g^2 · g_2 = 1. (4.1.26)

The scalar product of the contravariant base vectors g^i,

g^i · g^k = g^ik (4.1.27)

= g^k · g^i = g^ki (4.1.28)

g^ik = g^ki (4.1.29)


Figure 4.2: Special case of a Cartesian basis, with e_1 = e^1, e_2 = e^2, e_3 = e^3.

defines the

g^ik = g^ki , contravariant metric coefficients.

For the special case of Cartesian coordinates and an orthonormal basis e_i, co- and contravariant base vectors are equal. For that reason it is not necessary to differentiate between subscript and superscript indices. From now on Cartesian base vectors and Cartesian coordinates get only subscript indices,

u = uiei , or u = ujej . (4.1.30)
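The duality g_i · g^k = δ_i^k makes the contravariant basis computable from the covariant one. A minimal numpy sketch with an illustrative skewed basis (not from the original text):

```python
import numpy as np

# Rows are the covariant base vectors g_1, g_2, g_3 (illustrative values).
g_co = np.array([[1.0, 0.0, 0.0],
                 [1.0, 1.0, 0.0],
                 [0.0, 0.0, 2.0]])

# Covariant metric coefficients g_ik = g_i . g_k, equation (4.1.16).
gik = g_co @ g_co.T
assert np.allclose(gik, gik.T)              # symmetric, g_ik = g_ki
assert np.linalg.det(gik) != 0              # g != 0, the g_i form a basis

# Contravariant base vectors from g_i . g^k = delta_i^k, equation (4.1.21):
# the rows of the inverse transpose.
g_contra = np.linalg.inv(g_co).T
assert np.allclose(g_co @ g_contra.T, np.eye(3))
```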

4.1.5 Raising and Lowering of an Index

If the vectors g^k and g_m are in the same space V, it must be possible to describe g^k by a linear combination of the g_m with some coefficients A^km,

g^k = A^km g_m . (4.1.31)

Both sides of the equation are multiplied with g^i,

g^k · g^i = A^km g_m · g^i, (4.1.32)

and with the definition of the Kronecker delta,

g^ki = A^km δ^i_m, (4.1.33)

g^ki = A^ki. (4.1.34)

The result is the following relation between co- and contravariant base vectors,

g^k = g^ki g_i. (4.1.35)


Lemma 4.1. The covariant base vectors transform with the contravariant metric coefficients into the contravariant base vectors.

g^k = g^ki g_i Raising an index with the contravariant metric coefficients.

The same argumentation for the covariant metric coefficients starts with

g_k = A_km g^m, (4.1.36)

g_k · g_i = A_km δ^m_i , (4.1.37)

g_ki = A_ki, (4.1.38)

and finally implies

g_k = g_ki g^i. (4.1.39)

As a rule of thumb:

Lemma 4.2. The contravariant base vectors transform with the covariant metric coefficients into the covariant base vectors.

g_k = g_ki g^i Lowering an index with the covariant metric coefficients.
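Raising and lowering of an index, Lemmas 4.1 and 4.2, in a minimal numpy sketch (same illustrative basis as before, with rows being base vectors; not part of the original text):

```python
import numpy as np

g_co = np.array([[1.0, 0.0, 0.0],     # rows are g_1, g_2, g_3 (illustrative)
                 [1.0, 1.0, 0.0],
                 [0.0, 0.0, 2.0]])
gik_co = g_co @ g_co.T                # covariant metric g_ik
gik_contra = np.linalg.inv(gik_co)    # contravariant metric g^ik

# Raising an index, Lemma 4.1: g^k = g^ki g_i.
g_contra = gik_contra @ g_co

# Lowering an index, Lemma 4.2, recovers the covariant basis: g_k = g_ki g^i.
assert np.allclose(gik_co @ g_contra, g_co)

# Consistency with the duality g_i . g^k = delta_i^k.
assert np.allclose(g_co @ g_contra.T, np.eye(3))
```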

4.1.6 Relations between Co- and Contravariant Metric Coefficients

Both sides of the transformation formula

g^k = g^km g_m, (4.1.40)

are multiplied with the vector g_i,

g^k · g_i = g^km g_m · g_i. (4.1.41)

Comparing this with the definitions of the Kronecker delta (4.1.7) and of the metric coefficients (4.1.16) and (4.1.27) leads to

δ^k_i = g^km g_mi. (4.1.42)

Like in the expression A^{−1} A = 1, co- and contravariant metric coefficients are inverse to each other. In matrix notation equation (4.1.42) denotes

1 = [g^km] [g_mi] , (4.1.43)

[g^km] = [g_mi]^{−1} , (4.1.44)

and for the determinants

det [g^ik] = 1 / det [g_ik]. (4.1.45)

With the definition of the determinant, equation (4.1.19), the determinant of the covariant metric coefficients gives

det [g_ik] = g, (4.1.46)

and the determinant of the contravariant metric coefficients

det [g^ik] = 1 / g. (4.1.47)


4.1.7 Co- and Contravariant Coordinates of a Vector

The vector v ∈ V is represented by the two expressions v = v^i g_i and v = v_k g^k. Comparing these expressions, with respect to equation (4.1.39), g_i = g_ik g^k, leads to

v^i g_ik g^k = v_k g^k ⇒ v_k = g_ik v^i. (4.1.48)

After changing the indices of the symmetric covariant metric coefficients, see equation (4.1.18), the transformation from contravariant coordinates to covariant coordinates reads

v_k = g_ki v^i. (4.1.49)

In the same way comparing the contravariant representation, with g^k = g^ik g_i from the equations (4.1.35) and (4.1.18), gives

v^i g_i = v_k g^ki g_i ⇒ v^i = g^ki v_k, (4.1.50)

and after changing the indices

v^i = g^ik v_k. (4.1.51)

Lemma 4.3. The covariant coordinates transform like the covariant base vectors, and the contravariant coordinates transform like the contravariant base vectors. In index notation the transformation for the covariant coordinates and the covariant base vectors looks like this

v_i = g_ik v^k , g_i = g_ik g^k, (4.1.52)

and for the contravariant coordinates and the contravariant base vectors

v^k = g^ki v_i , g^k = g^ki g_i. (4.1.53)


4.2 Products of Vectors

4.2.1 The Scalar Product or Inner Product of Vectors

The scalar product of two vectors u and v ∈ V is denoted by

α = ⟨u | v⟩ ≡ u · v , α ∈ R, (4.2.1)

and also called the inner product or dot product of vectors. The vectors u and v are represented by

u = u^i g_i and v = v^i g_i, (4.2.2)

with respect to the covariant base vectors g_i ∈ V and i = 1, . . . , n, or by

u = u_i g^i and v = v_i g^i, (4.2.3)

w.r.t. the contravariant base vectors g^j and j = 1, . . . , n. By combining these representations the scalar product of two vectors could be written in four variations,

α = u · v = u^i v^j g_i · g_j = u^i v^j g_ij = u^i v_i, (4.2.4)

= u_i v_j g^i · g^j = u_i v_j g^ij = u_i v^i, (4.2.5)

= u^i v_j g_i · g^j = u^i v_j δ_i^j = u^i v_i, (4.2.6)

= u_i v^j g^i · g_j = u_i v^j δ^i_j = u_i v^i. (4.2.7)
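That the four variations (4.2.4)-(4.2.7) give the same scalar α can be checked numerically. A minimal sketch, assuming numpy and an illustrative basis:

```python
import numpy as np

g_co = np.array([[1.0, 0.0, 0.0],       # rows g_1, g_2, g_3 (illustrative)
                 [1.0, 1.0, 0.0],
                 [0.0, 0.0, 2.0]])
gik = g_co @ g_co.T                     # covariant metric g_ik
gik_inv = np.linalg.inv(gik)            # contravariant metric g^ik

u_up = np.array([1.0, 2.0, 3.0])        # contravariant coordinates u^i
v_up = np.array([2.0, 0.0, 1.0])        # contravariant coordinates v^j
u_dn = gik @ u_up                       # covariant coordinates u_i = g_ik u^k
v_dn = gik @ v_up

alphas = [np.einsum('i,j,ij->', u_up, v_up, gik),      # (4.2.4)
          np.einsum('i,j,ij->', u_dn, v_dn, gik_inv),  # (4.2.5)
          u_up @ v_dn,                                 # (4.2.6)
          u_dn @ v_up]                                 # (4.2.7)
assert np.allclose(alphas, alphas[0])
```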

The Euclidean norm assigns a length to the elements of a vector space. The absolute values of the vectors u and v are represented by

|u| = ‖u‖_2 = √(u · u), (4.2.8)

|v| = ‖v‖_2 = √(v · v). (4.2.9)

The scalar product or inner product of two vectors in V is a bilinear mapping from two vectors to a scalar α ∈ R.

Theorem 4.1. In the 3-dimensional Euclidean vector space E^3 one important application of the scalar product is the definition of the work as the force times the distance moved in the direction of the force,

Work = Force in direction of the distance ∗ Distance

or α = f · d . (4.2.10)

Theorem 4.2. The scalar product in the 3-dimensional Euclidean vector space E^3 is written u · v and is defined as the product of the absolute values of the two vectors and the cosine of the angle between them,

α = u · v := |u| |v| cosϕ. (4.2.11)


Figure 4.3: Projection of a vector v (≡ f) on the direction of the vector u (≡ d), with the projected length |v| cosϕ and the unit vector e_u.

The quantity |v| cosϕ represents in the 3-dimensional Euclidean vector space E^3 the projection of the vector v in the direction of vector u. The unit vector in direction of vector u is given by

e_u = u / |u| . (4.2.12)

Therefore the cosine of the angle is given by

cosϕ = (u · v) / (|u| |v|) . (4.2.13)

The absolute value of a vector is its Euclidean norm and is computed by

|u| = √(u · u) , and |v| = √(v · v). (4.2.14)

This formula rewritten with the base vectors g_i and g^i simplifies in index notation to

|u| = √(u^i g_i · u_k g^k) = √(u^i u_k δ^k_i) = √(u^i u_i), (4.2.15)

|v| = √(v^i v_i). (4.2.16)

The cosine between two vectors in the 3-dimensional Euclidean vector space E^3 is defined by

cosϕ = (u^i v_i) / (√(u^j u_j) √(v^k v_k)) = (u_i v^i) / (√(u_j u^j) √(v_k v^k)) . (4.2.17)

For example, the scalar product of two vectors w.r.t. the Cartesian basis g_i = e_i = e^i in the


3-dimensional Euclidean vector space E3,

[6, 3, 7]^T · [1, 2, 3]^T = 6 · 1 + 3 · 2 + 7 · 3 = 33,

= u^i v^j g_ij = u^1 v^j g_1j + u^2 v^j g_2j + u^3 v^j g_3j ,

= 6 · 1 · 1 + 6 · 2 · 0 + 6 · 3 · 0
+ 3 · 1 · 0 + 3 · 2 · 1 + 3 · 3 · 0
+ 7 · 1 · 0 + 7 · 2 · 0 + 7 · 3 · 1 = 33.
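The same worked example in a short numpy check (numpy assumed):

```python
import numpy as np

u = np.array([6.0, 3.0, 7.0])
v = np.array([1.0, 2.0, 3.0])

# In the Cartesian basis g_ij = delta_ij, so the scalar product is u_i v_i.
alpha = u @ v
assert alpha == 33.0                    # 6*1 + 3*2 + 7*3 = 33

# The cosine of the enclosed angle, equation (4.2.17).
cos_phi = alpha / (np.linalg.norm(u) * np.linalg.norm(v))
assert 0.0 < cos_phi <= 1.0
```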

4.2.2 Definition of the Cross Product of Base Vectors

The cross product, also called the vector product or the outer product, is only defined in the 3-dimensional Euclidean vector space E^3. The cross product of two arbitrary, linearly independent covariant base vectors g_i, g_j ∈ E^3 yields another vector g^k ∈ E^3 and is introduced by

g_i × g_j = α g^k, (4.2.18)

with the conditions

i ≠ j ≠ k,

i, j, k = 1, 2, 3 , or another even permutation of i, j, k.

4.2.3 The Permutation Symbol in Cartesian Coordinates

The cross products of the Cartesian base vectors e_i in the 3-dimensional Euclidean vector space E^3 are given by

e_1 × e_2 = e_3 = e^3, (4.2.19)

e_2 × e_3 = e_1 = e^1, (4.2.20)

e_3 × e_1 = e_2 = e^2, (4.2.21)

e_2 × e_1 = −e_3 = −e^3, (4.2.22)

e_3 × e_2 = −e_1 = −e^1, (4.2.23)

e_1 × e_3 = −e_2 = −e^2. (4.2.24)

The Cartesian components of the permutation tensor, or just the permutation symbols, are defined by

eijk =

+1 if (i, j, k) is an even permutation of (1, 2, 3),

−1 if (i, j, k) is an odd permutation of (1, 2, 3),

0 if two or more indices are equal.

(4.2.25)


Thus, returning to equations (4.2.19)-(4.2.24), the cross products of the Cartesian base vectors could be described by the permutation symbols like this,

ei × ej = eijkek. (4.2.26)

For example

e_1 × e_2 = e_121 · e_1 + e_122 · e_2 + e_123 · e_3 = 0 · e_1 + 0 · e_2 + 1 · e_3 = e_3,

e_1 × e_3 = e_131 · e_1 + e_132 · e_2 + e_133 · e_3 = 0 · e_1 + (−1) · e_2 + 0 · e_3 = −e_2.
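The definition (4.2.25) and the relation e_i × e_j = e_ijk e_k, (4.2.26), can be sketched with numpy; 0-based indices replace the 1-based index triples of the text:

```python
import numpy as np

# Permutation symbol e_ijk, equation (4.2.25), with 0-based indices.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0     # even permutations
    eps[i, k, j] = -1.0    # odd permutations

# Check e_i x e_j = e_ijk e_k for all Cartesian base vectors.
e = np.eye(3)              # rows are e_1, e_2, e_3
for i in range(3):
    for j in range(3):
        assert np.allclose(np.cross(e[i], e[j]),
                           np.einsum('k,kl->l', eps[i, j], e))
```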

4.2.4 Definition of the Scalar Triple Product of Base Vectors

Starting again with the cross product of base vectors, see equation (4.2.18),

g_i × g_j = α g^k, (4.2.27)

i ≠ j ≠ k,

i, j, k = 1, 2, 3 , or another even permutation of i, j, k.

The g^k are the contravariant base vectors, and the scalar quantity α is computed by multiplication of equation (4.2.18) with the covariant base vector g_k,

(g_i × g_j) · g_k = α g^k · g_k, (4.2.28)

[g_1, g_2, g_3] = α δ^k_k = 3α. (4.2.29)

This result is the so called scalar triple product of the base vectors

α = [g1,g2,g3] . (4.2.30)

This scalar triple product α of the base vectors g_i for i = 1, 2, 3 represents the volume of the parallelepiped formed by the three vectors. Comparing equations (4.2.28) and (4.2.29) implies for the contravariant base vectors

g^k = (g_i × g_j) / [g_1, g_2, g_3], (4.2.31)

and for covariant base vectors

g_k = (g^i × g^j) / [g^1, g^2, g^3]. (4.2.32)

Furthermore the product of the two scalar triple products of base vectors is given by

[g_1, g_2, g_3] · [g^1, g^2, g^3] = α · (1/α) = 1. (4.2.33)


4.2.5 Introduction of the Determinant with the Permutation Symbol

The scalar quantity α in the section above can also be described by the square root of the determinant of the covariant metric coefficients,

α = (det g_ij)^(1/2) = √g. (4.2.34)

The determinant of a 3 × 3 matrix can be represented by the permutation symbols e_ijk,

α = det [a_mn] = | a_11  a_12  a_13 |
                 | a_21  a_22  a_23 | = a_1i · a_2j · a_3k · e_ijk. (4.2.35)
                 | a_31  a_32  a_33 |

Computing the determinant by expanding about the first row implies

α = a_11 · | a_22  a_23 | − a_12 · | a_21  a_23 | + a_13 · | a_21  a_22 |
           | a_32  a_33 |          | a_31  a_33 |          | a_31  a_32 |

and finally

α = a_11 · a_22 · a_33 − a_11 · a_32 · a_23
  − a_12 · a_21 · a_33 + a_12 · a_31 · a_23
  + a_13 · a_21 · a_32 − a_13 · a_31 · a_22. (4.2.36)

The alternative way with the permutation symbol is given by

α = a_11 · a_22 · a_33 · e_123 + a_11 · a_23 · a_32 · e_132
  + a_12 · a_23 · a_31 · e_231 + a_12 · a_21 · a_33 · e_213
  + a_13 · a_21 · a_32 · e_312 + a_13 · a_22 · a_31 · e_321,

and after inserting the values of the various permutation symbols,

α = a_11 · a_22 · a_33 · 1 + a_11 · a_23 · a_32 · (−1)
  + a_12 · a_23 · a_31 · 1 + a_12 · a_21 · a_33 · (−1)
  + a_13 · a_21 · a_32 · 1 + a_13 · a_22 · a_31 · (−1),

and finally the result is equal to the first way of computing the determinant, see equation (4.2.36),

α = a_11 · a_22 · a_33 − a_11 · a_23 · a_32
  + a_12 · a_23 · a_31 − a_12 · a_21 · a_33
  + a_13 · a_21 · a_32 − a_13 · a_22 · a_31. (4.2.37)

Equation (4.2.35) can be written with contravariant elements, too,

α* = a^1i · a^2j · a^3k · e_ijk. (4.2.38)
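The permutation-symbol formula (4.2.35) for a determinant can be verified directly. A sketch (NumPy and the `parity` helper are assumptions, not from the text); the matrix entries are example values:

```python
import numpy as np
from itertools import permutations

def parity(p):
    # Sign of a permutation of (0, 1, 2, ...) via cycle sort (hypothetical helper).
    sign, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

a = np.array([[2.0, 1.0, 0.5],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])

# alpha = a_1i a_2j a_3k e_ijk, eq. (4.2.35): sum over all index permutations.
alpha = sum(parity((i, j, k)) * a[0, i] * a[1, j] * a[2, k]
            for i, j, k in permutations(range(3)))
assert np.isclose(alpha, np.linalg.det(a))
```

Only the six permutations of (1, 2, 3) contribute, exactly the six terms of equation (4.2.37).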


The matrix of the covariant metric coefficients is the inverse of the matrix of the contravariant metric coefficients and vice versa,

g_ij · g^jk = δ_i^k, (4.2.39)

det [g_ij · g^jk] = det ([g_ij] [g^jk]) = det [δ_i^k] = 1. (4.2.40)

The product rule of determinants,

det ([g_ij] [g^jk]) = det [g_ij] · det [g^jk], (4.2.41)

simplifies for this special case to

1 = g · (1/g), (4.2.42)

and finally

det [g_ij] = g , and det [g^ij] = 1/g. (4.2.43)

For this reason the determinants of the matrices of the metric coefficients are represented with the permutation symbols, see equations (4.2.35) and (4.2.43), like this

g = g_1i · g_2j · g_3k · e_ijk = det [g_ij], (4.2.44)

1/g = g^1i · g^2j · g^3k · e_ijk = det [g^ij]. (4.2.45)

4.2.6 Cross Product and Scalar Triple Product of Arbitrary Vectors

The vectors a up to f are written in the 3-dimensional Euclidean vector space E^3 with the base vectors g_i and g^i,

a = a^i g_i , d = d_i g^i, (4.2.46)

b = b^i g_i , e = e_i g^i, (4.2.47)

c = c^i g_i , f = f_i g^i. (4.2.48)

The cross product (4.2.26) is rewritten with the formulae for the scalar triple product (4.2.28)-(4.2.30),

a × b = a^i g_i × b^j g_j = a^i b^j e_ijk [g_1, g_2, g_3] g^k,

a × b = [g_1, g_2, g_3] | a^1  a^2  a^3 |   = [g_1, g_2, g_3] | g^1  g^2  g^3 |
                        | b^1  b^2  b^3 |                     | a^1  a^2  a^3 |
                        | g^1  g^2  g^3 |                     | b^1  b^2  b^3 | . (4.2.49)

Two scalar triple products are defined by

[a, b, c] = (a × b) · c, (4.2.50)


and

[d, e, f] = (d × e) · f. (4.2.51)

The first one of these scalar triple products is given by

(a × b) · c = [g_1, g_2, g_3] a^i b^j e_ijk g^k · c^r g_r = [g_1, g_2, g_3] a^i b^j e_ijk δ^k_r c^r

            = [g_1, g_2, g_3] a^i b^j c^k e_ijk

            = [g_1, g_2, g_3] | a^1  a^2  a^3 |
                              | b^1  b^2  b^3 |
                              | c^1  c^2  c^3 |

            = α | a^1  b^1  c^1 |
                | a^2  b^2  c^2 |
                | a^3  b^3  c^3 | . (4.2.52)

The same formula written with covariant components and contravariant base vectors,

(a × b) · c = [g^1, g^2, g^3] a_i b_j c_k e^ijk = (1/α) a_i b_j c_k e^ijk

            = [g^1, g^2, g^3] | a_1  a_2  a_3 |
                              | b_1  b_2  b_3 |
                              | c_1  c_2  c_3 |

            = (1/α) | a_1  b_1  c_1 |
                    | a_2  b_2  c_2 |
                    | a_3  b_3  c_3 | . (4.2.53)

The product

P = [a, b, c] [d, e, f] (4.2.54)

is therefore, with the equations (4.2.52), (4.2.53) and (4.2.46) up to (4.2.48),

P = [g_1, g_2, g_3] [g^1, g^2, g^3] | a^1  a^2  a^3 | | d_1  e_1  f_1 |
                                    | b^1  b^2  b^3 | | d_2  e_2  f_2 |
                                    | c^1  c^2  c^3 | | d_3  e_3  f_3 |

  = α (1/α) |A| |B| = |A| |B|. (4.2.55)

The element (1, 1) of the product matrix A B, with respect to the product rule of determinants det A det B = det (A B), is given by

a^1 d_1 + a^2 d_2 + a^3 d_3 = a^i g_i · d_j g^j = a^i d_j δ_i^j = a^i d_i = a · d. (4.2.56)


Comparing this with the product P leads to

P = [a, b, c] [d, e, f] = | a · d  a · e  a · f |
                          | b · d  b · e  b · f |
                          | c · d  c · e  c · f | , (4.2.57)

and for the scalar triple product [a, b, c] to the power of two,

[a, b, c]^2 = | a · a  a · b  a · c |
              | b · a  b · b  b · c |
              | c · a  c · b  c · c | . (4.2.58)

4.2.7 The General Components of the Permutation Symbol

The square of a scalar triple product of the covariant base vectors, like in equation (4.2.58),

[g_1, g_2, g_3]^2 = | g_1 · g_1  g_1 · g_2  g_1 · g_3 |
                    | g_2 · g_1  g_2 · g_2  g_2 · g_3 | = |g_ij| = det [g_ij] = g, (4.2.59)
                    | g_3 · g_1  g_3 · g_2  g_3 · g_3 |

reduces to

[g_1, g_2, g_3] = √g. (4.2.60)

The same relation for the scalar triple product of the contravariant base vectors leads to

[g^1, g^2, g^3]^2 = det [g^ij] = 1/g, (4.2.61)

[g^1, g^2, g^3] = 1/√g. (4.2.62)

Equation (4.2.60) could be rewritten analogous to equation (4.2.26),

g_i × g_j = [g_1, g_2, g_3] e_ijk g^k = √g · e_ijk g^k,

g_i × g_j = ε_ijk g^k, (4.2.63)

and for the corresponding contravariant base vectors,

g^i × g^j = [g^1, g^2, g^3] e^ijk g_k = (1/√g) · e^ijk g_k,

g^i × g^j = ε^ijk g_k. (4.2.64)

For example the general permutation symbol could be given by the covariant ε symbol,

ε_ijk =  +√g if (i, j, k) is an even permutation of (1, 2, 3),
         −√g if (i, j, k) is an odd permutation of (1, 2, 3),
          0  if two or more indices are equal, (4.2.65)


or by the contravariant ε symbol,

ε^ijk =  +1/√g if (i, j, k) is an even permutation of (1, 2, 3),
         −1/√g if (i, j, k) is an odd permutation of (1, 2, 3),
          0    if two or more indices are equal. (4.2.66)

4.2.8 Relations between the Permutation Symbols

Comparing equations (4.2.25) with (4.2.65) and (4.2.66) shows the relations

ε_ijk = √g e_ijk , and e_ijk = (1/√g) ε_ijk, (4.2.67)

and

ε^ijk = (1/√g) e^ijk , and e^ijk = √g ε^ijk. (4.2.68)

The comparison of equations (4.2.44) and (4.2.35) gives

g = |g_ij| = g_1i g_2j g_3k e^ijk, (4.2.69)

g · e_lmn = g_li g_mj g_nk e^ijk, (4.2.70)

g · (1/√g) ε_lmn = g_li g_mj g_nk √g ε^ijk, (4.2.71)

ε_lmn = g_li g_mj g_nk ε^ijk, (4.2.72)

and

ε^lmn = g^li g^mj g^nk ε_ijk. (4.2.73)

The covariant ε symbols are converted into the contravariant ε symbols with the contravariant metric coefficients and vice versa. This transformation is the same as the one for tensors. The conclusion is that the ε symbols are tensors! The relation between the e and ε symbols is written as follows,

e_ijk e^lmn = (1/√g) ε_ijk · √g ε^lmn, (4.2.74)

e_ijk e^lmn = ε_ijk ε^lmn. (4.2.75)

The relation between the permutation symbols and the Kronecker delta is given by

| δ_i^l  δ_i^m  δ_i^n |
| δ_j^l  δ_j^m  δ_j^n | = ε_ijk ε^lmn = e_ijk e^lmn. (4.2.76)
| δ_k^l  δ_k^m  δ_k^n |


After expanding the determinant and setting k = n,

ε_ijk ε^lmk = δ_i^l δ_j^m − δ_i^m δ_j^l, (4.2.77)

and if i = l and j = m,

ε_ijk ε^ijn = 2 δ_k^n, (4.2.78)

and if all three indices are equal,

ε_ijk ε^ijk = e_ijk e^ijk = 2 δ_k^k = 6. (4.2.79)
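The contraction identities (4.2.77)-(4.2.79) can be verified by brute force in Cartesian coordinates, where the co- and contravariant e symbols coincide. A NumPy sketch (not part of the text):

```python
import numpy as np

# Permutation symbol e_ijk in Cartesian coordinates.
e = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    e[i, j, k], e[i, k, j] = 1.0, -1.0
d = np.eye(3)  # Kronecker delta

# e_ijk e_lmk = d_il d_jm - d_im d_jl, eq. (4.2.77)
lhs = np.einsum('ijk,lmk->ijlm', e, e)
rhs = np.einsum('il,jm->ijlm', d, d) - np.einsum('im,jl->ijlm', d, d)
assert np.allclose(lhs, rhs)

# e_ijk e_ijn = 2 d_kn, eq. (4.2.78), and e_ijk e_ijk = 6, eq. (4.2.79)
assert np.allclose(np.einsum('ijk,ijn->kn', e, e), 2 * d)
assert np.isclose(np.einsum('ijk,ijk->', e, e), 6.0)
```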

4.2.9 The Dyadic Product or the Direct Product of Vectors

The dyadic product of two vectors a and b ∈ V defines a so-called simple second order tensor with rank = 1 in the tensor space V ⊗ V* over the vector space V by

T* = a ⊗ b , and T* ∈ V ⊗ V*. (4.2.80)

This tensor describes a linear mapping of the vector v ∈ V with the scalar product by

T* · v = (a ⊗ b) · v = a (b · v). (4.2.81)

The dyadic product a ⊗ b could be represented by a matrix, for example with a, b ∈ R^3 and T* ∈ R^3 ⊗ R^3,

T* = a b^T = [ a_1 ]       [ b_1  b_2  b_3 ]_(1×3) (4.2.82)
             [ a_2 ]
             [ a_3 ]_(3×1)

   = [ a_1 b_1  a_1 b_2  a_1 b_3 ]
     [ a_2 b_1  a_2 b_2  a_2 b_3 ]
     [ a_3 b_1  a_3 b_2  a_3 b_3 ]_(3×3) . (4.2.83)

The rank of this mapping is rank = 1, i.e. det T*_(3×3) = 0 and det T*_i,(2×2) = 0 for i = 1, 2, 3. The mapping T* v denotes in matrix notation

[ a_1 b_1  a_1 b_2  a_1 b_3 ] [ v_1 ]   [ a_1 Σ_i b_i v_i ]
[ a_2 b_1  a_2 b_2  a_2 b_3 ] [ v_2 ] = [ a_2 Σ_i b_i v_i ] , (4.2.84)
[ a_3 b_1  a_3 b_2  a_3 b_3 ] [ v_3 ]   [ a_3 Σ_i b_i v_i ]

or

(a b^T) v = T* v = a (b^T v). (4.2.85)
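The matrix form (4.2.82)-(4.2.85) of the dyad is exactly the outer product. A NumPy sketch with assumed example vectors (not part of the text):

```python
import numpy as np

# Assumed example vectors for the dyad T* = a (x) b and a test vector v.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, -1.0, 2.0])
v = np.array([2.0, 0.0, 1.0])

T = np.outer(a, b)                      # a b^T, a 3x3 matrix, eq. (4.2.83)

# A simple dyad has rank 1 and vanishing determinant.
assert np.linalg.matrix_rank(T) == 1
assert np.isclose(np.linalg.det(T), 0.0)

# (a (x) b) . v = a (b . v), eqs. (4.2.81) and (4.2.85)
assert np.allclose(T @ v, a * (b @ v))
```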


The proof that the dyadic product is a tensor starts with the assumption, see equations (T4) and (T5),

(a ⊗ b) (α u + β v) = α (a ⊗ b) · u + β (a ⊗ b) · v. (4.2.86)

Equation (4.2.86) rewritten with the mapping T* is given by

T* (α u + β v) = α T* (u) + β T* (v). (4.2.87)

With the definition of the mapping T* it follows that

T* (α u + β v) = (a ⊗ b) (α u + β v) = a [b · (α u + β v)]
               = a [α b · u + β b · v]
               = α [a (b · u)] + β [a (b · v)]
               = α T* (u) + β T* (v).

The vectors a and b are represented by the base vectors g_i (covariant) and g^j (contravariant),

a = a^i g_i , b = b_j g^j , and g_i, g^j ∈ V. (4.2.88)

The dyadic product, i.e. the mapping T*, is defined by

T* = a ⊗ b = a^i b_j g_i ⊗ g^j = T*^i_j g_i ⊗ g^j, (4.2.89)

with g_i ⊗ g^j the dyadic product of the base vectors, and with the conditions

det T*^i_j = 0 , r(T*^i_j) = 1 , and rank = 1. (4.2.90)

The mapping T* maps the vector v = v^k g_k onto the vector w*,

w* = T* · v = T*^i_j (g_i ⊗ g^j) · v^k g_k
   = T*^i_j v^k g_i (g^j · g_k)
   = T*^i_j v^k g_i δ^j_k,

w* = T*^i_j v^j g_i = w*^i g_i. (4.2.91)


4.3 Tensors

4.3.1 Introduction of a Second Order Tensor

With the definition of linear mappings f in chapter (2.8), the definition of vector spaces of linear mappings L over the vector space V, and the definition of dyads it is possible to define the second order tensor. The original definition of a tensor was the description of a stress state at a point and time in a continuum, e.g. a fluid or a solid, given by Cauchy. The stress tensor or Cauchy stress tensor T at a point P assigns a stress vector σ(P) to an arbitrarily oriented section, given by a normal vector at the point P.

Figure 4.4: Resulting stress vector.

The resulting stress vector t(n)(P) at an arbitrarily oriented section, described by a normal vector n at a point P, in a rigid body loaded by an equilibrium system of external forces could have an arbitrary direction! Because the equilibrium conditions hold only for forces and not for stresses, an equilibrium system of forces is established at an infinitesimal tetrahedron. This tetrahedron has four infinitesimal section surfaces, too. If the section surface is rotated, then the element of reference (the vector of direction) is transformed, and the direction of stress is transformed, too. Comparing this with the transformation of stresses yields products of cosines, which lead to quantities with two indices. The stress state at a point cannot be described by one or two vectors, but only by a combination of three vectors t(1), t(2), and t(3). The stress tensor T for the equilibrium conditions at an infinitesimal tetrahedron, given by the three stress vectors t(1), t(2), and t(3), assigns to every direction a unique resulting stress vector t(n).

4.3.1.0.20 Remarks

• The scalar product F · n = F_n, with |n| = 1, projects the component F cos ϕ onto the direction of n; the result is a scalar quantity.

• The cross product r × F = M_A establishes a vector of moment at a point A in the normal direction of the plane spanned by r_A and F, and perpendicular to F, too.

• The dyadic product a ⊗ b = T assigns a second order tensor T to a pair of vectors a and b.


Figure 4.5: Resulting stress vector at the infinitesimal tetrahedron, with the resulting stress vector dF(n) = t(n) dA(n) on the section surface dA(n) with normal n, and the stress vectors dF(i) = t(i) dA(i) on the section surfaces dA(i) with normals n(i), i = 1, 2, 3.

• The spaces R^3 and E^3 are homeomorphic, i.e. for all vectors x ∈ R^3 and v ∈ E^3 the same rules and axioms hold. For this reason it is sufficient to have a look at the vector space R^3.

• Also the spaces R^n, E^n and V are homeomorphic, but for n ≠ 3 the usual cross product does not hold. For this reason the following definitions are made for the general vector space V, but most of the examples are given in the 3-dimensional Euclidean vector space E^3. In this space the cross product holds, and this space is the visual space.

4.3.2 The Definition of a Second Order Tensor

A linear mapping f = T of an (Euclidean) vector space V into itself or into its dual space V* is called a second order tensor. The action of a linear mapping T on a vector v is written like a "dot" product or multiplication, and in most cases the "dot" is not written any more,

T · v = Tv. (4.3.1)

The definitions and rules for linear spaces in chapter (2.4), i.e. the axioms of vector spaces (S1) up to (S8), are rewritten for tensors T ∈ V ⊗ V.


4.3.2.0.21 Linearity for the Vectors.

1. Axiom of Second Order Tensors. The tensor (the linear mapping) T ∈ V ⊗ V maps the vector u ∈ V onto the same space V,

T (u) = T · u = Tu = v ; ∀ u ∈ V ; v ∈ V. (T1)

This mapping is the same as the mapping of a vector with a quadratic matrix in a space with Cartesian coordinates.

2. Axiom of Second Order Tensors. The action of the zero tensor 0 on any vector u maps the vector onto the zero vector,

0 · u = 0u = 0 ; u,0 ∈ V. (T2)

3 . Axiom of Second Order Tensors. The unit tensor 1 sends any vector into itself,

1 · u = 1u = u ; u ∈ V,1 ∈ V⊗ V. (T3)

4. Axiom of Second Order Tensors. The multiplication by a tensor is distributive with respect to vector addition,

T (u+ v) = Tu+Tv ; ∀u,v ∈ V. (T4)

5. Axiom of Second Order Tensors. If the vector u is multiplied by a scalar, then the linear mapping is denoted by

T (αu) = αTu ; ∀u ∈ V, α ∈ R . (T5)

4.3.2.0.22 Linearity for the Tensors.

6. Axiom of Second Order Tensors. The multiplication with the sum of tensors of the same space is distributive,

(T1 +T2) · u = T1 · u+T2 · u ; ∀u ∈ V,T1,T2 ∈ V⊗ V. (T6)

7. Axiom of Second Order Tensors. The multiplication of a tensor by a scalar is linear, like in equation (T5) the multiplication of a vector by a scalar,

(αT) · u = T · (αu) ; ∀u ∈ V, α ∈ R . (T7)

8 . Axiom of Second Order Tensors. The action of tensors on a vector is associative,

T1 (T2 · u) = (T1T2) · u = T · u, (T8)

but, like in matrix calculus, not commutative, i.e. T_1 T_2 ≠ T_2 T_1. The "product" T_1 T_2 of the tensors is also called a "composition" of the linear mappings T_1, T_2.

9. Axiom of Second Order Tensors. The inverse of a tensor T^(-1) is defined by

v = T · u ⇔ u = T^(-1) v, (T9)

and it exists if and only if T is nonsingular, i.e. det T ≠ 0.

10. Axiom of Second Order Tensors. The transpose of the transpose is the tensor itself,

(T^T)^T = T. (T10)


4.3.3 The Complete Second Order Tensor

The simple second order tensor T ∈ V ⊗ V is defined as a linear combination of n dyads, and its rank is n,

T = Σ_(i=1..n) T*_i = Σ_(i=1..n) a_i ⊗ b_i, (4.3.2)

T = a_i ⊗ b^i = a^i ⊗ b_i, (4.3.3)

and det T ≠ 0 , if the vectors a_i and b_i are linearly independent.

If the vectors a_i and b_i are represented with the base vectors g_j and g^l ∈ V like this,

a_i = a_i^j g_j ; b_i = b_il g^l, (4.3.4)

then the second order tensor is given by

T = a_i^j b_il g_j ⊗ g^l, (4.3.5)

and finally the complete second order tensor is given by

T = T^j_l g_j ⊗ g^l , the mixed formulation of a second order tensor. (4.3.6)

The dyadic product of the base vectors includes one co- and one contravariant base vector. The mixed components T^j_l of the tensor in mixed formulation are written with one co- and one contravariant index, too, and

det T^j_l ≠ 0. (4.3.7)

If the contravariant base vector is transformed with the metric coefficients,

g^l = g^lk g_k, (4.3.8)

the tensor T changes like

T = T^j_l g_j ⊗ g^lk g_k, (4.3.9)

T = T^j_l g^lk g_j ⊗ g_k, (4.3.10)

and the result is

T = T^jk g_j ⊗ g_k , the tensor with covariant base vectors and contravariant coordinates. (4.3.11)

The transformation of a covariant base vector into a contravariant base vector,

g_j = g_jk g^k, (4.3.12)


implies

T = T^j_l g_jk g^k ⊗ g^l, (4.3.13)

T = T_kl g^k ⊗ g^l , the tensor with contravariant base vectors and covariant coordinates. (4.3.14)

The action of the tensor T ∈ V ⊗ V on the vector

v = v^k g_k ∈ V, (4.3.15)

creates the vector w ∈ V, which is computed by

w = T · v = T^ij (g_i ⊗ g_j) · v^k g_k (4.3.16)
  = T^ij v^k g_i g_jk = T^ij v_j g_i,

w = w^i g_i , and w^i = T^ij v_j. (4.3.17)

In the same way the other representation of the vector w is given by

w = T · v = T_i^j (g^i ⊗ g_j) · v_k g^k (4.3.18)
  = T_i^j v_k g^i δ_j^k = T_i^j v_j g^i,

w = w_i g^i , and w_i = T_i^j v_j. (4.3.19)


4.4 Transformations and Products of Tensors

4.4.1 The Transformation of Base Vectors

A vector v ∈ V is given in the covariant basis g_i, i = 1, . . . , n, and afterwards in another covariant basis ḡ_i, i = 1, . . . , n. For example this case describes the situation of a solid body with different configurations of deformation and a tangent basis, which moves along the coordinate curves. The same definitions are made with a contravariant basis g^i and a transformed contravariant basis ḡ^i. Then the representations of the vector v are

v = v^i g_i = v̄^i ḡ_i = v_i g^i = v̄_i ḡ^i. (4.4.1)

The relation between the two covariant base vectors g_i and ḡ_i could be written with a second order tensor like

ḡ_i = A · g_i. (4.4.2)

If this linear mapping exists, then the coefficients of the transformation tensor A are given by

ḡ_i = 1 ḡ_i = (g_k ⊗ g^k) ḡ_i (4.4.3)
    = (g^k · ḡ_i) g_k = A^k_i g_k,

ḡ_i = A^k_i g_k , and A^k_i = g^k · ḡ_i. (4.4.4)

The complete tensor A in the mixed formulation is then defined by

A = (g^k · ḡ_i) g_k ⊗ g^i = A^k_i g_k ⊗ g^i. (4.4.5)

Inserting equation (4.4.5) in (4.4.2), in order to get the transformation (4.4.4) again,

ḡ_m = (A^k_i g_k ⊗ g^i) g_m = A^k_i δ^i_m g_k = A^k_m g_k. (4.4.6)

If the inverse transformation of equation (4.4.2) exists, then it should be denoted by

g_i = Ā ḡ_i. (4.4.7)

This existence results from the linear independence of the base vectors. The "retransformation" tensor Ā is again defined by the multiplication with the unit tensor 1,

g_i = 1 g_i = (ḡ_k ⊗ ḡ^k) g_i (4.4.8)
    = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k,

g_i = Ā^k_i ḡ_k , and Ā^k_i = ḡ^k · g_i. (4.4.9)

The transformation tensor Ā in the mixed representation is given by

Ā = (ḡ^k · g_i) ḡ_k ⊗ ḡ^i = Ā^k_i ḡ_k ⊗ ḡ^i. (4.4.10)


Inserting equation (4.4.10) in (4.4.7) implies again the transformation relation (4.4.9),

g_m = (Ā^k_i ḡ_k ⊗ ḡ^i) ḡ_m = Ā^k_i δ^i_m ḡ_k = Ā^k_m ḡ_k. (4.4.11)

The tensor Ā is the inverse of A and vice versa. This is a result of equation (4.4.7),

g_i = Ā · ḡ_i | Ā^(-1) · (4.4.12)

⇒ Ā^(-1) · g_i = ḡ_i. (4.4.13)

Comparing this with equation (4.4.2) implies

A = Ā^(-1) ⇒ A · Ā = 1, (4.4.14)

and in the same way

Ā = A^(-1) ⇒ Ā · A = 1. (4.4.15)

In index notation, with equations (4.4.4) and (4.4.9), the relation between the "normal" and the "overlined" coefficients of the transformation tensor is given by

ḡ_i = A^k_i g_k = A^k_i Ā^m_k ḡ_m | · ḡ^j, (4.4.16)

δ^j_i = A^k_i Ā^m_k δ^j_m,

δ^j_i = A^k_i Ā^j_k. (4.4.17)

The transformation of the contravariant basis works in the same way. If in equations (4.4.3) or (4.4.8) the metric tensor of covariant coefficients is used instead of the identity tensor, then another representation of the transformation tensor is described by

ḡ_m = 1 ḡ_m = (g_ik g^i ⊗ g^k) ḡ_m (4.4.18)
    = g_ik A^k_m g^i,

ḡ_m = A_im g^i , and A_im = g_ik A^k_m. (4.4.19)

If the transformed covariant base vectors ḡ_m should be represented by the contravariant base vectors g^i, then the complete tensor of transformation is given by

A = (g_i · ḡ_k) g^i ⊗ g^k = A_ik g^i ⊗ g^k. (4.4.20)

The inverse transformation tensor Ā is given by an equation developed in the same way as equations (4.4.16) and (4.4.8). This inverse tensor is denoted and defined by

Ā = A^(-1) = (ḡ_i · g_k) ḡ^i ⊗ ḡ^k = Ā_ik ḡ^i ⊗ ḡ^k. (4.4.21)


4.4.2 Collection of Transformations of Basis

There is a large number of transformation relations between the co- and contravariant bases of both systems of coordinates. The transformation from the "normal" basis to the "overlined" basis, like g_i → ḡ_i and g^k → ḡ^k, is given by the following equations. First the relation between the covariant base vectors in both systems of coordinates is defined by

ḡ_i = 1 ḡ_i = (g_k ⊗ g^k) ḡ_i = (g^k · ḡ_i) g_k = A^k_i g_k. (4.4.22)

With this relationship the transformed (overlined) covariant base vectors are represented by the covariant base vectors,

ḡ_i = A g_i , and A = (g^k · ḡ_m) g_k ⊗ g^m = A^k_m g_k ⊗ g^m, (4.4.23)

ḡ_i = (g^k · ḡ_i) g_k = A^k_i g_k, (4.4.24)

and the transformed (overlined) covariant base vectors are represented by the contravariant base vectors,

ḡ_i = A g_i , and A = (g_k · ḡ_m) g^k ⊗ g^m = A_km g^k ⊗ g^m, (4.4.25)

ḡ_i = (g_k · ḡ_i) g^k = A_ki g^k. (4.4.26)

The relation between the contravariant base vectors in both systems of coordinates is defined by

ḡ^i = 1 ḡ^i = (g^k ⊗ g_k) ḡ^i = (g_k · ḡ^i) g^k = B^i_k g^k. (4.4.27)

With this relationship the transformed (overlined) contravariant base vectors are represented by the contravariant base vectors,

ḡ^i = B g^i , and B = (g_k · ḡ^m) g^k ⊗ g_m = B^m_k g^k ⊗ g_m, (4.4.28)

ḡ^i = (g_k · ḡ^i) g^k = B^i_k g^k, (4.4.29)

and the transformed (overlined) contravariant base vectors are represented by the covariant base vectors,

ḡ^i = B g^i , and B = (g^k · ḡ^m) g_k ⊗ g_m = B^km g_k ⊗ g_m, (4.4.30)

ḡ^i = (g^k · ḡ^i) g_k = B^ki g_k. (4.4.31)

The inverse relations ḡ_i → g_i and ḡ^k → g^k, representing the "retransformations" from the transformed (overlined) to the "normal" system of coordinates, are given by the following equations. The inverse transformation between the covariant base vectors of both systems of coordinates is denoted and defined by

g_i = 1 g_i = (ḡ_k ⊗ ḡ^k) · g_i = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k. (4.4.32)


With this relationship the covariant base vectors are represented by the transformed (overlined) covariant base vectors,

g_i = Ā ḡ_i , and Ā = (ḡ^k · g_m) ḡ_k ⊗ ḡ^m = Ā^k_m ḡ_k ⊗ ḡ^m, (4.4.33)

g_i = (ḡ^k · g_i) ḡ_k = Ā^k_i ḡ_k, (4.4.34)

and the covariant base vectors are represented by the transformed (overlined) contravariant base vectors,

g_i = Ā ḡ_i , and Ā = (ḡ_m · g_k) ḡ^m ⊗ ḡ^k = Ā_mk ḡ^m ⊗ ḡ^k, (4.4.35)

g_i = (ḡ_k · g_i) ḡ^k = Ā_ki ḡ^k. (4.4.36)

The inverse relation between the contravariant base vectors in both systems of coordinates is defined by

g^i = 1 g^i = (ḡ^k ⊗ ḡ_k) g^i = (ḡ_k · g^i) ḡ^k = B̄^i_k ḡ^k. (4.4.37)

With this relationship the contravariant base vectors are represented by the transformed (overlined) contravariant base vectors,

g^i = B̄ ḡ^i , and B̄ = (ḡ_k · g^m) ḡ^k ⊗ ḡ_m = B̄^m_k ḡ^k ⊗ ḡ_m, (4.4.38)

g^i = (ḡ_k · g^i) ḡ^k = B̄^i_k ḡ^k, (4.4.39)

and the contravariant base vectors are represented by the transformed (overlined) covariant base vectors,

g^i = B̄ ḡ^i , and B̄ = (ḡ^k · g^m) ḡ_k ⊗ ḡ_m = B̄^km ḡ_k ⊗ ḡ_m, (4.4.40)

g^i = (ḡ^k · g^i) ḡ_k = B̄^ki ḡ_k. (4.4.41)

There exist the following relations between the transformation tensors A and Ā,

A Ā = 1 , or Ā A = 1, (4.4.42)

A^m_i Ā^k_m = δ^k_i , i.e. Ā^m_i A^k_m = δ^k_i, (4.4.43)

and for the inverse transformation tensors B and B̄,

B B̄ = 1 , or B̄ B = 1, (4.4.44)

B^i_m B̄^m_k = δ^i_k , i.e. B̄^i_m B^m_k = δ^i_k. (4.4.45)

Furthermore there exists a relation between the transformation tensor A and the retransformation tensor B̄,

A^m_i B^k_m = δ^k_i ; B̄^k_m = A^k_m, (4.4.46)

and a relation between the transformation tensor B and the retransformation tensor Ā,

B^k_m = Ā^k_m. (4.4.47)


4.4.3 The Tensor Product of Second Order Tensors

The vector v is defined by the action of the linear mapping given by T on the vector u,

v = T · u = Tu , with u, v ∈ V , and T ∈ V ⊗ V*. (4.4.48)

In index notation with the covariant base vectors g_i ∈ V equation (4.4.48) reads

v = (T^mk g_m ⊗ g_k) (u^r g_r)
  = T^mk u^r (g_k · g_r) g_m,

and with lowering of an index, see (4.1.39),

  = T^mk u^r g_kr g_m,

v = T^mk u_k g_m = v^m g_m. (4.4.49)

Furthermore with the linear mapping

w = Sv , with w ∈ V , and S ∈ V ⊗ V*, (4.4.50)

and the linear mapping (4.4.48) the associative law for the linear mappings holds,

w = Sv = S (T · u) = (ST) u. (4.4.51)

The second linear mapping w = Sv in the mixed formulation with the contravariant base vectors g^j ∈ V is given by

S = S^i_j g_i ⊗ g^j. (4.4.52)

Then the vector w in index notation with the results of the equations (4.4.49), (4.4.51) and (4.4.52) is rewritten as

w = S (Tu) = (S^i_j g_i ⊗ g^j) (T^mk u_k g_m)
  = S^i_j T^mk u_k δ^j_m g_i
  = S^i_j T^jk u_k g_i = w^i g_i,

and the coefficients of the vector are given by

w^i = S^i_j T^jk u_k. (4.4.53)

For the second order tensor product ST there exist in general four representations with all possible combinations of base vectors,

S · T = S^i_m T^mk g_i ⊗ g_k , covariant basis, (4.4.54)

S · T = S_im T^m_k g^i ⊗ g^k , contravariant basis, (4.4.55)

S · T = S^im T_mk g_i ⊗ g^k , mixed basis, (4.4.56)

and

S · T = S_im T^mk g^i ⊗ g_k , mixed basis. (4.4.57)


Lemma 4.4. The result of the tensor product of two dyads is the scalar product of the inner vectors of the dyads and the dyadic product of the outer vectors of the dyads. The tensor product of two dyads of vectors is denoted by

(a ⊗ b) (c ⊗ d) = (b · c) a ⊗ d. (4.4.58)

With this rule the index notation of equations (4.4.53) up to (4.4.57) is easily computed; for example equation (4.4.53) or (4.4.54) implies

ST = (S^i_m g_i ⊗ g^m) (T^nk g_n ⊗ g_k)
   = S^i_m T^nk δ^m_n g_i ⊗ g_k,

and finally

ST = S^i_m T^mk g_i ⊗ g_k. (4.4.59)

The "multiplication" or composition of two linear mappings S and T is called a tensor product,

P = S · T = ST (tensor · tensor = tensor). (4.4.60)

The linear mappings w = Sv and v = Tu with the vectors u, v, and w ∈ V are composed like

w = Sv = S (Tu) = (ST) u = STu = Pu. (4.4.61)

This "multiplication" is, like in matrix calculus, but not like in "normal" algebra (a · b = b · a), noncommutative, i.e.

ST ≠ TS. (4.4.62)

For the three second order tensors R, S, T ∈ V ⊗ V* and the scalar quantity α ∈ R the following identities for tensor products hold.

Multiplication by a scalar quantity:

α (ST) = (αS)T = S (αT) (4.4.63)

Multiplication by the identity tensor:

1T = T1 = T (4.4.64)

Existence of a zero tensor:

0T = T0 = 0 (4.4.65)

Associative law for the tensor product:

(RS)T = R (ST) (4.4.66)


Distributive law for the tensor product:

(R+ S)T = RT+ ST (4.4.67)

In general NO commutative law:

ST ≠ TS (4.4.68)

Transpose of a tensor product:

(ST)T = TTST (4.4.69)

Inverse of a tensor product:

(ST)^(-1) = T^(-1) S^(-1) , if S and T are nonsingular. (4.4.70)

Determinant of a tensor product:

det (ST) = detS detT (4.4.71)

Trace of a tensor product:

tr(ST) = tr(TS) (4.4.72)
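The identities (4.4.69)-(4.4.72) can be spot-checked numerically. A NumPy sketch (not part of the text) with assumed random matrices standing in for the Cartesian coefficient matrices of S and T:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))   # assumed example tensors; almost surely nonsingular
T = rng.standard_normal((3, 3))

# (ST)^T = T^T S^T, eq. (4.4.69)
assert np.allclose((S @ T).T, T.T @ S.T)

# (ST)^-1 = T^-1 S^-1, eq. (4.4.70)
assert np.allclose(np.linalg.inv(S @ T), np.linalg.inv(T) @ np.linalg.inv(S))

# det(ST) = det S det T, eq. (4.4.71)
assert np.isclose(np.linalg.det(S @ T), np.linalg.det(S) * np.linalg.det(T))

# tr(ST) = tr(TS), eq. (4.4.72)
assert np.isclose(np.trace(S @ T), np.trace(T @ S))
```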

Proof of equation (4.4.63).

α (ST) = (αS) T = S (αT),

with the assumption

(αS) v = α (Sv) , with v ∈ V, (4.4.73)

with this

[α (ST)] v = α [S (Tv)] = (αS) (Tv) = [(αS) T] v,

and finally

α (ST) = (αS) T. (4.4.74)


Proof of equation (4.4.64).

1T = T1 = T , with 1 ∈ V⊗ V,

with the assumption of equation (4.4.61),

S (Tv) = (ST)v, (4.4.75)

and the identity

1v = v, (4.4.76)

this implies

(1T)v = 1 (Tv) = Tv,

(T1)v = T (1v) = Tv,

and finally

1T = T1 = T. (4.4.77)

Proof of equation (4.4.66).

(RS) T = R (ST) , with R, S, T ∈ V ⊗ V,

with the assumption of equation (4.4.61), inserted into equation (4.4.66),

[(RS) T] v = (RS) (Tv) = (RS) w , with v, w ∈ V, (4.4.78)

with equation (4.4.61) again,

(RS) w = R (Sw) , and w = Tv,

[(RS) T] v = R [S (Tv)] = R [(ST) v] = R (Lv) = (RL) v , with L = ST,

and finally

(RS) T = R (ST). (4.4.79)


Proof of equation (4.4.67).

(R+ S)T = RT+ ST,

with the well known condition for a linear mapping,

(R+ S)v = Rv + Sv, (4.4.80)

with this and equation (4.4.61),

[(R+ S)T]v = (R+ S) (Tv) = R (Tv) + S (Tv) = (RT)v + (ST)v,

and finally

(R + S) T = RT + ST. (4.4.81)

Proof of equation (4.4.69).

(ST)^T = T^T S^T,

with the definition

(S^T)^T = S, (4.4.83)

which implies

((ST)^T)^T = ST, (4.4.84)

and this equation only holds if

((ST)^T)^T = (T^T S^T)^T = ST. (4.4.85)

Proof of equation (4.4.70).

(ST)^(-1) = T^(-1) S^(-1),

and if the inverses T^(-1) and S^(-1) exist, then

(ST) (ST)^(-1) = 1, (4.4.86)

with equations (4.4.64) and (4.4.66),

S^(-1) [(ST) (ST)^(-1)] = S^(-1) 1 = S^(-1), (4.4.87)

and equation (4.4.61) implies

S^(-1) [(ST) (ST)^(-1)] = [S^(-1) (ST)] (ST)^(-1), (4.4.88)

and

S^(-1) (ST) = (S^(-1) S) T = T, (4.4.89)

with equation (4.4.89) inserted in (4.4.88) and comparing with equation (4.4.87),

T (ST)^(-1) = S^(-1), (4.4.90)

T^(-1) [T (ST)^(-1)] = T^(-1) S^(-1), (4.4.91)

and with equations (4.4.61) and (4.4.90),

T^(-1) [T (ST)^(-1)] = (T^(-1) T) (ST)^(-1) = 1 (ST)^(-1), (4.4.92)

and finally comparing this with equation (4.4.91),

(ST)^(-1) = T^(-1) S^(-1). (4.4.93)

4.4.4 The Scalar Product or Inner Product of Tensors

The scalar product of tensors is defined by

T : (v ⊗ w) = v · Tw , with v, w ∈ V , and T ∈ V ⊗ V*. (4.4.94)

For the three second order tensors R, S, T ∈ V ⊗ V* and the scalar quantity α ∈ R the following identities for scalar products of tensors hold.

Commutative law for the scalar product of tensors:

S : T = T : S (4.4.95)

Distributive law for the scalar product of tensors:

T : (R+ S) = T : R+T : S (4.4.96)


Multiplication by a scalar quantity:

(αT) : S = T : (αS) = α (T : S) (4.4.97)

Existence of an additive identity:

If T : S = 0 for arbitrary T, then S = 0. (4.4.98)

Existence of a positive definite tensor:

T : T = tr (T^T T) > 0 , if T ≠ 0,
                    = 0 , iff T = 0, i.e. T : T is positive definite. (4.4.99)

Absolute value or norm of a tensor:

|T| = √(tr (T^T T)) (4.4.100)

The Schwarz inequality:

|T : S| ≤ |T| |S| (4.4.101)

For the norms of tensors, like for the norms of vectors, the following identities hold,

|αT| = |α| |T| , (4.4.102)

|T+ S| ≤ |T|+ |S| . (4.4.103)

And as a rule of thumb:

Lemma 4.5. The result of the scalar product of two dyads is the scalar product of the £rst vectorsof each dyad and the scalar product of the second vectors of each dyad. The scalar product oftwo dyads of vectors is denoted by

(a⊗ b) : (c⊗ d) = (a · c) (b · d) . (4.4.104)

With this rule the index notation of equation (4.4.104) implies for example

S : T = (S^{im} g_i ⊗ g_m) : (T_{nk} g^n ⊗ g^k) = S^{im} T_{nk} δ_i^n δ_m^k,

and finally

S : T = S^{nm} T_{nm}. (4.4.105)

And for the other combinations of base vectors the results are

S : T = S_{nm} T^{nm}, (4.4.106)

S : T = S^n_m T_n^m, (4.4.107)

S : T = S_n^m T^n_m. (4.4.108)
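In a Cartesian basis the scalar product of tensors reduces to the plain sum of products of components, which can be checked numerically. A minimal sketch with arbitrarily chosen example components (not taken from the text):

```python
import numpy as np

# Arbitrary 3x3 Cartesian components for two tensors S and T.
S = np.array([[1., 2., 0.], [3., 1., 4.], [0., 2., 5.]])
T = np.array([[2., 0., 1.], [1., 3., 0.], [4., 0., 2.]])

# S : T = S_ij T_ij, summed over both indices.
double_dot = np.einsum('ij,ij->', S, T)

# The same number via the trace form tr(S^T T), see equation (4.5.12).
via_trace = np.trace(S.T @ T)

print(double_dot, via_trace)
```

Both expressions give the same scalar, illustrating that the scalar product of tensors and the trace of the product with the transpose coincide.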


4.5 Special Tensors and Operators

4.5.1 The Determinant of a Tensor in Cartesian Coordinates

It is not absolutely correct to speak about the determinant of a tensor, because it is only the determinant of the coefficients of the tensor in Cartesian coordinates¹ and not of the whole tensor itself. For the different notations of a tensor with covariant, contravariant and mixed coefficients the determinant is given by

det T = det [T^{ij}] = det [T_{ij}] = det [T^i_j] = det [T_i^j]. (4.5.1)

Expanding the determinant of the coefficient matrix of a tensor T works just the same as for any other matrix. For example the determinant could be described with the permutation symbol ε, like in equation (4.2.35),

det T = det [T_{mn}] =
| T_{11} T_{12} T_{13} |
| T_{21} T_{22} T_{23} |
| T_{31} T_{32} T_{33} |
= T_{1i} T_{2j} T_{3k} ε^{ijk}. (4.5.2)

Some important identities are given without a proof by

det (αT) = α^3 det T, (4.5.3)

det (TS) = det T det S, (4.5.4)

det T^T = det T, (4.5.5)

(det Q)^2 = 1 , if Q is an orthogonal tensor, (4.5.6)

det T^{-1} = (det T)^{-1} , if T^{-1} exists. (4.5.7)
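The determinant identities (4.5.3)-(4.5.7) can be spot-checked numerically in Cartesian components; the matrices below are random test data, not values from the text:

```python
import numpy as np

# Random nonsingular 3x3 component matrices and an arbitrary scalar.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3))
alpha = 2.5

# det(alpha T) = alpha^3 det T           (4.5.3)
assert np.isclose(np.linalg.det(alpha * T), alpha**3 * np.linalg.det(T))
# det(TS) = det T det S                  (4.5.4)
assert np.isclose(np.linalg.det(T @ S), np.linalg.det(T) * np.linalg.det(S))
# det T^T = det T                        (4.5.5)
assert np.isclose(np.linalg.det(T.T), np.linalg.det(T))
# det T^{-1} = (det T)^{-1}              (4.5.7)
assert np.isclose(np.linalg.det(np.linalg.inv(T)), 1.0 / np.linalg.det(T))
```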

4.5.2 The Trace of a Tensor

The inner product of a tensor T with the identity tensor 1 is called the trace of a tensor,

trT = 1 : T = T : 1. (4.5.8)

The same statement written in index notation,

(g^k ⊗ g_k) : (T^i_j g_i ⊗ g^j) = T^i_j δ_i^k δ_k^j = T^k_k , (4.5.9)

and in this way it is easy to see that the result is a scalar. For the dyadic product of two vectors the trace is given by the scalar product of the two involved vectors,

tr (a ⊗ b) = 1 : (a ⊗ b) = a · (1 · b) = a · b, (4.5.10)

¹To compute the determinant of a second order tensor in general coordinates is much more complicated, and this is not part of this lecture/script; for any details see for example DE BOER [3].


and in index notation,

(g^k ⊗ g_k) : (a^i g_i ⊗ b_j g^j) = a^i b_j δ_i^k δ_k^j = a^k b_k. (4.5.11)

The trace of a product of two tensors S and T is defined by

tr (S T^T) = S : T (4.5.12)

and is easy to prove just by writing it in index notation. Starting from this, some more important identities could be found,

trT = trTT , (4.5.13)

tr (ST) = tr (TS) , (4.5.14)

tr (RST) = tr (TRS) = tr (STR) , (4.5.15)

tr [T (R+ S)] = tr (TR) + tr (TS) , (4.5.16)

tr [(αS)T] = α tr (ST) , (4.5.17)

T : T = tr (T^T T) > 0 , if T ≠ 0,
                    = 0 , iff T = 0, i.e. the tensor T is positive definite, (4.5.18)

|T| = √(tr (T^T T)) , the absolute value of a tensor T, (4.5.19)

and £nally the inequality

|S : T| ≤ |S| |T| . (4.5.20)
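The trace identities (4.5.13)-(4.5.17) can likewise be verified with a few lines of NumPy; the component matrices are arbitrary random examples:

```python
import numpy as np

# Three random 3x3 component matrices and an arbitrary scalar.
rng = np.random.default_rng(1)
R, S, T = (rng.standard_normal((3, 3)) for _ in range(3))
alpha = 3.0

assert np.isclose(np.trace(T), np.trace(T.T))                   # (4.5.13)
assert np.isclose(np.trace(S @ T), np.trace(T @ S))             # (4.5.14)
assert np.isclose(np.trace(R @ S @ T), np.trace(T @ R @ S))     # (4.5.15)
assert np.isclose(np.trace(T @ (R + S)),
                  np.trace(T @ R) + np.trace(T @ S))            # (4.5.16)
assert np.isclose(np.trace((alpha * S) @ T),
                  alpha * np.trace(S @ T))                      # (4.5.17)
```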

4.5.3 The Volumetric and Deviator Tensor

Like for the symmetric and skew parts of a tensor there are also a lot of notations for the volumetric and deviator parts of a tensor. The volumetric part of a tensor in the 3-dimensional Euclidean vector space E3 is defined by

T_V = T_vol = (1/n) (tr T) 1 , and T ∈ E3 ⊗ E3. (4.5.21)

It is important to notice that all diagonal components V_(i)(i) are equal and all the other components equal zero,

V_ij = 0 if i ≠ j. (4.5.22)

The deviator part of a tensor is given by

T_D = T_dev = dev T = T − T_vol = T − T_V = T − (1/n) (tr T) 1. (4.5.23)
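The split (4.5.21)-(4.5.23) is easy to sketch numerically for n = 3; the component values below are arbitrary illustration data:

```python
import numpy as np

n = 3
# Arbitrary example components of a second order tensor T.
T = np.array([[4., 1., 0.], [1., 2., 3.], [0., 3., 6.]])

T_vol = (np.trace(T) / n) * np.eye(n)   # volumetric part (1/n)(tr T) 1
T_dev = T - T_vol                       # deviator part

assert np.isclose(np.trace(T_dev), 0.0)   # the deviator is trace-free
assert np.allclose(T_vol + T_dev, T)      # the split recovers T
```

Note that the volumetric part carries the whole trace of T, while the deviator is trace-free by construction.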


4.5.4 The Transpose of a Tensor

The transpose T^T of a second order tensor T is defined by

w · (T · v) = v · (T^T · w) , and v, w ∈ V , T ∈ V ⊗ V. (4.5.24)

For a dyadic product of two vectors the transpose is assumed as

w · [(a⊗ b) · v] = v · [(b⊗ a) ·w] , (4.5.25)

and

(a ⊗ b)^T = (b ⊗ a). (4.5.26)

The left-hand side of equation (4.5.25),

w · [(a⊗ b) · v] = (w · a) (b · v) , (4.5.27)

and the right-hand side of equation (4.5.25)

v · [(b⊗ a) ·w] = (v · b) (a ·w) = (a ·w) (v · b) , (4.5.28)

are equal, q.e.d. For the transpose of a tensor the following identities hold,

(a ⊗ b)^T = (b ⊗ a), (4.5.29)

(T^T)^T = T, (4.5.30)

1^T = 1, (4.5.31)

(S + T)^T = S^T + T^T, (4.5.32)

(αT)^T = α T^T, (4.5.33)

(S · T)^T = T^T · S^T. (4.5.34)

The index notations w.r.t. the different bases are given by

T = T^{ij} g_i ⊗ g_j ⇒ T^T = T^{ij} g_j ⊗ g_i = T^{ji} g_i ⊗ g_j , (4.5.35)

T = T_{ij} g^i ⊗ g^j ⇒ T^T = T_{ij} g^j ⊗ g^i = T_{ji} g^i ⊗ g^j , (4.5.36)

T = T^i_j g_i ⊗ g^j ⇒ T^T = T^i_j g^j ⊗ g_i = T^j_i g^i ⊗ g_j , (4.5.37)

T = T_i^j g^i ⊗ g_j ⇒ T^T = T_i^j g_j ⊗ g^i = T_j^i g_i ⊗ g^j , (4.5.38)

and the relations between the tensor components,

(T^{ij})^T = T^{ji} , or (T_{ij})^T = T_{ji}, (4.5.39)

and

(T^i_j)^T = T_j^i , or (T_i^j)^T = T^j_i. (4.5.40)


4.5.5 The Symmetric and Antisymmetric (Skew) Tensor

There are a lot of different notations for the symmetric part of a tensor T, for example

TS = Tsym = symT, (4.5.41)

and for the antisymmetric or skew part of a tensor T

TA = Tasym = skewT. (4.5.42)

A second rank tensor is said to be symmetric, if and only if

T = T^T. (4.5.43)

And a second rank tensor is said to be antisymmetric or skew, if and only if

T = −T^T. (4.5.44)

The same statement in index notation,

T^{ij} = T^{ji} , if T is symmetric, (4.5.45)

T^{ij} = −T^{ji} , if T is antisymmetric. (4.5.46)

Any second rank tensor can be written as the sum of a symmetric tensor and an antisymmetric tensor,

T = T_S + T_A = T_sym + T_asym = sym T + skew T = 1/2 (T + T^T) + 1/2 (T − T^T). (4.5.47)

The symmetric part of a tensor is defined by

T_S = T_sym = sym T = 1/2 (T + T^T), (4.5.48)

and the antisymmetric (skew) part of a tensor is defined by

T_A = T_asym = skew T = 1/2 (T − T^T). (4.5.49)
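The decomposition (4.5.47)-(4.5.49) in Cartesian components, for an arbitrary example matrix:

```python
import numpy as np

# Arbitrary (nonsymmetric) example components of a second order tensor.
T = np.array([[1., 4., 2.], [0., 3., 5.], [6., 1., 2.]])

T_sym = 0.5 * (T + T.T)    # symmetric part, (4.5.48)
T_skew = 0.5 * (T - T.T)   # antisymmetric (skew) part, (4.5.49)

assert np.allclose(T_sym, T_sym.T)      # T_sym = T_sym^T
assert np.allclose(T_skew, -T_skew.T)   # T_skew = -T_skew^T
assert np.allclose(T_sym + T_skew, T)   # the sum recovers T, (4.5.47)
```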

4.5.6 The Inverse of a Tensor

The inverse of a tensor T exists, if for any two vectors v and w the expression

w = Tv (4.5.50)

could be transformed into

v = T^{-1} w. (4.5.51)


Comparing these two equations gives

T T^{-1} = T^{-1} T = 1 , and (T^{-1})^{-1} = T. (4.5.52)

The inverse of a tensor,

T^{-1} , exists, if and only if det T ≠ 0. (4.5.53)

The inverse of a product of a scalar and a tensor is defined by

(αT)^{-1} = (1/α) T^{-1}, (4.5.54)

and the inverse of a product of two tensors is defined by

(ST)^{-1} = T^{-1} S^{-1}. (4.5.55)

4.5.7 The Orthogonal Tensor

An orthogonal tensor Q satisfies

Q Q^T = Q^T Q = 1 , i.e. Q^{-1} = Q^T. (4.5.56)

From this it follows that the mapping w = Qv with w · w = v · v implies

w · Qv = v · Q^T w = v · Q^{-1} w. (4.5.57)

The orthogonal mapping of two arbitrary vectors v and w is rewritten with the definition of the transpose (4.5.24),

(Qw) · (Qv) = w · Q^T · Qv, (4.5.58)

and with the definition of the orthogonal tensor (4.5.56),

(Qw) · (Qv) = w · v. (4.5.59)

The scalar product of two vectors equals the scalar product of their orthogonal mappings. And for the square value of a vector and its orthogonal mapping equation (4.5.59) denotes

(Qv)^2 = v^2. (4.5.60)

Sometimes even equation (4.5.59) and not (4.5.56) is used as the definition of an orthogonal tensor. The orthogonal tensor Q describes a rotation. For the special case of the Cartesian basis the components of the orthogonal tensor Q are given by the cosine of the rotation angle,

Q = q_{ik} e_i ⊗ e_k ; q_{ik} = cos (∠(e_i, e_k)) , and det Q = ±1. (4.5.61)

If det Q = +1, then the tensor is called a proper orthogonal tensor or a rotator.
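A minimal numerical sketch of a rotator: a rotation about the e_3 axis by an arbitrarily chosen angle satisfies (4.5.56), has det Q = +1, and preserves the scalar product (4.5.59). The angle and the vectors are example data only:

```python
import numpy as np

phi = 0.7                                  # arbitrary rotation angle
c, s = np.cos(phi), np.sin(phi)
# Proper orthogonal tensor: rotation about e_3 in Cartesian components.
Q = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

v = np.array([1., 2., 3.])                 # arbitrary test vectors
w = np.array([-2., 0., 5.])

assert np.allclose(Q @ Q.T, np.eye(3))                # Q Q^T = 1, (4.5.56)
assert np.isclose(np.linalg.det(Q), 1.0)              # det Q = +1 (rotator)
assert np.isclose((Q @ w) @ (Q @ v), w @ v)           # (Qw)·(Qv) = w·v
```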


4.5.8 The Polar Decomposition of a Tensor

The polar decomposition of a nonsingular second order tensor T is given by

T = RU , or T = VR , with T ∈ V ⊗ V , and det T ≠ 0. (4.5.62)

In the polar decomposition the tensor R = Q is chosen as an orthogonal tensor, i.e. R^T = R^{-1} and det R = ±1. In this case the tensors U and V are positive definite and symmetric tensors. The tensor U is named the right-hand Cauchy strain tensor and V is named the left-hand Cauchy strain tensor. Both describe the strains, e.g. if a ball (a circle) is deformed into an ellipsoid (an ellipse); in contrast, R represents a rotation. The figure (4.6) implies the definition of the

Figure 4.6: The polar decomposition.

vectors

dz = R · dX, (4.5.63)

and

dx = V · dz. (4.5.64)

The composition of these two linear mappings is given by

dx = V ·R · dX = F · dX, (4.5.65)


The other possible way to describe the composition is with the vector d*z,

dx = R · d*z, (4.5.66)

and

d*z = U · dX, (4.5.67)

and finally

dx = R · U · dX = F · dX. (4.5.68)

The composed tensor F is called the deformation gradient, and its polar decomposition is given by

T ≡ F = R · U = V · R. (4.5.69)
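The polar decomposition can be computed numerically from the singular value decomposition F = W diag(σ) Z^T, which gives R = W Z^T, U = Z diag(σ) Z^T and V = W diag(σ) W^T. This is one common construction, sketched here for an arbitrary nonsingular example matrix F:

```python
import numpy as np

# Arbitrary nonsingular deformation gradient (example data only).
F = np.array([[2., 0.5, 0.], [0., 1.5, 0.3], [0.1, 0., 1.]])

W, sigma, Zt = np.linalg.svd(F)
R = W @ Zt                        # orthogonal rotation tensor
U = Zt.T @ np.diag(sigma) @ Zt    # right stretch tensor, symmetric
V = W @ np.diag(sigma) @ W.T      # left stretch tensor, symmetric

assert np.allclose(R @ R.T, np.eye(3))   # R is orthogonal
assert np.allclose(U, U.T)               # U = U^T, positive definite
assert np.allclose(R @ U, F)             # F = R U
assert np.allclose(V @ R, F)             # F = V R
```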

4.5.9 The Physical Components of a Tensor

In general a tensor w.r.t. the covariant basis g_i is given by

T = T^{ik} g_i ⊗ g_k. (4.5.70)

The physical components *T^{ik} are defined by

T = *T^{ik} (g_i / |g_i|) ⊗ (g_k / |g_k|) = (*T^{ik} / (√g_(i)(i) √g_(k)(k))) g_i ⊗ g_k, (4.5.71)

*T^{ik} = T^{ik} √g_(i)(i) √g_(k)(k). (4.5.72)

The stress tensor T = T^{ik} g_i ⊗ g_k is given w.r.t. the basis g_i. Then the associated stress vector t^i w.r.t. a point in the sectional area da^i is defined by

t^i = df^i / da_(i) ; df^i = t^i da_(i), (4.5.73)

t^i = τ^{ik} g_k ; df^i = τ^{ik} g_k da_(i), (4.5.74)

with the differential force df^i. Furthermore the sectional area and its absolute value are given by

da^i = da_(i) g^(i), (4.5.75)

and

|da^i| = da_(i) |g^(i)| = da_(i) √g^(i)(i). (4.5.76)


Figure 4.7: An example of the physical components of a second order tensor.

The definition of the physical stresses *τ^{ik} is given by

df^i = *τ^{ik} (g_k / |g_k|) |da^i| ,

df^i = *τ^{ik} (g_k / √g_(k)(k)) da_(i) √g^(i)(i). (4.5.77)

Comparing equations (4.5.74) and (4.5.77) implies

(τ^{ik} − *τ^{ik} √g^(i)(i) / √g_(k)(k)) g_k da_(i) = 0,

and finally the definition for the physical components of the stress tensor *τ^{ik} is given by

*τ^{ik} = τ^{ik} √g_(k)(k) / √g^(i)(i). (4.5.78)

4.5.10 The Isotropic Tensor

An isotropic tensor is a tensor which has the same components in every rotated coordinate system, provided it is a Cartesian coordinate system. Every tensor of order zero, i.e. a scalar quantity, is an isotropic tensor, but no first order tensor, i.e. a vector, could be isotropic. The unique isotropic second order tensor is the Kronecker delta, see section (4.1).


4.6 The Principal Axes of a Tensor

4.6.1 Introduction to the Problem

The computation of the . . .

• invariants,

• eigenvalues and eigenvectors,

• vectors of associated directions, and

• principal axes

is described for the problem of the principal axes of a stress tensor, also called the directions of principal stress. The Cauchy stress tensor in Cartesian coordinates is given by

T = T^{ik} e_i ⊗ e_k. (4.6.1)

This stress tensor T is symmetric because of the equilibrium condition of moments. With this

Figure 4.8: Principal axis problem with Cartesian coordinates.


condition the shear stresses in orthogonal sections are equal,

T^{ik} = T^{ki}, and (4.6.2)

T = T^T = T^{ki} e_i ⊗ e_k. (4.6.3)

The stress vector in the section surface with the normal unit vector −e_1 is defined by the linear mapping of this normal vector with the tensor T,

t^1 = T · (−e_1) (4.6.4)
    = −T^{ik} (e_i ⊗ e_k) · e_1
    = −T^{ik} δ_k^1 e_i,

t^1 = −T^{i1} e_i. (4.6.5)

The stress tensor T assigns the resulting stress vector t(n) to the direction n of the normal vector perpendicular to the section surface. This linear mapping fulfills the equilibrium conditions,

t(n) = T · n, (4.6.6)

with the normal vector

n = n^l e_l, (4.6.7)

n · e_k = n^l δ_{lk} = n_k = cos (∠(n, e_k)),

and the absolute value

|n| = 1.

The stress vector in direction of n is computed by

t(n) = (T^{ik} e_i ⊗ e_k) · n^l e_l (4.6.8)
     = T^{ik} n^l (e_k · e_l) e_i
     = T^{ik} n^l δ_{kl} e_i,

t(n) = T^{ik} n_k e_i. (4.6.9)

The action of the tensor T on the normal vector n reduces the order of the second order tensor T (stress tensor) to a first order tensor t(n) (i.e. the stress vector in direction of n).

Lemma 4.6 (Principal axes problem). Does a direction n_0 exist in space such that the resulting stress vector t(n_0) is oriented in this direction, i.e. such that the vector n_0 fulfills the following equations?

t(n_0) = λ n_0 (4.6.10)
       = λ 1 · n_0. (4.6.11)


Comparing equations (4.6.6) and (4.6.11) leads to T · n0 = λ1 · n0 and therefore to

(T− λ1) · n0 = 0. (4.6.12)

For this special case of an eigenvalue problem . . .

• the directions n_0j are called the principal stress directions, and they are given by the eigenvectors,

• and the λ_j = τ_j are called the eigenvalues, or in this case the principal stresses.

4.6.2 Components in a Cartesian Basis

The equation (4.6.12) is rewritten in index notation,

(T^{ik} e_i ⊗ e_k − λ δ^{ik} e_i ⊗ e_k) · n_0^l e_l = 0, (4.6.13)

(T^{ik} − λ δ^{ik}) n_0^l (e_k · e_l) e_i = 0,

(T^{ik} − λ δ^{ik}) n_0^l δ_{kl} e_i = 0,

(T^{ik} − λ δ^{ik}) n_{0k} e_i = 0,

and finally

(T^{ik} − λ δ^{ik}) n_{0k} = 0. (4.6.14)

This equation could be represented in matrix notation, because it is given in a Cartesian basis,

([T^{ik}] − λ [δ^{ik}]) [n_{0k}] = [0], (4.6.15)

(T − λ1) n_0 = 0,

[ T^{11} − λ   T^{12}        T^{13}      ] [ n_{01} ]   [ 0 ]
[ T^{21}       T^{22} − λ    T^{23}      ] [ n_{02} ] = [ 0 ] . (4.6.16)
[ T^{31}       T^{32}        T^{33} − λ  ] [ n_{03} ]   [ 0 ]

This is a linear homogeneous system of equations for n_{01}, n_{02} and n_{03}; non-trivial solutions exist, if and only if

det (T − λ1) = 0, (4.6.17)

or in index notation

det (T^{ik} − λ δ^{ik}) = 0. (4.6.18)

4.6.3 Components in a General Basis

In a general basis, with a stress tensor with covariant base vectors, T = T^{ik} g_i ⊗ g_k, a normal vector n_0 = n_0^l g_l, and a unit tensor with covariant base vectors, too, 1 = G = g^{ik} g_i ⊗ g_k, the equation (4.6.12) is denoted like this,

(T^{ik} g_i ⊗ g_k − λ g^{ik} g_i ⊗ g_k) · n_0^l g_l = 0, (4.6.19)

(T^{ik} − λ g^{ik}) n_0^l (g_k · g_l) g_i = 0,

(T^{ik} − λ g^{ik}) n_0^l g_{kl} g_i = 0,

(T^{ik} − λ g^{ik}) n_{0k} g_i = 0,

and finally in index notation

(T^{ik} − λ g^{ik}) n_{0k} = 0,

det (T^{ik} − λ g^{ik}) = 0. (4.6.20)

And the same in mixed formulation is given by

(T^i_k − λ δ^i_k) n_0^k = 0,

det (T^i_k − λ δ^i_k) = 0. (4.6.21)

4.6.4 Characteristic Polynomial and Invariants

The characteristic polynomial of an eigenvalue problem with the invariants I1, I2, and I3 in the 3-dimensional space E3 is defined by

f (λ) = I3 − λ I2 + λ^2 I1 − λ^3 = 0. (4.6.22)

For a Cartesian basis the equation (4.6.22) becomes a cubic equation, because it is an eigenvalue problem in E3, with the invariants given by

I1 = tr T = g_{ik} T^{ik} = δ_{ik} T^{ik} = T^k_k, (4.6.23)

I2 = 1/2 [(tr T)^2 − tr (T^2)] = 1/2 [T^i_i T^k_k − T^i_k T^k_i], (4.6.24)

I3 = det T = det (T^i_k). (4.6.25)

The fundamental theorem of algebra implies that there exist three roots λ1, λ2, and λ3, such that the following equations hold,

I1 = λ1 + λ2 + λ3, (4.6.26)

I2 = λ1λ2 + λ2λ3 + λ3λ1, (4.6.27)

I3 = λ1λ2λ3. (4.6.28)
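The invariants (4.6.23)-(4.6.25) and their relation (4.6.26)-(4.6.28) to the eigenvalues can be checked numerically for a symmetric Cartesian stress tensor; the components are arbitrary example values:

```python
import numpy as np

# Arbitrary symmetric 3x3 Cartesian components of a stress tensor.
T = np.array([[3., 1., 0.], [1., 2., 0.], [0., 0., 5.]])

I1 = np.trace(T)                                   # (4.6.23)
I2 = 0.5 * (np.trace(T)**2 - np.trace(T @ T))      # (4.6.24)
I3 = np.linalg.det(T)                              # (4.6.25)

lam = np.linalg.eigvalsh(T)                        # real eigenvalues
assert np.isclose(I1, lam.sum())                               # (4.6.26)
assert np.isclose(I2, lam[0]*lam[1] + lam[1]*lam[2] + lam[2]*lam[0])  # (4.6.27)
assert np.isclose(I3, lam.prod())                              # (4.6.28)
```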


4.6.5 Principal Axes and Eigenvalues of Symmetric Tensors

The assumption that the tensor is symmetric is denoted in tensor and index notation by T^T = T, T^{ik} = T^{ki}, and in matrix notation by T^T = T. The eigenvalue problem with λ_j ≠ λ_l is given in matrix notation by

(T − λ1) n_0 = 0, (4.6.29)

multiplied from the left by +n_{0l}^T for an arbitrary λ_j,

n_{0l}^T (T − λ_j 1) n_{0j} = 0, (4.6.30)

and by −n_{0j}^T with an arbitrary λ_l,

n_{0j}^T (T − λ_l 1) n_{0l} = 0. (4.6.31)

The addition of the last two equations leads to the relation,

n_{0l}^T T n_{0j} − λ_j n_{0l}^T n_{0j} − n_{0j}^T T n_{0l} + λ_l n_{0j}^T n_{0l} = 0. (4.6.32)

A quadratic form implies

n_{0j}^T T n_{0l} = (n_{0j}^T T n_{0l})^T = n_{0l}^T T^T n_{0j} = n_{0l}^T T n_{0j}, (4.6.33)

i.e.

(λ_l − λ_j) n_{0j}^T n_{0l} ≡ 0. (4.6.34)

With λ_l − λ_j ≠ 0 this equation holds, if and only if n_{0j}^T n_{0l} = 0. The conclusion of this relation between the eigenvalues λ_j, λ_l and the normal unit vectors n_{0j}, n_{0l} is, that the eigenvectors are orthogonal to each other.

4.6.6 Real Eigenvalues of a Symmetric Tensor

Two complex conjugate eigenvalues are denoted by

λ_j = β + iγ, (4.6.35)

λ_l = β − iγ. (4.6.36)

The coordinates of the associated eigenvectors n_{0j} and n_{0l} could be written as column matrices n_{0j} and n_{0l}. Furthermore the relations n_{0j} = b + ic and n_{0l} = b − ic hold. Comparing this with the equation (4.6.34) implies

(λ_l − λ_j) n_{0j}^T n_{0l} = 0,

−2iγ (b + ic)^T (b − ic) = 0,

−2iγ (b^T b + c^T c) = 0, (4.6.37)

and since

b^T b + c^T c ≠ 0, (4.6.38)

it follows that γ = 0, and the eigenvalues are real numbers. The result is a symmetric stress tensor with three real principal stresses and the associated directions being orthogonal to each other.


4.6.7 Example

Compute the characteristic polynomial, the eigenvalues and the eigenvectors of the matrix

A = [  1  −4   8 ]
    [ −4   7   4 ]
    [  8   4   1 ] .

By expanding the determinant det (A − λ1) the characteristic polynomial becomes

p (λ) = −λ^3 + 9λ^2 + 81λ − 729,

with the (real) eigenvalues

λ_1 = −9 , λ_{2,3} = 9.

For these eigenvalues the orthogonal eigenvectors are established by

x_1 = [ 2, 1, −2 ]^T , x_2 = [ 1, 2, 2 ]^T , and x_3 = [ −2, 2, −1 ]^T .
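The example can be verified numerically; the following sketch checks the eigenvalues, the eigenvector property A x = λ x, and the mutual orthogonality of the given vectors:

```python
import numpy as np

A = np.array([[1., -4., 8.], [-4., 7., 4.], [8., 4., 1.]])
x1 = np.array([2., 1., -2.])
x2 = np.array([1., 2., 2.])
x3 = np.array([-2., 2., -1.])

# A x = lambda x for the stated eigenvalues.
assert np.allclose(A @ x1, -9.0 * x1)
assert np.allclose(A @ x2, 9.0 * x2)
assert np.allclose(A @ x3, 9.0 * x3)

# The eigenvectors are pairwise orthogonal.
assert np.isclose(x1 @ x2, 0.0) and np.isclose(x1 @ x3, 0.0)
assert np.isclose(x2 @ x3, 0.0)

# The spectrum matches the roots of p(lambda).
assert np.allclose(sorted(np.linalg.eigvalsh(A)), [-9., 9., 9.])
```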

4.6.8 The Eigenvalue Problem in a General Basis

Let T be an arbitrary tensor, given in the basis g_i with i = 1, . . . , n, and defined by

T = T^{ik} g_i ⊗ g_k. (4.6.39)

The identity tensor in Cartesian coordinates, 1 = δ^{ik} e_i ⊗ e_k, is substituted by the identity tensor 1, defined by

1 = g^{ik} g_i ⊗ g_k. (4.6.40)

Then the eigenvalue problem

(T − λ1) · n_0 = 0 (4.6.41)

is substituted by the eigenvalue problem in general coordinates given by

(T^{ik} g_i ⊗ g_k − λ g^{ik} g_i ⊗ g_k) · n_0^l g_l = 0, (4.6.42)

with the vector n_0 = n_0^l g_l in the direction of a principal axis. Beginning with the eigenvalue problem

(T^{ik} − λ g^{ik}) n_0^l (g_k · g_l) g_i = 0,

(T^{ik} − λ g^{ik}) n_0^l g_{kl} g_i = 0,

(T^{ik} − λ g^{ik}) n_{0k} g_i = 0, (4.6.43)


and with

n_{0k} ≠ 0, (4.6.44)

finally the condition

(T^{ik} − λ g^{ik}) n_{0k} = 0 (4.6.45)

results. The tensor T could also be written in the mixed notation,

T = T^i_k g_i ⊗ g^k, (4.6.46)

and with the identity tensor also in mixed notation,

1 = δ^i_k g_i ⊗ g^k. (4.6.47)

The eigenvalue problem

(T^i_k − λ δ^i_k) n_0^l (g^k · g_l) g_i = 0,

(T^i_k − λ δ^i_k) n_0^l δ^k_l g_i = 0, (4.6.48)

implies the condition

(T^i_k − λ δ^i_k) n_0^k = 0. (4.6.49)

But the matrix [T^i_k] = *T is nonsymmetric. For this reason it is necessary to control the orthogonality of the eigenvectors by a decomposition.


4.7 Higher Order Tensors

4.7.1 Review on Second Order Tensor

A complete second order tensor T maps a vector u, for example in the vector space V, like this,

v = Tu , with u, v ∈ V , and T ∈ V ⊗ V. (4.7.1)

For example in index notation with a vector basis g_i ∈ V, a vector is given by

u = u^i g_i = u_i g^i, (4.7.2)

and a second order tensor by

T = T^{jk} g_j ⊗ g_k , with g_j, g_k ∈ V. (4.7.3)

Then a linear mapping with a second order tensor is given by

v = T^{jk} (g_j ⊗ g_k) u^i g_i (4.7.4)
  = T^{jk} u^i (g_k · g_i) g_j
  = T^{jk} u^i g_{ki} g_j,

v = T^j_i u^i g_j = v^j g_j. (4.7.5)

4.7.2 Introduction of a Third Order Tensor

After having a close look at a second order tensor, and realizing that a vector is nothing else but a first order tensor, it is easy to understand that there might also be higher order tensors. In the same way as a second order tensor maps a vector onto another vector, a complete third order tensor maps a vector onto a second order tensor. For example in index notation with a vector basis g_i ∈ V, a vector is given by

u = u^i g_i = u_i g^i, (4.7.6)

and a complete third order tensor by

(3)A = A^{jkl} g_j ⊗ g_k ⊗ g_l , with g_j, g_k, g_l ∈ V. (4.7.7)

Then a linear mapping with a third order tensor is given by

T = A^{jkl} (g_j ⊗ g_k ⊗ g_l) u^i g_i (4.7.8)
  = A^{jkl} u^i (g_l · g_i) (g_j ⊗ g_k)
  = A^{jkl} u^i g_{li} (g_j ⊗ g_k),

T = A^{jkl} u_l (g_j ⊗ g_k) = T^{jk} (g_j ⊗ g_k). (4.7.9)


4.7.3 The Complete Permutation Tensor

The most important application of a third order tensor is the third order permutation tensor, which is the correct description of the permutation symbol in section (4.2). The complete permutation tensor is antisymmetric and in the space E3 just represented by a scalar quantity (positive or negative), see also the permutation symbols e and ε in section (4.2). The complete permutation tensor in Cartesian coordinates, also called the third order fundamental tensor, is defined by

(3)E = e^{ijk} e_i ⊗ e_j ⊗ e_k , or (3)E = e_{ijk} e^i ⊗ e^j ⊗ e^k. (4.7.10)

For the orthonormal basis e_i the components of the covariant e_{ijk} and contravariant e^{ijk} permutation tensor (symbol) are equal,

e_i × e_j = e_{ijk} · e^k. (4.7.11)

Equation (4.2.26) in section (4.2) is the short form of a product in Cartesian coordinates between the third order tensor (3)E and the second order identity tensor. This scalar product of the permutation tensor and e_i ⊗ e_j from the right-hand side yields

e_i × e_j = (e_{rsk} e^r ⊗ e^s ⊗ e^k) : (e_i ⊗ e_j)
          = e_{rsk} δ^s_i δ^k_j e^r = e_{rij} e^r,

e_i × e_j = e_{ijr} e^r, (4.7.12)

or with e_j ⊗ e_i from the left-hand side yields

e_i × e_j = (e_i ⊗ e_j) : (e_{rsk} e^r ⊗ e^s ⊗ e^k)
          = e_{rsk} δ^r_i δ^s_j e^k,

e_i × e_j = e_{ijk} e^k. (4.7.13)

4.7.4 Introduction of a Fourth Order Tensor

The action of a fourth order tensor C, given by

C = C^{ijkl} (g_i ⊗ g_j ⊗ g_k ⊗ g_l), (4.7.14)

on a second order tensor T, given by

T = T^{mn} (g_m ⊗ g_n), (4.7.15)

is given in index notation, see also equation (4.4.104), by

S = C : T = (C^{ijkl} g_i ⊗ g_j ⊗ g_k ⊗ g_l) : (T^{mn} g_m ⊗ g_n) (4.7.16)
  = C^{ijkl} T^{mn} (g_k · g_m) (g_l · g_n) (g_i ⊗ g_j)
  = C^{ijkl} T^{mn} g_{km} g_{ln} (g_i ⊗ g_j) = C^{ijkl} T_{kl} (g_i ⊗ g_j),

S = S^{ij} g_i ⊗ g_j. (4.7.17)


Really important is the so-called elasticity tensor C used in elasticity theory. This is a fourth order tensor, which maps the strain tensor ε onto the stress tensor σ,

σ = C ε. (4.7.18)

Comparing this with the well-known one-dimensional Hooke's law, it is easy to see that this mapping is the generalized 3-dimensional linear case of Hooke's law. The elasticity tensor C has in general in the space E3 the total number of 3^4 = 81 components. Because of the symmetry of the strain tensor ε and the stress tensor σ this number reduces to 36. With the potential character of the elastically stored deformation energy the number of components reduces to 21. For an elastic and isotropic material there is another reduction to 2 independent constants, e.g. the Young's modulus E and the Poisson's ratio ν.
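For the isotropic case the elasticity tensor can be built from the two Lamé constants as C_ijkl = λ δ_ij δ_kl + μ (δ_ik δ_jl + δ_il δ_jk). The sketch below uses assumed example values for E and ν and an arbitrary small strain tensor; it is an illustration of the fourth order mapping σ = C : ε, not part of the lecture text:

```python
import numpy as np

E, nu = 210e9, 0.3                         # example Young's modulus, Poisson's ratio
lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame constants
mu = E / (2 * (1 + nu))

d = np.eye(3)
# Isotropic fourth order elasticity tensor in Cartesian components.
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

# Arbitrary symmetric small strain tensor (example values).
eps = np.array([[1e-3, 2e-4, 0.], [2e-4, -5e-4, 0.], [0., 0., 0.]])

sig = np.einsum('ijkl,kl->ij', C, eps)     # sigma = C : epsilon

assert np.allclose(sig, sig.T)             # the stress inherits the symmetry
```

For an isotropic material the double contraction collapses to the familiar form σ = λ (tr ε) 1 + 2μ ε.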

4.7.5 Tensors of Various Orders

Higher order tensors are represented with the dyadic products of vectors, e.g. a simple third order tensor and a complete third order tensor,

(3)B = Σ_{i=1}^{n} a_i ⊗ b_i ⊗ c_i = Σ_{i=1}^{n} T_i ⊗ c_i = B^{ijk} g_i ⊗ g_j ⊗ g_k, (4.7.19)

and a simple fourth order tensor and a complete fourth order tensor,

C = Σ_{i=1}^{n} a_i ⊗ b_i ⊗ c_i ⊗ d_i = Σ_{i=1}^{n} S_i ⊗ T_i = C^{ijkl} g_i ⊗ g_j ⊗ g_k ⊗ g_l. (4.7.20)

For example the tensors from order zero to order four are summarized in index notation with a basis g_i,

a scalar quantity, or a tensor of order zero: α = (0)α, (4.7.21)

a vector, or a first order tensor: v = (1)v = v_i g^i, (4.7.22)

a second order tensor: T = (2)T = T^{jk} g_j ⊗ g_k, (4.7.23)

a third order tensor: (3)B = B^{ijk} g_i ⊗ g_j ⊗ g_k, (4.7.24)

a fourth order tensor: C = (4)C = C^{ijkl} g_i ⊗ g_j ⊗ g_k ⊗ g_l. (4.7.25)


Chapter 5

Vector and Tensor Analysis

SIMMONDS [12], HALMOS [6], ABRAHAM, MARSDEN, and RATIU [1], and MATTHEWS [11]. And in German DE BOER [3], STEIN ET AL. [13], and IBEN [7].


Chapter Table of Contents

5.1 Vector and Tensor Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . 133

5.1.1 Functions of a Scalar Variable . . . . . . . . . . . . . . . . . . . . . . 133

5.1.2 Functions of more than one Scalar Variable . . . . . . . . . . . . . . . 134

5.1.3 The Moving Trihedron of a Space Curve in Euclidean Space . . . . . . 135

5.1.4 Covariant Base Vectors of a Curved Surface in Euclidean Space . . . . 138

5.1.5 Curvilinear Coordinate Systems in the 3-dim. Euclidean Space . . . . . 139

5.1.6 The Natural Basis in the 3-dim. Euclidean Space . . . . . . . . . . . . 140

5.1.7 Derivatives of Base Vectors, Christoffel Symbols . . . . . . . . . . . . 141

5.2 Derivatives and Operators of Fields . . . . . . . . . . . . . . . . . . . . . . 143

5.2.1 Definitions and Examples . . . . . . . . . . . . . . . . . . . . . . . . 143

5.2.2 The Gradient or Frechet Derivative of Fields . . . . . . . . . . . . . . 143

5.2.3 Index Notation of Base Vectors . . . . . . . . . . . . . . . . . . . . . 144

5.2.4 The Derivatives of Base Vectors . . . . . . . . . . . . . . . . . . . . . 145

5.2.5 The Covariant Derivative . . . . . . . . . . . . . . . . . . . . . . . . . 145

5.2.6 The Gradient in a 3-dim. Cartesian Basis of Euclidean Space . . . . . . 146

5.2.7 Divergence of Vector and Tensor Fields . . . . . . . . . . . . . . . . . 147

5.2.8 Index Notation of the Divergence of Vector Fields . . . . . . . . . . . 148

5.2.9 Index Notation of the Divergence of Tensor Fields . . . . . . . . . . . 148

5.2.10 The Divergence in a 3-dim. Cartesian Basis in Euclidean Space . . . . 149

5.2.11 Rotation or Curl of Vector and Tensor Fields . . . . . . . . . . . . . . 150

5.2.12 Laplacian of a Field . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

5.3 Integral Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

5.3.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

5.3.2 Gauss’s Theorem for a Vector Field . . . . . . . . . . . . . . . . . . . 155

5.3.3 Divergence Theorem for a Tensor Field . . . . . . . . . . . . . . . . . 156

5.3.4 Integral Theorem for a Scalar Field . . . . . . . . . . . . . . . . . . . 156

5.3.5 Integral of a Cross Product or Stokes’s Theorem . . . . . . . . . . . . 157

5.3.6 Another Interpretation of Gauss’s Theorem . . . . . . . . . . . . . . . 158


5.1 Vector and Tensor Derivatives

5.1.1 Functions of a Scalar Variable

A scalar function could be represented by another scalar quantity, a vector, or even a tensor, which depends on one scalar variable. These different types of scalar functions are denoted by

β = β (α) a scalar-valued scalar function, (5.1.1)

v = v (α) a vector-valued scalar function, (5.1.2)

and

T = T (α) a tensor-valued scalar function. (5.1.3)

The usual derivative w.r.t. a scalar variable α of the equation (5.1.1) is established with the Taylor series of the scalar-valued scalar function β (α) at a value α,

β (α + τ) = β (α) + γ (α) · τ + O(τ^2). (5.1.4)

The term γ (α), given by

γ (α) = lim_{τ→0} [β (α + τ) − β (α)] / τ, (5.1.5)

is the derivative of a scalar w.r.t. a scalar quantity. The usual representations of the derivative dβ/dα are given by

γ (α) = dβ/dα = β′ = lim_{τ→0} [β (α + τ) − β (α)] / τ. (5.1.6)

The Taylor series of the vector-valued scalar function, see equation (5.1.2), at a value α is given by

v (α + τ) = v (α) + y (α) · τ + O(τ^2). (5.1.7)

The derivative of a vector w.r.t. a scalar quantity α is defined by

y (α) = dv/dα = v′ = lim_{τ→0} [v (α + τ) − v (α)] / τ. (5.1.8)

The total differential, also called the exact differential, of the vector function v (α) is given by

dv = v′ dα. (5.1.9)

The second derivative of the vector-valued scalar function v (α) at a value α is given by

dy (α)/dα = d/dα (dv/dα) = v′′. (5.1.10)


The Taylor series of the tensor-valued scalar function, see equation (5.1.3), at a value α is given by

T (α + τ) = T (α) + Y (α) · τ + O(τ^2). (5.1.11)

This implies the derivative of a tensor w.r.t. a scalar quantity,

Y (α) = dT/dα = T′ = lim_{τ→0} [T (α + τ) − T (α)] / τ. (5.1.12)

In the following some important identities are listed,

(λv)′ = λ′v + λv′, (5.1.13)

(v · w)′ = v′ · w + v · w′, (5.1.14)

(v × w)′ = v′ × w + v × w′, (5.1.15)

(v ⊗ w)′ = v′ ⊗ w + v ⊗ w′, (5.1.16)

(Tv)′ = T′v + Tv′, (5.1.17)

(ST)′ = S′T + ST′, (5.1.18)

(T^{-1})′ = −T^{-1} T′ T^{-1}. (5.1.19)

As a short example for a proof of the above identities, the proof of equation (5.1.19) is given by

T T^{-1} = 1,
(T T^{-1})′ = T′ T^{-1} + T (T^{-1})′ = 0,
⇒ −T′ T^{-1} = T (T^{-1})′,
⇒ (T^{-1})′ = −T^{-1} T′ T^{-1}.
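The identity (5.1.19) can be checked with a finite-difference sketch; the tensor-valued function T(α) below is a hypothetical example chosen only for illustration:

```python
import numpy as np

# Hypothetical tensor-valued scalar function T(alpha) and its derivative.
def T(alpha):
    return np.array([[2. + alpha, 1.], [0.5 * alpha, 3.]])

def Tprime(alpha):
    return np.array([[1., 0.], [0.5, 0.]])

a, h = 1.0, 1e-6
# Central finite difference of alpha -> inv(T(alpha)).
numeric = (np.linalg.inv(T(a + h)) - np.linalg.inv(T(a - h))) / (2 * h)
# The analytic derivative from (5.1.19): (T^{-1})' = -T^{-1} T' T^{-1}.
analytic = -np.linalg.inv(T(a)) @ Tprime(a) @ np.linalg.inv(T(a))

assert np.allclose(numeric, analytic, atol=1e-6)
```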

5.1.2 Functions of more than one Scalar Variable

Like for the functions of one scalar variable it is also possible to define various functions of more than one scalar variable, e.g.

β = β (α1, α2, . . . , αi, . . . , αn) a scalar-valued function of multiple variables, (5.1.20)

v = v (α1, α2, . . . , αi, . . . , αn) a vector-valued function of multiple variables, (5.1.21)

and finally

T = T (α1, α2, . . . , αi, . . . , αn) a tensor-valued function of multiple variables. (5.1.22)

Instead of establishing the total differentials like in the section before, it is now necessary to establish the partial derivatives of the functions w.r.t. the various variables. Starting with the scalar-valued function (5.1.20), the partial derivative w.r.t. the i-th scalar variable αi is defined by

∂β/∂αi = β,i = lim_{τ→0} [β (α1, . . . , αi + τ, . . . , αn) − β (α1, . . . , αi, . . . , αn)] / τ. (5.1.23)


With these partial derivatives β,i the exact differential of the function β is given by

dβ = β,i dαi. (5.1.24)

The partial derivatives of the vector-valued function (5.1.21) w.r.t. the scalar variable αi are defined by

∂v/∂αi = v,i = lim_{τ→0} [v (α1, . . . , αi + τ, . . . , αn) − v (α1, . . . , αi, . . . , αn)] / τ, (5.1.25)

and its exact differential is given by

dv = v,i dαi. (5.1.26)

The partial derivatives of the tensor-valued function (5.1.22) w.r.t. the scalar variable αi are defined by

∂T/∂αi = T,i = lim_{τ→0} [T (α1, . . . , αi + τ, . . . , αn) − T (α1, . . . , αi, . . . , αn)] / τ, (5.1.27)

and the exact differential is given by

dT = T,i dαi. (5.1.28)

5.1.3 The Moving Trihedron of a Space Curve in Euclidean Space

A vector function x = x (Θ1) with one variable Θ1 in the Euclidean vector space E3 could berepresented by a space curve. The vector x (Θ1) is the position vector from the origin O to thepoint P on the space curve. The tangent vector t (Θ1) at a point P is then de£ned by

t (Θ1) = x′ (Θ1) = dx (Θ1) / dΘ1. (5.1.29)

The tangent unit vector, or just the tangent unit, of the space curve at a point P with the position vector x is defined by

t (s) = (∂x/∂Θ1) (∂Θ1/∂s) = dx/ds, and |t (s)| = 1. (5.1.30)

The normal vector at a point P on a space curve is defined with the derivative of the tangent vector w.r.t. the curve parameter s by

∗n = dt/ds = d²x/ds². (5.1.31)

The term 1/ρ is a measure of the curvature, or just the curvature, of a space curve at a point P. The normal vector ∗n at a point P is perpendicular to the tangent vector t at this point, ∗n ⊥ t, i.e. ∗n · t = 0,


[Figure 5.1: The tangent vector at a point P on a space curve.]

and the curvature is given by

1/ρ² = (d²x/ds²) · (d²x/ds²). (5.1.32)

The proof of this assumption starts with the scalar product of two tangent vectors,

t · t = 1,
d/ds (t · t) = 0,
2 (dt/ds) · t = 0,

and finally shows that the scalar product of the derivative w.r.t. the curve parameter and the tangent vector equals zero, i.e. these two vectors are perpendicular to each other,

dt/ds ⊥ t.

This implies the definition 1/ρ = |∗n| of the curvature of a curve at a point P.

With the curvature 1/ρ the normal unit vector n, or just the normal unit, is defined by

n = ρ · ∗n. (5.1.33)


[Figure 5.2: The moving trihedron.]

The so-called binormal unit vector b, or just the binormal unit, is the vector perpendicular to the tangent vector t and the normal vector n at a point P, and is defined by

b = t× n. (5.1.34)

The derivative of the binormal vector b w.r.t. the curve parameter s is a measure for the torsion of the curve in space at a point P; this derivative implies,

db/ds = (dt/ds) × n + t × (dn/ds)
= ∗n × n + t × (dn/ds)
= 0 + (1/τ) n,

db/ds = (1/τ) n. (5.1.35)

This yields the definition 1/τ of the torsion of a curve at a point P,

and with equation (5.1.35) the torsion is given by

1/τ = −ρ² (dx/ds × d²x/ds²) · d³x/ds³. (5.1.36)

The three unit vectors t, n and b form the moving trihedron of a space curve at every point P. Their derivatives w.r.t. the curve parameter s are the so-called Serret–Frenet equations, given below,

dt/ds = (1/ρ) n = d²x/ds², and b = t × n, (5.1.37)
dn/ds = −(1/ρ) n × b − (1/τ) t × n = −(1/ρ) t − (1/τ) b, (5.1.38)
db/ds = (1/τ) n = −d/ds (t × b). (5.1.39)
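The moving trihedron can be checked numerically for a concrete space curve. Below is a minimal sketch, assuming NumPy, for a helix x(t) = (cos t, sin t, c t); the helper names are invented for this example:

```python
import numpy as np

c = 0.5
x = lambda t: np.array([np.cos(t), np.sin(t), c * t])  # the helix

def deriv(f, t, h=1e-5):
    # central-difference derivative w.r.t. the curve parameter
    return (f(t + h) - f(t - h)) / (2.0 * h)

t0  = 1.0
xp  = deriv(x, t0)                      # x'(Theta1)
xpp = deriv(lambda s: deriv(x, s), t0)  # x''(Theta1)

tang = xp / np.linalg.norm(xp)          # tangent unit (5.1.30)
# n* = dt/ds: the component of x'' orthogonal to x', divided by |x'|^2
nstar = (xpp - np.dot(xpp, tang) * tang) / np.dot(xp, xp)
kappa = np.linalg.norm(nstar)           # curvature 1/rho
n = nstar / kappa                       # normal unit (5.1.33)
b = np.cross(tang, n)                   # binormal unit (5.1.34)

assert abs(np.linalg.norm(tang) - 1) < 1e-6
assert abs(np.dot(tang, n)) < 1e-6      # n* is perpendicular to t
assert abs(np.linalg.norm(b) - 1) < 1e-6
# For this helix the curvature is constant: kappa = 1 / (1 + c^2)
assert abs(kappa - 1.0 / (1.0 + c ** 2)) < 1e-4
```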

5.1.4 Covariant Base Vectors of a Curved Surface in Euclidean Space

The vector-valued function x = x (Θ1,Θ2) with two scalar variables Θ1 and Θ2 represents a curved surface in the Euclidean vector space E3.

[Figure 5.3: The covariant base vectors of a curved surface.]

The covariant base vectors of the curved surface are given by

a1 = ∂x/∂Θ1, and a2 = ∂x/∂Θ2. (5.1.40)

The metric coefficients of the curved surface are computed with the base vectors, using the following convention for the small Greek indices,

aαβ = aα · aβ , and α, β = 1, 2. (5.1.41)

The normal unit vector of the curved surface, perpendicular to the vectors a1 and a2, is defined by

n = a3 = (a1 × a2) / |a1 × a2| . (5.1.42)


With this relation the other metric coefficients are given by

aα3 = 0 , and a33 = 1, (5.1.43)

and finally the determinant of the metric coefficients is given by

a = det [aαβ] . (5.1.44)

The absolute value of a line element dx is computed by

ds² = dx · dx = x,α dΘα · x,β dΘβ = aα · aβ dΘα dΘβ = aαβ dΘα dΘβ ,

and finally

⇒ ds = √(aαβ dΘα dΘβ). (5.1.45)

The differential element of area dA is given by

dA =√a dΘ1 dΘ2. (5.1.46)

The contravariant base vectors of the curved surface are computed with the metric coefficients and the covariant base vectors,

aα = aαβaβ , (5.1.47)

and the Kronecker delta is given by

aαβaβγ = δαγ . (5.1.48)
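For a concrete surface these quantities are easy to verify numerically. A small sketch for a unit-sphere patch, assuming NumPy (the function names are made up here):

```python
import numpy as np

# Curved surface: unit sphere, x(T1, T2) with T1 = polar, T2 = azimuthal angle
def x(t1, t2):
    return np.array([np.sin(t1) * np.cos(t2),
                     np.sin(t1) * np.sin(t2),
                     np.cos(t1)])

def partial(f, t1, t2, k, h=1e-6):
    d = [0.0, 0.0]; d[k] = h
    return (f(t1 + d[0], t2 + d[1]) - f(t1 - d[0], t2 - d[1])) / (2 * h)

t1, t2 = 0.7, 1.2
a1 = partial(x, t1, t2, 0)              # covariant base vectors (5.1.40)
a2 = partial(x, t1, t2, 1)

# Metric coefficients (5.1.41) and their determinant (5.1.44)
a = np.array([[np.dot(a1, a1), np.dot(a1, a2)],
              [np.dot(a2, a1), np.dot(a2, a2)]])
det_a = np.linalg.det(a)

# For the unit sphere: a_11 = 1, a_22 = sin^2(t1), a_12 = 0, det = sin^2(t1)
assert np.allclose(a, np.diag([1.0, np.sin(t1) ** 2]), atol=1e-5)
assert abs(det_a - np.sin(t1) ** 2) < 1e-5
# The normal unit vector (5.1.42) is radial for a sphere centred at O
n = np.cross(a1, a2) / np.linalg.norm(np.cross(a1, a2))
assert np.allclose(n, x(t1, t2), atol=1e-5)
```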

5.1.5 Curvilinear Coordinate Systems in the 3-dim. Euclidean Space

The position vector x in an orthonormal Cartesian coordinate system is given by

x = xiei. (5.1.49)

The curvilinear coordinates, resp. the curvilinear coordinate system, are introduced by the following relations between the curvilinear coordinates Θi and the Cartesian coordinates xj and base vectors ej ,

Θi = Θi (x1, x2, x3) . (5.1.50)

The inverses of these relations are explicitly defined in the domain by

xi = xi (Θ1,Θ2,Θ3) , (5.1.51)

if the following conditions hold, . . .


[Figure 5.4: Curvilinear coordinates in a Cartesian coordinate system.]

• the function is at least once continuously differentiable,

• and the Jacobian, or more precisely the determinant of the Jacobian matrix, is not equal to zero,

J = det [∂xi/∂Θk] ≠ 0. (5.1.52)

The vector x w.r.t. the curvilinear coordinates is represented by

x = xi (Θ1,Θ2,Θ3) ei. (5.1.53)

5.1.6 The Natural Basis in the 3-dim. Euclidean Space

A basis at the point P , represented by the position vector x and tangential to the curvilinear coordinates Θi, is introduced by

gk = (∂xi (Θ1,Θ2,Θ3) / ∂Θk) ei. (5.1.54)

These base vectors gk are the covariant base vectors and form the so-called natural basis. In general these base vectors are not perpendicular to each other. Furthermore this basis gk changes along the curvilinear coordinates in every point, because it depends on the


[Figure 5.5: The natural basis of a curvilinear coordinate system.]

position vectors of the points along this curvilinear coordinate. The vector x w.r.t. the covariant basis is given by

x = xigi. (5.1.55)

For each covariant natural basis gk an associated contravariant basis with the contravariant base vectors of the natural basis gi is defined by

gk · gi = δik. (5.1.56)

The vector x w.r.t. the contravariant basis is represented by

x = ¯xigi. (5.1.57)

The covariant coordinates xi and the contravariant coordinates ¯xi of the position vector x are connected by the metric coefficients as follows,

¯xi = gikxk, (5.1.58)

xi = gik ¯xk. (5.1.59)
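The duality condition (5.1.56) can be illustrated numerically: stacking the covariant base vectors as rows of a matrix, the contravariant base vectors are the rows of the transposed inverse. A sketch for cylindrical coordinates, assuming NumPy:

```python
import numpy as np

# Cylindrical coordinates: x = (T1 cos T2, T1 sin T2, T3)
def x(theta):
    r, phi, z = theta
    return np.array([r * np.cos(phi), r * np.sin(phi), z])

def g_cov(theta, h=1e-6):
    """Covariant natural base vectors g_k = dx/dTheta^k (5.1.54), by central differences."""
    G = np.zeros((3, 3))
    for k in range(3):
        d = np.zeros(3); d[k] = h
        G[k] = (x(theta + d) - x(theta - d)) / (2 * h)
    return G                          # row k is g_k

theta = np.array([2.0, 0.6, 1.0])
G = g_cov(theta)

# Contravariant base vectors from the duality condition g^i . g_k = delta^i_k:
# with the g_k stacked as rows of G, the g^i are the rows of inv(G).T
G_contra = np.linalg.inv(G).T

assert np.allclose(G_contra @ G.T, np.eye(3), atol=1e-8)
```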

5.1.7 Derivatives of Base Vectors, Christoffel Symbols

The derivative of a covariant base vector gi ∈ E3 w.r.t. a coordinate Θk is again a vector, which can be described by a linear combination of the base vectors g1, g2, and g3,

∂gi/∂Θk = gi,k = Γsik gs. (5.1.60)


The Γsik are the components of the Christoffel symbol Γ(i). The Christoffel symbol can be described by a second order tensor w.r.t. the basis gi,

Γ(i) = Γsijgs ⊗ gj . (5.1.61)

With this definition of the Christoffel symbol as a second order tensor, a linear mapping of the base vector gk is given by

gi,k = Γ(i) · gk = Γsij (gs ⊗ gj) · gk (5.1.62)
= Γsij (gj · gk) gs = Γsij δjk gs,

and finally

gi,k = Γsik gs. (5.1.63)

Equation (5.1.63) is again the definition of the Christoffel symbol, like in equation (5.1.60). With this relation the components of the Christoffel symbol can be computed, like this,

gi,k · gs = Γrik gr · gs = Γrik δsr = Γsik. (5.1.64)

Like for any other second order tensor, the raising and lowering of the indices of the Christoffel symbol is possible with the contravariant metric coefficients gls, and with the covariant metric coefficients gls, e.g.

Γikl = glsΓsik. (5.1.65)

Also important are the relations between the derivatives of the metric coefficients w.r.t. the coordinates Θi and the components of the Christoffel symbol,

Γikl = (1/2) (gkl,i + gil,k − gik,l) . (5.1.66)
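Equation (5.1.64) gives a direct recipe for computing the Christoffel components numerically. Below is a sketch for plane polar coordinates, assuming NumPy (helper names invented), which recovers the well-known values Γrφφ = −r and Γφrφ = 1/r:

```python
import numpy as np

# Plane polar coordinates: x(Theta) = (T1 cos T2, T1 sin T2), T1 = r, T2 = phi
def x(theta):
    r, phi = theta
    return np.array([r * np.cos(phi), r * np.sin(phi)])

def g_cov(theta, h=1e-5):
    G = np.zeros((2, 2))
    for k in range(2):
        d = np.zeros(2); d[k] = h
        G[k] = (x(theta + d) - x(theta - d)) / (2 * h)
    return G                                   # row i is g_i

theta = np.array([1.5, 0.8])
G = g_cov(theta)
G_contra = np.linalg.inv(G).T                  # row s is g^s

# Gamma^s_ik = g_i,k . g^s  (equation (5.1.64)), g_i,k by central differences
h = 1e-5
Gamma = np.zeros((2, 2, 2))                    # Gamma[s, i, k]
for k in range(2):
    d = np.zeros(2); d[k] = h
    dG = (g_cov(theta + d) - g_cov(theta - d)) / (2 * h)   # dG[i] = g_i,k
    for i in range(2):
        for s in range(2):
            Gamma[s, i, k] = np.dot(dG[i], G_contra[s])

r = theta[0]
assert abs(Gamma[0, 1, 1] + r) < 1e-4          # Gamma^r_phiphi = -r
assert abs(Gamma[1, 0, 1] - 1.0 / r) < 1e-4    # Gamma^phi_rphi = 1/r
```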


5.2 Derivatives and Operators of Fields

5.2.1 Definitions and Examples

A function of a Euclidean vector, for example of a position vector x ∈ E3, is called a field. The fields are separated into three classes by their values,

α = α (x) the scalar-valued vector function or scalar field, (5.2.1)

v = v (x) the vector-valued vector function or vector field, (5.2.2)

and

T = T (x) the tensor-valued vector function or tensor field. (5.2.3)

Some frequently used fields in the Euclidean vector space are, for example,

• scalar fields – temperature field, pressure field, density field,

• vector fields – velocity field, acceleration field,

• tensor fields – the stress state in a volume element.

5.2.2 The Gradient or Fréchet Derivative of Fields

A vector-valued vector function or vector field v = v (x) is differentiable at a point P , represented by a position vector x, if the following linear mapping exists,

v (x+ y) = v (x) + L (x) · y +O (y²) , and |y| → 0, (5.2.4)

L (x) = lim_{|y|→0} [v (x+ y)− v (x)] / |y| . (5.2.5)

The linear mapping L (x) is called the gradient or the Fréchet derivative,

L (x) = grad v (x) . (5.2.6)

The gradient grad v (x) of a vector-valued vector function (vector field) is a tensor-valued function (a second order tensor depending on the position vector x). For a scalar-valued vector function or a scalar field α (x) there exists an analogue to equation (5.2.4), resp. (5.2.5),

α (x+ y) = α (x) + l (x) · y +O (y²) , and |y| → 0, (5.2.7)

l (x) = lim_{|y|→0} [α (x+ y)− α (x)] / |y| . (5.2.8)

And with this relation the gradient of the scalar field is a vector-valued vector function (vector field) given by

l (x) = grad α (x) . (5.2.9)


Finally, for a tensor-valued vector function or a tensor field T (x) the relations analogous to equation (5.2.4), resp. (5.2.5), are given by

T (x+ y) = T (x) + ³L (x) · y + ³O (y²) , and |y| → 0, (5.2.10)

³L (x) = lim_{|y|→0} [T (x+ y)−T (x)] / |y| . (5.2.11)

The gradient of the second order tensor field is a third order tensor-valued vector function (third order tensor field) given by

³L (x) = grad T (x) . (5.2.12)

The gradient of a second order tensor grad (v ⊗w), or gradT, is a third order tensor, because (gradv)⊗w, being the dyadic product of a second order tensor and a vector ("first order tensor"), is a third order tensor. For arbitrary scalar fields α, β ∈ R, vector fields v,w ∈ V, and a tensor field T ∈ V⊗ V the following identities hold,

grad (α β) = α grad β + β gradα, (5.2.13)

grad (αv) = v ⊗ gradα + α gradv, (5.2.14)

grad (αT) = T⊗ gradα + α gradT, (5.2.15)

grad (v ·w) = (gradv)T ·w + (gradw)T · v, (5.2.16)

grad (v ×w) = v × gradw + gradv ×w, (5.2.17)

grad (v ⊗w) = [(gradv)⊗w] · gradw. (5.2.18)

It is important to notice that the gradient of a position vector is the identity tensor,

gradx = 1. (5.2.19)

5.2.3 Index Notation of Base Vectors

Most of the relations discussed up to now hold for all n-dimensional Euclidean vector spaces En, but they are mostly used in continuum mechanics in the 3-dimensional Euclidean vector space E3. The scalar-valued, vector-valued, or tensor-valued functions depend on a vector x ∈ V, e.g. a position vector at a point P . In the sections below the following bases are used: the curvilinear coordinates Θi with the covariant base vectors,

gi = ∂x/∂Θi = x,i, (5.2.20)

and the Cartesian coordinates xi = xi with the orthonormal basis

ei = ei. (5.2.21)


5.2.4 The Derivatives of Base Vectors

In section 5.1 the partial derivatives of the base vectors gi w.r.t. the coordinates Θk were introduced by

∂gi/∂Θk = gi,k = Γsik gs. (5.2.22)

With the Christoffel symbols defined by equations (5.1.60) and (5.1.61),

Γ(i) = Γsijgs ⊗ gj , (5.2.23)

the derivatives of base vectors are rewritten,

gi,k = Γ(i) · gk. (5.2.24)

The definition of the gradient, equation (5.2.1), compared with equation (5.2.23), shows that the Christoffel symbols are computed by

Γ(i) = gradgi. (5.2.25)

Proof. The proof of this relation between the gradient of the base vectors and the Christoffel symbols,

gradgi = Γ(i) = gi,j ⊗ gj , (5.2.26)

is given by

gi,k = Γ(i) · gk = (gi,j ⊗ gj) gk
= (gj · gk) gi,j = δjk gi,j ,
gi,k = gi,k.

Finally, the gradient of a base vector is represented in index notation by

gradgi = gi,j ⊗ gj = Γsijgs ⊗ gj . (5.2.27)

5.2.5 The Covariant Derivative

Let v = v (x) = vigi be a vector field, see equation (5.2.14); then the gradient of the vector field is given by

gradv = grad v (x) = grad (vigi) = gi ⊗ grad vi + vi gradgi. (5.2.28)

The gradient of a scalar-valued vector function α (x) is defined by

gradα = (∂α/∂Θi) gi = α,i gi, (5.2.29)


then the gradient of the contravariant coefficients vi (x) in the first term of equation (5.2.28) is given by

grad vi = vi,kgk. (5.2.30)

Inserting equation (5.2.30) in (5.2.28), together with equation (5.2.25), the complete gradient of a vector field can be given by

gradv = gi ⊗ vi,k gk + vi Γ(i), (5.2.31)

and finally with equation (5.2.23),

gradv = vi,k gi ⊗ gk + vi Γsik gs ⊗ gk. (5.2.32)

The dummy indices i and s are changed like this,

i⇒ s , and s⇒ i.

Rewriting equation (5.2.32) and factoring out the dyadic product implies

gradv = (vi,k + vs Γisk) (gi ⊗ gk) . (5.2.33)

The term

vi |k = vi,k + vs Γisk, (5.2.34)

is called the covariant derivative of the coefficient vi w.r.t. the coordinate Θk and the basis gi. Then the gradient of a vector field is given by

gradv = vi |k (gi ⊗ gk) , with vi |k = vi,k + vs Γisk. (5.2.35)

5.2.6 The Gradient in a 3-dim. Cartesian Basis of Euclidean Space

Let α be a scalar field in a space with the Cartesian basis ei,

α = α (x) , with x = xiei = xiei = xiei, (5.2.36)

then the gradient of the scalar field α in a Cartesian basis is given by

gradα = α,i ei , with α,i = ∂α/∂xi. (5.2.37)

The nabla operator is introduced by

∇ = (. . .),i ei = (∂ (. . .) /∂x1) e1 + (∂ (. . .) /∂x2) e2 + (∂ (. . .) /∂x3) e3,

and finally defined by

∇ = (∂ (· · · ) /∂xi) ei. (5.2.38)


This definition implies another notation for the gradient in a 3-dimensional Cartesian basis of the Euclidean vector space,

gradα = ∇α. (5.2.39)

Let v = v (x) be a vector field in a space with the Cartesian basis ei,

v = v (x) = viei = viei = viei , with vi = vi (x1, x2, x3) , (5.2.40)

then the gradient of the vector field in a Cartesian basis is given, with the relation of equation (5.2.14), by

gradv = grad v (x) = grad (viei) = ei ⊗ grad vi + vi grad ei. (5.2.41)

Computing the second term of equation (5.2.41) shows that all derivatives of base vectors w.r.t. the position vector x, see the definition (5.2.38), are equal to zero,

grad ei = (ei),j ej = (∂ei/∂x1) e1 + (∂ei/∂x2) e2 + (∂ei/∂x3) e3 = 0. (5.2.42)

Then equation (5.2.41) simplifies to

gradv = ei ⊗ grad vi + 0 = ei ⊗ vi,kek, (5.2.43)

and finally the gradient of a vector field in a 3-dimensional Cartesian basis of the Euclidean vector space is given by

gradv = vi,k (ei ⊗ ek) . (5.2.44)
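Equation (5.2.44) can be implemented with central differences; in particular, the identity gradx = 1 of (5.2.19) serves as a built-in check. A minimal sketch, assuming NumPy (the helper `grad` is a made-up name):

```python
import numpy as np

def grad(v, x, h=1e-6):
    """Gradient of a vector field in a Cartesian basis: (grad v)_ik = v_i,k (5.2.44)."""
    L = np.zeros((3, 3))
    for k in range(3):
        d = np.zeros(3); d[k] = h
        L[:, k] = (v(x + d) - v(x - d)) / (2 * h)
    return L

x0 = np.array([0.3, -1.2, 2.0])

# grad of the position field is the identity tensor (5.2.19)
assert np.allclose(grad(lambda x: x, x0), np.eye(3), atol=1e-8)

# A non-trivial example field: v = (x2*x3, x1^2, x3)
v = lambda x: np.array([x[1] * x[2], x[0] ** 2, x[2]])
L = grad(v, x0)
assert abs(L[0, 1] - x0[2]) < 1e-6      # v1,2 = x3
assert abs(L[1, 0] - 2 * x0[0]) < 1e-6  # v2,1 = 2 x1
```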

5.2.7 Divergence of Vector and Tensor Fields

The divergence of a vector field is defined by

div v = tr (gradv) = gradv : 1, (5.2.45)

and must be a scalar quantity, because the gradient of a vector is a second order tensor, and the trace of a second order tensor defines a scalar quantity. The divergence of a tensor field is defined by

divT = grad (T) : 1 , and ∀T ∈ V⊗ V (5.2.46)

and must be a vector-valued quantity, because the scalar product of the second order unit tensor 1 and the third order tensor grad (T) is a vector-valued quantity. Another possible definition is given by

a · divT = div (TTa) = grad (TTa) : 1. (5.2.47)


For an arbitrary scalar field α ∈ R, arbitrary vector fields v,w ∈ V, and arbitrary tensor fields S,T ∈ V⊗ V the following identities hold,

div (αv) = v · gradα + α div v, (5.2.48)

div (αT) = T gradα + α divT, (5.2.49)

div (gradv)T = grad (div v) , (5.2.50)

div (v ×w) = (gradv ×w) : 1− (gradw × v) : 1 (5.2.51)

= w · rotv − v · rotw,

div (v ⊗w) = (gradv)w + (divw)v, (5.2.52)

div (Tv) =(divTT

)· v +TT : gradv, (5.2.53)

div (v ×T) = v × divT+ gradv ×T, (5.2.54)

div (TS) = (gradT)S+T divS. (5.2.55)
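One of these identities, div (αv) = v · gradα + α div v (5.2.48), can be spot-checked numerically in a Cartesian basis. A sketch assuming NumPy (the field choices are arbitrary):

```python
import numpy as np

h = 1e-5

def D(f, x, k):
    # central difference w.r.t. coordinate x_k
    d = np.zeros(3); d[k] = h
    return (f(x + d) - f(x - d)) / (2 * h)

alpha = lambda x: x[0] * x[2] + x[1] ** 2
v     = lambda x: np.array([np.sin(x[0]), x[1] * x[2], x[0] ** 2])

div_f  = lambda f, x: sum(D(f, x, i)[i] for i in range(3))
grad_s = lambda f, x: np.array([D(f, x, k) for k in range(3)])

x0  = np.array([0.2, 0.9, -1.3])
lhs = div_f(lambda y: alpha(y) * v(y), x0)
rhs = np.dot(v(x0), grad_s(alpha, x0)) + alpha(x0) * div_f(v, x0)
assert abs(lhs - rhs) < 1e-6           # identity (5.2.48)
```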

5.2.8 Index Notation of the Divergence of Vector Fields

Let v = v (x) = vigi ∈ V be a vector field, with vi = vi (Θ1,Θ2,Θ3, . . . ,Θn); then a basis is given by

gi = x,i = ∂x/∂Θi. (5.2.56)

The definition of the divergence (5.2.45) of a vector field, using the index notation of the gradient (5.2.35), implies

div v = gradv : 1 = [vi |k (gi ⊗ gk)] : [δrs (gr ⊗ gs)]
= vi |k δrs gir gks = vi |k gis gks = vi |k δki ,

div v = vi |i . (5.2.57)

The divergence of a vector field is a scalar quantity and an invariant.

5.2.9 Index Notation of the Divergence of Tensor Fields

Let T be a tensor, given by

T = T (x) = T ik (gi ⊗ gk) ∈ V⊗ V, (5.2.58)

and

T ik = T ik(Θ1,Θ2,Θ3, . . . ,Θn

), (5.2.59)

and with equation (5.2.47) the divergence of this tensor is given by

divT = gradT : 1 , and 1 = δrsgr ⊗ gs. (5.2.60)


The divergence of a second order tensor is a vector, see also equation (5.2.52),

divT = div (T ik gi ⊗ gk) = [grad (T ik gi)] gk + [div gk] T ik gi, (5.2.61)

and with equation (5.2.14),

divT = [gi ⊗ gradT ik + T ik gradgi] gk + (gradgk : 1) T ik gi
= [gi ⊗ T ik,j gj + T ik Γsij gs ⊗ gj] gk + (Γskj gs ⊗ gj) : δrl (gl ⊗ gr) T ik gi
= gi T ik,j δjk + T ik Γsij gs δjk + Γskj δrl δls δrj T ik gi
= T ik,k gi + T ik Γsik gs + Γjkj T ik gi,

and finally, after renaming the dummy indices,

divT = (T kl,l + T lm Γklm + T km Γlml) gk. (5.2.62)

The term T kl |l defined by

T kl |l = T kl,l + T lm Γklm + T km Γlml, (5.2.63)

is the so-called covariant derivative of the tensor coefficients w.r.t. the coordinates Θl; then the divergence is given by

divT = T kl |l gk. (5.2.64)

Other representations are possible, e.g. a mixed formulation is given by

divT = T kl |k gl, (5.2.65)

and with the covariant derivative T kl |k,

Tkl |k = (Tkl,k − Tkn Γnlk + Tnl Γkkn) . (5.2.66)

5.2.10 The Divergence in a 3-dim. Cartesian Basis in Euclidean Space

A vector field v in a Cartesian basis ei ∈ E3 is represented by equation (5.2.40) and its gradient by equation (5.2.41). The divergence defined by (5.2.45), rewritten using the definition of the gradient of a vector field in a Cartesian basis (5.2.44), is given by

div v = gradv : 1 = (vi,k ei ⊗ ek) : (δrs er ⊗ es) = vi,k δrs δir δks = vi,k δik,

div v = vi,i.

The divergence of a vector field in the 3-dimensional Euclidean space with a Cartesian basis E3 is a scalar invariant, and is given by

div v = vi,i, (5.2.67)

or in its complete description by

div v = ∂v1/∂x1 + ∂v2/∂x2 + ∂v3/∂x3. (5.2.68)
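A quick numerical check of div v = vi,i, assuming NumPy (the field v is an arbitrary example and `div` a made-up helper name):

```python
import numpy as np

def div(v, x, h=1e-6):
    """div v = v_i,i (5.2.67), by central differences in a Cartesian basis."""
    s = 0.0
    for i in range(3):
        d = np.zeros(3); d[i] = h
        s += (v(x + d)[i] - v(x - d)[i]) / (2 * h)
    return s

# v = (x1^2, x1*x2, x3)  =>  div v = 2 x1 + x1 + 1
v  = lambda x: np.array([x[0] ** 2, x[0] * x[1], x[2]])
x0 = np.array([1.0, 2.0, -0.5])
assert abs(div(v, x0) - (2 * x0[0] + x0[0] + 1.0)) < 1e-7
```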


5.2.11 Rotation or Curl of Vector and Tensor Fields

The rotation of a vector field v (x) is defined with the fundamental tensor ³E by

rotv = ³E (gradv)T . (5.2.69)

In English textbooks the curl operator curl is in most cases used instead of the rotation operator rot,

rotv = curlv. (5.2.70)

The rotation, resp. curl, of a vector field, rotv (x) or curlv (x), is a unique vector field. Sometimes another definition of the rotation of a vector field is given by

rotv = div (1× v) = 1× gradv. (5.2.71)

For an arbitrary scalar field α ∈ R, arbitrary vector fields v,w ∈ V, and an arbitrary tensor field T ∈ V⊗ V the following identities hold,

rot gradα = 0, (5.2.72)

div rotv = 0, (5.2.73)

rot gradv = 0, (5.2.74)

rot (gradv)T = grad rotv, (5.2.75)

rot (αv) = α rotv + gradα× v, (5.2.76)

rot (v ×w) = v divw − gradwv −w div v + gradvw

= div (v ⊗w −w ⊗ v) , (5.2.77)

div rotT = rot divTT , (5.2.78)

div (rotT)T = 0, (5.2.79)

(rot rotT)T = rot rotTT , (5.2.80)

rot (α1) = − [rot (α1)]T , (5.2.81)

rot (Tv) = rotTTv + (gradv)T ×T. (5.2.82)

Also important to notice is that, if the tensor field T is symmetric, then the following identity holds,

rotT : 1 = 0. (5.2.83)
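The identities rot gradα = 0 (5.2.72) and div rotv = 0 (5.2.73) can be verified numerically with nested central differences. A sketch assuming NumPy (helper names invented):

```python
import numpy as np

def grad_vec(v, x, h=1e-5):
    # (grad v)_ik = v_i,k by central differences
    L = np.zeros((3, 3))
    for k in range(3):
        d = np.zeros(3); d[k] = h
        L[:, k] = (v(x + d) - v(x - d)) / (2 * h)
    return L

def rot(v, x):
    L = grad_vec(v, x)
    return np.array([L[2, 1] - L[1, 2], L[0, 2] - L[2, 0], L[1, 0] - L[0, 1]])

def grad_scal(a, x, h=1e-5):
    d = np.eye(3) * h
    return np.array([(a(x + d[k]) - a(x - d[k])) / (2 * h) for k in range(3)])

x0    = np.array([0.4, -0.7, 1.1])
alpha = lambda x: x[0] * x[1] ** 2 + np.sin(x[2])
v     = lambda x: np.array([x[1] * x[2], x[0] ** 3, x[0] + x[1]])

# rot grad alpha = 0 (5.2.72)
assert np.allclose(rot(lambda x: grad_scal(alpha, x), x0), 0.0, atol=1e-4)
# div rot v = 0 (5.2.73): the trace of grad(rot v) vanishes
dvr = np.trace(grad_vec(lambda x: rot(v, x), x0))
assert abs(dvr) < 1e-4
```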

5.2.12 Laplacian of a Field

The laplacian ∆α of a scalar field, or the Laplace operator applied to a scalar field, is defined by

∆α = grad gradα : 1. (5.2.84)

The laplacian ∆α of a scalar field is a scalar quantity. The laplacian ∆v of a vector field is defined by

∆v = (grad gradv)1, (5.2.85)


and ∆v is a vector-valued quantity. The definition of the laplacian ∆T of a tensor field is given by

∆T = (grad gradT)1, (5.2.86)

and ∆T is a tensor-valued quantity. For an arbitrary vector field v ∈ V and an arbitrary tensor field T ∈ V⊗ V the following identities hold,

div [gradv ± (gradv)T] = ∆v ± grad div v, (5.2.87)

rot rotv = grad div v −∆v, (5.2.88)

∆trT = tr∆T (5.2.89)

rot rotT = −∆T+ grad divT+ (grad divT)T − grad grad trT+ 1 [∆ (trT)− div divT] . (5.2.90)

Finally, if the tensor field T is symmetric and defined by T = S− 1 trS, with the symmetric part given by S, then the following identity holds,

rot rotT = −∆S+ grad divS+ (grad divS)T − 1 div divS. (5.2.91)
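The identity rot rotv = grad div v − ∆v (5.2.88) can be checked numerically for a polynomial field, assuming NumPy (helper names are invented for this sketch):

```python
import numpy as np

h = 1e-4  # step for the nested central differences

def D(f, x, k):
    d = np.zeros(3); d[k] = h
    return (f(x + d) - f(x - d)) / (2 * h)

v  = lambda x: np.array([x[1] ** 2 * x[2], x[0] * x[2] ** 2, x[0] ** 2 * x[1]])
x0 = np.array([0.5, 1.0, -0.8])

rot = lambda f, x: np.array([D(f, x, 1)[2] - D(f, x, 2)[1],
                             D(f, x, 2)[0] - D(f, x, 0)[2],
                             D(f, x, 0)[1] - D(f, x, 1)[0]])
div = lambda f, x: sum(D(f, x, i)[i] for i in range(3))
lap = lambda f, x: sum(f(x + e) - 2 * f(x) + f(x - e)
                       for e in np.eye(3) * h) / h ** 2

lhs = rot(lambda y: rot(v, y), x0)                               # rot rot v
rhs = np.array([D(lambda y: div(v, y), x0, k) for k in range(3)]) - lap(v, x0)
assert np.allclose(lhs, rhs, atol=1e-3)                          # (5.2.88)
```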


5.3 Integral Theorems

5.3.1 Definitions

The surface integral of the tensor product of a vector field u (x) and an area vector da shall be transformed into a volume integral. The volume element dV with the surface dA is given at a point P by the position vector x in the Euclidean space E3. The surface dA of the volume element dV is described by the six surface elements, represented by the area vectors da1, . . ., and da6.

[Figure 5.6: The volume element dV with the surface dA.]

da1 = dΘ2dΘ3g3 × g2 = −dΘ2dΘ3√gg1 = −da4, (5.3.1)

da2 = dΘ1dΘ3g1 × g3 = −dΘ1dΘ3√gg2 = −da5, (5.3.2)

and

da3 = dΘ1dΘ2g2 × g1 = −dΘ1dΘ2√gg3 = −da6. (5.3.3)

The volume of the volume element dV is given by the scalar triple product of the tangent vectors gi associated with the curvilinear coordinates Θi at the point P ,

dV = (g1 × g2) · g3dΘ1dΘ2dΘ3,


resp.

dV = √g g3 · g3 dΘ1dΘ2dΘ3 = √g dΘ1dΘ2dΘ3. (5.3.4)

Let ui be defined as the mean value of the vector field u in the area element i; for example, for i = 1 a Taylor series is derived like this,

u1 = u + (∂u/∂Θ2) (dΘ2/2) + (∂u/∂Θ3) (dΘ3/2). (5.3.5)

The Taylor series for the area element i = 4 is given by

u4 = u1 + (∂u1/∂Θ1) dΘ1,
u4 = u1 + ∂ [u + (∂u/∂Θ2) (dΘ2/2) + (∂u/∂Θ3) (dΘ3/2)] /∂Θ1 dΘ1,

and finally

u4 = u1 + (∂u/∂Θ1) dΘ1 + (∂²u/∂Θ1∂Θ2) (dΘ2/2) dΘ1 + (∂²u/∂Θ1∂Θ3) (dΘ3/2) dΘ1. (5.3.6)

Considering only the linear terms for i = 4, 5, 6 implies

u4 = u1 + (∂u/∂Θ1) dΘ1, (5.3.7)
u5 = u2 + (∂u/∂Θ2) dΘ2, (5.3.8)

and

u6 = u3 + (∂u/∂Θ3) dΘ3. (5.3.9)

The surface integral is approximated by the sum of the six area elements,

∫_{dA} u⊗ da = ∑_{i=1}^{6} ui ⊗ dai. (5.3.10)

This equation is rewritten with all six terms,

∑_{i=1}^{6} ui ⊗ dai = u1 ⊗ da1 + u2 ⊗ da2 + u3 ⊗ da3 + u4 ⊗ da4 + u5 ⊗ da5 + u6 ⊗ da6,

inserting equations (5.3.1)-(5.3.3),

= (u1 − u4)⊗ da1 + (u2 − u5)⊗ da2 + (u3 − u6)⊗ da3,


and finally, with equations (5.3.7)-(5.3.9),

∑_{i=1}^{6} ui ⊗ dai = −(∂u/∂Θ1) dΘ1 ⊗ da1 − (∂u/∂Θ2) dΘ2 ⊗ da2 − (∂u/∂Θ3) dΘ3 ⊗ da3. (5.3.11)

Equations (5.3.1)-(5.3.3) inserted in (5.3.11) implies

∑_{i=1}^{6} ui ⊗ dai = (∂u/∂Θ1 ⊗ g1 + ∂u/∂Θ2 ⊗ g2 + ∂u/∂Θ3 ⊗ g3) √g dΘ1dΘ2dΘ3,

with the summation convention, i = 1, . . . , 3,

= (∂u/∂Θi ⊗ gi) dV ,

and finally with the definition of the gradient,

∑_{i=1}^{6} ui ⊗ dai = gradu dV . (5.3.12)

Comparing this result with equation (5.3.10) yields

∫_{dA} u⊗ da = ∑_{i=1}^{6} ui ⊗ dai = (∂u/∂Θi ⊗ gi) dV = gradu dV . (5.3.13)

Equation (5.3.13) holds for every subvolume dV with the surface dA.

[Figure 5.7: The volume, the surface and the subvolumes of a body.]

If the terms of the subvolumes are summed over, then the dyadic products of the inner surfaces vanish, because every dyadic product appears twice. Every area vector da appears once with the normal direction n and once with the opposite direction −n. The vector field u is by definition continuous, i.e. on each of the two sides of an inner surface the value of the vector field is equal. In order to solve the whole problem it is only necessary to sum (to integrate) over the whole outer surface dA with the normal unit vector n. If the summation over all subvolumes dV with the surfaces da is rewritten as an integral, like in equation (5.3.13), then for the whole volume and surface the following relation holds,

∑_{i=1}^{nV} ∫_{dA} u⊗ da = ∫_A u⊗ da = ∫_V gradu dV , (5.3.14)

and with da = n da,

∑_{i=1}^{nV} ∫_{dA} u⊗ n da = ∫_A u⊗ n da = ∫_V gradu dV . (5.3.15)

With these integral theorems it is easy to develop integral theorems for scalar fields, vector fields and tensor fields.

5.3.2 Gauss’s Theorem for a Vector Field

Gauss's theorem is defined by

∫_A u · n da = ∫_V divu dV . (5.3.16)

Proof. Equation (5.3.15) is multiplied scalarly with the unit tensor 1 from the left-hand side,

∫_A 1 : u⊗ n da = ∫_V 1 : gradu dV , (5.3.17)

with the mixed formulation of the unit tensor 1 = gj ⊗ gj ,

1 : u⊗ n = (gj ⊗ gj) : (uk gk ⊗ ni gi) = uk ni gjk δji = uj nj ,

1 : u⊗ n = u · n, (5.3.18)

and the scalar product of the unit tensor and the gradient of the vector field,

1 : gradu = tr (gradu) ,
1 : gradu = divu. (5.3.19)

Finally, inserting equations (5.3.18) and (5.3.19) in (5.3.17) implies

∫_A 1 : u⊗ n da = ∫_A u · n da = ∫_V 1 : gradu dV = ∫_V divu dV . (5.3.20)
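Gauss's theorem can be illustrated numerically by comparing both sides of (5.3.16) on the unit cube with a midpoint-rule quadrature. A sketch assuming NumPy (the field u is chosen arbitrarily):

```python
import numpy as np

# Vector field u and its exact divergence
u    = lambda x, y, z: np.array([x * y, y * z, z * x])
divu = lambda x, y, z: y + z + x

N = 40                                    # grid points per direction
t = (np.arange(N) + 0.5) / N              # midpoint rule on [0, 1]

# Volume integral of div u over the unit cube (volume = 1)
X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
vol = divu(X, Y, Z).mean()

# Surface integral of u . n over the six faces
A, B = np.meshgrid(t, t, indexing="ij")
surf = 0.0
surf += np.mean(u(1.0, A, B)[0]) - np.mean(u(0.0, A, B)[0])   # faces x=1, x=0
surf += np.mean(u(A, 1.0, B)[1]) - np.mean(u(A, 0.0, B)[1])   # faces y=1, y=0
surf += np.mean(u(A, B, 1.0)[2]) - np.mean(u(A, B, 0.0)[2])   # faces z=1, z=0

assert abs(vol - 1.5) < 1e-3              # exact: int (x+y+z) dV = 3/2
assert abs(surf - vol) < 1e-3             # Gauss's theorem (5.3.16)
```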


5.3.3 Divergence Theorem for a Tensor Field

The divergence theorem is defined by

∫_A T · n da = ∫_V divT dV . (5.3.21)

Proof. If the vector a is constant, then this implies

a · ∫_A T · n da = ∫_A a ·T · n da = ∫_A n ·TT · a da = ∫_A (TT · a) · n da. (5.3.22)

With TT · a = u being a vector, it is possible to use Gauss's theorem (5.3.16); this implies

a · ∫_A T · n da = ∫_A (TT · a) · n da = ∫_V div (TT · a) dV ,

with equation (5.2.47) and the vector a being constant,

div (TTa) = (divT) · a+T : grad a = (divT) · a+ 0 = a · divT,

a · (∫_A T · n da − ∫_V divT dV) = 0,

and finally

∫_A T · n da = ∫_V divT dV .

5.3.4 Integral Theorem for a Scalar Field

The integral theorem for a scalar field α is defined by

∫_A α da = ∫_A αn da = ∫_V gradα dV . (5.3.23)

Proof. If the vector a is constant, then the following condition holds,

a · ∫_A αn da = ∫_A αa · n da.


It is possible to use Gauss's theorem (5.3.16), because αa is a vector; this implies

a · ∫_A αn da = ∫_V div (αa) dV . (5.3.24)

Using the identity (5.2.48) and the vector a being constant yields

div (αa) = a · gradα+ α div a = a · gradα+ 0 = a · gradα. (5.3.25)

Inserting relation (5.3.25) in equation (5.3.24) implies

a · ∫_A αn da = ∫_V a · gradα dV ,

a · (∫_A αn da − ∫_V gradα dV) = 0,

and finally the identity

∫_A αn da = ∫_V gradα dV .

5.3.5 Integral of a Cross Product or Stokes’s Theorem

Stokes's theorem for the cross product of a vector field u and its normal vector n is defined by

∫_A n× u da = ∫_V rotu dV . (5.3.26)

Proof. Let the vector a be constant,

a · ∫_A n× u da = ∫_A a · (n× u) da = ∫_A (u× a) · n da,

With the cross product u× a being a vector, it is possible to use Gauss's theorem (5.3.16),

a · ∫_A n× u da = ∫_V div (u× a) dV . (5.3.27)

The identity (5.2.51) with the vector a being constant implies

div (u× a) = a · rotu− u · rot a = a · rotu− 0 = a · rotu, (5.3.28)


inserting relation (5.3.28) in equation (5.3.27) yields

a · ∫_A n× u da = ∫_V a · rotu dV ,

and finally

∫_A n× u da = ∫_V rotu dV .

5.3.6 Another Interpretation of Gauss’s Theorem

Gauss's theorem, see equation (5.3.16), can also be established by inverting the definition of the divergence. Let u (x) be a continuous and differentiable vector field. The volume integral is approximated by a limit, see also (5.3.10), where the whole volume is approximated by subvolumes ∆Vi,

∫_V u (x) dV = lim_{∆Vi→0} ∑_i ui ∆Vi. (5.3.29)

Let ui be the mean value in a subvolume ∆Vi. The volume integral of the divergence of the vector field u, inserting the relation of equation (5.3.29), is given by

∫_V divu dV = lim_{∆Vi→0} ∑_i (divui) ∆Vi. (5.3.30)

The divergence (source density) is defined by

divu = lim_{∆V→0} [∮_{∆a} u · da / ∆V ]. (5.3.31)

The mean value ui in equation (5.3.30) is replaced by the identity of equation (5.3.31),

(divui) = [∮_{∆ai} ui · da] / ∆Vi,

∫_V divu dV = lim_{∆Vi→0} ∑_i [∮_{∆ai} ui · da / ∆Vi] ∆Vi = lim_{∆Vi→0} ∑_i ∮_{∆ai} ui · da,

and finally, with the summation over all subvolumes as at the beginning of this section,

∫_V divu dV = ∫_A u · da = ∫_A u · n da. (5.3.32)


Chapter 6

Exercises

159


Chapter Table of Contents

6.1 Application of Matrix Calculus on Bars and Plane Trusses . . . . . . . . . 162

6.1.1 A Simple Statically Determinate Plane Truss . . . . . . . . . . . . . . 162

6.1.2 A Simple Statically Indeterminate Plane Truss . . . . . . . . . . . . . 164

6.1.3 Basic Relations for bars in a Local Coordinate System . . . . . . . . . 165

6.1.4 Basic Relations for bars in a Global Coordinate System . . . . . . . . . 167

6.1.5 Assembling the Global Stiffness Matrix . . . . . . . . . . . . . . . . . 168

6.1.6 Computing the Displacements . . . . . . . . . . . . . . . . . . . . . . 170

6.1.7 Computing the Forces in the bars . . . . . . . . . . . . . . . . . . . . 171

6.1.8 The Principle of Virtual Work . . . . . . . . . . . . . . . . . . . . . . 172

6.2 Calculating a Structure with the Eigenvalue Problem . . . . . . . . . . . . 174

6.2.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

6.2.2 The Equilibrium Conditions after the Excursion . . . . . . . . . . . . . 175

6.2.3 Transformation into a Special Eigenvalue Problem . . . . . . . . . . . 177

6.2.4 Solving the Special Eigenvalue Problem . . . . . . . . . . . . . . . . . 178

6.2.5 Orthogonal Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

6.2.6 Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

6.3 Fundamentals of Tensors in Index Notation . . . . . . . . . . . . . . . . . . 182

6.3.1 The Coef£cient Matrices of Tensors . . . . . . . . . . . . . . . . . . . 182

6.3.2 The Kronecker Delta and the Trace of a Matrix . . . . . . . . . . . . . 183

6.3.3 Raising and Lowering of an Index . . . . . . . . . . . . . . . . . . . . 184

6.3.4 Permutation Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

6.3.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187

6.3.6 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

6.4 Various Products of Second Order Tensors . . . . . . . . . . . . . . . . . . 190

6.4.1 The Product of a Second Order Tensor and a Vector . . . . . . . . . . . 190

6.4.2 The Tensor Product of Two Second Order Tensors . . . . . . . . . . . 190

6.4.3 The Scalar Product of Two Second Order Tensors . . . . . . . . . . . . 190

6.4.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

6.4.5 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192

6.5 Deformation Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194

6.5.1 Tensors of the Tangent Mappings . . . . . . . . . . . . . . . . . . . . 194

6.5.2 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003

Chapter Table of Contents 161

6.5.3 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196

6.6 The Moving Trihedron, Derivatives and Space Curves . . . . . . . . . . . . 198

6.6.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198

6.6.2 The Base vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199

6.6.3 The Curvature and the Torsion . . . . . . . . . . . . . . . . . . . . . . 201

6.6.4 The Christoffel Symbols . . . . . . . . . . . . . . . . . . . . . . . . . 202

6.6.5 Forces and Moments at an Arbitrary sectional area . . . . . . . . . . . 203

6.6.6 Forces and Moments for the Given Load . . . . . . . . . . . . . . . . . 207

6.7 Tensors, Stresses and Cylindrical Coordinates . . . . . . . . . . . . . . . . 210

6.7.1 The Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

6.7.2 Co- and Contravariant Base Vectors . . . . . . . . . . . . . . . . . . . 212

6.7.3 Coef£cients of the Various Stress Tensors . . . . . . . . . . . . . . . . 213

6.7.4 Physical Components of the Contravariant Stress Tensor . . . . . . . . 215

6.7.5 Invariants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216

6.7.6 Principal Stress and Principal Directions . . . . . . . . . . . . . . . . . 221

6.7.7 Deformation Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

6.7.8 Normal and Shear Stress . . . . . . . . . . . . . . . . . . . . . . . . . 224


6.1 Application of Matrix Calculus on Bars and Plane Trusses

6.1.1 A Simple Statically Determinate Plane Truss

A very simple truss formed by two bars is given in figure (6.1), loaded by an arbitrary force F in the negative y-direction.

[Figure 6.1: A simple statically determinate plane truss. Bar I (length l, sectional area A1, Young's modulus E1) and bar II (length l, A2, E2) meet at the node 2 under the angle α = 45° and carry the load F.]

The discrete values of the various quantities, like the Young's modulus E_i or the sectional area A_i, are of no further interest at the moment. Only the forces in direction of the bars are to be computed. The equilibrium conditions of forces at the node 2 are

[Figure 6.2: Free-body diagram for the node 2, with the bar forces S1 and S2 and the external load F.]

given in horizontal direction by

   ∑ FH = 0 = −S2 sin α + S1 sin α ,                        (6.1.1)

and in vertical direction by

   ∑ FV = 0 = −F + S2 cos α + S1 cos α ,                    (6.1.2)

see also the free-body diagram (6.2). These relations imply

   S1 = S2 , and S1 + S2 = F / cos α ,                      (6.1.3)

and finally

   S1 = S2 = F / (2 cos α) .                                (6.1.4)

The equilibrium conditions of forces at the node 1 are given in horizontal direction by

[Figure 6.3: Free-body diagrams for the nodes 1 and 3, with the reactive forces F1x, F1y, F3x, F3y and the bar forces S1, S2.]

   ∑ FH = 0 = F1x − S1 cos α ,                              (6.1.5)

and in vertical direction by

   ∑ FV = 0 = F1y − S1 sin α ,                              (6.1.6)

see also the right-hand side of the free-body diagram (6.3). The first of these two relations yields with equation (6.1.4)

   F1x = S1 cos α = F cos α / (2 cos α) , and finally F1x = (1/2) F ,   (6.1.7)

and the second one implies

   F1y = S1 sin α = F sin α / (2 cos α) , and finally F1y = (1/2) F ,   (6.1.8)

because sin α = cos α for α = 45°.

The equilibrium conditions of forces at the node 3 are given in horizontal direction by

   ∑ FH = 0 = F3x + S2 cos α ,                              (6.1.9)

and in vertical direction by

   ∑ FV = 0 = F3y − S2 sin α ,                              (6.1.10)

see also the left-hand side of the free-body diagram (6.3). The first of these two relations yields with equation (6.1.4)

   F3x = −S2 cos α = −F cos α / (2 cos α) , and finally F3x = −(1/2) F ,   (6.1.11)

and the second one implies

   F3y = S2 sin α = F sin α / (2 cos α) , and finally F3y = (1/2) F .      (6.1.12)

The stresses in normal direction of the bars and the nodal displacements could be computed with the relations described in one of the following sections, see also equations (6.1.17)-(6.1.19). The important thing to notice here is that there are overall 6 equations to solve, in order to compute the 6 unknown quantities F1x, F1y, F3x, F3y, S1, and S2. This is characteristic for a statically determinate truss, resp. system; the other case is discussed in the following section.
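The closed-form results (6.1.4)-(6.1.12) can be verified numerically. The sketch below assumes an arbitrary load F = 1.0 and the angle α = 45° from figure 6.1.

```python
import math

# A minimal check of equations (6.1.1)-(6.1.12) for the two-bar truss,
# assuming the load F = 1.0 and the angle alpha = 45 degrees from figure 6.1.
F = 1.0
alpha = math.radians(45.0)

# Bar forces from equation (6.1.4).
S1 = S2 = F / (2.0 * math.cos(alpha))

# Reactive forces from equations (6.1.7)-(6.1.12).
F1x, F1y = S1 * math.cos(alpha), S1 * math.sin(alpha)
F3x, F3y = -S2 * math.cos(alpha), S2 * math.sin(alpha)

# Node 2 equilibrium residuals, equations (6.1.1) and (6.1.2).
res_H = -S2 * math.sin(alpha) + S1 * math.sin(alpha)
res_V = -F + S2 * math.cos(alpha) + S1 * math.cos(alpha)

print(F1x, F1y, F3x, F3y)   # 0.5, 0.5, -0.5, 0.5 (in multiples of F)
```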

6.1.2 A Simple Statically Indeterminate Plane Truss

[Figure 6.4: A simple statically indeterminate plane truss. The bars I, II, and III connect the supported nodes 1, 2, and 3 with the node 4, which is loaded by Fx = 10 kN and Fy = 2 kN. The given values are the constant axial rigidity (EA)_i = const., the angles α1 = 30°, α2 = 60°, α3 = 90°, and the height h = 5.0 m.]

The truss given in the sketch in figure (6.4) is statically indeterminate, i.e. it is impossible to compute all the reactive forces just with the equilibrium conditions for the forces at the different nodes!

[Figure 6.5: Free-body diagrams for the nodes 2 and 4, with the reactive forces F2x, F2y, the bar forces SI, SII, SIII, and the loads Fx = 10 kN, Fy = 2 kN.]

In order to get enough equations for computing all unknown quantities, it is necessary to use additional equations, like the ones given in equations (6.1.17)-(6.1.19). For example the equilibrium condition of horizontal forces at node 4 is given by

   ∑ FH = 0 = Fx − SII cos α2 − SI cos α1 ,                 (6.1.13)

and in vertical direction by

   ∑ FV = 0 = Fy − SIII − SII sin α2 − SI sin α1 .          (6.1.14)

The moment equilibrium condition at this point is of no use, because all lines of action cross the node 4 itself! The equilibrium conditions at the supports contain for every node 1-3 two unknown reactive forces, one horizontal and one vertical, and one also unknown force in direction of the bar, e.g. for the node 2,

   ∑ FH = 0 = F2x + SII cos α2 ,                            (6.1.15)

and

   ∑ FV = 0 = F2y + SII sin α2 .                            (6.1.16)

Finally, summarizing all possible and useful equations and all unknown quantities implies that there are overall 9 unknown quantities but only 8 equations! This result implies that it is necessary to take another way to solve this problem than using the equilibrium conditions of forces!

6.1.3 Basic Relations for Bars in a Local Coordinate System

An arbitrary bar with its local coordinate system x̄, ȳ is given in figure (6.6), with the nodal forces f̄ix, and f̄iy at a point i with i = 1, 2 in the local coordinate system. The following relations hold


[Figure 6.6: An arbitrary bar, with A, E, l, and its local coordinate system x̄, ȳ, with the nodal forces f̄1x, f̄1y, f̄2x, f̄2y and the bar force S.]

in the local coordinate system x̄, ȳ of an arbitrary single bar, see figure (6.6),

   stresses       σx = S / A ,                              (6.1.17)
   kinematics     εx = dūx / dx̄ = Δūx / l ,                 (6.1.18)
   material law   σx = E εx .                               (6.1.19)

In order to consider the additional relations given above, it is useful to combine the nodal displacements and the nodal forces for the two nodes of the bar in the local coordinate system. The nodal displacements in the local coordinate direction x̄ are given by

   Δūx = q^T ū = −ū1x + ū2x ,                               (6.1.20)

and the nodal forces are given by the equilibrium conditions of forces at the nodes of the bar,

   f̄ = q S ,                                               (6.1.21)

with

   q = [−1, 0, 1, 0]^T , ū = [ū1x, ū1y, ū2x, ū2y]^T , and f̄ = [f̄1x, f̄1y, f̄2x, f̄2y]^T .   (6.1.22)

The relations between stresses and displacements, see equation (6.1.17), are given by

   S = A σx = (EA / l) Δūx ,                                (6.1.23)

inserting equation (6.1.23) in (6.1.21) implies

   f̄ = (EA / l) q q^T ū ,                                  (6.1.24)

and finally resumed as the symmetric local stiffness matrix K̄,

   f̄ = K̄ ū , with K̄ = (EA / l) [[1, 0, −1, 0], [0, 0, 0, 0], [−1, 0, 1, 0], [0, 0, 0, 0]] .   (6.1.25)
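The structure of (6.1.24)-(6.1.25) is just the outer product q q^T. A short sketch, assuming unit axial rigidity EA = 1 and unit length l = 1:

```python
import numpy as np

# Local bar stiffness matrix from equations (6.1.24)-(6.1.25): K = (EA/l) q q^T.
# A sketch assuming unit axial rigidity EA = 1 and unit length l = 1.
EA, l = 1.0, 1.0
q = np.array([-1.0, 0.0, 1.0, 0.0])

K_local = (EA / l) * np.outer(q, q)

expected = (EA / l) * np.array([[ 1.0, 0.0, -1.0, 0.0],
                                [ 0.0, 0.0,  0.0, 0.0],
                                [-1.0, 0.0,  1.0, 0.0],
                                [ 0.0, 0.0,  0.0, 0.0]])
print(np.allclose(K_local, expected))   # True
```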

6.1.4 Basic Relations for Bars in a Global Coordinate System

An arbitrary bar with its local coordinate system x̄, ȳ and a global coordinate system x, y is given in figure (6.7). At a point i with i = 1, 2 the nodal forces f̄ix, f̄iy, and the nodal displacements ūix, ūiy are defined in the local coordinate system. It is also possible to define the nodal forces fix, fiy, and the nodal displacements vix, viy in the global coordinate system.

[Figure 6.7: An arbitrary bar in a global coordinate system x, y, rotated by the angle α against its local coordinate system x̄, ȳ, with the nodal quantities shown in both systems at the node 1.]

In order to combine more than one bar, it is necessary to transform the quantities given in each local coordinate system into one global coordinate system. This transformation of the local vector quantities, like the nodal load vector f̄, or the nodal displacement vector ū, is given by a multiplication with the so-called transformation matrix Q*, which is an orthogonal matrix, given by

   ā = Q* a , with Q* = [[cos α, sin α], [−sin α, cos α]] .              (6.1.26)

Because it is necessary to transform the local quantities for more than one node, the transformation matrix Q is composed of one submatrix Q* for every node,

   Q = [[Q*, 0], [0, Q*]] = [[ cos α, sin α,      0,      0],
                             [−sin α, cos α,      0,      0],
                             [      0,     0,  cos α, sin α],
                             [      0,     0, −sin α, cos α]] .          (6.1.27)

The local quantities ū, and f̄ in equation (6.1.25) are replaced by the following expressions,

   ū = Q v , with v = [v1x, v1y, v2x, v2y]^T ,                           (6.1.28)

   f̄ = Q f , with f = [f1x, f1y, f2x, f2y]^T .                          (6.1.29)

Inserting these relations in equation (6.1.25) and multiplying with the inverse of the transformation matrix from the left-hand side yields in the global coordinate system,

   f = Q^{−1} K̄ Q v = K v ,                                             (6.1.30)

with the symmetric local stiffness matrix given in the global coordinate system by

   K = (EA / l) [[  cos²α,        sin α cos α,  −cos²α,       −sin α cos α],
                 [  sin α cos α,  sin²α,        −sin α cos α, −sin²α      ],
                 [ −cos²α,       −sin α cos α,   cos²α,        sin α cos α],
                 [ −sin α cos α, −sin²α,         sin α cos α,  sin²α      ]] .   (6.1.31)
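The transformation (6.1.30) can be checked against the closed form (6.1.31). A sketch assuming an arbitrary angle α = 30° and EA/l = 1; since Q is orthogonal, Q⁻¹ = Qᵀ:

```python
import numpy as np

# Check of equations (6.1.30)-(6.1.31): the global element stiffness matrix
# K = Q^{-1} Kbar Q, sketched for an assumed angle alpha = 30 deg and EA/l = 1.
alpha = np.radians(30.0)
c, s = np.cos(alpha), np.sin(alpha)

Qstar = np.array([[c, s], [-s, c]])
Q = np.block([[Qstar, np.zeros((2, 2))], [np.zeros((2, 2)), Qstar]])

q = np.array([-1.0, 0.0, 1.0, 0.0])
Kbar = np.outer(q, q)                       # EA/l = 1, see (6.1.25)

K = Q.T @ Kbar @ Q                          # Q is orthogonal: Q^{-1} = Q^T

# Closed form from equation (6.1.31).
K_expected = np.array([[ c*c,  s*c, -c*c, -s*c],
                       [ s*c,  s*s, -s*c, -s*s],
                       [-c*c, -s*c,  c*c,  s*c],
                       [-s*c, -s*s,  s*c,  s*s]])
print(np.allclose(K, K_expected))   # True
```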

6.1.5 Assembling the Global Stiffness Matrix

There are two different ways of assembling the global stiffness matrix. The first way considers the boundary conditions from the beginning; the second one considers the boundary conditions only after the complete global stiffness matrix for all nodes is assembled. In the first way the global stiffness matrix K is assembled by summarizing relations like in equations (6.1.28)-(6.1.31) for all used elements I-III, and all given boundary conditions, like the ones given by the supports at the nodes 1-3,

   v1x = v2x = v3x = 0 ,                                    (6.1.32)
   v1y = v2y = v3y = 0 .                                    (6.1.33)

With these conditions the rigid body movement is eliminated from the system of equations, resp. the assembled global stiffness matrix; then only the unknown displacements at node 4 remain,

   v4x , and v4y .                                          (6.1.34)

This implies that it is sufficient to determine the equilibrium conditions only at node 4. The result is that it is sufficient to consider only one submatrix K*_i for every element I-III. For example the complete equilibrium conditions for bar III are given by

   [f3x, f3y, f4x, f4y]^T_III = [[. . . , . . .], [. . . , K*_3]] [v3x, v3y, v4x, v4y]^T_III , with [v3x, v3y, v4x, v4y]^T_III = [0, 0, v4x, v4y]^T_III .   (6.1.35)

Finally the equilibrium conditions at node 4, summarizing the bars i = I-III, are given by

   ∑_{i=1}^{3} [f^i_4x, f^i_4y]^T = [Fx, Fy]^T = ∑_{i=1}^{3} [K*_i] [v4x, v4y]^T ,   (6.1.36)

and in matrix notation given by

   P = K v ,                                                (6.1.37)

with the so-called compatibility conditions at node 4 given by

   v^I_4x = v^II_4x = v^III_4x ,                            (6.1.38)
   v^I_4y = v^II_4y = v^III_4y .                            (6.1.39)

By using this way of assembling the reduced global stiffness matrix, the boundary conditions are already implemented in every element stiffness matrix. The second way to assemble the reduced global stiffness matrix starts with the unreduced global stiffness matrix given by

   ∑_{i=1}^{3} [f^i_1x, f^i_1y, f^i_2x, f^i_2y, f^i_3x, f^i_3y, f^i_4x, f^i_4y]^T
      = [[K11, 0, 0, K14], [0, K22, 0, K24], [0, 0, K33, K34], [K41, K42, K43, K44]]
        [v1x, v1y, v2x, v2y, v3x, v3y, v4x, v4y]^T
      = [0, 0, 0, 0, 0, 0, Fx, Fy]^T ,                      (6.1.40)


and in matrix notation given by

   K v = P .                                                (6.1.41)

Each element, resp. each bar, could be described by a combination i, j of the numbers of the nodes used in this special element. For example for the bar (element) III the submatrices K33, K34, K43, and a part of K44, like in equation (6.1.36), are implemented in the unreduced global stiffness matrix. After inserting the submatrices Kij for the various elements and considering the boundary conditions given by equations (6.1.32)-(6.1.33), the reduced global stiffness matrix is given by

   P = K v ,                                                (6.1.42)

resp.

   [Fx, Fy]^T = (EA (3 + √3) / (8h)) [[1, 1], [1, 3]] [v4x, v4y]^T ,   (6.1.43)

see also equations (6.1.36) and (6.1.37) of the first way. But this computed reduced global stiffness matrix is not yet the desired result, because the nodal displacements and forces are the desired quantities.
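The reduced stiffness matrix in (6.1.43) can be reproduced by summing the node-4 submatrices of (6.1.31) over the three bars. A sketch assuming EA = 1, h = 1, and the bar lengths l_i = h / sin α_i from the geometry of figure 6.4 (an assumption consistent with the result (6.1.43)):

```python
import numpy as np

# Assembling the reduced global stiffness matrix of equation (6.1.43) by summing
# the 2x2 node-4 submatrices of the bars I-III, see (6.1.31) and (6.1.36).
# Sketch assuming EA = 1, h = 1, and bar lengths l_i = h / sin(alpha_i).
EA, h = 1.0, 1.0
angles = np.radians([30.0, 60.0, 90.0])     # alpha_1, alpha_2, alpha_3 from figure 6.4

K_red = np.zeros((2, 2))
for a in angles:
    c, s = np.cos(a), np.sin(a)
    l = h / s                               # assumed geometric bar length
    K_red += (EA / l) * np.array([[c*c, s*c],
                                  [s*c, s*s]])

K_expected = EA * (3.0 + np.sqrt(3.0)) / (8.0 * h) * np.array([[1.0, 1.0],
                                                               [1.0, 3.0]])
print(np.allclose(K_red, K_expected))   # True
```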

6.1.6 Computing the Displacements

The nodal displacements are computed by inverting the relation (6.1.43),

   v = K^{−1} P .                                           (6.1.44)

The inversion of a 2 × 2-matrix is trivial and given by

   K^{−1} = (1 / det K) adj K = ((8h)² / (2 (3 + √3)² (EA)²)) (EA (3 + √3) / (8h)) [[3, −1], [−1, 1]] ,   (6.1.45)

and finally the inverse of the stiffness matrix is given by

   K^{−1} = (4h / (EA (3 + √3))) [[3, −1], [−1, 1]] .       (6.1.46)

The load vector P at the node 4, see figure (6.4), is given by

   P = [10, 2]^T ,                                          (6.1.47)

and by inserting equations (6.1.46), and (6.1.47) in relation (6.1.44) the nodal displacements at node 4 are given by

   v = [v4x, v4y]^T = (4h / (EA (3 + √3))) [28, −8]^T .     (6.1.48)
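The same displacements follow from a numerical solve of K v = P. A sketch assuming EA = 1 and h = 1, so v carries the common factor 4h/(EA(3 + √3)):

```python
import numpy as np

# Solving K v = P for the node-4 displacements, equations (6.1.44)-(6.1.48).
# Sketch assuming EA = 1 and h = 1.
EA, h = 1.0, 1.0
K = EA * (3.0 + np.sqrt(3.0)) / (8.0 * h) * np.array([[1.0, 1.0],
                                                      [1.0, 3.0]])
P = np.array([10.0, 2.0])                   # loads in kN, equation (6.1.47)

v = np.linalg.solve(K, P)

v_expected = 4.0 * h / (EA * (3.0 + np.sqrt(3.0))) * np.array([28.0, -8.0])
print(np.allclose(v, v_expected))   # True
```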


6.1.7 Computing the Forces in the Bars

The forces in the various bars are computed by solving the relations given by the equations (6.1.28)-(6.1.31) for each element i, resp. for each bar,

   f^i = K^i v^i .                                          (6.1.49)

For example for the bar III, with α = 90°, the symmetric local stiffness matrix and the nodal displacements are given by

   K^3 = (EA / h) [[0, 0, 0, 0], [0, 1, 0, −1], [0, 0, 0, 0], [0, −1, 0, 1]] , v^3 = [0, 0, v4x, v4y]^T ,   (6.1.50)

with sin 90° = 1, and cos 90° = 0 in equation (6.1.31). The forces in the bars are given in the global coordinate system, see equation (6.1.30), by

   f^3 = K^3 v^3 = (4 / (3 + √3)) [0, 8, 0, −8]^T = [0, 6.762, 0, −6.762]^T kN = [f3x, f3y, f4x, f4y]^T_III ,   (6.1.51)

and in the local coordinate system associated to the bar III, see equation (6.1.29), by

   f̄^3 = Q^3 f^3 = [f̄3x, f̄3y, f̄4x, f̄4y]^T = [f3y, −f3x, f4y, −f4x]^T = [6.762, 0, −6.762, 0]^T kN .   (6.1.52)

Comparing this result with the relation (6.1.21) finally implies the force SIII in direction of the bar,

   SIII = −f̄3x = f̄4x = −6.762 kN ,                         (6.1.53)

and for the bars I and II,

   SI = 8.56 kN , and SII = 5.17 kN .                       (6.1.54)

Comparing these results as a check with the equilibrium conditions given by the equations (6.1.13)-(6.1.14), in horizontal direction,

   ∑ FH = Fx − SII cos α2 − SI cos α1 = 10.0 − 5.17 · (1/2) − 8.56 · (√3/2) ≈ 0 ,   (6.1.55)

and in vertical direction,

   ∑ FV = Fy − SIII − SII sin α2 − SI sin α1 = 2.0 + 6.762 − 5.17 · (√3/2) − 8.56 · (1/2) ≈ 0 .   (6.1.56)
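The three bar forces can also be recovered directly from the node-4 displacements (6.1.48) via the axial elongation of each bar. A sketch assuming EA = 1, h = 1, and the bar lengths l_i = h / sin α_i (consistent with the assembly above):

```python
import numpy as np

# Recomputing the bar forces S_I, S_II, S_III from the node-4 displacements
# (6.1.48); sketch assuming EA = 1, h = 1, and bar lengths l_i = h/sin(alpha_i).
EA, h = 1.0, 1.0
angles = np.radians([30.0, 60.0, 90.0])

v4 = 4.0 * h / (EA * (3.0 + np.sqrt(3.0))) * np.array([28.0, -8.0])

# Axial force: S_i = (EA/l_i) * elongation, with bar direction (cos a, sin a).
S = []
for a in angles:
    c, s = np.cos(a), np.sin(a)
    l = h / s
    S.append((EA / l) * (c * v4[0] + s * v4[1]))

print([round(x, 3) for x in S])   # approximately [8.558, 5.177, -6.762] kN
```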


6.1.8 The Principle of Virtual Work

This final section will give a small outlook on the use of the matrix calculus described above. The virtual work 1 for a bar under a constant line load p, also called the weak form of equilibrium, is given by

   δW = −∫ N δεx dx + ∫ p δux dx = 0 .                      (6.1.57)

The force in normal direction of a bar, see also equations (6.1.17)-(6.1.19), is given by

   N = EA εx .                                              (6.1.58)

With this relation the equation (6.1.57) is rewritten,

   δW = −∫ εx EA δεx dx + ∫ p δux dx = 0 .                  (6.1.59)

The various quantities w.r.t. the local variable x in equation (6.1.57), compare the vectors given by the equations (6.1.22), could be described by

   displacement           ux = q^T u ,                      (6.1.60)
   strain                 u,x = εx = q,x^T u ,              (6.1.61)
   virtual displacement   δux = q^T δu ,                    (6.1.62)
   virtual strain         δu,x = δεx = q,x^T δu ,           (6.1.63)

with the constant nodal values given by the vector u. The vectors q are the so-called shape functions, but in this very simple case they include only constant values, too. In general these shape functions depend on the position vector, in this case on the local variable x. These are some of the basic assumptions for finite elements. Inserting the relations given by (6.1.60)-(6.1.63) in equation (6.1.59), the virtual work in one element, resp. one bar, is given by

   δW = −∫ (q,x^T u)^T EA q,x^T δu dx + ∫ p q^T δu dx = 0 ,              (6.1.64)

      = −u^T [∫ q,x EA q,x^T dx] δu + [∫ p q^T dx] δu = 0 .              (6.1.65)

These integrals just describe one element, resp. one bar, but a summation over more elements may be introduced like this,

   ∑_{i=1}^{3} δW^i = ∑_{i=1}^{3} [ −(u^i)^T (∫ q,x EA q,x^T dx) δu^i + (∫ p^i q^T dx) δu^i ] = 0 ,   (6.1.66)

and finally like this,

   ∑_{i=1}^{3} [ (u^i)^T (∫ q,x EA q,x^T dx) δu^i ] = ∑_{i=1}^{3} [ (∫ p^i q^T dx) δu^i ] .           (6.1.67)

1 See also the refresher course on strength of materials.

It is easy to see that the left-hand side is very similar to the stiffness matrices, and that the right-hand side is very similar to the load vectors in equations (6.1.36) and (6.1.37), or (6.1.42) and (6.1.43). This simple example shows the close relations between the principle of virtual work, the finite element methods, and the matrix calculus described above.
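The element integral ∫ q,x EA q,xᵀ dx in (6.1.65)-(6.1.67) indeed reproduces the local stiffness matrix (6.1.25). A sketch assuming a linear axial displacement field, whose constant derivative vector is q,x = (1/l)[−1, 0, 1, 0]ᵀ, with EA = 1 and l = 2:

```python
import numpy as np

# The element integral of (6.1.65): with the assumed constant shape-function
# derivative q,x = (1/l)[-1, 0, 1, 0]^T, the integral over the element length
# reproduces the local stiffness matrix (6.1.25). Sketch with EA = 1, l = 2.
EA, l = 1.0, 2.0
q_x = np.array([-1.0, 0.0, 1.0, 0.0]) / l

# The integrand is constant, so the integral is just (value) * l.
K_elem = np.outer(q_x, q_x) * EA * l

K_expected = (EA / l) * np.array([[ 1.0, 0.0, -1.0, 0.0],
                                  [ 0.0, 0.0,  0.0, 0.0],
                                  [-1.0, 0.0,  1.0, 0.0],
                                  [ 0.0, 0.0,  0.0, 0.0]])
print(np.allclose(K_elem, K_expected))   # True
```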


6.2 Calculating a Structure with the Eigenvalue Problem

6.2.1 The Problem

Establish the homogeneous system of equations for the structure of rigid bars, see sketch (6.8), in order to determine the critical load Fc = Fcritical! Assume that the structure is geometrically linear, i.e. the angles of excursion ϕ are so small that cos ϕ ≈ 1, and sin ϕ ≈ ϕ are good approximations. The given values are

   k1 = k , k2 = (1/2) k , k = 10 kN/cm , and l = 200 cm.

[Figure 6.8: The given structure of rigid bars (EJ = ∞), with the springs k1 and k2 at the interior joints, the excursions x1 and x2, the load Fcritical at the right end, and the spans (3/2)l, 2l, and (3/2)l.]

• Rewrite the system of equations so that the general eigenvalue problem for the critical load Fc is given by

   A x = Fc B x.

• Transform the general eigenvalue problem into a special eigenvalue problem.

• Calculate the eigenvalues, i.e. the critical loads Fc, and the associated eigenvectors.

• Check if the eigenvectors are orthogonal to each other.

• Transform the equation system in such a way that it is possible to compute the Rayleigh quotient. What quantity could be estimated with the Rayleigh quotient?


6.2.2 The Equilibrium Conditions after the Excursion

In a first step the relations between the variables x1, x2, and the reactive forces FAy, and FBy in the supports are solved. For this purpose the equilibrium conditions of moments are established for two subsystems, see sketch (6.9). After that, in a second step, the equilibrium conditions for the whole system are established.

[Figure 6.9: The free-body diagrams of the subsystems left of node C, and right of node D after the excursion, with the reactive forces FAx, FAy, FBy, the bar forces SI, SIII, the excursions x1, x2, and the load Fcritical.]

The moment equation w.r.t. the node D of the subsystem on the right-hand side of node D after the excursion implies

   ∑ MD = 0 = FBy · (3/2) l + Fc · x2   ⇒   FBy = −(2 Fc / (3 l)) x2 ,          (6.2.1)

and the moment equation w.r.t. the node C of the subsystem on the left-hand side of node C after the excursion implies, with the following relation (6.2.3),

   ∑ MC = 0 = FAy · (3/2) l + FAx · x1   ⇒   FAy = −(2 FAx / (3 l)) x1 = −(2 Fc / (3 l)) x1 .   (6.2.2)

At any time, for any possible excursion, and for any possible load Fc, the complete system, cf. (6.10), must satisfy the equilibrium conditions. The equilibrium condition of forces in horizontal direction, cf. (6.10), after the excursion is given by

   ∑ FH = 0 = FAx − Fc   ⇒   FAx = Fc .                                         (6.2.3)

[Figure 6.10: The free-body diagram of the complete structure after the excursion, with the reactive forces FAx, FAy, FBy, the spring forces k1 x1, k2 x2, the excursions x1, x2, and the load Fcritical.]

The moment equation w.r.t. the node A for the complete system implies

   ∑ MA = 0 = FBy · 5l + k2 x2 · (7/2) l + k1 x1 · (3/2) l ,                    (6.2.4)

with (6.2.1) and k2 = (1/2) k,

   0 = −(2 Fc / (3 l)) x2 · 5l + k x2 · (7/4) l + k x1 · (3/2) l ,

and finally

   (3/2) k x1 + (7/4) k x2 = (10/3) (Fc / l) x2 .                               (6.2.5)

The equilibrium of forces in vertical direction implies

   ∑ FV = 0 = FAy + FBy + k2 x2 + k1 x1 ,                                       (6.2.6)

with (6.2.1), (6.2.2), k1 = k, and k2 = (1/2) k,

   0 = −(2 Fc / (3 l)) x1 − (2 Fc / (3 l)) x2 + (1/2) k x2 + k x1 ,

and finally

   k x1 + (1/2) k x2 = (2/3) (Fc / l) (x1 + x2) .                               (6.2.7)

The relations (6.2.5), and (6.2.7) are combined in a system of equations, given by

   [[(3/2) k, (7/4) k], [k, (1/2) k]] [x1, x2]^T = Fc [[0, 10/(3l)], [2/(3l), 2/(3l)]] [x1, x2]^T ,   (6.2.8)

or in matrix notation

   A x = Fc B x .                                                               (6.2.9)

This equation system is a general eigenvalue problem, with the Fci being the eigenvalues, and the eigenvectors x^i_0 ≠ 0.
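The general eigenvalue problem (6.2.8)-(6.2.9) can be set up and solved numerically with the given values k = 10 kN/cm and l = 200 cm:

```python
import numpy as np

# Setting up the general eigenvalue problem (6.2.8)-(6.2.9), A x = Fc B x,
# with the given values k = 10 kN/cm and l = 200 cm.
k, l = 10.0, 200.0

A = np.array([[1.5 * k, 1.75 * k],
              [      k, 0.50 * k]])
B = np.array([[0.0,             10.0 / (3.0 * l)],
              [2.0 / (3.0 * l),  2.0 / (3.0 * l)]])

# Solve by reducing to the special problem C = B^{-1} A (see section 6.2.3).
Fc = np.sort(np.linalg.eigvals(np.linalg.inv(B) @ A).real)
print(Fc)   # approximately [ 750. 2400.] kN
```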


6.2.3 Transformation into a Special Eigenvalue Problem

In order to solve this general eigenvalue problem with the aid of the characteristic equation, it is necessary to transform the general eigenvalue problem into a special eigenvalue problem. Thus the equation (6.2.9) is multiplied with the inverse of matrix B from the left-hand side,

   B^{−1} · | A x = Fc B x ,                                (6.2.10)

   B^{−1} A x = Fc B^{−1} B x ,

after this multiplication both terms are rewritten on one side and the vector x is factored out,

   0 = B^{−1} A x − Fc 1 x ,

and finally the special eigenvalue problem is given by

   0 = (C − Fc 1) x , with C = B^{−1} A .                   (6.2.11)

The inverse of matrix B is assumed as

   B^{−1} = [[a, b], [c, d]] ,                              (6.2.12)

and the following relations must hold,

   B^{−1} B = 1 , resp. [[a, b], [c, d]] [[0, 10/(3l)], [2/(3l), 2/(3l)]] = [[1, 0], [0, 1]] .   (6.2.13)

This simple inversion implies

   b = (3/2) l ,                                            (6.2.14)
   d = 0 ,                                                  (6.2.15)
   (10/(3l)) a + (2/(3l)) b = 0   ⇒   a = −(1/5) b   ⇒   a = −(3/10) l ,   (6.2.16)
   (10/(3l)) c + (2/(3l)) d = 1   ⇒   c = (3/10) l ,       (6.2.17)

and finally the inverse B^{−1} is given by

   B^{−1} = [[−(3/10) l, (3/2) l], [(3/10) l, 0]] .         (6.2.18)

The matrix C for the special eigenvalue problem is computed by the multiplication of the two 2 × 2-matrices B^{−1}, and A like this,

   C = B^{−1} A = [[−(3/10) l, (3/2) l], [(3/10) l, 0]] [[(3/2) k, (7/4) k], [k, (1/2) k]] = [[(21/20) kl, (9/40) kl], [(9/20) kl, (21/40) kl]] ,

and finally the matrix C for the special eigenvalue problem is given by

   C = (3/40) kl [[14, 3], [6, 7]] .                        (6.2.19)
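The inverse (6.2.18) and the product C = B⁻¹A of (6.2.19) can be verified directly; a sketch assuming k = 1 and l = 1, so that C carries the factor kl:

```python
import numpy as np

# Checking the inverse (6.2.18) and the matrix C = B^{-1} A of (6.2.19),
# a sketch assuming k = 1 and l = 1.
k, l = 1.0, 1.0

A = np.array([[1.5 * k, 1.75 * k], [k, 0.5 * k]])
B = np.array([[0.0, 10.0 / (3.0 * l)], [2.0 / (3.0 * l), 2.0 / (3.0 * l)]])

B_inv = np.array([[-0.3 * l, 1.5 * l], [0.3 * l, 0.0]])   # equation (6.2.18)
C = B_inv @ A

C_expected = 3.0 * k * l / 40.0 * np.array([[14.0, 3.0], [6.0, 7.0]])
print(np.allclose(B_inv @ B, np.eye(2)), np.allclose(C, C_expected))   # True True
```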


6.2.4 Solving the Special Eigenvalue Problem

In order to solve the special eigenvalue problem, the characteristic equation is set up by computing the determinant of equation (6.2.11),

   det (C − Fc 1) = 0 ,                                     (6.2.20)

resp. in complete notation,

   det [[(21/20) kl − Fc, (9/40) kl], [(9/20) kl, (21/40) kl − Fc]] = 0 .   (6.2.21)

Computing the determinant yields

   ((21/20) kl − Fc) ((21/40) kl − Fc) − (81/800) k²l² = 0 ,

   (441/800) k²l² − (21/20) kl Fc − (21/40) kl Fc + Fc² − (81/800) k²l² = 0 ,

and finally implies the quadratic equation,

   (360/800) k²l² − (63/40) kl Fc + Fc² = 0 .               (6.2.22)

Solving this simple quadratic equation is no problem,

   Fc1/2 = (63/80) kl ± √( (63/40)² k²l² / 4 − (360/800) k²l² ) ,

   Fc1/2 = (63/80) kl ± √( (1089/6400) k²l² ) ,

   Fc1/2 = (63/80) kl ± (33/80) kl ,                        (6.2.23)

and finally implies the two real eigenvalues,

   Fc1 = (6/5) kl = 2400 kN , and Fc2 = (3/8) kl = 750 kN .   (6.2.24)

The eigenvectors x^1_0, x^2_0 are computed by inserting the eigenvalues Fc1, and Fc2 in equation (6.2.11), given by

   (C − Fci 1) x^i_0 = 0 ,                                  (6.2.25)

and in complete notation by

   [[C11 − Fci, C12], [C21, C22 − Fci]] [x^i_01, x^i_02]^T = [0, 0]^T .   (6.2.26)

It is possible to choose for each eigenvalue Fci the first component of the associated eigenvector like this,

   x^i_01 = 1 ,                                             (6.2.27)

and after this to compute the second components of the eigenvectors,

   x^i_02 = −((C11 − Fci) / C12) x^i_01 , resp. x^i_02 = −(C11 − Fci) / C12 ,   (6.2.28)

or

   x^i_02 = −(C21 / (C22 − Fci)) x^i_01 , resp. x^i_02 = −C21 / (C22 − Fci) .   (6.2.29)

Inserting the first eigenvalue Fc1 in equation (6.2.28) implies the second component x^1_02 of the first eigenvector,

   x^1_02 = −((21/20) kl − (6/5) kl) / ((9/40) kl)   ⇒   x^1_02 = 2/3 ,   (6.2.30)

and for the second eigenvalue Fc2 the second component x^2_02 of the second eigenvector is given by

   x^2_02 = −((21/20) kl − (3/8) kl) / ((9/40) kl)   ⇒   x^2_02 = −3 .    (6.2.31)

With these results the eigenvectors are finally given by

   x^1_0 = [1, 2/3]^T , and x^2_0 = [1, −3]^T .             (6.2.32)
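The eigenvalues (6.2.24) and eigenvectors (6.2.32) follow from a numerical eigendecomposition of C, scaled so the first component of each eigenvector is 1, as in (6.2.27):

```python
import numpy as np

# Solving the special eigenvalue problem (6.2.25) numerically and comparing
# with (6.2.24) and (6.2.32); k = 10 kN/cm and l = 200 cm, so kl = 2000 kN.
k, l = 10.0, 200.0

C = 3.0 * k * l / 40.0 * np.array([[14.0, 3.0], [6.0, 7.0]])
eigvals, eigvecs = np.linalg.eig(C)

order = np.argsort(eigvals)[::-1]          # sort descending: Fc1, Fc2
Fc = eigvals[order]

# Scale each eigenvector so its first component is 1, as in (6.2.27).
x1 = eigvecs[:, order[0]] / eigvecs[0, order[0]]
x2 = eigvecs[:, order[1]] / eigvecs[0, order[1]]
print(Fc, x1, x2)   # approximately [2400. 750.], [1. 0.667], [1. -3.]
```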

6.2.5 Orthogonal Vectors

It is sufficient to compute the scalar product of two vectors, in order to check if these two vectors are orthogonal to each other, i.e.

   x1 ⊥ x2 , resp. x1 · x2 = 0 .                            (6.2.33)

In this special case the scalar product of the two eigenvectors is given by

   x^1_0 · x^2_0 = [1, 2/3]^T · [1, −3]^T = 1 − 2 = −1 ≠ 0 ,   (6.2.34)

i.e. the eigenvectors are not orthogonal to each other. The eigenvectors for different eigenvalues are only orthogonal if the matrix C of the special eigenvalue problem is symmetric 2. If the matrix of a special eigenvalue problem is symmetric, all eigenvalues are real and all eigenvectors are orthogonal. In this case all eigenvalues Fc1, Fc2 are real, but the matrix C is not symmetric, and for that reason the eigenvectors x^1_0, x^2_0 are not orthogonal.

2 See script, section about matrix eigenvalue problems.


6.2.6 Transformation

The special eigenvalue problem (6.2.11) includes the non-symmetric matrix C. In order to determine the Rayleigh quotient it is necessary to have a symmetric matrix in the special eigenvalue problem 3,

   (C − Fc 1) x = 0 ,                                       (6.2.35)

and in detail

   ( [[(21/20) kl, (9/40) kl], [(9/20) kl, (21/40) kl]] − Fc [[1, 0], [0, 1]] ) [x1, x2]^T = [0, 0]^T .   (6.2.36)

The first step is to transform the matrix C into a symmetric matrix. Because the matrices are so simple, it is easy to see that if the second column of the matrix is multiplied with 2, the matrix becomes symmetric,

   ( [[(21/20) kl, (9/20) kl], [(9/20) kl, (21/20) kl]] − Fc [[1, 0], [0, 2]] ) [x1, (1/2) x2]^T = [0, 0]^T ,   (6.2.37)

and in matrix notation, with the newly defined matrices D, and E², and a new vector q,

   (D − Fc E²) q = 0 .                                      (6.2.38)

Because the matrix E² = E^T E is a diagonal and symmetric matrix, the matrices E, and E^T are diagonal and symmetric matrices, too,

   E² = [[1, 0], [0, 2]]   ⇒   E = E^T = [[1, 0], [0, √2]] .   (6.2.39)

Because the matrix E is a diagonal and symmetric matrix, the inverse E^{−1} is a diagonal and symmetric matrix, too,

   E^{−1} = [[a, b], [c, d]]   ⇒   E^{−1} E = [[a, b], [c, d]] [[1, 0], [0, √2]] = [[1, 0], [0, 1]] = 1 ,   (6.2.40)

and this finally implies

   E^{−1} = [[1, 0], [0, (1/2) √2]] , and E^{−1} = (E^{−1})^T .   (6.2.41)

The equation (6.2.38) is again a general eigenvalue problem, but now with a symmetric matrix D. But in order to compute the Rayleigh quotient, it is necessary to set up a special eigenvalue problem again. In the next step the identity 1 = E^{−1} E is inserted in equation (6.2.38), like this,

   (D − Fc E²) E^{−1} E q = 0 ,                             (6.2.42)

then the whole equation is multiplied with E^{−1} from the left-hand side,

   (E^{−1} D E^{−1} − Fc E^{−1} E^T E E^{−1}) E q = 0 ,     (6.2.43)

and with the relation E = E^T for symmetric matrices,

   (E^{−1} D E^{−1} − Fc 1 1) E q = 0 .                     (6.2.44)

With relation (6.2.41) the first term in equation (6.2.44) describes a congruence transformation, so that the matrix F remains symmetric 4,

   F = E^{−1} D E^{−1} = (E^{−1})^T D E^{−1} ,              (6.2.45)

and with the matrix F given by

   F = [[1, 0], [0, (1/2) √2]] [[(21/20) kl, (9/20) kl], [(9/20) kl, (21/20) kl]] [[1, 0], [0, (1/2) √2]]
     = [[(21/20) kl, (9/(20√2)) kl], [(9/(20√2)) kl, (21/40) kl]] .   (6.2.46)

Furthermore a new vector p is defined by

   p = E q = [[1, 0], [0, √2]] [x1, (1/2) x2]^T   ⇒   p = [x1, (1/2) √2 x2]^T .   (6.2.47)

Finally combining these results implies a special eigenvalue problem with a symmetric matrix F and a vector p,

   (F − Fc 1) p = 0 .                                       (6.2.48)

Computing the characteristic equation like in equations (6.2.20), and (6.2.21) yields

   det (F − Fc 1) = 0 ,                                     (6.2.49)

resp. in complete notation,

   det [[(21/20) kl − Fc, (9/(20√2)) kl], [(9/(20√2)) kl, (21/40) kl − Fc]] = 0 ,   (6.2.50)

and this finally implies the same characteristic equation like in (6.2.22),

   ((21/20) kl − Fc) ((21/40) kl − Fc) − (81/800) k²l² = 0 .   (6.2.51)

Having the same characteristic equation implies that this problem has the same eigenvalues, i.e. it is the same eigenvalue problem, but just in another notation. With this symmetric eigenvalue problem it is possible to compute the Rayleigh quotient,

   Λ1 = R[pν] = (pν^T pν+1) / (pν^T pν) = (pν^T F pν) / (pν^T pν) , with Λ1 ≤ Fc1 ,   (6.2.52)

with an approximated vector pν. The Rayleigh quotient Λ1 is a good approximation of a lower bound for the dominant eigenvalue.

3 See script, section about matrix eigenvalue problems.
4 See script, section about the characteristics of congruence transformations.
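The Rayleigh quotient (6.2.52) can be evaluated within a power iteration on the symmetric matrix F of (6.2.46); it converges to the dominant eigenvalue Fc1 = 2400 kN. A sketch with an assumed arbitrary start vector:

```python
import numpy as np

# Power iteration with the Rayleigh quotient (6.2.52) on the symmetric matrix F
# of (6.2.46); k = 10 kN/cm, l = 200 cm, so the dominant eigenvalue is 2400 kN.
k, l = 10.0, 200.0
kl = k * l

F = kl * np.array([[21.0 / 20.0,                  9.0 / (20.0 * np.sqrt(2.0))],
                   [9.0 / (20.0 * np.sqrt(2.0)),  21.0 / 40.0]])

p = np.array([1.0, 1.0])                    # assumed arbitrary start vector
for _ in range(20):
    p = F @ p
    p /= np.linalg.norm(p)

rayleigh = p @ F @ p / (p @ p)
print(round(rayleigh, 6))   # converges to 2400.0
```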


6.3 Fundamentals of Tensors in Index Notation

6.3.1 The Coef£cient Matrices of Tensors

Like in the lectures, a tensor could be described by a coefficient matrix f^ij, and a basis given by φ_ij. First do not look at the basis, just look at the coefficient matrices. In this exercise some of the most important rules to deal with the coefficient matrices are recapitulated. The Einstein summation convention implies for the coefficient matrix A_i^{.j}, and an arbitrary basis φ^i_{.j},

   A_i^{.j} φ^i_{.j} = ∑_{i=1}^{3} ∑_{j=1}^{3} A_i^{.j} φ^i_{.j} ,

with the coefficient matrix A given in matrix notation by

   A = [A_i^{.j}] = [[A_1^{.1}, A_1^{.2}, A_1^{.3}], [A_2^{.1}, A_2^{.2}, A_2^{.3}], [A_3^{.1}, A_3^{.2}, A_3^{.3}]] .

The dot in the superscript index of the expression A_i^{.j} shows which one of the indices is the first index, and which one is the second index. This dot represents an empty space, so in this case it is easy to see that the subscript index i is the first index, and the superscript index j is the second index, i.e. the index i is the row index, and the j is the column index of the coefficient matrix! This is important to know for the multiplication of coefficient matrices. For example, what is the difference between the following products of coefficient matrices,

   A_ij B^jk = ? , and A_ij B^kj = ? .

The left-hand side of figure (6.11) sketches the first product, and the right-hand side the second


Figure 6.11: Matrix multiplication.

product. This implies the following important relations,

A_{ij} B^{jk} = C_i^{.k} ⇔ [A_{ij}][B^{jk}] = [C_i^{.k}] ⇔ A B = C,

and

A_{ij} B^{kj} = A_{ij} (B^{jk})^T = D_i^{.k} ⇔ [A_{ij}][B^{kj}] = [A_{ij}][B^{jk}]^T = [D_i^{.k}] ⇔ A B^T = D,

because a matrix with exchanged columns and rows is the transpose of the matrix. As a short recap, the product of a square matrix and a column matrix, resp. a (column) vector, is given by

A u = v ⇔ [A_{11}  A_{12}  A_{13}; A_{21}  A_{22}  A_{23}; A_{31}  A_{32}  A_{33}] [u^1; u^2; u^3] = [A_{11}u^1 + A_{12}u^2 + A_{13}u^3; A_{21}u^1 + A_{22}u^2 + A_{23}u^3; A_{31}u^1 + A_{32}u^2 + A_{33}u^3] ⇔ A_{ij} u^j = v_i.

For example, some products of coefficient matrices in index and matrix notation,

A_{ij} B^{kj} C_{kl} = A_{ij} (B^{jk})^T C_{kl} = D_{il} ⇔ A B^T C = D,
A_i^{.j} B_{kj} C^{.k}_l D^{ml} = A_i^{.j} (B_{jk})^T (C^k_{.l})^T (D^{lm})^T = E_i^{.m} ⇔ A B^T C^T D^T = E,
A_{ij} B^{kj} u_k = A_{ij} (B^{jk})^T u_k = v_i ⇔ A B^T u = v,
u^i B_{ij} u^j = α ⇔ u^T B u = α.

Furthermore it is important to notice that the dummy indices could be renamed arbitrarily,

A_{ij} B^{kj} = A_{im} B^{km} , or A_{kl} v^l = A_{kj} v^j , etc.
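The distinction between contracting the first or the second index of B can be checked directly with `numpy.einsum`, which implements exactly the Einstein summation convention used above (a minimal sketch with arbitrary example matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
B = rng.random((3, 3))

# A_ij B^jk: row index of B contracted -> plain matrix product A B
C = np.einsum('ij,jk->ik', A, B)

# A_ij B^kj: column index of B contracted -> A B^T
D = np.einsum('ij,kj->ik', A, B)

assert np.allclose(C, A @ B)
assert np.allclose(D, A @ B.T)

# Renaming the dummy index changes nothing:
assert np.allclose(D, np.einsum('im,km->ik', A, B))
```

The subscript string on the left of `->` names the indices of each factor; repeated letters are summed, exactly like repeated dummy indices.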

6.3.2 The Kronecker Delta and the Trace of a Matrix

The Kronecker delta is defined by

δ_{ij} = δ^{ij} = δ_i^j = δ^i_j = 1 iff i = j, and 0 iff i ≠ j.

The Kronecker deltas δ_{ij} and δ^{ij} are only defined in a Cartesian basis, where they represent the metric coefficients. The other ones are defined in every basis, and in order to use the summation convention, it is useful to prefer the notation with super- and subscript indices. Because the Kronecker delta is the symmetric identity matrix, it is not necessary to differentiate the column and row indices in index notation. As a rule of thumb, multiplication with a Kronecker delta substitutes an index in the same position,

v^k δ_k^j = v^j ⇔ v I = v,
A_i^{.k} δ_k^j = A_i^{.j} ⇔ A I = A,
A^{im} δ_i^s δ_m^k = A^{sk} ⇔ A I I = A.

But what is described by

A_k^{.l} δ_l^i δ_i^k = A_i^{.i} ?

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003

Page 99: Introduction to Continuum Mechanics - Vector and Tensor Calculus

184 Chapter 6. Exercises

This is the sum over all main diagonal elements, also just called the trace of the matrix,

tr A = A_1^{.1} + A_2^{.2} + A_3^{.3} = tr [A_i^{.j}],

and because the trace is a scalar quantity, it is independent of the basis, i.e. it is an invariant,

tr A = tr [A_i^{.j}] = tr [A^i_{.j}] , i.e. A_i^{.i} = A^i_{.i}.

For example, in the 2-dimensional vector space E^2 the Kronecker delta is defined by

Figure 6.12: Example of co- and contravariant base vectors in E^2.

g_i · g^k = δ_i^k = 1 if i = k, and 0 if i ≠ k,

then an arbitrary pair of co- and contravariant bases satisfies

g_1 · g^2 = 0 ⇔ g_1 ⊥ g^2,
g_2 · g^1 = 0 ⇔ g_2 ⊥ g^1,
g_1 · g^1 = 1,
g_2 · g^2 = 1.

6.3.3 Raising and Lowering of an Index

If the vectors g_i and g^k are in the same space V, it must be possible to describe g^k by a product of g_m and some coefficients like A^{km},

g^k = A^{km} g_m.

Both sides of the equation are multiplied by g^i, and finally the index i is renamed m,

g^k · g^i = A^{km} g_m · g^i ⇔ g^{ki} = A^{km} δ_m^i ⇔ g^{ki} = A^{ki} ⇔ g^{km} = A^{km}.

Something similar for the coefficients of a vector x is given by

x = x_i g^i = x^i g_i ⇔ x_i g^{ij} g_j = x^j g_j ⇔ x_i g^{ij} = x^j.

The g_{ki} = g_{ik} are called the covariant metric coefficients, and the g^{ki} = g^{ik} are called the contravariant metric coefficients.

Finally this holds for the base vectors, and for the coefficients or coordinates of vectors and tensors, too. Raising an index with the contravariant metric coefficients is given by

g^k = g^{ki} g_i , x^k = g^{ki} x_i , and A^{ik} = g^{ij} A_j^{.k},

and lowering an index with the covariant metric coefficients is given by

g_k = g_{ki} g^i , x_k = g_{ki} x^i , and A_{ik} = g_{ij} A^j_{.k}.

The relations between the co- and contravariant metric coefficients are given by

g^k = g^{km} g_m ⇔ g^k · g_i = g^{km} g_m · g_i ⇔ δ^k_i = g^{km} g_{mi}.

Comparing this with A^{-1} A = I implies

1 = [g^{km}][g_{mi}] ⇔ [g^{km}] = [g_{mi}]^{-1} ⇔ det [g^{ik}] = 1 / det [g_{ik}].

Then the determinants of the co- and contravariant metric coefficients are denoted by

det [g_{ik}] = g , and det [g^{ik}] = 1/g.
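The metric relations above translate directly into matrix computations. A minimal sketch, assuming a hypothetical covariant basis of E^3 (the rows of G are example base vectors, not taken from the text):

```python
import numpy as np

# Hypothetical covariant basis; rows are the base vectors g_1, g_2, g_3.
G = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.2, 0.0, 1.0]])

g_cov = G @ G.T               # covariant metric coefficients g_ik = g_i . g_k
g_con = np.linalg.inv(g_cov)  # contravariant metric coefficients g^ik

# delta^k_i = g^km g_mi
assert np.allclose(g_con @ g_cov, np.eye(3))

# det[g^ik] = 1 / det[g_ik]
g = np.linalg.det(g_cov)
assert np.isclose(np.linalg.det(g_con), 1.0 / g)

# Lowering then raising a coefficient vector returns the original:
x_up = np.array([1.0, -2.0, 0.5])
x_dn = g_cov @ x_up           # x_k = g_ki x^i
assert np.allclose(g_con @ x_dn, x_up)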

6.3.4 Permutation Symbols

The cross products of the Cartesian base vectors e_i in the 3-dimensional Euclidean vector space E^3 are given by

e_1 × e_2 = e^3 = e_3 , e_2 × e_3 = e^1 = e_1 , and e_3 × e_1 = e^2 = e_2,
e_2 × e_1 = −e^3 = −e_3 , e_3 × e_2 = −e^1 = −e_1 , and e_1 × e_3 = −e^2 = −e_2.

Often the cross product is also described by a determinant,

u × v = det [e_1  e_2  e_3; u_1  u_2  u_3; v_1  v_2  v_3] = e_1 (u_2 v_3 − u_3 v_2) + e_2 (u_3 v_1 − u_1 v_3) + e_3 (u_1 v_2 − u_2 v_1).

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003

Page 100: Introduction to Continuum Mechanics - Vector and Tensor Calculus

186 Chapter 6. Exercises

The permutation symbol in Cartesian coordinates is given by

e_{ijk} = +1 iff (i, j, k) is an even permutation of (1, 2, 3),
e_{ijk} = −1 iff (i, j, k) is an odd permutation of (1, 2, 3),
e_{ijk} = 0 if two or more indices are equal.

The cross products of the Cartesian base vectors could be described by the permutation symbols like this,

e_i × e_j = e_{ijk} e_k,

and for example

e_1 × e_2 = e_{121} e_1 + e_{122} e_2 + e_{123} e_3 = 0 · e_1 + 0 · e_2 + 1 · e_3 = e_3,
e_1 × e_3 = e_{131} e_1 + e_{132} e_2 + e_{133} e_3 = 0 · e_1 + (−1) · e_2 + 0 · e_3 = −e_2.

The general permutation symbol is given by the covariant ε symbol,

ε_{ijk} = +√g iff (i, j, k) is an even permutation of (1, 2, 3),
ε_{ijk} = −√g iff (i, j, k) is an odd permutation of (1, 2, 3),
ε_{ijk} = 0 if two or more indices are equal,

or by the contravariant ε symbol,

ε^{ijk} = +1/√g if (i, j, k) is an even permutation of (1, 2, 3),
ε^{ijk} = −1/√g if (i, j, k) is an odd permutation of (1, 2, 3),
ε^{ijk} = 0 if two or more indices are equal.

With these relations the cross products of covariant base vectors are given by

g_i × g_j = ε_{ijk} g^k,

and for the corresponding contravariant base vectors

g^i × g^j = ε^{ijk} g_k,

and the following relations between the Cartesian and the general permutation symbols hold,

ε_{ijk} = √g e_{ijk} , and e_{ijk} = (1/√g) ε_{ijk},

and

ε^{ijk} = (1/√g) e^{ijk} , and e^{ijk} = √g ε^{ijk}.

An important relation, in order to simplify expressions with permutation symbols, is given by

e_{ijk} e^{mnk} = δ_i^m δ_j^n − δ_i^n δ_j^m.


6.3.5 Exercises

1. Simplify the index notation expressions and write down the matrix notation form.

(a) A_i^{.j} B_{kj} C^k_{.l} =
(b) A_{ij} B^{ik} C_{kl} =
(c) C_m^{.n} D_n^{.m} =
(d) D_{mn} E^m_{.l} u^l =
(e) u_i D_j^{.i} E_k^{.j} =

2. Simplify the index notation expressions.

(a) A_{ij} g^{jk} =
(b) A_{ij} δ^j_k =
(c) A_{ij} B^{jk} δ^i_m δ_k^n =
(d) A_{kl} δ^i_j δ^j_m g^{ml} =
(e) A^{ij} B^{kj} g_{im} g_{kn} δ^{nm} =
(f) A_k^{.l} B^{km} g_{mi} g^{in} δ_n^j =

3. Rewrite these expressions in index notation w.r.t. a Cartesian basis, and describe what kind of quantity the result is.

(a) (a × b) · c =
(b) a × b + (a · d) c =
(c) (a × b) · (c × d) =
(d) a × (b × c) =

4. Combine the base vectors of a general basis and simplify the expressions in index notation.

(a) u · v = (u^i g_i) · (v^j g_j) =
(b) u · v = (u^i g_i) · (v_j g^j) =
(c) u × v = (u^i g_i) × (v^j g_j) =
(d) u × v = (u^i g_i) × (v_j g^j) =
(e) (u × v) · w = [(u_i g^i) × (v_j g^j)] · (w^k g_k) =
(f) (u · v)(w × x) = [(u^i g_i) · (v^j g_j)][(w_k g^k) × (x_l g^l)] =
(g) u × (v × w) = (u^i g_i) × [(v_j g^j) × (w^k g_k)] =
(h) (u × v) · (w × x) = [(u^i g_i) × (v^j g_j)] · [(w_k g^k) × (x_l g^l)] =


6.3.6 Solutions

1. Simplify the index notation expressions and write down the matrix notation form.

(a) A_i^{.j} B_{kj} C^k_{.l} = A_i^{.j} (B_{jk})^T C^k_{.l} = D_{il} ⇔ A B^T C = D
(b) A_{ij} B^{ik} C_{kl} = (A_{ji})^T B^{ik} C_{kl} = D_{jl} ⇔ A^T B C = D
(c) C_m^{.n} D_n^{.m} = E_m^{.m} = α ⇔ tr (C D) = tr E = α
(d) D_{mn} E^m_{.l} u^l = (D_{nm})^T E^m_{.l} u^l = v_n ⇔ D^T E u = v
(e) u_i D_j^{.i} E_k^{.j} = u_i (D^i_{.j})^T (E^j_{.k})^T = v_k ⇔ u^T D^T E^T = v^T

2. Simplify the index notation expressions.

(a) A_{ij} g^{jk} = A_i^{.k}
(b) A_{ij} δ^j_k = A_{ik}
(c) A_{ij} B^{jk} δ^i_m δ_k^n = A_{mj} B^{jn} = C_m^{.n}
(d) A_{kl} δ^i_j δ^j_m g^{ml} = A_k^{.i}
(e) A^{ij} B^{kj} g_{im} g_{kn} δ^{nm} = A_m^{.j} (B^j_{.n})^T δ^{nm} = A_m^{.j} (B^j_{.m})^T = α
(f) A_k^{.l} B^{km} g_{mi} g^{in} δ_n^j = (A^l_{.k})^T B^{kj} = C^{lj}

3. Rewrite these expressions in index notation w.r.t. a Cartesian basis, and describe what kind of quantity the result is.

(a) (a × b) · c = α ⇔ a_i b_j e_{ijk} c_k = α , a scalar.
(b) a × b + (a · d) c = v ⇔ a_i b_j e_{ijk} + (a_i d_i) c_k = v_k , a vector.
(c) (a × b) · (c × d) = β ⇔ (a_i b_j e_{ijk})(c_m d_n e_{mnk}) = a_i b_j c_m d_n e_{ijk} e_{mnk} = a_i b_j c_m d_n (δ_{im} δ_{jn} − δ_{in} δ_{jm}) = a_m b_n c_m d_n − a_n b_m c_m d_n = β ⇔ (a · c)(b · d) − (a · d)(b · c) = β , a scalar.
(d) a × (b × c) = v ⇔ e_{lkm} a_l (e_{ijk} b_i c_j) = a_l b_i c_j e_{lkm} e_{ijk} = a_l b_i c_j (δ_{mi} δ_{lj} − δ_{mj} δ_{li}) = a_j b_m c_j − a_i b_i c_m = v_m ⇔ (a · c) b − (a · b) c = v , a vector.

4. Combine the base vectors of a general basis and simplify the expressions in index notation.

(a) u · v = (u^i g_i) · (v^j g_j) = u^i v^j g_i · g_j = u^i v^j g_{ij} = u^i v_i = α
(b) u · v = (u^i g_i) · (v_j g^j) = u^i v_j g_i · g^j = u^i v_j δ_i^j = u^i v_i = α


(c) u × v = (u^i g_i) × (v^j g_j) = u^i v^j g_i × g_j = u^i v^j ε_{ijk} g^k = w_k g^k = w
(d) u × v = (u^i g_i) × (v_j g^j) ⇔ u^i v_j g_i × g^j = u^i v_j g_{ik} g^k × g^j = u_k v_j ε^{kjl} g_l = u_i v_j ε^{ijk} g_k ⇔ w^k g_k = w
(e) (u × v) · w = [(u_i g^i) × (v_j g^j)] · (w^k g_k) ⇔ u_i v_j w^k ε^{ijl} g_l · g_k = u_i v_j w^k ε^{ijl} g_{lk} = u_i v_j w_l ε^{ijl} ⇔ u_i v_j w_k ε^{ijk} = α
(f) (u · v)(w × x) = [(u^i g_i) · (v^j g_j)][(w_k g^k) × (x_l g^l)] ⇔ (u^i v^j g_i · g_j)(w_k x_l g^k × g^l) = (u^i v^j g_{ij})(w_k x_l ε^{klm} g_m) = u^i v_i w_k x_l ε^{klm} g_m = y^m g_m ⇔ y^m g_m = y
(g) u × (v × w) = (u^i g_i) × [(v_j g^j) × (w^k g_k)] ⇔ u^i g_i × (v_j w^k g_{kl} g^j × g^l) = u^i v_j w_l g_i × (ε^{jlm} g_m) = u^i v_j w_l ε^{jlm} ε_{imn} g^n = u^i v_j w_l (δ^j_n δ^l_i − δ^j_i δ^l_n) g^n = u^l v_n w_l g^n − u^j v_j w_n g^n = x_n g^n ⇔ (u · w) v − (u · v) w = x
(h) (u × v) · (w × x) = [(u^i g_i) × (v^j g_j)] · [(w_k g^k) × (x_l g^l)] ⇔ [u^i g_{im} v^j g_{jn} (g^m × g^n)] · [w_k g^{kr} x_l g^{ls} (g_r × g_s)] = [u_m v_j ε^{mjo} g_o] · [w^k x^n ε_{knp} g^p] = u_m v_j w^k x^n ε^{mjo} ε_{knp} g_o · g^p = u_m v_j w^k x^n ε^{mjo} ε_{knp} δ_o^p = u_m v_j w^k x^n ε^{mjp} ε_{knp} = u_m v_j w^k x^n (δ^m_k δ^j_n − δ^m_n δ^j_k) = u_k v_n w^k x^n − u_n v_k w^k x^n ⇔ (u · w)(v · x) − (u · x)(v · w) = α


6.4 Various Products of Second Order Tensors

6.4.1 The Product of a Second Order Tensor and a Vector

The product of a second order tensor and a vector (i.e. a first order tensor) is computed by the scalar product of the last base vector of the tensor and the base vector of the vector. For example, a product of a second order tensor and a vector is given by

v = T u = (T_{ij} g^i ⊗ g^j)(u_k g^k) = T_{ij} u_k (g^j · g^k) g^i = T_{ij} u_k g^{jk} g^i = T_{ij} u^j g^i,

i.e.

v = T u = v_i g^i , with v_i = T_{ij} u^j.

6.4.2 The Tensor Product of Two Second Order Tensors

The tensor product of two second order tensors is computed by the scalar product of the two inner base vectors and the dyadic product of the two outer base vectors. For example, a tensor product is given by

R = T S = (T^{ij} g_i ⊗ g_j)(S^{kl} g_k ⊗ g_l) = T^{ij} S^{kl} (g_j · g_k)(g_i ⊗ g_l) = T^{ij} S^{kl} g_{jk} g_i ⊗ g_l = T^{ij} S_j^{.l} g_i ⊗ g_l,

i.e.

R = T S = R^{il} g_i ⊗ g_l , with R^{il} = T^{ij} S_j^{.l}.

6.4.3 The Scalar Product of Two Second Order Tensors

The scalar product of two second order tensors is computed by the scalar product of the first base vectors of the two tensors, and the scalar product of the two second base vectors of the tensors, too. For example, a scalar product is given by

α = T : S = (T^{ij} g_i ⊗ g_j) : (S^{kl} g_k ⊗ g_l) = T^{ij} S^{kl} (g_i · g_k)(g_j · g_l) = T^{ij} S^{kl} g_{ik} g_{jl},

i.e.

α = T : S = T^{ij} S_{ij}.
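In a Cartesian basis (g_ij = δ_ij, so co- and contravariant coefficients coincide) the three products above reduce to familiar matrix operations. A minimal sketch with arbitrary example coefficient matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.random((3, 3))
S = rng.random((3, 3))
u = rng.random(3)

v = np.einsum('ij,j->i', T, u)      # v_i = T_ij u^j      (tensor-vector product)
R = np.einsum('ij,jl->il', T, S)    # R^il = T^ij S_j^.l  (tensor product)
alpha = np.einsum('ij,ij->', T, S)  # alpha = T^ij S_ij   (scalar product)

assert np.allclose(v, T @ u)
assert np.allclose(R, T @ S)
assert np.isclose(alpha, np.sum(T * S))
assert np.isclose(alpha, np.trace(T @ S.T))  # T : S = tr(T S^T)
```

The last assertion anticipates the identity T : S = tr(T Sᵀ) that appears in the solutions below.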


6.4.4 Exercises

1. Compute the tensor products.

(a) T S = (T_{ij} g^i ⊗ g^j)(S_{kl} g^k ⊗ g^l) =
(b) T S = (T_{ij} g^i ⊗ g^j)(S^{kl} g_k ⊗ g_l) =
(c) T S = (T_i^{.j} g^i ⊗ g_j)(S_k^{.l} g^k ⊗ g_l) =
(d) T S = (T_{ij} g^i ⊗ g^j)(S_k^{.l} g^k ⊗ g_l) =
(e) T 1 = (T_{ij} g^i ⊗ g^j)(δ_k^l g^k ⊗ g_l) =
(f) 1 1 = (δ_i^j g^i ⊗ g_j)(δ_k^l g^k ⊗ g_l) =
(g) T g = (T_{ij} g^i ⊗ g^j)(g_{kl} g^k ⊗ g^l) =
(h) T T = (T_{ij} g^i ⊗ g^j)(T_{kl} g^k ⊗ g^l) =
(i) T T^T = (T_{ij} g^i ⊗ g^j)(T_{kl} g^k ⊗ g^l)^T =

2. Compute the scalar products.

(a) T : S = (T_{ij} g^i ⊗ g^j) : (S_{kl} g^k ⊗ g^l) =
(b) T : S = (T_{ij} g^i ⊗ g^j) : (S^{kl} g_k ⊗ g_l) =
(c) T : S = (T_i^{.j} g^i ⊗ g_j) : (S_k^{.l} g^k ⊗ g_l) =
(d) T : 1 = (T_{ij} g^i ⊗ g^j) : (δ_k^l g^k ⊗ g_l) =
(e) 1 : 1 = (δ_i^j g^i ⊗ g_j) : (δ_k^l g^k ⊗ g_l) =
(f) T : g = (T_{ij} g^i ⊗ g^j) : (g_{kl} g^k ⊗ g^l) =
(g) T : T = (T_{ij} g^i ⊗ g^j) : (T_{kl} g^k ⊗ g^l) =
(h) T : T^T = (T_{ij} g^i ⊗ g^j) : (T_{kl} g^k ⊗ g^l)^T =

3. Compute the various products.

(a) (T S) v = T S v = (T_i^{.j} g^i ⊗ g_j)(S_k^{.l} g^k ⊗ g_l)(v_m g^m) =
(b) (T : S) v = (T_i^{.j} g^i ⊗ g_j) : (S_k^{.l} g^k ⊗ g_l)(v_m g^m) =
(c) tr (T T^T) = (δ_i^j g^i ⊗ g_j) : [(T_{kl} g^k ⊗ g^l)(T_{mn} g^m ⊗ g^n)^T] =


6.4.5 Solutions

1. Compute the tensor products.

(a) T S = (T_{ij} g^i ⊗ g^j)(S_{kl} g^k ⊗ g^l) = T_{ij} S_{kl} g^{jk} g^i ⊗ g^l = T_{ij} S^j_{.l} g^i ⊗ g^l = R_{il} g^i ⊗ g^l = R
(b) T S = (T_{ij} g^i ⊗ g^j)(S^{kl} g_k ⊗ g_l) = T_{ij} S^{kl} δ^j_k g^i ⊗ g_l = T_{ij} S^{jl} g^i ⊗ g_l = R_i^{.l} g^i ⊗ g_l = R
(c) T S = (T_i^{.j} g^i ⊗ g_j)(S_k^{.l} g^k ⊗ g_l) = T_i^{.j} S_k^{.l} δ_j^k g^i ⊗ g_l = T_i^{.j} S_j^{.l} g^i ⊗ g_l = R_i^{.l} g^i ⊗ g_l = R
(d) T S = (T_{ij} g^i ⊗ g^j)(S_k^{.l} g^k ⊗ g_l) = T_{ij} S_k^{.l} g^{jk} g^i ⊗ g_l = T_{ij} S^{jl} g^i ⊗ g_l = R_i^{.l} g^i ⊗ g_l = R
(e) T 1 = (T_{ij} g^i ⊗ g^j)(δ_k^l g^k ⊗ g_l) = T_{ij} δ_k^l g^{jk} g^i ⊗ g_l = T_{ij} g^{jl} g^i ⊗ g_l = T_{ij} g^i ⊗ g^j = T
(f) 1 1 = (δ_i^j g^i ⊗ g_j)(δ_k^l g^k ⊗ g_l) = δ_i^j δ_k^l δ_j^k g^i ⊗ g_l = δ_i^l g^i ⊗ g_l = δ_i^j g^i ⊗ g_j = 1
(g) T g = (T_{ij} g^i ⊗ g^j)(g_{kl} g^k ⊗ g^l) = T_{ij} g_{kl} g^{jk} g^i ⊗ g^l = T_{ij} δ^j_l g^i ⊗ g^l = T_{ij} g^i ⊗ g^j = T
(h) T T = (T_{ij} g^i ⊗ g^j)(T_{kl} g^k ⊗ g^l) = T_{ij} T_{kl} g^{jk} g^i ⊗ g^l = T_{ij} T^j_{.l} g^i ⊗ g^l = T^2
(i) T T^T = (T_{ij} g^i ⊗ g^j)(T_{kl} g^k ⊗ g^l)^T = (T_{ij} g^i ⊗ g^j)(T_{kl} g^l ⊗ g^k) = T_{ij} T_{kl} g^{jl} g^i ⊗ g^k = T_{ij} T_k^{.j} g^i ⊗ g^k = T_{ij} T_l^{.j} g^i ⊗ g^l, or
T T^T = (T_{ij} g^i ⊗ g^j)(T_{kl} g^k ⊗ g^l)^T = (T_{ij} g^i ⊗ g^j)(T_{lk} g^k ⊗ g^l) = T_{ij} T_{lk} g^{jk} g^i ⊗ g^l = T_{ij} T_l^{.j} g^i ⊗ g^l

2. Compute the scalar products.

(a) T : S = (T_{ij} g^i ⊗ g^j) : (S_{kl} g^k ⊗ g^l) = T_{ij} S_{kl} g^{ik} g^{jl} = T_{ij} S^{ij} = α
(b) T : S = (T_{ij} g^i ⊗ g^j) : (S^{kl} g_k ⊗ g_l) = T_{ij} S^{kl} δ^i_k δ^j_l = T_{ij} S^{ij} = α
(c) T : S = (T_i^{.j} g^i ⊗ g_j) : (S_k^{.l} g^k ⊗ g_l) = T_i^{.j} S_k^{.l} g^{ik} g_{jl} = T_i^{.j} S^i_{.j} = α, or
T_i^{.j} S_k^{.l} g^{ik} g_{jl} = T_{il} S^{il} = α, or
T_i^{.j} S_k^{.l} g^{ik} g_{jl} = T^{kj} S_{kj} = α
(d) T : 1 = (T_{ij} g^i ⊗ g^j) : (δ_k^l g^k ⊗ g_l) = T_{ij} δ_k^l g^{ik} δ^j_l = T_{ij} g^{ij} = T_i^{.i} = tr T
(e) 1 : 1 = (δ_i^j g^i ⊗ g_j) : (δ_k^l g^k ⊗ g_l) = δ_i^j δ_k^l g^{ik} g_{jl} = δ_i^j δ_j^i = δ_i^i = 3 = tr 1
(f) T : g = (T_{ij} g^i ⊗ g^j) : (g_{kl} g^k ⊗ g^l) = T_{ij} g_{kl} g^{ik} g^{jl} = T_{ij} δ^i_l g^{jl} = T_{ij} g^{ji} = T_i^{.i} = tr T
(g) T : T = (T_{ij} g^i ⊗ g^j) : (T_{kl} g^k ⊗ g^l) = T_{ij} T_{kl} g^{ik} g^{jl} = T_{ij} T^{ij} = tr (T T^T)
(h) T : T^T = (T_{ij} g^i ⊗ g^j) : (T_{kl} g^k ⊗ g^l)^T = (T_{ij} g^i ⊗ g^j) : (T_{kl} g^l ⊗ g^k) = T_{ij} T_{kl} g^{il} g^{jk} = T_{ij} T^{ji} = tr T^2, or
T : T^T = (T_{ij} g^i ⊗ g^j) : (T_{lk} g^k ⊗ g^l) = T_{ij} T_{lk} g^{ik} g^{jl} = T_{ij} T^{ji} = tr T^2

3. Compute the various products.

(a) (T S) v = T S v = (T_i^{.j} g^i ⊗ g_j)(S_k^{.l} g^k ⊗ g_l)(v_m g^m) = T_i^{.j} S_k^{.l} δ_j^k v_m (g^i ⊗ g_l) g^m = T_i^{.k} S_k^{.l} v_m δ_l^m g^i = T_i^{.k} S_k^{.l} v_l g^i = u_i g^i = u


(b) (T : S) v = (T_i^{.j} g^i ⊗ g_j) : (S_k^{.l} g^k ⊗ g_l)(v_m g^m) = T_i^{.j} S_k^{.l} g^{ik} g_{jl} v_m g^m = T^{kj} S_{kj} v_m g^m = α v = w

(c) tr (T T^T) = (δ_i^j g^i ⊗ g_j) : [(T_{kl} g^k ⊗ g^l)(T_{mn} g^m ⊗ g^n)^T] = (δ_i^j g^i ⊗ g_j) : [T_{kl} T_m^{.l} g^k ⊗ g^m] = δ_i^j T_{kl} T_m^{.l} g^{ik} δ_j^m = δ_i^j T^i_{.l} T_j^{.l} = T^j_{.l} T_j^{.l} = T : T


6.5 Deformation Mappings

6.5.1 Tensors of the Tangent Mappings

The material deformation gradient F_X is given by

F_X := Grad_X φ = g_i ⊗ G^i,   (6.5.1)

the local geometry gradient K_Θ is given by

K_Θ := GRAD_Θ ψ = G_i ⊗ Z^i,   (6.5.2)

and the local deformation gradient F_Θ is given by

F_Θ := GRAD_Θ φ = g_i ⊗ Z^i.   (6.5.3)

The transposes and inverses of the various tangent mappings are given by

F_X = g_i ⊗ G^i , F_X^T = G^i ⊗ g_i , F_X^{-1} = G_i ⊗ g^i , F_X^{-T} = g^i ⊗ G_i,   (6.5.4)
K_Θ = G_i ⊗ Z^i , K_Θ^T = Z^i ⊗ G_i , K_Θ^{-1} = Z_i ⊗ G^i , K_Θ^{-T} = G^i ⊗ Z_i,   (6.5.5)
F_Θ = g_i ⊗ Z^i , F_Θ^T = Z^i ⊗ g_i , F_Θ^{-1} = Z_i ⊗ g^i , F_Θ^{-T} = g^i ⊗ Z_i.   (6.5.6)

The identity tensors are introduced separately for the various coordinate systems by

identity tensor of the parameter space: 1_Θ := Z_i ⊗ Z^i,   (6.5.7)
identity tensor of the undeformed space: 1_X := G_i ⊗ G^i,   (6.5.8)
identity tensor of the deformed space: 1_x := g_i ⊗ g^i.   (6.5.9)

The various metric tensors of the different tangent spaces are introduced by

local metric tensor of the undeformed body: M_Θ = K_Θ^T K_Θ = G_{ij} Z^i ⊗ Z^j,   (6.5.10)
local metric tensor of the deformed body: m_Θ = F_Θ^T F_Θ = g_{ij} Z^i ⊗ Z^j,   (6.5.11)
material metric tensor of the undeformed body: M_X = 1_X^T 1_X = G_{ij} G^i ⊗ G^j,   (6.5.12)
material metric tensor of the deformed body: m_X = F_X^T F_X = g_{ij} G^i ⊗ G^j,   (6.5.13)
spatial metric tensor of the undeformed body: M_x = F_X^{-T} F_X^{-1} = G_{ij} g^i ⊗ g^j,   (6.5.14)
spatial metric tensor of the deformed body: m_x = 1_x^T 1_x = g_{ij} g^i ⊗ g^j.   (6.5.15)

The local strain tensor is given by

E_Θ := (1/2)(m_Θ − M_Θ) = (1/2)(g_{ij} − G_{ij}) Z^i ⊗ Z^j,   (6.5.16)

the material strain tensor is given by

E_X := (1/2)(m_X − M_X) = (1/2)(g_{ij} − G_{ij}) G^i ⊗ G^j,   (6.5.17)

and finally the spatial strain tensor is given by

E_x := (1/2)(m_x − M_x) = (1/2)(g_{ij} − G_{ij}) g^i ⊗ g^j.   (6.5.18)


6.5.2 Exercises

1. Compute the tensor products. What is represented by them?

(a) K_Θ^{-1} K_Θ^{-T} =
(b) F_Θ^{-1} F_Θ^{-T} =
(c) F_X F_X^T =
(d) F_X^{-1} F_X^{-T} =
(e) 1_X 1_X^T =
(f) 1_x 1_x^T =

2. Compute the tensor products in index notation, and name the result with the correct name.

(a) K_Θ^{-T} M_Θ K_Θ^{-1} =
(b) K_Θ^T M_X K_Θ =
(c) F_Θ^{-T} m_Θ F_Θ^{-1} =
(d) F_Θ^T m_x F_Θ =
(e) F_X^{-T} E_X F_X^{-1} =
(f) F_X^T E_x F_X =

3. Compute the tensor and scalar products in index notation. Rewrite the results in tensor notation.

(a) B_Θ = M_Θ^{-1} m_Θ =
(b) B_X =
(c) B_x =
(d) B_Θ : 1_Θ =
(e) B_Θ^T : B_Θ =
(f) B_Θ^T B_Θ^T : B_Θ =


6.5.3 Solutions

1. Compute the tensor products. What is represented by them?

(a) K_Θ^{-1} K_Θ^{-T} = (Z_i ⊗ G^i)(G^j ⊗ Z_j) = G^{ij} Z_i ⊗ Z_j = M_Θ^{-1} = (K_Θ^T K_Θ)^{-1}
(b) F_Θ^{-1} F_Θ^{-T} = (Z_i ⊗ g^i)(g^j ⊗ Z_j) = g^{ij} Z_i ⊗ Z_j = m_Θ^{-1} = (F_Θ^T F_Θ)^{-1}
(c) F_X F_X^T = (g_i ⊗ G^i)(G^j ⊗ g_j) = G^{ij} g_i ⊗ g_j = M_x^{-1} = (F_X^{-T} F_X^{-1})^{-1}
(d) F_X^{-1} F_X^{-T} = (G_i ⊗ g^i)(g^j ⊗ G_j) = g^{ij} G_i ⊗ G_j = m_X^{-1} = (F_X^T F_X)^{-1}
(e) 1_X 1_X^T = (G_i ⊗ G^i)(G^j ⊗ G_j) = G^{ij} G_i ⊗ G_j = M_X^{-1}
(f) 1_x 1_x^T = (g_i ⊗ g^i)(g^j ⊗ g_j) = g^{ij} g_i ⊗ g_j = m_x^{-1}

2. Compute the tensor products in index notation, and name the result with the correct name.

(a) K_Θ^{-T} M_Θ K_Θ^{-1} = (G^k ⊗ Z_k)(G_{ij} Z^i ⊗ Z^j)(Z_l ⊗ G^l) = G_{ij} δ^i_k δ^j_l (G^k ⊗ G^l) = G_{ij} G^i ⊗ G^j = M_X
(b) K_Θ^T M_X K_Θ = (Z^k ⊗ G_k)(G_{ij} G^i ⊗ G^j)(G_l ⊗ Z^l) = G_{ij} δ^i_k δ^j_l (Z^k ⊗ Z^l) = G_{ij} Z^i ⊗ Z^j = M_Θ
(c) F_Θ^{-T} m_Θ F_Θ^{-1} = (g^k ⊗ Z_k)(g_{ij} Z^i ⊗ Z^j)(Z_l ⊗ g^l) = g_{ij} δ^i_k δ^j_l (g^k ⊗ g^l) = g_{ij} g^i ⊗ g^j = m_x
(d) F_Θ^T m_x F_Θ = (Z^k ⊗ g_k)(g_{ij} g^i ⊗ g^j)(g_l ⊗ Z^l) = g_{ij} δ^i_k δ^j_l (Z^k ⊗ Z^l) = g_{ij} Z^i ⊗ Z^j = m_Θ
(e) F_X^{-T} E_X F_X^{-1} = (g^k ⊗ G_k) (1/2)(g_{ij} − G_{ij})(G^i ⊗ G^j)(G_l ⊗ g^l) = (1/2)(g_{ij} − G_{ij}) δ^i_k δ^j_l (g^k ⊗ g^l) = (1/2)(g_{ij} − G_{ij}) g^i ⊗ g^j = E_x
(f) F_X^T E_x F_X = (G^k ⊗ g_k) (1/2)(g_{ij} − G_{ij})(g^i ⊗ g^j)(g_l ⊗ G^l) = (1/2)(g_{ij} − G_{ij}) δ^i_k δ^j_l (G^k ⊗ G^l) = (1/2)(g_{ij} − G_{ij}) G^i ⊗ G^j = E_X

3. Compute the tensor and scalar products in index notation. Rewrite the results in tensor notation.

(a) B_Θ = M_Θ^{-1} m_Θ = (G^{ij} Z_i ⊗ Z_j)(g_{lk} Z^l ⊗ Z^k) = G^{ij} g_{lk} δ_j^l Z_i ⊗ Z^k = G^{ij} g_{jk} Z_i ⊗ Z^k
(b) B_X = M_X^{-1} m_X = (G^{ij} G_i ⊗ G_j)(g_{lk} G^l ⊗ G^k) = G^{ij} g_{lk} δ_j^l G_i ⊗ G^k = G^{ij} g_{jk} G_i ⊗ G^k
(c) B_x = M_x^{-1} m_x = (G^{ij} g_i ⊗ g_j)(g_{lk} g^l ⊗ g^k) = G^{ij} g_{lk} δ_j^l g_i ⊗ g^k = G^{ij} g_{jk} g_i ⊗ g^k
(d) B_Θ : 1_Θ = (G^{ij} g_{jk} Z_i ⊗ Z^k) : (Z_l ⊗ Z^l) = G^{ij} g_{jk} Z_{il} Z^{kl} = G^{ij} g_{jk} δ^k_i = G^{ij} g_{ji} = tr B_Θ


(e) B_Θ^T : B_Θ = (G^{ij} g_{jk} Z^k ⊗ Z_i) : (G^{lm} g_{mn} Z_l ⊗ Z^n) = G^{ij} g_{jk} G^{lm} g_{mn} δ^k_l δ^n_i = G^{ij} g_{jk} G^{km} g_{mi} = tr B_Θ^2

(f) B_Θ^T B_Θ^T : B_Θ = (G^{pr} g_{rs} Z^s ⊗ Z_p)(G^{ij} g_{jk} Z^k ⊗ Z_i) : (G^{lm} g_{mn} Z_l ⊗ Z^n)
= (G^{pr} g_{rs} G^{ij} g_{jk} δ^k_p Z^s ⊗ Z_i) : (G^{lm} g_{mn} Z_l ⊗ Z^n)
= (G^{pr} g_{rs} G^{ij} g_{jp} Z^s ⊗ Z_i) : (G^{lm} g_{mn} Z_l ⊗ Z^n) = G^{pr} g_{rs} G^{ij} g_{jp} G^{lm} g_{mn} δ^s_l δ^n_i
= G^{pr} g_{rs} G^{sm} g_{mn} G^{nj} g_{jp} = tr B_Θ^3


6.6 The Moving Trihedron, Derivatives and Space Curves

6.6.1 The Problem

A spiral staircase is given by the sketch in figure (6.13).

Figure 6.13: The given spiral staircase (fixed support at the bottom, free top at height h/2).

The relation between the gradient angle α and the overall height h of the spiral staircase is given by

tan α = h / (2πr),

if h is the height of a 360° spiral staircase; here the spiral staircase covers just 180°. The spiral staircase has a fixed support at the bottom, and the central line is represented by the variable Θ^1, which starts at the top of the spiral staircase.

• Compute the tangent vector t, the normal vector n, and the binormal vector b w.r.t. the variable φ.

• Determine the curvature κ = 1/ρ and the torsion ω = 1/τ of the curve w.r.t. the variable φ.

• Compute the Christoffel symbols Γ^r_{i1}, for i, r = 1, 2, 3.

• Describe the forces and moments in a sectional area w.r.t. the basis given by the moving trihedron, with the following conditions,

M = M^i a_i , resp. N = N^i a_i , with M^i = M^i(φ) , resp. N^i = N^i(φ) , and [a_1, a_2, a_3] = [t, n, b].

• Compute the resulting forces and moments at a sectional area given by φ = 130°. Consider a load vector given in the global Cartesian coordinate system by R = −qφr e_3, at a point S given by the angle φ/2 and the radius r_S. This load may be a combination of the self-weight of the spiral staircase and the payload of its usage.


6.6.2 The Base Vectors

The winding up of the spiral staircase is given by the sketch in figure (6.14).

Figure 6.14: The winding up of the given spiral staircase (the angle runs between zero and a half rotation, 0 ≤ Θ^1 ≤ πr/cos α).

With the Pythagorean theorem, see also the sketch in figure (6.14), the relationship between the variable φ and the variable Θ^1 along the central line is given by

Θ^1 = √(a^2 φ^2 + r^2 φ^2) , with a = h/(2π).

This implies

φ = Θ^1 / √(a^2 + r^2),

and with the definition of the cosine,

cos α = rφ / Θ^1 = r / √(a^2 + r^2),

finally the relationship between the variables φ and Θ^1 is given by

φ = (cos α / r) Θ^1 = c Θ^1 , with c = cos α / r = 1 / √(r^2 + a^2).

With this relation it is easy to see that every expression depending on φ also depends on Θ^1; this is important later on for computing some derivatives. Any arbitrary point on the central line of the spiral staircase could be represented by a vector of position x in the Cartesian coordinate system,

x = x_i e_i , and x_i = x_i(Θ^1),

or

x = x_i e_i , and x_i = x_i(r, φ).


The three components of the vector of position x are given by

x_1 = r cos φ,
x_2 = r sin φ,
x_3 = (h/2)(1 − φ/π),

and the complete vector is given by

x = x_i e_i = x_i(r, φ) e_i = (r cos φ) e_1 + (r sin φ) e_2 + (h/2)(1 − φ/π) e_3 , with φ = cΘ^1.

The tangent vector t = a_1 is defined by

t = a_1 = dx/dΘ^1 = (∂x/∂φ)(∂φ/∂Θ^1),

and with dφ/dΘ^1 = c this implies

t = a_1 = [−r sin φ; r cos φ; −h/(2π)] c ⇒ t = a_1 = [−cr sin φ; cr cos φ; −ca].

The absolute value of this vector is given by

|t| = |a_1| = c √(r^2 sin^2 φ + r^2 cos^2 φ + a^2) = c √(r^2 + a^2) = 1,

i.e. the tangent vector t is already a unit vector! For the normal unit vector n = a_2, first the normal vector n* is computed by

n* = dt/dΘ^1 = da_1/dΘ^1 = (∂a_1/∂φ)(∂φ/∂Θ^1) = d^2 x/(dΘ^1)^2,

and this implies, with the result for the vector a_1,

n* = −c^2 r [cos φ; sin φ; 0].

The absolute value of this vector is given by

|n*| = c^2 r √(cos^2 φ + sin^2 φ) = c^2 r = 1/ρ,

and with this finally the normal unit vector is given by

n = a_2 = ρ n* = [−cos φ; −sin φ; 0].


The binormal vector b = a_3 is defined by

b = t × n , resp. a_3 = a_1 × a_2,

and with the definition of the cross product, represented by the expansion about the first column, the binormal vector is given by

b = det [e_1  −cr sin φ  −cos φ; e_2  cr cos φ  −sin φ; e_3  −ca  0] = [−ca sin φ; ca cos φ; cr].

The absolute value of this vector is given by

|b| = c √(a^2 sin^2 φ + a^2 cos^2 φ + r^2) = c √(a^2 + r^2) = 1,

i.e. the binormal vector b is already a unit vector, given by

b = a_3 = (c/(2π)) [−h sin φ; h cos φ; 2πr] ⇒ b = a_3 = [−ca sin φ; ca cos φ; cr].

6.6.3 The Curvature and the Torsion

The curvature κ = 1/ρ of a curve in space is given by

κ = 1/ρ = |n*| , with n* = dt/dΘ^1 = da_1/dΘ^1 = d^2 x/(dΘ^1)^2,

and in this case it implies a constant valued curvature,

κ = 1/ρ = |n*| = c^2 r = constant.

The torsion ω = 1/τ of a curve in space is defined by the first derivative of the binormal vector,

b = t × n , with b,_1 = −ω n = −(1/τ) n,

and here the derivative

b,_1 = [−c^2 a cos φ; −c^2 a sin φ; 0] = c^2 a n,

implies a constant torsion, too,

ω = 1/τ = −c^2 a = constant.


6.6.4 The Christoffel Symbols

The Christoffel symbols are defined by the derivatives of the base vectors, here the moving trihedron given by [a_1, a_2, a_3] = [t, n, b],

a_{i,1} = Γ^r_{i1} a_r | · a^k
a_{i,1} · a^k = Γ^r_{i1} a_r · a^k = Γ^r_{i1} δ^k_r,

and finally the definition of a single Christoffel symbol is given by

Γ^k_{i1} = a_{i,1} · a^k.

This definition implies that it is only necessary to compute the scalar products of the base vectors and their first derivatives, in order to determine the Christoffel symbols. For this reason, in a first step, all the first derivatives of the base vectors of the moving trihedron w.r.t. the variable φ are determined. The first derivative of the base vector a_1 is given by

a_{1,1} = (∂a_1/∂φ)(∂φ/∂Θ^1) = [−cr cos φ; −cr sin φ; 0] c = c^2 r [−cos φ; −sin φ; 0] = c^2 r a_2,

the first derivative of the second base vector a_2 is given by

a_{2,1} = (∂a_2/∂φ)(∂φ/∂Θ^1) = c [sin φ; −cos φ; 0],

and finally the first derivative of the third base vector a_3 is given by

a_{3,1} = (∂a_3/∂φ)(∂φ/∂Θ^1) = c [−ca cos φ; −ca sin φ; 0] = c^2 a [−cos φ; −sin φ; 0] = c^2 a a_2.

Because the moving trihedron a_i is an orthonormal basis, it is not necessary to differentiate between the co- and contravariant base vectors, i.e. a_i = a^i, and with this the definition of the Christoffel symbols is given by

Γ^k_{i1} = a_{i,1} · a^k = a_{i,1} · a_k.

The various Christoffel symbols are computed like this,

Γ^1_{11} = a_{1,1} · a_1 = c^2 r a_2 · a_1 = 0,
Γ^2_{11} = a_{1,1} · a_2 = c^2 r a_2 · a_2 = c^2 r,
Γ^3_{11} = a_{1,1} · a_3 = c^2 r a_2 · a_3 = 0,


Γ^1_{21} = a_{2,1} · a_1 = c [sin φ; −cos φ; 0] · [−cr sin φ; cr cos φ; −ca] = −c^2 r,
Γ^2_{21} = a_{2,1} · a_2 = c [sin φ; −cos φ; 0] · [−cos φ; −sin φ; 0] = 0,
Γ^3_{21} = a_{2,1} · a_3 = c [sin φ; −cos φ; 0] · [−ca sin φ; ca cos φ; cr] = −c^2 a,
Γ^1_{31} = a_{3,1} · a_1 = [−c^2 a cos φ; −c^2 a sin φ; 0] · [−cr sin φ; cr cos φ; −ca] = 0,
Γ^2_{31} = a_{3,1} · a_2 = [−c^2 a cos φ; −c^2 a sin φ; 0] · [−cos φ; −sin φ; 0] = c^2 a,
Γ^3_{31} = a_{3,1} · a_3 = [−c^2 a cos φ; −c^2 a sin φ; 0] · [−ca sin φ; ca cos φ; cr] = 0.

With these results the coefficient matrix of the Christoffel symbols could be represented by

[Γ^r_{i1}] = [0  c^2 r  0; −c^2 r  0  −c^2 a; 0  c^2 a  0].
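The whole table of Christoffel symbols can be generated symbolically from the trihedron in one line, using the chain rule d/dΘ^1 = c · d/dφ. A minimal sketch:

```python
import sympy as sp

phi, r, a = sp.symbols('phi r a', positive=True)
c = 1 / sp.sqrt(r**2 + a**2)

# Moving trihedron a_1 = t, a_2 = n, a_3 = b of the spiral staircase.
A = [sp.Matrix([-c*r*sp.sin(phi), c*r*sp.cos(phi), -c*a]),
     sp.Matrix([-sp.cos(phi), -sp.sin(phi), 0]),
     sp.Matrix([-c*a*sp.sin(phi), c*a*sp.cos(phi), c*r])]

# Gamma^k_i1 = a_i,1 . a_k, with a_i,1 = c * d a_i / d phi (orthonormal basis).
Gamma = sp.Matrix(3, 3, lambda i, k: sp.simplify((c * A[i].diff(phi)).dot(A[k])))

expected = sp.Matrix([[0, c**2*r, 0],
                      [-c**2*r, 0, -c**2*a],
                      [0, c**2*a, 0]])
assert sp.simplify(Gamma - expected) == sp.zeros(3, 3)
```

All nine entries come out constant in φ, confirming that only the four hand computations with nonzero results matter.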

6.6.5 Forces and Moments at an Arbitrary Sectional Area

An arbitrary line element of the spiral staircase is given by the points P and Q. These points are represented by the vectors of position x, and x + dx. At the point P the moving trihedron is given by the orthonormal base vectors t, n, and b. The forces −N, N + dN, and the moments −M, M + dM at the sectional areas are given as in the sketch of figure (6.15). The line element is loaded by a vector f dΘ^1. The equilibrium of forces in vector notation is given by

N + dN − N + f dΘ^1 = 0 ⇒ dN + f dΘ^1 = 0,

and with the first derivative of the force vector w.r.t. the variable Θ^1 represented by

dN = (∂N/∂Θ^1) dΘ^1 = N,_1 dΘ^1,

the equilibrium condition becomes

(N,_1 + f) dΘ^1 = 0.



Figure 6.15: An arbitrary line element with the forces and moments in its sectional areas.

This equation rewritten in index notation, with the force vector N = N^i a_i given in the basis of the moving trihedron at the point P, implies

((N^i a_i),_1 + f^i a_i) dΘ^1 = 0,

with the chain rule,

N^i,_1 a_i + N^i a_{i,1} + f^i a_i = 0,
N^i,_1 a_i + N^i Γ^k_{i1} a_k + f^i a_i = 0,

and after renaming the dummy indices,

(N^i,_1 + N^k Γ^i_{k1}) a_i + f^i a_i = 0.

With the covariant derivative defined by

N^i,_1 + N^k Γ^i_{k1} = N^i |_1,

the equilibrium condition could be rewritten in index notation only for the components,

N^i |_1 + f^i = N^i,_1 + N^k Γ^i_{k1} + f^i = 0.


This equation system represents three component equations, one for each direction of the basis of the moving trihedron; first for i = 1 and k = 1, …, 3,

N^1,_1 + N^1 Γ^1_{11} + N^2 Γ^1_{21} + N^3 Γ^1_{31} + f^1 = 0,

with the computed values for the Christoffel symbols from above,

N^1,_1 + 0 + N^2 (−c^2 r) + 0 + f^1 = 0,

and finally

N^1,_1 − c^2 r N^2 + f^1 = 0.

The second case for i = 2 and k = 1, …, 3 implies

N^2,_1 + N^1 Γ^2_{11} + N^2 Γ^2_{21} + N^3 Γ^2_{31} + f^2 = 0,
N^2,_1 + N^1 (c^2 r) + 0 + N^3 (c^2 a) + f^2 = 0,
N^2,_1 + c^2 r N^1 + c^2 a N^3 + f^2 = 0.

The third case for i = 3 and k = 1, …, 3 implies

N^3,_1 + N^1 Γ^3_{11} + N^2 Γ^3_{21} + N^3 Γ^3_{31} + f^3 = 0,
N^3,_1 + 0 + N^2 (−c^2 a) + 0 + f^3 = 0,
N^3,_1 − c^2 a N^2 + f^3 = 0.

All together the coefficient scheme of the equilibrium of forces in the basis of the moving trihedron is given by

[N^1,_1 − c^2 r N^2 + f^1; N^2,_1 + c^2 r N^1 + c^2 a N^3 + f^2; N^3,_1 − c^2 a N^2 + f^3] = [0; 0; 0].
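The expansion of the covariant derivative into the three component equations can be checked symbolically. A minimal sketch, with hypothetical unspecified component functions N^i(Θ^1) and loads f^i:

```python
import sympy as sp

Theta, r, a = sp.symbols('Theta1 r a', positive=True)
c = 1 / sp.sqrt(r**2 + a**2)

# Hypothetical smooth component functions (placeholders, not given data).
N1, N2, N3, f1, f2, f3 = [sp.Function(s)(Theta)
                          for s in ('N1', 'N2', 'N3', 'f1', 'f2', 'f3')]
N = sp.Matrix([N1, N2, N3])
f = sp.Matrix([f1, f2, f3])

Gamma = sp.Matrix([[0, c**2*r, 0],
                   [-c**2*r, 0, -c**2*a],
                   [0, c**2*a, 0]])

# N^i|_1 + f^i = N^i,1 + N^k Gamma^i_k1 + f^i.
# In the coefficient scheme [Gamma^r_i1] the row index is i, so Gamma^i_k1 = Gamma[k, i].
residual = sp.Matrix([N[i].diff(Theta)
                      + sum(N[k] * Gamma[k, i] for k in range(3)) + f[i]
                      for i in range(3)])

expected = sp.Matrix([N1.diff(Theta) - c**2*r*N2 + f1,
                      N2.diff(Theta) + c**2*r*N1 + c**2*a*N3 + f2,
                      N3.diff(Theta) - c**2*a*N2 + f3])
assert sp.simplify(residual - expected) == sp.zeros(3, 1)
```

The index bookkeeping (row index i, column index r) is the only subtlety; the check confirms the three component equations above.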

The equilibrium of moments in vector notation w.r.t. the point P is given by

−M + M + dM + a_1 dΘ^1 × (N + dN) + (1/2) a_1 dΘ^1 × f dΘ^1 = 0,
⇒ dM + a_1 dΘ^1 × N + a_1 × dN dΘ^1 + (1/2) a_1 × f dΘ^1 dΘ^1 = 0,

and in linear theory, i.e. neglecting the higher order terms, e.g. terms with dN dΘ^1 and with dΘ^1 dΘ^1, the equilibrium of moments is given by

dM + a_1 dΘ^1 × N = 0.

With the first derivative of the moment vector w.r.t. the variable Θ^1 given by

dM = (∂M/∂Θ^1) dΘ^1 = M,_1 dΘ^1,


206 Chapter 6. Exercises

the equilibrium condition becomes

(M_{,1} + a_1 \times N) \, d\Theta^1 = 0.

The cross product of the first base vector a_1 of the moving trihedron and the force vector N is given by

a_1 \times N = a_1 \times (N^i a_i) = N^1 a_1 \times a_1 + N^2 a_1 \times a_2 + N^3 a_1 \times a_3
= 0 + N^2 a_3 + N^3 (-a_2) = N^2 a_3 - N^3 a_2,

because the base vectors a_i form an orthonormal basis. The following steps are the same as the ones for the force equilibrium equations,

(M_{,1} + N^2 a_3 - N^3 a_2) \, d\Theta^1 = 0,
M^i_{,1} a_i + M^i a_{i,1} + N^2 a_3 - N^3 a_2 = 0,

and finally

(M^i_{,1} + M^k \Gamma^i_{\;k1}) a_i + N^2 a_3 - N^3 a_2 = M^i \big|_1 a_i + N^2 a_3 - N^3 a_2 = 0.

Again this equation system represents three equations, one for each direction of the basis of the moving trihedron, first for i = 1, i.e. in the direction of the base vector a_1, and k = 1, ..., 3,

M^1_{,1} + M^1 \Gamma^1_{\;11} + M^2 \Gamma^1_{\;21} + M^3 \Gamma^1_{\;31} = 0,
M^1_{,1} + 0 + M^2 (-c^2 r) + 0 = 0,
M^1_{,1} - c^2 r \, M^2 = 0,

in the direction of the base vector a_2, i.e. for i = 2 and k = 1, ..., 3,

M^2_{,1} + M^1 \Gamma^2_{\;11} + M^2 \Gamma^2_{\;21} + M^3 \Gamma^2_{\;31} - N^3 = 0,
M^2_{,1} + M^1 (c^2 r) + 0 + M^3 (c^2 a) - N^3 = 0,
M^2_{,1} + c^2 r \, M^1 + c^2 a \, M^3 - N^3 = 0,

and finally in the direction of the third base vector a_3, i.e. i = 3 and k = 1, ..., 3,

M^3_{,1} + M^1 \Gamma^3_{\;11} + M^2 \Gamma^3_{\;21} + M^3 \Gamma^3_{\;31} + N^2 = 0,
M^3_{,1} + 0 + M^2 (-c^2 a) + 0 + N^2 = 0,
M^3_{,1} - c^2 a \, M^2 + N^2 = 0.

All together the coefficient scheme of the equilibrium of moments in the basis of the moving trihedron is given by

\begin{bmatrix} M^1_{,1} - c^2 r \, M^2 \\ M^2_{,1} + c^2 r \, M^1 + c^2 a \, M^3 - N^3 \\ M^3_{,1} - c^2 a \, M^2 + N^2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.


6.6.6 Forces and Moments for the Given Load

The unknown forces N and moments M w.r.t. the natural basis a_i should be computed, i.e.

N = N^i a_i , and M = M^i a_i.

The load f is given in the global, Cartesian coordinate system by

f = f^i e_i,

and is acting at a point S, given by the angle \varphi/2 and the radius r_S, see also the free-body diagram given in figure (6.16). The free-body diagram and the load vector are given in the global

Figure 6.16: The free-body diagram of the loaded spiral staircase (fixed support at the bottom, point T at the top at height h/2; sectional forces N^i and moments M^i, resultant load components R^i, radii r, r_S and R, angles \varphi and \varphi/2).

Cartesian basis, i.e. the load vector is given by

R = R^i e_i , with \begin{bmatrix} R^1 \\ R^2 \\ R^3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -q \varphi r \end{bmatrix}.

First the equilibrium conditions of forces in the directions of the base vectors e_i of the global Cartesian coordinate system are established,

\sum F_{e_1} = 0 \; \Rightarrow \; N^1 + R^1 = 0 \; \Rightarrow \; N^1 = 0,
\sum F_{e_2} = 0 \; \Rightarrow \; N^2 + R^2 = 0 \; \Rightarrow \; N^2 = 0,
\sum F_{e_3} = 0 \; \Rightarrow \; N^3 + R^3 = 0 \; \Rightarrow \; N^3 = -R^3.


Then the equilibrium conditions of moments in the directions of the base vectors e_i of the global Cartesian coordinate system w.r.t. the point T are established,

\sum M^{(T)}_{e_1} = 0 \; \Rightarrow \; R^3 \left( -r \sin\left(\varphi - \tfrac{\pi}{2}\right) + r_S \sin\tfrac{\varphi}{2} \right) + M^1 = 0
\; \Rightarrow \; M^1 = R^3 \left( r \sin\left(\varphi - \tfrac{\pi}{2}\right) - r_S \sin\tfrac{\varphi}{2} \right),

\sum M^{(T)}_{e_2} = 0 \; \Rightarrow \; -R^3 \left( r_S \cos\tfrac{\varphi}{2} - r \cos\left(\varphi - \tfrac{\pi}{2}\right) \right) + M^2 = 0
\; \Rightarrow \; M^2 = R^3 \left( r_S \cos\tfrac{\varphi}{2} - r \cos\left(\varphi - \tfrac{\pi}{2}\right) \right),

\sum M^{(T)}_{e_3} = 0 \; \Rightarrow \; M^3 = 0.

Finally the resulting equations of equilibrium are given by

N = \begin{bmatrix} 0 \\ 0 \\ -R^3 \end{bmatrix} , and M = \begin{bmatrix} R^3 \left( r \sin\left(\varphi - \tfrac{\pi}{2}\right) - r_S \sin\tfrac{\varphi}{2} \right) \\ R^3 \left( r_S \cos\tfrac{\varphi}{2} - r \cos\left(\varphi - \tfrac{\pi}{2}\right) \right) \\ 0 \end{bmatrix}.

Now the problem is that the equilibrium conditions are given in the global Cartesian coordinate system, but the results should be described in the basis of the moving trihedron. For this reason it is necessary to transform the results from above, i.e. N = N^i e_i and M = M^i e_i, into N = N^i a_i and M = M^i a_i. The Cartesian basis e_i should be transformed by a tensor S into the basis a_i,

S = S^r_{\;s} \, e_r \otimes e^s \; \Rightarrow \; a_i = S e_i = S^r_{\;s} \delta^s_i e_r = S^r_{\;i} e_r,

i.e. in matrix notation

[a_i] = [S^r_{\;i}]^T [e_r] , with [S^r_{\;i}] = \begin{bmatrix} -c r \sin\varphi & -\cos\varphi & -c a \sin\varphi \\ c r \cos\varphi & -\sin\varphi & c a \cos\varphi \\ -c a & 0 & c r \end{bmatrix},

see also the definitions of the base vectors of the moving trihedron in the sections above. Then the retransformation from the basis of the moving trihedron into the global Cartesian basis should be given by S^{-1} = T, with

e_i = T a_i , with T = T^r_{\;s} \, a_r \otimes a^s,
e_i = (T^r_{\;s} \, a_r \otimes a^s) a_i = T^r_{\;s} \delta^s_i a_r = T^r_{\;i} a_r.

Comparing these transformation relations implies

a_i = S^r_{\;i} e_r = S^r_{\;i} T^m_{\;r} a_m \quad | \cdot a^k,
\delta^k_i = S^r_{\;i} T^m_{\;r} \delta^k_m,
\delta^k_i = T^k_{\;r} S^r_{\;i},


and in matrix notation

[\delta^k_i] = [T^k_{\;r}][S^r_{\;i}] \; \Rightarrow \; [T^k_{\;r}] = [S^r_{\;i}]^{-1}.

Because both bases, i.e. e_i and a_i, are orthonormal bases, the tensor T must describe an orthonormal rotation. The tensor of an orthonormal rotation is characterized by

T^{-1} = T^T , resp. T = S^{-1} = S^T,

i.e. in matrix notation

[T^r_{\;i}] = [S^r_{\;i}]^{-1} = [S^r_{\;i}]^T = \begin{bmatrix} -c r \sin\varphi & c r \cos\varphi & -c a \\ -\cos\varphi & -\sin\varphi & 0 \\ -c a \sin\varphi & c a \cos\varphi & c r \end{bmatrix}.

With this the relations between the base vectors of the different bases can be given by

a_i = S^r_{\;i} e_r , resp. e_r = T^k_{\;r} a_k,

and e.g. in detail

[e_r] = [T^k_{\;r}]^T [a_k] = \begin{bmatrix} -c r \sin\varphi & -\cos\varphi & -c a \sin\varphi \\ c r \cos\varphi & -\sin\varphi & c a \cos\varphi \\ -c a & 0 & c r \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix},

or

e_1 = -c r \sin\varphi \, a_1 - \cos\varphi \, a_2 - c a \sin\varphi \, a_3,
e_2 = c r \cos\varphi \, a_1 - \sin\varphi \, a_2 + c a \cos\varphi \, a_3,
e_3 = -c a \, a_1 + c r \, a_3.

With this it is easy to compute the final results, i.e. to transform N = N^i e_i and M = M^i e_i into N = N^i a_i and M = M^i a_i. With the known transformation e_i = T^k_{\;i} a_k the force vector can be represented by

N = N^i e_i = N^i T^k_{\;i} a_k = N^k a_k.

Comparing only the coefficients implies

N^k = T^k_{\;i} N^i,

and with this the coefficients of the force vector N w.r.t. the basis of the moving trihedron in the sectional area at point T are given by

\begin{bmatrix} N^1 \\ N^2 \\ N^3 \end{bmatrix} = \begin{bmatrix} -c r \sin\varphi & c r \cos\varphi & -c a \\ -\cos\varphi & -\sin\varphi & 0 \\ -c a \sin\varphi & c a \cos\varphi & c r \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ -R^3 \end{bmatrix} \; \Rightarrow \; \begin{bmatrix} N^1 \\ N^2 \\ N^3 \end{bmatrix} = \begin{bmatrix} R^3 c a \\ 0 \\ -R^3 c r \end{bmatrix}.

By the analogous comparison the coefficients of the moment vector M w.r.t. the basis of the moving trihedron in the sectional area at point T are given by

\begin{bmatrix} M^1 \\ M^2 \\ M^3 \end{bmatrix} = \begin{bmatrix} -c r \sin\varphi & c r \cos\varphi & -c a \\ -\cos\varphi & -\sin\varphi & 0 \\ -c a \sin\varphi & c a \cos\varphi & c r \end{bmatrix} \begin{bmatrix} M^1 \\ M^2 \\ 0 \end{bmatrix} \; \Rightarrow \; \begin{bmatrix} M^1 \\ M^2 \\ M^3 \end{bmatrix} = \begin{bmatrix} c r \left( -M^1 \sin\varphi + M^2 \cos\varphi \right) \\ -\left( M^1 \cos\varphi + M^2 \sin\varphi \right) \\ c a \left( -M^1 \sin\varphi + M^2 \cos\varphi \right) \end{bmatrix},

where the column in the matrix product contains the Cartesian components M^1, M^2 and M^3 = 0 computed above.
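The transformation above can be verified numerically: with c = 1/sqrt(r^2 + a^2), so that c r and c a make the columns of [S^r_i] unit vectors, S is an orthonormal rotation and T = S^T maps the Cartesian force components (0, 0, -R^3) onto (R^3 c a, 0, -R^3 c r). A minimal sketch; the values of r, a, phi and R3 are assumptions for illustration only.

```python
import numpy as np

# Transformation between the global Cartesian basis e_i and the moving
# trihedron a_i. cr = c*r and ca = c*a with c = 1/sqrt(r^2 + a^2);
# r, a, phi, R3 are sample values assumed only for this check.
r, a, phi, R3 = 2.0, 1.0, 0.7, 3.0
c = 1.0 / np.sqrt(r**2 + a**2)
cr, ca = c * r, c * a

S = np.array([[-cr*np.sin(phi), -np.cos(phi), -ca*np.sin(phi)],
              [ cr*np.cos(phi), -np.sin(phi),  ca*np.cos(phi)],
              [-ca,              0.0,          cr            ]])
T = S.T  # orthonormal rotation: the inverse equals the transpose

assert np.allclose(T @ S, np.eye(3))            # T = S^{-1}
assert np.isclose(abs(np.linalg.det(S)), 1.0)   # rotation (up to orientation)

# Cartesian force components (0, 0, -R3) mapped into the trihedron basis:
N = T @ np.array([0.0, 0.0, -R3])
assert np.allclose(N, [R3*ca, 0.0, -R3*cr])
```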


6.7 Tensors, Stresses and Cylindrical Coordinates

6.7.1 The Problem

The given cylindrical shell is described by the parameter lines \Theta^i, with i = 1, 2, 3. The relations

Figure 6.17: The given cylindrical shell (with the Cartesian basis e_i, the parameter lines \Theta^i, the covariant base vectors g_i and the normal vector n at the point P; dimensions in [cm]).

between the Cartesian coordinates and the parameters, i.e. the curvilinear coordinates, are given by the vector of position x,

x^1 = \left( 5 - \Theta^2 \right) \sin\left( \frac{\pi}{a} \Theta^1 \right),
x^2 = -\Theta^3 , and
x^3 = -\left( 5 - \Theta^2 \right) \cos\left( \frac{\pi}{a} \Theta^1 \right),

where a = 8.0 is a constant length. At the point P defined by

P = P\left( \Theta^1 = \frac{8}{10};\; \Theta^2 = \frac{2}{10};\; \Theta^3 = 2 \right),


the stress tensor \sigma (for the geometrically linear theory) is given by

\sigma = \sigma^{ij} g_i \otimes g_j , with [\sigma^{ij}] = \begin{bmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{bmatrix}.

The vector of position x in the Cartesian coordinate system is given by

x = x^i e_i,

and the normal vector n in the Cartesian coordinate system at the point P is given by

n = \frac{g_1 + g_3}{|g_1 + g_3|} = n^r e_r.

• Compute the covariant base vectors g_i and the contravariant base vectors g^i at the point P.

• Determine the coefficients of the tensor \sigma w.r.t. the basis in mixed formulation, (g_i \otimes g^k), and w.r.t. the Cartesian basis, (e_i \otimes e_k).

• Work out the physical components of the contravariant stress tensor.

• Determine the invariants,

I_\sigma = \operatorname{tr} \sigma,
II_\sigma = \tfrac{1}{2} \left( (\operatorname{tr} \sigma)^2 - \operatorname{tr} (\sigma)^2 \right),
III_\sigma = \det \sigma,

for the three different representations of the stress tensor \sigma.

• Calculate the principal stresses and the principal stress directions.

• Compute the specific deformation energy W_{spec} = \tfrac{1}{2} \sigma : \varepsilon at the point P, with

\varepsilon = \frac{1}{100} \left( g_{ik} - \delta_{ik} \right) g^i \otimes g^k.

• Determine the stress vector t^n at the point P w.r.t. the sectional area given by the normal vector n. Furthermore calculate the normal stress t_\perp, and the resulting shear stress t_\parallel for this direction.


6.7.2 Co- and Contravariant Base Vectors

The vector of position x is given w.r.t. the Cartesian base vectors e_i. The covariant base vectors g_i of the curvilinear coordinate system are defined by the first derivatives of the vector of position w.r.t. the parameters \Theta^i of the curvilinear coordinate lines in space, i.e.

g_k = \frac{\partial x}{\partial \Theta^k} = \frac{\partial x^i}{\partial \Theta^k} e_i.

With this definition the covariant base vectors are computed by

g_1 = \frac{\partial x}{\partial \Theta^1} = \left( 5 - \Theta^2 \right) \frac{\pi}{a} \cos\left( \frac{\pi}{a} \Theta^1 \right) e_1 + 0 \, e_2 + \left( 5 - \Theta^2 \right) \frac{\pi}{a} \sin\left( \frac{\pi}{a} \Theta^1 \right) e_3,
g_2 = \frac{\partial x}{\partial \Theta^2} = -\sin\left( \frac{\pi}{a} \Theta^1 \right) e_1 + 0 \, e_2 + \cos\left( \frac{\pi}{a} \Theta^1 \right) e_3,
g_3 = \frac{\partial x}{\partial \Theta^3} = 0 \, e_1 + (-1) \, e_2 + 0 \, e_3,

and finally the covariant base vectors of the curvilinear coordinate system are given by

g_1 = \left( 5 - \Theta^2 \right) \frac{\pi}{a} \begin{bmatrix} \cos\left( \frac{\pi}{a} \Theta^1 \right) \\ 0 \\ \sin\left( \frac{\pi}{a} \Theta^1 \right) \end{bmatrix} , \quad g_2 = \begin{bmatrix} -\sin\left( \frac{\pi}{a} \Theta^1 \right) \\ 0 \\ \cos\left( \frac{\pi}{a} \Theta^1 \right) \end{bmatrix} , \quad and \quad g_3 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix},

or by

g_1 = \frac{1}{b} \begin{bmatrix} \cos c \\ 0 \\ \sin c \end{bmatrix} , \quad g_2 = \begin{bmatrix} -\sin c \\ 0 \\ \cos c \end{bmatrix} , \quad and \quad g_3 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix},

with the abbreviations

b = \frac{a}{(5 - \Theta^2)\pi} = \frac{5}{3\pi} , \quad and \quad c = \frac{\pi}{a} \Theta^1 = \frac{\pi}{10}.

In order to determine the contravariant base vectors of the curvilinear coordinate system, it is necessary to multiply the covariant base vectors with the contravariant metric coefficients. The contravariant metric coefficients g^{ik} can be computed as the inverse of the covariant metric coefficients g_{ik},

[g^{ik}] = [g_{ik}]^{-1}.

So the first step is to compute the covariant metric coefficients g_{ik},

g_{ik} = g_i \cdot g_k , \quad i.e. \quad [g_{ik}] = \begin{bmatrix} \frac{1}{b^2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.

The relationship between the co- and contravariant metric coefficients is used in its inverse form, in order to compute the contravariant metric coefficients,

[g^{ik}] = [g_{ik}]^{-1} , \quad resp. \quad [g^{ik}] = \begin{bmatrix} b^2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.
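The analytic base vectors and the metric coefficients can be checked by differentiating the position vector numerically. A minimal sketch of that check at the point P; the finite-difference step h is an assumed tolerance parameter.

```python
import numpy as np

# Covariant base vectors g_k = dx/dTheta^k of the cylindrical shell,
# checked against central finite differences of the position vector.
a = 8.0

def x(t1, t2, t3):
    return np.array([(5 - t2)*np.sin(np.pi/a*t1),
                     -t3,
                     -(5 - t2)*np.cos(np.pi/a*t1)])

t = np.array([0.8, 0.2, 2.0])     # the point P
b = a / ((5 - t[1])*np.pi)        # = 5/(3*pi)
c = np.pi/a*t[0]                  # = pi/10

g1 = (1/b)*np.array([np.cos(c), 0, np.sin(c)])
g2 = np.array([-np.sin(c), 0, np.cos(c)])
g3 = np.array([0.0, -1.0, 0.0])

h = 1e-6
for k, g in enumerate((g1, g2, g3)):
    dt = np.zeros(3); dt[k] = h
    g_fd = (x(*(t + dt)) - x(*(t - dt))) / (2*h)
    assert np.allclose(g, g_fd, atol=1e-6)

# covariant metric coefficients g_ik = g_i . g_k = diag(1/b^2, 1, 1)
G = np.array([[v @ w for w in (g1, g2, g3)] for v in (g1, g2, g3)])
assert np.allclose(G, np.diag([1/b**2, 1.0, 1.0]))
```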


The contravariant base vectors g^i are defined by

g^i = g^{ik} g_k,

and for i = 1, ..., 3 the relations between the co- and contravariant base vectors are given by

g^1 = b^2 g_1 , \quad g^2 = g_2 , \quad and \quad g^3 = g_3,

and finally with the abbreviations

g^1 = b \begin{bmatrix} \cos c \\ 0 \\ \sin c \end{bmatrix} , \quad g^2 = \begin{bmatrix} -\sin c \\ 0 \\ \cos c \end{bmatrix} , \quad and \quad g^3 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix},

or in detail

g^1 = \frac{a}{(5 - \Theta^2)\pi} \begin{bmatrix} \cos\left( \frac{\pi}{a} \Theta^1 \right) \\ 0 \\ \sin\left( \frac{\pi}{a} \Theta^1 \right) \end{bmatrix} , \quad g^2 = \begin{bmatrix} -\sin\left( \frac{\pi}{a} \Theta^1 \right) \\ 0 \\ \cos\left( \frac{\pi}{a} \Theta^1 \right) \end{bmatrix} , \quad and \quad g^3 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}.

At the given point P the co- and contravariant base vectors g_i and g^i are given by

g_1 = \frac{3\pi}{5} \begin{bmatrix} \cos\frac{\pi}{10} \\ 0 \\ \sin\frac{\pi}{10} \end{bmatrix} , \quad g^1 = \frac{5}{3\pi} \begin{bmatrix} \cos\frac{\pi}{10} \\ 0 \\ \sin\frac{\pi}{10} \end{bmatrix} ,

g_2 = g^2 = \begin{bmatrix} -\sin\frac{\pi}{10} \\ 0 \\ \cos\frac{\pi}{10} \end{bmatrix} , \quad and \quad g_3 = g^3 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}.
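The duality relation g^i . g_k = delta^i_k of these vectors at P can be confirmed directly, a minimal sketch using the values of b and c from above:

```python
import numpy as np

# Duality of the co- and contravariant base vectors at P: g^i . g_k = delta^i_k.
b, c = 5/(3*np.pi), np.pi/10

g_co  = [(1/b)*np.array([np.cos(c), 0, np.sin(c)]),
         np.array([-np.sin(c), 0, np.cos(c)]),
         np.array([0.0, -1.0, 0.0])]
g_con = [b*np.array([np.cos(c), 0, np.sin(c)]),
         np.array([-np.sin(c), 0, np.cos(c)]),
         np.array([0.0, -1.0, 0.0])]

D = np.array([[gi @ gk for gk in g_co] for gi in g_con])
assert np.allclose(D, np.eye(3))
```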

6.7.3 Coefficients of the Various Stress Tensors

The stress tensor \sigma is given by the covariant basis g_i of the curvilinear coordinate system and the contravariant coefficients \sigma^{ij}. The stress tensor w.r.t. the mixed basis is determined by

\sigma = \sigma^{im} g_i \otimes g_m , \quad and \quad g_m = g_{mk} g^k,
\sigma = \sigma^{im} g_{mk} \, g_i \otimes g^k = \sigma^i_{\;k} \, g_i \otimes g^k,

i.e. the coefficient matrix is given by

[\sigma^i_{\;k}] = [\sigma^{im}][g_{mk}] = \begin{bmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{bmatrix} \begin{bmatrix} \frac{1}{b^2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.


Solving this matrix product implies the coefficient matrix [\sigma^i_{\;k}] w.r.t. the basis g_i \otimes g^k, i.e. the stress tensor w.r.t. the mixed basis is given by

\sigma = \sigma^i_{\;k} \, g_i \otimes g^k , \quad with \quad [\sigma^i_{\;k}] = \begin{bmatrix} \frac{6}{b^2} & 0 & 2 \\ 0 & 0 & 0 \\ \frac{2}{b^2} & 0 & 5 \end{bmatrix},

and finally at the point P the stress tensor w.r.t. the mixed basis is given by

\sigma = \sigma^i_{\;k} \, g_i \otimes g^k , \quad with \quad [\sigma^i_{\;k}] = \begin{bmatrix} \frac{54\pi^2}{25} & 0 & 2 \\ 0 & 0 & 0 \\ \frac{18\pi^2}{25} & 0 & 5 \end{bmatrix}.
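This matrix product is easy to reproduce; a minimal sketch evaluating 1/b^2 = 9 pi^2/25 at the point P:

```python
import numpy as np

# Mixed coefficients [sigma^i_k] = [sigma^im][g_mk] at the point P.
b2_inv = (3*np.pi/5)**2   # 1/b^2 = 9*pi^2/25

sig_con = np.array([[6.0, 0.0, 2.0],
                    [0.0, 0.0, 0.0],
                    [2.0, 0.0, 5.0]])
g_cov = np.diag([b2_inv, 1.0, 1.0])

sig_mixed = sig_con @ g_cov
assert np.allclose(sig_mixed,
                   [[54*np.pi**2/25, 0, 2],
                    [0, 0, 0],
                    [18*np.pi**2/25, 0, 5]])
```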

The relationships between the Cartesian coordinate system and the curvilinear coordinate system are described by

g_i = B e_i , \quad and \quad B = B_{mn} \, e_m \otimes e_n,
g_i = (B_{mn} \, e_m \otimes e_n) e_i = B_{mn} \delta_{ni} e_m = B_{mi} e_m,

and because of the identity of the co- and contravariant base vectors in the Cartesian coordinate system,

g_i = B_{ki} e_k = B_{ki} e^k.

This equation represented by the coefficient matrices is given by

[g_i] = [B_{ki}]^T [e_k] , \quad with \quad [B_{ki}] = \begin{bmatrix} \frac{1}{b}\cos c & -\sin c & 0 \\ 0 & 0 & -1 \\ \frac{1}{b}\sin c & \cos c & 0 \end{bmatrix},

see also the definition of the covariant base vectors above. The stress tensor \sigma w.r.t. the Cartesian basis is computed by

\sigma = \sigma^{ik} g_i \otimes g_k , \quad and \quad g_i = B_{ri} e_r , \quad resp. \quad g_k = B_{sk} e_s,
\sigma = \sigma^{ik} B_{ri} B_{sk} \, e_r \otimes e_s,

and with the abbreviation

\sigma^{rs} = B_{ri} \sigma^{ik} B_{sk},

the coefficient matrix of the stress tensor w.r.t. the Cartesian basis is defined by

[\sigma^{rs}] = [B_{ri}][\sigma^{ik}][B_{sk}]^T = \begin{bmatrix} \frac{1}{b}\cos c & -\sin c & 0 \\ 0 & 0 & -1 \\ \frac{1}{b}\sin c & \cos c & 0 \end{bmatrix} \begin{bmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{bmatrix} \begin{bmatrix} \frac{1}{b}\cos c & 0 & \frac{1}{b}\sin c \\ -\sin c & 0 & \cos c \\ 0 & -1 & 0 \end{bmatrix}.


Solving this matrix product implies the coefficient matrix of the stress tensor w.r.t. the Cartesian basis, i.e. the stress tensor w.r.t. the Cartesian basis is given by

\sigma = \sigma^{ik} e_i \otimes e_k , \quad with \quad [\sigma^{rs}] = \begin{bmatrix} \frac{6}{b^2}\cos^2 c & -\frac{2}{b}\cos c & \frac{6}{b^2}\sin c \cos c \\ -\frac{2}{b}\cos c & 5 & -\frac{2}{b}\sin c \\ \frac{6}{b^2}\sin c \cos c & -\frac{2}{b}\sin c & \frac{6}{b^2}\sin^2 c \end{bmatrix},

and finally at the point P the stress tensor w.r.t. the Cartesian basis is given by

\sigma = \sigma^{ik} e_i \otimes e_k , \quad with \quad [\sigma^{rs}] = \begin{bmatrix} \frac{54\pi^2}{25}\cos^2\frac{\pi}{10} & -\frac{6\pi}{5}\cos\frac{\pi}{10} & \frac{54\pi^2}{25}\sin\frac{\pi}{10}\cos\frac{\pi}{10} \\ -\frac{6\pi}{5}\cos\frac{\pi}{10} & 5 & -\frac{6\pi}{5}\sin\frac{\pi}{10} \\ \frac{54\pi^2}{25}\sin\frac{\pi}{10}\cos\frac{\pi}{10} & -\frac{6\pi}{5}\sin\frac{\pi}{10} & \frac{54\pi^2}{25}\sin^2\frac{\pi}{10} \end{bmatrix}.
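The triple matrix product can be checked numerically against this closed form, a minimal sketch with the columns of [B_ki] holding the Cartesian components of the covariant base vectors:

```python
import numpy as np

# Cartesian coefficients [sigma^rs] = [B][sigma^ik][B]^T at the point P.
b, c = 5/(3*np.pi), np.pi/10

B = np.array([[np.cos(c)/b, -np.sin(c),  0.0],
              [0.0,          0.0,       -1.0],
              [np.sin(c)/b,  np.cos(c),  0.0]])
sig = np.array([[6.0, 0.0, 2.0],
                [0.0, 0.0, 0.0],
                [2.0, 0.0, 5.0]])

sig_cart = B @ sig @ B.T
expected = np.array(
    [[6/b**2*np.cos(c)**2,        -2/b*np.cos(c), 6/b**2*np.sin(c)*np.cos(c)],
     [-2/b*np.cos(c),              5.0,          -2/b*np.sin(c)],
     [6/b**2*np.sin(c)*np.cos(c), -2/b*np.sin(c), 6/b**2*np.sin(c)**2]])
assert np.allclose(sig_cart, expected)
```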

6.7.4 Physical Components of the Contravariant Stress Tensor

The physical components of a tensor are defined by

{}^{*}\tau^{ik} = \tau^{ik} \sqrt{g_{(i)(i)}} \sqrt{g_{(k)(k)}},

see also the lecture notes. The physical components {}^{*}\tau^{ik} of a tensor \tau = \tau^{ik} g_i \otimes g_k account for the fact that the base vectors of an arbitrary curvilinear coordinate system do not have to be unit vectors. In Cartesian coordinate systems the base vectors e_i do not influence the physical value of the components of the coefficient matrix of a tensor, because the base vectors are unit vectors and orthogonal to each other. But in general coordinates the base vectors do influence the physical value of the components of the coefficient matrix, because in general they are neither unit vectors nor orthogonal to each other. Here the contravariant stress tensor is given by

\sigma = \sigma^{ij} g_i \otimes g_j , \quad with \quad [\sigma^{ik}] = \begin{bmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{bmatrix}.

In order to compute the physical components of the stress tensor \sigma, it is necessary to evaluate the definition given above. The square roots of the co- and contravariant metric coefficients g_{(i)(i)} and g^{(i)(i)} are

\sqrt{g_{11}} = \frac{1}{b} , \quad \sqrt{g_{22}} = 1 , \quad \sqrt{g_{33}} = 1,
\sqrt{g^{11}} = b , \quad \sqrt{g^{22}} = 1 , \quad \sqrt{g^{33}} = 1.

Finally the coefficient matrix of the physical components of the contravariant stress tensor \sigma = \sigma^{ik} g_i \otimes g_k is given by

{}^{*}\sigma^{ik} = \sigma^{ik} \sqrt{g_{(i)(i)}} \sqrt{g_{(k)(k)}} , \quad and \quad [{}^{*}\sigma^{ik}] = \begin{bmatrix} \frac{6}{b^2} & 0 & \frac{2}{b} \\ 0 & 0 & 0 \\ \frac{2}{b} & 0 & 5 \end{bmatrix}.
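The scaling can be written compactly as an element-wise product with the outer product of the base-vector lengths; a minimal sketch reproducing the matrix above:

```python
import numpy as np

# Physical components *sigma^ik = sigma^ik * sqrt(g_(i)(i)) * sqrt(g_(k)(k)):
# each index picks up the length of the corresponding (non-unit) base vector.
b = 5/(3*np.pi)
sqrt_g = np.array([1/b, 1.0, 1.0])   # sqrt(g_11), sqrt(g_22), sqrt(g_33)

sig_con = np.array([[6.0, 0.0, 2.0],
                    [0.0, 0.0, 0.0],
                    [2.0, 0.0, 5.0]])
sig_phys = sig_con * np.outer(sqrt_g, sqrt_g)
assert np.allclose(sig_phys, [[6/b**2, 0, 2/b],
                              [0, 0, 0],
                              [2/b, 0, 5]])
```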


6.7.5 Invariants

The stress tensor can be described w.r.t. the three following bases, i.e.

• \sigma = \sigma^{ik} g_i \otimes g_k, w.r.t. the covariant basis of the curvilinear coordinate system,

• \sigma = \sigma^i_{\;k} g_i \otimes g^k, w.r.t. the mixed basis of the curvilinear coordinate system, and

• \sigma = \sigma^{ik} e_i \otimes e_k, w.r.t. the basis of the Cartesian coordinate system.

The first invariant I_\sigma of the stress tensor is defined by the trace of the stress tensor, i.e.

I_\sigma = \operatorname{tr} \sigma = \sigma : 1.

• The first invariant w.r.t. the covariant basis of the curvilinear coordinate system is given by

I_\sigma = \operatorname{tr} \sigma = (\sigma^{ik} g_i \otimes g_k) : (g_{ml} \, g^l \otimes g^m) = \sigma^{ik} g_{ml} \delta^l_i \delta^m_k = \sigma^{ik} g_{ki} = \sigma^i_{\;i},
I_\sigma = \operatorname{tr} \sigma = \sigma^i_{\;i} = \frac{6}{b^2} + 5.

• The first invariant w.r.t. the mixed basis of the curvilinear coordinate system is given by

I_\sigma = \operatorname{tr} \sigma = (\sigma^i_{\;k} g_i \otimes g^k) : (g_{ml} \, g^l \otimes g^m) = \sigma^i_{\;k} g_{ml} \delta^l_i g^{km} = \sigma^i_{\;k} \delta^k_i = \sigma^i_{\;i},
I_\sigma = \operatorname{tr} \sigma = \sigma^i_{\;i} = \frac{6}{b^2} + 5.

• The first invariant w.r.t. the basis of the Cartesian coordinate system is given by

I_\sigma = \operatorname{tr} \sigma = (\sigma^{ik} e_i \otimes e_k) : (\delta_{ml} \, e_m \otimes e_l) = \sigma^{ik} \delta_{ml} \delta_{im} \delta_{kl} = \sigma^{ii},
I_\sigma = \operatorname{tr} \sigma = \sigma^{ii} = \frac{6}{b^2} + 5.

The second invariant II_\sigma of the stress tensor is defined by half the difference between the square of the trace and the trace of the square of the stress tensor, i.e.

II_\sigma = \frac{1}{2} \left( (\operatorname{tr} \sigma)^2 - \operatorname{tr} \sigma^2 \right).

• First, in order to determine \operatorname{tr} \sigma^2, it is necessary to compute \sigma^2, i.e.

\sigma^2 = (\sigma^{ik} g_i \otimes g_k)(\sigma^{rs} g_r \otimes g_s) = \sigma^{ik} \sigma^{rs} g_{kr} \, g_i \otimes g_s,

and then

\operatorname{tr} \sigma^2 = \sigma^2 : 1 = (\sigma^{ik} \sigma^{rs} g_{kr} \, g_i \otimes g_s) : (g_{ml} \, g^l \otimes g^m) = \sigma^{ik} \sigma^{rs} g_{kr} g_{ml} \delta^l_i \delta^m_s = \sigma^{ik} \sigma^{rs} g_{kr} g_{si} = \sigma^i_{\;r} \sigma^r_{\;i},

\operatorname{tr} \sigma^2 = \left( \frac{6}{b^2} \right)^2 + 2 \left( 2 \, \frac{2}{b^2} \right) + 25 = \frac{36}{b^4} + \frac{8}{b^2} + 25,


or in just one step

\operatorname{tr} \sigma^2 = \sigma^T : \sigma = (\sigma^{ki} g_i \otimes g_k) : (\sigma^{rs} g_r \otimes g_s) = \sigma^{ki} \sigma^{rs} g_{ir} g_{ks} = \sigma^k_{\;r} \sigma^r_{\;k}.

Finally the second invariant w.r.t. the covariant basis of the curvilinear coordinate system is given by

II_\sigma = \frac{1}{2} \left( (\operatorname{tr} \sigma)^2 - \operatorname{tr} \sigma^2 \right) = \frac{1}{2} \left[ \left( \frac{6}{b^2} + 5 \right)^2 - \left( \frac{36}{b^4} + \frac{8}{b^2} + 25 \right) \right] = \frac{1}{2} \left[ \frac{36}{b^4} + \frac{60}{b^2} + 25 - \frac{36}{b^4} - \frac{8}{b^2} - 25 \right],

II_\sigma = \frac{26}{b^2}.

• Again, first, in order to determine \operatorname{tr} \sigma^2 it is necessary to compute \sigma^2, i.e.

\sigma^2 = (\sigma^i_{\;k} g_i \otimes g^k)(\sigma^r_{\;s} g_r \otimes g^s) = \sigma^i_{\;k} \sigma^r_{\;s} \delta^k_r \, g_i \otimes g^s = \sigma^i_{\;k} \sigma^k_{\;s} \, g_i \otimes g^s,

and then

\operatorname{tr} \sigma^2 = \sigma^2 : 1 = (\sigma^i_{\;k} \sigma^k_{\;s} \, g_i \otimes g^s) : (g_{ml} \, g^l \otimes g^m) = \sigma^i_{\;k} \sigma^k_{\;s} g_{ml} \delta^l_i g^{sm} = \sigma^i_{\;k} \sigma^k_{\;i},

or in just one step

\operatorname{tr} \sigma^2 = \sigma^T : \sigma = (\sigma^i_{\;k} \, g^k \otimes g_i) : (\sigma^r_{\;s} \, g_r \otimes g^s) = \sigma^i_{\;k} \sigma^r_{\;s} \delta^k_r \delta^s_i = \sigma^i_{\;k} \sigma^k_{\;i}.

This intermediate result is the same as above, and the first invariants, i.e. the trace \operatorname{tr} \sigma and (\operatorname{tr} \sigma)^2, too, are equal for the different bases, i.e. all further steps will be the same as above. Combining all this finally implies that the second invariant w.r.t. the mixed basis of the curvilinear coordinate system is given by

II_\sigma = \frac{26}{b^2} , too.

• Again, first, in order to determine \operatorname{tr} \sigma^2 it is necessary to compute \sigma^2, i.e.

\sigma^2 = (\sigma^{ik} e_i \otimes e_k)(\sigma^{rs} e_r \otimes e_s) = \sigma^{ik} \sigma^{rs} \delta_{kr} \, e_i \otimes e_s = \sigma^{ik} \sigma^{ks} \, e_i \otimes e_s,

and then

\operatorname{tr} \sigma^2 = \sigma^2 : 1 = (\sigma^{ik} \sigma^{ks} \, e_i \otimes e_s) : (\delta_{lm} \, e_l \otimes e_m) = \sigma^{ik} \sigma^{ks} \delta_{lm} \delta_{il} \delta_{sm} = \sigma^{ik} \sigma^{ki},

or in just one step

\operatorname{tr} \sigma^2 = \sigma^T : \sigma = (\sigma^{ki} e_i \otimes e_k) : (\sigma^{rs} e_r \otimes e_s) = \sigma^{ki} \sigma^{rs} \delta_{ir} \delta_{ks} = \sigma^{sr} \sigma^{rs}.


Solving this implies the same intermediate results,

\operatorname{tr} \sigma^2 = \frac{36}{b^4}\cos^4 c + 2\left( \frac{4}{b^2}\cos^2 c \right) + 2\left( \frac{36}{b^4}\sin^2 c \cos^2 c \right) + 2\left( \frac{4}{b^2}\sin^2 c \right) + \frac{36}{b^4}\sin^4 c + 25
= \frac{36}{b^4}\left( \cos^4 c + 2\sin^2 c \cos^2 c + \sin^4 c \right) + \frac{8}{b^2}\left( \cos^2 c + \sin^2 c \right) + 25,
\operatorname{tr} \sigma^2 = \frac{36}{b^4} + \frac{8}{b^2} + 25,

i.e. the further steps are the same as above. And with this finally the second invariant w.r.t. the basis of the Cartesian coordinate system is given by

II_\sigma = \frac{26}{b^2} , too.

The third invariant III_\sigma of the stress tensor is defined by the determinant of the stress tensor, i.e.

III_\sigma = \det \sigma = \frac{1}{6} (\operatorname{tr} \sigma)^3 - \frac{1}{2} (\operatorname{tr} \sigma)(\operatorname{tr} \sigma^2) + \frac{1}{3} (\sigma^T \sigma^T) : \sigma.

• In order to compute the third invariant it is necessary to solve the three terms in the summation of the definition of the third invariant. The first term is given by

\frac{1}{6} (\operatorname{tr} \sigma)^3 = \frac{1}{6} \left( \frac{6}{b^2} + 5 \right)^3 = \frac{1}{6} \left( \frac{216}{b^6} + \frac{540}{b^4} + \frac{450}{b^2} + 125 \right),
\frac{1}{6} (\operatorname{tr} \sigma)^3 = \frac{36}{b^6} + \frac{90}{b^4} + \frac{75}{b^2} + \frac{125}{6},

and the second term is given by

-\frac{1}{2} (\operatorname{tr} \sigma)(\operatorname{tr} \sigma^2) = -\frac{1}{2} \left( \frac{6}{b^2} + 5 \right)\left( \frac{36}{b^4} + \frac{8}{b^2} + 25 \right),
-\frac{1}{2} (\operatorname{tr} \sigma)(\operatorname{tr} \sigma^2) = -\left( \frac{108}{b^6} + \frac{114}{b^4} + \frac{95}{b^2} + \frac{125}{2} \right).

The third term is not so easy to compute, because it does not include the trace, but a scalar product and a tensor product with the transpose of the stress tensor, i.e.

\frac{1}{3} (\sigma^T \sigma^T) : \sigma = \frac{1}{3} \upsilon^T : \sigma , \quad with \quad \upsilon^T = \sigma^T \sigma^T = (\sigma \sigma)^T,

or in index notation

(\sigma^T \sigma^T) : \sigma = (\sigma^{ki} g_i \otimes g_k)(\sigma^{sr} g_r \otimes g_s) : (\sigma^{lm} g_l \otimes g_m)
= (\sigma^{ki} \sigma^{sr} g_{kr} \, g_i \otimes g_s) : (\sigma^{lm} g_l \otimes g_m) = (\sigma^{ki} \sigma^s_{\;k} \, g_i \otimes g_s) : (\sigma^{lm} g_l \otimes g_m)
= \sigma^{ki} \sigma^s_{\;k} \sigma^{lm} g_{il} g_{sm} = \sigma^k_{\;l} \sigma^s_{\;k} \sigma^l_{\;s},
(\sigma^T \sigma^T) : \sigma = \sigma^k_{\;l} \sigma^l_{\;s} \sigma^s_{\;k}.


In order to solve this equation, first the new tensor \upsilon^T is computed by

\upsilon^T = \sigma^T \sigma^T = (\sigma^{ki} g_i \otimes g_k)(\sigma^{sr} g_r \otimes g_s) = \sigma^{ki} \sigma^{sr} g_{kr} \, g_i \otimes g_s,
\upsilon^{si} \, g_i \otimes g_s = \sigma^{ki} \sigma^s_{\;k} \, g_i \otimes g_s = \sigma^s_{\;k} \sigma^{ki} \, g_i \otimes g_s,

and the coefficient matrix of this new tensor is given by

[\upsilon^{si}] = [\upsilon^{is}]^T = [\sigma^s_{\;k}][\sigma^{ki}],

[\upsilon^{si}] = \begin{bmatrix} \frac{6}{b^2} & 0 & 2 \\ 0 & 0 & 0 \\ \frac{2}{b^2} & 0 & 5 \end{bmatrix} \begin{bmatrix} 6 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 5 \end{bmatrix} = \begin{bmatrix} \frac{36}{b^2} + 4 & 0 & \frac{12}{b^2} + 10 \\ 0 & 0 & 0 \\ \frac{12}{b^2} + 10 & 0 & \frac{4}{b^2} + 25 \end{bmatrix}.

In order to solve the scalar product \upsilon^T : \sigma, given by

\upsilon^T : \sigma = (\sigma^T \sigma^T) : \sigma = (\upsilon^{si} \, g_i \otimes g_s) : (\sigma^{lm} g_l \otimes g_m),
\upsilon^T : \sigma = \upsilon^{si} \sigma^{lm} g_{il} g_{sm} = \upsilon^{si} g_{il} \sigma^l_{\;s} = \upsilon^s_{\;l} \sigma^l_{\;s},

first the coefficient matrix of the tensor \upsilon w.r.t. the mixed basis is computed by

[\upsilon^s_{\;l}] = [\upsilon^{si}][g_{il}],

[\upsilon^s_{\;l}] = \begin{bmatrix} \frac{36}{b^2} + 4 & 0 & \frac{12}{b^2} + 10 \\ 0 & 0 & 0 \\ \frac{12}{b^2} + 10 & 0 & \frac{4}{b^2} + 25 \end{bmatrix} \begin{bmatrix} \frac{1}{b^2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} \frac{36}{b^4} + \frac{4}{b^2} & 0 & \frac{12}{b^2} + 10 \\ 0 & 0 & 0 \\ \frac{12}{b^4} + \frac{10}{b^2} & 0 & \frac{4}{b^2} + 25 \end{bmatrix},

and then the final result for the third term is given by

\frac{1}{3} (\sigma^T \sigma^T) : \sigma = \frac{1}{3} \upsilon^T : \sigma = \frac{1}{3} \sigma^k_{\;l} \sigma^l_{\;s} \sigma^s_{\;k} = \frac{1}{3} \upsilon^s_{\;l} \sigma^l_{\;s}
= \frac{1}{3} \left[ \left( \frac{36}{b^4} + \frac{4}{b^2} \right) \frac{6}{b^2} + \left( \frac{12}{b^2} + 10 \right) \frac{2}{b^2} + 2 \left( \frac{12}{b^4} + \frac{10}{b^2} \right) + 5 \left( \frac{4}{b^2} + 25 \right) \right],
\frac{1}{3} (\sigma^T \sigma^T) : \sigma = \frac{72}{b^6} + \frac{24}{b^4} + \frac{20}{b^2} + \frac{125}{3}.

Then the complete third invariant w.r.t. the covariant basis of the curvilinear coordinate system is given by

III_\sigma = \det \sigma = \frac{1}{6} (\operatorname{tr} \sigma)^3 - \frac{1}{2} (\operatorname{tr} \sigma)(\operatorname{tr} \sigma^2) + \frac{1}{3} (\sigma^T \sigma^T) : \sigma
= \left( \frac{36}{b^6} + \frac{90}{b^4} + \frac{75}{b^2} + \frac{125}{6} \right) - \left( \frac{108}{b^6} + \frac{114}{b^4} + \frac{95}{b^2} + \frac{125}{2} \right) + \left( \frac{72}{b^6} + \frac{24}{b^4} + \frac{20}{b^2} + \frac{125}{3} \right)
= \frac{1}{b^6} (36 - 108 + 72) + \frac{1}{b^4} (90 - 114 + 24) + \frac{1}{b^2} (75 - 95 + 20) + \left( \frac{125}{6} - \frac{125}{2} + \frac{125}{3} \right),

III_\sigma = \det \sigma = 0.


• For the third invariant w.r.t. the mixed basis of the curvilinear coordinate system the first and the second term are already known, because all scalar quantities, like \operatorname{tr} \sigma, are still the same, see also the first case of determining the third invariant. It is only necessary to have a look at the third term given by

(\sigma^T \sigma^T) : \sigma = (\sigma^i_{\;k} \, g^k \otimes g_i)(\sigma^r_{\;s} \, g^s \otimes g_r) : (\sigma^l_{\;m} \, g_l \otimes g^m)
= (\sigma^i_{\;k} \sigma^r_{\;s} \delta^s_i \, g^k \otimes g_r) : (\sigma^l_{\;m} \, g_l \otimes g^m) = (\sigma^s_{\;k} \sigma^r_{\;s} \, g^k \otimes g_r) : (\sigma^l_{\;m} \, g_l \otimes g^m)
= \sigma^s_{\;k} \sigma^r_{\;s} \sigma^l_{\;m} \delta^k_l \delta^m_r = \sigma^s_{\;l} \sigma^m_{\;s} \sigma^l_{\;m},
(\sigma^T \sigma^T) : \sigma = \sigma^s_{\;l} \sigma^l_{\;m} \sigma^m_{\;s},

i.e. this scalar product is the same as above. With all three terms of the summation given by the same scalar quantities as above, the third invariant w.r.t. the mixed basis of the curvilinear coordinate system is given by

III_\sigma = \det \sigma = 0.

• The third case, i.e. the third invariant w.r.t. the basis of the Cartesian coordinate system, is very easy to solve, because in a Cartesian coordinate system it is sufficient to compute the determinant of the coefficient matrix of the tensor. For example the determinant of the coefficient matrix \sigma^{rs} is expanded about the first row,

\det \sigma = \det [\sigma^{rs}] = \begin{vmatrix} \frac{6}{b^2}\cos^2 c & -\frac{2}{b}\cos c & \frac{6}{b^2}\sin c \cos c \\ -\frac{2}{b}\cos c & 5 & -\frac{2}{b}\sin c \\ \frac{6}{b^2}\sin c \cos c & -\frac{2}{b}\sin c & \frac{6}{b^2}\sin^2 c \end{vmatrix}
= \frac{6}{b^2}\cos^2 c \left( \frac{30}{b^2}\sin^2 c - \frac{4}{b^2}\sin^2 c \right) + \frac{2}{b}\cos c \left( -\frac{12}{b^3}\sin^2 c \cos c + \frac{12}{b^3}\sin^2 c \cos c \right) + \frac{6}{b^2}\sin c \cos c \left( \frac{4}{b^2}\sin c \cos c - \frac{30}{b^2}\sin c \cos c \right)
= \frac{156}{b^4}\sin^2 c \cos^2 c - 0 - \frac{156}{b^4}\sin^2 c \cos^2 c,

\det \sigma = \det [\sigma^{rs}] = 0.

The third invariant w.r.t. the basis of the Cartesian coordinate system is given by

III_\sigma = \det \sigma = 0.

The final result is that the invariants of an arbitrary tensor \sigma can be computed w.r.t. any basis and remain the same, i.e.

I_\sigma = \operatorname{tr} \sigma = \sigma : 1 = \frac{6}{b^2} + 5,

II_\sigma = \frac{1}{2} \left( (\operatorname{tr} \sigma)^2 - \operatorname{tr} \sigma^2 \right) = \frac{1}{2} \left( (\sigma : 1)^2 - \sigma^T : \sigma \right) = \frac{26}{b^2},

III_\sigma = \det \sigma = \frac{1}{6} (\operatorname{tr} \sigma)^3 - \frac{1}{2} (\operatorname{tr} \sigma)(\operatorname{tr} \sigma^2) + \frac{1}{3} (\sigma^T \sigma^T) : \sigma
= \frac{1}{6} (\sigma : 1)^3 - \frac{1}{2} (\sigma : 1)(\sigma^T : \sigma) + \frac{1}{3} (\sigma^T \sigma^T) : \sigma = 0.

): σ = 0.


6.7.6 Principal Stress and Principal Directions

Starting with the stress tensor w.r.t. the Cartesian basis, and a normal unit vector, i.e.

\sigma = \sigma^{ik} e_i \otimes e_k , \quad and \quad n_0 = n^r e_r = n_r e^r,

the eigenvalue problem is given by

\sigma n_0 = \lambda n_0,

and the left-hand side is rewritten in index notation,

(\sigma^{ik} e_i \otimes e_k) n^r e_r = \sigma^{ik} n^r \delta_{kr} e_i = \sigma^{ir} n^r e_i.

The eigenvalue problem in index notation w.r.t. the Cartesian basis is given by

\sigma^{ik} n^k e_i = \lambda n^k e_k \quad | \cdot e_l,
\sigma^{ik} n^k \delta_{il} = \lambda n^k \delta_{kl},
\sigma^{lk} n^k = \lambda n^k \delta_{lk},

and finally the characteristic equation is given by

(\sigma^{lk} - \lambda \delta^{lk}) n^k = 0,
\det (\sigma^{ik} - \lambda \delta^{ik}) = 0.

The characteristic equation of the eigenvalue problem can be rewritten with the already known invariants, i.e.

\det (\sigma^{ik} - \lambda \delta^{ik}) = III_\sigma - II_\sigma \lambda + I_\sigma \lambda^2 - \lambda^3 = 0,
\lambda^3 - I_\sigma \lambda^2 + II_\sigma \lambda - III_\sigma = 0,
\lambda^3 - \left( \frac{6}{b^2} + 5 \right) \lambda^2 + \frac{26}{b^2} \lambda - 0 = 0,
\lambda \left( \lambda^2 - \left( \frac{6}{b^2} + 5 \right) \lambda + \frac{26}{b^2} \right) = 0,

this implies the first eigenvalue, i.e. the first principal stress,

\lambda_1 = 0.

The other eigenvalues are computed by solving the quadratic equation

\lambda^2 - \left( \frac{6}{b^2} + 5 \right) \lambda + \frac{26}{b^2} = \lambda^2 - d \lambda + e = 0,
\lambda_{2/3} = \frac{1}{2} \left( \frac{6}{b^2} + 5 \right) \pm \sqrt{ \frac{1}{4} \left( \frac{6}{b^2} + 5 \right)^2 - \frac{26}{b^2} } = \frac{1}{2} d \pm \sqrt{ \frac{d^2}{4} - e },


this implies the second and third eigenvalue, i.e. the second and third principal stress,

\lambda_2 = \frac{1}{2} d + \sqrt{ \frac{d^2}{4} - e } , \quad and \quad \lambda_3 = \frac{1}{2} d - \sqrt{ \frac{d^2}{4} - e } , \quad with \quad d = \frac{6}{b^2} + 5 , \quad and \quad e = \frac{26}{b^2}.

In order to compute the principal directions the stress tensor w.r.t. the curvilinear basis, and a normal unit vector, i.e.

\sigma = \sigma^{ik} g_i \otimes g_k , \quad and \quad n = n^r g_r,

are used, then the eigenvalue problem is given by

\sigma n = \lambda n,

and the left-hand side is rewritten in index notation,

\sigma n = (\sigma^{ik} g_i \otimes g_k) n^r g_r = \sigma^{ik} n^r g_{kr} g_i = \sigma^i_{\;r} n^r g_i.

The eigenvalue problem in index notation w.r.t. the curvilinear basis is given by

\sigma^i_{\;k} n^k g_i = \lambda n^i g_i = \lambda n^k \delta^i_k g_i,
(\sigma^i_{\;k} - \lambda \delta^i_k) n^k g_i = 0,
(\sigma^i_{\;k} - \lambda \delta^i_k) n^k = 0,

and in matrix notation

\begin{bmatrix} \frac{6}{b^2} - \lambda_i & 0 & 2 \\ 0 & -\lambda_i & 0 \\ \frac{2}{b^2} & 0 & 5 - \lambda_i \end{bmatrix} \begin{bmatrix} n_{i1} \\ n_{i2} \\ n_{i3} \end{bmatrix} = 0.

Combining the first and the last row of this system of equations yields an equation determining the coefficient n_{i3} depending on the associated principal stress \lambda_i, i.e.

\left[ (5 - \lambda_i) \left( \frac{6}{b^2} - \lambda_i \right) - \frac{4}{b^2} \right] n_{i3} = \left[ \frac{26}{b^2} - \lambda_i \left( 5 + \frac{6}{b^2} \right) + \lambda_i^2 \right] n_{i3} = \left[ e - \lambda_i d + \lambda_i^2 \right] n_{i3} = 0.

The coefficient n_{i2} can be computed from the second line of this system of equations, i.e.

-\lambda_i n_{i2} = 0,

and then the coefficient n_{i1} depending on the associated principal stress \lambda_i and the already known coefficient n_{i3} is given by

n_{i1} = -\frac{2 n_{i3}}{\frac{6}{b^2} - \lambda_i}.

The associated principal directions to the principal stresses are computed by inserting the values \lambda_i in the equations above.


• The principal stress \lambda_1 = 0 implies

n_{13} = 0 \; \Rightarrow \; n_{11} = 0 \; \Rightarrow \; n_{12} = \alpha \in \mathbb{R},
n_1 = n^1 g_1 + n^2 g_2 + n^3 g_3 = 0 + \alpha g_2 + 0 = \alpha g_2,
n_1 = \alpha \begin{bmatrix} -\sin c \\ 0 \\ \cos c \end{bmatrix}.

• The principal stress \lambda_2 = \frac{1}{2} d + \sqrt{\frac{d^2}{4} - e} implies

n_{23} = \beta \in \mathbb{R} \; \Rightarrow \; n_{22} = 0 \; \Rightarrow \; n_{21} = -\frac{1}{2} b^2 (5 - \lambda_2) \beta = \gamma \beta,
n_2 = n^1 g_1 + n^2 g_2 + n^3 g_3 = \beta \gamma g_1 + 0 + \beta g_3,
n_2 = \beta \begin{bmatrix} \frac{\gamma}{b} \cos c \\ -1 \\ \frac{\gamma}{b} \sin c \end{bmatrix} , \quad with \quad \gamma = -\frac{1}{2} b^2 (5 - \lambda_2).

• The principal stress \lambda_3 = \frac{1}{2} d - \sqrt{\frac{d^2}{4} - e} implies

n_3 = \beta \begin{bmatrix} \frac{\gamma}{b} \cos c \\ -1 \\ \frac{\gamma}{b} \sin c \end{bmatrix} , \quad with \quad \gamma = -\frac{1}{2} b^2 (5 - \lambda_3).
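The principal stresses can also be obtained directly as eigenvalues of the Cartesian coefficient matrix and compared with the closed-form roots; a minimal sketch:

```python
import numpy as np

# Principal stresses: eigenvalues of the Cartesian coefficient matrix versus
# the closed-form roots lambda_1 = 0 and lambda_{2/3} = d/2 +- sqrt(d^2/4 - e).
b, c = 5/(3*np.pi), np.pi/10
B = np.array([[np.cos(c)/b, -np.sin(c),  0.0],
              [0.0,          0.0,       -1.0],
              [np.sin(c)/b,  np.cos(c),  0.0]])
sig_cart = B @ np.array([[6.0, 0, 2], [0, 0, 0], [2, 0, 5]]) @ B.T

d, e = 6/b**2 + 5, 26/b**2
root = np.sqrt(d**2/4 - e)
lam_closed = np.sort([0.0, d/2 + root, d/2 - root])

lam_num = np.sort(np.linalg.eigvalsh(sig_cart))  # symmetric matrix
assert np.allclose(lam_num, lam_closed)
```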

6.7.7 Deformation Energy

The specific deformation energy is defined by

W_{spec} = \frac{1}{2} \sigma : \varepsilon , \quad with \quad \sigma = \sigma^{ik} g_i \otimes g_k , \quad and \quad \varepsilon = \frac{1}{100} \left( g_{ik} - \delta_{ik} \right) g^i \otimes g^k,

and solving this product yields

W_{spec} = \frac{1}{2} \sigma : \varepsilon = \frac{1}{2} (\sigma^{lm} g_l \otimes g_m) : \left( \frac{1}{100} \left( g_{ik} - \delta_{ik} \right) g^i \otimes g^k \right)
= \frac{1}{200} \sigma^{lm} \left( g_{ik} - \delta_{ik} \right) \delta^i_l \delta^k_m = \frac{1}{200} \sigma^{ik} \left( g_{ik} - \delta_{ik} \right),

W_{spec} = \frac{1}{2} \sigma : \varepsilon = \frac{1}{200} \left( \sigma^i_{\;i} - \sigma^{ii} \right),

because the Kronecker delta \delta_{ik} is given w.r.t. the Cartesian basis, i.e.

\delta_{ik} = \delta_i^{\;k} = \delta^{ik} = \delta^i_{\;k}.


With the trace of the stress tensor w.r.t. the mixed basis of the curvilinear coordinate system, and the trace of the stress tensor w.r.t. the covariant basis of the curvilinear coordinate system, given by

\sigma^i_{\;i} = \frac{6}{b^2} + 5 , \quad and \quad \sigma^{ii} = 11,

the specific deformation energy is given by

W_{spec} = \frac{1}{2} \sigma : \varepsilon = \frac{1}{200} \left( \frac{6}{b^2} - 6 \right),

and finally at the point P,

W_{spec} = \frac{1}{200} \left( \frac{54\pi^2}{25} - 6 \right) \approx 0.0766.
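The numerical value follows from the two traces in one line; a minimal sketch:

```python
import numpy as np

# Specific deformation energy W_spec = (1/200) (sigma^i_i - sigma^ii) at P.
b = 5/(3*np.pi)
tr_mixed = 6/b**2 + 5   # sigma^i_i = sigma^ik g_ki = 54*pi^2/25 + 5
tr_plain = 6 + 0 + 5    # sigma^ii  = sigma^ik delta_ik = 11

W = (tr_mixed - tr_plain) / 200
assert np.isclose(W, (54*np.pi**2/25 - 6) / 200)
```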

6.7.8 Normal and Shear Stress

The normal vector n is defined by

n = \frac{g_1 + g_3}{|g_1 + g_3|},

and this implies with

g_1 + g_3 = \begin{bmatrix} \frac{1}{b}\cos c \\ -1 \\ \frac{1}{b}\sin c \end{bmatrix} , \quad and \quad |g_1 + g_3| = \sqrt{\frac{1}{b^2} + 1},

finally

n = \frac{1}{\sqrt{b^2 + 1}} \begin{bmatrix} \cos c \\ -b \\ \sin c \end{bmatrix} = n^r e_r = n_r e^r.

With the stress tensor \sigma w.r.t. the Cartesian basis,

\sigma = \sigma^{ik} e_i \otimes e_k,

the stress vector t^n at the point P is given by

t^n = \sigma n = (\sigma^{ik} e_i \otimes e_k) n^r e_r = \sigma^{ik} n^r \delta_{kr} e_i = \sigma^{ik} n^k e_i,

and in index notation

t^n = t^i e_i = \sigma^{ik} n^k e_i \; \Rightarrow \; t^i = \sigma^{ik} n^k.

The matrix multiplication of the coefficient matrices,

[t^i] = [\sigma^{ik}][n^k] = \begin{bmatrix} \frac{6}{b^2}\cos^2 c & -\frac{2}{b}\cos c & \frac{6}{b^2}\sin c \cos c \\ -\frac{2}{b}\cos c & 5 & -\frac{2}{b}\sin c \\ \frac{6}{b^2}\sin c \cos c & -\frac{2}{b}\sin c & \frac{6}{b^2}\sin^2 c \end{bmatrix} \frac{1}{\sqrt{b^2 + 1}} \begin{bmatrix} \cos c \\ -b \\ \sin c \end{bmatrix},


implies finally the stress vector t^n at the point P,

t^n = \frac{1}{\sqrt{b^2 + 1}} \begin{bmatrix} \left( \frac{6}{b^2} + 2 \right)\cos c \\ -\frac{2}{b} - 5b \\ \left( \frac{6}{b^2} + 2 \right)\sin c \end{bmatrix} = \frac{1}{b^2\sqrt{b^2 + 1}} \begin{bmatrix} (6 + 2b^2)\cos c \\ -2b - 5b^3 \\ (6 + 2b^2)\sin c \end{bmatrix}.

The normal stress vector is defined by

t_\perp = \sigma n , \quad with \quad \sigma = |t_\perp| = t^n \cdot n,

and the shear stress vector is defined by

t_\parallel = t^n - t_\perp.

The absolute value of the normal stress vector t_\perp is computed by

\sigma = t^n \cdot n = \frac{1}{b^2\sqrt{b^2 + 1}} \begin{bmatrix} (6 + 2b^2)\cos c \\ -2b - 5b^3 \\ (6 + 2b^2)\sin c \end{bmatrix} \cdot \frac{1}{\sqrt{b^2 + 1}} \begin{bmatrix} \cos c \\ -b \\ \sin c \end{bmatrix},

\sigma = t^n \cdot n = \frac{5b^4 + 4b^2 + 6}{b^2 (b^2 + 1)}.

This implies the normal stress vector

t_\perp = \sigma n = \frac{5b^4 + 4b^2 + 6}{b^2 (b^2 + 1)\sqrt{b^2 + 1}} \begin{bmatrix} \cos c \\ -b \\ \sin c \end{bmatrix},

and the shear stress vector

t_\parallel = t^n - t_\perp = \frac{1}{b^2\sqrt{b^2 + 1}} \begin{bmatrix} (6 + 2b^2)\cos c \\ -2b - 5b^3 \\ (6 + 2b^2)\sin c \end{bmatrix} - \frac{5b^4 + 4b^2 + 6}{b^2 (b^2 + 1)\sqrt{b^2 + 1}} \begin{bmatrix} \cos c \\ -b \\ \sin c \end{bmatrix},

t_\parallel = \frac{4 - 3b^2}{(b^2 + 1)\sqrt{b^2 + 1}} \begin{bmatrix} \cos c \\ \frac{1}{b} \\ \sin c \end{bmatrix}.
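The whole decomposition can be reproduced in a few lines, including the check that the shear part is tangential to the sectional area; a minimal sketch at the point P:

```python
import numpy as np

# Stress vector t^n = sigma n at P and its split into normal and shear parts.
b, c = 5/(3*np.pi), np.pi/10
B = np.array([[np.cos(c)/b, -np.sin(c),  0.0],
              [0.0,          0.0,       -1.0],
              [np.sin(c)/b,  np.cos(c),  0.0]])
sig_cart = B @ np.array([[6.0, 0, 2], [0, 0, 0], [2, 0, 5]]) @ B.T

n = np.array([np.cos(c), -b, np.sin(c)]) / np.sqrt(b**2 + 1)
assert np.isclose(n @ n, 1.0)             # unit normal

t_n = sig_cart @ n
sigma_n = t_n @ n                         # normal stress t^n . n
t_perp = sigma_n * n
t_par = t_n - t_perp                      # shear stress vector

assert np.isclose(sigma_n, (5*b**4 + 4*b**2 + 6) / (b**2*(b**2 + 1)))
assert np.isclose(t_par @ n, 0.0)         # shear part is tangential
expected_par = (4 - 3*b**2) / ((b**2 + 1)*np.sqrt(b**2 + 1)) \
               * np.array([np.cos(c), 1/b, np.sin(c)])
assert np.allclose(t_par, expected_par)
```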


Appendix A

Formulary

A.1 Formulary Tensor Algebra

A.1.1 Basis

e_i − orthonormal Cartesian base vectors \in \mathbb{E}^n
g_i − covariant base vectors \in \mathbb{E}^n
g^i − contravariant base vectors \in \mathbb{E}^n

A.1.2 Metric Coefficients, Raising and Lowering of Indices

metric coefficients

g_{ik} = g_i \cdot g_k
g^{ik} = g^i \cdot g^k
\delta^i_k = g^i \cdot g_k
\delta_{ik} = e_i \cdot e_k \quad (\delta^{ik} = e^i \cdot e^k)

raising and lowering of indices

g_i = g_{ik} g^k
g^i = g^{ik} g_k
g^i = \delta^i_k g^k
e_i = \delta_{ik} e^k \quad (e^i = \delta^{ik} e_k)
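These rules are easy to make concrete with a small numerical example; the non-orthogonal base vectors below are an assumption chosen only to illustrate raising and lowering with the metric:

```python
import numpy as np

# Raising and lowering of indices: g^i = g^{ik} g_k and g_i = g_{ik} g^k,
# illustrated with an assumed (non-orthogonal, invertible) basis.
g1 = np.array([1.0, 0.2, 0.0])
g2 = np.array([0.0, 1.5, 0.0])
g3 = np.array([0.3, 0.0, 1.0])
g_co = np.array([g1, g2, g3])   # rows: covariant base vectors

G = g_co @ g_co.T               # g_ik = g_i . g_k
G_inv = np.linalg.inv(G)        # g^ik
g_con = G_inv @ g_co            # raising: g^i = g^{ik} g_k

assert np.allclose(g_con @ g_co.T, np.eye(3))   # duality g^i . g_k = delta^i_k
assert np.allclose(G @ g_con, g_co)             # lowering: g_i = g_{ik} g^k
```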

A.1.3 Vectors in a General Basis

v = v^i g_i = v_i g^i

A.1.4 Second Order Tensors in a General Basis

T = T^{ik} g_i \otimes g_k = T_{ik} g^i \otimes g^k = T^i_{\;k} g_i \otimes g^k = T_i^{\;k} g^i \otimes g_k


A.1.5 Linear Mappings with Tensors

for the dyadic product, i.e. a tensor A of rank one,

A = u \otimes v
A \cdot w = (u \otimes v) \cdot w = (v \cdot w) u
= (u^i g_i \otimes v^k g_k) \cdot w^m g_m = (v^k g_k \cdot w^m g_m) u^i g_i
= (v^k w^m g_{km}) u^i g_i = v^k w_k u^i g_i

for a general tensor T (\det T^{ik} \neq 0)

T = T^{ik} g_i \otimes g_k
T \cdot w = (T^{ik} g_i \otimes g_k) \cdot w^m g_m = T^{ik} w_k g_i

A.1.6 Unit Tensor (Identity, Metric Tensor)

u = 1 \cdot u \quad with \quad 1 = g^i \otimes g_i = g_j \otimes g^j = \delta^j_i \, g_j \otimes g^i = g_{ij} \, g^i \otimes g^j = g^{ij} \, g_i \otimes g_j

u = (g_i \otimes g^i) \cdot u_k g^k = u_k g^{ik} g_i = u_k g^k = u^i g_i = u

A.1.7 Tensor Product

u = A \cdot w \quad and \quad w = B \cdot v \; \Rightarrow \; u = A \cdot B \cdot v = C \cdot v
C = A \cdot B = (A^{ik} g_i \otimes g_k) \cdot (B_{mn} g^m \otimes g^n)
= A^{ik} B_{mn} \delta^m_k \, g_i \otimes g^n = A^{ik} B_{kn} \, g_i \otimes g^n

A.1.8 Scalar Product or Inner Product

\alpha = A : B
= (A^{ik} g_i \otimes g_k) : (B^{mn} g_m \otimes g_n) = A^{ik} B^{mn} (g_i \cdot g_m)(g_k \cdot g_n)
= A^{ik} B^{mn} g_{im} g_{kn} = A^{ik} B_{ik}


A.1.9 Transpose of a Tensor

u \cdot (T \cdot v) = (v \cdot T^T) \cdot u
with T = T^{ik} g_i \otimes g_k
and T^T = T^{ik} g_k \otimes g_i = T^{ki} g_i \otimes g_k
(u \otimes v)^T = v \otimes u
(A \cdot B)^T = B^T \cdot A^T \quad with \quad (A^T)^T = A

A.1.10 Computing the Tensor Components

T^{ik} = g^i \cdot (T \cdot g^k) ; \quad T_{ik} = g_i \cdot (T \cdot g_k) ; \quad T^i_{\;k} = g^i \cdot (T \cdot g_k)
T^{ik} = g^{im} g^{kn} T_{mn} ; \quad T^i_{\;k} = g^{im} T_{mk} ; \quad etc.

A.1.11 Orthogonal Tensor, Inverse of a Tensor

orthogonal tensor

Q^T = Q^{-1} ; \quad Q^T \cdot Q = Q^{-1} \cdot Q = 1 = Q \cdot Q^T ; \quad \det Q = \pm 1
Q^i_{\;k} = (Q_k^{\;i})^{-1} ; \quad Q_{mi} Q_{mk} = \delta_{ik}
v = Q \cdot u \; \rightarrow \; (Q \cdot u) \cdot (Q \cdot u) = u \cdot u , \quad i.e. \quad v \cdot v = u \cdot u

A.1.12 Trace of a Tensor

$\operatorname{tr}(\mathbf{a} \otimes \mathbf{b}) := \mathbf{a} \cdot \mathbf{b} \, , \quad \text{i.e.} \quad \operatorname{tr}(\mathbf{a} \otimes \mathbf{b}) = a^i \mathbf{g}_i \cdot b^k \mathbf{g}_k = a^i b_i$

$\operatorname{tr} \mathbf{T} = \mathbf{T} : \mathbf{1} = \left(T^{ik}\, \mathbf{g}_i \otimes \mathbf{g}_k\right) : \left(\mathbf{g}^m \otimes \mathbf{g}_m\right) = T^{ik}\, g_{im}\, \delta_k^m = T^{ik}\, g_{ik} = T^i{}_i$

$\operatorname{tr}(\mathbf{A} \cdot \mathbf{B}) = \mathbf{A} : \mathbf{B}^T \quad \text{or} \quad \mathbf{A} : \mathbf{B} = \operatorname{tr}\left(\mathbf{A} \cdot \mathbf{B}^T\right) = \operatorname{tr}\left(\mathbf{B}^T \cdot \mathbf{A}\right)$

$\operatorname{tr}(\mathbf{A} \cdot \mathbf{B}) = \operatorname{tr}(\mathbf{B} \cdot \mathbf{A}) = \mathbf{B} : \mathbf{A}^T$

$\operatorname{tr}(\mathbf{A} \cdot \mathbf{B}) = \operatorname{tr}\left(\left[A^{ik}\, \mathbf{g}_i \otimes \mathbf{g}_k\right] \cdot \left[B_{mn}\, \mathbf{g}^m \otimes \mathbf{g}^n\right]\right) = \operatorname{tr}\left(A^{ik} B_{kn}\, \mathbf{g}_i \otimes \mathbf{g}^n\right) = A^{ik} B_{kn}\, \mathbf{g}_i \cdot \mathbf{g}^n = A^{ik} B_{ki} \quad \text{etc.}$
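These trace rules hold for arbitrary second-order tensors; a quick numerical check with Cartesian components (assuming NumPy; the random matrices merely stand in for the tensors) is:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
tr = np.trace

assert np.isclose(tr(A @ B), tr(B @ A))        # tr(A.B) = tr(B.A)
assert np.isclose(tr(A @ B), np.sum(A * B.T))  # tr(A.B) = A : B^T
assert np.isclose(tr(A @ B), np.sum(B * A.T))  # tr(A.B) = B : A^T
assert np.isclose(np.sum(A * B), tr(A @ B.T))  # A : B = tr(A.B^T)
assert np.isclose(np.sum(A * B), tr(B.T @ A))  # A : B = tr(B^T.A)
print("trace identities verified")
```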

A.1.13 Changing the Basis

transformation $\mathbf{g}_i \rightarrow \bar{\mathbf{g}}_i \; ; \; \mathbf{g}^k \rightarrow \bar{\mathbf{g}}^k$

$\bar{\mathbf{g}}_i = \mathbf{1} \cdot \bar{\mathbf{g}}_i = \left(\mathbf{g}_k \otimes \mathbf{g}^k\right) \cdot \bar{\mathbf{g}}_i = \left(\mathbf{g}^k \cdot \bar{\mathbf{g}}_i\right) \mathbf{g}_k = A^k{}_i\, \mathbf{g}_k$
$\bar{\mathbf{g}}_i = \mathbf{A} \cdot \mathbf{g}_i$ with $\mathbf{A} = \left(\mathbf{g}^k \cdot \bar{\mathbf{g}}_m\right) \mathbf{g}_k \otimes \mathbf{g}^m = A^k{}_m\, \mathbf{g}_k \otimes \mathbf{g}^m$

$\bar{\mathbf{g}}_i = \left(\mathbf{g}_k \cdot \bar{\mathbf{g}}_i\right) \mathbf{g}^k = A_{ki}\, \mathbf{g}^k$
$\bar{\mathbf{g}}_i = \mathbf{A} \cdot \mathbf{g}_i$ with $\mathbf{A} = \left(\mathbf{g}_k \cdot \bar{\mathbf{g}}_m\right) \mathbf{g}^k \otimes \mathbf{g}^m = A_{km}\, \mathbf{g}^k \otimes \mathbf{g}^m$

$\bar{\mathbf{g}}^i = \mathbf{1} \cdot \bar{\mathbf{g}}^i = \left(\mathbf{g}^k \otimes \mathbf{g}_k\right) \cdot \bar{\mathbf{g}}^i = \left(\mathbf{g}_k \cdot \bar{\mathbf{g}}^i\right) \mathbf{g}^k = B^i{}_k\, \mathbf{g}^k$
$\bar{\mathbf{g}}^i = \mathbf{B} \cdot \mathbf{g}^i$ with $\mathbf{B} = \left(\mathbf{g}_k \cdot \bar{\mathbf{g}}^m\right) \mathbf{g}^k \otimes \mathbf{g}_m = B^m{}_k\, \mathbf{g}^k \otimes \mathbf{g}_m$

$\bar{\mathbf{g}}^i = \left(\mathbf{g}^k \cdot \bar{\mathbf{g}}^i\right) \mathbf{g}_k = B^{ki}\, \mathbf{g}_k$
$\bar{\mathbf{g}}^i = \mathbf{B} \cdot \mathbf{g}^i$ with $\mathbf{B} = \left(\mathbf{g}^k \cdot \bar{\mathbf{g}}^m\right) \mathbf{g}_k \otimes \mathbf{g}_m = B^{km}\, \mathbf{g}_k \otimes \mathbf{g}_m$

inverse relations $\bar{\mathbf{g}}_i \rightarrow \mathbf{g}_i \; ; \; \bar{\mathbf{g}}^k \rightarrow \mathbf{g}^k$ (the overbarred tensors $\bar{\mathbf{A}}, \bar{\mathbf{B}}$ belong to the inverse transformation)

$\mathbf{g}_i = \mathbf{1} \cdot \mathbf{g}_i = \left(\bar{\mathbf{g}}_k \otimes \bar{\mathbf{g}}^k\right) \cdot \mathbf{g}_i = \left(\bar{\mathbf{g}}^k \cdot \mathbf{g}_i\right) \bar{\mathbf{g}}_k = \bar{A}^k{}_i\, \bar{\mathbf{g}}_k$
$\mathbf{g}_i = \bar{\mathbf{A}} \cdot \bar{\mathbf{g}}_i$ with $\bar{\mathbf{A}} = \left(\bar{\mathbf{g}}^k \cdot \mathbf{g}_m\right) \bar{\mathbf{g}}_k \otimes \bar{\mathbf{g}}^m = \bar{A}^k{}_m\, \bar{\mathbf{g}}_k \otimes \bar{\mathbf{g}}^m$

$\mathbf{g}_i = \left(\bar{\mathbf{g}}_k \cdot \mathbf{g}_i\right) \bar{\mathbf{g}}^k = \bar{A}_{ki}\, \bar{\mathbf{g}}^k$
$\mathbf{g}_i = \bar{\mathbf{A}} \cdot \bar{\mathbf{g}}_i$ with $\bar{\mathbf{A}} = \left(\bar{\mathbf{g}}_m \cdot \mathbf{g}_k\right) \bar{\mathbf{g}}^m \otimes \bar{\mathbf{g}}^k = \bar{A}_{mk}\, \bar{\mathbf{g}}^m \otimes \bar{\mathbf{g}}^k$

$\mathbf{g}^i = \mathbf{1} \cdot \mathbf{g}^i = \left(\bar{\mathbf{g}}^k \otimes \bar{\mathbf{g}}_k\right) \cdot \mathbf{g}^i = \left(\bar{\mathbf{g}}_k \cdot \mathbf{g}^i\right) \bar{\mathbf{g}}^k = \bar{B}^i{}_k\, \bar{\mathbf{g}}^k$
$\mathbf{g}^i = \bar{\mathbf{B}} \cdot \bar{\mathbf{g}}^i$ with $\bar{\mathbf{B}} = \left(\bar{\mathbf{g}}_k \cdot \mathbf{g}^m\right) \bar{\mathbf{g}}^k \otimes \bar{\mathbf{g}}_m = \bar{B}^m{}_k\, \bar{\mathbf{g}}^k \otimes \bar{\mathbf{g}}_m$

$\mathbf{g}^i = \left(\bar{\mathbf{g}}^k \cdot \mathbf{g}^i\right) \bar{\mathbf{g}}_k = \bar{B}^{ki}\, \bar{\mathbf{g}}_k$
$\mathbf{g}^i = \bar{\mathbf{B}} \cdot \bar{\mathbf{g}}^i$ with $\bar{\mathbf{B}} = \left(\bar{\mathbf{g}}^k \cdot \mathbf{g}^m\right) \bar{\mathbf{g}}_k \otimes \bar{\mathbf{g}}_m = \bar{B}^{km}\, \bar{\mathbf{g}}_k \otimes \bar{\mathbf{g}}_m$

The following relations hold between the transformation tensors

$\mathbf{A} \cdot \bar{\mathbf{A}} = \mathbf{1}$ or $\bar{\mathbf{A}} \cdot \mathbf{A} = \mathbf{1} \, , \quad A^m{}_i\, \bar{A}^k{}_m = \delta_i^k \quad \text{etc.,} \quad \bar{A}^m{}_i\, A^k{}_m = \delta_i^k \quad \text{etc.}$

$\mathbf{B} \cdot \bar{\mathbf{B}} = \mathbf{1}$ or $\bar{\mathbf{B}} \cdot \mathbf{B} = \mathbf{1} \, , \quad B^i{}_m\, \bar{B}^m{}_k = \delta_k^i \quad \text{etc.,} \quad \bar{B}^i{}_m\, B^m{}_k = \delta_k^i \quad \text{etc.}$

Furthermore

$A^m{}_i\, B^k{}_m = \delta_i^k \quad ; \quad \bar{B}^k{}_m = A^k{}_m \quad ; \quad B^k{}_m = \bar{A}^k{}_m$

A.1.14 Transformation of Vector Components

$\mathbf{v} = v^i\, \mathbf{g}_i = v_i\, \mathbf{g}^i = \bar{v}^i\, \bar{\mathbf{g}}_i = \bar{v}_i\, \bar{\mathbf{g}}^i$

The components of a vector transform with the following rules of transformation

$\bar{v}_i = A^k{}_i\, v_k = A_{ki}\, v^k \quad ; \quad \bar{v}^i = B^i{}_k\, v^k = B^{ki}\, v_k$

$v_i = \bar{A}^k{}_i\, \bar{v}_k = \bar{A}_{ki}\, \bar{v}^k \quad ; \quad v^i = \bar{B}^i{}_k\, \bar{v}^k = \bar{B}^{ki}\, \bar{v}_k$

i.e. under a change of coordinate system the coefficients of the vector components transform like the base vectors themselves.
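A numerical sketch of these transformation rules (assuming NumPy; both bases are arbitrary illustrative choices). The matrices `Amat` and `Bmat` collect the coefficients $A^k{}_i = \mathbf{g}^k \cdot \bar{\mathbf{g}}_i$ and $B^i{}_k = \mathbf{g}_k \cdot \bar{\mathbf{g}}^i$:

```python
import numpy as np

g_lo  = np.array([[1.0, 0.0, 0.0],
                  [1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])   # old covariant basis (rows g_i)
gb_lo = np.array([[2.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 0.0, 3.0]])   # new covariant basis (rows gbar_i)
g_up, gb_up = np.linalg.inv(g_lo).T, np.linalg.inv(gb_lo).T

v = np.array([0.7, -1.2, 2.5])        # one fixed vector in space
v_up,  v_lo  = g_up  @ v, g_lo  @ v   # old components v^i and v_i
vb_up, vb_lo = gb_up @ v, gb_lo @ v   # new components

Amat = gb_lo @ g_up.T                 # Amat[i, k] = gbar_i . g^k  (= A^k_i)
Bmat = gb_up @ g_lo.T                 # Bmat[i, k] = gbar^i . g_k  (= B^i_k)

assert np.allclose(vb_lo, Amat @ v_lo)        # vbar_i = A^k_i v_k
assert np.allclose(vb_up, Bmat @ v_up)        # vbar^i = B^i_k v^k
assert np.allclose(Amat @ Bmat.T, np.eye(3))  # A^m_i B^k_m = delta_i^k
print("component transformations verified")
```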

A.1.15 Transformation Rules for Tensors

$\mathbf{T} = T^{ik}\, \mathbf{g}_i \otimes \mathbf{g}_k = \bar{T}^{ik}\, \bar{\mathbf{g}}_i \otimes \bar{\mathbf{g}}_k = T_{ik}\, \mathbf{g}^i \otimes \mathbf{g}^k = \bar{T}_{ik}\, \bar{\mathbf{g}}^i \otimes \bar{\mathbf{g}}^k$
$= T_i{}^k\, \mathbf{g}^i \otimes \mathbf{g}_k = \bar{T}_i{}^k\, \bar{\mathbf{g}}^i \otimes \bar{\mathbf{g}}_k = T^i{}_k\, \mathbf{g}_i \otimes \mathbf{g}^k = \bar{T}^i{}_k\, \bar{\mathbf{g}}_i \otimes \bar{\mathbf{g}}^k$

The transformation relations between the base vectors imply

$\bar{T}_{ik} = A^m{}_i\, A^n{}_k\, T_{mn} \quad ; \quad \bar{T}_i{}^k = A^m{}_i\, B^k{}_n\, T_m{}^n \quad ; \quad \bar{T}^{ik} = B^i{}_m\, B^k{}_n\, T^{mn} \quad ; \quad \text{etc.,}$

i.e. the coefficients of the tensor components transform like the tensor basis, and the tensor basis transforms like the base vectors.


A.1.16 Eigenvalues of a Tensor in Euclidean Space

eigenvalue problem

$(\mathbf{T} - \lambda \mathbf{1}) \cdot \mathbf{x} = \mathbf{0} \quad ; \quad \left(T^i{}_k - \lambda \delta_k^i\right) x^k = 0$

condition for non-trivial solutions

$\det(\mathbf{T} - \lambda \mathbf{1}) \overset{!}{=} 0 \quad ; \quad \det\left(T^i{}_k - \lambda \delta_k^i\right) = 0$

characteristic polynomial

$f(\lambda) = I_3 - \lambda I_2 + \lambda^2 I_1 - \lambda^3 = 0$

If $\mathbf{T} = \mathbf{T}^T$ and $T^i{}_k \in \mathbb{R}$, then the eigenvalues are real and the eigenvectors are orthogonal.

invariants of a tensor

$I_1 = \lambda_1 + \lambda_2 + \lambda_3 = \operatorname{tr} \mathbf{T} = T^i{}_i$

$I_2 = \lambda_1 \lambda_2 + \lambda_2 \lambda_3 + \lambda_3 \lambda_1 = \frac{1}{2}\left[(\operatorname{tr} \mathbf{T})^2 - \operatorname{tr} \mathbf{T}^2\right] = \frac{1}{2}\left[T^i{}_i\, T^k{}_k - T^i{}_k\, T^k{}_i\right]$

$I_3 = \lambda_1 \lambda_2 \lambda_3 = \det \mathbf{T} = \det\left(T^i{}_k\right)$


A.2 Formulary Tensor Analysis

A.2.1 Derivatives of Vectors and Tensors

scalars, vectors and tensors as functions of the position vector

$\alpha = \alpha(\mathbf{x}) \quad ; \quad \mathbf{v} = \mathbf{v}(\mathbf{x}) \quad ; \quad \mathbf{T} = \mathbf{T}(\mathbf{x})$

A vector field $\mathbf{v} = \mathbf{v}(\mathbf{x})$ is differentiable in $\mathbf{x}$ if a linear mapping $\mathbf{L}(\mathbf{x})$ exists such that

$\mathbf{v}(\mathbf{x} + \mathbf{y}) = \mathbf{v}(\mathbf{x}) + \mathbf{L}(\mathbf{x}) \cdot \mathbf{y} + O\left(|\mathbf{y}|^2\right)$ as $|\mathbf{y}| \rightarrow 0$.

The mapping $\mathbf{L}(\mathbf{x})$ is called the gradient or Fréchet derivative $\mathbf{v}'(\mathbf{x})$, also written with the operator

$\mathbf{L}(\mathbf{x}) = \operatorname{grad} \mathbf{v}(\mathbf{x})$.

Analogously, for a scalar-valued vector function $\alpha(\mathbf{x})$

$\alpha(\mathbf{x} + \mathbf{y}) = \alpha(\mathbf{x}) + \operatorname{grad} \alpha(\mathbf{x}) \cdot \mathbf{y} + O\left(|\mathbf{y}|^2\right)$

rules

$\operatorname{grad}(\alpha \beta) = \alpha \operatorname{grad} \beta + \beta \operatorname{grad} \alpha$
$\operatorname{grad}(\mathbf{v} \cdot \mathbf{w}) = (\operatorname{grad} \mathbf{v})^T \cdot \mathbf{w} + (\operatorname{grad} \mathbf{w})^T \cdot \mathbf{v}$
$\operatorname{grad}(\alpha \mathbf{v}) = \mathbf{v} \otimes \operatorname{grad} \alpha + \alpha \operatorname{grad} \mathbf{v}$
$\operatorname{grad}(\mathbf{v} \otimes \mathbf{w}) = [(\operatorname{grad} \mathbf{v}) \otimes \mathbf{w}] \cdot \operatorname{grad} \mathbf{w}$

The gradient of a scalar-valued vector function is a vector-valued vector function; analogously, the gradient of a vector-valued vector function is a tensor-valued vector function.

divergence of a vector

$\operatorname{div} \mathbf{v} = \operatorname{tr}(\operatorname{grad} \mathbf{v}) = \operatorname{grad} \mathbf{v} : \mathbf{1}$

divergence of a tensor

$\boldsymbol{\alpha} \cdot \operatorname{div} \mathbf{T} = \operatorname{div}\left(\mathbf{T}^T \cdot \boldsymbol{\alpha}\right) = \operatorname{grad}\left(\mathbf{T}^T \cdot \boldsymbol{\alpha}\right) : \mathbf{1}$

rules

$\operatorname{div}(\alpha \mathbf{v}) = \mathbf{v} \cdot \operatorname{grad} \alpha + \alpha \operatorname{div} \mathbf{v}$
$\operatorname{div}(\mathbf{T} \cdot \mathbf{v}) = \mathbf{v} \cdot \left(\operatorname{div} \mathbf{T}^T\right) + \mathbf{T}^T : \operatorname{grad} \mathbf{v}$
$\operatorname{div}(\operatorname{grad} \mathbf{v})^T = \operatorname{grad}(\operatorname{div} \mathbf{v})$
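The last rule can be verified symbolically in Cartesian coordinates. The sketch below (assuming SymPy is available; the vector field is an arbitrary smooth example) checks $\operatorname{div}(\operatorname{grad}\mathbf{v})^T = \operatorname{grad}(\operatorname{div}\mathbf{v})$, which in components is just the symmetry of second partial derivatives:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = [x, y, z]
v = sp.Matrix([x**2 * sp.sin(y), y * z**3, sp.exp(x) * z])

grad_v = v.jacobian(X)                       # (grad v)_ij = dv_i / dx_j
div_v = sum(grad_v[i, i] for i in range(3))

# div (grad v)^T : row-wise divergence of the transposed gradient
lhs = sp.Matrix([sum(sp.diff(grad_v.T[i, j], X[j]) for j in range(3))
                 for i in range(3)])
# grad (div v)
rhs = sp.Matrix([sp.diff(div_v, X[i]) for i in range(3)])

assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
print("div (grad v)^T = grad (div v) verified")
```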

A.2.2 Derivatives of Base Vectors

Christoffel tensors

$\boldsymbol{\Gamma}_{(k)} := \operatorname{grad}(\mathbf{g}_k) \quad ; \quad \boldsymbol{\Gamma}_{(k)} = \Gamma^i{}_{km}\, \mathbf{g}_i \otimes \mathbf{g}^m$

components

$\Gamma^i{}_{km} = \Gamma^i{}_{(k)m} = \mathbf{g}^i \cdot \boldsymbol{\Gamma}_{(k)} \cdot \mathbf{g}_m$

In Cartesian orthogonal coordinate systems the Christoffel tensors vanish.


A.2.3 Derivatives of Base Vectors in Components Notation

$\dfrac{\partial (\cdots)_k}{\partial \theta^i} = (\cdots)_{k,i} \quad \text{with} \quad \boldsymbol{\Gamma}_{(k)} = \operatorname{grad}(\mathbf{g}_k) = \mathbf{g}_{k,i} \otimes \mathbf{g}^i$

$\mathbf{g}_{i,k} = \boldsymbol{\Gamma}_{(i)} \cdot \mathbf{g}_k = \left(\Gamma^s{}_{il}\, \mathbf{g}_s \otimes \mathbf{g}^l\right) \cdot \mathbf{g}_k = \Gamma^s{}_{ik}\, \mathbf{g}_s$

$\mathbf{g}_{i,k} \cdot \mathbf{g}^s = \Gamma^s{}_{ik} \quad \text{etc.} \quad \mathbf{g}^i{}_{,k} = -\Gamma^i{}_{sk}\, \mathbf{g}^s$

$\Gamma_{ikl} = g_{ls}\, \Gamma^s{}_{ik} \quad \text{etc.} \quad \Gamma_{ikl} = \frac{1}{2}\left(g_{kl,i} + g_{il,k} - g_{ik,l}\right)$

$\mathbf{e}_{i,k} = \mathbf{0}$
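As a worked example, the formula $\Gamma_{ikl} = \tfrac{1}{2}(g_{kl,i} + g_{il,k} - g_{ik,l})$ together with $\Gamma^s{}_{ik} = g^{sl}\,\Gamma_{ikl}$ reproduces the well-known Christoffel symbols of plane polar coordinates $(\theta^1, \theta^2) = (r, \varphi)$ with metric $g = \operatorname{diag}(1, r^2)$ (assuming SymPy; indices 0, 1 stand for $r, \varphi$):

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
theta = [r, phi]
g = sp.Matrix([[1, 0], [0, r**2]])     # covariant metric g_ik
g_inv = g.inv()

def Gamma1(i, k, l):                    # first kind: Gamma_ikl
    return sp.Rational(1, 2) * (sp.diff(g[k, l], theta[i])
                                + sp.diff(g[i, l], theta[k])
                                - sp.diff(g[i, k], theta[l]))

def Gamma2(s, i, k):                    # second kind: Gamma^s_ik = g^sl Gamma_ikl
    return sp.simplify(sum(g_inv[s, l] * Gamma1(i, k, l) for l in range(2)))

assert Gamma2(0, 1, 1) == -r            # Gamma^r_phiphi = -r
assert Gamma2(1, 0, 1) == 1 / r         # Gamma^phi_rphi = 1/r
assert Gamma2(0, 0, 0) == 0
print("polar-coordinate Christoffel symbols verified")
```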

A.2.4 Components Notation of Vector Derivatives

$\operatorname{grad} \mathbf{v}(\mathbf{x}) = \dfrac{\partial \mathbf{v}}{\partial \theta^k} \otimes \mathbf{g}^k = \dfrac{\partial \left(v^i \mathbf{g}_i\right)}{\partial \theta^k} \otimes \mathbf{g}^k = \dfrac{\partial v^i}{\partial \theta^k}\, \mathbf{g}_i \otimes \mathbf{g}^k + v^i\, \underbrace{\dfrac{\partial \mathbf{g}_i}{\partial \theta^k} \otimes \mathbf{g}^k}_{\boldsymbol{\Gamma}_{(i)}}$

$= v^i{}_{,k}\, \mathbf{g}_i \otimes \mathbf{g}^k + v^i\, \boldsymbol{\Gamma}_{(i)} = v^i{}_{,k}\, \mathbf{g}_i \otimes \mathbf{g}^k + v^i\, \mathbf{g}_{i,k} \otimes \mathbf{g}^k$

$\operatorname{grad} \mathbf{v}(\mathbf{x}) = \left(v^i{}_{,k} + v^s\, \Gamma^i{}_{sk}\right) \mathbf{g}_i \otimes \mathbf{g}^k = v^i|_k\, \mathbf{g}_i \otimes \mathbf{g}^k$

$\operatorname{div} \mathbf{v}(\mathbf{x}) = \operatorname{tr}(\operatorname{grad} \mathbf{v}) = v^i{}_{,i} + v^s\, \Gamma^i{}_{si} = v^i|_i$

A.2.5 Components Notation of Tensor Derivatives

$\operatorname{div} \mathbf{T}(\mathbf{x}) = \dfrac{\partial \mathbf{T}}{\partial \theta^k} \cdot \mathbf{g}^k = \dfrac{\partial \left(T^{ij}\, \mathbf{g}_i \otimes \mathbf{g}_j\right)}{\partial \theta^k} \cdot \mathbf{g}^k$

$= T^{ij}{}_{,k} \left(\mathbf{g}_i \otimes \mathbf{g}_j\right) \cdot \mathbf{g}^k + T^{ij} \left(\dfrac{\partial \mathbf{g}_i}{\partial \theta^k} \otimes \mathbf{g}_j\right) \cdot \mathbf{g}^k + T^{ij} \left(\mathbf{g}_i \otimes \dfrac{\partial \mathbf{g}_j}{\partial \theta^k}\right) \cdot \mathbf{g}^k$

$= T^{ik}{}_{,k}\, \mathbf{g}_i + T^{ik}\, \Gamma^s{}_{ik}\, \mathbf{g}_s + T^{ij} \left(\mathbf{g}_i \otimes \Gamma^s{}_{jk}\, \mathbf{g}_s\right) \cdot \mathbf{g}^k$

$= \left(T^{ik}{}_{,k} + T^{mk}\, \Gamma^i{}_{mk} + T^{ij}\, \Gamma^k{}_{jk}\right) \mathbf{g}_i = T^{ik}|_k\, \mathbf{g}_i$

$= \left(T_i{}^k{}_{,k} - T_m{}^k\, \Gamma^m{}_{ik} + T_i{}^m\, \Gamma^k{}_{km}\right) \mathbf{g}^i = T_i{}^k|_k\, \mathbf{g}^i$


A.2.6 Integral Theorems, Divergence Theorems

$\displaystyle\int_A \mathbf{u} \cdot \mathbf{n} \, \mathrm{d}a = \int_V \operatorname{div} \mathbf{u} \, \mathrm{d}V \quad ; \quad \int_A u^i n_i \, \mathrm{d}a = \int_V u^i|_i \, \mathrm{d}V$

$\displaystyle\int_A \mathbf{T} \cdot \mathbf{n} \, \mathrm{d}a = \int_V \operatorname{div} \mathbf{T} \, \mathrm{d}V \quad ; \quad \int_A T_i{}^k n_k\, \mathbf{g}^i \, \mathrm{d}a = \int_V T_i{}^k|_k\, \mathbf{g}^i \, \mathrm{d}V$

with $\mathbf{n}$ the outward normal vector of the surface element, $A$ the surface, and $V$ the volume.
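The divergence theorem can be checked numerically on a simple domain. The sketch below (assuming NumPy; the field $\mathbf{u} = (xy,\, yz,\, zx)$ with $\operatorname{div}\mathbf{u} = x + y + z$ is an arbitrary illustrative choice) compares a midpoint-rule volume integral over the unit cube with the flux through its six faces:

```python
import numpy as np

n = 64
h = 1.0 / n
c = (np.arange(n) + 0.5) * h                 # midpoints of n cells in [0, 1]

# volume integral of div u = x + y + z
X, Y, Z = np.meshgrid(c, c, c, indexing='ij')
vol = np.sum(X + Y + Z) * h**3

# surface integral of u . n; on the faces x=0, y=0, z=0 the integrand vanishes
A, B = np.meshgrid(c, c, indexing='ij')      # in-face coordinates
flux = 0.0
flux += np.sum(A) * h**2                     # face x=1: u . n = u1 = 1*y
flux += np.sum(B) * h**2                     # face y=1: u . n = u2 = 1*z
flux += np.sum(A) * h**2                     # face z=1: u . n = u3 = 1*x

assert np.isclose(vol, flux)
assert np.isclose(vol, 1.5)                  # exact value of both integrals
print("divergence theorem verified:", vol)
```

For this linear integrand the midpoint rule is exact, so both sides agree to machine precision.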


Appendix B

Nomenclature

Notation - Description
α, β, γ, ... - scalar quantities in R
a, b, c, ... - column matrices or vectors in R^n
a^T, b^T, c^T, ... - row matrices or vectors in R^n
A, B, C, ... - matrices in R^n ⊗ R^n
a, b, c, ... - vectors or first order tensors in E^n
A, B, C, ... - second order tensors in E^n ⊗ E^n
³A, ³B, ³C, ... - third order tensors in E^n ⊗ E^n ⊗ E^n
A, B, C, ... - fourth order tensors in E^n ⊗ E^n ⊗ E^n ⊗ E^n

Notation - Description
tr - the trace operator of a tensor or a matrix
det - the determinant operator of a tensor or a matrix
sym - the symmetric part of a tensor or a matrix
skew - the antisymmetric or skew part of a tensor or a matrix
dev - the deviator part of a tensor or a matrix
grad = ∇ - the gradient operator
div - the divergence operator
rot - the rotation operator
∆ - the Laplacian or the Laplace operator


Notation - Description
R - the set of the real numbers
R³ - the set of real-valued triples
E³ - the 3-dimensional Euclidean vector space
E³ ⊗ E³ - the space of second order tensors over the Euclidean vector space
e1, e2, e3 - 3-dimensional Cartesian basis
g1, g2, g3 - 3-dimensional arbitrary covariant basis
g¹, g², g³ - 3-dimensional arbitrary contravariant basis
g_ij - covariant metric coefficients
g^ij - contravariant metric coefficients
g = g_ij g^i ⊗ g^j - metric tensor


Bibliography

[1] Ralph Abraham, Jerrold E. Marsden, and Tudor Ratiu. Manifolds, Tensor Analysis and Applications. Applied Mathematical Sciences. Springer-Verlag, Berlin, Heidelberg, New York, second edition, 1988.

[2] Albrecht Beutelspacher. Lineare Algebra. Vieweg Verlag, Braunschweig, Wiesbaden, 1998.

[3] Reint de Boer. Vektor- und Tensorrechnung für Ingenieure. Springer-Verlag, Berlin, Heidelberg, New York, 1982.

[4] Gerd Fischer. Lineare Algebra. Vieweg Verlag, Braunschweig, Wiesbaden, 1997.

[5] Jimmie Gilbert and Linda Gilbert. Linear Algebra and Matrix Theory. Academic Press, San Diego, 1995.

[6] Paul R. Halmos. Finite-Dimensional Vector Spaces. Undergraduate Texts in Mathematics. Springer-Verlag, Berlin, Heidelberg, New York, 1974.

[7] Hans Karl Iben. Tensorrechnung. Mathematik für Ingenieure und Naturwissenschaftler. Teubner-Verlag, Stuttgart, Leipzig, 1999.

[8] Klaus Jänich. Lineare Algebra. Springer-Verlag, Berlin, Heidelberg, New York, 1998.

[9] Wilhelm Klingenberg. Lineare Algebra und Geometrie. Springer-Verlag, Berlin, Heidelberg, New York, second edition, 1992.

[10] Allan D. Kraus. Matrices for Engineers. Springer-Verlag, Berlin, Heidelberg, New York, 1987.

[11] Paul C. Matthews. Vector Calculus. Undergraduate Mathematics Series. Springer-Verlag, Berlin, Heidelberg, New York, 1998.

[12] James G. Simmonds. A Brief on Tensor Analysis. Undergraduate Texts in Mathematics. Springer-Verlag, Berlin, Heidelberg, New York, second edition, 1994.

[13] Erwin Stein. Unterlagen zur Vorlesung Mathematik V für konstr. Ingenieure – Matrizen- und Tensorrechnung SS 94. Institut für Baumechanik und Numerische Mechanik, Universität Hannover, 1994.


[14] Rudolf Zurmühl. Matrizen und ihre technischen Anwendungen. Springer-Verlag, Berlin, Heidelberg, New York, fourth edition, 1964.


Glossary English – German

L1-norm - Integralnorm, L1-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

L2-norm - L2-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

l1-norm - Summennorm, l1-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

l2-norm, Euclidean norm - l2-Norm, euklidische Norm . . . . . . . . . . 19

n-tuple - n-Tupel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

p-norm - Maximumsnorm, p-Norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

absolute norm - Gesamtnorm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

absolute value - Betrag . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

absolute value of a tensor - Betrag eines Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . 111, 113

additive - additiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

additive identity - additionsneutrales Element . . . . . . . . . . . . . . . . . . . . . . . 10, 42

additive inverse - inverses Element der Addition . . . . . . . . . . . . . . . . . . . . 10, 42

affine vector - affiner Vektor . . . . . . . . . . 28

affine vector space - affiner Vektorraum . . . . . . . . . . 28, 54

antisymmetric - schiefsymmetrisch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

antisymmetric matrix - schiefsymmetrische Matrix . . . . . . . . . . 41

antisymmetric part - antisymmetrischer Anteil . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

antisymmetric part of a tensor - antisymmetrischer Anteil eines Tensors . . . . . . . . . . . . . . 115

area vector - Flächenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

associative - assoziativ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

associative rule - Assoziativgesetz . . . . . . . . . . 10

associative under matrix addition - assoziativ bzgl. Matrizenaddition . . . . . . . . . . 42

base vectors - Basisvektoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 87

basis - Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31


basis of the vector space - Basis eines Vektorraums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

bijective - bijektiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

bilinear - bilinear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

bilinear form - Bilinearform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

binormal unit - Binormaleneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . 137

binormal unit vector - Binormaleneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Cartesian base vectors - kartesische Basisvektoren . . . . . . . . . . . . . . . . . . . . . . . . . . . .88

Cartesian basis - kartesische Basis . . . . . . . . . . 149

Cartesian components of a permutation tensor - kartesische Komponenten des Permutationstensors . . . . . . . . . . 87

Cartesian coordinates - kartesische Koordinaten . . . . . . . . . . . . . . . . . . . . . 78, 82, 144

Cauchy stress tensor - Cauchy-Spannungstensor . . . . . . . . . . . . . . . . . . . . . . . 96, 120

Cauchy’s inequality - Schwarzsche oder Cauchy-Schwarzsche Ungleichung . . 21

Cayley-Hamilton Theorem - Cayley-Hamilton Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . .71

characteristic equation - charakteristische Gleichung . . . . . . . . . . . . . . . . . . . . . . . . . . 65

characteristic matrix - charakteristische Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

characteristic polynomial - charakteristisches Polynom . . . . . . . . . . . . . . . . . . 56, 65, 123

Christoffel symbol - Christoffel-Symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

cofactor - Kofaktor, algebraisches Komplement . . . . . . . . . . . . . . . . . 51

column - Spalte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

column index - Spaltenindex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

column matrix - Spaltenmatrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28, 40, 46

column vector - Spaltenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 40, 46, 59

combination - Kombination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 34, 54

commutative - kommutativ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

commutative matrix - kommutative Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

commutative rule - Kommutativgesetz . . . . . . . . . . 10

commutative under matrix addition - kommutativ bzgl. Matrizenaddition . . . . . . . . . . 42

compatibility of vector and matrix norms - Verträglichkeit von Vektor- und Matrix-Norm . . . . . . . . . . 22

compatible - verträglich . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

complete fourth order tensor - vollständiger Tensor vierter Stufe . . . . . . . . . . . . . . . . . . . .129

complete second order tensor - vollständiger Tensor zweiter Stufe . . . . . . . . . . 99

complete third order tensor - vollständiger Tensor dritter Stufe . . . . . . . . . . . . . . . . . . . . 129


complex conjugate eigenvalues - konjugiert komplexe Eigenwerte . . . . . . . . . . . . . . . . . . . . 124

complex numbers - komplexe Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

components - Komponenten . . . . . . . . . . 31, 65

components of the Christoffel symbol - Komponenten des Christoffel-Symbols . . . . . . . . . . 142

components of the permutation tensor - Komponenten des Permutationstensors . . . . . . . . . . 87

composition - Komposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34, 54, 106

congruence transformation - Kongruenztransformation, kontragrediente Transformation . . . . . . . . . . 56, 63

congruent - kongruent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56, 63

continuum - Kontinuum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

contravariant ε symbol - kontravariantes ε-Symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

contravariant base vectors - kontravariante Basisvektoren . . . . . . . . . . 81, 139

contravariant base vectors of the natural basis - kontravariante Basisvektoren der natürlichen Basis . . . . . . . . . . 141

contravariant coordinates - kontravariante Koordinaten, Koeffizienten . . . . . . . . . . 80, 84

contravariant metric coefficients - kontravariante Metrikkoeffizienten . . . . . . . . . . 82, 83

coordinates - Koordinaten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

covariant ε symbol - kovariantes ε-Symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

covariant base vectors - kovariante Basisvektoren . . . . . . . . . . 80, 138

covariant base vectors of the natural basis - kovariante Basisvektoren der natürlichen Basis . . . . . . . . . . 140

covariant coordinates - kovariante Koordinaten, Koeffizienten . . . . . . . . . . 81, 84

covariant derivative - kovariante Ableitung . . . . . . . . . . 146, 149

covariant metric coefficients - kovariante Metrikkoeffizienten . . . . . . . . . . 80, 83

cross product - Kreuzprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87, 90, 96

curl - Rotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

curvature - Krümmung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

curvature of a curve - Krümmung einer Kurve . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

curved surface - Raumfläche, gekrümmte Oberfläche . . . . . . . . . . 138

curvilinear coordinate system - krummliniges Koordinatensystem . . . . . . . . . . . . . . . . . . . 139

curvilinear coordinates - krummlinige Koordinaten . . . . . . . . . . . . . . . . . . . . . . 139, 144

definite metric - definite Metrik . . . . . . . . . . 16

definite norm - definite Norm . . . . . . . . . . 18


deformation energy - Formänderungsenergie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

deformation gradient - Deformationsgradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

derivative of a scalar - Ableitung einer skalaren Größe . . . . . . . . . . . . . . . . . . . . . 133

derivative of a tensor - Ableitung eines Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

derivative of a vector - Ableitung eines Vektors . . . . . . . . . . 133

derivative w.r.t. a scalar variable - Ableitung nach einer skalaren Größe . . . . . . . . . . 133

derivatives - Ableitungen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

derivatives of base vectors - Ableitungen von Basisvektoren . . . . . . . . . . . . . . . . . 141, 145

determinant - Determinante . . . . . . . . . . 50, 65, 89

determinant expansion by minors - Determinantenentwicklungssatz mit Unterdeterminanten . . . . . . . . . . 51

determinant of a tensor - Determinante eines Tensors . . . . . . . . . . 112

determinant of the contravariant metric coefficients - Determinante der kontravarianten Metrikkoeffizienten . . . . . . . . . . 83

determinant of the covariant metric coefficients - Determinante der kovarianten Metrikkoeffizienten . . . . . . . . . . 83

determinant of the Jacobian matrix - Determinante der Jacobimatrix . . . . . . . . . . 140

deviator matrix - Deviatormatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

deviator part of a tensor - Deviator eines Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

diagonal matrix - Diagonalmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 43

differential element of area - differentielles Flächenelement . . . . . . . . . . . . . . . . . . . . . . 139

dimension - Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13, 14

direct method - direkte Methode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

direct product - direktes Produkt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

directions of principal stress - Hauptspannungsrichtungen . . . . . . . . . . . . . . . . . . . . . . . . . 120

discrete metric - diskrete Metrik . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

distance - Abstand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

distributive - distributiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

distributive law - Distributivgesetz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

distributive w.r.t. addition - Distributivgesetz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

divergence of a tensor field - Divergenz eines Tensorfeldes . . . . . . . . . . 147

divergence of a vector field - Divergenz eines Vektorfeldes . . . . . . . . . . 147

divergence theorem - Divergenztheorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156

domain - Definitionsbereich . . . . . . . . . . 8


dot product - Punktprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

dual space - Dualraum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .36, 97

dual vector space - dualer Vektoraum, Dualraum. . . . . . . . . . . . . . . . . . . . . . . . . 36

dummy index - stummer Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

dyadic product - dyadisches Produkt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94–96

eigenvalue - Eigenwert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65, 120

eigenvalue problem - Eigenwertproblem . . . . . . . . . . . . . . . . . . . . . . 22, 65, 122, 123

eigenvalues - Eigenwerte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 56

eigenvector - Eigenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .65, 120

eigenvector matrix - Eigenvektormatrix, Modalmatrix . . . . . . . . . . . . . . . . . . . . . 70

elastic - elastisch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

elasticity tensor - Elastizitätstensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

elasticity theory - Elastizitätstheorie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

elements - Elemente . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

empty set - leere Menge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

equilibrium conditions - Gleichgewichtsbedingungen . . . . . . . . . . 96

equilibrium condition of moments - Momentengleichgewichtsbedingung . . . . . . . . . . 120

equilibrium system of external forces - Gleichgewicht der äußeren Kräfte . . . . . . . . . . 96

equilibrium system of forces - Kräftegleichgewicht . . . . . . . . . . 96

Euclidean norm - euklidische Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 30, 85

Euclidean space - Euklidischer Raum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

Euclidean vector - euklidische Vektoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Euclidean vector space - euklidischer Vektorraum . . . . . . . . . . . . . . . . . . . . . 26, 29, 143

Euclidean matrix norm - euklidische Matrixnorm . . . . . . . . . . 60

even permutation - gerade Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

exact differential - vollständiges Differential . . . . . . . . . . . . . . . . . . . . . . 133, 135

field - Feld . . . . . . . . . . 143

field - Körper . . . . . . . . . . 10

finite - endlich . . . . . . . . . . 13

finite element method - Finite-Element-Methode . . . . . . . . . . 57

first order tensor - Tensor erster Stufe . . . . . . . . . . 127

fourth order tensor - Tensor vierter Stufe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129


Fréchet derivative - Fréchet-Ableitung . . . . . . . . . . 143

free indices - freier Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

function - Funktion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

fundamental tensor - Fundamentaltensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .150

Gauss transformation - Gaußsche Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

Gauss’s theorem - Gauss’scher Integralsatz . . . . . . . . . . . . . . . . . . . . . . . 155, 158

general eigenvalue problem - allgemeines Eigenwertproblem . . . . . . . . . . . . . . . . . . . . . . . 69

general permutation symbol - allgemeines Permutationssymbol . . . . . . . . . . . . . . . . . . . . . 92

gradient - Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

gradient of a vector of position - Gradient eines Ortsvektors . . . . . . . . . . . . . . . . . . . . . . . . . . 144

higher order tensor - Tensor höherer Stufe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

homeomorphic - homöomorph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30, 97

homeomorphism - Homöomorphismus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

homogeneous - homogen . . . . . . . . . . 32

homogeneous linear equation system - homogenes lineares Gleichungssystem . . . . . . . . . . 65

homogeneous norm - homogene Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

homomorphism - Homomorphismus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

Hooke’s law - Hookesche Gesetz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Hölder sum inequality - Höldersche Ungleichung . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

identities for scalar products of tensors - Rechenregeln für Skalarprodukte von Tensoren . . . . . . . . . . 110

identities for tensor products - Rechenregeln für Tensorprodukte . . . . . . . . . . . . . . . . . . . 106

identity element w.r.t. addition - neutrales Element der Addition . . . . . . . . . . 10

identity element w.r.t. scalar multiplication - neutrales Element der Multiplikation . . . . . . . . . . 10

identity matrix - Einheitsmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 45

identity matrix - Einheitsmatrix, Identität . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

identity tensor - Einheitstensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

image set - Bildbereich . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

infinitesimal - infinitesimal . . . . . . . . . . 96

infinitesimal tetrahedron - infinitesimaler Tetraeder . . . . . . . . . . 96

injective - injektiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

inner product space - innerer Produktraum . . . . . . . . . . 29


inner product - inneres Produkt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 85

inner product of tensors - inneres Produkt von Tensoren . . . . . . . . . . . . . . . . . . . . . . . 110

inner product space - innerer Produktraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26, 27

integers - ganze Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

integral theorem - Integralsatz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156

intersection - Schnittmenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

invariance - Invarianz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

invariant - Invariante . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120, 123, 148

invariant - invariant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

inverse - Inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8, 34

inverse of a matrix - inverse Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

inverse of a tensor - inverser Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

inverse relation - inverse Beziehung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

inverse transformation - inverse Transformation . . . . . . . . . . . . . . . . . . . . . . . . 101, 103

inverse w.r.t. addition - inverses Element der Addition . . . . . . . . . . . . . . . . . . . . . . . 10

inverse w.r.t. multiplication - inverses Element der Multiplikation . . . . . . . . . . . . . . . . . . 10

inversion - Umkehrung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

invertible - invertierbar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

isomorphic - isomorph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29, 35

isomorphism - Isomorphismus . . . . . . . . . . 35

isotropic - isotrop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

isotropic tensor - isotroper Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

iterative process - Iterationsvorschrift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

Jacobian - Jacobi-Determinante . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140

Kronecker delta - Kronecker-Delta . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52, 79, 119

l-infinity-norm, maximum-norm - Maximumnorm, ∞-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

Laplace operator - Laplace-Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

laplacian of a scalar field - Laplace-Operator eines Skalarfeldes . . . . . . . . . . . . . . . . . 150

laplacian of a tensor field - Laplace-Operator eines Tensorfeldes . . . . . . . . . . . . . . . . . 151

laplacian of a vector field - Laplace-Operator eines Vektorfeldes . . . . . . . . . . . . . . . . . 150

left-hand Cauchy strain tensor - linker Cauchy-Strecktensor . . . . . . . . . . . . . . . . . . . . . . . . . 117

line element - Linienelement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

TU Braunschweig, CSE – Vector and Tensor Calculus – 22nd October 2003

Glossary English – German

linear - linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

linear algebra - lineare Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

linear combination - Linearkombination . . . . . . . . . . . . . . . . . . . . . . . . . . . 15, 49, 71

linear dependence - lineare Abhängigkeit . . . . . . . . . . . . . . . . . . . . . . . . . 23, 30, 62

linear equation system - lineares Gleichungssystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

linear form - Linearform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

linear independence - lineare Unabhängigkeit . . . . . . . . . . . . . . . . . . . . . . . . . . 23, 30

linear manifold - lineare Mannigfaltigkeit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

linear mapping - lineare Abbildung . . . . . . . . . . . . . . . . . . . 32, 54, 97, 105, 121

linear operator - linearer Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

linear space - linearer Raum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

linear subspace - linearer Unterraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

linear transformation - lineare Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

linear vector space - linearer Vektorraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

linearity - Linearität . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 34

linearly dependent - linear abhängig . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15, 23, 49

linearly independent - linear unabhängig . . . . . . . . . . . . . . . . . . . . . 15, 23, 48, 59, 66

lowering an index - Senken eines Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

main diagonal - Hauptdiagonale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 65

map - Abbildung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

mapping - Abbildung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

matrix - Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

matrix calculus - Matrizenalgebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

matrix multiplication - Matrizenmultiplikation . . . . . . . . . . . . . . . . . . . . . . . . . . . 42, 54

matrix norm - Matrix-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21, 22

matrix transpose - transponierte Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

maximum absolute column sum norm - Spaltennorm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

maximum absolute row sum norm - Zeilennorm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

maximum-norm - Maximumnorm, ∞-Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

mean value - Mittelwert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

metric - Metrik . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

metric coefficients - Metrikkoeffizienten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

metric space - metrischer Raum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

metric tensor of covariant coefficients - Metriktensor mit kovarianten Koeffizienten . . . . . . . . . . 102

mixed components - gemischte Komponenten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

mixed formulation of a second order tensor - gemischte Formulierung eines Tensors zweiter Stufe . . . . 99

moment equilibrium condition - Momentengleichgewichtsbedingung . . . . . . . . . . . . . . . . . 120

moving trihedron - begleitendes Dreibein . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

multiple roots - Mehrfachnullstellen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

multiplicative identity - multiplikationsneutrales Element . . . . . . . . . . . . . . . . . . . . . 45

multiplicative inverse - inverses Element der Multiplikation . . . . . . . . . . . . . . . . . . 10

n-tuple - n-Tupel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

nabla operator - Nabla-Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

natural basis - natürliche Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140

natural numbers - natürliche Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

naturals - natürliche Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

negative definite - negativ definit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Newton’s relation - Vietasche Wurzelsätze . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

non empty set - nicht leere Menge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

non-commutative - nicht-kommutativ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

noncommutative - nicht kommutativ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69, 106

nonsingular - regulär, nicht singulär . . . . . . . . . . . . . . . . . . . . . . . . 48, 59, 66

nonsingular square matrix - reguläre quadratische Matrix . . . . . . . . . . . . . . . . . . . . . . . . . 55

nonsymmetric - unsymmetrisch, nicht symmetrisch . . . . . . . . . . . . . . . . . . . 69

nontrivial solution - nicht triviale Lösung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

norm - Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18, 65

norm of a tensor - Norm eines Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

normal basis - normale Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

normal unit - Normaleneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

normal unit vector - Normaleneinheitsvektor . . . . . . . . . . . . . . . . . . . 121, 136, 138

normal vector - Normalenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96, 135

normed space - normierter Raum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

null mapping - Nullabbildung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

odd permutation - ungerade Permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

one - Einselement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

operation - Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

operation addition - Additionsoperation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

operation multiplication - Multiplikationsoperation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

order of a matrix - Ordnung einer Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

origin - Ursprung, Nullelement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

orthogonal - orthogonal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

orthogonal matrix - orthogonale Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

orthogonal tensor - orthogonaler Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

orthogonal transformation - orthogonale Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . 57

orthonormal basis - orthonormale Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

outer product - äußeres Produkt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

overlined basis - überstrichene Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

parallelepiped - Parallelepiped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

partial derivatives - partielle Ableitungen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

partial derivatives of base vectors - partielle Ableitungen von Basisvektoren . . . . . . . . . . . . . 145

permutation symbol - Permutationssymbol . . . . . . . . . . . . . . . . . . . . . . . 87, 112, 128

permutation tensor - Permutationstensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

permutations - Permutationen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

point of origin - Koordinatenursprung, -nullpunkt . . . . . . . . . . . . . . . . . . . . . 28

Poisson’s ratio - Querkontraktionszahl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

polar decomposition - polare Zerlegung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

polynomial factorization - Polynomzerlegung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

polynomial of n-th degree - Polynom n-ten Grades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

position vector - Ortsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135, 152

positive definite - positiv definit . . . . . . . . . . . . . . . . . . . . . . . 25, 61, 62, 111

positive metric - positive Metrik . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

positive norm - positive Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

post-multiplication - Nachmultiplikation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

potential character - Potentialeigenschaft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .129

power series - Potenzreihe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

pre-multiplication - Vormultiplikation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

principal axes - Hauptachsen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

principal axes problem - Hauptachsenproblem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

principal axis - Hauptachse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

principal stress directions - Hauptspannungsrichtungen . . . . . . . . . . . . . . . . . . . . . . . . . 122

principal stresses - Hauptspannungen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

product - Produkt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

proper orthogonal tensor - eigentlich orthogonaler Tensor . . . . . . . . . . . . . . . . . . . . . . 116

quadratic form - quadratische Form . . . . . . . . . . . . . . . . . . . . . . . 26, 57, 62, 124

quadratic value of the norm - Normquadrate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

raising an index - Heben eines Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

range - Bildbereich . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

range - Urbild . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

rank - Rang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

rational numbers - rationale Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Rayleigh quotient - Rayleigh-Quotient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67, 68

real numbers - reelle Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

rectangular matrix - Rechteckmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

reduction of rank - Rangabfall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

Riesz representation theorem - Riesz Abbildungssatz . . . . . . . . . . . . . . . . . . . . . . . . . . 36

right-hand Cauchy strain tensor - rechter Cauchy-Strecktensor . . . . . . . . . . . . . . . . . . . . 117

roots - Nullstellen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

rotated coordinate system - gedrehtes Koordinatensystem . . . . . . . . . . . . . . . . . . . . . . . 119

rotation matrix - Drehmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

rotation of a vector field - Rotation eines Vektorfeldes . . . . . . . . . . . . . . . . . . . . . . . . . 150

rotation transformation - Drehtransformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

rotator - Rotor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

row - Zeile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

row index - Zeilenindex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

row matrix - Zeilenmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40, 45

row vector - Zeilenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40, 45

scalar field - Skalarfeld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

scalar function - Skalarfunktion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

scalar invariant - skalare Invariante . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

scalar multiplication - skalare Multiplikation . . . . . . . . . . . . . . . . . . . . . . . . . 9, 12, 42

scalar multiplication identity - multiplikationsneutrales Element . . . . . . . . . . . . . . . . . . . . . 10

scalar product - Skalarprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 25, 85, 96

scalar product of tensors - Skalarprodukt von Tensoren . . . . . . . . . . . . . . . . . . . . . . . . 110

scalar product of two dyads - Skalarprodukt zweier Dyadenprodukte . . . . . . . . . . . . . . . 111

scalar triple product - Spatprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88, 90, 152

scalar-valued function of multiple variables - skalarwertige Funktion mehrerer Veränderlicher . . . . . 134

scalar-valued scalar function - skalarwertige Skalarfunktion . . . . . . . . . . . . . . . . . . . . . . . . 133

scalar-valued vector function - skalarwertige Vektorfunktion . . . . . . . . . . . . . . . . . . . . . . . 143

Schwarz inequality - Schwarzsche Ungleichung . . . . . . . . . . . . . . . . . . . . . . 26, 111

second derivative - zweite Ableitung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

second order tensor - Tensor zweiter Stufe . . . . . . . . . . . . . . . . . . . . . . . . 96, 97, 127

second order tensor product - Produkt von Tensoren zweiter Stufe . . . . . . . . . . . . . . . . . 105

section surface - Schnittfläche . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

semidefinite - semidefinit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Serret-Frenet equations - Frenetsche Formeln . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

set - Menge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

set theory - Mengenlehre . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

shear stresses - Schubspannungen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

similar - ähnlich, kogredient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55, 69

similarity transformation - Ähnlichkeitstransformation, kogrediente Transformation . . . . . . . . . . 55

simple fourth order tensor - einfacher Tensor vierter Stufe . . . . . . . . . . . . . . . . . . . . . . . 129

simple second order tensor - einfacher Tensor zweiter Stufe . . . . . . . . . . . . . . . . . . . . 94, 99

simple third order tensor - einfacher Tensor dritter Stufe . . . . . . . . . . . . . . . . . . . . . . . 129

skew part of a tensor - schief- oder antisymmetrischer Anteil eines Tensors . . . 115

symmetric tensor - symmetrischer Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

space - Raum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

space curve - Raumkurve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

space of continuous functions - Raum der stetigen Funktionen . . . . . . . . . . . . . . . . . . . . . . . 14

space of square matrices - Raum der quadratischen Matrizen . . . . . . . . . . . . . . . . . . . . 14

span - Hülle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

special eigenvalue problem - spezielles Eigenwertproblem . . . . . . . . . . . . . . . . . . . . . . . . . 65

spectral norm - Spektralnorm, Hilbert-Norm . . . . . . . . . . . . . . . . . . . . . . . . . 22

square - quadratisch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

square matrix - quadratische Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Stokes’ theorem - Stokescher Integralsatz, Integralsatz für ein Kreuzprodukt . . . . . . . . . . . . . 157

strain tensor - Verzerrungstensor, Dehnungstensor . . . . . . . . . . . . . . . . . . 129

stress state - Spannungszustand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

stress tensor - Spannungstensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96, 129

stress vector - Spannungsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

subscript index - untenstehender Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

subset - Untermenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

summation convention - Summenkonvention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

superscript index - obenstehender Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

superset - Obermenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

supremum - obere Schranke . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

surface - Oberfläche . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

surface element - Oberflächenelement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

surface integral - Oberflächenintegral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

surjective - surjektiv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

symbols - Symbole . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

symmetric - symmetrisch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 41

symmetric matrix - symmetrische Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

symmetric metric - symmetrische Metrik . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

symmetric part - symmetrischer Anteil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

symmetric part of a tensor - symmetrischer Anteil eines Tensors . . . . . . . . . . . . . . . . . 115

tangent unit - Tangenteneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

tangent unit vector - Tangenteneinheitsvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

tangent vector - Tangentenvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

Taylor series - Taylor-Reihe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133, 153

tensor - Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

tensor axioms - Axiome für Tensoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

tensor field - Tensorfeld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

tensor product - Tensorprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105, 106

tensor product of two dyads - Tensorprodukt zweier Dyadenprodukte . . . . . . . . . . . . . . 106

tensor space - Tensorraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

tensor with contravariant base vectors and covariant coordinates - Tensor mit kontravarianten Basisvektoren und kovarianten Koeffizienten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

tensor with covariant base vectors and contravariant coordinates - Tensor mit kovarianten Basisvektoren und kontravarianten Koeffizienten . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

tensor-valued function of multiple variables - tensorwertige Funktion mehrerer Veränderlicher . . . . . 134

tensor-valued scalar function - tensorwertige Skalarfunktion . . . . . . . . . . . . . . . . . . . . . . . 133

tensor-valued vector function - tensorwertige Vektorfunktion . . . . . . . . . . . . . . . . . . . . . . . 143

third order fundamental tensor - Fundamentaltensor dritter Stufe . . . . . . . . . . . . . . . . . . . . . 128

third order tensor - Tensor dritter Stufe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

topology - Topologie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

torsion of a curve - Torsion einer Kurve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

total differential - vollständiges Differential . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

trace of a matrix - Spur einer Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

trace of a tensor - Spur eines Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

transformation matrix - Transformationsmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

transformation of base vectors - Transformation der Basisvektoren . . . . . . . . . . . . . . . . . . . 101

transformation of the metric coefficients - Transformation der Metrikkoeffizienten . . . . . . . . . . . 84

transformation relations - Transformationsformeln . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

transformation tensor - Transformationstensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

transformed contravariant base vector - transformierter kontravarianter Basisvektor . . . . . . . . . . 103

transformed covariant base vector - transformierter kovarianter Basisvektor . . . . . . . . . . . . . . 103

transpose of a matrix - transponierte Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

transpose of a matrix product - transponiertes Matrizenprodukt . . . . . . . . . . . . . . . . . . . . . . 44

transpose of a tensor - transponierter Tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

triangle inequality - Dreiecksungleichung . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16, 19

trivial solution - triviale Lösung . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

union - Vereinigungsmenge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

unit matrix - Einheitsmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

unitary space - unitärer Raum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

unitary vector space - unitärer Vektorraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

usual scalar product - übliches Skalarprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

vector - Vektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12, 28, 127

vector field - Vektorfeld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

vector function - Vektorfunktion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

vector norm - Vektor-Norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18, 22

vector of associated direction - Richtungsvektoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

vector of position - Ortsvektoren . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

vector product - Vektorprodukt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

vector space - Vektorraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12, 49

vector space of linear mappings - Vektorraum der linearen Abbildungen. . . . . . . . . . . . . . . . .33

vector-valued function - vektorwertige Funktion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

vector-valued function of multiple variables - vektorwertige Funktion mehrerer Veränderlicher . . . . . 134

vector-valued scalar function - vektorwertige Skalarfunktion . . . . . . . . . . . . . . . . . . . . . . . 133

vector-valued vector function - vektorwertige Vektorfunktion . . . . . . . . . . . . . . . . . . . . . . . 143

visual space - Anschauungsraum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

volume - Volumen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

volume element - Volumenelement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .152

volume integral - Volumenintegral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

volumetric matrix - Kugelmatrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

volumetric part of a tensor - Kugelanteil eines Tensors . . . . . . . . . . . . . . . . . . . . . . . . . . .113

von Mises iteration - von Mises Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

whole numbers - natürliche Zahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Young’s modulus - Elastizitätsmodul . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

zero element - Nullelement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

zero vector - Nullvektor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

zeros - Nullstellen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

Glossary German – English

L2-Norm - L2-norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

l2-Norm, euklidische Norm - l2-norm, Euclidean norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

n-Tupel - n-tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Abbildung - map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Abbildung - mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Ableitung einer skalaren Größe - derivative of a scalar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Ableitung eines Tensors - derivative of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

Ableitung eines Vektors - derivative of a vector . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Ableitung nach einer skalaren Größe - derivative w.r.t. a scalar variable . . . . . . . . . . . . . . . 133

Ableitungen - derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Ableitungen von Basisvektoren - derivatives of base vectors . . . . . . . . . . . . . . . . . . . . . 141, 145

Abstand - distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

additionsneutrales Element - additive identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10, 42

Additionsoperation - operation addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

additiv - additive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

ähnlich, kogredient - similar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55, 69

Ähnlichkeitstransformation, kogrediente Transformation - similarity transformation . . . . . . . . . . . 55

äußeres Produkt - outer product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

affiner Vektor - affine vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

affiner Vektorraum - affine vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 54

allgemeines Eigenwertproblem - general eigenvalue problem . . . . . . . . . . . . . . . . 69

allgemeines Permutationssymbol - general permutation symbol . . . . . . . . . . . . . . . . 92

Anschauungsraum - visual space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

258 Glossary German – English

antisymmetrischer Anteil - antisymmetric part . . . . . . . . . . . . . . . . 44

antisymmetrischer Anteil eines Tensors - antisymmetric part of a tensor . . . . . . . . . . . . . . . . 115

assoziativ - associative . . . . . . . . . . . . . . . . 42

assoziativ bzgl. Matrizenaddition - associative under matrix addition . . . . . . . . . . . . . . . . 42

Assoziativgesetz - associative rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Axiome für Tensoren - tensor axioms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

Basis - basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31

Basis eines Vektorraums - basis of the vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

Basisvektoren - base vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31, 87

begleitendes Dreibein - moving trihedron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Betrag - absolute value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

Betrag eines Tensors - absolute value of a tensor . . . . . . . . . . . . . . . . . . . . . . 111, 113

bijektiv - bijective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Bildbereich - image set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8

Bildbereich - range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

bilinear - bilinear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

Bilinearform - bilinear form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

Binormaleneinheitsvektor - binormal unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Binormaleneinheitsvektor - binormal unit vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Cauchy-Spannungstensor - Cauchy stress tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96, 120

Cayley-Hamilton Theorem - Cayley-Hamilton Theorem. . . . . . . . . . . . . . . . . . . . . . . . . . .71

charakteristische Gleichung - characteristic equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .65

charakteristische Matrix - characteristic matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

charakteristisches Polynom - characteristic polynomial . . . . . . . . . . . . . . . . . . . . 56, 65, 123

Christoffel-Symbol - Christoffel symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

definite Metrik - definite metric . . . . . . . . . . . . . . . . 16

definite Norm - definite norm . . . . . . . . . . . . . . . . 18

Definitionsbereich - domain . . . . . . . . . . . . . . . . 8

Deformationsgradient - deformation gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

Determinante - determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50, 65, 89

Determinante der Jacobimatrix - determinant of the Jacobian matrix . . . . . . . . . . . . . . . . . . 140

TU Braunschweig, CSE – Vector and Tensor Calculus – 22. Oktober 2003

Determinante der kontravarianten Metrikkoeffizienten - determinant of the contravariant metric coefficients . . . . . . . . . . . . . . . . 83

Determinante der kovarianten Metrikkoeffizienten - determinant of the covariant metric coefficients . . . . . . . . . . . . . . . . 83

Determinante eines Tensors - determinant of a tensor . . . . . . . . . . . . . . . . 112

Determinantenentwicklungssatz mit Unterdeterminanten - determinant expansion by minors . . . . . . . . . . . . . . . . 51

Deviator eines Tensors - deviator part of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

Deviatormatrix - deviator matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

Diagonalmatrix - diagonal matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 43

differentielles Flächenelement - differential element of area . . . . . . . . . . . . . . . . . . . . . . . . . 139

Dimension - dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13, 14

direkte Methode - direct method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .68

direktes Produkt - direct product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

diskrete Metrik - discrete metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

distributiv - distributive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

Distributivgesetz - distributive law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Distributivgesetz - distributive w.r.t. addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Divergenz eines Tensorfeldes - divergence of a tensor field . . . . . . . . . . . . . . . . 147

Divergenz eines Vektorfeldes - divergence of a vector field . . . . . . . . . . . . . . . . 147

Divergenztheorem - divergence theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156

Drehmatrix - rotation matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .58

Drehtransformation - rotation transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

Dreiecksungleichung - triangle inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16, 19

dualer Vektorraum, Dualraum - dual vector space . . . . . . . . . . . . . . . . 36

Dualraum - dual space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36, 97

dyadisches Produkt - dyadic product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94–96

eigentlich orthogonaler Tensor - proper orthogonal tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

Eigenvektor - eigenvector . . . . . . . . . . . . . . . . 65, 120

Eigenvektormatrix, Modalmatrix - eigenvector matrix . . . . . . . . . . . . . . . . 70

Eigenwert - eigenvalue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65, 120

Eigenwerte - eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 56

Eigenwertproblem - eigenvalue problem . . . . . . . . . . . . . . . . . . . . . 22, 65, 122, 123

einfacher Tensor dritter Stufe - simple third order tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . .129

einfacher Tensor vierter Stufe - simple fourth order tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 129

einfacher Tensor zweiter Stufe - simple second order tensor . . . . . . . . . . . . . . . . . . . . . . . 94, 99

Einheitsmatrix - identity matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 45

Einheitsmatrix - unit matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

Einheitsmatrix, Identität - identity matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

Einheitstensor - identity tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

Einselement - one . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

elastisch - elastic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Elastizitätsmodul - Young’s modulus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Elastizitätstensor - elasticity tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Elastizitätstheorie - elasticity theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Elemente - elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

endlich - finite . . . . . . . . . . . . . . . . 13

Euklidische Matrixnorm - Euclidean matrix norm . . . . . . . . . . . . . . . . 60

euklidische Norm - Euclidean norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22, 30, 85

euklidische Vektoren - Euclidean vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Euklidischer Raum - Euclidean space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

euklidischer Vektorraum - Euclidean vector space . . . . . . . . . . . . . . . . . . . . . . 26, 29, 143

Feld - field . . . . . . . . . . . . . . . . 143

Finite-Element-Methode - finite element method . . . . . . . . . . . . . . . . 57

Flächenvektor - area vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

Formänderungsenergie - deformation energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Fréchet-Ableitung - Fréchet derivative . . . . . . . . . . . . . . . . 143

freier Index - free indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .78

Frenetsche Formeln - Serret-Frenet equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Fundamentaltensor - fundamental tensor . . . . . . . . . . . . . . . . 150

Fundamentaltensor dritter Stufe - third order fundamental tensor . . . . . . . . . . . . . . . . 128

Funktion - function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

ganze Zahlen - integers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Gauss’scher Integralsatz - Gauss’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155, 158

Gaußsche Transformation - Gauss transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .59

gedrehtes Koordinatensystem - rotated coordinate system . . . . . . . . . . . . . . . . 119

gemischte Formulierung eines Tensors zweiter Stufe - mixed formulation of a second order tensor . . . . . . . . . . . . . . . . 99

gemischte Komponenten - mixed components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

gerade Permutation - even permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

Gesamtnorm - absolute norm . . . . . . . . . . . . . . . . 22

Gleichgewicht der äußeren Kräfte - equilibrium system of external forces . . . . . . . . . . . . . . . . 96

Gleichgewichtsbedingungen - equilibrium conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

Gradient - gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Gradient eines Ortsvektors - gradient of a vector of position . . . . . . . . . . . . . . . . . . . . . . 144

Hauptachse - principal axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

Hauptachsen - principal axes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

Hauptachsenproblem - principal axes problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

Hauptdiagonale - main diagonal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41, 65

Hauptspannungen - principal stresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

Hauptspannungsrichtungen - directions of principal stress . . . . . . . . . . . . . . . . . . . . . . . . 120

Hauptspannungsrichtungen - principal stress directions . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

Heben eines Index - raising an index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

homogen - homogeneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

homogene Norm - homogeneous norm . . . . . . . . . . . . . . . . 18

homogenes lineares Gleichungssystem - homogeneous linear equation system . . . . . . . . . . . . . . . . 65

Homomorphismus - homomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

homöomorph - homeomorphic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30, 97

Homöomorphismus - homeomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

Hookesches Gesetz - Hooke’s law . . . . . . . . . . . . . . . . 129

Höldersche Ungleichung - Hölder sum inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

Hülle - span . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

infinitesimal - infinitesimal . . . . . . . . . . . . . . . . 96

infinitesimaler Tetraeder - infinitesimal tetrahedron . . . . . . . . . . . . . . . . 96

injektiv - injective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

innerer Produktraum - inner product space . . . . . . . . . . . . . . . . 26, 27, 29

inneres Produkt - inner product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 85

inneres Produkt von Tensoren - inner product of tensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

Integralnorm, L1-Norm - L1-norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

Integralsatz - integral theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156

invariant - invariant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

Invariante - invariant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120, 123, 148

Invarianz - invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

Inverse - inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8, 34

inverse Beziehung - inverse relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

inverse Matrix - inverse of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

inverse Transformation - inverse transformation . . . . . . . . . . . . . . . . . . . . . . . . . 101, 103

inverser Tensor - inverse of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

inverses Element der Addition - additive inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10, 42

inverses Element der Addition - inverse w.r.t. addition . . . . . . . . . . . . . . . . 10

inverses Element der Multiplikation - inverse w.r.t. multiplication . . . . . . . . . . . . . . . . 10

inverses Element der Multiplikation - multiplicative inverse . . . . . . . . . . . . . . . . 10

invertierbar - invertible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

isomorph - isomorphic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29, 35

Isomorphimus - isomorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

isotrop - isotropic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

isotroper Tensor - isotropic tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

Iterationsvorschrift - iterative process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

Jacobi-Determinante - Jacobian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140

kartesische Basis - Cartesian basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

kartesische Basisvektoren - Cartesian base vectors . . . . . . . . . . . . . . . . 88

kartesische Komponenten des Permutationstensors - Cartesian components of a permutation tensor . . . . . . . . . . . . . . . . 87

kartesische Koordinaten - Cartesian coordinates . . . . . . . . . . . . . . . . 78, 82, 144

Kofaktor, algebraisches Komplement - cofactor . . . . . . . . . . . . . . . . 51

Kombination - combination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 34, 54

kommutativ - commutative . . . . . . . . . . . . . . . . 43

kommutativ bzgl. Matrizenaddition - commutative under matrix addition . . . . . . . . . . . . . . . . 42

kommutative Matrix - commutative matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

Kommutativgesetz - commutative rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

komplexe Zahlen - complex numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Komponenten - components . . . . . . . . . . . . . . . . 31, 65

Komponenten des Christoffel-Symbols - components of the Christoffel symbol . . . . . . . . . . . . . . . . 142

Komponenten des Permutationstensors - components of the permutation tensor . . . . . . . . . . . . . . . . 87

Komposition - composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34, 54, 106

kongruent - congruent . . . . . . . . . . . . . . . . 56, 63

Kongruenztransformation, kontragrediente Transformation - congruence transformation . . . . . . . . . . . . . . . . 56, 63

konjugiert komplexe Eigenwerte - complex conjugate eigenvalues . . . . . . . . . . . . . . . . 124

Kontinuum - continuum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

kontravariante Basisvektoren - contravariant base vectors . . . . . . . . . . . . . . . . 81, 139

kontravariante Basisvektoren der natürlichen Basis - contravariant base vectors of the natural basis . . . . . . . . . . . . . . . . 141

kontravariante Koordinaten, Koeffizienten - contravariant coordinates . . . . . . . . . . . . . . . . 80, 84

kontravariante Metrikkoeffizienten - contravariant metric coefficients . . . . . . . . . . . . . . . . 82, 83

kontravariantes ε-Symbol - contravariant ε symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

Koordinaten - coordinates . . . . . . . . . . . . . . . . 31

Koordinatenursprung, -nullpunkt - point of origin . . . . . . . . . . . . . . . . 28

kovariante Ableitung - covariant derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . .146, 149

kovariante Basisvektoren - covariant base vectors . . . . . . . . . . . . . . . . 80, 138

kovariante Basisvektoren der natürlichen Basis - covariant base vectors of the natural basis . . . . . . . . . . . . . . . . 140

kovariante Koordinaten, Koeffizienten - covariant coordinates . . . . . . . . . . . . . . . . 81, 84

kovariante Metrikkoeffizienten - covariant metric coefficients . . . . . . . . . . . . . . . . 80, 83

kovariantes ε-Symbol - covariant ε symbol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

Kreuzprodukt - cross product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87, 90, 96

Kronecker-Delta - Kronecker delta . . . . . . . . . . . . . . . . . . . . . . . . . . . . .52, 79, 119

krummlinige Koordinaten - curvilinear coordinates . . . . . . . . . . . . . . . . 139, 144

krummliniges Koordinatensystem - curvilinear coordinate system . . . . . . . . . . . . . . . . 139

Kräftegleichgewicht - equilibrium system of forces . . . . . . . . . . . . . . . . 96

Krümmung - curvature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

Krümmung einer Kurve - curvature of a curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

Kugelanteil eines Tensors - volumetric part of a tensor . . . . . . . . . . . . . . . . . . . . . . . . . . 113

Kugelmatrix - volumetric matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

Körper - field . . . . . . . . . . . . . . . . 10

Laplace-Operator - Laplace operator . . . . . . . . . . . . . . . . 150

Laplace-Operator eines Skalarfeldes - laplacian of a scalar field . . . . . . . . . . . . . . . . 150

Laplace-Operator eines Tensorfeldes - laplacian of a tensor field . . . . . . . . . . . . . . . . 151

Laplace-Operator eines Vektorfeldes - laplacian of a vector field . . . . . . . . . . . . . . . . 150

leere Menge - empty set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

linear - linear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

linear abhängig - linearly dependent . . . . . . . . . . . . . . . . . . . . . . . . . . . 15, 23, 49

linear unabhängig - linearly independent . . . . . . . . . . . . . . . . . . . 15, 23, 48, 59, 66

lineare Abbildung - linear mapping . . . . . . . . . . . . . . . . . . . . . 32, 54, 97, 105, 121

lineare Abhängigkeit - linear dependence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23, 30, 62

lineare Algebra - linear algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

lineare Mannigfaltigkeit - linear manifold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

lineare Transformation - linear transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

lineare Unabhängigkeit - linear independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23, 30

linearer Operator - linear operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

linearer Raum - linear space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

linearer Vektorraum - linear vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

linearer Unterraum - linear subspace . . . . . . . . . . . . . . . . 15

lineares Gleichungssystem - linear equation system . . . . . . . . . . . . . . . . 48

Linearform - linear form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

Linearität - linearity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 34

Linearkombination - linear combination . . . . . . . . . . . . . . . . . . . . . . . . . . . 15, 49, 71

Linienelement - line element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

linker Cauchy-Strecktensor - left-hand Cauchy strain tensor . . . . . . . . . . . . . . . . . . . . . . 117

Matrix - matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Matrix-Norm - matrix norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21, 22

Matrizenalgebra - matrix calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Matrizenmultiplikation - matrix multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42, 54

Maximumnorm, ∞-Norm - l-infinity-norm, maximum-norm . . . . . . . . . . . . . . . . 19

Maximumsnorm, p-Norm - p-norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

Maximumsnorm, p-Norm - maximum-norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

Mehrfachnullstellen - multiple roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

Menge - set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Mengenlehre - set theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Metrik - metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

Metrikkoeffizienten - metric coefficients . . . . . . . . . . . . . . . . 138

Metriktensor mit kovarianten Koeffizienten - metric tensor of covariant coefficients . . . . . . . . . . . . . . . . 102

metrischer Raum - metric space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

von Mises Iteration - von Mises iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .68

Mittelwert - mean value . . . . . . . . . . . . . . . . 153

Momentengleichgewichtsbedingung - equilibrium condition of moments . . . . . . . . . . . . . . . . 120

Momentengleichgewichtsbedingung - moment equilibrium condition . . . . . . . . . . . . . . . . 120

multiplikationsneutrales Element - multiplicative identity . . . . . . . . . . . . . . . . 45

multiplikationsneutrales Element - scalar multiplication identity . . . . . . . . . . . . . . . . 10

Multiplikationsoperation - operation multiplication . . . . . . . . . . . . . . . . 10

n-Tupel - n-tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

Nabla-Operator - nabla operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

Nachmultiplikation - post-multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

natürliche Basis - natural basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .140

natürliche Zahlen - natural numbers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6

natürliche Zahlen - naturals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

natürliche Zahlen - whole numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

negativ definit - negative definite . . . . . . . . . . . . . . . . 62

neutrales Element der Addition - identity element w.r.t. addition . . . . . . . . . . . . . . . . 10

neutrales Element der Multiplikation - identity element w.r.t. scalar multiplication . . . . . . . . . . . . . . . . 10

nicht kommutativ - noncommutative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69, 106

nicht leere Menge - non empty set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

nicht triviale Lösung - nontrivial solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

nicht-kommutativ - non-commutative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

Norm - norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18, 65

Norm eines Tensors - norm of a tensor . . . . . . . . . . . . . . . . 111

normale Basis - normal basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

Normaleneinheitsvektor - normal unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

Normaleneinheitsvektor - normal unit vector . . . . . . . . . . . . . . . . . . . . . . . . 121, 136, 138

Normalenvektor - normal vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96, 135

normierter Raum - normed space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18

Normquadrate - quadratic value of the norm . . . . . . . . . . . . . . . . . . . . . . . . . . 60

Nullabbildung - null mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

Nullelement - zero element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Nullstellen - roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .65

Nullstellen - zeros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

Nullvektor - zero vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

obenstehender Index - superscript index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .78

obere Schranke - supremum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

Oberfläche - surface . . . . . . . . . . . . . . . . 152

Oberflächenelement - surface element . . . . . . . . . . . . . . . . 152

Oberflächenintegral - surface integral . . . . . . . . . . . . . . . . 152

Obermenge - superset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Operation - operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Ordnung einer Matrix - order of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

orthogonal - orthogonal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

orthogonale Transformation - orthogonal transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

orthogonalen Matrix - orthogonal matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

orthogonaler Tensor - orthogonal tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

orthonormale Basis - orthonormal basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

Ortsvektor - position vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135, 152

Ortsvektoren - vector of position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Parallelepiped - parallelepiped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

TU Braunschweig, CSE – Vector and Tensor Calculus – 22. Oktober 2003

Glossary German – English 267

partielle Ableitungen - partial derivatives . . . . . . . . . . . . . . . . . . . . . . 134

partielle Ableitungen von Basisvektoren - partial derivatives of base vectors . . . . 145

Permutationen - permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

Permutationssymbol - permutation symbol . . . . . . . . . . . . . . . . . . . . . . . . 87, 112, 128

Permutationstensor - permutation tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

polare Zerlegung - polar decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

Polynom n-ten Grades - polynomial of n-th degree . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

Polynomzerlegung - polynomial factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

positiv definit - positive definite . . . . . . . . . . . . . . . . . . . 25, 61, 62, 111

positive Metrik - positive metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

positive Norm - positive norm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

Potentialeigenschaft - potential character . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Potenzreihe - power series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

Produkt - product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Produkt von Tensoren zweiter Stufe - second order tensor product . . . . . . . . . . 105

Punktprodukt - dot product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

quadratisch - square . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

quadratische Form - quadratic form . . . . . . . . . . . . . . . . . . . . . . . . . . 26, 57, 62, 124

quadratische Matrix - square matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Querkontraktionszahl - Poisson’s ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Rang - rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

Rangabfall - reduction of rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

rationale Zahlen - rational numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Raum - space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Raum der quadratischen Matrizen - space of square matrices . . . . . . . . . . . . . 14

Raum der stetigen Funktionen - space of continuous functions . . . . . . . . . . . . 14

Raumfläche, gekrümmte Oberfläche - curved surface . . . . . . . . . . . . . . . . . 138

Raumkurve - space curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

Rayleigh-Quotient - Rayleigh quotient . . . . . . . . . . . . . . . . . . . . . . 67, 68

Rechenregeln für Skalarprodukte von Tensoren - identities for scalar products of tensors . . . 110


Rechenregeln für Tensorprodukte - identities for tensor products . . . . . . . . . . 106

Rechteckmatrix - rectangular matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

rechter Cauchy-Strecktensor - right-hand Cauchy strain tensor . . . . . . . . . . . . . . . . . . . . . 117

reelle Zahlen - real numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

regulär, nicht singulär - nonsingular . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48, 59, 66

reguläre quadratische Matrix - nonsingular square matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

Richtungsvektoren - vector of associated direction . . . . . . . . . . . . . . . . . . . . . . . 120

Riesz Abbildungssatz - Riesz representation theorem. . . . . . . . . . . . . . . . . . . . . . . . .36

Rotation - curl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

Rotation eines Vektorfeldes - rotation of a vector £eld . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

Rotor - rotator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

schiefsymmetrische Matrix - antisymmetric matrix . . . . . . . . . . . . . . . . . . . 41

schief- oder antisymmetrischer Anteil eines Tensors - skew part of a tensor . . . . 115

schiefsymmetrisch - antisymmetric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

Schnitt¤äche - section surface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

Schnittmenge - intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Schubspannungen - shear stresses . . . . . . . . . . . . . . . . . . . . . . . . . . 121

Schwarzsche oder Cauchy-Schwarzsche Ungleichung - Cauchy's inequality . . . . . . . . 21

Schwarzsche Ungleichung - Schwarz inequality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26, 111

semidefinit - semidefinite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Senken eines Index - lowering an index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

skalare Invariante - scalar invariant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

skalare Multiplikation - scalar multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 12, 42

Skalarfeld - scalar field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Skalarfunktion - scalar function . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Skalarprodukt - scalar product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 25, 85, 96

Skalarprodukt von Tensoren - scalar product of tensors . . . . . . . . . . . . . . . 110

Skalarprodukt zweier Dyadenprodukte - scalar product of two dyads . . . . . . . . . 111

skalarwertige Funktion mehrerer Veränderlicher - scalar-valued function of multiple variables . . . 134

skalarwertige Skalarfunktion - scalar-valued scalar function . . . . . . . . . . . . . . . . . . . . . . . . 133


skalarwertige Vektorfunktion - scalar-valued vector function . . . . . . . . . . . . . . . . . . . . . . . .143

Spalte - column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Spaltenindex - column index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Spaltenmatrix - column matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 40, 46

Spaltennorm - maximum absolute column sum norm. . . . . . . . . . . . . . . . . 22

Spaltenvektor - column vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28, 40, 46, 59

Spannungstensor - stress tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96, 129

Spannungsvektor - stress vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

Spannungszustand - stress state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

Spatprodukt - scalar triple product . . . . . . . . . . . . . . . . . . . . . . . . . 88, 90, 152

Spektralnorm, Hilbert-Norm - spectral norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

spezielles Eigenwertproblem - special eigenvalue problem . . . . . . . . . . . . . . . . . . . . . . . . . . 65

Spur einer Matrix - trace of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

Spur eines Tensors - trace of a tensor . . . . . . . . . . . . . . . . . . . . . . . 112

Stokescher Integralsatz, Integralsatz für ein Kreuzprodukt - Stokes' theorem . . . . 157

stummer Index - dummy index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .78

Summenkonvention - summation convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

Summennorm, l1-Norm - l1-norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

surjektiv - surjective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Symbole - symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

symmetrisch - symmetric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25, 41

symmetrische Matrix - symmetric matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

symmetrische Metrik - symmetric metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

symmetrischer Anteil - symmetric part . . . . . . . . . . . . . . . . . . . . . . . . 44

symmetrischer Anteil eines Tensors - symmetric part of a tensor . . . . . . . . . . 115

symmetrischer Tensor - symmetric tensor . . . . . . . . . . . . . . . . . . . . . . 124

Tangenteneinheitsvektor - tangent unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

Tangenteneinheitsvektor - tangent unit vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

Tangentenvektor - tangent vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

Taylor-Reihe - Taylor series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133, 153

Tensor - tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96

Tensor dritter Stufe - third order tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127


Tensor erster Stufe - first order tensor . . . . . . . . . . . . . . . . . . . . . . 127

Tensor höherer Stufe - higher order tensor . . . . . . . . . . . . . . . . . . . . . 127

Tensor mit kontravarianten Basisvektoren und kovarianten Koeffizienten - tensor with contravariant base vectors and covariant coordinates . . . 100

Tensor mit kovarianten Basisvektoren und kontravarianten Koeffizienten - tensor with covariant base vectors and contravariant coordinates . . . 99

Tensor vierter Stufe - fourth order tensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Tensor zweiter Stufe - second order tensor . . . . . . . . . . . . . . . . . . . . . . . . . 96, 97, 127

Tensorfeld - tensor field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Tensorprodukt - tensor product . . . . . . . . . . . . . . . . . . . . . . . . 105, 106

Tensorprodukt zweier Dyadenprodukte - tensor product of two dyads . . . . . . . . . 106

Tensorraum - tensor space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

tensorwertige Funktion mehrerer Veränderlicher - tensor-valued function of multiple variables . . . 134

tensorwertige Skalarfunktion - tensor-valued scalar function . . . . . . . . . . . . . . . . . . . . . . . . 133

tensorwertige Vektorfunktion - tensor-valued vector function . . . . . . . . . . . . . . . . . . . . . . . 143

Topologie - topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

Torsion einer Kurve - torsion of a curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Transformationsformeln - transformation relations . . . . . . . . . . . . . . . . . 103

Transformation der Basisvektoren - transformation of base vectors . . . . . . . . . 101

Transformation der Metrikkoeffizienten - transformation of the metric coefficients . . . 84

Transformationsmatrix - transformation matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

Transformationstensor - transformation tensor . . . . . . . . . . . . . . . . . . . 101

transformierter kontravarianter Basisvektor - transformed contravariant base vector . . . 103

transformierter kovarianter Basisvektor - transformed covariant base vector . . . . 103

transponierte Matrix - matrix transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

transponierte Matrix - transpose of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

transponierter Tensor - transpose of a tensor . . . . . . . . . . . . . . . . . . . 114

transponiertes Matrizenprodukt - transpose of a matrix product . . . . . . . . . . . 44

triviale Lösung - trivial solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65


überstrichene Basis - overlined basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

übliches Skalarprodukt - usual scalar product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

Umkehrung - inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

ungerade Permutation - odd permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

unitärer Raum - unitary space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

unitärer Vektorraum - unitary vector space . . . . . . . . . . . . . . . . . . . . . 29

unsymmetrisch, nicht symmetrisch - nonsymmetric . . . . . . . . . . . . . . . . . . . 69

untenstehender Index - subscript index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

Untermenge - subset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7

Urbild - preimage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Ursprung, Nullelement - origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Vektor - vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12, 28, 127

Vektor-Norm - vector norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18, 22

Vektorfeld - vector field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Vektorfunktion - vector function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

Vektorprodukt - vector product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

Vektorraum - vector space . . . . . . . . . . . . . . . . . . . . . . . . . . . 12, 49

Vektorraum der linearen Abbildungen - vector space of linear mappings . . . . . . . . 33

vektorwertige Funktion - vector-valued function . . . . . . . . . . . . . . . . . . 138

vektorwertige Funktion mehrerer Veränderlicher - vector-valued function of multiple variables . . . 134

vektorwertige Skalarfunktion - vector-valued scalar function . . . . . . . . . . . . . . . . . . . . . . . .133

vektorwertige Vektorfunktion - vector-valued vector function . . . . . . . . . . . . . . . . . . . . . . . 143

Vereinigungsmenge - union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

verträglich - compatible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

Verträglichkeit von Vektor- und Matrix-Norm - compatibility of vector and matrix norms . . . 22

Verzerrungstensor, Dehnungstensor - strain tensor . . . . . . . . . . . . . . . . . 129

Vietasche Wurzelsätze - Newton's relation . . . . . . . . . . . . . . . . . . . . . . 66

vollständiger Tensor zweiter Stufe - complete second order tensor . . . . . . . . . . 99

vollständiger Tensor dritter Stufe - complete third order tensor . . . . . . . . . . 129


vollständiger Tensor vierter Stufe - complete fourth order tensor . . . . . . . . . 129

vollständiges Differential - exact differential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133, 135

vollständiges Differential - total differential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Volumen - volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

Volumenelement - volume element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

Volumenintegral - volume integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

Vormultiplikation - pre-multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

Zeile - row. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Zeilenindex - row index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Zeilenmatrix - row matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40, 45

Zeilennorm - maximum absolute row sum norm . . . . . . . . . . . . . . . . . . . . 22

Zeilenvektor - row vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40, 45

zweite Ableitung - second derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133


Index

L1-norm, 19
L2-norm, 19
l1-norm, 19
l2-norm, Euclidean norm, 19
n-tuple, 28
p-norm, 19

absolute norm, 22
absolute value, 85
absolute value of a tensor, 111, 113
addition, 12, 13
additive, 32
additive identity, 10, 12, 42
additive inverse, 10, 12, 42
affine vector, 28
affine vector space, 28, 54
antisymmetric, 41
antisymmetric matrix, 41
antisymmetric part, 44
antisymmetric part of a tensor, 115
area vector, 152
associative, 42
associative rule, 10, 12
associative under matrix addition, 42

base vectors, 31, 87
basis, 31
basis of the vector space, 15
bijective, 8
bilinear, 25
bilinear form, 26
binormal unit, 137
binormal unit vector, 137

Cantor, 6
Cartesian base vectors, 88

Cartesian basis, 149
Cartesian components of a permutation tensor, 87
Cartesian coordinates, 78, 82, 144
Cauchy, 96
Cauchy stress tensor, 96, 120
Cauchy's inequality, 21
Cayley-Hamilton Theorem, 71
characteristic equation, 65
characteristic matrix, 70
characteristic polynomial, 56, 65, 123
Christoffel symbol, 142
cofactor, 51
column, 40
column index, 40
column matrix, 28, 40, 46
column vector, 28, 40, 46, 59
combination, 9, 34, 54
commutative, 43
commutative matrix, 43
commutative rule, 10, 12
commutative under matrix addition, 42
compatibility of vector and matrix norms, 22
compatible, 22
complete fourth order tensor, 129
complete second order tensor, 99
complete third order tensor, 129
complex conjugate eigenvalues, 124
complex numbers, 7
components, 31, 65
components of the Christoffel symbol, 142
components of the permutation tensor, 87
composition, 34, 54, 106
congruence transformation, 56, 63
congruent, 56, 63


continuum, 96
contravariant ε symbol, 93
contravariant base vectors, 81, 139
contravariant base vectors of the natural basis, 141
contravariant coordinates, 80, 84
contravariant metric coefficients, 82, 83
coordinates, 31
covariant ε symbol, 92
covariant base vectors, 80, 138
covariant base vectors of the natural basis, 140
covariant coordinates, 81, 84
covariant derivative, 146, 149
covariant metric coefficients, 80, 83
cross product, 87, 90, 96
curl, 150
curvature, 135
curvature of a curve, 136
curved surface, 138
curvilinear coordinate system, 139
curvilinear coordinates, 139, 144

definite metric, 16
definite norm, 18
deformation energy, 129
deformation gradient, 118
derivative of a scalar, 133
derivative of a tensor, 134
derivative of a vector, 133
derivative w.r.t. a scalar variable, 133
derivatives, 133
derivatives of base vectors, 141, 145
determinant, 50, 65, 89
determinant expansion by minors, 51
determinant of a tensor, 112
determinant of the contravariant metric coefficients, 83
determinant of the Jacobian matrix, 140
deviator matrix, 46
deviator part of a tensor, 113
diagonal matrix, 41, 43
differential element of area, 139

dimension, 13, 14
direct method, 68
direct product, 94
directions of principal stress, 120
discrete metric, 17
distance, 17
distributive, 42
distributive law, 10, 13
distributive w.r.t. addition, 10
distributive w.r.t. scalar addition, 13
distributive w.r.t. vector addition, 13
divergence of a tensor field, 147
divergence of a vector field, 147
divergence theorem, 156
domain, 8
dot product, 85
dual space, 36, 97
dual vector space, 36
dummy index, 78
dyadic product, 94–96

eigenvalue, 65, 120
eigenvalue problem, 22, 65, 122, 123
eigenvalues, 22, 56
eigenvector, 65, 120
eigenvector matrix, 70
Einstein, 78
elastic, 129
elasticity tensor, 129
elasticity theory, 129
elements, 6
empty set, 7
equilibrium conditions, 96
equilibrium condition of moments, 120
equilibrium system of external forces, 96
equilibrium system of forces, 96
Euclidean norm, 22, 30, 85
Euclidean space, 17
Euclidean vector, 143
Euclidean vector space, 26, 29, 143
Euclidean matrix norm, 60
even permutation, 50
exact differential, 133, 135


field, 10, 12, 143
finite, 13
finite element method, 57
first order tensor, 127
fourth order tensor, 129
Fréchet derivative, 143
free indices, 78
function, 8
fundamental tensor, 150

Gauss, 60
Gauss transformation, 59
Gauss's theorem, 155, 158
general eigenvalue problem, 69
general permutation symbol, 92
gradient, 143
gradient of a vector of position, 144

higher order tensor, 127
homeomorphic, 30, 97
homeomorphism, 30
homogeneous, 32
homogeneous linear equation system, 65
homogeneous norm, 18
homomorphism, 32
Hooke's law, 129
Hölder sum inequality, 21

identities for scalar products of tensors, 110
identities for tensor products, 106
identity, 12
identity element w.r.t. addition, 10
identity element w.r.t. scalar multiplication, 10, 12
identity matrix, 41, 45, 80
identity tensor, 112
image set, 8
infinitesimal, 96
infinitesimal tetrahedron, 96
injective, 8
inner product, 25, 85
inner product of tensors, 110
inner product space, 26, 27, 29

integers, 7
integral theorem, 156
intersection, 7
invariance, 57
invariant, 57, 120, 123, 148
inverse, 8, 34
inverse of a matrix, 48
inverse of a tensor, 115
inverse relation, 103
inverse transformation, 101, 103
inverse w.r.t. addition, 10, 12
inverse w.r.t. multiplication, 10
inversion, 48
invertible, 48
isomorphic, 29, 35
isomorphism, 35
isotropic, 129
isotropic tensor, 119
iterative process, 68

Jacobian, 140

Kronecker delta, 52, 79, 119

l-infinity-norm, maximum-norm, 19
Laplace operator, 150
laplacian of a scalar field, 150
laplacian of a tensor field, 151
laplacian of a vector field, 150
left-hand Cauchy strain tensor, 117
Leibniz, 50
line element, 139
linear, 32
linear algebra, 3
linear combination, 15, 49, 71
linear dependence, 23, 30, 62
linear equation system, 48
linear form, 36
linear independence, 23, 30
linear manifold, 15
linear mapping, 32, 54, 97, 105, 121
linear operator, 32
linear space, 12
linear subspace, 15


linear transformation, 32
linear vector space, 12
linearity, 32, 34
linearly dependent, 15, 23, 49
linearly independent, 15, 23, 48, 59, 66
lowering an index, 83

main diagonal, 41, 65
map, 8
mapping, 8
matrix, 40
matrix calculus, 28
matrix multiplication, 42, 54
matrix norm, 21, 22
matrix transpose, 41
maximum absolute column sum norm, 22
maximum absolute row sum norm, 22
maximum-norm, 20
mean value, 153
metric, 16
metric coefficients, 138
metric space, 17
metric tensor of covariant coefficients, 102
mixed components, 99
mixed formulation of a second order tensor, 99
moment equilibrium condition, 120
moving trihedron, 137
multiple roots, 70
multiplicative identity, 45
multiplicative inverse, 10

n-tuple, 35
nabla operator, 146
natural basis, 140
natural numbers, 6, 7
naturals, 7
negative definite, 62
Newton's relation, 66
non empty set, 13
non-commutative, 45
noncommutative, 69, 106
nonsingular, 48, 59, 66

nonsingular square matrix, 55
nonsymmetric, 69
nontrivial solution, 65
norm, 18, 65
norm of a tensor, 111
normal basis, 103
normal unit, 136
normal unit vector, 121, 136, 138
normal vector, 96, 135
normed space, 18
null mapping, 33

odd permutation, 50
one, 10
operation, 9
operation addition, 10
operation multiplication, 10
order of a matrix, 40
origin, 12
orthogonal, 66
orthogonal matrix, 57
orthogonal tensor, 116
orthogonal transformation, 57
orthonormal basis, 144
outer product, 87
overlined basis, 103

parallelepiped, 88
partial derivatives, 134
partial derivatives of base vectors, 145
permutation symbol, 87, 112, 128
permutation tensor, 128
permutations, 50
point of origin, 28
Poisson's ratio, 129
polar decomposition, 117
polynomial factorization, 66
polynomial of n-th degree, 65
position vector, 135, 152
positive definite, 25, 61, 62, 111
positive metric, 16
positive norm, 18
post-multiplication, 45


potential character, 129
power series, 71
pre-multiplication, 45
principal axes, 120
principal axes problem, 65
principal axis, 65
principal stress directions, 122
principal stresses, 122
product, 10, 12
proper orthogonal tensor, 116

quadratic form, 26, 57, 62, 124
quadratic value of the norm, 60

raising an index, 83
range, 8
rank, 48
rational numbers, 7
Rayleigh quotient, 67, 68
real numbers, 7
rectangular matrix, 40
reduction of rank, 66
Riesz representation theorem, 36
right-hand Cauchy strain tensor, 117
roots, 65
rotated coordinate system, 119
rotation matrix, 58
rotation of a vector field, 150
rotation transformation, 58
rotator, 116
row, 40
row index, 40
row matrix, 40, 45
row vector, 40, 45

scalar field, 143
scalar function, 133
scalar invariant, 149
scalar multiplicative identity, 12
scalar multiplication, 9, 12, 13, 42
scalar multiplication identity, 10
scalar product, 9, 25, 85, 96
scalar product of tensors, 110
scalar product of two dyads, 111

scalar triple product, 88, 90, 152
scalar-valued function of multiple variables, 134
scalar-valued scalar function, 133
scalar-valued vector function, 143
Schwarz inequality, 26, 111
second derivative, 133
second order tensor, 96, 97, 127
second order tensor product, 105
section surface, 121
semidefinite, 62
Serret-Frenet equations, 137
set, 6
set theory, 6
shear stresses, 121
similar, 55, 69
similarity transformation, 55
simple fourth order tensor, 129
simple second order tensor, 94, 99
simple third order tensor, 129
skew part of a tensor, 115
symmetric tensor, 124
space, 12
space curve, 135
space of continuous functions, 14
space of square matrices, 14
span, 15
special eigenvalue problem, 65
spectral norm, 22
square, 40
square matrix, 40
Stokes' theorem, 157
strain tensor, 129
stress state, 96
stress tensor, 96, 129
stress vector, 96
subscript index, 78
subset, 7
summation convention, 78
superscript index, 78
superset, 7
supremum, 22
surface, 152


surface element, 152
surface integral, 152
surjective, 8
symbols, 6
symmetric, 25, 41
symmetric matrix, 41
symmetric metric, 16
symmetric part, 44
symmetric part of a tensor, 115

tangent unit, 135
tangent unit vector, 135
tangent vector, 135
Taylor series, 133, 153
tensor, 96
tensor axioms, 98
tensor field, 143
tensor product, 105, 106
tensor product of two dyads, 106
tensor space, 94
tensor with contravariant base vectors and covariant coordinates, 100
tensor with covariant base vectors and contravariant coordinates, 99
tensor-valued function of multiple variables, 134
tensor-valued scalar function, 133
tensor-valued vector function, 143
third order fundamental tensor, 128
third order tensor, 127
topology, 30
torsion of a curve, 137
total differential, 133
trace of a matrix, 43
trace of a tensor, 112
transformation matrix, 55
transformation of base vectors, 101
transformation of the metric coefficients, 84
transformation relations, 103
transformation tensor, 101
transformed contravariant base vector, 103
transformed covariant base vector, 103
transpose of a matrix, 41

transpose of a matrix product, 44
transpose of a tensor, 114
triangle inequality, 16, 18, 19
trivial solution, 65

union, 7
unit matrix, 80
unitary space, 27
unitary vector space, 29
usual scalar product, 25

vector, 12, 28, 127
vector field, 143
vector function, 135
vector norm, 18, 22
vector of associated direction, 120
vector of position, 143
vector product, 87
vector space, 12, 49
vector space of linear mappings, 33
vector-valued function, 138
vector-valued function of multiple variables, 134
vector-valued scalar function, 133
vector-valued vector function, 143
visual space, 97
volume, 152
volume element, 152
volume integral, 152
volumetric matrix, 46
volumetric part of a tensor, 113
von Mises, 68
von Mises iteration, 68

whole numbers, 7

Young’s modulus, 129

zero element, 10
zero vector, 12
zeros, 65
