Lecture Notes for Differential Geometry, MATH 624, Iowa State University

Domenico D'Alessandro*

Copyright by Domenico D'Alessandro, 2020

December 22, 2020

* Department of Mathematics, Iowa State University, Ames, Iowa, U.S.A. Electronic address: daless@iastate.edu
Contents

Part I: Calculus On Manifolds

1 Manifolds
  1.1 Examples and further definitions
    1.1.1 Manifolds of dimension m = 1
    1.1.2 Surfaces
    1.1.3 n-dimensional Spheres
    1.1.4 Product manifolds
    1.1.5 Projective spaces
    1.1.6 Grassmann Manifolds
  1.2 Maps between manifolds
  1.3 Exercises

2 Tangent and cotangent spaces
  2.1 Tangent vector and tangent spaces
  2.2 Co-tangent vectors and co-tangent space
  2.3 Induced maps: Push-forward
    2.3.1 Computational Example
  2.4 Induced maps: Pull-back
    2.4.1 Computational Example
  2.5 Inverse function theorem; Submanifolds
  2.6 Exercises

3 Tensors and Tensor Fields
  3.1 Tensors
  3.2 Vector fields and tensor fields
    3.2.1 f-related vector fields
  3.3 Exercises

4 Integral curves and flows
  4.1 Relation with ODEs. The problem 'upstairs' and 'downstairs'
  4.2 Definition and properties of the flow
  4.3 Exercises

5 Lie Derivative
  5.1 Lie derivative of a vector field
  5.2 Lie derivatives of co-vector fields and general tensor fields
  5.3 Exercises

6 Differential Forms Part I: Algebra on Tensors
  6.1 Preliminaries: Permutations acting on tensors
  6.2 Differential forms and exterior product
  6.3 Characterization of the vector spaces Ω^r_p(M)
  6.4 Exercises

7 Differential Forms Part II: Fields and the Exterior Derivative
  7.1 Fields
  7.2 The exterior derivative
    7.2.1 Independence of coordinates
  7.3 Properties of the exterior derivative
    7.3.1 Examples
    7.3.2 Closed and Exact Forms
  7.4 Interior product
    7.4.1 Properties of the interior product
  7.5 Exercises

8 Integration of differential forms on manifolds part I: Preliminary Concepts
  8.1 Orientation on Manifolds
  8.2 Partition of Unity
  8.3 Orientation and existence of a nowhere vanishing form
  8.4 Simplexes
  8.5 Singular r-chains, boundaries and cycles
  8.6 Exercises

9 Integration of differential forms on manifolds part II: Stokes theorem
  9.1 Integration of differential r-forms over r-chains; Stokes theorem
  9.2 Integration of differential forms on regular domains and the second version of Stokes' Theorem
    9.2.1 Regular Domains
    9.2.2 Orientation and induced orientation
    9.2.3 Integration of differential forms over regular domains
    9.2.4 The second version of Stokes theorem
  9.3 De Rham Theorem and Poincaré Lemma
    9.3.1 Consequences of the De Rham Theorem
    9.3.2 Poincaré Lemma
  9.4 Exercises

10 Lie groups Part I; Basic Concepts
  10.1 Basic Definitions and Examples
  10.2 Lie subgroups and coset spaces
  10.3 Invariant vector fields and Lie algebras
    10.3.1 The Lie algebra of a Lie group
    10.3.2 Lie algebra of a Lie subgroup
  10.4 The Lie algebras of matrix Lie groups

11 Exercises

12 Lie groups Part II

13 Fiber bundles; Part I

14 Fiber bundles; Part II
Other resources:
1. D. Martin, Manifold Theory: An Introduction for Mathematical Physicists, Woodhead Publishing, Cambridge, UK, 2012.
2. J.M. Lee, Introduction to Smooth Manifolds, 2nd Edition, Graduate Texts in Mathematics 218, Springer, 2012.
3. W.M. Boothby, An Introduction to Differentiable Manifolds and Riemannian Geometry, Academic Press, Pure and Applied Mathematics Series, Vol. 120, 1986.
4. M. Nakahara, Geometry, Topology and Physics (Graduate Student Series in Physics), 2nd Edition, Taylor and Francis Group, New York, 2003.
5. M. Spivak, A Comprehensive Introduction to Differential Geometry, Vol. 1, 3rd Edition, Publish or Perish, 1999.
6. F. Warner, Foundations of Differentiable Manifolds and Lie Groups, Graduate Texts in Mathematics 94, Springer, 1983.
Part I

Calculus On Manifolds
1 Manifolds

A manifold is an object that locally looks like ℝ^m. In more detail:

Definition 1.0.1: Manifolds

An m-dimensional manifold M is a topological space together with a family of pairs (U_j, φ_j), called charts, such that the U_j are open sets in M and the φ_j are homeomorphisms φ_j : U_j → U_j' ⊆ ℝ^m. Moreover, the U_j's cover all of M, i.e.,

    ∪_j U_j = M.    (1.0.1)

Additionally, the maps φ_j are assumed to satisfy the smooth compatibility condition, which means that, for any pair j, k such that U_j ∩ U_k ≠ ∅, the map φ_k ∘ φ_j^{-1} : φ_j(U_j ∩ U_k) → φ_k(U_j ∩ U_k) is C^∞ in the usual sense of calculus for maps ℝ^m → ℝ^m. The maps φ_k ∘ φ_j^{-1} are referred to as transition functions (cf. Figure 1). Two more conditions are usually assumed, and we will do so as well:

1) M is assumed to be second countable, i.e., there exists a countable base for its topology.[a]

2) M is Hausdorff, that is, every two distinct points in M have disjoint neighborhoods.

[a] A countable family of open sets such that every open set can be written as the union of sets in this family.
Definition 1.0.2: Coordinates

The collection of all charts {(U_j, φ_j)} is called an atlas for M. U_j is called a coordinate neighborhood, while φ_j is called a coordinate system or coordinate function. Since its image is in ℝ^m, it is often denoted by using the coordinates (x^1, ..., x^m) in ℝ^m, i.e., for p ∈ M, φ_j(p) := (x^1(p), ..., x^m(p)). The number m is called the dimension of the manifold, and the manifold M is often denoted by M_m to emphasize its dimension.

Figure 1 describes the definition of a manifold and in particular the smooth compatibility condition.
Figure 1: Definition of Manifold and the Smooth Compatibility Condition
1.1 Examples and further definitions

1.1.1 Manifolds of dimension m = 1
Up to homeomorphism, there are only two possible manifolds of dimension 1: ℝ and S^1. ℝ can be given the trivial structure of a manifold by using the atlas {(U_j, φ_j)}, where the U_j are the open sets in the standard topology of ℝ and all the φ_j can be taken equal to the identity. In practice, we only need one chart (ℝ, id). For S^1, let U_1 be the open set consisting of all of S^1 except the point (1, 0), and let φ_1 map every point (cos θ, sin θ) to θ ∈ (0, 2π). Moreover, let U_2 be the circle except the point (-1, 0), with φ_2 mapping every point (cos u, sin u) to u ∈ (-π, π). The transition function φ_2 ∘ φ_1^{-1} is defined on (0, π) ∪ (π, 2π) and is C^∞, since u = θ for θ ∈ (0, π) and u = θ - 2π for θ ∈ (π, 2π). Analogously, one can see that φ_1 ∘ φ_2^{-1} is C^∞.
Figure 2: Manifold structure and transition function for S^1.
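The transition function between these two charts of S^1 can be checked numerically. The following is a small Python sketch, not part of the original notes; the function names are our own.

```python
import math

def phi1_inv(theta):
    # phi_1^{-1}: theta in (0, 2*pi) -> point (cos theta, sin theta) on S^1
    return (math.cos(theta), math.sin(theta))

def phi2(point):
    # phi_2: point on S^1 except (-1, 0) -> u in (-pi, pi)
    x, y = point
    return math.atan2(y, x)  # math.atan2 takes values in (-pi, pi]

def transition(theta):
    # phi_2 o phi_1^{-1}, defined on (0, pi) U (pi, 2*pi)
    return phi2(phi1_inv(theta))

# u = theta on (0, pi); u = theta - 2*pi on (pi, 2*pi)
assert abs(transition(1.0) - 1.0) < 1e-12
assert abs(transition(4.0) - (4.0 - 2 * math.pi)) < 1e-12
```

Note that the transition function is not defined at θ = π, where the two formulas would disagree; this matches the domain (0, π) ∪ (π, 2π) above.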
1.1.2 Surfaces
Much of our intuition about manifolds comes from curves and surfaces in ℝ^3, which we studied in multivariable calculus. There,

Definition 1.1.1: Surface

a surface is described by a function z = f(x, y) on an open set D ⊆ ℝ^2 or, more generally, by parametric coordinates x = x(u, v), y = y(u, v), z = z(u, v), with (u, v) in an open set D ⊆ ℝ^2, assuming that the coordinate functions are smooth as functions from ℝ^2 to ℝ^3 and have a smooth inverse. Such an inverse can be taken as the coordinate function φ of a unique chart defining the manifold structure of the surface.
Closed surfaces such as cylinders and spheres can still be described by parametric surfaces locally. Therefore, they can still be given the structure of a manifold, but the corresponding atlas contains more than one chart. The situation is similar to the one for the circle S^1 described above.
1.1.3 n-dimensional Spheres
Definition 1.1.2: S^n

The n-dimensional sphere S^n is the set of points in ℝ^{n+1} with coordinates (x^0, x^1, ..., x^n) such that

    Σ_{j=0}^{n} (x^j)^2 = 1,    (1.1.1)

with the subset topology induced from ℝ^{n+1}. Consider the charts (U_{j+}, φ_{j+}) and (U_{j-}, φ_{j-}), for j = 0, 1, ..., n, where the coordinate neighborhoods U_{j+} and U_{j-} are defined as

    U_{j+} := {(x^0, x^1, ..., x^n) ∈ S^n | x^j > 0},
    U_{j-} := {(x^0, x^1, ..., x^n) ∈ S^n | x^j < 0},

and the coordinate functions φ_{j±} : U_{j±} → ℝ^n are defined as

    φ_{j±}(x^0, x^1, ..., x^n) = (x^0, x^1, ..., x^{j-1}, x^{j+1}, ..., x^n).

The set of coordinate charts (U_{j±}, φ_{j±}) is an atlas, since the U_{j±}, j = 0, 1, 2, ..., n, form a cover of S^n (if (x^0, x^1, ..., x^n) did not belong to any of the U_{j±}, then all the components x^0, x^1, ..., x^n would be zero, which contradicts (1.1.1)). Assume j < k. On U_{j+} ∩ U_{k+}, both x^j and x^k are strictly positive. The map φ_{j+}^{-1} sends (x^0, x^1, ..., x^{j-1}, x^{j+1}, ..., x^n) ∈ ℝ^n to the point

    (x^0, x^1, ..., x^{j-1}, √(1 - Σ_{l≠j} (x^l)^2), x^{j+1}, ..., x^n) ∈ S^n,

which is mapped by φ_{k+} to

    (x^0, x^1, ..., x^{j-1}, √(1 - Σ_{l≠j} (x^l)^2), x^{j+1}, ..., x^{k-1}, x^{k+1}, ..., x^n) ∈ ℝ^n.

The transition function φ_{k+} ∘ φ_{j+}^{-1} therefore acts as

    φ_{k+} ∘ φ_{j+}^{-1}(x^0, x^1, ..., x^{j-1}, x^{j+1}, ..., x^n) = (x^0, x^1, ..., x^{j-1}, √(1 - Σ_{l≠j} (x^l)^2), x^{j+1}, ..., x^{k-1}, x^{k+1}, ..., x^n),

and it is C^∞, since x^j ≠ 0 implies √(1 - Σ_{l≠j} (x^l)^2) ≠ 0. Analogously, one obtains the smoothness of φ_{k±} ∘ φ_{j±}^{-1} for all the combinations of + and -.

We also remark that the second countability axiom and the Hausdorff property are automatically verified, since the topology is the subset topology of ℝ^{n+1}, which satisfies these properties.
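The chart maps for S^n and the transition function above are easy to verify numerically. Here is a small Python sketch, not from the original notes and with illustrative names, worked out for S^2 (n = 2):

```python
import math

def phi_inv_plus(j, x):
    # phi_{j+}^{-1}: reconstruct the (positive) j-th coordinate from chart coordinates x
    xj = math.sqrt(1.0 - sum(t * t for t in x))
    return x[:j] + [xj] + x[j:]

def phi_plus(k, p):
    # phi_{k+}: delete the k-th coordinate of a point p on S^n
    return p[:k] + p[k + 1:]

# A point of S^2 lying in U_{0+} and U_{2+} (x^0 > 0 and x^2 > 0)
p = [0.6, 0.0, 0.8]
x = phi_plus(0, p)                  # chart coordinates under phi_{0+}
assert all(abs(a - b) < 1e-12 for a, b in zip(phi_inv_plus(0, x), p))

# Transition phi_{2+} o phi_{0+}^{-1}: reconstruct x^0, then drop x^2
y = phi_plus(2, phi_inv_plus(0, x))
assert all(abs(a - b) < 1e-12 for a, b in zip(y, [0.6, 0.0]))
```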
1.1.4 Product manifolds
Definition 1.1.3: Product manifold

Suppose M and N are manifolds. Then M × N has a natural manifold structure under the product topology, with charts (U_m × U_n, φ_m × φ_n), where (U_m, φ_m) and (U_n, φ_n) are charts of M and N, respectively. We define this to be the product manifold.

It is a known fact from topology that the product of second countable Hausdorff spaces is a second countable Hausdorff space in the product topology. Under the product topology we naturally consider the charts (U_m × U_n, φ_m × φ_n); each such map is still a homeomorphism onto its image. Finally, for overlapping charts, (φ_{m_i} × φ_{n_i}) ∘ (φ_{m_j} × φ_{n_j})^{-1} is still smooth. This is verified by showing that the product of continuous functions is continuous, the product of open maps is an open map, and the product of differentiable functions is differentiable.
1.1.5 Projective spaces
Definition 1.1.4: Projective space

The real projective space ℝP^n is the quotient space (ℝ^{n+1} - {0})/∼, where ∼ is the equivalence relation which associates P with λP, for arbitrary λ ≠ 0 and P ∈ ℝ^{n+1} - {0}.

Geometrically, ℝP^n is the space of lines through the origin in ℝ^{n+1} or, equivalently, the space of 1-dimensional subspaces of ℝ^{n+1}. The manifold structure of ℝP^n follows as a special case of that of the Grassmann manifolds Gr_{k,n}(ℝ) (cf. Exercise 1.2 below), since ℝP^n := Gr_{1,n+1}(ℝ). We treat the general Grassmann manifolds next.
Figure 3: Hausdorff property in ℝP^1. U_0 (U_1) is the set of lines whose points have nonzero components along the x^0 (x^1) axis. All pairs of lines such as ([B], [A_1]) or ([B], [A_2]) are such that both elements are in U_0 or in U_1. The only exception is ([A_1], [A_2]). To handle this case we introduce a new U_θ by rotating the x^0 axis by an angle θ, obtaining the axis represented with the thicker dotted line. The elements of U_θ are all the lines whose points have nonzero component along the θ-rotated axis. This way, both [A_1] and [A_2] are in U_θ.
1.1.6 Grassmann Manifolds
We shall treat the case of Grassmann manifolds in some detail, as it also gives us a glimpse of some of the concepts we shall discuss later in the course, such as actions and Lie transformation groups.

Definition 1.1.5: Grassmann Manifold

We shall assume k < n. We start by defining the open Stiefel space St_{k,n}(ℝ) of k × n matrices of rank k. If M_{k,n} ≅ ℝ^{kn} denotes the space of k × n matrices, then St_{k,n}(ℝ) is an open subset of M_{k,n}(ℝ), since it consists of all elements of M_{k,n}(ℝ) except those satisfying the polynomial equations det(D_l) = 0, for l = 1, ..., C(n, k) (the binomial coefficient), where D_l is the l-th k × k submatrix of an element of M_{k,n}. So St_{k,n}(ℝ) has a natural topology as a subspace of M_{k,n}(ℝ). On St_{k,n}(ℝ) we define an equivalence relation ∼ by saying that A is equivalent to B if and only if there exists a k × k nonsingular matrix g such that

    A = gB.

The space of equivalence classes under this equivalence relation is called the Grassmannian, or Grassmann manifold, Gr_{k,n}(ℝ); that is, Gr_{k,n}(ℝ) := St_{k,n}(ℝ)/∼. It can be thought of as the space of k-dimensional subspaces of ℝ^n, each subspace being spanned by the rows of a representative of the given equivalence class. Since St_{k,n}(ℝ) has a given (natural) topology, Gr_{k,n}(ℝ) is equipped with the quotient topology; that is, if π : St_{k,n}(ℝ) → Gr_{k,n}(ℝ) is the natural projection mapping an element A ∈ St_{k,n}(ℝ) to its equivalence class [A] ∈ Gr_{k,n}(ℝ), then U is open in Gr_{k,n}(ℝ) iff π^{-1}(U) is open in St_{k,n}(ℝ).
We now do two things: (1) we define a differentiable structure on Gr_{k,n}(ℝ); (2) we prove that Gr_{k,n}(ℝ) with the given topology is Hausdorff.

For l = 1, ..., C(n, k), let U_l ⊆ Gr_{k,n}(ℝ) be the set of equivalence classes represented by elements A ∈ St_{k,n}(ℝ) such that the l-th k × k submatrix is nonsingular. Notice that this definition is well posed, since if [A] ∈ U_l then [gA] ∈ U_l for nonsingular g; that is, multiplication by g does not change the property that the l-th submatrix has nonzero determinant. Moreover, U_l is an open set in the quotient topology, since its preimage under π is open, as it consists of the matrices whose l-th submatrix is nonsingular. It is also clear that the {U_l} form a cover of Gr_{k,n}(ℝ), since if [A] ∈ Gr_{k,n}(ℝ) then at least one of the k × k minors of A is nonzero. On each U_l we define a coordinate map φ_l as follows: take an element [A] ∈ U_l and select the l-th k × k submatrix of A, which is nonsingular by definition; call it Q. Let N denote the remaining k × (n - k) submatrix of A, obtained by removing the columns corresponding to the l-th minor. For instance, if l = 1, we will have

    A := [Q N].

Then φ_l([A]) := Q^{-1}N ∈ M_{k,n-k}(ℝ) ≅ ℝ^{k(n-k)}. This map is well defined, since it does not depend on the representative A of the equivalence class [A]: in fact, if P is a k × k nonsingular matrix, φ_l([PA]) = (PQ)^{-1}(PN) = Q^{-1}P^{-1}PN = Q^{-1}N. The map φ_l is a homeomorphism between U_l and ℝ^{k(n-k)}. To show that the {(U_l, φ_l)} give a differentiable structure on Gr_{k,n}(ℝ), we need to show that the transition functions φ_l ∘ φ_j^{-1} : ℝ^{k(n-k)} → ℝ^{k(n-k)} are smooth for every l, j = 1, ..., C(n, k). Assume for simplicity of notation j = 1. We have

    W ∈ ℝ^{k(n-k)}  --φ_1^{-1}-->  [(1 W)]  --φ_l-->  Q^{-1}N,

where Q is the l-th submatrix of the matrix [1 W] and N is the remaining k × (n - k) matrix. This is clearly a C^∞ map ℝ^{k(n-k)} → ℝ^{k(n-k)}.
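The well-definedness of φ_l, i.e., φ_l([A]) = φ_l([PA]) for any nonsingular P, can be verified numerically. A minimal Python/NumPy sketch (our own illustration, with hypothetical names; here l = 1):

```python
import numpy as np

def phi1(A, k):
    # phi_1 on U_1: split A = [Q | N] with Q the first k x k block (assumed
    # nonsingular) and return the chart coordinates Q^{-1} N
    Q, N = A[:, :k], A[:, k:]
    return np.linalg.solve(Q, N)

rng = np.random.default_rng(0)
k, n = 2, 4
A = rng.standard_normal((k, n))                    # a random rank-k representative
P = rng.standard_normal((k, k)) + 3 * np.eye(k)    # a nonsingular k x k matrix

# The chart value is independent of the representative of [A]:
assert np.allclose(phi1(A, k), phi1(P @ A, k))
```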
We now turn to the proof that Gr_{k,n}(ℝ) is Hausdorff. Given two points [A_1] and [A_2] in Gr_{k,n}(ℝ), we need to exhibit two disjoint neighborhoods of [A_1] and [A_2]. If [A_1] and [A_2] are both in the same U_l, for some l, this is easily obtained, since the coordinate map φ_l gives a homeomorphism, that is, a one-to-one and onto bicontinuous correspondence, between U_l and ℝ^{k(n-k)}. Since ℝ^{k(n-k)} is Hausdorff, we can choose two disjoint neighborhoods N_1 and N_2 of φ_l([A_1]) and φ_l([A_2]), respectively, and φ_l^{-1}(N_1) and φ_l^{-1}(N_2) will give two disjoint neighborhoods of [A_1] and [A_2] in Gr_{k,n}(ℝ). This is the generic case, since the set of pairs ([A_1], [A_2]) in Gr_{k,n}(ℝ) × Gr_{k,n}(ℝ) that do not share any coordinate neighborhood U_l is a closed set (it corresponds to the closed set in St_{k,n}(ℝ) × St_{k,n}(ℝ) given by the zeros of the function Σ_l (det(D_{1,l}))^2 (det(D_{2,l}))^2, where D_{1,l} (D_{2,l}) is the l-th submatrix of the k × n matrix in the first (second) factor of the pair (A, B) ∈ St_{k,n}(ℝ) × St_{k,n}(ℝ)). In order to extend this proof to the general case, we define a richer set of coordinate charts.
Consider an action of Gl(n, ℝ)[1] on Gr_{k,n}(ℝ); that is, for every g ∈ Gl(n, ℝ), a map σ_g : Gr_{k,n}(ℝ) → Gr_{k,n}(ℝ) defined by σ_g([A]) = [Ag]. It is easily seen that σ_g is a well defined and continuous map. Moreover, Gl(n, ℝ) acts transitively[2] on Gr_{k,n}(ℝ); that is, for every pair [A] and [B] there exists a g ∈ Gl(n, ℝ) such that σ_g([A]) = [B].[3]

Consider now the chart (U_1, φ_1) defined above. From this we can define a new coordinate neighborhood σ_g(U_1) and a new coordinate function φ_g := φ_1 ∘ σ_g^{-1} on σ_g(U_1), for any g ∈ Gl(n, ℝ). In particular, the coordinate neighborhoods U_l and coordinate maps φ_l described above can be obtained as a special case of this construction, by choosing for g a matrix which moves the first k columns to the positions identified by the l-th selection of columns. In general, we denote by U_g and φ_g the neighborhood and coordinate map corresponding to a given g (U_1 and φ_1 correspond to the identity matrix). If [A_1] and [A_2] belong to the same U_g, we can repeat the argument made above for U_l and show that they have disjoint neighborhoods. Therefore the result is proven if we show that such a g always exists. Summarizing, we need to prove that for any pair [A_1], [A_2] in Gr_{k,n}(ℝ), there exists a g ∈ Gl(n, ℝ) such that [A_1] ∈ U_g = σ_g(U_1) and [A_2] ∈ U_g = σ_g(U_1).

We first give an alternative description of U_g, for a given g (Step 1 below), and then (Step 2 below) we use this characterization to show that there exists a g so that [A_1] and [A_2] are both in U_g.
Step 1: Define the (n - k) × n matrix W_0 := [0_{n-k,k}  1_{n-k,n-k}], where 1_{n-k,n-k} is the (n - k) × (n - k) identity. Given g ∈ Gl(n, ℝ),

    [A] ∈ U_g  ⇔  span(A) ∩ span(W_0 g) = {0},    (1.1.2)

where span(A) denotes the span of the rows of A, that is, the row space of A.

Proof. Assume [A] ∈ U_g and, by contradiction, that there exists a nonzero vector in span(A) ∩ span(W_0 g); then there exist a nonzero k-vector v_k and a nonzero (n - k)-vector w_{n-k} such that

    v_k^T A = w_{n-k}^T W_0 g,

or, equivalently,

    v_k^T A g^{-1} = w_{n-k}^T W_0.

The first k entries of the right hand side are equal to zero. Moreover, since [A g^{-1}] ∈ U_1, we have A g^{-1} = [K M] with the k × k matrix K nonsingular. This implies, since v_k^T K = 0, that
[1] The notation Gl(n, ℝ) stands for the 'general linear group' of n × n nonsingular matrices with entries in ℝ.
[2] We shall elaborate more on 'actions' and 'transitivity' when we talk about Lie transformation groups later in the course.
[3] To see this, take A and complete it with an (n - k) × n matrix Ã so that the n × n matrix [A; Ã] (A stacked on top of Ã) is nonsingular; analogously, take B and complete it with an (n - k) × n matrix B̃ so that the n × n matrix [B; B̃] is nonsingular. The matrix g = [A; Ã]^{-1} [B; B̃] is such that σ_g([A]) = [B].
v_k = 0, which is a contradiction.

To show the converse implication (⇐ in (1.1.2)), assume span(A) ∩ span(W_0 g) = {0} or, equivalently, span(A g^{-1}) ∩ span(W_0) = {0}; the matrix A g^{-1} has the form A g^{-1} = [K M]. The submatrix K is nonsingular: if it were singular, there would be a nonzero k-vector v_k such that v_k^T K = 0, which would still give v_k^T A g^{-1} ≠ 0, since A g^{-1} has rank k. The (row) vector v_k^T A g^{-1} ≠ 0 would therefore be of the form [0, 0, ..., 0, a_1, a_2, ..., a_{n-k}], hence a nonzero element of span(W_0), which contradicts the right hand side of (1.1.2). Since A g^{-1} = [K M] with K (k × k) nonsingular, [A g^{-1}] ∈ U_1, that is, [A] ∈ U_g.
Step 2: The key observation is the following separation property: given two k-dimensional subspaces V_1 and V_2 of ℝ^n, there exists an (n - k)-dimensional subspace W such that V_1 ∩ W = V_2 ∩ W = {0}. (Geometrically this is clear from examples in ℝ^2 or ℝ^3.) We construct such a subspace as the span of vectors not included in V_1 ∪ V_2. If V_1 = V_2, then we clearly can take a vector w ≠ 0 with w ∉ V_1 and w ∉ V_2. We can do this even if V_1 and V_2 are different. In fact, if V_1 and V_2 are different spaces with the same dimension, there exist a nonzero v_2 ∈ V_2 with v_2 ∉ V_1 and a nonzero v_1 ∈ V_1 with v_1 ∉ V_2. The vector v := a v_1 + b v_2, with a and b both different from zero, is not in V_1 ∪ V_2. Consider now the spaces V_1 ⊕ span(v) and V_2 ⊕ span(v). To these two spaces we can re-apply the same argument and find one vector which lies outside the union of the two spaces. Continuing this way, we find n - k vectors w_1 = v, w_2, ..., w_{n-k}. These vectors are linearly independent (because at each step the new vector is not a linear combination of the previous ones), and therefore form a basis of an (n - k)-dimensional vector space W whose intersection with V_1 and V_2, by construction, is {0}.

Given this fact, we are now ready to conclude the proof. Given [A_1] and [A_2], apply the above fact with V_1 = span(A_1) and V_2 = span(A_2) and find the corresponding W; then select g so that W = span(W_0 g). This is always possible, since we know that Gl(n, ℝ) acts transitively on Gr_{n-k,n}(ℝ). Since span(A_1) ∩ W = span(A_2) ∩ W = {0} with W = span(W_0 g), condition (1.1.2) applies to both A_1 and A_2. Therefore both [A_1] and [A_2] are in U_g, and this completes the proof. The geometric idea of the proof is illustrated in Figure 3.

The proof of the Hausdorff property mainly follows the argument of Prof. Schlichtkrull (University of Copenhagen), http://www.math.ku.dk/∼schlicht/Geom2/Grassmann.pdf, somewhat streamlined and adapted to our notation.
Remark 1.1.1: Notation

We shall use the Einstein summation convention: corresponding upper and lower indices in an expression indicate a sum, so that, for instance, X^i ω_i := Σ_i X^i ω_i, and X^{i_1 i_2 ... i_n k} ω_{i_1 i_2 ... i_n} := Σ_{i_1, i_2, ..., i_n} X^{i_1 i_2 ... i_n k} ω_{i_1 i_2 ... i_n}, which still depends on the free index k.
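NumPy's einsum notation mirrors this convention (though arrays do not distinguish upper from lower indices; index placement is tracked by hand). The two examples of the remark read, with small toy arrays of our own choosing:

```python
import numpy as np

# X^i ω_i: contraction over the repeated index i
X = np.array([1.0, 2.0, 3.0])
omega = np.array([0.5, -1.0, 2.0])
assert np.isclose(np.einsum('i,i->', X, omega), X @ omega)

# X^{i1 i2 k} ω_{i1 i2}: summing over i1, i2 leaves the free index k
T = np.arange(24.0).reshape(2, 3, 4)   # plays the role of X^{i1 i2 k}
w = np.ones((2, 3))                    # plays the role of ω_{i1 i2}
contracted = np.einsum('ijk,ij->k', T, w)
assert contracted.shape == (4,)
assert np.isclose(contracted[0], T[:, :, 0].sum())
```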
1.2 Maps between manifolds

We shall consider maps between manifolds, f : M_m → N_n (cf. Figure 4).
Definition 1.2.1: Smooth Map

Given a chart (U, φ) for M_m and a chart (V, ψ) for N_n such that f(U) ⊆ V, it is possible to define the map ψ ∘ f ∘ φ^{-1} : ℝ^m → ℝ^n, which is the (local) coordinate representation of f. The map f is called smooth at p ∈ M if ψ ∘ f ∘ φ^{-1} is smooth as a function ℝ^m → ℝ^n at φ(p) ∈ ℝ^m. This property is independent of the system of coordinates chosen on M or on N. For instance, consider two charts (U_1, φ_1), (U_2, φ_2) with p ∈ U_1 ∩ U_2. Then if ψ ∘ f ∘ φ_1^{-1} is smooth, so is

    ψ ∘ f ∘ φ_2^{-1} = (ψ ∘ f ∘ φ_1^{-1}) ∘ (φ_1 ∘ φ_2^{-1}),

by the compatibility condition. The same argument holds for coordinate charts (V_1, ψ_1), (V_2, ψ_2) on N such that f(p) ∈ V_1 ∩ V_2.
Figure 4: Maps between manifolds and their local representation
Definition 1.2.2: Curves and Functions

Another special case of a smooth map is an open curve c, which is a map from an open interval (a, b) ⊆ ℝ (with possibly a = -∞ and/or b = +∞) to a manifold N. It is assumed that the map is one-to-one, so that the image c((a, b)) ⊆ N has no self-intersections. Furthermore, it is often assumed that 0 ∈ (a, b). The manifold structure on (a, b) is the natural one inherited from ℝ, with only one coordinate chart ((a, b), ψ) and coordinate map ψ equal to the identity. In a coordinate neighborhood U of N, the image of c is mapped by a coordinate map φ to ℝ^n. The map φ ∘ c is a map from (a, b) to ℝ^n, sometimes referred to as the coordinate representation of c. A closed curve is a smooth, one-to-one map from the circle S^1 to a manifold N. Finally, a smooth function is a smooth map f : M → ℝ.
Smooth functions also have a local coordinate representation f ∘ φ^{-1} : ℝ^m → ℝ, where φ is a coordinate map. Open curves and functions on a manifold form the basic building blocks of many of the constructions in differential geometry. The space of smooth functions on a manifold M is denoted by F(M).
1.3 Exercises
Exercise 1.1: Consider the self-intersecting curve of part (a) of Figure 5 and the open square of part (b) of the same figure, with the subset topology derived from the standard topology on ℝ^2. Show that (a) cannot be given the structure of a C^∞ manifold (of dimension 1) and that (b) can be given the structure of a C^∞ manifold.
Figure 5: Self-intersecting curve (a) and open square (b)
Exercise 1.2: Prove that Gr_{k,n}(ℝ), as defined above, is second countable.
Exercise 1.3: Derive the atlas, i.e., the differentiable structure, for ℝP^n as a special case of that described for Gr_{k,n}; that is, describe the coordinate neighborhoods U_0, ..., U_n, the corresponding coordinate maps φ_0, φ_1, ..., φ_n, and the transition functions φ_k ∘ φ_j^{-1}, and show that the transition functions are C^∞.
Exercise 1.4: Reconsider the differentiable structure of the Grassmann manifold for Gr_{1,3}(ℝ) = ℝP^2. Consider g_1 and g_2 in Gl(3, ℝ), given row by row as

    g_1 := [[1, 1, 0], [-1, 1, 0], [0, 0, 1]],    g_2 := [[1, 1, 1], [0, 1, 0], [0, 0, 1]].

Describe U_{g_1} and U_{g_2} and calculate the transition function φ_{g_1} ∘ φ_{g_2}^{-1} : ℝ^2 → ℝ^2.
Exercise 1.5: Prove that a manifold is locally compact.
(Hint: for a chart (U, φ) containing the point p, find a closed ball in φ(U) containing φ(p).)
Exercise 1.6: Consider the map f : S^1 → ℝ which associates p ∈ S^1 with f(p) = 1 if p is to the right of the y-axis, f(p) = -1 if p is to the left of the y-axis, and f(p) = 0 if p is on the y-axis. Write the coordinate representations of this map (ℝ → ℝ) with respect to the atlas defined in subsection 1.1.1. Use these coordinate representations to show that this map is not smooth.
2 Tangent and cotangent spaces

2.1 Tangent vector and tangent spaces
In multivariable calculus we learned the definition of the tangent vector to a curve c in ℝ^3, x = x(t), y = y(t), z = z(t), at a point c(0) := (x(0), y(0), z(0)). This is given by

    v = ẋ(0) i + ẏ(0) j + ż(0) k.

If c is a curve whose image belongs to a surface M, we can think of the tangent vector v as a tangent vector to the surface (manifold) M (cf. Figure 6).

Figure 6: A tangent vector

If f = f(x, y, z) is a function f : ℝ^3 → ℝ, the tangent vector v determines a directional derivative for f at the point p = c(0), given by

    ∇f|_p · v = f_x(p) ẋ(0) + f_y(p) ẏ(0) + f_z(p) ż(0) = d/dt f(x(t), y(t), z(t))|_{t=0}.    (2.1.1)
We remark that, in order to be defined, the right hand side of (2.1.1) does not require that f be defined on all of ℝ^3, but only that f be defined (and smooth) on M, in fact in a neighborhood of p ∈ M.
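Identity (2.1.1) is easy to check numerically with finite differences. A small illustrative Python sketch (the particular f and c below are our own choices, not from the notes):

```python
import math

def f(x, y, z):
    # an arbitrary smooth function on R^3
    return x * x * y + math.sin(z)

def c(t):
    # a sample smooth curve with c(0) = (1, 1, 0)
    return (math.cos(t), t + 1.0, t * t)

h = 1e-6
x0, y0, z0 = c(0.0)

# Gradient of f at p = c(0) and velocity c'(0), by central differences
fx = (f(x0 + h, y0, z0) - f(x0 - h, y0, z0)) / (2 * h)
fy = (f(x0, y0 + h, z0) - f(x0, y0 - h, z0)) / (2 * h)
fz = (f(x0, y0, z0 + h) - f(x0, y0, z0 - h)) / (2 * h)
v = [(a - b) / (2 * h) for a, b in zip(c(h), c(-h))]

lhs = fx * v[0] + fy * v[1] + fz * v[2]        # grad f at p, dotted with v
rhs = (f(*c(h)) - f(*c(-h))) / (2 * h)         # d/dt f(c(t)) at t = 0

assert abs(lhs - rhs) < 1e-6
```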
Generalizing this picture to a general manifold M, a curve c : (-a, a) → M determines a tangent vector at p = c(0), in that, for every function f : M → ℝ, it gives a value in ℝ, namely d/dt (f ∘ c(t))|_{t=0}. Notice that there may be several curves which give the same value for d/dt (f ∘ c(t))|_{t=0}. Therefore,
Definition 2.1.1: Tangent vector

A tangent vector at p ∈ M is an equivalence class of curves such that c_1 ∼ c_2 if and only if

1. c_1(0) = c_2(0) = p;

2. d/dt (f ∘ c_1(t))|_{t=0} = d/dt (f ∘ c_2(t))|_{t=0} for every f ∈ F(M).
Now consider a coordinate chart (U, φ) with p ∈ U, and calculate, for a curve c,

    d/dt (f ∘ c(t))|_{t=0} = d/dt (f ∘ φ^{-1} ∘ φ ∘ c(t))|_{t=0} = ∂f/∂x^μ|_{φ(p)} · dx^μ/dt|_{t=0}.    (2.1.2)

Here we use (for the first time) the Einstein summation convention. Note that x^μ is the μ-th component of the coordinate map φ := (x^1, x^2, ..., x^m). Call X^μ := d/dt (x^μ ∘ c(t))|_{t=0}, which alone in (2.1.2) contains the information about the equivalence class of c. Then we can look at a tangent vector as a linear operator acting on F(M), written as

    X = X^μ ∂/∂x^μ,    (2.1.3)

where ∂/∂x^μ is the operator F(M) → ℝ,

    ∂/∂x^μ f := ∂/∂x^μ (f ∘ φ^{-1})|_{φ(p)}.    (2.1.4)

Notice that there is some ambiguity in the notation ∂/∂x^μ used in (2.1.4). This is a usual issue in differential geometry, and we shall keep the ambiguity, as the meaning of the notation should be clear from the context. On the left hand side, ∂/∂x^μ denotes a linear operator F(M) → ℝ (here the reference to the point p is sometimes omitted). This operator is defined as on the right hand side, where now ∂/∂x^μ is the standard partial derivative of multivariable calculus.
Definition 2.1.2: Tangent Space

From formula (2.1.3), all tangent vectors can be written as linear combinations of the operators ∂/∂x^μ, μ = 1, 2, ..., m, which shows that the space of tangent vectors at p ∈ M is a vector space. It is called the tangent space at p and is denoted by T_p M.
The basis {∂/∂x^µ}|_p of T_p M of course depends on the coordinate map φ in the coordinate chart (U, φ) considered. If two overlapping charts (U, φ) and (V, ψ) are given, with p ∈ U ∩ V, and we denote by x^µ (resp. y^ν) the coordinates associated with φ (resp. ψ), then the same tangent vector X can be expanded in terms of {∂/∂x^µ} or in terms of {∂/∂y^ν}, that is,

$$X = X^\mu \frac{\partial}{\partial x^\mu} = Y^\nu \frac{\partial}{\partial y^\nu}. \tag{2.1.5}$$
To understand the relation between the components X^µ and Y^ν, we apply X to the coordinate function x^k, which gives the k-th component of φ. Using (2.1.5), we have

$$X(x^k) = X^\mu \frac{\partial}{\partial x^\mu}(x^k) = Y^\nu \frac{\partial}{\partial y^\nu}(x^k). \tag{2.1.6}$$

Since ∂/∂x^µ (x^k) = δ^k_µ, where δ^k_µ is the Kronecker delta, we have

$$X(x^k) = X^\mu \delta^k_\mu = X^k,$$

and, using this in (2.1.6), we obtain the relation between the components

$$X^k = Y^\mu \frac{\partial x^k}{\partial y^\mu}. \tag{2.1.7}$$
Definition 2.1.3: Jacobian
Recall that ∂x^k/∂y^µ is the partial derivative (in the usual sense of multivariable calculus) with respect to y^µ of the k-th component of the function φ ∘ ψ^{-1}, calculated at ψ(p) (cf. Figure 1). The matrix ∂x^k/∂y^µ |_{ψ(p)} is called the Jacobian associated with the given change of coordinates at p.
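As a concrete sketch (our own illustration, not part of the notes), the change-of-components rule (2.1.7) can be checked with Python's sympy. Here polar coordinates (r, θ) on ℝ² play the role of the y coordinates, Cartesian coordinates the role of the x coordinates, and the components Y^µ are chosen arbitrarily:

```python
import sympy as sp

# Cartesian coordinates x^k as functions of polar coordinates y^mu = (r, theta),
# i.e., the components of the transition map phi o psi^{-1}.
r, th = sp.symbols('r theta', positive=True)
x1 = r * sp.cos(th)
x2 = r * sp.sin(th)

# Jacobian matrix with entries dx^k / dy^mu
J = sp.Matrix([[sp.diff(x1, r), sp.diff(x1, th)],
               [sp.diff(x2, r), sp.diff(x2, th)]])

# Components Y^mu of a tangent vector in the polar basis (arbitrary choice)
Y = sp.Matrix([2, 3])

# Cartesian components via X^k = (dx^k/dy^mu) Y^mu, formula (2.1.7)
X = sp.simplify(J * Y)
print(X)
```

At a given point (r, θ), substituting numerical values in `X` gives the components of the same tangent vector in the Cartesian basis.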
Definition 2.1.4: Derivation
An equivalent definition of a tangent vector is that of a derivation, that is, a linear operator V_p : F(M) → ℝ satisfying the Leibniz condition

$$V_p(fg) = V_p(f)\,g(p) + f(p)\,V_p(g). \tag{2.1.8}$$

In the definition (2.1.8), V_p is assumed to give the same value on each function f which belongs to the same germ at p. A germ is a subset of F(M) of functions which are equal in a neighborhood of p. Belonging to the same germ is an equivalence relation '∼', and V_p can be seen as acting on equivalence classes of functions that have the same values in a neighborhood of p ∈ M. A tangent vector V_p is a derivation defined on elements of the same germ, that is, it can be seen as acting on F(M)/∼.
The definition in terms of derivations is equivalent to the one in terms of equivalence classes of curves; that is, there exists a one-to-one and onto correspondence between equivalence classes of curves and derivations defined on F(M)/∼. The proof (not difficult) can be found in Spivak, pg. 79 ff. The lengthier part is showing that, given a derivation V_p, it is possible to find the corresponding equivalence class of curves. The crucial step is to show that every derivation V_p can be written as

$$V_p = X^j \frac{\partial}{\partial x^j}\Big|_p, \tag{2.1.9}$$

for some coefficients X^j (which are necessarily given by X^j = V_p(x^j)). Then one chooses an (equivalence class of a) curve c such that d/dt [x^j ∘ c(t)]|_{t=0} = X^j, and this gives the curve corresponding to the derivation V_p.⁴

⁴ Just choose a curve in ℝ^m whose tangent vector has components X^j and then map it back to the manifold M via the inverse coordinate map φ^{-1}.
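As a quick sanity check (an illustrative sketch with a curve and functions of our own choosing, not part of the notes), one can verify symbolically that the operator f ↦ d/dt (f ∘ c)|_{t=0} defined by a curve is indeed a derivation, i.e., satisfies the Leibniz condition (2.1.8):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

# A curve c in R^2 through p = c(0) = (1, 0)
c = (sp.cos(t), sp.sin(t))

def Vp(f):
    """Derivation defined by the curve: V_p(f) = d/dt (f o c)|_{t=0}."""
    return sp.diff(f.subs({x: c[0], y: c[1]}), t).subs(t, 0)

f = x**2 + y
g = sp.exp(x) * y + x
p = {x: 1, y: 0}

# Leibniz rule: V_p(fg) = V_p(f) g(p) + f(p) V_p(g)
lhs = Vp(f * g)
rhs = Vp(f) * g.subs(p) + f.subs(p) * Vp(g)
assert sp.simplify(lhs - rhs) == 0
```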
2.2 Co-tangent vectors and co-tangent space

Definition 2.2.1: Cotangent Vector
Like any vector space, T_p M has a dual space T*_p M, defined as the space of linear maps ω : T_p M → ℝ. An element ω ∈ T*_p M is called a cotangent vector or a one-form.
Definition 2.2.2: Differential
A simple but important example of a one-form is the differential of a function f ∈ F(M) at p, denoted df|_p. It is defined as

$$df|_p(V_p) := V_p(f), \tag{2.2.1}$$

for any V_p ∈ T_p M.
Notice that, if f is a function on ℝ^3, df coincides with our calculus definition of the differential, df := f_x dx + f_y dy + f_z dz. Indeed, df(V_p), where V_p = v_x i⃗ + v_y j⃗ + v_z k⃗, gives f_x v_x + f_y v_y + f_z v_z, which is exactly V_p(f). This analogy will become clearer when expressing one-forms in terms of coordinates.
Recall that, in general, the dual of a vector space has the same dimension as the original space, and if {e⃗_1, e⃗_2, ..., e⃗_m} is a basis of the original space, the dual basis in the dual vector space has the form {f⃗^1, f⃗^2, ..., f⃗^m}, with f⃗^j(e⃗_k) = δ^j_k. Now consider a coordinate chart (U, φ), with coordinates x^1, x^2, ..., x^m, and the associated basis of T_p M,

$$\frac{\partial}{\partial x^1}, \ldots, \frac{\partial}{\partial x^m}. \tag{2.2.2}$$

Let dx^j denote the differential of the coordinate function x^j as defined above. If we calculate, using the definition of the differential of a function,

$$dx^j\left(\frac{\partial}{\partial x^k}\right) := \frac{\partial x^j}{\partial x^k} = \delta^j_k, \tag{2.2.3}$$

we find that {dx^1, ..., dx^m} is the dual basis of the basis (2.2.2) of T_p M. Every element ω ∈ T*_p M can be written as

$$\omega := \omega_\mu \, dx^\mu, \tag{2.2.4}$$
and, in general, if ω is written as in (2.2.4) and V ∈ T_p M is given by V = V^j ∂/∂x^j, we have

$$\omega(V) = \omega_\mu \, dx^\mu\left(V^j \frac{\partial}{\partial x^j}\right) = \omega_\mu V^j \, dx^\mu\left(\frac{\partial}{\partial x^j}\right) = \omega_\mu V^j \delta^\mu_j = \omega_\mu V^\mu, \tag{2.2.5}$$
which can be interpreted as a product between elements of T*_p M and elements of T_p M. If there are two overlapping coordinate systems x and y, a one-form can be written as ω = ω_i dx^i or ω = ω̃_j dy^j. To see the relation between the ω_i and the ω̃_j, fix an index k and apply ω to ∂/∂y^k. We obtain

$$\omega\left(\frac{\partial}{\partial y^k}\right) = \omega_i \, dx^i\left(\frac{\partial}{\partial y^k}\right) = \omega_i \frac{\partial x^i}{\partial y^k} \tag{2.2.6}$$

and

$$\omega\left(\frac{\partial}{\partial y^k}\right) = \tilde{\omega}_j \, dy^j\left(\frac{\partial}{\partial y^k}\right) = \tilde{\omega}_j \frac{\partial y^j}{\partial y^k} = \tilde{\omega}_j \delta^j_k = \tilde{\omega}_k.$$

Therefore, the Jacobian of the transition map gives the desired transformation between the expression of a one-form in one coordinate system and the other, that is,

$$\tilde{\omega}_k = \omega_i \frac{\partial x^i}{\partial y^k}. \tag{2.2.7}$$
Remark 2.2.1: Equation 2.2.7
In order to remember formula (2.2.7), one may start from the equality ω̃_k dy^k = ω_i dx^i, divide 'formally' by dy^k, and then change the d into ∂. Notice also how the Jacobian of the transition function φ ∘ ψ^{-1}, J = ∂x^i/∂y^k in (2.2.7), is multiplied: if one thinks of ω̃ and ω as rows, equation (2.2.7) reads ω̃ = ωJ. Analogously, formula (2.1.7) can be remembered by using X^k ∂/∂x^k = Y^µ ∂/∂y^µ, neglecting the ∂ signs on top and 'formally' multiplying by ∂x^k. If we think of X and Y as column vectors and J := ∂x^k/∂y^µ as a matrix, the relation reads X = JY; consistently with this, we get ω̃Y = ωJ(J^{-1}X) = ωX, independently of the coordinates.
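This bookkeeping can be checked with sympy (an illustrative sketch with polar coordinates on ℝ², not from the notes): with J the Jacobian ∂x^i/∂y^k, components transform as X = JY and ω̃ = ωJ, and the pairing ω(X) is coordinate independent.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# Cartesian coordinates x^i as functions of polar coordinates y^k = (r, theta)
x1, x2 = r*sp.cos(th), r*sp.sin(th)
J = sp.Matrix([[sp.diff(x1, r), sp.diff(x1, th)],
               [sp.diff(x2, r), sp.diff(x2, th)]])   # entries dx^i/dy^k

Y = sp.Matrix([1, 2])          # vector components in the polar basis (column)
omega = sp.Matrix([[3, 5]])    # one-form components in the Cartesian basis (row)

X = J * Y                      # vector components in the Cartesian basis
omega_tilde = omega * J        # one-form components in the polar basis

# The pairing is the same in both coordinate systems
pairing_cart = (omega * X)[0]
pairing_polar = (omega_tilde * Y)[0]
assert sp.simplify(pairing_cart - pairing_polar) == 0
```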
2.3 Induced maps: Push-forward

Definition 2.3.1: Push-forward
Let f : M → N be a smooth map from a manifold M to a manifold N (here N might be equal to M). It induces a map f∗ : T_p M → T_{f(p)} N, called the push-forward of f, defined for any function g ∈ F(N) and vector V ∈ T_p M by

$$(f_* V)(g) = V(g \circ f). \tag{2.3.1}$$

Notice that, in order not to confuse this concept with the differential of a function defined when discussing one-forms, the term 'push-forward' is used instead of 'differential'. The term is justified by the fact that the map f∗ maps elements of T_p M 'forward' to elements of T_{f(p)} N.
The definition (2.3.1) is given in terms of the 'derivation' definition of a tangent vector, and it is straightforward to verify that if V is a derivation at p ∈ M, then f∗V defined in (2.3.1) is a derivation at f(p) ∈ N. Moreover, if we use the definition in terms of equivalence classes of curves and c is a curve corresponding to V, that is, for h ∈ F(M), V(h) = d/dt (h ∘ c)|_{t=0}, then we have, for g ∈ F(N),

$$f_* V(g) := V(g \circ f) = \frac{d}{dt}(g \circ f \circ c)\Big|_{t=0}. \tag{2.3.2}$$

Therefore, f ∘ c is the curve associated with f∗V (cf. Figure 7).
Remark 2.3.1: Linearity of Push-forward
Since T_p M is a real vector space, we can also see that the push-forward of a smooth map f is linear. Take V_1, V_2 in T_p M and real numbers a, b. Remembering that these vectors are derivations, which are linear operators, we have f∗(aV_1 + bV_2)(g) = (aV_1 + bV_2)(g ∘ f) = aV_1(g ∘ f) + bV_2(g ∘ f) = a f∗(V_1)(g) + b f∗(V_2)(g).
Remark 2.3.2: Push-forward Under Inclusion
The notion of push-forward allows us to formalize some familiar concepts. Consider a surface S in ℝ^3. We typically describe a tangent vector geometrically as a vector tangent to S at p ∈ S. However, what we really 'see' is a vector in ℝ^3. In fact, if V is the tangent vector with corresponding curve c on S, what we really see is the push-forward of V under the inclusion map i : S → ℝ^3, i.e., i∗V. Indeed, if g ∈ F(ℝ^3), then i∗V(g) = d/dt (g ∘ i ∘ c)|_{t=0} = ∇g · r⃗, where r⃗ = d(i ∘ c)/dt |_{t=0}.
Remark 2.3.3: Tangent Vector Corresponding to a Curve
So far we have always referred to the tangent vector corresponding to a curve c simply as the 'tangent vector corresponding to c'. However, using the concept of push-forward, we can introduce a notation for this tangent vector. Consider a smooth map c : ℝ → M, with c(0) = p, and consider d/dt|_{t=0}, the elementary derivation on ℝ. For any function g on M,

$$\left(c_* \frac{d}{dt}\Big|_{t=0}\right) g := \frac{d}{dt}(g \circ c)\Big|_{t=0}.$$

Therefore, c∗ d/dt|_{t=0} is the tangent vector (derivation) corresponding to c.
Figure 7: Curve associated to the push-forward
We now investigate how to express the push-forward in terms of coordinates. Let x denote a coordinate system at p ∈ M and y a coordinate system at f(p) ∈ N. Then we can write V = V^µ ∂/∂x^µ and f∗V = Y^ν ∂/∂y^ν. To discover the relation between the Y^ν coefficients and the V^µ coefficients, we recall that Y^k = f∗V(y^k). Therefore, using the definition of f∗V, we get

$$Y^k = (f_* V)(y^k) = V(y^k \circ f) = V^\mu \frac{\partial (y^k \circ f)}{\partial x^\mu}. \tag{2.3.3}$$

Recall that ∂/∂x^µ (g) is really the partial derivative with respect to x^µ of the function g ∘ φ^{-1} : ℝ^m → ℝ. Therefore, J := ∂(y^ν ∘ f)/∂x^µ is really the {ν, µ} entry of the Jacobian associated with the map ψ ∘ f ∘ φ^{-1} : ℝ^m → ℝ^n (cf. Figure 4). Also notice that, thinking of Y and V as column vectors, (2.3.3) can be written as

$$Y = JV. \tag{2.3.4}$$
2.3.1 Computational Example

We now provide a simple computational example to illustrate some of the concepts introduced above. Consider the two-dimensional sphere described in subsection 1.1 and the curve c : (−1, 1) ⊂ ℝ → S^2 described by

$$x^0 = \sqrt{1-t^2}\,\cos(10\pi t), \qquad x^1 = \sqrt{1-t^2}\,\sin(10\pi t), \qquad x^2 = t.$$

The curve is a spiral that wraps around the sphere. At time t = 0, c(0) = (1, 0, 0). Let V_p := c∗ d/dt|_{t=0} identify a tangent vector on S^2 at the point (1, 0, 0). We do the following:

1. Express this tangent vector in terms of the coordinates associated with the chart (U_0, φ_{0+}).

2. Express this tangent vector in terms of the spherical coordinates (θ, φ) ∈ (−π, π) × (0, π), which are (recall)

$$x^0 = \sin(\phi)\cos(\theta), \qquad x^1 = \sin(\phi)\sin(\theta), \qquad x^2 = \cos(\phi). \tag{2.3.5}$$

3. Calculate i∗V_p, where i is the standard inclusion map S^2 → ℝ^3, expressing it in the basis ∂/∂x, ∂/∂y, ∂/∂z of ℝ^3 at (1, 0, 0).

4. Calculate f∗V_p, where f is the smooth map on S^2 with f(x^0, x^1, x^2) = (x^0 − x^1, x^1 − x^2).
As for 1), we use the coordinates (x^1, x^2) and the corresponding basis {∂/∂x^1, ∂/∂x^2} in the tangent space, so that the tangent vector can be written as

$$V_p = X^1 \frac{\partial}{\partial x^1} + X^2 \frac{\partial}{\partial x^2}.$$

Recall that X^1 (X^2) is the result of applying the tangent vector to the corresponding coordinate function. If we do that for x^1, we get

$$X^1 = V_p x^1 := \frac{d}{dt}\Big|_{t=0}\left(x^1 \circ c(t)\right) = \frac{d}{dt}\Big|_{t=0}\left(\sqrt{1-t^2}\,\sin(10\pi t)\right) = 10\pi.$$

For the same reason, we get X^2 = 1, so that

$$V_p = 10\pi \frac{\partial}{\partial x^1} + \frac{\partial}{\partial x^2}. \tag{2.3.6}$$
To do 2), we now want to express V_p in terms of {∂/∂θ, ∂/∂φ}. We know that this change of coordinates involves the Jacobian as in (2.1.7). However, instead of remembering and applying the formula per se, it is more common among differential geometers to re-enact the calculation that led to the formula. The transition functions (θ, φ) → (x^1, x^2) are the last two equations of (2.3.5). Calculating

$$X^1 = V_p x^1(\theta, \phi) = Y^1 \frac{\partial}{\partial \theta} x^1(\theta, \phi) + Y^2 \frac{\partial}{\partial \phi} x^1(\theta, \phi),$$

we get

$$10\pi = Y^1 (\sin(\phi)\cos(\theta))\big|_{\phi=\frac{\pi}{2},\,\theta=0} + Y^2 (\cos(\phi)\sin(\theta))\big|_{\phi=\frac{\pi}{2},\,\theta=0} = Y^1.$$

Analogously, calculating

$$X^2 = V_p x^2(\theta, \phi) = Y^1 \frac{\partial}{\partial \theta} x^2(\theta, \phi) + Y^2 \frac{\partial}{\partial \phi} x^2(\theta, \phi),$$

we get

$$1 = Y^1 (0) + Y^2 (-\sin(\phi))\big|_{\phi=\frac{\pi}{2},\,\theta=0} = -Y^2.$$

So we have

$$V_p = 10\pi \frac{\partial}{\partial \theta} - \frac{\partial}{\partial \phi}.$$
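The computations in 1) and 2) can be double-checked with sympy (a verification sketch; the variable names are ours):

```python
import sympy as sp

t, th, ph = sp.symbols('t theta phi')

# Chart-coordinate components of the curve c on S^2
x1 = sp.sqrt(1 - t**2) * sp.sin(10*sp.pi*t)
x2 = t

# Part 1: components of V_p in the chart coordinates (x^1, x^2)
X1 = sp.diff(x1, t).subs(t, 0)   # V_p x^1 = 10*pi
X2 = sp.diff(x2, t).subs(t, 0)   # V_p x^2 = 1

# Part 2: solve X^k = Y^1 dx^k/dtheta + Y^2 dx^k/dphi at theta = 0, phi = pi/2
Y1, Y2 = sp.symbols('Y1 Y2')
xs = {1: sp.sin(ph)*sp.sin(th), 2: sp.cos(ph)}   # transition functions (2.3.5)
pt = {th: 0, ph: sp.pi/2}
eqs = [sp.Eq(Xk, Y1*sp.diff(xs[k], th).subs(pt) + Y2*sp.diff(xs[k], ph).subs(pt))
       for k, Xk in [(1, X1), (2, X2)]]
sol = sp.solve(eqs, [Y1, Y2])
print(sol)   # Y1 = 10*pi, Y2 = -1
```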
To do 3), we write i∗V_p = X̃^1 ∂/∂x + X̃^2 ∂/∂y + X̃^3 ∂/∂z (we have switched to the notation (x, y, z) instead of (x^0, x^1, x^2) to emphasize the distinction between coordinates on ℝ^3 and coordinates for S^2). We have

$$\tilde{X}^1 = (i_* V_p)(x) := V_p(x \circ i) := \frac{d}{dt}\Big|_{t=0}(x \circ i \circ c(t)) = \frac{d}{dt}\Big|_{t=0}\left(\sqrt{1-t^2}\,\cos(10\pi t)\right) = 0,$$

and analogously we get X̃^2 = 10π and X̃^3 = d/dt (t)|_{t=0} = 1, so that

$$i_* V_p = 10\pi \frac{\partial}{\partial y} + \frac{\partial}{\partial z}.$$
Finally, for 4), we can calculate f∗V_p as described in part 4), with coordinates x and y on ℝ^2. Writing f∗V_p := Ỹ^1 ∂/∂x + Ỹ^2 ∂/∂y, we know that Ỹ^1 = f∗V_p(x) = V_p(x ∘ f) and Ỹ^2 = f∗V_p(y) = V_p(y ∘ f). Moreover, in the coordinates (x, y), f ∘ c(t) is

$$f \circ c(t) := \begin{pmatrix} x^0(t) - x^1(t) \\ x^1(t) - x^2(t) \end{pmatrix} = \begin{pmatrix} \sqrt{1-t^2}\,(\cos(10\pi t) - \sin(10\pi t)) \\ \sqrt{1-t^2}\,\sin(10\pi t) - t \end{pmatrix}.$$

Therefore,

$$\tilde{Y}^1 = V_p(x \circ f) := \frac{d}{dt}\Big|_{t=0}(x \circ f \circ c(t)) = \frac{d}{dt}\Big|_{t=0}\left(\sqrt{1-t^2}\,(\cos(10\pi t) - \sin(10\pi t))\right) = -10\pi.$$

Analogously,

$$\tilde{Y}^2 = f_* V_p(y) = V_p(y \circ f) = \frac{d}{dt}\Big|_{t=0}\left(\sqrt{1-t^2}\,\sin(10\pi t) - t\right) = 10\pi - 1.$$

So we get

$$f_* V_p = -10\pi \frac{\partial}{\partial x} + (10\pi - 1)\frac{\partial}{\partial y}. \tag{2.3.7}$$
We could also have used formula (2.3.4). Here, with ψ the coordinate map on ℝ^2 (which is the identity map) and φ_{0+} the coordinate map on S^2, we have

$$\psi \circ f \circ \phi_{0+}^{-1} = \begin{pmatrix} \sqrt{1-(x^1)^2-(x^2)^2} - x^1 \\ x^1 - x^2 \end{pmatrix},$$

so that the Jacobian matrix is

$$J(x^1, x^2) = \begin{pmatrix} \dfrac{-x^1}{\sqrt{1-(x^1)^2-(x^2)^2}} - 1 & \dfrac{-x^2}{\sqrt{1-(x^1)^2-(x^2)^2}} \\ 1 & -1 \end{pmatrix},$$

which at the point x^1 = 0, x^2 = 0 gives

$$J(0, 0) = \begin{pmatrix} -1 & 0 \\ 1 & -1 \end{pmatrix}.$$

Multiplying this by the components of V_p in (2.3.6), i.e., (10π, 1)^T, gives the components in (2.3.7), that is, (−10π, 10π − 1)^T.
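The Jacobian of the coordinate representation of f can be reproduced symbolically (a sympy sketch; u and v stand for the chart coordinates x^1, x^2):

```python
import sympy as sp

u, v = sp.symbols('u v')   # chart coordinates (x^1, x^2) on S^2

# Coordinate representation psi o f o phi^{-1} of f(x^0,x^1,x^2) = (x^0-x^1, x^1-x^2)
F = sp.Matrix([sp.sqrt(1 - u**2 - v**2) - u,
               u - v])

J = F.jacobian([u, v])
J0 = J.subs({u: 0, v: 0})
print(J0)   # Matrix([[-1, 0], [1, -1]])
```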
2.4 Induced maps: Pull-back

Definition 2.4.1: Pull-back
Given f : M → N smooth, we can also define a map f* : T*_{f(p)} N → T*_p M, which is called the pull-back of f (as it goes in the opposite direction of f), defined as the dual map of f∗; that is, for ω ∈ T*_{f(p)} N,

$$f^* \omega(V) = \omega(f_* V), \tag{2.4.1}$$

for any V ∈ T_p M.^a

^a In general, let V and V* be a vector space and its dual, respectively, and W and W* another vector space and its dual, and let A : V → W be a linear map. The dual map A* is defined as the map A* : W* → V* such that, for every w* ∈ W* and v ∈ V, w*(Av) = (A* w*)(v). Definition (2.4.1) is a special case with A = f∗, v = V, A* = f*, w* = ω, and the vector spaces identified accordingly.
To see how f* transforms ω in local coordinates, let us choose coordinates x in a neighborhood of p ∈ M and coordinates y in a neighborhood of f(p) ∈ N. Write ω := ω_i dy^i and f*ω := λ_j dx^j. We know that, for fixed k, λ_k is the result of applying f*ω to ∂/∂x^k. Therefore we have

$$\lambda_k = f^*\omega\left(\frac{\partial}{\partial x^k}\right) = \omega_i \, dy^i\left(f_* \frac{\partial}{\partial x^k}\right) = \omega_i \frac{\partial}{\partial x^k}(y^i \circ f), \tag{2.4.2}$$

which gives the transformation rule among the coefficients in terms of the Jacobian of f. Once again, it is convenient to think of ω and λ as rows: with the Jacobian J := ∂(y^i ∘ f)/∂x^k on the right-hand side of (2.4.2), the formula simply says λ = ωJ.
Remark 2.4.1: Linearity of Pull-back
Linearity of the pull-back holds similarly, using the fact that cotangent vectors are linear functionals. We will use this fact in the next example.
2.4.1 Computational Example

(Example 2.3.1 continued) Reconsider Example 2.3.1 and the map f : S^2 → ℝ^2, with f(x^0, x^1, x^2) = (x^0 − x^1, x^1 − x^2), at the point p = (1, 0, 0) ∈ S^2, f(p) = (1, 0). Consider the one-form ω := dx + dy at the point (1, 0). We want to:

1. Calculate f*ω in the (x^1, x^2) coordinates.

2. Calculate f*ω in the (θ, φ) coordinates.

3. Calculate f*ω(V_p), where V_p is given by (2.3.6).
To do 1), we write f*(dx + dy) = ω_1 dx^1 + ω_2 dx^2. We have

$$\omega_1 = f^*\omega\left(\frac{\partial}{\partial x^1}\right) = f^* dx\left(\frac{\partial}{\partial x^1}\right) + f^* dy\left(\frac{\partial}{\partial x^1}\right) = dx\left(f_* \frac{\partial}{\partial x^1}\right) + dy\left(f_* \frac{\partial}{\partial x^1}\right)$$

$$= f_* \frac{\partial}{\partial x^1}(x) + f_* \frac{\partial}{\partial x^1}(y) = \frac{\partial}{\partial x^1}(x \circ f) + \frac{\partial}{\partial x^1}(y \circ f)$$

$$= \frac{\partial}{\partial x^1}\left(\sqrt{1-(x^1)^2-(x^2)^2} - x^1\right) + \frac{\partial}{\partial x^1}(x^1 - x^2) = \frac{-x^1}{\sqrt{1-(x^1)^2-(x^2)^2}} - 1 + 1 = 0,$$

where in the last step we used the fact that we are at the point corresponding to x^1 = 0, x^2 = 0. Analogously, we get

$$\omega_2 = (f^*(dx + dy))\left(\frac{\partial}{\partial x^2}\right) = -1.$$

Therefore, f*ω = −dx^2.
To write f*ω in the spherical coordinates (θ, φ), we use f*ω = −dx^2 and (2.3.5), which gives

$$dx^2 = -\sin(\phi)\big|_{\theta=0,\,\phi=\frac{\pi}{2}}\, d\phi = -d\phi,$$

so that

$$f^*\omega = d\phi,$$

where in the last step we used φ = π/2. Finally, for 3), from the expression of V_p in (2.3.6), we get

$$(f^*\omega)(V_p) = -dx^2\left(10\pi \frac{\partial}{\partial x^1} + \frac{\partial}{\partial x^2}\right) = -1.$$
2.5 Inverse functions theorem; Submanifolds

The push-forward f∗ is often used to test whether f : M → N is locally a diffeomorphism.

Theorem 2.5.1: Inverse Functions Theorem
A smooth map f : M → N is a diffeomorphism from a neighborhood of p ∈ M to a neighborhood of f(p) ∈ N if and only if f∗ has full rank m (= dimension of M) and is surjective, i.e., it is an isomorphism of the vector spaces T_p M and T_{f(p)} N.

Since we have already discussed the linearity of the push-forward, showing that it is a bijection suffices for the forward direction; specifically, f∗^{-1} = (f^{-1})∗. For the reverse direction, we consider the coordinate representation ψ ∘ f ∘ φ^{-1}, with (U, φ) and (V, ψ) coordinate charts at p and f(p), respectively, and apply the inverse function theorem from calculus to get the desired result. For details see Spivak, pg. 79.
Definition 2.5.1: Immersions
Let f : M → N be a smooth map, with dim M ≤ dim N. f is an immersion if, at every point p ∈ M, f∗ is an injection, that is, rank(f∗) = dim(M). Consider now an immersion f which is also injective. The image f(M) can be made a manifold by 'borrowing' the topological and differentiable structure of M; that is, open sets in f(M) are the images of open sets in M, and charts on f(M) can be defined using the charts of M. The manifold f(M) is diffeomorphic to M, and it is called an immersed submanifold of N.

With this differentiable structure, the coordinate neighborhoods of f(M) do not necessarily coincide with U ∩ f(M), where U is a coordinate neighborhood of N. Figure 8 b) illustrates this situation. If the point p is the image of 0 ∈ ℝ, we can take a coordinate neighborhood V of 0 which is mapped to a coordinate neighborhood f(V) of p. However, this does not coincide with f(M) ∩ U, where U is a coordinate neighborhood of ℝ^2 containing p, no matter how we choose U.
Definition 2.5.2: Embeddings
If the injective immersion f has the additional property that the images f(V) of coordinate neighborhoods V in M are equal to f(M) ∩ U for some coordinate neighborhood U of N (subspace topology), then f is called an embedding and f(M) an embedded submanifold, or simply a submanifold.

Often f is taken to be the inclusion map, so that, for example, S^1 is a submanifold of ℝ^2, etc. Figure 8 shows examples of immersed and embedded submanifolds.
Figure 8: The image of an immersion a); An immersed submanifold b); An embedded submanifold
c)
2.6 Exercises

Exercise 2.1 Show that two diffeomorphic manifolds have the same dimension.

Exercise 2.2 The tangent space at p, T_p M, is a subspace of the space of linear operators on F(M). Show that it has dimension m = dim(M) and that the ∂/∂x^µ, µ = 1, 2, ..., m, form a basis for T_p M.

Exercise 2.3 Consider the sphere S^2 as defined in subsection 1.1, with the given atlas, and the point p ≡ (1/√3, 1/√3, 1/√3) ∈ S^2. Find two overlapping coordinate charts (U, φ), (V, ψ), with p ∈ U ∩ V, and the Jacobian giving the change of coordinates from the φ coordinates to the ψ coordinates.

Exercise 2.4 Consider the Grassmannian Gr_{2,3}(ℝ) as described in subsection 1.1, with the given atlas, and the point

$$p \equiv \begin{pmatrix} 2 & 0 & 2 \\ 0 & 1 & -1 \end{pmatrix}.$$

Find two overlapping coordinate charts (U, φ), (V, ψ), with p ∈ U ∩ V, and the Jacobian giving the change of coordinates from the φ coordinates to the ψ coordinates.
3 Tensors and Tensor Fields

3.1 Tensors

Definition 3.1.1: Tensor
Given vector spaces V_1, ..., V_n over ℝ, we consider the vector space generated by elements of V_1 × V_2 × ··· × V_n, and denote it by V_1 ⊗ V_2 ⊗ ··· ⊗ V_n. A tensor T on V_1 ⊗ V_2 ⊗ ··· ⊗ V_n is a multilinear map (linear in each variable separately) T : V_1 ⊗ V_2 ⊗ ··· ⊗ V_n → ℝ. If T_1 is a tensor on V_1 and T_2 is a tensor on V_2, we can define the product tensor T_1 ⊗ T_2 as the multilinear map satisfying (for v_1 ∈ V_1, v_2 ∈ V_2)

$$T_1 \otimes T_2(v_1, v_2) := T_1(v_1)\, T_2(v_2). \tag{3.1.1}$$

This construction extends inductively to tensor products of more than two spaces.
Proposition 3.1.1: Basis of Linear Operators
(See [3] for a proof.) For j = 1, ..., n, let {T_{j,k_j}}, k_j = 1, ..., n_j, be a basis of the space (of dimension n_j = dim(V_j)) of linear operators on V_j. Then a basis of the space of linear operators on V_1 ⊗ V_2 ⊗ ··· ⊗ V_n is given by {T_{1,k_1} ⊗ T_{2,k_2} ⊗ ··· ⊗ T_{n,k_n}}, with k_j = 1, ..., n_j. The space of tensors on V_1 ⊗ V_2 ⊗ ··· ⊗ V_n therefore has dimension n_1 n_2 ··· n_n.
The vector spaces we are mostly interested in are T_p M and T*_p M, and we are interested in tensors of type (q, r), defined as tensors on ⊗^q T*_p M ⊗^r T_p M. We can use a basis of T_p M as a basis of (T*_p M)* (they are naturally isomorphic) by identifying a vector V ∈ T_p M with its action on T*_p M, i.e., for ω ∈ T*_p M, V(ω) := ω(V). Using this identification, our privileged bases in T_p M and T*_p M (for a given coordinate system), and Proposition 3.1.1, every (q, r) tensor T can be written as

$$T = T^{\mu_1 \cdots \mu_q}_{\nu_1 \cdots \nu_r} \, \frac{\partial}{\partial x^{\mu_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{\mu_q}} \otimes dx^{\nu_1} \otimes \cdots \otimes dx^{\nu_r}.$$

The action on (ω_1, ..., ω_q, X_1, ..., X_r), with ω_j = ω_{j l_j} dx^{l_j} and X_j := X_j^{k_j} ∂/∂x^{k_j}, gives (see Homework)

$$T(\omega_1, \ldots, \omega_q, X_1, \ldots, X_r) = T^{\mu_1 \cdots \mu_q}_{\nu_1 \cdots \nu_r} \, \omega_{1\mu_1} \cdots \omega_{q\mu_q} \, X_1^{\nu_1} \cdots X_r^{\nu_r}. \tag{3.1.2}$$

The space of (q, r) tensors at p ∈ M is denoted by T^q_{r,p}(M) and has dimension m^{q+r} (where m is the dimension of M). In particular, elements of T^1_{0,p}(M) are tangent vectors and T^1_{0,p}(M) ≃ T_p M, while elements of T^0_{1,p}(M) are one-forms and T^0_{1,p}(M) ≃ T*_p M.
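Numerically, the component formula (3.1.2) is just a multi-index contraction. As an illustration of our own (not from the notes), here it is for a (1, 1) tensor using numpy's einsum, which implements the Einstein summation convention:

```python
import numpy as np

m = 3                              # dimension of the manifold
T = np.arange(9.0).reshape(m, m)   # components T^mu_nu of a (1,1) tensor
omega = np.array([1.0, 0.0, 2.0])  # components omega_mu of a one-form
X = np.array([0.5, 1.0, -1.0])     # components X^nu of a tangent vector

# T(omega, X) = T^mu_nu omega_mu X^nu  (summation over repeated indices)
value = np.einsum('mn,m,n->', T, omega, X)

# The same contraction written as an explicit double sum
check = sum(T[mu, nu] * omega[mu] * X[nu] for mu in range(m) for nu in range(m))
assert np.isclose(value, check)
```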
Definition 3.1.2: Push-forward on Tensors
Let f : M → N be smooth. For tensors in T^q_{0,p}(M), the push-forward can be naturally defined as F∗ : T^q_{0,p}(M) → T^q_{0,f(p)}(N), and it transforms as

$$F_* T := F_*\left(T^{\mu_1 \cdots \mu_q}\, \frac{\partial}{\partial x^{\mu_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{\mu_q}}\right) := T^{\mu_1 \cdots \mu_q}\left(f_* \frac{\partial}{\partial x^{\mu_1}}\right) \otimes \cdots \otimes \left(f_* \frac{\partial}{\partial x^{\mu_q}}\right). \tag{3.1.3}$$

Note that F∗T is an element of T^q_{0,f(p)}(N).
Definition 3.1.3: Pull-back on Tensors
Analogously, for tensors in T^0_{r,f(p)}(N), a pull-back can be naturally defined as F* : T^0_{r,f(p)}(N) → T^0_{r,p}(M), and it transforms as

$$F^*\left(T_{j_1 \ldots j_r}\, dy^{j_1} \otimes \cdots \otimes dy^{j_r}\right) := T_{j_1 \ldots j_r}\left(f^* dy^{j_1}\right) \otimes \cdots \otimes \left(f^* dy^{j_r}\right). \tag{3.1.4}$$
Vector fields and tensor fields
Recall the definition of a tangent vector at p ∈ M , as a derivation Vp : F(M ) → R which assigns
to a function f (modulo the germ equivalence relation) a value in R
I . Such value will in general
depend on p ∈ M .
Definition 3.2.1: Vector Field
A smooth assignment of a tangent vector Vp to every p ∈ M is called a vector field on
M.
The word 'smooth' here means that the value of V_p f depends smoothly on p for every smooth function f ∈ F(M). If X is a vector field, its value at p, X(p), is a tangent vector, and X(p)f is a smooth function of p for every f ∈ F(M).
Definition 3.2.2: Lie Derivative of a Function
A vector field X defines a map F(M) → F(M), which is also denoted by X and is defined as

$$(Xf)(p) = X(p)f. \tag{3.2.1}$$

The function Xf is also called the Lie derivative of the function f along X.
Consider now a given coordinate chart. The simplest example of a vector field (described locally in terms of the local coordinates) is the association with every p ∈ M of ∂/∂x^µ |_p, for a certain fixed µ. We will denote this vector field by ∂/∂x^µ (we have used this symbol before to indicate a tangent vector at p, omitting the |_p symbol, and also to denote the usual partial derivative with respect to x^µ; the context will resolve the ambiguity among these three different meanings of the notation).
Proposition 3.2.1
This vector field is smooth, in the sense that it transforms smooth functions into smooth functions.

Proof: The proof is a simple but useful exercise to recall some of the definitions we have given. If f is a smooth function, then the function g := ∂/∂x^µ (f) is defined as g(p) = ∂/∂x^µ (f ∘ φ^{-1})|_{φ(p)}, where now ∂/∂x^µ denotes the standard partial derivative in ℝ^m and φ is the coordinate function. Recall that g smooth means that the function g ∘ φ^{-1} : ℝ^m → ℝ is C^∞ in the usual calculus sense. This function is given by

$$g \circ \phi^{-1} = \frac{\partial (f \circ \phi^{-1})}{\partial x^\mu}, \tag{3.2.2}$$

which is smooth since f is smooth.
Extending this, if we multiply ∂/∂x^µ by a smooth function g^µ and sum over µ, we still obtain a smooth vector field, and, in fact, every vector field can be written locally like this. Therefore, we shall write a vector field in local coordinates as

$$X = g^\mu \frac{\partial}{\partial x^\mu}. \tag{3.2.3}$$
Definition 3.2.3: Covector Field
A smooth association with every element p ∈ M of a one-form in T*_p M is called a covector field or dual vector field.

Smooth in this case means that the field maps every smooth vector field to a smooth function. Special covector fields are the ones that associate with every p the one-form dx^µ |_p, and every dual vector field can be written locally as (cf. (3.2.3))

$$\omega = \omega_\mu \, dx^\mu, \tag{3.2.4}$$

where the ω_µ are smooth functions.
Definition 3.2.4: Tensor Field
Further extending this, a tensor field of type (q, r) is a smooth association with every p ∈ M of a (q, r) tensor in T^q_{r,p}(M). That is,

$$p \mapsto T^{\mu_1,\ldots,\mu_q}_{\nu_1,\ldots,\nu_r}(p)\, \frac{\partial}{\partial x^{\mu_1}}\Big|_p \otimes \cdots \otimes \frac{\partial}{\partial x^{\mu_q}}\Big|_p \otimes dx^{\nu_1}|_p \otimes \cdots \otimes dx^{\nu_r}|_p, \tag{3.2.5}$$

where the functions T^{µ_1,...,µ_q}_{ν_1,...,ν_r} are smooth.

We remark that the definitions of smooth vector field and covector field were given in a coordinate-free fashion, while the definition (3.2.5) seems to depend on the coordinates chosen (see Exercise 3.6).
The space of tensor fields of type (q, r) is denoted by T^q_r(M). Special notations are reserved for the space of vector fields, T^1_0(M), which is also denoted by X(M), and the space of dual vector fields, T^0_1(M), which is also denoted by Ω^1(M), especially in the context of differential forms, as we shall see later. The space of smooth functions F(M) is also denoted by T^0_0(M) or Ω^0(M) in the context of differential forms.
Figure 9: Vector field on S 2 manifold
3.2.1 f-related vector fields and tensor fields

Recall that, given a smooth map f : M → N, we have defined the push-forward f∗ : T_p M → T_{f(p)} N. If X is a vector field on M, i.e., an element of X(M), it is tempting to define a vector field Y on N by setting

$$Y(f(p)) := f_* X(p), \tag{3.2.6}$$

and to denote this vector field by Y := f∗X. This definition has a few problems. First, the image of f might not be all of N, so the vector field Y might not be defined on all of N. Furthermore, the map f might not be injective, and therefore there is ambiguity in the definition (3.2.6) as to which p ∈ M to choose on the right-hand side. These problems do not exist if f is a diffeomorphism, in which case (3.2.6) uniquely determines a vector field Y, which we denote by f∗X.
Definition 3.2.5: f-related Vector Fields
In general, we shall say that two vector fields, X on M and Y on N, are f-related if for every p ∈ M

$$f_*(X(p)) = Y(f(p)), \tag{3.2.7}$$

that is, the following diagram commutes:

              f
       M ----------> N
       |             |
       X             Y
       v      f∗     v
     T_p M ----> T_{f(p)} N
Example (Vector Field Computation) Consider the smooth function f : ℝ → ℝ^2 defined by f(t) = (cos(t), sin(t)). Consider the vector field X := d/dt on ℝ and the vector field Y := x ∂/∂y − y ∂/∂x on ℝ^2. These two vector fields are f-related. To see this, we prove that at every t_0 ∈ ℝ, f∗ (d/dt)|_{t=t_0} applied to any smooth function g = g(x, y) on ℝ^2 gives the same result as applying Y(f(t_0)) to g. In fact, we have

$$f_* \frac{d}{dt}\Big|_{t_0} g := \frac{d}{dt}\Big|_{t_0}(g \circ f(t)) = \frac{\partial g}{\partial x}\Big|_{(\cos(t_0), \sin(t_0))}(-\sin(t_0)) + \frac{\partial g}{\partial y}\Big|_{(\cos(t_0), \sin(t_0))}\cos(t_0),$$

and

$$Y(\cos(t_0), \sin(t_0))(g) = x(t_0)\frac{\partial g}{\partial y}\Big|_{(\cos(t_0), \sin(t_0))} - y(t_0)\frac{\partial g}{\partial x}\Big|_{(\cos(t_0), \sin(t_0))} = \frac{\partial g}{\partial x}\Big|_{(\cos(t_0), \sin(t_0))}(-\sin(t_0)) + \frac{\partial g}{\partial y}\Big|_{(\cos(t_0), \sin(t_0))}\cos(t_0).$$
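The verification in this example can be automated with sympy, checking the equality on a few concrete test functions (a sketch of our own, not from the notes):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
fmap = (sp.cos(t), sp.sin(t))    # f : R -> R^2, f(t) = (cos t, sin t)

def pushforward(g):
    """(f_* d/dt)(g) = d/dt (g o f), as a function of t."""
    return sp.diff(g.subs({x: fmap[0], y: fmap[1]}), t)

def Y_along_f(g):
    """Y = x d/dy - y d/dx applied to g, evaluated at f(t)."""
    return (x*sp.diff(g, y) - y*sp.diff(g, x)).subs({x: fmap[0], y: fmap[1]})

# Check f-relatedness on a few test functions
for g in [x, y, x*y, x**2 + 3*y, sp.exp(x)*sp.sin(y)]:
    assert sp.simplify(pushforward(g) - Y_along_f(g)) == 0
```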
Similar considerations hold for dual vector fields ω_1 ∈ Ω^1(M) and ω_2 ∈ Ω^1(N), with f : M → N, if for every p ∈ M

$$\omega_1(p) = f^* \omega_2(f(p)). \tag{3.2.8}$$

Definition 3.2.6: f-related Tensor Fields
More generally, tensor fields X ∈ T^q_0(M) and Y ∈ T^q_0(N) are f-related if for every p ∈ M

$$F_* X(p) = Y(f(p)).$$

Finally, tensor fields ω_1 ∈ T^0_r(M) and ω_2 ∈ T^0_r(N) are f-related if for every p ∈ M

$$\omega_1(p) = F^* \omega_2(f(p)).$$
3.3 Exercises

Exercise 3.1 Consider ℝ^n and ℝ^m with the standard bases {e⃗_1^j}, j = 1, 2, ..., n, and {e⃗_2^k}, k = 1, 2, ..., m. An ordered basis for ℝ^n ⊗ ℝ^m is given by (e⃗_1^j, e⃗_2^k), j = 1, 2, ..., n, k = 1, 2, ..., m, with

$$(\vec{e}_1^{\,j_1}, \vec{e}_2^{\,k_1}) < (\vec{e}_1^{\,j_2}, \vec{e}_2^{\,k_2})$$

if j_1 < j_2, or j_1 = j_2 and k_1 < k_2. Consider the dual bases {T_1^j}, j = 1, 2, ..., n, in (ℝ^n)* and {T_2^k}, k = 1, 2, ..., m, in (ℝ^m)*. Then, according to the previous proposition, a basis for the space of tensors on ℝ^n ⊗ ℝ^m is given by {T_1^j ⊗ T_2^k}, on which we use the same order. Therefore any tensor can be written as

$$T := \omega_{j,k}\, T_1^j \otimes T_2^k, \tag{3.3.1}$$

with ω_{j,k} a 1 × nm matrix. Consider tensors

$$\omega^1 = \omega^1_j\, T_1^j, \qquad \omega^2 = \omega^2_k\, T_2^k,$$

with ω^1_j a 1 × n matrix and ω^2_k a 1 × m matrix. Prove that if we expand T := ω^1 ⊗ ω^2 as in (3.3.1), the matrix ω_{j,k} is the Kronecker product of the matrices ω^1_j and ω^2_k.
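A numerical illustration of the claim in Exercise 3.1 (this does not replace the proof): numpy's kron implements exactly this Kronecker product of row matrices, in the lexicographic order described above.

```python
import numpy as np

w1 = np.array([[1.0, 2.0, 3.0]])   # omega^1, a 1 x n row (n = 3)
w2 = np.array([[4.0, 5.0]])        # omega^2, a 1 x m row (m = 2)

# Coefficients of T = omega^1 (x) omega^2 in the ordered basis {T_1^j (x) T_2^k}:
# T(e_1^j, e_2^k) = omega^1_j omega^2_k, flattened lexicographically in (j, k)
coeffs = np.array([[w1[0, j] * w2[0, k] for j in range(3) for k in range(2)]])

assert np.array_equal(coeffs, np.kron(w1, w2))
print(np.kron(w1, w2))   # [[ 4.  5.  8. 10. 12. 15.]]
```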
Exercise 3.2 The definition of F∗T in (3.1.3) is given in terms of coordinates. Show that such a definition does not depend on the coordinates, i.e., the action of F∗T on ⊗^q T*_{f(p)} N is uniquely determined by the definition (3.1.3) independently of the coordinates used.

Exercise 3.3 For an element T̃ of T^q_{0,f(p)}(N), in a given system of coordinates at f(p), write

$$\tilde{T} := \tilde{T}^{\nu_1, \nu_2, \ldots, \nu_q}\, \frac{\partial}{\partial y^{\nu_1}} \otimes \frac{\partial}{\partial y^{\nu_2}} \otimes \cdots \otimes \frac{\partial}{\partial y^{\nu_q}}.$$

If T̃ = F∗T, write the T̃^{ν_1, ν_2, ..., ν_q} as functions of the T^{µ_1 ··· µ_q} in (3.1.3).

Exercise 3.4 The definition of F*T in (3.1.4) is given in terms of coordinates. Show that such a definition does not depend on the coordinates, i.e., the action of F*T on ⊗^r T_p M is uniquely determined by the definition (3.1.4) independently of the coordinates used.

Exercise 3.5 For an element T̃ of T^0_{r,p}(M), in a given system of coordinates at p, write

$$\tilde{T} := \tilde{T}_{k_1, k_2, \ldots, k_r}\, dx^{k_1} \otimes dx^{k_2} \otimes \cdots \otimes dx^{k_r}.$$

If T̃ = F*T, write the T̃_{k_1, k_2, ..., k_r} as functions of the T_{j_1 ··· j_r} in (3.1.4).
Exercise 3.6 Prove that the smoothness in definition (3.2.5) does not depend on the coordinate system used. That is, if we replace the coordinates x^µ with coordinates y^µ, the corresponding functions T̃^{µ_1,...,µ_q}_{ν_1,...,ν_r} in

$$p \mapsto \tilde{T}^{\mu_1,\ldots,\mu_q}_{\nu_1,\ldots,\nu_r}(p)\, \frac{\partial}{\partial y^{\mu_1}}\Big|_p \otimes \cdots \otimes \frac{\partial}{\partial y^{\mu_q}}\Big|_p \otimes dy^{\nu_1}|_p \otimes \cdots \otimes dy^{\nu_r}|_p \tag{3.3.2}$$

are still smooth.

Exercise 3.7 Verify that every vector field X can be written locally as X = g^µ ∂/∂x^µ.
4 Integral curves and flows

Consider a vector field X on M and a point p ∈ M. Recall that we have denoted the tangent vector corresponding to a curve c : ℝ → M, using the push-forward, by c∗ d/dt.

Definition 4.0.1: Integral Curve
An integral curve of X at p is a curve c : (−a, b) → M (a, b > 0) such that

1. c(0) = p,

2. $$c_* \frac{d}{dt} = X(c(t)), \tag{4.0.1}$$

for every t ∈ (−a, b). This means that for every point c(t), with t ∈ (−a, b), the tangent vector associated with c coincides with the value of the vector field at that point.
4.1 Relation with ODE's. The problem 'upstairs' and 'downstairs'

Given the coordinate chart (U, φ), with p ∈ U, there is a one-to-one correspondence between curves c on M (upstairs) crossing the point p within the neighborhood U and curves c_φ in ℝ^m (downstairs) crossing the point φ(p), given by

$$c_\phi(t) = \phi(c(t)). \tag{4.1.1}$$

Proposition 4.1.1
Let c : (−a, b) → M be a curve such that c((−a, b)) ⊆ U, for the coordinate neighborhood U in the coordinate chart (U, φ), with p ∈ U. Then c is an integral curve for X = X^ν ∂/∂x^ν at p if and only if c_φ in (4.1.1) satisfies the initial value problem

$$\dot{c}^\nu_\phi(t) = X^\nu(\phi^{-1}(c_\phi(t))), \qquad c_\phi(0) = \phi(p). \tag{4.1.2}$$

The proposition shows that the study of integral curves (upstairs) on a manifold can be locally reduced to the study of solutions of differential equations (downstairs) in ℝ^m.
Proof. Assume c is an integral curve. Then, applying both sides of (4.0.1) to the coordinate functions x^ν, we obtain

$$c_* \frac{d}{dt}\, x^\nu = X^\mu(c(t)) \frac{\partial}{\partial x^\mu} x^\nu, \qquad \text{i.e.,} \qquad \dot{c}^\nu_\phi(t) = X^\mu(c(t))\, \delta^\nu_\mu = X^\nu(c(t)),$$

that is, the differential equation in (4.1.2). Vice versa, if c_φ = c_φ(t) satisfies (4.1.2), consider c := φ^{-1} ∘ c_φ. Let us apply the left-hand side and the right-hand side of (4.0.1) to a function f ∈ F(M), with this c, for a given t ∈ (−a, b). The left-hand side gives

$$c_* \frac{d}{dt}\, f = \frac{d}{dt}(f \circ c(t)) = \frac{d}{dt}(f \circ \phi^{-1} \circ c_\phi(t)) = \frac{\partial (f \circ \phi^{-1})}{\partial x^\nu}\, \dot{c}^\nu_\phi. \tag{4.1.3}$$
The right-hand side gives

$$X(c(t))f = X(\phi^{-1} \circ c_\phi(t))(f) = X^\mu(\phi^{-1} \circ c_\phi(t)) \frac{\partial}{\partial x^\mu}\Big|_{\phi^{-1} \circ c_\phi(t)} f := X^\mu(\phi^{-1}(c_\phi(t)))\, \frac{\partial (f \circ \phi^{-1})}{\partial x^\mu}, \tag{4.1.4}$$

and the two sides give the same result because of (4.1.2).

In particular, there is locally a one-to-one correspondence between what happens upstairs and downstairs, and properties of integral curves can be obtained from properties of solutions of differential equations. For example, given a vector field X and a point p ∈ M, we obtain that an integral curve at p exists and is unique.
Figure 10: Unique integral curve γ passing through a point
4.2
Definition and properties of the flow
Definition 4.2.1: Flow
A smooth function $\sigma = \sigma(t,p) : \mathbb{R} \times M \to M$ such that, for every $p$, $\sigma(\cdot, p)$ is the integral curve of the vector field $X$ at $p$ is called the flow associated with $X$.

We remark that $\sigma$ is not necessarily defined on all of $\mathbb{R} \times M$, because the integral curve $\sigma(t,p)$ for a given $p$ might only be defined on an interval $(-a,b)$.
Example
The flow of $X := x^2 \frac{\partial}{\partial x}$ on $\mathbb{R}$ is easily found by solving the I.V.P.
$$\frac{dc_\varphi}{dt} = c^2_\varphi(t), \qquad c_\varphi(0) = p,$$
to be $\sigma(t,p) = \frac{p}{1-pt}$, and it is defined only for the points $(t,p)$ between the two curves in Figure 11.

Figure 11: Domain of the flow $\sigma = \sigma(t,p)$ for $X = x^2 \frac{\partial}{\partial x}$, in $\mathbb{R} \times \mathbb{R}$
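The flow in this example can be checked with a computer algebra system. A minimal sketch using sympy (the symbol names are ours; this only re-derives the solution stated above):

```python
import sympy as sp

t, p = sp.symbols("t p")
c = sp.Function("c")

# The downstairs IVP for X = x^2 d/dx on R: dc/dt = c^2, c(0) = p
sol = sp.dsolve(sp.Eq(c(t).diff(t), c(t)**2), c(t), ics={c(0): p})
flow = sp.simplify(sol.rhs)

# The solution agrees with sigma(t, p) = p/(1 - p t)
assert sp.simplify(flow - p/(1 - p*t)) == 0
```

The blow-up of the solution at $t = 1/p$ is exactly what bounds the domain of the flow drawn in Figure 11.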
Proposition 4.2.1
The flow $\sigma = \sigma(t,p)$ has the following two properties:
1. $$\sigma(0,p) = p, \qquad \forall p \in M, \tag{4.2.1}$$
2. $$\sigma(t, \sigma(s,p)) = \sigma(t+s, p), \tag{4.2.2}$$
for every $t$ and $s$ (small enough) so that both the left hand side and the right hand side exist.
Proof. In the following proof, we use the notation $\sigma$ to indicate a curve in $M$ and $\sigma^\mu$ to indicate the $\mu$-th component of its coordinate representation. Analogously, $X$ represents a vector field, and $X^\mu$ the $\mu$-th component of its coordinate representation. Property (4.2.1) follows from the definition of integral curve of $X$ at $p$.

Fix $s$ and map the left hand side and the right hand side using a coordinate map $\varphi$ (downstairs) to $\mathbb{R}^m$. If they map to the same curve, they are the same curve in $M$. For $t = 0$, they both map to $\varphi(\sigma(s,p))$. Moreover, taking the derivative of the left hand side, we have
$$\frac{d}{dt}\,\sigma^\mu(t, \sigma(s,p)) = X^\mu(\sigma(t, \sigma(s,p))), \tag{4.2.3}$$
and taking the derivative of the right hand side,
$$\frac{d}{dt}\,\sigma^\mu(t+s, p) = X^\mu(\sigma(t+s, p)). \tag{4.2.4}$$
The two functions satisfy the same O.D.E. with the same initial condition, so they are the same by the uniqueness theorem. If $\sigma(t+s,p)$ exits the coordinate neighborhood $U$, we can inductively repeat the same argument on every other coordinate neighborhood. $\square$
Definition 4.2.2: Complete Vector Field
A vector field $X$ is called complete if the corresponding flow $\sigma = \sigma(t,p)$ is defined on all of $\mathbb{R} \times M$.
Theorem 4.2.1
If $M$ is compact, every vector field $X$ on $M$ is complete. (See Exercise 4.2.)
We have seen that, by definition, $\sigma(t,p)$ is, for fixed $p$, the integral curve at $p$ associated with the vector field $X$. We now fix $t$ and look at $\sigma(t,p)$ as a function of $p$. By varying the values of $t$, we get a family of maps $\sigma(t, \cdot) : M \to M$. These maps form a one-parameter commutative group of transformations which is local, i.e., its elements are defined only for small $t$, with the interval of values of $t$ where $\sigma(t,p)$ is defined depending on $p$. It is a global one-parameter group of transformations if $M$ is compact (cf. Theorem 4.2.1). We denote the transformation $\sigma(t, \cdot)$ by $\sigma_t$, and we have
1. σ0 = identity
2. σt+s = σt ◦ σs = σs ◦ σt ,
3. σ−s = (σs )−1 .
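For the explicit flow $\sigma_t(p) = p/(1-pt)$ computed in the example above, these three group properties can be verified symbolically. A small sketch (the Python function `sigma` is our name for the flow):

```python
import sympy as sp

t, s, p = sp.symbols("t s p")

def sigma(t_, p_):
    # Flow of X = x^2 d/dx on R, from the earlier example
    return p_/(1 - p_*t_)

# sigma_0 = identity, sigma_t o sigma_s = sigma_{t+s}, sigma_{-t} inverts sigma_t
assert sigma(0, p) == p
assert sp.simplify(sigma(t, sigma(s, p)) - sigma(t + s, p)) == 0
assert sp.simplify(sigma(-t, sigma(t, p)) - p) == 0
```

The identities hold wherever both sides are defined, in agreement with the local character of the group.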
The following three lemmas give some more properties of the flow. Although these properties
are almost obvious consequences of the definition, it is useful to state them formally for future
reference. Let $X$ be a vector field and $\sigma_t$ the associated one-parameter group of transformations.
Lemma 4.2.1
For every $q \in M$ and every function $f \in \mathcal{F}(M)$,
$$X_{\sigma_t(q)} f = \frac{d}{dt}\big(f(\sigma_t(q))\big). \tag{4.2.5}$$
Proof. Using the definition of the flow,
$$X_{\sigma_t(q)} = \sigma_* \frac{d}{dt}, \qquad \sigma_0(q) = q, \tag{4.2.6}$$
apply the left hand side and the right hand side to a function $f \in \mathcal{F}(M)$:
$$X_{\sigma_t(q)} f = \sigma_* \frac{d}{dt}\, f = \frac{d}{dt}\big(f \circ \sigma_t(q)\big). \tag{4.2.7}$$
$\square$
We can rewrite formula (4.2.7) in the form in which it is usually used, that is, for $t = 0$ and $q$ fixed,
$$X_q f := Xf(q) = \frac{d}{dt}\Big|_{t=0}\big(f \circ \sigma_t(q)\big), \tag{4.2.8}$$
which can be seen, as $q$ varies in $M$, as an equality between the two functions $Xf$ and $\frac{d}{dt}\big|_{t=0}(f \circ \sigma_t)$.
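As a sanity check of (4.2.8), for the flow $\sigma_t(q) = q/(1-qt)$ of $X = x^2\frac{\partial}{\partial x}$ one expects $\frac{d}{dt}\big|_{t=0} f(\sigma_t(q)) = q^2 f'(q)$. A sympy sketch of this check (the symbols are ours):

```python
import sympy as sp

t, q = sp.symbols("t q")
f = sp.Function("f")

sigma = q/(1 - q*t)                       # flow of X = x^2 d/dx at the point q
lhs = q**2 * f(q).diff(q)                 # X_q f for X = x^2 d/dx
rhs = f(sigma).diff(t).subs(t, 0).doit()  # d/dt|_{t=0} f(sigma_t(q))
assert sp.simplify(lhs - rhs) == 0
```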
Lemma 4.2.2
If σt is the flow associated with X, then σ−t is the flow associated with −X.
Proof. This fact follows from the upstairs-downstairs correspondence of equation (4.1.1), since it is easily verified in $\mathbb{R}^m$: if $c_\varphi = c_\varphi(t)$ satisfies
$$\frac{d}{dt}\, c^\nu_\varphi = X^\nu(c_\varphi(t)), \tag{4.2.9}$$
then $c_\varphi(-t)$ satisfies
$$\frac{d}{dt}\big(c^\nu_\varphi(-t)\big) = -\frac{d c^\nu_\varphi(-t)}{d(-t)} = -X^\nu(c_\varphi(-t)). \tag{4.2.10}$$
$\square$
Combining Lemmas 4.2.1 and 4.2.2, we obtain,
Lemma 4.2.3
For any function $f \in \mathcal{F}(M)$ and $q \in M$,
$$-X_{\sigma_{-t}(q)} f = \frac{d}{dt}\big(f \circ \sigma_{-t}(q)\big), \tag{4.2.11}$$
and, specializing at $t = 0$,
$$-X_q f = \lim_{\epsilon \to 0} \frac{1}{\epsilon}\big(f \circ \sigma_{-\epsilon}(q) - f(q)\big). \tag{4.2.12}$$
4.3
Exercises
Exercise 4.1 The notion of $f$-related vector fields introduced in subsection 3.2.1 is a generalization of the familiar notion of change of coordinates for a system of differential equations. Assume the vector field $Y$ on $N$ is $f$-related to the vector field $X$ on $M$ (cf. (3.2.7)). Prove that if $c = c(t)$ is an integral curve of $X$ on $M$ at $p$, then $f \circ c$ is an integral curve of $Y$ on $N$ at $f(p)$.

Exercise 4.2 Prove Theorem 4.2.1.

Exercise 4.3 Calculate the flow of the vector field $X = x^1 \frac{\partial}{\partial x^1} + (x^1 + x^2) \frac{\partial}{\partial x^2}$ on $S^2$ in the coordinate neighborhood $U_{0+}$. Recall that $\varphi_{0+}((x,y,z)) = (y,z)$.

Exercise 4.4 Recall that a smooth structure on $\mathbb{R}P^n$ is defined as $\{U_l, \varphi_l\}$, where $U_l$ are classes of representatives whose $l$-th component $a$ in $\mathbb{R}^{n+1}$ is non-zero and $\varphi_l([A]) = a^{-1}B$, with $B$ the vector in $\mathbb{R}^n$ with the $l$-th component removed. Calculate the flow of the vector field $X = \frac{\partial}{\partial x^0} + \cdots + \frac{\partial}{\partial x^{n-1}}$ in the neighborhood $U_n$.
5
Lie Derivative
5.1
Lie derivative of a vector field
Consider a vector field $X$ and its associated flow $\sigma_t$. Given another vector field $Y$, we want to analyze how the vector field $Y$ varies along the integral curve $\sigma_t(p)$, for a certain $p \in M$. So we want to compare $Y_p$ and $Y_{\sigma_\epsilon(p)}$. However, we cannot simply take the difference $Y_p - Y_{\sigma_\epsilon(p)}$, because these two vectors belong to different tangent spaces. The idea then is to take $Y_{\sigma_\epsilon(p)}$ and bring it to $T_pM$ by applying $(\sigma_{-\epsilon})_*$. It then makes sense to look at the difference $\Delta Y$ of two vectors in $T_pM$ (cf. Figure 12):
$$\Delta Y := (\sigma_{-\epsilon})_* Y_{\sigma_\epsilon(p)} - Y_p. \tag{5.1.1}$$
Notice that this difference is itself a tangent vector, and so is the limit as $\epsilon \to 0$.

Figure 12: Construction for the Lie derivative. Remark: In the picture $\epsilon$ is not necessarily small; as a matter of fact $Y_{\sigma_\epsilon(p)}$ is not similar to $Y_p$.
We want to see how this difference behaves for small $\epsilon$.

Definition 5.1.1: Lie Derivative of a Vector Field
We define the Lie derivative of $Y$ along $X$, $\mathcal{L}_X Y$, by
$$\mathcal{L}_X Y = \lim_{\epsilon \to 0} \frac{1}{\epsilon}\big((\sigma_{-\epsilon})_* Y_{\sigma_\epsilon(p)} - Y_p\big). \tag{5.1.2}$$

Two alternative but equivalent definitions are as follows:
1. By changing $\epsilon$ to $-k$ in (5.1.2), we obtain
$$\mathcal{L}_X Y = \lim_{k \to 0} \frac{1}{-k}\big((\sigma_k)_* Y_{\sigma_{-k}(p)} - Y_p\big) = \lim_{k \to 0} \frac{1}{k}\big(Y_p - (\sigma_k)_* Y_{\sigma_{-k}(p)}\big). \tag{5.1.3}$$

2. By collecting $(\sigma_{-\epsilon})_*$ in (5.1.2) and using the fact that $\lim_{\epsilon \to 0}(\sigma_{-\epsilon})_*$ is equal to the identity operator, we obtain
$$\mathcal{L}_X Y = \lim_{\epsilon \to 0} \frac{1}{\epsilon}\big(Y_{\sigma_\epsilon(p)} - (\sigma_\epsilon)_* Y_p\big). \tag{5.1.4}$$
In this last definition, we transport the tangent vector $Y_p \in T_pM$ to $T_{\sigma_\epsilon(p)}M$ before we compare it with $Y_{\sigma_\epsilon(p)}$.
Definition 5.1.2: Commutator of two vector fields
Define the commutator of two vector fields as
$$[X,Y] := XY - YX, \tag{5.1.5}$$
where the products on the right hand side mean composition, so that for any $f \in \mathcal{F}(M)$,
$$[X,Y]_p f := X_p(Yf) - Y_p(Xf). \tag{5.1.6}$$
The following theorem states that the Lie derivative can be expressed as a commutator.
Theorem 5.1.1: Lie Derivative as a Commutator
At a point $p$,
$$\mathcal{L}_X Y = [X,Y]_p. \tag{5.1.7}$$
Proof. Take a function $f \in \mathcal{F}(M)$ and calculate $(\mathcal{L}_X Y)(f)$. By definition, we have
$$(\mathcal{L}_X Y)(f) = \lim_{\epsilon \to 0} \frac{1}{\epsilon}\Big(\big((\sigma_{-\epsilon})_* Y_{\sigma_\epsilon(p)}\big)f - Y_p(f)\Big) = \lim_{\epsilon \to 0} \frac{1}{\epsilon}\Big(Y_{\sigma_\epsilon(p)}(f \circ \sigma_{-\epsilon}) - Y_p(f)\Big). \tag{5.1.8}$$
Adding and subtracting $Y_{\sigma_\epsilon(p)}f$ inside the parenthesis, we get
$$\mathcal{L}_X Y(f) = \lim_{\epsilon \to 0} \frac{1}{\epsilon}\, Y_{\sigma_\epsilon(p)}\big[(f \circ \sigma_{-\epsilon}) - f\big] + \lim_{\epsilon \to 0} \frac{1}{\epsilon}\big(Y_{\sigma_\epsilon(p)}f - Y_p f\big). \tag{5.1.9}$$
The first limit is
$$\lim_{\epsilon \to 0} \frac{1}{\epsilon}\, Y_{\sigma_\epsilon(p)}\big[(f \circ \sigma_{-\epsilon}) - f\big] = \Big(\lim_{\epsilon \to 0} Y_{\sigma_\epsilon(p)}\Big)\Big(\lim_{\epsilon \to 0} \frac{1}{\epsilon}(f \circ \sigma_{-\epsilon} - f)\Big) = Y_p\Big(\lim_{\epsilon \to 0} \frac{1}{\epsilon}(f \circ \sigma_{-\epsilon} - f)\Big) = -Y_p(Xf), \tag{5.1.10}$$
where, in the last equality, we used (4.2.12). For the second limit, we have
$$\lim_{\epsilon \to 0} \frac{1}{\epsilon}\big(Y_{\sigma_\epsilon(p)}f - Y_p f\big) = \lim_{\epsilon \to 0} \frac{1}{\epsilon}\big((Yf)(\sigma_\epsilon(p)) - (Yf)(p)\big) = X_p(Yf), \tag{5.1.11}$$
where in the first equality we used the definition of the action of a vector field on a function (3.2.1), and in the last equality we used formula (4.2.5) of Lemma 4.2.1 with $t = 0$, $q = p$ and $f$ replaced by $Yf$. Therefore, the theorem is proven. $\square$
The following propositions collect some of the main properties of the Lie derivative.
Proposition 5.1.1: Properties of Lie Derivative
1. $\mathcal{L}_X Y$ is a derivation, i.e., it is linear and it satisfies the Leibniz condition.
2. The map (X, Y ) → [X, Y ] is bilinear.
3. (skew-symmetry) For X, Y ∈ X (M )
[X, Y ] = −[Y, X].
4. (Jacobi Identity) For $X, Y, Z \in \mathcal{X}(M)$,
$$[X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0.$$
Definition 5.1.3: Lie Algebra
Properties 2),3) and 4) above qualify the vector space X (M ) with the operation (X, Y ) →
[X, Y ] as a Lie algebra. The commutator is also sometimes called a Lie bracket.
Proposition 5.1.2: Another Lie Derivative Property
For X, Y ∈ X (M ) and f ∈ F(M ), we have
[f X, Y ] = f [X, Y ] − (Y f )X.
(5.1.12)
Proposition 5.1.3: f-related with Lie Brackets
Let X1 , X2 ∈ X (M ) and Y1 , Y2 ∈ X (N ), and f a smooth map f : M → N . If X1 and Y1
are f -related and X2 and Y2 are f -related, then [X1 , X2 ] and [Y1 , Y2 ] are also f -related.
Example 5.1.1: Lie Brackets
Proposition 5.1.3 is important for practical calculations, especially when the map $f$ is the inclusion map. It is often convenient to work in a larger manifold (typically $\mathbb{R}^n$) than the original manifold $M$ (where we would have to choose various systems of coordinates to be patched together). The proposition ensures that the algebra we do for the vector fields on the larger space coincides with the algebra we would do on the smaller space. Consider for instance the sphere in Figure 13 and the vector field $Y$ in part a), whose integral curves are rotations on the sphere about the $y$ axis, and the vector field $Z$ in part b), whose integral curves are rotations about the $z$ axis.
Then we have
$$i_* Y = z\frac{\partial}{\partial x} - x\frac{\partial}{\partial z}, \qquad i_* Z = -y\frac{\partial}{\partial x} + x\frac{\partial}{\partial y}.$$
From this we can calculate
$$[i_* Y, i_* Z] = i_*[Y,Z] = \Big[z\frac{\partial}{\partial x} - x\frac{\partial}{\partial z},\; -y\frac{\partial}{\partial x} + x\frac{\partial}{\partial y}\Big].$$
Then we use bilinearity to get
$$\Big[z\frac{\partial}{\partial x}, -y\frac{\partial}{\partial x}\Big] + \Big[z\frac{\partial}{\partial x}, x\frac{\partial}{\partial y}\Big] + \Big[-x\frac{\partial}{\partial z}, -y\frac{\partial}{\partial x}\Big] + \Big[-x\frac{\partial}{\partial z}, x\frac{\partial}{\partial y}\Big],$$
which is equal to $z\frac{\partial}{\partial y} - y\frac{\partial}{\partial z}$, since, for example, recalling that every vector field is a derivation, we have
$$\Big[z\frac{\partial}{\partial x}, x\frac{\partial}{\partial y}\Big] = z\frac{\partial}{\partial x}\Big(x\frac{\partial}{\partial y}\Big) - x\frac{\partial}{\partial y}\Big(z\frac{\partial}{\partial x}\Big) = z\frac{\partial}{\partial y} + zx\frac{\partial}{\partial x}\frac{\partial}{\partial y} - xz\frac{\partial}{\partial y}\frac{\partial}{\partial x} = z\frac{\partial}{\partial y},$$
where we used the Schwarz equality of mixed derivatives.
Figure 13: Figure for Example 5.1.1
In a given system of coordinates $(x^1, x^2, \ldots, x^n)$, the vector field $X$ is written as $X = X^\mu \frac{\partial}{\partial x^\mu}$, and the vector field $Y$ is written as $Y = Y^\nu \frac{\partial}{\partial x^\nu}$. If $Z := [X,Y] := Z^k \frac{\partial}{\partial x^k}$, in order to find $Z^k$ in terms of the $X^\mu$ and $Y^\nu$, we recall that $Z^k = Z(x^k)$, so that
$$Z^k = [X,Y]x^k = X(Yx^k) - Y(Xx^k) = X(Y^k) - Y(X^k) = X^\mu \frac{\partial Y^k}{\partial x^\mu} - Y^\mu \frac{\partial X^k}{\partial x^\mu}. \tag{5.1.13}$$
Notice we can represent $X$, $Y$ and $Z$ with column vectors of functions $\mathbb{R}^m \to \mathbb{R}^m$. Denoting by $J_X$ ($J_Y$) the Jacobian matrix corresponding to $X$ ($Y$), formula (5.1.13) can be written in matrix-vector notation as
$$Z = J_Y X - J_X Y.$$
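The matrix-vector form makes the bracket easy to compute by machine. A sympy sketch, re-checking the sphere example above (the helper `bracket` is our name):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
coords = sp.Matrix([x, y, z])

def bracket(X1, X2):
    """Components of [X1, X2] via Z = J_{X2} X1 - J_{X1} X2."""
    return sp.simplify(X2.jacobian(coords) * X1 - X1.jacobian(coords) * X2)

# i_*Y = z d/dx - x d/dz and i_*Z = -y d/dx + x d/dy from Example 5.1.1
Y = sp.Matrix([z, 0, -x])
Z = sp.Matrix([-y, x, 0])

# [i_*Y, i_*Z] has components (0, z, -y), i.e. z d/dy - y d/dz
assert list(bracket(Y, Z)) == [0, z, -y]
```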
The equality between the Lie derivative and the commutator suggests a geometric interpretation of the Lie derivative as a measure of the noncommutativity of two flows. Let $\sigma_t$ be the flow associated with $X$ and $\tau_s$ the flow associated with $Y$. Starting from a point $p$, we move following $Y$ for time $s$ first and then following $X$ for time $t$. We arrive at $\sigma(t, \tau_s(p))$. Now, starting from $p$, follow $X$ first for time $t$ and then follow $Y$ for time $s$. We arrive at $\tau(s, \sigma(t,p))$ (cf. Figure 14). If the two flows commuted, we would have $\tau_s(\sigma_t(p)) = \sigma_t(\tau_s(p))$. However, in general, this is not the case. Let us compare $\tau(s, \sigma(t,p))$ and $\sigma(t, \tau(s,p))$. We cannot simply take the difference between these two points, as we are on a manifold (where in general we have not defined a distance function).

Figure 14: Noncommuting flows

We can, however, take an (arbitrary) function $f$ in $\mathcal{F}(M)$ and consider the difference
$$d(t,s) := f(\tau_s(\sigma_t(p))) - f(\sigma_t(\tau_s(p))).$$
Expand $d(t,s)$ in a Taylor series about the point $(0,0)$ and calculate the various terms. We are going to routinely use formula (4.2.7) in various forms. Clearly $d(0,0) = f(p) - f(p) = 0$. Next,
$$\frac{\partial d}{\partial s} = \frac{\partial}{\partial s} f(\tau_s(\sigma_t(p))) - \frac{\partial}{\partial s} f(\sigma_t(\tau_s(p))) = Y_{\tau_s(\sigma_t(p))} f - Y_{\tau_s(p)}(f \circ \sigma_t), \tag{5.1.14}$$
the last term being the tangent vector $Y_{\tau_s(p)}$ applied to the function $f \circ \sigma_t$. This gives zero for $(s,t) = (0,0)$. Analogously, one sees that $\frac{\partial d}{\partial t}$ at $(0,0)$ is also zero. Calculate $\frac{\partial^2 d}{\partial s^2}\big|_{s=0,t=0}$ from (5.1.14). Notice that, by definition,
$$Y_{\tau_s(\sigma_t(p))} f = (Yf)(\tau_s(\sigma_t(p))), \qquad Y_{\tau_s(p)}(f \circ \sigma_t) = \big(Y(f \circ \sigma_t)\big)(\tau_s(p)).$$
Taking the derivative with respect to $s$ in (5.1.14), we get
$$Y_{\tau_s(\sigma_t(p))}(Yf) - Y_{\tau_s(p)}\big(Y(f \circ \sigma_t)\big),$$
which at $s = 0$, $t = 0$ gives
$$Y_p(Yf) - Y_p(Yf) = 0. \tag{5.1.15}$$
Analogously, one shows that the second derivative with respect to $t$ is zero. We now calculate the mixed derivative, taking
$$\frac{\partial}{\partial t}\Big[(Yf)(\tau_s \circ \sigma_t(p)) - \big(Y(f \circ \sigma_t)\big)(\tau_s(p))\Big] = X_{\sigma_t(p)}\big((Yf) \circ \tau_s\big) - \Big(Y \frac{\partial}{\partial t}(f \circ \sigma_t)\Big)(\tau_s(p)),$$
where in the second term we have switched the order of $Y$ and $\frac{\partial}{\partial t}$. This is equal to
$$X_{\sigma_t(p)}\big((Yf) \circ \tau_s\big) - \big(Y(Xf)\big)(\tau_s(p)) := X_{\sigma_t(p)}\big((Yf) \circ \tau_s\big) - Y_{\tau_s(p)}(Xf),$$
which at $t = 0$, $s = 0$ gives
$$X_p(Yf) - Y_p(Xf) = [X,Y]_p f.$$
So the commutator gives the first nonzero term in the Taylor expansion measuring the distance between the two values of the function $f$.
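This Taylor expansion can be checked symbolically on a concrete pair of flows. A sketch with sympy, using the (illustrative, our choice) fields $X = \frac{\partial}{\partial x}$ and $Y = x\frac{\partial}{\partial y}$ on $\mathbb{R}^2$, whose flows are explicit and whose bracket is $[X,Y] = \frac{\partial}{\partial y}$:

```python
import sympy as sp

t, s, a, b = sp.symbols("t s a b")
f = sp.Function("f")

# Flows on R^2 of X = d/dx and Y = x d/dy (chosen so both are explicit)
sigma = lambda t_, P: (P[0] + t_, P[1])        # flow of X
tau = lambda s_, P: (P[0], P[1] + s_ * P[0])   # flow of Y

p = (a, b)
d = f(*tau(s, sigma(t, p))) - f(*sigma(t, tau(s, p)))

# Zeroth- and first-order terms of the Taylor expansion of d(t, s) vanish ...
assert d.subs({t: 0, s: 0}) == 0
assert sp.simplify(d.diff(t).subs({t: 0, s: 0}).doit()) == 0
assert sp.simplify(d.diff(s).subs({t: 0, s: 0}).doit()) == 0
# ... and the mixed second-order term is [X, Y]_p f = (df/dy)(p)
mixed = sp.simplify(d.diff(t, s).subs({t: 0, s: 0}).doit())
assert sp.simplify(mixed - f(a, b).diff(b)) == 0
```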
5.2
Lie derivatives of co-vector fields and general tensor fields
Definition 5.2.1: Lie Derivative of a Covector Field
For a tensor field of type $(0,1)$, that is, a dual vector field $\omega$, the Lie derivative along a vector field $X$ at $p \in M$ is defined as
$$\mathcal{L}_X \omega := \lim_{\epsilon \to 0} \frac{1}{\epsilon}\big(\sigma_\epsilon^* \omega_{\sigma_\epsilon(p)} - \omega_p\big). \tag{5.2.1}$$

The motivation for this definition is similar to that of (5.1.2): we want to compare the values of the co-vector field at two different points, but since these values belong to two different co-tangent spaces, we use (this time) the pull-back to bring both co-tangent vectors to $T_p^*M$.
The expression in local coordinates for $\mathcal{L}_X \omega$ can be found to be
$$\mathcal{L}_X \omega = \Big(\frac{\partial \omega_\mu}{\partial x^\nu} X^\nu + \frac{\partial X^\nu}{\partial x^\mu}\, \omega_\nu\Big)\, dx^\mu. \tag{5.2.2}$$
To extend the definition of the Lie derivative to general tensors, we need to extend the definitions of push-forward and pull-back to tensors of mixed type. This can be done if the map $f : M \to N$ used in the definition is a diffeomorphism (like, for example, $\sigma_\epsilon$).
Definition 5.2.2: Push-forward and Pull-back Extended to Tensors
For a tensor in $\mathcal{T}^q_{r,p}(M)$, the push-forward $F_* : \mathcal{T}^q_{r,p}(M) \to \mathcal{T}^q_{r,f(p)}(N)$ is defined as
$$F_*(V_1 \otimes \cdots \otimes V_q \otimes \omega_1 \otimes \cdots \otimes \omega_r) = f_* V_1 \otimes \cdots \otimes f_* V_q \otimes (f^{-1})^* \omega_1 \otimes \cdots \otimes (f^{-1})^* \omega_r. \tag{5.2.3}$$
For a tensor in $\mathcal{T}^q_{r,f(p)}(N)$, the pull-back $F^* : \mathcal{T}^q_{r,f(p)}(N) \to \mathcal{T}^q_{r,p}(M)$ is defined as
$$F^*(V_1 \otimes \cdots \otimes V_q \otimes \omega_1 \otimes \cdots \otimes \omega_r) = (f^{-1})_* V_1 \otimes \cdots \otimes (f^{-1})_* V_q \otimes f^* \omega_1 \otimes \cdots \otimes f^* \omega_r. \tag{5.2.4}$$

These definitions are extensions of the coordinate definitions (3.1.3) and (3.1.4) (see Exercises 3.2 and 3.4).

We remark, in particular, that the flow $\sigma_\epsilon$ is a diffeomorphism. Let $(\sigma_{-\epsilon})_*$ denote the push-forward associated with the flow $\sigma_{-\epsilon}$ of a vector field $X$.
Definition 5.2.3: Lie Derivative of a Tensor Field
The Lie derivative of a general tensor field $T \in \mathcal{T}^q_r(M)$ is defined as
$$\mathcal{L}_X T := \lim_{\epsilon \to 0} \frac{1}{\epsilon}\big((\sigma_{-\epsilon})_* T_{\sigma_\epsilon(p)} - T_p\big). \tag{5.2.5}$$
The following proposition (see Proposition 5.1 [1] and proof there) allows us to reduce the
study of Lie derivatives of general tensors to Lie derivatives of vector fields and co-vector fields
only.
Proposition 5.2.1: Lie Derivative of Tensors to Vector fields
For tensor fields $t_1$ and $t_2$,
$$\mathcal{L}_X(t_1 \otimes t_2) = (\mathcal{L}_X t_1) \otimes t_2 + t_1 \otimes (\mathcal{L}_X t_2).$$
5.3
Exercises
Exercise 5.1 Prove Proposition 5.1.1
Exercise 5.2 Prove Proposition 5.1.2
Exercise 5.3 Prove Proposition 5.1.3
Exercise 5.4 Give an alternative proof of Theorem 5.1.1 by proving that $\mathcal{L}_X Y x^k$ is equal to the right hand side of (5.1.13).

Exercise 5.5 Prove formula (5.2.2). (Hint: Apply (5.2.1) to the tangent vector $\frac{\partial}{\partial x^\mu}\big|_p$.)

Exercise 5.6 Prove that the push-forward distributes over the Lie bracket, that is,
$$f_*[X,Y] = [f_* X, f_* Y].$$
Extend this to Lie derivatives of general tensor fields.
6
Differential Forms Part I: Algebra on Tensors
6.1
Preliminaries: Permutations acting on tensors
Definition 6.1.1: Permutation on a Tensor
Consider a tensor of type $(0,r)$, i.e., $\omega \in \mathcal{T}^0_{r,p}(M)$, and a permutation $P$ in the symmetric group on $r$ elements, $S_r$.$^a$ A new tensor, $P\omega$, is defined as
$$(P\omega)(V_1, \ldots, V_r) = \omega(V_{P(1)}, \ldots, V_{P(r)}). \tag{6.1.1}$$

$^a$ The symmetric group is the group of permutations on $r$ elements.

Notice here there is a little abuse of notation, since $P$ denotes both the operation on the tensor $\omega$ and the permutation on $1, \ldots, r$. Also, $P(j)$ denotes the transformation of $j$ in the permuted $r$-tuple $P(1,2,\ldots,r)$. For instance, if $r = 3$ and $P$ is given by $P(1,2,3) := (3,2,1)$, then
$$P\omega(V_1, V_2, V_3) = \omega(V_3, V_2, V_1).$$

Remark 6.1.1: Notation
It is also sometimes convenient to denote $(V_{P(1)}, \ldots, V_{P(r)})$ as $P(V_1, \ldots, V_r)$. This notation is very 'expressive' because what $P$ actually does is change the order of the $V_j$ according to the permutation $P$. We shall use this notation occasionally in the sequel.
Question: If $\omega$ is described in a given basis as $\omega_{\mu_1,\ldots,\mu_r}\, dx^{\mu_1} \otimes \cdots \otimes dx^{\mu_r}$, how is $P\omega$ written in the same basis?

We have
$$(P\omega)_{\mu_1,\ldots,\mu_r} = (P\omega)\Big(\frac{\partial}{\partial x^{\mu_1}}, \ldots, \frac{\partial}{\partial x^{\mu_r}}\Big) := \omega\Big(\frac{\partial}{\partial x^{\mu_{P(1)}}}, \ldots, \frac{\partial}{\partial x^{\mu_{P(r)}}}\Big) = \omega_{\mu_{P(1)},\ldots,\mu_{P(r)}}, \tag{6.1.2}$$
that is, the indexes are permuted according to the permutation $P$. For instance, in the above example where $P(1,2,3) = (3,2,1)$, we have
$$(P\omega)_{\mu_1,\mu_2,\mu_3} = \omega_{\mu_3,\mu_2,\mu_1}.$$
Alternatively, and more commonly, when applying the permutation $P$ to $\omega_{\mu_1,\ldots,\mu_r}\, dx^{\mu_1} \otimes \cdots \otimes dx^{\mu_r}$, we write
$$P\big(\omega_{\mu_1,\ldots,\mu_r}\, dx^{\mu_1} \otimes \cdots \otimes dx^{\mu_r}\big) = \omega_{\mu_1,\ldots,\mu_r}\, P\big(dx^{\mu_1} \otimes \cdots \otimes dx^{\mu_r}\big), \tag{6.1.3}$$
where $P$ in the right hand side permutes the symbols $dx^{\mu_1}, \ldots, dx^{\mu_r}$.
Example 6.1.1: $m \times m$ Matrices
Recall that a tensor of type $(0,r)$ is specified by $m^r$ elements. As a special (familiar) case, tensors of type $(0,2)$ are specified by $m \times m$ matrices $\omega_{\mu_1,\mu_2}$. There are only two possible permutations of $(1,2)$: the identity and $P(1,2) = (2,1)$. Since $(P\omega)_{\mu_1,\mu_2} = \omega_{\mu_2,\mu_1}$, the $P$ operation corresponds to exchanging the first (row) index with the second (column) index, and therefore corresponds to matrix transposition.
Definition 6.1.2: Symmetrizer and Anti-symmetrizer Operations
Using the definition (6.1.1), we define the symmetrizer operation, $\mathcal{S}$, for elements $\omega \in \mathcal{T}^0_{r,p}(M)$,
$$\mathcal{S}(\omega) := \frac{1}{r!} \sum_{P \in S_r} P\omega, \tag{6.1.4}$$
and the anti-symmetrizer operation, $\mathcal{A}$,
$$\mathcal{A}(\omega) := \frac{1}{r!} \sum_{P \in S_r} \mathrm{sign}(P)\, P\omega. \tag{6.1.5}$$
Recall that the sign of a permutation $P \in S_r$ is $+1$ if the permutation is of even order and $-1$ if it is of odd order. The order of a permutation is the number of inversions of that permutation as compared to the trivial permutation, i.e., the number of occurrences of order changes. For example, the permutation $P : (1,2,3,4) \to (4,1,3,2)$ has order 4, because 1, 3 and 2 come after 4 (3 inversions) and 2 comes after 3 (1 inversion).
Definition 6.1.3: Totally Symmetric and Totally Anti-symmetric Tensors
A tensor $\omega \in \mathcal{T}^0_{r,p}(M)$ is called totally symmetric if
$$P\omega = \omega, \qquad \forall P \in S_r. \tag{6.1.6}$$
In the case $r = 2$, this corresponds to $m \times m$ symmetric matrices.
A tensor $\omega \in \mathcal{T}^0_{r,p}(M)$ is called totally antisymmetric if
$$P\omega = \mathrm{sign}(P)\,\omega, \qquad \forall P \in S_r. \tag{6.1.7}$$
In the case $r = 2$, this corresponds to $m \times m$ skew-symmetric matrices.
If $\omega$ is totally symmetric then, using the definitions (6.1.4), (6.1.5), (6.1.6),
$$\mathcal{S}\omega = \omega, \qquad \mathcal{A}\omega = 0, \tag{6.1.8}$$
while if it is totally anti-symmetric, using the definitions (6.1.4), (6.1.5), (6.1.7),
$$\mathcal{S}\omega = 0, \qquad \mathcal{A}\omega = \omega. \tag{6.1.9}$$
Notice in particular that, for every tensor $\omega$, $\mathcal{S}\omega$ ($\mathcal{A}\omega$) is totally symmetric (totally anti-symmetric), so that we have, using (6.1.8) and (6.1.9),
$$\mathcal{S}\mathcal{S}\omega = \mathcal{S}\omega, \qquad \mathcal{S}\mathcal{A}\omega = \mathcal{A}\mathcal{S}\omega = 0, \qquad \mathcal{A}\mathcal{A}\omega = \mathcal{A}\omega.$$
In the following, we shall be mostly interested in totally anti-symmetric tensors. Therefore, we look more closely at totally anti-symmetric tensors of type $(0,r)$, written in a coordinate basis as
$$\omega = \omega_{\mu_1,\ldots,\mu_r}\, dx^{\mu_1} \otimes \cdots \otimes dx^{\mu_r}.$$
Given a permutation $P$, we have from (6.1.2), using the antisymmetry property,
$$(P\omega)_{\mu_1,\ldots,\mu_r} = \omega_{\mu_{P(1)},\ldots,\mu_{P(r)}} = \mathrm{sign}(P)\,\omega_{\mu_1,\ldots,\mu_r}. \tag{6.1.10}$$
So, totally anti-symmetric tensors have the property that if we permute the indexes according to a certain permutation $P$, they change or do not change sign according to the sign of $P$.
Example 6.1.2: Anti-symmetric Tensor Examples
For example, a totally anti-symmetric tensor of type $(0,2)$, $\omega_{\mu,\nu}$, $\mu,\nu = 1,\ldots,m$, is such that $\omega_{\mu,\nu} = -\omega_{\nu,\mu}$, as for an antisymmetric matrix, as we have seen. Another example is the Levi-Civita symbol $\epsilon_{\mu,\nu,\lambda}$, $\mu,\nu,\lambda = 1,2,3$, which is zero any time there are repeated indices and is otherwise defined by $\epsilon_{P(1),P(2),P(3)} := \mathrm{sign}(P)$. So, for example, $\epsilon_{1,2,3} = 1$ and $\epsilon_{2,1,3} = -1$.
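The sign-by-inversions rule and the Levi-Civita symbol are straightforward to implement directly. A small sketch (the function names are ours):

```python
def sign(perm):
    """Sign of a permutation: +1 for an even number of inversions, -1 for odd."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return 1 if inv % 2 == 0 else -1

def epsilon(*idx):
    """Levi-Civita symbol: zero on repeated indices, sign of the permutation otherwise."""
    if len(set(idx)) < len(idx):
        return 0
    return sign(idx)

assert sign((4, 1, 3, 2)) == 1   # 4 inversions, as in the example above
assert (epsilon(1, 2, 3), epsilon(2, 1, 3), epsilon(1, 1, 3)) == (1, -1, 0)
```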
6.2
Differential forms and exterior product
Definition 6.2.1: r-form
A differential form of order r or an r-form is a totally antisymmetric tensor of type
(0, r).
Differential forms of order $r$ form a vector space denoted by $\Omega^r_p(M)$, and we have $\Omega^0_p(M) := \mathbb{R}$, $\Omega^1_p(M) = T_p^*M$.
Consider now a tensor $\omega \in \Omega^r_p(M)$ and a tensor $\xi \in \Omega^q_p(M)$. In (3.1.1) we defined the product of $\omega$ and $\xi$ as
$$(\omega \otimes \xi)(V_1, \ldots, V_r, V_{r+1}, \ldots, V_{r+q}) = \omega(V_1, \ldots, V_r)\,\xi(V_{r+1}, \ldots, V_{r+q}). \tag{6.2.1}$$
This is a $(0, r+q)$ tensor, i.e., an element of $\mathcal{T}^0_{r+q,p}(M)$, but not necessarily a totally anti-symmetric tensor, i.e., an element of $\Omega^{r+q}_p(M)$, that is, an $(r+q)$-form. We get an $(r+q)$-form by considering the exterior product of $\omega$ and $\xi$.
Definition 6.2.2: Exterior product
We define the exterior product $\omega \wedge \xi$, which is an element of $\Omega^{r+q}_p(M)$, as follows:
$$\omega \wedge \xi = \frac{(r+q)!}{r!\,q!}\, \mathcal{A}(\omega \otimes \xi). \tag{6.2.2}$$

Therefore, $\omega \wedge \xi$ acting on $(V_1, \ldots, V_r, V_{r+1}, \ldots, V_{r+q})$ gives (using the definition of $\mathcal{A}$ in (6.1.5))
$$\omega \wedge \xi(V_1, \ldots, V_{r+q}) := \frac{(r+q)!}{r!\,q!}\, \mathcal{A}(\omega \otimes \xi)(V_1, \ldots, V_{r+q}) = \frac{1}{r!\,q!} \sum_{P \in S_{r+q}} \mathrm{sign}(P)\, (\omega \otimes \xi)(V_{P(1)}, \ldots, V_{P(r+q)})$$
$$= \frac{1}{r!\,q!} \sum_{P \in S_{r+q}} \mathrm{sign}(P)\, \omega(V_{P(1)}, \ldots, V_{P(r)})\, \xi(V_{P(r+1)}, \ldots, V_{P(r+q)}).$$
Example 6.2.1: Exterior Product of Cotangent Vectors
Assume $\omega \in \Omega^1_p(M)$, with $\omega := \omega_\mu dx^\mu$, and $\xi \in \Omega^1_p(M)$, with $\xi := \xi_\nu dx^\nu$. Write $\omega \wedge \xi = \zeta_{k,l}\, dx^k \otimes dx^l$. We want to obtain the expression of $\zeta_{k,l}$ in terms of the $\omega_\mu$ and $\xi_\nu$. We know that
$$\zeta_{k,l} = \omega \wedge \xi\Big(\frac{\partial}{\partial x^k}, \frac{\partial}{\partial x^l}\Big) = \frac{2!}{1!\,1!}\,\frac{1}{2!} \sum_{P \in S_2} \mathrm{sign}(P)\, P(\omega \otimes \xi)\Big(\frac{\partial}{\partial x^k}, \frac{\partial}{\partial x^l}\Big) = \omega\Big(\frac{\partial}{\partial x^k}\Big)\xi\Big(\frac{\partial}{\partial x^l}\Big) - \omega\Big(\frac{\partial}{\partial x^l}\Big)\xi\Big(\frac{\partial}{\partial x^k}\Big) = \omega_k \xi_l - \omega_l \xi_k.$$
The following theorem gives some of the properties of the exterior product, which are used in most calculations with $r$-forms.

Theorem 6.2.1: Properties of the Exterior Product
The exterior product is:
1. Linear in each factor, that is, bilinear.
2. Graded commutative, i.e., for $\omega \in \Omega^q_p(M)$, $\xi \in \Omega^r_p(M)$,
$$\omega \wedge \xi = (-1)^{qr}\, \xi \wedge \omega.$$
Notice in particular that if $q$ is odd, $\omega \wedge \omega = 0$.
3. Associative:
$$(\omega \wedge \eta) \wedge \xi = \omega \wedge (\eta \wedge \xi).$$
4. Such that
$$F^*(\omega \wedge \xi) = (F^*\omega) \wedge (F^*\xi), \tag{6.2.3}$$
where $F^*$ is the pull-back associated with a smooth map $f : M \to N$.
We give a sketch of the proof of property 3 and postpone the proof of properties 1, 2, and 4 to Exercise 6.1, below. The proof uses the following lemma.

Lemma 6.2.1: Property of the Anti-symmetrizer Operation
(Theorem 2 part (1) in Spivak, pg. 203.) If $\mathcal{A}(\omega) = 0$ for a tensor $\omega$, then $\mathcal{A}(\xi \otimes \omega) = \mathcal{A}(\omega \otimes \xi) = 0$ for every tensor $\xi$.
We calculate, using the definition (assume $\omega \in \Omega^r_p(M)$, $\eta \in \Omega^q_p(M)$, $\xi \in \Omega^s_p(M)$),
$$(\omega \wedge \eta) \wedge \xi := \frac{(r+q+s)!}{(r+q)!\,s!}\, \mathcal{A}\big((\omega \wedge \eta) \otimes \xi\big) = \frac{(r+q+s)!}{(r+q)!\,s!}\, \mathcal{A}\Big(\frac{(r+q)!}{r!\,q!}\, \mathcal{A}(\omega \otimes \eta) \otimes \xi\Big)$$
$$= \frac{(r+q+s)!}{r!\,q!\,s!}\, \mathcal{A}\big((\mathcal{A}(\omega \otimes \eta)) \otimes \xi\big) = \frac{(r+q+s)!}{r!\,q!\,s!}\, \mathcal{A}\big((\mathcal{A}(\omega \otimes \eta) + \omega \otimes \eta - \omega \otimes \eta) \otimes \xi\big)$$
$$= \frac{(r+q+s)!}{r!\,q!\,s!}\, \big[\mathcal{A}\big((\mathcal{A}(\omega \otimes \eta) - \omega \otimes \eta) \otimes \xi\big) + \mathcal{A}(\omega \otimes \eta \otimes \xi)\big].$$
The first term in the square bracket is zero because of Lemma 6.2.1, since with $\omega' := \mathcal{A}(\omega \otimes \eta) - \omega \otimes \eta$ we have $\mathcal{A}\omega' = 0$. Therefore, we have
$$(\omega \wedge \eta) \wedge \xi = \frac{(r+q+s)!}{r!\,q!\,s!}\, \mathcal{A}(\omega \otimes \eta \otimes \xi). \tag{6.2.4}$$
The same result is obtained if we start with $\omega \wedge (\eta \wedge \xi)$, which proves the claim.⁵
In view of part 3 of Theorem 6.2.1, we can simply write $\omega \wedge \eta \wedge \xi$ for $(\omega \wedge \eta) \wedge \xi$ or $\omega \wedge (\eta \wedge \xi)$. Moreover, extending formula (6.2.4) inductively, we have

Corollary 6.2.1: Iterated Exterior Products
For $\omega_i \in \Omega^{r_i}_p(M)$, $i = 1, \ldots, k$,
$$\omega_1 \wedge \omega_2 \wedge \cdots \wedge \omega_k = \frac{(r_1 + r_2 + \cdots + r_k)!}{r_1!\, r_2! \cdots r_k!}\, \mathcal{A}(\omega_1 \otimes \omega_2 \otimes \cdots \otimes \omega_k), \tag{6.2.5}$$
which extends the definition (6.2.2).
Example 6.2.2: Exterior Product on $\mathbb{R}^2$
Consider the 2-form $dx \wedge dy \in \Omega^2_p(\mathbb{R}^2)$. We calculate $dx \wedge dy(V_1, V_2)$ for two tangent vectors $V_1 := a_1 \frac{\partial}{\partial x} + b_1 \frac{\partial}{\partial y}$ and $V_2 := a_2 \frac{\partial}{\partial x} + b_2 \frac{\partial}{\partial y}$. By definition $dx \wedge dy = dx \otimes dy - dy \otimes dx$, so that we have
$$dx \wedge dy(V_1, V_2) = (dx \otimes dy - dy \otimes dx)(V_1, V_2) = dx \otimes dy(V_1, V_2) - dy \otimes dx(V_1, V_2) = a_1 b_2 - b_1 a_2.$$
This is the oriented area of the parallelogram with sides $V_1$ and $V_2$, which justifies the fact that this form is called the area element. If we use polar coordinates $r$ and $\theta$ instead, from $dx = \cos(\theta)dr - r\sin(\theta)d\theta$ and $dy = \sin(\theta)dr + r\cos(\theta)d\theta$, we get that the area element is
$$dx \wedge dy = (\cos(\theta)dr - r\sin(\theta)d\theta) \wedge (\sin(\theta)dr + r\cos(\theta)d\theta) = r\cos^2(\theta)\, dr \wedge d\theta - r\sin^2(\theta)\, d\theta \wedge dr = r\, dr \wedge d\theta.$$

⁵ For more details see the proof of part (3) of Theorem 2 in Spivak, pg. 203.
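The change to polar coordinates can be double-checked by noting that the coefficient picked up by $dr \wedge d\theta$ is the Jacobian determinant of the map $(r,\theta) \mapsto (x,y)$. A sympy sketch:

```python
import sympy as sp

r, theta = sp.symbols("r theta", positive=True)
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# dx ^ dy = det(J) dr ^ dtheta, with J the Jacobian of (x, y) in (r, theta)
J = sp.Matrix([x, y]).jacobian(sp.Matrix([r, theta]))
assert sp.simplify(J.det()) == r   # recovering dx ^ dy = r dr ^ dtheta
```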
6.3
Characterization of the vector spaces Ωrp (M )
We now want to find a suitable basis for $\Omega^r_p(M)$. An element $\omega \in \Omega^r_p(M)$ can be written as
$$\omega = \omega_{\mu_1,\ldots,\mu_r}\, dx^{\mu_1} \otimes \cdots \otimes dx^{\mu_r}, \tag{6.3.1}$$
and unless we have information on the coefficients $\omega_{\mu_1,\ldots,\mu_r}$, there is no sign that this tensor is totally antisymmetric. We now show that the exterior products of the $dx^\mu$ form a basis.
Proposition 6.3.1: The $dx^{\mu_1} \wedge \cdots \wedge dx^{\mu_r}$ form a basis of $\Omega^r_p(M)$
To see this, we apply $\mathcal{A}$ to both the left hand side and the right hand side of (6.3.1), using the fact that $\mathcal{A}\omega = \omega$, and we have
$$\omega = \mathcal{A}\omega = \omega_{\mu_1,\ldots,\mu_r}\, \mathcal{A}(dx^{\mu_1} \otimes \cdots \otimes dx^{\mu_r}) = \frac{1}{r!}\, \omega_{\mu_1,\ldots,\mu_r}\, dx^{\mu_1} \wedge \cdots \wedge dx^{\mu_r}. \tag{6.3.2}$$
This shows that the $dx^{\mu_1} \wedge \cdots \wedge dx^{\mu_r}$, $\mu_1, \ldots, \mu_r \in \{1, \ldots, m\}$, span $\Omega^r_p(M)$. Moreover, because of the properties of Theorem 6.2.1, we can assume that $\mu_1 < \mu_2 < \cdots < \mu_r$. The $\binom{m}{r}$ tensors $dx^{\mu_1} \wedge \cdots \wedge dx^{\mu_r}$, with $\mu_1 < \mu_2 < \cdots < \mu_r$, therefore span $\Omega^r_p(M)$. To show that this is a basis, we have to show that these tensors are linearly independent. Take a linear combination of them which gives zero:
$$0 = \sum_{\mu_1 < \mu_2 < \cdots < \mu_r} a_{\mu_1,\ldots,\mu_r}\, dx^{\mu_1} \wedge \cdots \wedge dx^{\mu_r}, \tag{6.3.3}$$
and apply it to $\big(\frac{\partial}{\partial x^{k_1}}, \ldots, \frac{\partial}{\partial x^{k_r}}\big)$ for fixed $k_1, \ldots, k_r$, with $k_1 < k_2 < \cdots < k_r$. This gives
$$0 = \sum_{\mu_1 < \mu_2 < \cdots < \mu_r} a_{\mu_1,\ldots,\mu_r}\, r!\, \mathcal{A}(dx^{\mu_1} \otimes \cdots \otimes dx^{\mu_r})\Big(\frac{\partial}{\partial x^{k_1}}, \ldots, \frac{\partial}{\partial x^{k_r}}\Big) \tag{6.3.4}$$
$$= \sum_{\mu_1 < \mu_2 < \cdots < \mu_r} a_{\mu_1,\ldots,\mu_r} \Big[\sum_{P \in S_r} \mathrm{sign}(P)\, dx^{\mu_1}\Big(\frac{\partial}{\partial x^{k_{P(1)}}}\Big) \cdots dx^{\mu_r}\Big(\frac{\partial}{\partial x^{k_{P(r)}}}\Big)\Big].$$
Consider one of the terms inside the square brackets, corresponding to a fixed value of $\mu_1 < \mu_2 < \cdots < \mu_r$. It is easily seen that the only term in the sum which is possibly different from zero is the one corresponding to the basic permutation $P : (1,2,\ldots,r) \to (1,2,\ldots,r)$. If this was not the case, we would have at least one inversion, that is, $i < j$ and $k_{P(j)} < k_{P(i)}$. Then we would have in the products the terms $\delta^{\mu_i}_{k_{P(i)}}$ and $\delta^{\mu_j}_{k_{P(j)}}$, both equal to 1, that is, $\mu_i = k_{P(i)}$ and $\mu_j = k_{P(j)}$. However, $k_{P(j)} < k_{P(i)}$ contradicts $\mu_i < \mu_j$. Moreover, necessarily we must have $k_j = \mu_j$ for $j = 1, \ldots, r$. This therefore shows $a_{k_1,\ldots,k_r} = 0$.
We have therefore found a basis. We notice that, from part 2 of Theorem 6.2.1, $\Omega^q_p(M)$ is zero if $q > m$. Moreover, by the equality of the dimensions, $\Omega^r_p(M)$ is isomorphic to $\Omega^{m-r}_p(M)$. The space
$$\Omega^0_p(M) \oplus \Omega^1_p(M) \oplus \cdots \oplus \Omega^m_p(M), \tag{6.3.5}$$
with the wedge product has the structure of a graded algebra, the grading being given by the j
in Ωjp (M ).
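The basis of increasing multi-indices, and the equality of dimensions behind the isomorphism $\Omega^r_p(M) \cong \Omega^{m-r}_p(M)$, can be enumerated directly. A sketch (the value $m = 4$ is an arbitrary example):

```python
from itertools import combinations
from math import comb

m = 4  # dimension of the manifold (example value)
for r in range(m + 1):
    # Basis elements dx^{mu_1} ^ ... ^ dx^{mu_r} with mu_1 < ... < mu_r
    basis = list(combinations(range(1, m + 1), r))
    assert len(basis) == comb(m, r)       # dim of Omega^r_p is (m choose r)
    assert comb(m, r) == comb(m, m - r)   # matching dim of Omega^{m-r}_p
print([comb(m, r) for r in range(m + 1)])  # [1, 4, 6, 4, 1]
```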
6.4
Exercises
Exercise 6.1 Prove parts 1,2,4 of Theorem 6.2.1.
Exercise 6.2 Prove or disprove that properties 1, 2, 3 hold if the exterior product $\wedge$ is replaced by the tensor product $\otimes$. (For property 4, see (3.1.4).)
Exercise 6.3 Use Part 2 of Theorem 6.2.1 to prove Ωqp (M ) is zero if q > m.
7
Differential Forms Part II: Fields and the Exterior Derivative
7.1
Fields
Definition 7.1.1: r-form Field
Analogously to what we have done for general tensor fields, we define an $r$-form field as a smooth assignment of an $r$-form to every point $p$. The space of $r$-form fields will be denoted by $\Omega^r(M)$ (without reference to the point $p$).
Obviously $\Omega^r(M) \subseteq \mathcal{T}^0_r(M)$. With a slight abuse of terminology, we shall refer to $r$-form fields simply as $r$-forms again, or as differential forms, omitting the word 'field'. In this context, a differential form of order $r$, in a given basis (i.e., given a system of coordinates), can be written as
$$\omega = \omega_{\mu_1,\ldots,\mu_r}\, dx^{\mu_1} \otimes \cdots \otimes dx^{\mu_r},$$
where the $\omega_{\mu_1,\ldots,\mu_r}$ are now smooth functions defined on $M$, $dx^{\mu_1}$ denotes the covector field assigning to any point $p$ the one-form $dx^{\mu_1}|_p$, and, of course, the $\omega_{\mu_1,\ldots,\mu_r}$ are totally antisymmetric, i.e., satisfy (6.1.10). More commonly, we write it as
$$\omega = \frac{1}{r!}\, \omega_{\mu_1,\ldots,\mu_r}\, dx^{\mu_1} \wedge \cdots \wedge dx^{\mu_r}, \tag{7.1.1}$$
cf. (6.3.2). Sometimes, we also write it as
$$\omega = \sum_{\mu_1 < \mu_2 < \cdots < \mu_r} \omega_{\mu_1,\mu_2,\ldots,\mu_r}\, dx^{\mu_1} \wedge dx^{\mu_2} \wedge \cdots \wedge dx^{\mu_r}, \tag{7.1.2}$$
where the arbitrary functions $\omega_{\mu_1,\mu_2,\ldots,\mu_r}$ for $\mu_1 < \mu_2 < \cdots < \mu_r$ automatically determine, by anti-symmetry, the other $\omega_{\mu_1,\mu_2,\ldots,\mu_r}$.
7.2
The exterior derivative
Definition 7.2.1: Exterior Derivative
The exterior derivative dr or differential is a map dr : Ωr (M ) → Ωr+1 (M ). It is
defined on the expression of a differential form in local coordinates. Then it is shown that
it does not depend on the coordinates. In particular given ω ∈ Ωr (M ), we have
1
(dωµ1 ,...,µr ) ∧ dxµ1 ∧ · · · ∧ dxµr =
r!
1 ∂ωµ1 ,...,µr
dxµ ∧ dxµ1 ∧ · · · ∧ dxµr ,
r!
∂xµ
dr ω :=
where df denotes the differential of a function f .
61
(7.2.1)
7.2.1
Independence of coordinates
If $\omega$ is written in the $y$ coordinates as
$$\omega = \frac{1}{r!}\, \tilde\omega_{\nu_1,\ldots,\nu_r}\, dy^{\nu_1} \wedge \cdots \wedge dy^{\nu_r}, \tag{7.2.2}$$
then $d_r\omega$ would have been defined, as in (7.2.1), by
$$d'_r\omega = \frac{1}{r!}\, \frac{\partial \tilde\omega_{\nu_1,\ldots,\nu_r}}{\partial y^\nu}\, dy^\nu \wedge dy^{\nu_1} \wedge \cdots \wedge dy^{\nu_r}. \tag{7.2.3}$$
We want to show that $d_r\omega$ in (7.2.1) and $d'_r\omega$ in (7.2.3) are actually the same tensor. By naturally extending the argument that led to (2.2.7) (cf. Exercise 5.6, which requires you to do a coordinate transformation for a general tensor field), we know that
$$\tilde\omega_{\nu_1,\ldots,\nu_r} = \omega_{\mu_1,\ldots,\mu_r}\, \frac{\partial x^{\mu_1}}{\partial y^{\nu_1}} \frac{\partial x^{\mu_2}}{\partial y^{\nu_2}} \cdots \frac{\partial x^{\mu_r}}{\partial y^{\nu_r}}, \tag{7.2.4}$$
and inserting this in (7.2.3) we get
$$d'_r\omega = \frac{1}{r!}\, \frac{\partial}{\partial y^\nu}\Big(\omega_{\mu_1,\ldots,\mu_r}\, \frac{\partial x^{\mu_1}}{\partial y^{\nu_1}} \cdots \frac{\partial x^{\mu_r}}{\partial y^{\nu_r}}\Big)\, dy^\nu \wedge dy^{\nu_1} \wedge \cdots \wedge dy^{\nu_r} \tag{7.2.5}$$
$$= \frac{1}{r!}\, \frac{\partial \omega_{\mu_1,\ldots,\mu_r}}{\partial y^\nu}\, \frac{\partial x^{\mu_1}}{\partial y^{\nu_1}} \cdots \frac{\partial x^{\mu_r}}{\partial y^{\nu_r}}\, dy^\nu \wedge dy^{\nu_1} \wedge \cdots \wedge dy^{\nu_r} + \frac{1}{r!}\, \omega_{\mu_1,\ldots,\mu_r}\, \frac{\partial}{\partial y^\nu}\Big(\frac{\partial x^{\mu_1}}{\partial y^{\nu_1}} \cdots \frac{\partial x^{\mu_r}}{\partial y^{\nu_r}}\Big)\, dy^\nu \wedge dy^{\nu_1} \wedge \cdots \wedge dy^{\nu_r}.$$
Now the second term in the right hand side above is zero. To see this, fix $\mu_1, \ldots, \mu_r$ and denote by ${}_jF^{\mu_1,\ldots,\mu_r}_{\nu_1,\ldots,\nu_r}$ the product $\frac{\partial x^{\mu_1}}{\partial y^{\nu_1}} \frac{\partial x^{\mu_2}}{\partial y^{\nu_2}} \cdots \frac{\partial x^{\mu_r}}{\partial y^{\nu_r}}$ with the $j$-th factor omitted. Then the term that multiplies $\omega_{\mu_1,\ldots,\mu_r}$ in the second term is
$$\sum_{j=1}^r \frac{\partial^2 x^{\mu_j}}{\partial y^\nu \partial y^{\nu_j}}\, {}_jF^{\mu_1,\ldots,\mu_r}_{\nu_1,\ldots,\nu_r}\, dy^\nu \wedge dy^{\nu_1} \wedge \cdots \wedge dy^{\nu_r}.$$
Consider now each of the $r$ terms in the above sum, which is itself a sum over $\nu, \nu_1, \ldots, \nu_r$. Fix a term in the sum corresponding to certain values $\nu = \bar\nu$, $\nu_1 = \bar\nu_1$, ..., $\nu_r = \bar\nu_r$, i.e., the $(r+1)$-tuple $\bar\nu, \bar\nu_1, \ldots, \bar\nu_j, \ldots, \bar\nu_r$. The term corresponding to the $(r+1)$-tuple $\bar\nu_j, \bar\nu_1, \ldots, \bar\nu, \ldots, \bar\nu_r$ is equal and opposite, because of the equality of mixed derivatives and property 2 of Theorem 6.2.1. These terms all cancel, and all the $r$ terms in the sum are zero. We are left with
$$d'_r\omega = \frac{1}{r!}\, \frac{\partial \omega_{\mu_1,\ldots,\mu_r}}{\partial y^\nu}\, \frac{\partial x^{\mu_1}}{\partial y^{\nu_1}} \cdots \frac{\partial x^{\mu_r}}{\partial y^{\nu_r}}\, dy^\nu \wedge dy^{\nu_1} \wedge \cdots \wedge dy^{\nu_r} \tag{7.2.6}$$
$$= \frac{1}{r!}\, \frac{\partial \omega_{\mu_1,\ldots,\mu_r}}{\partial y^\nu}\, dy^\nu \wedge \Big(\frac{\partial x^{\mu_1}}{\partial y^{\nu_1}}\, dy^{\nu_1}\Big) \wedge \Big(\frac{\partial x^{\mu_2}}{\partial y^{\nu_2}}\, dy^{\nu_2}\Big) \wedge \cdots \wedge \Big(\frac{\partial x^{\mu_r}}{\partial y^{\nu_r}}\, dy^{\nu_r}\Big)$$
$$= \frac{1}{r!}\, \frac{\partial \omega_{\mu_1,\ldots,\mu_r}}{\partial y^\nu}\, dy^\nu \wedge dx^{\mu_1} \wedge dx^{\mu_2} \wedge \cdots \wedge dx^{\mu_r}.$$
The first factor in the expression above is the differential of the function $\omega_{\mu_1,\ldots,\mu_r}$. If we write this differential in the $x$ coordinates, we find the expression of $d_r\omega$, as desired.
7.3
Properties of the exterior derivative
The following theorem describes some of the main properties of exterior differentiation. In the following, we drop the index $r$ in $d_r$ and denote the exterior derivative simply by $d$.
Theorem 7.3.1: Properties of the Exterior Derivative
The exterior derivative $d : \Omega^r(M) \to \Omega^{r+1}(M)$
1. is linear:
$$d(a\omega_1 + b\omega_2) = a\,d\omega_1 + b\,d\omega_2; \qquad (7.3.1)$$
2. satisfies, for $\xi \in \Omega^q(M)$, $\omega \in \Omega^r(M)$,
$$d(\xi\wedge\omega) = d\xi\wedge\omega + (-1)^q\,\xi\wedge d\omega; \qquad (7.3.2)$$
3. satisfies
$$d^2 = 0. \qquad (7.3.3)$$
Proof. Property 1. follows directly from the definition, while 2. is left as an exercise. We only prove 3.:
$$d(d\omega) = d\left(\frac{1}{r!}\,\frac{\partial\omega_{\mu_1,\ldots,\mu_r}}{\partial x^{\mu}}\, dx^{\mu}\wedge dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r}\right) = \frac{1}{r!}\left(\frac{\partial}{\partial x^{k}}\,\frac{\partial\omega_{\mu_1,\ldots,\mu_r}}{\partial x^{\mu}}\right) dx^{k}\wedge dx^{\mu}\wedge dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r}.$$
Inside the round parentheses, the terms with $k = \mu$ are zero because $dx^{k}\wedge dx^{k} = 0$. The terms with $k \neq \mu$ cancel pairwise because of the equality of mixed derivatives.
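The identity $d^2 = 0$ can be checked concretely on $\mathbb{R}^3$, where (as discussed in the next example) $d$ on a 1-form produces curl-like components and applying $d$ again produces their divergence. A minimal sympy sketch, with component functions chosen arbitrarily:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# An arbitrary smooth 1-form ω = P dx + Q dy + R dz on R^3
P, Q, R = x*y*z, sp.sin(x)*z**2, x**2 + sp.cos(y)

# dω = a dx∧dy + b dy∧dz + c dz∧dx, with
a = sp.diff(Q, x) - sp.diff(P, y)   # coefficient of dx∧dy
b = sp.diff(R, y) - sp.diff(Q, z)   # coefficient of dy∧dz
c = sp.diff(P, z) - sp.diff(R, x)   # coefficient of dz∧dx

# d(dω) = (b_x + c_y + a_z) dx∧dy∧dz: mixed partials cancel pairwise
ddw = sp.simplify(sp.diff(b, x) + sp.diff(c, y) + sp.diff(a, z))
assert ddw == 0
```

The cancellation seen by sympy is exactly the equality of mixed derivatives used in the proof above.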
The following theorem says that properties (7.3.1), (7.3.2) and (7.3.3) uniquely define, in a coordinate independent fashion, the exterior derivative.
Theorem 7.3.2: Uniqueness of the Exterior Derivative
Consider an operator $d' : \Omega^r(M) \to \Omega^{r+1}(M)$ which satisfies (7.3.1), (7.3.2) and (7.3.3) (with $d$ replaced by $d'$) and agrees with $d$ on functions. Then $d' = d$.
Notice that this theorem gives another way to show that the definition of $d$ does not depend on coordinates. We could have simply defined $d$ as the unique operator that satisfies (7.3.1), (7.3.2) and (7.3.3) and is equal to the differential on functions in $\mathcal{F}(M)$.
Proof. (The proof follows Spivak, pp. 211-212.) By linearity, it is enough to show that
$$d'(f\,dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r}) = d(f\,dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r}) := df\wedge dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r}. \qquad (7.3.4)$$
From property (7.3.2), we have
$$d'(f\,dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r}) = d'f\wedge dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r} + f\,d'(dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r}) = df\wedge dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r} + f\,d'(dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r}), \qquad (7.3.5)$$
because of the equality with $d$ on functions. It is therefore enough to show that $d'(dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r}) = 0$, or equivalently, since $dx^{\mu_j} = d'x^{\mu_j}$ (because of equality on functions), that $d'(d'x^{\mu_1}\wedge\cdots\wedge d'x^{\mu_r}) = 0$. This is shown by induction on $r$. For $r = 1$ it is true because of (7.3.3), and using property (7.3.2),
$$d'(d'x^{\mu_1}\wedge\cdots\wedge d'x^{\mu_r}) = d'(d'x^{\mu_1})\wedge d'x^{\mu_2}\wedge\cdots\wedge d'x^{\mu_r} - d'x^{\mu_1}\wedge d'(d'x^{\mu_2}\wedge\cdots\wedge d'x^{\mu_r}) = 0 - 0 = 0, \qquad (7.3.6)$$
where we have used the inductive assumption.
Additionally, every element in $\Omega^r(M) \subseteq \mathcal{T}^0_r(M)$ can be seen as a multi-linear map from $r$-tuples of vector fields to the space of smooth functions, i.e.,
$$\omega : \mathcal{X}(M)\times\mathcal{X}(M)\times\cdots\times\mathcal{X}(M) \to \mathcal{F}(M).$$
One more property, which can also be taken as a coordinate independent definition of the exterior derivative, is given in terms of how, for $\omega\in\Omega^r(M)$, $d\omega\in\Omega^{r+1}(M)$ acts on $(r+1)$-tuples of vector fields to give functions in $\mathcal{F}(M)$. For $X_j\in\mathcal{X}(M)$, $j = 1,\ldots,r+1$, we have (the proof that this definition is equivalent to the one previously given is omitted; it can be found for instance in Spivak, Theorem 13, Chapter 7)
$$d\omega(X_1,\ldots,X_{r+1}) = \sum_{i=1}^{r+1}(-1)^{i+1}\,X_i\bigl(\omega(X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_{r+1})\bigr) \qquad (7.3.7)$$
$$+\;\sum_{i<j}(-1)^{i+j}\,\omega([X_i,X_j],X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_{j-1},X_{j+1},\ldots,X_{r+1}).$$
Special cases of this formula are important and often used in calculations. For $r = 0$, $\omega$ is just a smooth function and formula (7.3.7) gives $d\omega(X_1) = X_1(\omega)$, that is, the definition of the differential of a function (cf. (2.2.1)). If $\omega$ is a 1-form we get, with $X, Y \in \mathcal{X}(M)$,
$$d\omega(X, Y) = X(\omega(Y)) - Y(\omega(X)) - \omega([X, Y]). \qquad (7.3.8)$$

7.3.1 Examples
Example 7.3.1: $M = \mathbb{R}^3$
Consider $M = \mathbb{R}^3$. Since $\Omega^0_p(M)$ and $\Omega^3_p(M)$ are both one-dimensional, the elements of $\Omega^0(M)$ and of $\Omega^3(M)$ can be identified with functions. That is, an element in $\Omega^0(M)$ is a function $\omega_0$, and an element in $\Omega^3(M)$ is identified by the function $\omega_{x,y,z}$ in $\omega_3 = \omega_{x,y,z}\,dx\wedge dy\wedge dz$. Analogously, since $\Omega^1_p(M)$ and $\Omega^2_p(M)$ are both three-dimensional, the elements of $\Omega^1(M)$ and of $\Omega^2(M)$ can be identified with vector fields. That is, an element $\omega_1\in\Omega^1(M)$ is identified by three functions $\omega_x, \omega_y, \omega_z$, as $\omega_1 = \omega_x\,dx + \omega_y\,dy + \omega_z\,dz$. An element $\omega_2\in\Omega^2(M)$ is identified by three functions $\omega_{x,y}, \omega_{y,z}, \omega_{z,x}$, as $\omega_2 = \omega_{x,y}\,dx\wedge dy + \omega_{y,z}\,dy\wedge dz + \omega_{z,x}\,dz\wedge dx$. Applying the exterior derivative to these differential forms gives well known operations of vector calculus. In particular,
$$d\omega_0 = \frac{\partial\omega_0}{\partial x}\,dx + \frac{\partial\omega_0}{\partial y}\,dy + \frac{\partial\omega_0}{\partial z}\,dz$$
gives the gradient ($\nabla$) of $\omega_0$.
$$d\omega_1 = \left(\frac{\partial\omega_y}{\partial x} - \frac{\partial\omega_x}{\partial y}\right)dx\wedge dy + \left(\frac{\partial\omega_z}{\partial y} - \frac{\partial\omega_y}{\partial z}\right)dy\wedge dz + \left(\frac{\partial\omega_x}{\partial z} - \frac{\partial\omega_z}{\partial x}\right)dz\wedge dx$$
corresponds to the rot ($\nabla\times$) operation.
$$d\omega_2 = \left(\frac{\partial\omega_{y,z}}{\partial x} + \frac{\partial\omega_{z,x}}{\partial y} + \frac{\partial\omega_{x,y}}{\partial z}\right)dx\wedge dy\wedge dz$$
corresponds to the div operation ($\nabla\cdot$).
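These three correspondences can be made concrete with sympy; the component functions below are arbitrary illustrative choices:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# d on a 0-form reproduces the gradient
w0 = x**2 * y + z
grad = [sp.diff(w0, v) for v in (x, y, z)]

# d on a 1-form wx dx + wy dy + wz dz reproduces the rot (curl)
wx, wy, wz = y*z, x*z, x*y
rot = [sp.diff(wy, x) - sp.diff(wx, y),   # dx∧dy component
       sp.diff(wz, y) - sp.diff(wy, z),   # dy∧dz component
       sp.diff(wx, z) - sp.diff(wz, x)]   # dz∧dx component

# d on a 2-form wxy dx∧dy + wyz dy∧dz + wzx dz∧dx reproduces the div
wxy, wyz, wzx = z, x, y
div = sp.diff(wyz, x) + sp.diff(wzx, y) + sp.diff(wxy, z)

assert grad == [2*x*y, x**2, 1]
assert rot == [0, 0, 0]   # this 1-form is closed: it is d(xyz)
assert div == 3
```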
Example 7.3.2: General Relativity
(From general relativity.) In the space-time with coordinates $x^0, x^1, x^2, x^3$, where $x^0$ denotes time and $x^1, x^2, x^3$ are the three spatial coordinates, the electromagnetic potential is a one-form $A = A_\mu\,dx^\mu$. The electromagnetic tensor is defined as $dA$. Expanding $dA$ and separating the basis elements which contain $dx^0$ from those that do not, we can write $dA$ as
$$dA = -E_1\,dx^0\wedge dx^1 - E_2\,dx^0\wedge dx^2 - E_3\,dx^0\wedge dx^3 + B_3\,dx^1\wedge dx^2 - B_2\,dx^1\wedge dx^3 + B_1\,dx^2\wedge dx^3, \qquad (7.3.9)$$
for some functions E1,2,3 and B1,2,3 . If we interpret E1,2,3 and B1,2,3 as components of the
electric and magnetic field, respectively, the two Maxwell’s equations
$$\nabla\cdot\vec{B} = 0, \qquad \frac{\partial\vec{B}}{\partial t} + \nabla\times\vec{E} = 0, \qquad (7.3.10)$$
follow from the fact that $d^2A = 0$, i.e., by setting each of the four components of this 3-form equal to zero.
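The two homogeneous Maxwell equations can be extracted symbolically from $d^2A = 0$; the potential components below are arbitrary placeholder functions, and the identification of $E$ and $B$ follows (7.3.9):

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3')   # x0 = time
coords = (x0, x1, x2, x3)

# Hypothetical potential components A_mu (arbitrary smooth functions)
A = [sp.Function(f'A{m}')(*coords) for m in range(4)]

# F = dA has components F_{mu,nu} = d_mu A_nu - d_nu A_mu
F = [[sp.diff(A[n], coords[m]) - sp.diff(A[m], coords[n]) for n in range(4)]
     for m in range(4)]

# Matching (7.3.9): coefficient of dx^0∧dx^i is -E_i;
# B_3 = F_12, B_2 = -F_13, B_1 = F_23
E = [-F[0][1], -F[0][2], -F[0][3]]
B = [F[2][3], -F[1][3], F[1][2]]

# d^2 A = 0 encodes  div B = 0  and  dB/dt + curl E = 0
divB = sp.simplify(sp.diff(B[0], x1) + sp.diff(B[1], x2) + sp.diff(B[2], x3))
curlE = [sp.diff(E[2], x2) - sp.diff(E[1], x3),
         sp.diff(E[0], x3) - sp.diff(E[2], x1),
         sp.diff(E[1], x1) - sp.diff(E[0], x2)]
faraday = [sp.simplify(sp.diff(B[i], x0) + curlE[i]) for i in range(3)]

assert divB == 0
assert all(eq == 0 for eq in faraday)
```

Both identities reduce, term by term, to the equality of mixed partial derivatives of the $A_\mu$.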
7.3.2 Closed and Exact Forms
Definition 7.3.1: Exact and Closed r-forms
An $r$-form $\omega$ is called exact if there exists an $(r-1)$-form $\alpha$ such that $\omega = d\alpha$. An $r$-form $\omega$ is called closed if $d\omega = 0$. It is clear, from the property $d^2 = 0$, that exact forms are closed. Closed forms span a subspace of $\Omega^r(M)$, which is denoted by $Z^r(M)$. Exact forms span a subspace of $Z^r(M)$, and therefore of $\Omega^r(M)$, which is denoted by $B^r(M)$. The quotient space (vector space)
$$H^r(M) := Z^r(M)/B^r(M) \qquad (7.3.11)$$
is called the $r$-th cohomology group.
Remark 7.3.1: Exact ODE’s
Recall that in the theory of differential equations an equation $M\,dx + N\,dy = 0$ is called exact if $M_y = N_x$. One then uses a potential function $f$ with $f_x = M$ and $f_y = N$, giving $f(x, y) = c$ as the solution. In our terminology this is not precise, since it says that a covector field being closed implies that it is exact. This is why for these exact equations we require that the open set in $\mathbb{R}^2$ where the covector field is defined is simply connected: it is path connected, and any two smooth paths on the open set with the same endpoints can be continuously deformed into each other (informally, the set has no holes). This condition lets closed imply exact for our covector field in $\mathbb{R}^2$.
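The standard example behind this remark is the angle form on $\mathbb{R}^2\setminus\{0\}$, a domain that is not simply connected: the form is closed but not exact, as a line integral over the unit circle reveals. A sympy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')

# ω = M dx + N dy with M = -y/(x²+y²), N = x/(x²+y²), on R² minus the origin
M = -y / (x**2 + y**2)
N = x / (x**2 + y**2)

# Closedness: M_y = N_x
closed = sp.simplify(sp.diff(M, y) - sp.diff(N, x))
assert closed == 0

# Yet ω is not exact: its integral over the unit circle is 2π ≠ 0,
# while the integral of an exact form over a closed curve must vanish.
t = sp.symbols('t')
pull_back = (M.subs({x: sp.cos(t), y: sp.sin(t)}) * sp.diff(sp.cos(t), t)
             + N.subs({x: sp.cos(t), y: sp.sin(t)}) * sp.diff(sp.sin(t), t))
circle_integral = sp.integrate(sp.simplify(pull_back), (t, 0, 2*sp.pi))
assert circle_integral == 2*sp.pi
```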
7.4 Interior product
Definition 7.4.1: Interior Product
Consider a vector field $X\in\mathcal{X}(M)$ and $\omega\in\Omega^r(M)$. The interior product $i_X : \Omega^r(M)\to\Omega^{r-1}(M)$ is defined as $i_X\omega = 0$ if $\omega\in\Omega^0(M)$ and
iX ω(X1 , X2 , . . . , Xr−1 ) = ω(X, X1 , X2 , . . . , Xr−1 ),
(7.4.1)
for any X1 , X2 , . . . , Xr−1 ∈ X (M ). This definition is coordinate independent.
The interior product is a form of ‘contraction’ (cf. subsection ??) between the vector
field X and the r−form ω. In order to see this, let us express iX ω in given coordinates.
We write $X = X^{\mu}\frac{\partial}{\partial x^{\mu}}$ and $\omega := \frac{1}{r!}\,\omega_{\mu_1,\mu_2,\ldots,\mu_r}\,dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_r}$, and $i_X\omega$ in coordinates as $i_X\omega = \frac{1}{(r-1)!}\,(i_X\omega)_{\nu_1,\ldots,\nu_{r-1}}\,dx^{\nu_1}\wedge\cdots\wedge dx^{\nu_{r-1}}$. We know that
$$(i_X\omega)_{\nu_1,\ldots,\nu_{r-1}} = i_X\omega\left(\frac{\partial}{\partial x^{\nu_1}},\ldots,\frac{\partial}{\partial x^{\nu_{r-1}}}\right) := \omega\left(X,\frac{\partial}{\partial x^{\nu_1}},\ldots,\frac{\partial}{\partial x^{\nu_{r-1}}}\right) = \omega\left(X^{\mu}\frac{\partial}{\partial x^{\mu}},\frac{\partial}{\partial x^{\nu_1}},\ldots,\frac{\partial}{\partial x^{\nu_{r-1}}}\right) = X^{\mu}\,\omega_{\mu,\nu_1,\ldots,\nu_{r-1}}. \qquad (7.4.2)$$
Therefore,
$$i_X\omega = \frac{1}{(r-1)!}\,X^{\mu}\,\omega_{\mu,\nu_1,\ldots,\nu_{r-1}}\,dx^{\nu_1}\wedge\cdots\wedge dx^{\nu_{r-1}}.$$

7.4.1 Properties of the interior product
The following theorem summarizes the main properties of the interior product.
Theorem 7.4.1: Properties of the Interior Product
iX has the following properties:
1. iX ω is linear in both X and ω.
2. for any function f ∈ F(M ), iX (df ) = df (X) = LX (f ) = X(f ).
3.
iX (ω ∧ ξ) = (iX ω) ∧ ξ + (−1)r ω ∧ iX ξ,
if ω ∈ Ωr (M ).
4. iX iY = −iY iX . In particular i2X = 0.
5. i[X,Y ] = LX ◦ iY − iY ◦ LX
Proof. Property 1. follows directly from the definition. Property 2. also follows from the
definition of Lie derivative of a function f and the definition of differential of a function.
Properties 3., 4. and 5. are left as exercise.
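The contraction formula and the antisymmetry property 4. can be illustrated numerically at a single point; the components below are random choices, and the $\frac{1}{r!}$ convention above means a 2-form is represented by an antisymmetric matrix of components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Components at a point of a 2-form on R^3: an antisymmetric matrix ω_{μν}
W = rng.standard_normal((3, 3))
W = W - W.T                      # antisymmetrize: ω_{μν} = -ω_{νμ}

X = rng.standard_normal(3)       # components X^μ of a vector
Y = rng.standard_normal(3)

# (i_X ω)_ν = X^μ ω_{μν}: a 1-form (covector)
iX_w = X @ W
assert iX_w.shape == (3,)

# i_Y i_X ω = ω(X, Y) is a number, and property 4. gives i_X i_Y ω = -i_Y i_X ω
assert np.isclose(Y @ (X @ W), -(X @ (Y @ W)))
```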
A lot of algebra can be done with differential forms and the operations $d$, $\wedge$ and $i_X$, together with Lie derivatives along vector fields, to obtain identities relating these operations. One important formula is the so-called Cartan magic formula, or, simply, Cartan identity, which relates the Lie derivative along a vector field $X$ with the interior product with respect to $X$. It says that the Lie derivative of a differential form along $X$ is the anticommutator of the exterior derivative $d$ and the interior product $i_X$. In particular, we have
Theorem 7.4.2
For every ω ∈ Ωr (M ),
$$L_X\omega = (i_X\circ d + d\circ i_X)\,\omega. \qquad (7.4.3)$$
Remark 7.4.1: Another Proof of Theorem 7.4.2
The proof we present is different from Nakahara's, and it does not use the representation of the various operations in given coordinates. It follows Dr. Zuoqin Wang's online notes from the University of Michigan (cf. Lecture 28 in http://www-personal.umich.edu/ wangzuoq/437W13/). In principle, there is nothing wrong with proofs done in specific coordinates. However, they tend to be a little messier with the indices, and it might be difficult to arrange the various terms and indices to get exactly the equality we want. Some tricks are found repeatedly in proofs of equalities that involve $i_X$, $L_X$ and $d$. One of them is to use induction on the order $r$ of a differential form. Another trick is to use property 2. of Proposition 5.2.1, which extends to differential forms as
$$L_X(t_1\wedge t_2) = (L_X t_1)\wedge t_2 + t_1\wedge L_X t_2. \qquad (7.4.4)$$
Often proofs for low degree tensors can be done using the expression in local coordinates without too much confusion with indices. One tool often used is the equality of mixed derivatives. These elements are present in the following proof.
Proof. By linearity, it is enough to show (7.4.3) for an $r$-form of the type $\omega = f\,dx^1\wedge dx^2\wedge\cdots\wedge dx^r$ with $f\in\mathcal{F}(M)$. If $r = 0$, (7.4.3) is true since $i_X\omega = 0$ and property 2. of Theorem 7.4.1 holds. To prove it for general $r\geq 1$, assume it true for $r-1$ and write $\omega_1 := f\,dx^2\wedge\cdots\wedge dx^r$, so that
$$\omega = dx^1\wedge\omega_1. \qquad (7.4.5)$$
Calculate the left hand side of (7.4.3) using (7.4.4) and (7.4.5). We have
$$L_X\omega = (L_X dx^1)\wedge\omega_1 + dx^1\wedge(L_X\omega_1). \qquad (7.4.6)$$
The right hand side of (7.4.3) is
$$(i_X\circ d + d\circ i_X)(dx^1\wedge\omega_1) = i_X\circ d(dx^1\wedge\omega_1) + d\circ i_X(dx^1\wedge\omega_1) = -i_X(dx^1\wedge d\omega_1) + d\bigl[(i_X dx^1)\,\omega_1 - dx^1\wedge i_X\omega_1\bigr],$$
using $d^2 = 0$, property 3. of Theorem 7.4.1, and (7.3.2). Again using these properties, we get
$$-(i_X dx^1)\,d\omega_1 + dx^1\wedge i_X\circ d\omega_1 + (d\,i_X dx^1)\wedge\omega_1 + (i_X dx^1)\,d\omega_1 + dx^1\wedge d(i_X\omega_1) = dx^1\wedge(d\,i_X\omega_1 + i_X d\omega_1) + d L_X x^1\wedge\omega_1. \qquad (7.4.7)$$
By using the inductive assumption, the first term is equal to $dx^1\wedge L_X\omega_1$. As for the second term, we have to use the property
$$d L_X f = L_X df, \qquad \forall f\in\mathcal{F}(M). \qquad (7.4.8)$$
This can be verified in local coordinates. So the second term above, using the function $x^1$ in place of $f$, is $L_X dx^1\wedge\omega_1$. This gives the equality of (7.4.7) with (7.4.6) and completes the proof of the Theorem.
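The Cartan identity can also be double-checked in coordinates for a 1-form on $\mathbb{R}^2$, using the standard component formula $(L_X\omega)_\nu = X^\mu\partial_\mu\omega_\nu + \omega_\mu\partial_\nu X^\mu$ for the Lie derivative of a 1-form; everything below ($P, Q, A, B$) is an arbitrary choice:

```python
import sympy as sp

x, y = sp.symbols('x y')

P, Q = x**2 * y, sp.sin(x) + y      # ω = P dx + Q dy
A, B = y, x * y                     # X = A ∂x + B ∂y

# Lie derivative of a 1-form: (L_X ω)_ν = X^μ ∂_μ ω_ν + ω_μ ∂_ν X^μ
LX = [A*sp.diff(P, x) + B*sp.diff(P, y) + P*sp.diff(A, x) + Q*sp.diff(B, x),
      A*sp.diff(Q, x) + B*sp.diff(Q, y) + P*sp.diff(A, y) + Q*sp.diff(B, y)]

# d(i_X ω): i_X ω = AP + BQ is a function; take its differential
f = A*P + B*Q
d_iX = [sp.diff(f, x), sp.diff(f, y)]

# i_X(dω): dω = (Q_x - P_y) dx∧dy, so i_X dω = -Bw dx + Aw dy with w = Q_x - P_y
w = sp.diff(Q, x) - sp.diff(P, y)
iX_d = [-B*w, A*w]

# Cartan: L_X ω = d i_X ω + i_X dω, component by component
assert all(sp.simplify(LX[i] - d_iX[i] - iX_d[i]) == 0 for i in range(2))
```

This is only a coordinate sketch for one low degree; the coordinate-free proof above covers all degrees.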
7.5 Exercises
Exercise 7.1 Prove part 2. of Theorem 7.3.1.
Exercise 7.2 Complete the Proof of Theorem 7.4.1.
Exercise 7.3 We could have defined, with $X\in\mathcal{X}(M)$, the interior product as
$$i^k_X\omega(X_1, X_2, \ldots, X_{r-1}) = \omega(X_1, X_2, \ldots, X_{k-1}, X, X_k, \ldots, X_{r-1}),$$
for $X_j\in\mathcal{X}(M)$, $j = 1,\ldots,r-1$, with $i^k_X : \Omega^r\to\Omega^{r-1}$. As a special case, $i^1_X = i_X$. What is the relationship between $i^k_X$ and $i_X$ for general $k$?
Exercise 7.4 Prove Formula (7.4.8)
8 Integration of differential forms on manifolds part I: Preliminary Concepts
Consider a differential $r$-form $\omega$ defined on a (in some sense measurable) set $S\subseteq M$, for example a submanifold of dimension $r$. Just like we did in the previous treatment, we may define integration of an $r$-form on an ($r$-dimensional) sub-manifold in terms of integration on $\mathbb{R}^r$. Since $S$ is (locally) Euclidean, there exists a coordinate chart $\varphi$ (locally) mapping $S$ to a set $\varphi(S)$ in $\mathbb{R}^r$, with coordinates $x^1,\ldots,x^r$. If $\omega := f\,dx^1\wedge dx^2\wedge\cdots\wedge dx^r$, we can (roughly) define
$$\int_S\omega = \int_{\varphi(S)} f\bigl(\varphi^{-1}(x^1, x^2, \ldots, x^r)\bigr)\,dx^1\,dx^2\cdots dx^r, \qquad (8.0.1)$$
where the integral on the right hand side is a standard Riemann integral in $\mathbb{R}^r$. Beside the easily settled issue of independence of coordinates in this definition, there are a few issues to be carefully analyzed. First of all, for domains of integration of Riemann integrals, we have a concept of orientation, which leads for instance to the fact that $\int_a^b f(x)\,dx = -\int_b^a f(x)\,dx$. We need to define a notion of orientation on manifolds, which generalizes the one on $\mathbb{R}$. Then, for most manifolds
and sets, it is not possible to find a single coordinate map φ mapping S to φ(S) since S might
be covered by more than one coordinate neighborhood. The device used to patch together these
various coordinate charts in the definition and calculation of the integral is called a partition of
unity. These topics will be treated in the next two subsections. Then we shall study integration
on special sets called chains which will lead us to the first version of Stokes’ theorem. We will
then generalize to integration over more general sets which we call regular domains and, on those,
we shall state a second version of Stokes’ theorem. This theorem is the most important one for
integration on manifolds, generalizing several theorems of multivariable calculus such as the
fundamental theorem of calculus, Green’s theorem, and the divergence theorem. Our treatment
mostly follows F. Warner’s book (Chapter 4).
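Definition (8.0.1) can be previewed in the simplest case: integrating a 1-form over a curve by pulling it back through a parametrization. A sketch (the curve and the form are chosen only for illustration):

```python
import sympy as sp

t = sp.symbols('t')

# S: the upper unit semicircle, parametrized by t ↦ (cos t, sin t), t ∈ [0, π]
xt, yt = sp.cos(t), sp.sin(t)

# ω = x dy - y dx, restricted to the curve: substitute and multiply by dx/dt, dy/dt
integrand = xt * sp.diff(yt, t) - yt * sp.diff(xt, t)   # simplifies to 1

# ∫_S ω reduces to a one-dimensional Riemann integral
result = sp.integrate(sp.simplify(integrand), (t, 0, sp.pi))
assert result == sp.pi

# Reversing orientation (t from π to 0) flips the sign, as for ∫_a^b = -∫_b^a
assert sp.integrate(sp.simplify(integrand), (t, sp.pi, 0)) == -sp.pi
```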
8.1 Orientation on Manifolds
Definition 8.1.1: Base Orientation
Consider a manifold $M$ and two overlapping charts $(U_i, \varphi_i)$ and $(U_j, \varphi_j)$, denoting by $x$ ($y$) the coordinates corresponding to $\varphi_i$ ($\varphi_j$). At a point $p\in U_i\cap U_j$, the tangent space $T_pM$ has a basis given by $\{\frac{\partial}{\partial x^i}|_p\}$, $i = 1,\ldots,m$, and a basis given by $\{\frac{\partial}{\partial y^j}|_p\}$, $j = 1,\ldots,m$, with the two bases related by the formula
$$\frac{\partial}{\partial y^j}\Big|_p = \frac{\partial x^i}{\partial y^j}\Big|_p\,\frac{\partial}{\partial x^i}\Big|_p. \qquad (8.1.1)$$
If
$$\det\left(\frac{\partial x^i}{\partial y^j}\Big|_p\right) > 0, \qquad (8.1.2)$$
then the two bases $\{\frac{\partial}{\partial x^i}|_p\}$ and $\{\frac{\partial}{\partial y^j}|_p\}$ are said to have the same orientation. Otherwise, if this determinant is $< 0$, they are said to have opposite orientation.
Definition 8.1.2: Orientable Manifold
A connected manifold $M$ with atlas $\{(U_j, \varphi_j)\}$ is said to be orientable if there exists a subcover of $\{U_j\}$ such that, for every pair of overlapping charts in this subcover and at any point $p$ in the intersection of the two coordinate neighborhoods, the corresponding bases of $T_pM$, $\{\frac{\partial}{\partial x^i}|_p\}$ and $\{\frac{\partial}{\partial y^j}|_p\}$, have the same orientation. The atlas (or the subcover) given this way is said to be orientation compatible. In other terms, a manifold is orientable if and only if there exists an orientation compatible atlas.
Example 8.1.1: S 1
All of the manifolds introduced in the examples in subsection 1.1 are orientable. Consider, for instance, the circle $S^1$ with the two charts
$$(U_1, \varphi_1), \quad U_1 = S^1 - \{(1, 0)\}, \quad \varphi_1 : U_1\to(0, 2\pi), \quad \varphi_1(\cos(\theta), \sin(\theta)) = \theta,$$
$$(U_2, \varphi_2), \quad U_2 = S^1 - \{(-1, 0)\}, \quad \varphi_2 : U_2\to(-\pi, \pi), \quad \varphi_2(\cos(u), \sin(u)) = u.$$
The map $\varphi_1\circ\varphi_2^{-1}$ is $\theta = u$ or $\theta = u + 2\pi$, according to which connected component of the intersection $U_1\cap U_2$ we are considering. The determinant of the Jacobian is $1 > 0$, and therefore the two charts share the same orientation and the manifold is orientable. The orientation is the counterclockwise one, since $\theta$ increases as $u$ increases, and vice versa (cf. Figure 15).
Figure 15: Orientation on the circle S 1
Example 8.1.2: Möbius strip
The Möbius strip in Figure 16 is the prototypical example of a non-orientable manifold. It consists of the strip depicted in the upper part of the figure, where the two ends are glued together. However, a half twist is performed so that the points $A$ and $B$ on the left and right ends of the strip match.
This manifold can be covered by two overlapping charts depicted in Figure 16. To give the resulting manifold the structure of a differentiable manifold, we cover it with two charts. The first chart, with coordinates $(x_1, y_1)$, is depicted in the upper part of the figure, and the associated coordinate neighborhood consists of the portion of the strip from $-\frac{3}{2}$ to $\frac{3}{2}$. It is mapped via the identity to $\mathbb{R}^2$. The second coordinate neighborhood, in the lower part of the figure, contains the segment $A$-$B$, which is mapped to a segment on the $y$ axis. The boxes between $-2$ and $-1$ and between $1$ and $2$ are mapped to the two boxes on the left and right side of the $y$ axis. However, the left box is twisted so as to make the points $A$ and $B$ match. The coordinates are $x_2$ and $y_2$. The regions $U$ and $V$ are where the two coordinate neighborhoods overlap. On $V$, the transition functions are $x_1 = 2 - x_2$, $y_1 = -y_2$, so that the Jacobian is
$$J_V = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix},$$
with determinant equal to $1$. However, on $U$ the transition functions are $x_1 = -x_2 - 2$, $y_1 = y_2$, so that the Jacobian is
$$J_U = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix},$$
with determinant equal to $-1$; the atlas is not orientation compatible and the manifold is not orientable.
Notice that this is not a proof that the Möbius strip cannot be given a differentiable structure with an atlas that is orientation compatible. It is only a proof that the Möbius strip with the atlas defined here is not orientable.
Figure 16: Differentiable structure for a Möbius strip
8.2 Partition of Unity
A partition of unity is a tool to patch together charts on a differentiable manifold. We sketch
the main points here, referring to the literature such as Spivak, or Warner for details.
Definition 8.2.1: Paracompact
We need to assume that the manifold $M$ is paracompact, that is, every open cover admits a refinement that is an open cover and is locally finite. Then $M$ has an atlas (compatible with the assigned differentiable structure) $\{(U_j, \varphi_j)\}$ such that every point $p$ is covered by only a finite subset of the $U_j$'s.
This hypothesis is not really a hypothesis since we have assumed that M is second countable
and Hausdorff, and it can be proven that this is sufficient to conclude that M is paracompact if M
is locally compact (See 1.3) (cf. for instance Lemma 1.9 in F. Warner). Clearly compact spaces
(manifolds) are automatically paracompact and, for example, all manifolds we have discussed in
the first lecture, which have atlases with a finite number of coordinate neighborhoods, are also
automatically paracompact. We assume therefore in the following that, for the atlas we use on our manifold $M$, each point is contained in only a finite number of coordinate neighborhoods.
Definition 8.2.2: Partition of unity
Given such a cover $\{U_j\}$, a partition of unity subordinate to the cover $\{U_j\}$ is a family of smooth functions $\varepsilon_j : M\to\mathbb{R}$ such that
1. $\varepsilon_j(p)\geq 0$, $\forall p\in M$,
2. $\varepsilon_j(p) = 0$ if $p\notin U_j$,
3. $\varepsilon_1(p) + \varepsilon_2(p) + \cdots = 1$, for every $p\in M$.
Notice that in the last condition, because of the choice of the cover and condition 2, only a finite number of terms needs to be considered. However, this number depends on the point $p\in M$.
The existence of a partition of unity is not a completely trivial matter, and a construction can be found in Warner or Spivak. We summarize here the main ideas without going into technical details. The main tool is the function
$$h_1(x) = e^{-\frac{1}{x^2}},\quad x\in(0, +\infty), \qquad h_1(x)\equiv 0,\quad x\in(-\infty, 0], \qquad (8.2.1)$$
or the function
$$h_2(x) = e^{-\frac{1}{x}},\quad x\in(0, +\infty), \qquad h_2(x)\equiv 0,\quad x\in(-\infty, 0], \qquad (8.2.2)$$
which are examples of $C^\infty$ functions which are zero on a set of positive measure but not identically zero (these functions are $C^\infty$ but not analytic). With these functions, we can define the bump function. Let $g(x)$ be given by
$$g(x) = \frac{h_2(x)}{h_2(x) + h_2(1-x)}, \qquad (8.2.3)$$
which is a function equal to zero for $x\leq 0$, grows from $0$ to $1$ in the interval $[0, 1]$, and is equal to $1$ for $x\geq 1$. The bump function
$$b(x) := g(x+2)\,g(2-x) \qquad (8.2.4)$$
is equal to zero outside the interval $(-2, 2)$, equal to $1$ in the interval $[-1, 1]$, continuously grows from zero to $1$ on $[-2, -1]$, and decreases from $1$ to zero on $[1, 2]$. The bump function (8.2.4) can be extended to higher dimensions by simply multiplying several bump functions with each other, one for each dimension, i.e., $b(x)b(y)b(z)\cdots$, and can be made asymmetric or can be shrunk or shifted by simply scaling or shifting the variables. Consider now the given open cover $\{U_j\}$ on $M$. According to the shrinking lemma (Theorem 14, p. 51 in Spivak), it is possible to obtain another cover $\{U'_j\}$ such that, for every $j$, $\bar U'_j\subset U_j$, where $\bar U'_j$ is the closure of $U'_j$. Now, let us consider one of the neighborhoods $U_j$ and the corresponding $U'_j$, and let us map it using the coordinate map $\varphi_j$ to $\mathbb{R}^m$. Let us assume for simplicity that $\varphi_j(U_j)$ is bounded, so that $\varphi_j(\bar U'_j)$ is compact. We can cover $\varphi_j(\bar U'_j)$ with a finite number of (hyper-)boxes so that the union of these boxes is entirely contained in $\varphi_j(U_j)$. This is shown in Figure 17.
Figure 17: Construction for Partition of Unity
For each box, construct a (generalized) bump function which is $1$ inside the box and zero outside $\varphi_j(U_j)$. By summing all these (finitely many) functions, we obtain a function which is strictly positive on $\varphi_j(U'_j)$ and zero outside $\varphi_j(U_j)$. Call this function $f_j$. The function $F_j = f_j\circ\varphi_j$ is a smooth function (back) on the manifold $M$, which is strictly positive on $U'_j$ and zero outside $U_j$. By the way it was constructed, this function is smooth. The functions
$$\varepsilon_j(p) := \frac{F_j(p)}{\sum_k F_k(p)}, \qquad (8.2.5)$$
where the sum is taken over the finite number of coordinate neighborhoods containing $p$, give a partition of unity for the given manifold, subordinate to the cover $\{U_j\}$. Notice in particular that the fact that the $U'_j$ form a cover guarantees that the denominator in (8.2.5) is always different from zero.
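The construction can be imitated in one dimension. Below is a minimal numerical sketch: the smooth step $g$ of (8.2.3), bump functions supported in two overlapping intervals covering $[0, 3]$ (the intervals are an illustrative choice, not from the text), and the normalization (8.2.5):

```python
import math

def h2(x):
    # e^{-1/x} for x > 0, identically 0 for x <= 0; C-infinity but not analytic
    return math.exp(-1.0 / x) if x > 0 else 0.0

def g(x):
    # smooth step: 0 for x <= 0, 1 for x >= 1  (formula (8.2.3))
    return h2(x) / (h2(x) + h2(1.0 - x))

def bump(x, lo, hi):
    # smooth bump: positive exactly on (lo, hi), zero outside
    return g(x - lo) * g(hi - x)

# Two overlapping "coordinate neighborhoods" covering [0, 3]
F = [lambda x: bump(x, -1.0, 2.0), lambda x: bump(x, 1.0, 4.0)]

def partition(x):
    vals = [f(x) for f in F]
    s = sum(vals)                      # never zero on the covered region
    return [v / s for v in vals]       # the functions ε_j of (8.2.5)

eps = partition(1.5)
assert abs(sum(eps) - 1.0) < 1e-12     # they sum to one
assert all(e >= 0 for e in eps)
assert partition(0.5) == [1.0, 0.0]    # only the first chart is active at 0.5
```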
8.3 Orientation and existence of a nowhere vanishing form
Definition 8.3.1: Volume form
One of the main features of an orientable manifold is that it admits an m-form (recall
m = dim M ) called a volume form, which is nowhere vanishing and plays the role of a
measure on the manifold.
We have the following theorem.
Theorem 8.3.1
A manifold M is orientable if and only if there exists a volume form, that is, a nowhere
vanishing m−form.
In the proof of the theorem, in particular to show that orientability implies the existence of
a nowhere vanishing form, we will assign to every chart (Uj , φj ) a form ωj = dx1 ∧· · ·∧dxm .
To combine all these forms into a form defined on the whole manifold M , we need a
partition of unity.
Proof. (Proof of Theorem 8.3.1) Assume there exists a nowhere vanishing m−form ω for
the m-dimensional manifold M , which we write locally for a chart (U, φ) as
ω = f (x)dx1 ∧ dx2 ∧ · · · ∧ dxm ,
(8.3.1)
for f (x) > 0, on φ(U ). We remark that f is always positive since it is continuous, and in
order to change sign, it has to be equal to zero at some point, which contradicts the fact
that it is nowhere vanishing. Consider now the form ω on another chart (V, ψ), overlapping
with (U, φ), and write it there as
ω = g(y)dy 1 ∧ dy 2 ∧ · · · ∧ dy m ,
(8.3.2)
with $g(y)\neq 0$ defined on $\psi(V)$. As we have done for $f$, since $g(y)$ is continuous and never zero, we can assume that $g(y)$ is always positive or always negative. If it is always negative, we can change, in the chart, the coordinate $y^1$ to $-y^1$, which changes the sign of $g(y)$. Therefore, we can assume that the two coordinate charts are such that both $f$ and
$g$ are positive on the domains where they are defined. On $U\cap V$,
$$\omega = g\,dy^1\wedge\cdots\wedge dy^m = f\,dx^1\wedge\cdots\wedge dx^m. \qquad (8.3.3)$$
Applying $\omega$ in (8.3.3) at a point $p\in U\cap V$ to $\frac{\partial}{\partial x^1}|_p,\ldots,\frac{\partial}{\partial x^m}|_p$, we get
$$f(\varphi(p)) = g(\psi(p))\,(dy^1\wedge\cdots\wedge dy^m)\left(\frac{\partial}{\partial x^1}\Big|_p,\ldots,\frac{\partial}{\partial x^m}\Big|_p\right) \qquad (8.3.4)$$
$$= \frac{g(\psi(p))}{m!}\sum_{P\in S_m}\mathrm{sign}(P)\,(dy^1\otimes\cdots\otimes dy^m)\left(\frac{\partial}{\partial x^{P(1)}}\Big|_p,\ldots,\frac{\partial}{\partial x^{P(m)}}\Big|_p\right)$$
$$= \frac{g(\psi(p))}{m!}\sum_{P\in S_m}\mathrm{sign}(P)\,\frac{\partial y^1}{\partial x^{P(1)}}\Big|_p\,\frac{\partial y^2}{\partial x^{P(2)}}\Big|_p\cdots\frac{\partial y^m}{\partial x^{P(m)}}\Big|_p.$$
The sum on the right hand side is the determinant of the Jacobian matrix $\bigl(\frac{\partial y^i}{\partial x^j}\bigr)$ at $p$. [Recall that one of the definitions of the determinant of an $m\times m$ matrix $\{a_{i,j}\}$ is
$$\det(\{a_{i,j}\}) = \sum_{P\in S_m}\mathrm{sign}(P)\,a_{1,P(1)}\,a_{2,P(2)}\cdots a_{m,P(m)}.]$$
Therefore, given the signs of $f$ and $g$, this determinant is positive, which implies that the two coordinate systems share the same orientation. Since this can be repeated for any two overlapping coordinate systems, the manifold is orientable.
To show the converse in the theorem statement, we assume that a cover exists such that overlapping charts have the same orientation. Then, taking one of these charts $(U, \varphi)$, a nowhere vanishing form defined on $U$ certainly exists. It is
$$\omega = dx^1\wedge\cdots\wedge dx^m. \qquad (8.3.5)$$
The form $\omega$ is then extended to the other charts. In particular, let $\omega_j$ be the form associated with the coordinate chart $(U_j, \varphi_j)$. The nowhere vanishing form is then defined, using a partition of unity $\{\varepsilon_j\}$ subordinate to the cover $\{U_j\}$, as
$$\omega := \sum_j \varepsilon_j\,\omega_j. \qquad (8.3.6)$$
To show that this form is nowhere vanishing, consider a point $p\in M$; according to the paracompactness assumption, there are only a finite number of coordinate charts covering $p$. Take the (finite) intersection of the corresponding coordinate neighborhoods. Consider one of the coordinate neighborhoods and the corresponding basis of the tangent space $\{\frac{\partial}{\partial x^1}|_p,\ldots,\frac{\partial}{\partial x^m}|_p\}$. For every $j$, let $\omega_j$ be written as $\omega_j := dy^1\wedge\cdots\wedge dy^m$, and we have (cf. calculation (8.3.4))
$$\omega_j\left(\frac{\partial}{\partial x^1}\Big|_p,\ldots,\frac{\partial}{\partial x^m}\Big|_p\right) = \det\left(\frac{\partial y^j}{\partial x^k}\Big|_p\right) > 0. \qquad (8.3.7)$$
Therefore, $\omega$ applied to $\frac{\partial}{\partial x^1}|_p,\ldots,\frac{\partial}{\partial x^m}|_p$ gives a finite sum, weighed by the $\varepsilon_j(p)$, of strictly positive numbers. Since the $\varepsilon_j(p)$ are not all zero (their sum is $1$), this value is strictly positive.
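The transformation rule used in the proof, namely that the coefficient of a volume form picks up the Jacobian determinant, can be checked on $\mathbb{R}^2$ with Cartesian and polar coordinates:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# Change of coordinates x = r cos(θ), y = r sin(θ)
x = r * sp.cos(th)
y = r * sp.sin(th)

# Jacobian matrix ∂(x, y)/∂(r, θ) and its determinant
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
               [sp.diff(y, r), sp.diff(y, th)]])
detJ = sp.simplify(J.det())
assert detJ == r          # dx ∧ dy = r dr ∧ dθ

# det > 0 on r > 0: the two charts define the same orientation,
# consistent with the role of the determinant in Theorem 8.3.1
```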
The first type of regions where we shall integrate differential forms on M will be singular
simplexes and singular chains, which we define in the following two subsections.
8.4 Simplexes
Definition 8.4.1: Geometrically independent points
Let $\{p_0, p_1, \ldots, p_r\}$, with $r\leq m$, be an $(r+1)$-tuple of geometrically independent points in $\mathbb{R}^m$. Recall that $r+1$ points are called geometrically independent if there is no $(r-1)$-dimensional hyperplane which contains them all. Equivalently, denoting by $\vec v_0, \vec v_1, \ldots, \vec v_r$ the vectors corresponding to $p_0, p_1, \ldots, p_r$ in $\mathbb{R}^m$, the vectors $\vec v_1 - \vec v_0$, $\vec v_2 - \vec v_0$, ..., $\vec v_r - \vec v_0$ are linearly independent. Figure 18 shows an example of geometrically independent and geometrically dependent points in $\mathbb{R}^2$.
Figure 18: Examples of geometrically independent points (p0 , p1 , p2 in part a)) and geometrically
dependent points (q0 , q1 , q2 in part b))
Definition 8.4.2: r-simplex
The points $\{p_0, p_1, \ldots, p_r\}$ identify an $r$-simplex, $\sigma_r$, defined as
$$\sigma_r := \Bigl\{p\in\mathbb{R}^m \,\Big|\, p = \sum_{j=0}^{r} c_j\,p_j,\; c_j\geq 0,\; \sum_{j=0}^{r} c_j = 1\Bigr\}. \qquad (8.4.1)$$
The numbers $c_j$ are called barycentric coordinates of the point $p\in\sigma_r$, since the point $p$, given by $\sum_{j=0}^{r} c_j\,p_j$, represents the barycenter of a system of masses $c_0, c_1, \ldots, c_r$ placed at the points $p_0, p_1, \ldots, p_r$.
A 0-simplex is a point, a 1-simplex is a line segment, a 2-simplex is a triangle as in Figure 18 part a), and a 3-simplex is a tetrahedron. Notice an $r$-simplex can be considered in any space $\mathbb{R}^m$, with $r\leq m$. For example, a 2-simplex is a triangle in $\mathbb{R}^2$ as in Figure 18 a), but it may also be a triangle in $\mathbb{R}^3$.
Definition 8.4.3: Standard r-simplex
A special case is the standard $n$-simplex $\bar\sigma_n$ in $\mathbb{R}^n$, which is the simplex $\bar\sigma_n := (p_0, p_1, \ldots, p_n)$, with $p_0 := (0, 0, \ldots, 0)$ and $p_i := (0, \ldots, 0, 1, 0, \ldots, 0)$, where the $1$ appears in the $i$-th position. In this case, formula (8.4.1) specializes to
$$\bar\sigma_n := \Bigl\{p := (x^1, \ldots, x^n)\in\mathbb{R}^n \,\Big|\, \sum_{j=1}^{n} x^j\leq 1,\; x^j\geq 0\Bigr\}. \qquad (8.4.2)$$
The standard 0-simplex is a point, while the standard 1-simplex is the segment $[0, 1]$ in $\mathbb{R}$. The triangle in part c) of Figure 19 is the standard 2-simplex in $\mathbb{R}^2$, while the tetrahedron in part d) of the same figure is the standard 3-simplex in $\mathbb{R}^3$.
Figure 19: Standard 0, 1, 2, and 3 simplexes in parts a), b), c) and d), respectively.
Consider now a subset of $q+1$ points of the $r+1$ points $\{p_0, p_1, \ldots, p_r\}$, and notice (using the criterion on the linear independence of the position vectors) that, since $\{p_0, p_1, \ldots, p_r\}$ are geometrically independent, these $q+1$ points are also geometrically independent.
Definition 8.4.4: q-face
Therefore, they identify a $q$-simplex $\sigma_q$, which is called a $q$-face of the simplex $\sigma_r$, and we denote $\sigma_q\leq\sigma_r$. A $q$-face $\sigma_q$ is the subset of $\sigma_r$ where $r-q$ barycentric coordinates are set to zero.
Definition 8.4.5: Orientation
An orientation can be defined on a simplex by choosing an order for the points $\{p_0, p_1, \ldots, p_r\}$. For example, for a 1-simplex, the simplex $(p_0, p_1)$ as an oriented simplex is different from $(p_1, p_0)$.ᵃ A direction is defined on the segment $[p_0, p_1]$, going from $p_0$ to $p_1$ or vice versa. Formally, consider ordered $(r+1)$-tuples of the points $\{p_0, p_1, \ldots, p_r\}$. We say that two ordered $(r+1)$-tuples have the same orientation if it is possible to go from one to the other via an even permutation. Otherwise, we say that they have opposite orientation. An oriented $r$-simplex is defined as an equivalence class of $(r+1)$-tuples according to this equivalence relation. Two oriented $r$-simplexes $\sigma_r^1$ and $\sigma_r^2$ which differ only by the orientation are formally in the relation $\sigma_r^2 = -\sigma_r^1$.
ᵃ Following standard notation, we use round brackets, as in $(p_0, p_1, \ldots, p_r)$, to denote oriented simplexes.
Definition 8.4.6: r-chain
Consider a set $I_r$ of oriented $r$-simplexes $\sigma_{r,i}$. They can be formally combined in an $r$-chain as
$$c := \sum_i c_i\,\sigma_{r,i}, \qquad c_i\in\mathbb{R}. \qquad (8.4.3)$$
$r$-chains form a vector space $C_r$, with the sum of $c_1 := \sum_i c_{1,i}\,\sigma_{r,i}$ and $c_2 := \sum_i c_{2,i}\,\sigma_{r,i}$ defined as
$$c_1 + c_2 = \sum_i (c_{1,i} + c_{2,i})\,\sigma_{r,i}. \qquad (8.4.4)$$
Definition 8.4.7: Boundary
Moreover, on $r$-chains we define a boundary operator $\partial_r$, which gives an $(r-1)$-chain, as the linear operator defined on simplexes as follows. Let $\sigma_r := (p_0, p_1, \ldots, p_r)$ be an oriented $r$-simplex. Then the boundary of $\sigma_r$, $\partial_r\sigma_r$, is defined as
$$\partial_r\sigma_r := \sum_{i=0}^{r}(-1)^i\,(p_0, p_1, \ldots, \hat p_i, \ldots, p_r), \qquad (8.4.5)$$
where $\hat p_i$ means that $p_i$ is omitted. Notice that each element in the sum on the right-hand side of (8.4.5) is an (oriented) $(r-1)$-face of the simplex $\sigma_r$, hence the name 'boundary' operator. We shall sometimes denote for brevity $(p_0, p_1, \ldots, \hat p_i, \ldots, p_r) := \sigma_r^i$, i.e., the $i$-th $(r-1)$-face of $\sigma_r$.
An important property of the boundary operator is given by the following theorem.
Theorem 8.4.1: Boundary property
$$\partial_r\circ\partial_{r+1} = 0. \qquad (8.4.6)$$
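Property (8.4.6) can be verified mechanically by representing an oriented simplex as a tuple of vertex labels and a chain as a dictionary from tuples to coefficients; a minimal sketch (note this simple representation does not identify tuples related by even permutations, which is enough here because the faces generated by (8.4.5) inherit consistent orderings):

```python
from collections import defaultdict

def boundary(chain):
    # chain: dict mapping oriented simplexes (tuples of vertices) to coefficients
    out = defaultdict(float)
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i+1:]      # omit the i-th vertex
            out[face] += (-1)**i * coeff            # formula (8.4.5)
    return {s: c for s, c in out.items() if c != 0}

# An oriented 3-simplex (tetrahedron) as a chain with one term
sigma3 = {('p0', 'p1', 'p2', 'p3'): 1.0}

d_sigma = boundary(sigma3)          # four oriented 2-faces with signs ±1
assert len(d_sigma) == 4

dd_sigma = boundary(d_sigma)        # ∂∘∂ = 0: every (r-1)-face appears twice
assert dd_sigma == {}               # with opposite signs and cancels
```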
Remark 8.4.1: Parallel with Exterior Derivative
Notice the similarity of property (8.4.6) with the property (7.3.3). We shall take this
parallel much further in the following.
8.5 Singular r-chains, boundaries and cycles
Definition 8.5.1: Singular r-chains
Let $\bar\sigma_r$ be the standard $r$-simplex, and let $s_r : \bar\sigma_r\to M$ be a smooth map with image in a manifold $M$; $s_r$ is assumed to be smooth on an open set containing $\bar\sigma_r$ in $\mathbb{R}^r$. $s_r$ is called a singular $r$-simplex in $M$. For $r = 0$, $s_r$ maps the 0-simplex to a point in $M$. In general, notice there is no requirement on $s_r$ besides smoothness. In particular, it does not have to be one to one, hence the word 'singular'. Let $\{s_{r,i}\}$ be a set of such singular $r$-simplexes. A (singular) $r$-chain $c$ is a formal linear combination of the $s_{r,i}$'s, with coefficients in $\mathbb{R}$, i.e.,
$$c := \sum_i a_i\,s_{r,i}. \qquad (8.5.1)$$
These are formal linear combinations of smooth maps. They form a vector space under the operation which associates to $c_1 + c_2$ the chain whose coefficients are the sums of the coefficients corresponding to the same simplex. This vector space is denoted by $C_r(M)$ and called the $r$-chain group.
The boundary operator ∂r on simplexes, σr , defined in (8.4.5) induces a boundary operator
on singular simplexes sr, which we denote by ∂. This is defined as follows: denote by s_r^i, for i = 0, 1, ..., r, the restriction of sr to the i−th face of σ̄r (which is an (r−1)-simplex, but not a standard simplex; in particular, it is not in ℝ^{r−1}). Then
∂sr := Σ_{i=0}^{r} (−1)^i s_r^i.   (8.5.2)
More precisely, we define, for i = 0, 1, ..., r, functions f_r^i : σ̄r−1 → σ̄r, mapping σ̄r−1 to the i−th face of σ̄r, the one obtained by omitting pi. They are defined as
f_r^0(x1, x2, ..., xr−1) := (1 − Σ_{j=1}^{r−1} xj, x1, x2, ..., xr−1),   (8.5.3)
f_r^i(x1, x2, ..., xr−1) := (x1, x2, ..., xi−1, 0, xi, ..., xr−1),   (8.5.4)
for r ≥ 2. If r = 1, since the standard zero simplex is just a point p,
f_1^0(p) := 1,   f_1^1(p) := 0.   (8.5.5)
For example, for the standard simplexes σ̄1 and σ̄2 (cf. b) and c) in Figure 19), with t ∈ [0, 1], we have
f_2^0(t) = (1 − t, t),   f_2^1(t) = (0, t),   f_2^2(t) = (t, 0).
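The face maps (8.5.3)-(8.5.4) translate directly into code. A small sketch (illustrative names) that reproduces the σ̄2 example above:

```python
def face_map(r, i):
    """The map f_r^i : standard (r-1)-simplex -> i-th face of the standard
    r-simplex, following (8.5.3)-(8.5.4).  A point is a tuple of r-1
    coordinates; the result has r coordinates."""
    def f(x):
        if i == 0:
            return (1 - sum(x),) + tuple(x)                # (8.5.3)
        return tuple(x[:i-1]) + (0,) + tuple(x[i-1:])      # (8.5.4): 0 in slot i
    return f

# The three faces of the standard 2-simplex, evaluated at t = 0.25:
t = 0.25
print(face_map(2, 0)((t,)))   # (0.75, 0.25)
print(face_map(2, 1)((t,)))   # (0, 0.25)
print(face_map(2, 2)((t,)))   # (0.25, 0)
```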

Then s_r^i is defined as
s_r^i := sr ◦ f_r^i,   (8.5.6)
and it is an (r − 1)-singular simplex.
Definition 8.5.2: Boundary Operator
The boundary operator ∂ is defined as
∂sr := Σ_{i=0}^{r} (−1)^i s_r^i := Σ_{i=0}^{r} (−1)^i sr ◦ f_r^i,   (8.5.7)
which is a chain of singular (r−1)−simplexes with the induced orientation. This operation
extends to r-chains by linearity, i.e., as
∂(Σ_j aj sr,j) := Σ_j aj ∂sr,j,   (8.5.8)
providing a linear map ∂ : Cr (M ) → Cr−1 (M ).
Figure 20: An example of a singular simplex s2, its boundary ∂s2 = s_2^0 − s_2^1 + s_2^2, and its image. Notice we have a consistent direction on the boundary.
Importantly, analogously to Theorem 8.4.1, we have
Theorem 8.5.1: Boundary Operator Property
∂^2 = 0.   (8.5.9)
Definition 8.5.3: r-cycles and r-boundaries
The space Cr(M) has the structure of a vector space over ℝ. An element cr in Cr(M) such that there exists an element cr+1 ∈ Cr+1(M), with
cr = ∂cr+1,   (8.5.10)
is called an r-boundary. An element cr such that ∂cr = 0 is called an r-cycle. That is,
r-cycles span the kernel of the linear operator ∂ on Cr (M ).
Both boundaries and cycles form vector subspaces of Cr (M ), which are denoted respectively by Br (M ) and Zr (M ). Moreover, from the property (8.5.9) it follows that if an
r-chain is a boundary, then it is also an r-cycle, that is,
Br(M) ⊆ Zr(M).   (8.5.11)
The singular homology group Hr (M ) is defined as the quotient space
Hr(M) = Zr(M)/Br(M).   (8.5.12)
The situation is exactly parallel to the one discussed in subsection 7.3.2. The spaces C0 (M ),
C1 (M ),...,Cm (M ) form a ‘chain complex’ of vector spaces with Cl (M ) = 0 for l > m, i.e.,
C0(M) ← C1(M) ← ··· ← Cm(M),   (8.5.13)
with the boundary operator ∂ mapping in the direction of the arrow, and ∂ 2 = 0. Analogously,
we have the (co-)chain complex
Ω0(M) → Ω1(M) → ··· → Ωm(M),   (8.5.14)
where Ωl = 0 for l > m, and the exterior derivative d, which maps in the direction of the
arrows and d2 = 0. Notice the correspondence between boundaries and exact forms and the
correspondence between cycles and closed forms, which is reflected in the notations Zr (M ) ↔
Z r (M ), Br (M ) ↔ B r (M ), Hr (M ) ↔ H r (M ). The result that links homology and co-homology
is Stokes theorem.
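For a finite simplicial complex the boundary operators are matrices, and the dimensions of the homology groups follow from their ranks. A self-contained sketch (hand-rolled rank computation, illustrative names; the complex is the hollow triangle, i.e., the boundary of a 2-simplex, which is topologically a circle):

```python
def rank(mat, eps=1e-9):
    """Rank of a small matrix (list of rows) by Gaussian elimination."""
    m = [row[:] for row in mat]
    rk, rows, cols = 0, len(mat), len(mat[0])
    for c in range(cols):
        piv = next((r for r in range(rk, rows) if abs(m[r][c]) > eps), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for r in range(rows):
            if r != rk and abs(m[r][c]) > eps:
                f = m[r][c] / m[rk][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
    return rk

# Hollow triangle: vertices 0, 1, 2; edges (0,1), (0,2), (1,2); no 2-simplex.
# Boundary matrix d1 (rows = vertices, columns = edges, (p0, p1) -> p1 - p0):
d1 = [[-1, -1,  0],
      [ 1,  0, -1],
      [ 0,  1,  1]]

b0 = 3 - rank(d1)        # dim H0 = dim C0 - rank d1 (every 0-chain is a cycle)
b1 = (3 - rank(d1)) - 0  # dim Z1 = #edges - rank d1; B1 = 0 (no 2-simplexes)
print(b0, b1)            # 1 1: one connected component, one 1-dimensional hole
```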
8.6 Exercises
Exercise 8.1 Prove that the real projective space RP^n in subsection 1.1 is orientable for every odd n.
Exercise 8.2 Prove Theorem 8.4.1.
Exercise 8.3 Prove Theorem 8.5.1.
9 Integration of differential forms on manifolds part II: Stokes theorem
9.1 Integration of differential r-forms over r−chains; Stokes theorem
Consider now a differential r-form ω defined on M , and a chain c ∈ Cr (M ). We want to define
the integral of ω on c.
Definition 9.1.1: Integral of r-form on a singular r-simplex
To do that, we first define the integral of the r−form ω on a singular r-simplex sr by
∫_{sr} ω := ∫_{σ̄r} s_r^*ω,   (9.1.1)
where ∫_{σ̄r} s_r^*ω is the standard Riemann integral on ℝ^r calculated on σ̄r, and s_r^*ω is an r−form in ℝ^r. In general, for r ≥ 1, for an r−form ξ := a dx1 ∧ ··· ∧ dxr on A ⊆ ℝ^r, we define
∫_A a dx1 ∧ ··· ∧ dxr := ∫_A a dx1 ··· dxr,   (9.1.2)
which is the standard Riemann integral in ℝ^r. For r = 0 and for a 0−form, i.e., a function at a point p, the definition is
∫_p ω := ω(p).   (9.1.3)
Notice that the integral in (9.1.1) depends, in general, not just on the image of sr , but on sr
itself. The following simple examples clarify this fact as well as the meaning of the definition.
Example 9.1.1: Multivariable Calculus
The definition (9.1.1) is the generalization of something we encountered in Calculus. When integrating a differential (1-)form ω := a dx1 + b dx2 in ℝ^2 along a curve c defined by c : [0, 1] → ℝ^2, we were told that
∫_c a dx1 + b dx2 := ∫_0^1 ((a ◦ c(t)) dx1/dt + (b ◦ c(t)) dx2/dt) dt.   (9.1.4)
However, the integrand on the right-hand side is nothing but c^*ω, as can be easily verified (just calculate c^*ω(d/dt) using the definition).
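Numerically, the right-hand side of (9.1.4) is an ordinary Riemann integral of the pulled-back coefficient. A quick sketch (midpoint rule; the curve and form are illustrative choices, with ω = y dx + x dy = d(xy), so the exact value along any curve from (0,0) to (1,1) is 1):

```python
def pullback_integral(a, b, c, dc, n=100000):
    """Approximate the right-hand side of (9.1.4) by the midpoint rule:
    integrate (a o c) x1'(t) + (b o c) x2'(t) over [0, 1]."""
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        x, y = c(t)
        dx, dy = dc(t)
        total += (a(x, y) * dx + b(x, y) * dy) * h
    return total

# omega = y dx + x dy = d(xy), along c(t) = (t, t^2); the exact value is
# (xy) at (1, 1) minus (xy) at (0, 0), i.e. 1.
val = pullback_integral(lambda x, y: y, lambda x, y: x,
                        lambda t: (t, t * t), lambda t: (1.0, 2 * t))
print(round(val, 6))   # 1.0
```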
Example 9.1.2: Orientation with Integrals
Let ω be a 1−form on the manifold ℝ^2, in coordinates x and y given by ω := dx + dy. The singular simplex s1 : [0, 1] = σ̄1 → ℝ^2 defined by s1(t) = (t, t) has the same image as the singular simplex f1 : [0, 1] = σ̄1 → ℝ^2 defined by f1(t) = (1 − t, 1 − t). We have
s1^*(dx + dy) = 2 dt,   f1^*(dx + dy) = −2 dt,
and
∫_{s1} ω := ∫_0^1 2 dt = 2 ≠ ∫_{f1} ω := ∫_0^1 (−2) dt = −2.

Definition 9.1.2: Integral on general r-chain
For a general r−chain cr := Σ_i ai sr,i, we define
∫_{cr} ω := Σ_i ai ∫_{sr,i} ω.   (9.1.5)
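The computation in Example 9.1.2 can be replayed numerically; the two parametrizations of the same segment give integrals of opposite sign. A sketch (illustrative quadrature, not part of the notes):

```python
def integrate_dx_plus_dy(ds, n=20000):
    """Approximate (9.1.1) for omega = dx + dy on a singular 1-simplex
    s : [0,1] -> R^2: the pullback s*(dx + dy) = (x'(t) + y'(t)) dt,
    integrated by the midpoint rule (ds supplies the derivative of s)."""
    h = 1.0 / n
    return sum(sum(ds((k + 0.5) * h)) * h for k in range(n))

ds1 = lambda t: (1.0, 1.0)     # derivative of s1(t) = (t, t)
df1 = lambda t: (-1.0, -1.0)   # derivative of f1(t) = (1 - t, 1 - t)

print(round(integrate_dx_plus_dy(ds1), 9))  # 2.0
print(round(integrate_dx_plus_dy(df1), 9))  # -2.0
```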
Theorem 9.1.1: Stokes Theorem
Let c ∈ Cr(M) and ω ∈ Ωr−1(M). Then
∫_c dω = ∫_{∂c} ω.   (9.1.6)
Notice that the integral can be seen as an inner product between Ωr(M) and Cr(M), ⟨ω, c⟩. Stokes' theorem says that the boundary operator ∂ is the adjoint of the exterior derivative, i.e.,
⟨∂c, ω⟩ = ⟨c, dω⟩.   (9.1.7)
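Stokes' theorem can be spot-checked numerically on the standard 2-simplex, with s the identity map and ω = x dy, so that dω = dx ∧ dy and ∫_{σ̄2} dω = 1/2. A sketch (illustrative choices throughout):

```python
def integral_x_dy(curve, dcurve, n=20000):
    """The pullback of omega = x dy along a path [0,1] -> R^2 is
    x(t) y'(t) dt; integrate it by the midpoint rule."""
    h = 1.0 / n
    return sum(curve((k + 0.5) * h)[0] * dcurve((k + 0.5) * h)[1] * h
               for k in range(n))

# s = identity on the standard 2-simplex; its boundary faces are the maps
# f_2^0(t) = (1-t, t), f_2^1(t) = (0, t), f_2^2(t) = (t, 0).
faces  = [lambda t: (1 - t, t), lambda t: (0.0, t), lambda t: (t, 0.0)]
dfaces = [lambda t: (-1.0, 1.0), lambda t: (0.0, 1.0), lambda t: (1.0, 0.0)]

# Right-hand side of (9.1.6): alternating sum over the boundary faces.
rhs = sum((-1) ** i * integral_x_dy(faces[i], dfaces[i]) for i in range(3))

# Left-hand side: d(x dy) = dx ^ dy, whose integral over the standard
# 2-simplex is its area, 1/2.
print(round(rhs, 6))   # 0.5
```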
Proof. (We mostly follow the proof of Theorem 6.1 in Nakahara and Theorem 4.7 in Warner.) By linearity, it is enough to prove (9.1.6) on simplexes, i.e.,
∫_{sr} dω = ∫_{∂sr} ω,   (9.1.8)
with ω ∈ Ωr−1(M).
Moreover, we observe the following general Lemma.
Lemma 9.1.1: Commutativity of Exterior Derivative and Pull-back
Let f : M → N be smooth and ω ∈ Ωr−1(N). Then
f^* dω = d f^*ω,   (9.1.9)
i.e., pull-back and exterior derivative commute.
Proof. By linearity and in a given system of coordinates, we can restrict ourselves
to forms ω of the type
ω = g dx1 ∧ ··· ∧ dxr−1.
Moreover, let us first assume that property (9.1.9) holds for zero forms, i.e., functions. Using (6.2.3) of Theorem 6.2.1, we have that
f^*ω = (f^*g)(f^*dx1) ∧ ··· ∧ (f^*dxr−1),   (9.1.10)
where f ∗ g by definition is g ◦ f (g is a function). Using property (7.3.2) of Theorem
7.3.1, we have
d f^*ω = (d f^*g) ∧ (f^*dx1) ∧ ··· ∧ (f^*dxr−1) + (f^*g) d((f^*dx1) ∧ ··· ∧ (f^*dxr−1)).
Using the fact that the property is true on functions, we can write the right hand
side as
(f^*dg) ∧ (f^*dx1) ∧ ··· ∧ (f^*dxr−1) + (f^*g) d((d f^*x1) ∧ ··· ∧ (d f^*xr−1)).
The second term of this sum is zero if we distribute the d according to (7.3.2) and
use recursively d2 = 0. So we are left with
df ∗ ω = f ∗ dg ∧ f ∗ dx1 ∧ · · · ∧ f ∗ dxr−1 ,
and if we use (6.2.3) of Theorem 6.2.1 again we have
df ∗ ω = f ∗ (dg ∧ dx1 ∧ · · · ∧ dxr−1 ) = f ∗ dω.
So, we are only left with proving the result for zero forms ω. Let us examine how f^*dω acts on a vector field X on M or, at a point p, on a tangent vector at that point. We have
(f^*dω)X := dω(f∗X) := (f∗X)(ω) := X(f^*ω) := d(f^*ω)X.   (9.1.11)
In all the previous steps, we used the definitions. Since X is arbitrary, we have
equality (9.1.9) for zero forms, and the Lemma is proved.
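The claim for zero forms can be sanity-checked with finite differences: both sides of (9.1.9), applied to a fixed vector X, are directional derivatives. A sketch with an illustrative map f : ℝ² → ℝ² and 0-form g (all names hypothetical):

```python
import math

def grad(func, p, h=1e-6):
    """Central-difference gradient of a scalar function at the point p."""
    return [(func([x + (h if i == j else 0.0) for j, x in enumerate(p)])
           - func([x - (h if i == j else 0.0) for j, x in enumerate(p)])) / (2 * h)
            for i in range(len(p))]

def push_forward(f, p, v, h=1e-6):
    """Approximate f_* v at p by a directional finite difference."""
    fp = f([x + h * vi for x, vi in zip(p, v)])
    fm = f([x - h * vi for x, vi in zip(p, v)])
    return [(a - b) / (2 * h) for a, b in zip(fp, fm)]

f = lambda p: [p[0] * p[1], math.sin(p[0])]   # illustrative f : R^2 -> R^2
g = lambda q: q[0] ** 2 + 3 * q[1]            # illustrative 0-form on the target

p, X = [0.7, -0.4], [1.0, 2.0]

# (f* dg)(X) = dg(f_* X), with dg evaluated at f(p):
lhs = sum(a * b for a, b in zip(grad(g, f(p)), push_forward(f, p, X)))
# (d f*g)(X) = X(g o f), evaluated at p:
rhs = sum(a * b for a, b in zip(grad(lambda q: g(f(q)), p), X))
print(abs(lhs - rhs) < 1e-4)   # True
```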
Using this Lemma, the left hand side of (9.1.8) is
∫_{sr} dω = ∫_{σ̄r} d(s_r^*ω).   (9.1.12)
On the other hand, by definition, the right-hand side is
∫_{∂sr} ω = Σ_{i=0}^{r} (−1)^i ∫_{s_r^i} ω = Σ_{i=0}^{r} (−1)^i ∫_{σ̄r−1} (s_r^i)^*ω.   (9.1.13)
Since
∫_{σ̄r−1} (s_r^i)^*ω = ∫_{σ̄r−1} (sr ◦ f_r^i)^*ω = ∫_{σ̄r−1} (f_r^i)^*(s_r^*ω) := ∫_{f_r^i} s_r^*ω,
the right-hand side is
∫_{∂sr} ω = Σ_{i=0}^{r} (−1)^i ∫_{f_r^i} (s_r^*ω).   (9.1.14)
So the theorem is proved if we prove the formula on ℝ^r:
∫_{σ̄r} dψ = Σ_{i=0}^{r} (−1)^i ∫_{f_r^i} ψ,   (9.1.15)
for a general (r−1)-form ψ on the manifold ℝ^r.
Consider first the case r = 1. The left-hand side is
∫_0^1 (dψ/dt) dt.
The right-hand side is
∫_{f_1^0} ψ − ∫_{f_1^1} ψ = ∫_{σ̄0} (f_1^0)^*ψ − ∫_{σ̄0} (f_1^1)^*ψ = ψ(1) − ψ(0),   (9.1.16)
from the definition (9.1.3). Thus, the theorem follows from the fundamental theorem of calculus. Therefore, we now prove (9.1.15) for r ≥ 2.
By linearity, again we can assume ψ = a(x1, ..., xr) dx1 ∧ ··· ∧ dxr−1. Calculate the integrand of the left-hand side of (9.1.15):
dψ = (∂a/∂xμ) dxμ ∧ dx1 ∧ ··· ∧ dxr−1 = (−1)^{r−1} (∂a/∂xr) dx1 ∧ ··· ∧ dxr.
Therefore, the left-hand side of (9.1.15) is (from Calculus)
∫_{σ̄r} dψ = (−1)^{r−1} ∫_{σ̄r} (∂a/∂xr) dx1 dx2 ··· dxr   (9.1.17)
= (−1)^{r−1} ∫_{Ar} dx1 dx2 ··· dxr−1 ∫_0^{1 − Σ_{j=1}^{r−1} xj} (∂a/∂xr) dxr,   (9.1.18)
where Ar is the set
Ar := {(x1, ..., xr−1) ∈ ℝ^{r−1} | xj ≥ 0, Σ_{j=1}^{r−1} xj ≤ 1},
which is the standard (r−1)−simplex, i.e., Ar := σ̄r−1. Using the fundamental theorem of calculus on the inside integral in (9.1.18), we get
∫_{σ̄r} dψ =   (9.1.19)
(−1)^{r−1} ∫_{σ̄r−1} [a(x1, x2, ..., xr−1, 1 − Σ_{j=1}^{r−1} xj) − a(x1, x2, ..., xr−1, 0)] dx1 dx2 ··· dxr−1 =   (9.1.20)
(−1)^{r−1} ∫_{σ̄r−1} a(x1, x2, ..., xr−1, 1 − Σ_{j=1}^{r−1} xj) dx1 dx2 ··· dxr−1 − (−1)^{r−1} ∫_{σ̄r−1} a(x1, x2, ..., xr−1, 0) dx1 dx2 ··· dxr−1.
As for the right-hand side of (9.1.15), we have to calculate
Σ_{i=0}^{r} (−1)^i ∫_{f_r^i} ψ = Σ_{i=0}^{r} (−1)^i ∫_{f_r^i} a(x1, ..., xr) dx1 ∧ ··· ∧ dxr−1,   (9.1.21)
for a general (r−1)-form ψ on ℝ^r. Consider a term in the sum in (9.1.21) corresponding to i ≠ 0 and i ≠ r, i.e.,
∫_{f_r^i} a dx1 ∧ ··· ∧ dxr−1 := ∫_{σ̄r−1} (f_r^i)^*(a dx1 ∧ ··· ∧ dxr−1).
Consider the integrand on the right-hand side. This is
(f_r^i)^*(a dx1 ∧ ··· ∧ dxr−1) = ((f_r^i)^*a)((f_r^i)^*dx1) ∧ ··· ∧ ((f_r^i)^*dxi) ∧ ··· ∧ ((f_r^i)^*dxr−1).
In particular, the 1−form (f_r^i)^*dxi is equal to zero, since
(f_r^i)^*dxi = d(xi ◦ f_r^i) = 0,   (9.1.22)
because, from the definition (8.5.4), xi ◦ f_r^i ≡ 0.^a
Therefore, only the i = 0 and i = r faces give a contribution in (9.1.21). So we get
Σ_{i=0}^{r} (−1)^i ∫_{f_r^i} ψ = ∫_{f_r^0} a(x1, ..., xr) dx1 ∧ ··· ∧ dxr−1 + (−1)^r ∫_{f_r^r} a(x1, ..., xr) dx1 ∧ ··· ∧ dxr−1.   (9.1.23)
As for the first term on the right-hand side of (9.1.23), we have
∫_{f_r^0} a(x1, ..., xr) dx1 ∧ ··· ∧ dxr−1 = ∫_{σ̄r−1} ((f_r^0)^*a)((f_r^0)^*dx1) ∧ ((f_r^0)^*dx2) ∧ ··· ∧ ((f_r^0)^*dxr−1).   (9.1.24)
Using the definition (8.5.3), we have
(f_r^0)^*dx1 = d(x1 ◦ f_r^0) = −Σ_{k=1}^{r−1} dxk,
and
(f_r^0)^*dxj = d(xj ◦ f_r^0) = dxj−1,   for j = 2, ..., r−1.
Therefore,
((f_r^0)^*dx1) ∧ ((f_r^0)^*dx2) ∧ ··· ∧ ((f_r^0)^*dxr−1) = (−Σ_{k=1}^{r−1} dxk) ∧ dx1 ∧ dx2 ∧ ··· ∧ dxr−2 = −dxr−1 ∧ dx1 ∧ dx2 ∧ ··· ∧ dxr−2 = (−1)^{r−1} dx1 ∧ dx2 ∧ ··· ∧ dxr−2 ∧ dxr−1.   (9.1.25)
Since
((f_r^0)^*a)(x1, x2, ..., xr−1) = a(1 − Σ_{j=1}^{r−1} xj, x1, ..., xr−1),
from (8.5.3), we have that the first term on the right-hand side of (9.1.23) is equal to
(−1)^{r−1} ∫_{σ̄r−1} a(1 − Σ_{j=1}^{r−1} xj, x1, ..., xr−1) dx1 dx2 ··· dxr−1.   (9.1.26)
By making the change of coordinates φ : (x1, x2, ..., xr−1) → (1 − Σ_{j=1}^{r−1} xj, x1, ..., xr−2), which is such that |det(Jφ)| = 1, from the change of variables formula for integration in ℝ^{r−1} we have that
∫_{σ̄r−1} a(1 − Σ_{j=1}^{r−1} xj, x1, ..., xr−1) dx1 dx2 ··· dxr−1 = ∫_{σ̄r−1} a(x1, ..., xr−1, 1 − Σ_{j=1}^{r−1} xj) dx1 dx2 ··· dxr−1,
and therefore the term in (9.1.26) is the first term in (9.1.19).
The second term in (9.1.23) is
(−1)^r ∫_{f_r^r} a(x1, ..., xr) dx1 ∧ ··· ∧ dxr−1 = −(−1)^{r−1} ∫_{σ̄r−1} (f_r^r)^*(a dx1 ∧ ··· ∧ dxr−1) =   (9.1.27)
−(−1)^{r−1} ∫_{σ̄r−1} a(x1, x2, ..., xr−1, 0) dx1 ··· dxr−1,
which is the second term in (9.1.19). Thus, the theorem is proved.
^a Recall that for a smooth map f : M → N and dy a differential form on N associated with the function y, f^*dy = d(y ◦ f) (cf. Lemma 9.1.1).
9.2 Integration of differential forms on regular domains and the second version of Stokes' theorem
9.2.1 Regular Domains
Definition 9.2.1: Regular Domain
The half-space H^m is defined as H^m := {(x1, x2, ..., xm) ∈ ℝ^m | xm ≥ 0}.
A regular domain D ⊆ M is a closed subset of M, with nonempty interior, such that for every point p ∈ ∂D there exists a chart (U, φ), with p ∈ U and φ(U ∩ D) = φ(U) ∩ H^m (cf. Figure 21).
Figure 21: Definition of Regular Domain
Necessarily, points on the boundary ∂D are mapped by φ to points φ(p) in ℝ^m with xm = 0. If p ∈ ∂D were mapped to an interior point of H^m, we could take a (small) open neighborhood of φ(p) still contained in the interior of φ(U ∩ D), and its counter-image in U ∩ D would be an open set contained in D and containing p, which contradicts the fact that p is a boundary point.
Now we want to give ∂D the structure of a differentiable manifold by displaying an atlas of
compatible charts.
The set Ũ := U ∩ ∂D is open in ∂D, since ∂D is endowed with the subset topology, and it is mapped homeomorphically by φ̃, the restriction of φ to U ∩ ∂D, to (φ(U) ∩ {x ∈ ℝ^m | xm = 0}) ⊆ ℝ^{m−1}. Therefore, (Ũ, φ̃) is a coordinate chart for ∂D at the point p. Since, by definition, such a coordinate chart exists for every p ∈ ∂D, we have an atlas for ∂D as long as the compatibility of two overlapping charts is verified. We have:
Proposition 9.2.1: Structure on ∂D
1. The above coordinate charts (Ũ, φ̃), defined as Ũ := U ∩ ∂D, φ̃ := φ|Ũ, are compatible.
2. If two overlapping charts (U, φ), (V, ψ) have compatible orientations on M, then the induced charts (Ũ, φ̃), (Ṽ, ψ̃) have compatible orientations.
Therefore, ∂D is orientable if M is orientable.
Proof. Consider two overlapping charts (Ũ , φ̃) and (Ṽ , ψ̃) and the corresponding charts on
M (U, φ) and (V, ψ) at a point p, so that φ(p) = (φ̃(p), 0) := (a1 , a2 , . . . , am−1 , 0). Denote
also by ỹ j , x̃j , j = 1, . . . , m − 1, the coordinate functions associated with the maps ψ̃
and φ̃, respectively and by y j , xj , j = 1, . . . , m the coordinate functions associated with
the map ψ and φ, respectively. Consider ψ̃ ◦ φ̃−1 : (x̃1 , . . . , x̃m−1 ) → (ỹ 1 , . . . , ỹ m−1 ). For
j = 1, 2, . . . , m − 1, we have,
ỹj(x̃1, x̃2, ..., x̃m−1) = yj(x̃1, x̃2, ..., x̃m−1, 0).   (9.2.1)
Therefore, smoothness of the yj functions implies smoothness of the ỹj functions, and this gives the compatibility of the two charts (Ũ, φ̃) and (Ṽ, ψ̃). Moreover, from (9.2.1), we have
∂ỹj/∂x̃k |φ̃(p) = ∂yj/∂xk |φ(p),   j = 1, 2, ..., m−1,   k = 1, 2, ..., m−1.   (9.2.2)
Examine now ∂ym/∂xk, for k = 1, 2, ..., m−1. Since (locally) φ^{−1} maps points in {x ∈ ℝ^m | xm = 0} to ∂D, and ψ maps points on ∂D to {y ∈ ℝ^m | ym = 0}, we have (in a neighborhood of φ(p)) that ym, as a function of x1, ..., xm−1, is constant and equal to zero, which gives
∂ym/∂xk |φ(p) = 0,   (9.2.3)
for k = 1, 2, ..., m−1. Moreover,
∂ym/∂xm |φ(p) > 0,   (9.2.4)
because an increase of xm (that is, the point moves into the upper half-space) induces an increase of ym, as the image of the point also moves into the upper half-space.
The Jacobian ∂yj/∂xk |φ(p), j = 1, 2, ..., m, k = 1, 2, ..., m, has the block form
∂yj/∂xk |φ(p) = ( ∂ỹr/∂x̃s |φ̃(p)   *
                  0               ∂ym/∂xm |φ(p) ),   (9.2.5)
and it has positive determinant because the two charts (U, φ) and (V, ψ) on M have the same orientation. Since (9.2.4) holds, the Jacobian ∂ỹj/∂x̃k |φ̃(p) also has positive determinant. Therefore, the two charts (Ũ, φ̃) and (Ṽ, ψ̃) also have compatible orientations. Then ∂D is orientable.
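The determinant argument uses the standard fact that a block-triangular matrix such as (9.2.5) has determinant equal to the determinant of the upper-left block times the bottom-right entry, so positivity of two of the three quantities forces positivity of the third. A small numeric illustration (all entries are hypothetical):

```python
def det(m):
    """Determinant by Laplace expansion along the first row (small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

# A Jacobian of the shape (9.2.5): upper-left block B, an arbitrary last
# column above, and zeros in the last row except d = dy^m/dx^m > 0.
B = [[2.0, 1.0],
     [0.5, 3.0]]
d = 4.0
J = [[2.0, 1.0,  7.0],
     [0.5, 3.0, -2.0],
     [0.0, 0.0,  4.0]]

print(det(J), det(B) * d)   # 22.0 22.0  (det J = det B * d)
```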
9.2.2 Orientation and induced orientation
Since M is orientable, by definition it is covered by an orientation-compatible atlas. Fixing the orientation at one point p in M means fixing an equivalence class of ordered bases [~e1, ..., ~em] of TpM, where two bases are called equivalent if the change-of-basis matrix from one to the other has positive determinant. For an orientable manifold, fixing the orientation at one point determines the orientation at every point. In particular, let ω be the nowhere-vanishing form of Theorem 8.3.1. If (∂/∂x1, ..., ∂/∂xm) is an ordered basis of TpM such that
ω(∂/∂x1, ..., ∂/∂xm) > 0,
then the orientation at q, for TqM, is chosen so that the ordered basis (∂/∂y1, ..., ∂/∂ym), for coordinates y1, ..., ym at q, satisfies^6
ω(∂/∂y1, ..., ∂/∂ym) > 0.
Assume now that an orientation on M, and therefore on a regular domain D in M, is given, and consider a point p ∈ ∂D. Let the oriented basis of TpM be given as [~n, ~e1, ..., ~em−1], so that [~e1, ..., ~em−1] is an ordered basis of Tp∂D and ~n is an outward vector, that is, a vector corresponding to a curve c = c(t) such that c(t) ∈ M − D for 0 < t < ε, for some ε > 0.^7 The orientation on ∂D is by convention chosen as [~e1, ..., ~em−1]. So, in summary, the orientation of ∂D is chosen by taking (at a point p ∈ ∂D) a basis [~e1, ..., ~em−1] of Tp(∂D) so that the ordered basis [~n, ~e1, ..., ~em−1] is in the original orientation class of TpM.
^6 Notice that since we have an orientation-compatible atlas, this statement is independent of the coordinates chosen. If we consider coordinates (z1, z2, ..., zm) and the ordered basis (∂/∂z1, ..., ∂/∂zm) of TqM, then, writing ω = g dy1 ∧ ··· ∧ dym with g(q) > 0, we have
ω(∂/∂z1, ..., ∂/∂zm) = g(q) dy1 ∧ ··· ∧ dym(∂/∂z1, ..., ∂/∂zm) = g(q) det(∂yj/∂zk) > 0.
Analogously, it does not depend on the choice of coordinates at p.
^7 Usually the vector ~n is chosen perpendicular to ~e1, ..., ~em−1 in cases where we have an inner product defined on the tangent space, as for Riemannian manifolds.
Figure 22: Induced orientation on the boundary of a regular domain. [~n, ~e1] is in the same orientation class as [~e1, ~e2].
Example 9.2.1: Orientation on H n
Let us carry out this program for H^n, where the boundary is ℝ^{n−1}. The orientation for H^n is always chosen as [~e1, ~e2, ..., ~en], given by the standard basis in the standard order. For n = 2, ~n = −~j, and a basis for ∂H^n = ℝ^1 can be chosen as ±~i. We choose the sign + because [−~j, ~i] = [~i, ~j] coincides with the standard basis of H^2. So the induced basis on ℝ^1 coincides in this case with the usual basis ~i. Consider now n = 3. In this case ~n = −~k. We could choose as orientation of ∂H^3 = ℝ^2 either [~i, ~j] or [~j, ~i]. We choose [~j, ~i], because
[−~k, ~j, ~i] = [~j, ~k, ~i] = −[~j, ~i, ~k] = [~i, ~j, ~k].   (9.2.6)
Therefore, in this case, the induced orientation is the opposite of the standard orientation on ℝ^2. Extending this reasoning, one can prove that in general the induced orientation on ∂H^n coincides with the standard orientation of ℝ^{n−1} if n is even, and is opposite to it if n is odd.
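The parity claim can be checked for small n by computing the sign of det[−e_n, e1, ..., e_{n−1}] (as columns): the standard basis of ℝ^{n−1} gives the induced orientation on ∂H^n exactly when this determinant is positive. A sketch (illustrative names):

```python
def det(m):
    """Determinant by Laplace expansion along the first row (small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def induced_sign(n):
    """Sign of det[-e_n, e_1, ..., e_{n-1}] (columns): +1 exactly when the
    standard basis of R^{n-1} is the induced orientation on the boundary."""
    cols = [[-1.0 if i == n - 1 else 0.0 for i in range(n)]]            # -e_n
    cols += [[1.0 if i == j else 0.0 for i in range(n)] for j in range(n - 1)]
    mat = [[cols[j][i] for j in range(n)] for i in range(n)]            # to rows
    return 1 if det(mat) > 0 else -1

print([induced_sign(n) for n in range(2, 7)])   # [1, -1, 1, -1, 1]
```

The alternating pattern (+ for even n, − for odd n) matches the statement above.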
9.2.3 Integration of differential forms over regular domains
We shall now give a definition of the integral of an m−form ω on a regular domain, in terms of the integrals over simplexes defined in subsection 9.1.
Definition 9.2.2: supp(ω)
To avoid issues concerning convergence and improper integrals, we shall assume that a differential form ω has compact support, where we recall that the support of ω, supp(ω), is defined as
supp(ω) = cl{p ∈ M | ω(p) ≠ 0},   (9.2.7)
where cl A denotes the closure of a set A.
Definition 9.2.3: Regular Simplex
A regular simplex is a singular m-simplex f : σ̄m → M such that f is an orientation-preserving diffeomorphism.
Recall that the standard simplex σ̄m := (p0, p1, ..., pm) has an orientation given by the standard ordering of the points p0 := (0, 0, ..., 0), p1 := (1, 0, ..., 0), etc. This coincides with the standard orientation of ℝ^m, in that it determines the standard ordering of the vectors p0p1 := ~e1, p0p2 := ~e2, ..., p0pm := ~em, which is also an ordering for the basis of the tangent space of ℝ^m at any point.
Definition 9.2.4: Orientation preserving diffeomorphism
A diffeomorphism f is orientation preserving if at every point x ∈ ℝ^m the basis {f∗~e1, f∗~e2, ..., f∗~em} of Tf(x)M has the same orientation as M.
Consider a coordinate cover {Uj} of M, which is also a cover of supp(ω). Since supp(ω) is compact, we can choose a finite subcover of supp(ω) ∩ D, {U1, U2, ..., Uk}, and if U0 := M − ((supp ω) ∩ D), then {U0, U1, U2, ..., Uk} is a cover of M. Let {ε0, ε1, ..., εk} be a partition of unity subordinate to the cover {U0, U1, U2, ..., Uk}. Now, associate to each of the elements U1, U2, ..., Uk of the cover a regular m−simplex s1, ..., sk,^8 corresponding to diffeomorphisms with images s1(σ̄m), ..., sk(σ̄m) in D. We assume (without loss of generality, as we can always take smaller open sets) that the open sets U1, ..., Uk are small enough so that either
1) Uj ⊂ int(sj(σ̄m)) ⊆ int(D),   (9.2.8)
where int denotes the interior of a set (which is the case of s1 in Figure 23), or
2) sj(σ̄m) ⊆ D and Uj ∩ ∂D ⊆ s_j^m(σ̄m−1),   (9.2.9)
i.e., Uj ∩ ∂D is a subset of the m−th face of the simplex sj. The faces of an r−simplex are defined in (8.5.3)-(8.5.6). This is the case of the simplex s2 in Figure 23.
Definition 9.2.5: Integral of an m-form on a Regular Domain
Within this setting, we define the integral
∫_D ω := Σ_{i=1}^{k} ∫_{si} εi ω,   (9.2.10)
that is, the integral is defined in terms of the integrals over simplexes as defined in (9.1.1).
^8 For simplicity of notation, we omit the subscript m. It is understood that we are dealing with m−simplexes.
Figure 23: Two types of open sets U and V in the cover of supp(ω) ∩ D and the associated
regular simplexes s1 , s2 , respectively.
We now show that this definition is independent of the cover, the simplexes, and the partition of unity used. It only depends on ω and D. Let {U0, U1, ..., Uk} be the above cover, with associated regular simplexes s1, ..., sk and partition of unity {ε0, ε1, ..., εk}, as in the definition (9.2.10), and let {V0, V1, ..., Vl} be another cover with associated simplexes f1, f2, ..., fl and partition of unity δ0, δ1, ..., δl. Using the latter set-up, the definition (9.2.10) gives
∫_D ω := Σ_{j=1}^{l} ∫_{fj} δj ω.   (9.2.11)
Since δ0 = 0 on supp(ω) ∩ D, we have Σ_{j=1}^{l} δj = 1 on supp(ω) ∩ D. Inserting this in the definition (9.2.10), we get
Σ_{i=1}^{k} ∫_{si} εi ω = Σ_{i=1}^{k} ∫_{si} Σ_{j=1}^{l} δj εi ω = Σ_{i,j} ∫_{si} δj εi ω.   (9.2.12)
Analogously, starting from (9.2.11), we get
Σ_{j=1}^{l} ∫_{fj} δj ω = Σ_{i,j} ∫_{fj} δj εi ω.   (9.2.13)
To show that (9.2.12) and (9.2.13) are equal, fix i and j in the right-hand side of (9.2.12). By definition we have
∫_{si} δj εi ω := ∫_{σ̄m} s_i^*(δj εi ω),   (9.2.14)
where the right-hand side is an integral in ℝ^m. On ℝ^m, we consider the variable transformation s_i^{−1} ◦ fj, which is orientation preserving. We have the following:
Z
Z
Z
∗
(δj i ω) :=
si (δj i ω) =
s∗i (δj i ω) =
si
Z
fj−1 (Vj ∩Ui )
s−1
i (Vj ∩Ui )
σ̄m
(s−1
i
◦
Z
fj )∗ s∗i (δj i ω)
Z
fj∗ (δj i ω)
=
fj−1 (Vj ∩Ui )
(9.2.15)
fj∗ (δj i ω) =
Z
:=
(δj i ω).
fj
σ̄m
The second equality is due to the fact that δj i ω is possibly different from zero only on Vj ∩ Ui .
The third equality is an application of the transformation of variables formula for integrals in
R
I m together with the fact that s−1
i ◦ fj is assumed to be orientation preserving. The fourth
equality follows directly from properties of the pullback. The fifth equality follows again from
the fact that δj i ω is possibly different from zero only on Vj ∩ Ui .
This definition of the integral on a regular domain allows us to relate the integral of dω on
D with the integral of ω on ∂D, for ω ∈ Ωm−1 (M ), which is the content of the second version of
Stokes theorem.
9.2.4 The second version of Stokes theorem
For the sake of clarity of the presentation, we first record a simple property.
Lemma 9.2.1
Let ε1, ..., εk be the part of the partition of unity chosen above. Then, on supp(ω) ∩ D,
dω = Σ_{i=1}^{k} d(εi ω).   (9.2.16)
Proof. We have
Σ_{i=1}^{k} d(εi ω) = Σ_{i=1}^{k} (dεi ∧ ω + εi dω) = (Σ_{i=1}^{k} dεi) ∧ ω + Σ_{i=1}^{k} εi dω.
The first term of this expression is zero, since Σ_{i=1}^{k} εi ≡ 1, and therefore constant, on supp(ω) ∩ D. Moreover,
Σ_{i=1}^{k} εi dω = (Σ_{i=1}^{k} εi) dω = dω.   (9.2.17)
Theorem 9.2.1: Second version of Stokes Theorem
Let ω be an (m−1)-form with compact support and D a regular domain. Then
∫_D dω = ∫_{∂D} ω.   (9.2.18)
Proof. Using Lemma 9.2.1, we write
∫_D dω = Σ_{i=1}^{k} ∫_D d(εi ω) := Σ_{i=1}^{k} Σ_{j=1}^{k} ∫_{sj} εj d(εi ω) = Σ_{i=1}^{k} ∫_{∂si} εi ω.   (9.2.19)
Here the second equality is due to the fact that, as we have seen in the definition, ∫_D η, for every m-form η, does not depend on the cover of supp(η) ∩ D or on the partition of unity and simplexes chosen. In our case η := d(εi ω) is zero outside of Ui, and we can take as a cover of supp(η) ∩ D all the Uj ∩ Ui's, for j = 0, 1, ..., k. We can also take all simplexes equal to si. Therefore, by the independence of the choice of the simplexes, we have
Σ_{j=1}^{k} ∫_{sj} εj d(εi ω) = Σ_{j=1}^{k} ∫_{si} εj d(εi ω) = ∫_{si} (Σ_{j=1}^{k} εj) d(εi ω) = ∫_{si} d(εi ω).
The third equality in (9.2.19) follows from the first version of Stokes' theorem.
We now look at
∫_{∂si} εi ω
in (9.2.19), for a given i. If Ui ⊂ int(si(σ̄m)), that is, if we are in the situation described by (9.2.8), we have
∫_{∂si} εi ω = 0,   (9.2.20)
since εi ω = 0 on the boundary of si(σ̄m).
Assume now that we are in the situation (9.2.9), i.e., Ui ∩ ∂D ⊆ s_i^m(σ̄m−1). Then ∫_{∂si} εi ω has a nonzero contribution only along the m−th face of ∂si. That is, we have
∫_{∂si} εi ω = Σ_{l=0}^{m} (−1)^l ∫_{s_i^l} εi ω = (−1)^m ∫_{s_i^m} εi ω.   (9.2.21)
Now εi ω is zero on ∂D outside s_i^m(σ̄m−1). Moreover, since si is orientation preserving, it also preserves the induced orientation of its m−th face as part of the boundary of σ̄m when mapping to the oriented manifold ∂D. However, the induced orientation on this face is (−1)^m times the standard orientation of σ̄m−1 (cf. Example 9.2.1). Therefore, the orientation of s_i^m(σ̄m−1) on ∂D is (−1)^m times the orientation of ∂D. Combining these two observations, we obtain
∫_{∂si} εi ω = (−1)^{2m} ∫_{∂D} εi ω = ∫_{∂D} εi ω.   (9.2.22)
Using (9.2.21) and (9.2.22) in (9.2.19), we obtain
∫_D dω = Σ_{i=1}^{k} ∫_{∂D} εi ω = ∫_{∂D} (Σ_{i=1}^{k} εi) ω = ∫_{∂D} ω,   (9.2.23)
as desired.
9.3 De Rham Theorem and Poincaré Lemma
Definition 9.3.1: Inner product on Cr(M) × Ωr(M)
The integral of a differential form over an r−chain determines an inner product (·,·) : Cr(M) × Ωr(M) → ℝ, defined as
(c, ω) := ∫_c ω,   c ∈ Cr(M), ω ∈ Ωr(M).   (9.3.1)
Elementary properties of the integral prove that this is indeed an inner product, i.e., linear in both c and ω. Stokes' theorem is the statement that the operators ∂ and d are adjoints of each other with respect to this inner product. It reads, for c ∈ Cr(M), ω ∈ Ωr−1(M), as
(∂c, ω) := ∫_{∂c} ω = ∫_c dω := (c, dω).   (9.3.2)
Definition 9.3.2: Λ: Inner product on Hr(M) × H^r(M)
The inner product (·,·) : Cr(M) × Ωr(M) → ℝ induces an inner product Λ : Hr(M) × H^r(M) → ℝ, as follows:
Λ([c], [ω]) := (c, ω) := ∫_c ω.   (9.3.3)
We have to check that this definition is well posed, i.e., independent of the representatives c and ω. Let us choose a different representative for [c], c + ∂c1. We have
∫_{c+∂c1} ω = ∫_c ω + ∫_{∂c1} ω = ∫_c ω + ∫_{c1} dω = ∫_c ω,   (9.3.4)
where we used Stokes' theorem and, in the last equality, the fact that ω is a closed form, so that dω = 0. Also we have, for ψ ∈ Ωr−1(M),
∫_c (ω + dψ) = ∫_c ω + ∫_c dψ = ∫_c ω + ∫_{∂c} ψ = ∫_c ω,   (9.3.5)
where again we used Stokes’ theorem and the fact that c is a cycle so that ∂c = 0.
If we fix [ω] ∈ H^r(M), Λ(·, [ω]) is a linear map Hr(M) → ℝ and therefore an element of the dual space (Hr(M))^*. Therefore, we have a linear map λ : H^r(M) → (Hr(M))^*, which associates to [ω] the element Λ(·, [ω]). Is this map an isomorphism? This is the content of the De Rham Theorem:
Theorem 9.3.1: De Rham Theorem
If M is a compact manifold, then H r (M ) and Hr (M ) are finite dimensional and the above
map λ : [ω] → Λ(·, [ω]) is an isomorphism λ : H r (M ) → (Hr (M ))∗ .
Therefore, under the assumptions of the theorem, Hr(M) and H^r(M) are duals of each other, and information on one can be obtained from information on the other. Moreover, under the assumptions of the theorem, the inner product Λ is nondegenerate (see Exercise 9.3).
9.3.1 Consequences of the De Rham Theorem
As a consequence of the De Rham theorem, the Betti numbers br(M) := dim Hr(M) are equal to dim(H^r(M)). Moreover, if [c1], [c2], ..., [ck] is a basis of Hr(M), then ω ∈ Z^r(M) is exact if and only if
(cj, ω) := ∫_{cj} ω = 0,   ∀j = 1, 2, ..., k.   (9.3.6)
In fact, this would mean that λ([ω]) is the zero map in (Hr(M))^*, but since λ is an isomorphism, [ω] must be zero, that is, ω is an exact form. This means that we can check that a form is exact by calculating its integral over a finite number of chains.
Given a basis {[c1], [c2], ..., [ck]} of Hr(M), we can use the non-degenerate inner product Λ to find a dual basis {[ω1], ..., [ωk]} for H^r(M), which satisfies the requirement
Λ([ci], [ωj]) = δi,j.   (9.3.7)
To see this, just take any basis {[ω̃l]} of H^r(M) and, for fixed j, solve the equations
Λ([cs], Σ_{l=1}^{br} xl [ω̃l]) = Σ_{l=1}^{br} xl Λ([cs], [ω̃l]) = δs,j,   s = 1, 2, ..., br,   (9.3.8)
since the matrix Λ([cs], [ω̃l]) is nonsingular.
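Finding the dual basis from (9.3.8) amounts to solving br linear systems with the nonsingular matrix Λ([cs], [ω̃l]). A sketch with a hypothetical 2 × 2 pairing matrix and a small Gaussian solver (all numbers illustrative):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b2 for a, b2 in zip(M[r], M[c])]
    return [M[r][n] / M[r][r] for r in range(n)]

# Hypothetical pairing matrix L[s][l] = Lambda([c_s], [omega~_l]) (nonsingular).
L = [[2.0, 1.0],
     [1.0, 1.0]]

# For each j, solve sum_l x_l L[s][l] = delta_{s,j}, as in (9.3.8); the
# solution gives the coefficients of omega_j in the basis {omega~_l}.
dual = [solve(L, [1.0 if s == j else 0.0 for s in range(2)]) for j in range(2)]
print(dual)   # [[1.0, -1.0], [-1.0, 2.0]]
```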
9.3.2 Poincaré Lemma
The fact that H^r(M) ≠ {0} indicates that there are closed r−forms that are not exact. How 'many' such forms there are depends on how 'big' H^r(M) is. The Poincaré Lemma states a condition on M for H^r(M) to be zero for r = 1, ..., m. Once again, this is a topological condition which has consequences for a space defined in terms of analytic conditions. Notice M is not required to be compact here.
Theorem 9.3.2: Poincaré Lemma
If M is contractible to a point, then H^r(M) = {0} for r = 1, 2, ..., m.^a Therefore, any closed r-form on M is exact.
^a And obviously for r = m + 1, m + 2, ..., since Ω^{m+1}(M) = Ω^{m+2}(M) = ··· = 0.
In particular, if U is a contractible coordinate neighborhood in M, we can consider the restriction to U of a closed differential form ω, and this restriction is exact. Every closed form is locally exact, and H^r(M) captures the global properties of M.
9.4 Exercises
Exercise 9.1 Using the linearity of the Riemann integral, prove that the integral defined in Definition 9.1.1 is also linear.
Exercise 9.2 Prove that H^r(ℝ^n) = 0 for r = 1, 2, ..., n and H^0(ℝ^n) ≅ ℝ.
Exercise 9.3 Prove the inner product Λ is nondegenerate if the manifold is compact.
10 Lie groups Part I: Basic Concepts
10.1 Basic Definitions and Examples
Definition 10.1.1: Lie Groups
A Lie group G is a differentiable manifold which is also a group and such that the group operations, product · : G × G → G, (g1, g2) → g1 · g2, and inversion ^{−1} : G → G, g → g^{−1}, are C^∞ (here G × G is the product manifold of G with itself). The dimension of the Lie group is its dimension as a manifold.
The simplest example of a Lie group is ℝ^* := ℝ − {0} with the multiplication operation, which is a manifold with two connected components and a Lie group since the operations of multiplication and inversion are C^∞. The connected component ℝ^+ of positive real numbers is also a Lie group, with the same operations as ℝ^*. It obviously has dimension 1. A generalization of ℝ^* is the general linear group Gl(n, ℝ) of n × n real matrices with determinant different from zero, where the product and the inversion are the standard product of matrices and inversion of a matrix. As a manifold, Gl(n, ℝ) has the subset topology derived from the topology of Mn,n(ℝ) ≡ ℝ^{n²}, and in fact Gl(n, ℝ) is the same as ℝ^{n²} with the closed set {A ∈ Mn,n(ℝ) | det(A) = 0} removed. Analogously to ℝ^+, Gl^+(n, ℝ) is the Lie group of nonsingular matrices with positive determinant. The general linear group Gl(n, ℂ) is the Lie group of nonsingular n × n matrices with complex entries, with the product and inversion operations of matrices. This is a 2n²-dimensional Lie group; the 2n² real parameters give the coordinate functions of a (global) chart.
All Lie groups which are subgroups (in the algebraic sense) of Gl(n, ℝ) or Gl(n, ℂ) are called matrix Lie groups or linear Lie groups. They give some of the most important examples of Lie groups. Among these, SL(n, ℝ) (SL(n, ℂ)), the special linear group, is the Lie group of matrices in Gl(n, ℝ) (Gl(n, ℂ)) with determinant equal to 1. The Lie group O(n), the orthogonal group, is the Lie group of n × n orthogonal matrices in Gl(n, ℝ), i.e., A ∈ O(n) ⇔ A^T A = A A^T = 1; SO(n), the special orthogonal group, is the Lie group of matrices in O(n) with determinant equal to 1. The Lie group U(n), the unitary group, is the Lie group (subgroup of Gl(n, ℂ)) of n × n unitary matrices, i.e., U ∈ U(n) ⇔ U†U = U U† = 1. The Lie group SU(n), the special unitary group, is the Lie group of matrices in U(n) with determinant equal to one.
$O(n)$ and $U(n)$ are Lie groups defined by the fact that they preserve an inner product; for example, $A \in O(n)$ if and only if $(A\vec{x})^T(A\vec{x}) = \vec{x}^T A^T A \vec{x} = \vec{x}^T \vec{x}$ for every $\vec{x} \in \mathbb{R}^n$. More generally, one can consider Lie groups which preserve a quadratic form, not necessarily positive definite. Consider the matrix $1_{p,q} := \mathrm{diag}(1_p, -1_q)$, where $1_s$ is the $s \times s$ identity matrix. This defines a quadratic form $\vec{x}^T 1_{p,q} \vec{x} := x_1^2 + x_2^2 + \cdots + x_p^2 - x_{p+1}^2 - x_{p+2}^2 - \cdots - x_{p+q}^2$. Then the Lie group $O(p, q)$ is the Lie group of all matrices $A$ such that
$$A^T 1_{p,q} A = 1_{p,q}.$$
A particularly important case is $O(1, 3)$, the Lorentz group, where $1_{1,3}$ represents the Minkowski metric in space-time $\mathbb{R}^4$.
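The defining conditions above can be checked numerically. The following is a minimal sketch (the rotation matrix and the Lorentz boost are standard sample elements chosen by us, not taken from the text): it verifies $A^T A = 1$ and preservation of the Euclidean inner product for a rotation in $SO(2)$, and $B^T 1_{1,3} B = 1_{1,3}$ for a boost in $O(1,3)$.

```python
import numpy as np

# Sample rotation angle and boost rapidity (arbitrary choices).
theta, chi = 0.7, 1.3

# A rotation matrix lies in SO(2) ⊂ O(2): A^T A = 1 and det A = 1.
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(A.T @ A, np.eye(2))
assert np.isclose(np.linalg.det(A), 1.0)

# O(n) preserves the Euclidean inner product: (Ax)^T (Ay) = x^T y.
x, y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
assert np.isclose((A @ x) @ (A @ y), x @ y)

# A Lorentz boost B in the (t, x)-plane satisfies B^T eta B = eta,
# with eta = 1_{1,3} = diag(1, -1, -1, -1).
eta = np.diag([1.0, -1.0, -1.0, -1.0])
B = np.eye(4)
B[0, 0] = B[1, 1] = np.cosh(chi)
B[0, 1] = B[1, 0] = -np.sinh(chi)
assert np.allclose(B.T @ eta @ B, eta)
print("O(2) and O(1,3) defining conditions verified")
```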
10.2 Lie subgroups and coset spaces
Definition 10.2.1: Lie subgroups
A Lie subgroup H of a Lie group G is a subgroup of G (in the algebraic sense) which is also a Lie group and a submanifold of G.
Notice in particular that, since the inclusion map $i : H \to G$ is a homeomorphism onto its image by the definition of submanifold, the topology on H has to coincide with the subset topology of H as a subset of G. This observation gives rise to examples of Lie groups which are subgroups but not Lie subgroups. The following example is an instance of this fact.
Example Let $T^2$ be the 2-dimensional torus defined as $T^2 := S^1 \times S^1$, which is a Lie group with the group operation $(e^{i\theta_1}, e^{i\theta_2}) \cdot (e^{i\theta_3}, e^{i\theta_4}) := (e^{i(\theta_1+\theta_3)}, e^{i(\theta_2+\theta_4)})$. Consider now the Lie group $H_\lambda := \{P \in T^2 \,|\, P = (e^{it}, e^{i\lambda t}),\ t \in \mathbb{R}\}$, which is a subgroup of $T^2$ and a Lie group with the operation inherited from $T^2$. The topology on H is such that H is locally homeomorphic to $\mathbb{R}$. Assume however that $\lambda$ is an irrational number. Then this topology does not coincide with the subset topology. In particular, consider the open set N in H, $N := \{(e^{it}, e^{i\lambda t}) \in H \,|\, t \in (-\epsilon, \epsilon)\}$. If we try to realize N as $H \cap V$, where V is an open set in $T^2$, we should be able to find a $\delta$ sufficiently small so that $H \cap I_\delta \subseteq N$, where $I_\delta := \{(e^{i\gamma}, e^{i\psi}) \in G \,|\, \gamma, \psi \in (-\delta, \delta)\}$. In fact, any open set V in G is the union of $I_\delta$'s, therefore $N = H \cap V$ implies that there exists a $\delta$ such that
$$H \cap I_\delta \subseteq N. \qquad (10.2.1)$$
Let us have a closer look at $H \cap I_\delta$. $(e^{i\gamma}, e^{i\psi})$ is in H if and only if, for some real $x$, there exist integers $k$ and $m$ such that
$$\gamma = x + 2k\pi, \qquad \psi = \lambda x + 2m\pi. \qquad (10.2.2)$$
Consider now the coordinate chart $\phi$ on $T^2$ which maps $I_\delta$ to the box $(-\delta, \delta) \times (-\delta, \delta) \subseteq \mathbb{R}^2$. Assuming $\epsilon$ sufficiently small, from (10.2.1) we should have
$$\phi(H \cap I_\delta) \subseteq \phi(N). \qquad (10.2.3)$$
However $\phi(N)$ is a single segment in $(-\delta, \delta) \times (-\delta, \delta)$, while $\phi(H \cap I_\delta)$ is made up of segments of lines (obtained by solving for $x$ in (10.2.2)) $\psi = \lambda(\gamma - 2k\pi) + 2m\pi$, all with the same slope $\lambda$ and with intersections with the $\psi$ axis
$$\psi_{m,k} = 2\pi(m - \lambda k). \qquad (10.2.4)$$
Since $\lambda$ is an irrational number, besides the intersection corresponding to $m = 0$, $k = 0$, $\psi_{0,0} = 0$, there are an infinite number of such intersections $\psi_{m,k} \in (-\delta, \delta)$, no matter how small $\delta$ is.
More concretely, let us ask the question: how many values $\psi_{m,k}$ in (10.2.4) exist so that $|\psi_{m,k}| < \delta$? That is, how many pairs $(m, k)$ exist so that
$$|m - \lambda k| < \frac{\delta}{2\pi}\,? \qquad (10.2.5)$$
There are in fact an infinite number of them, because $\lambda$ is irrational. This is a consequence of Dirichlet's approximation theorem [2]. This says that for every real number $\lambda$ and positive integer N, there exist two integers $k$ and $m$, with $1 \le k \le N$, such that $|k\lambda - m| < \frac{1}{N}$. Now assume that there were only a finite number of pairs, and take the pair $(m, k)$ which gives the minimum of $|m - \lambda k|$. Since $\lambda$ is irrational, such a minimum, call it $\mu_{\min}$, cannot be zero. Then choose an N such that $\frac{1}{N} < \mu_{\min}$; by the theorem we can choose $(k, m)$ so that $|m - \lambda k| < \mu_{\min}$, which is a contradiction. Therefore, no matter what $\delta$ is, there are an infinite number of values of $\psi_{m,k}$ in $(-\delta, \delta)$, which is not compatible with (10.2.3).
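The counting condition (10.2.5) can be illustrated numerically. A minimal sketch (the choice $\lambda = \sqrt{2}$, the value of $\delta$, and the search bound are arbitrary assumptions of ours): for each $k$, the best integer approximation of $\lambda k$ is $m = \mathrm{round}(\lambda k)$, and we count how many pairs fall inside the window.

```python
import math

# For irrational lambda, many pairs (m, k) satisfy |m - lambda*k| < delta/(2*pi),
# so the intersections psi_{m,k} = 2*pi*(m - lambda*k) accumulate near 0.
lam = math.sqrt(2)     # an irrational slope (arbitrary choice)
delta = 1e-3           # window half-width (arbitrary choice)

hits = []
for k in range(1, 20000):
    m = round(lam * k)                     # nearest integer to lambda*k
    if abs(m - lam * k) < delta / (2 * math.pi):
        hits.append((m, k))

# Several distinct pairs fall inside the window, as the argument predicts
# (e.g. the Pell-equation denominators k = 2378, 5741, ... for sqrt(2)).
assert len(hits) > 1
print(f"found {len(hits)} pairs (m, k) with |m - lam*k| < delta/(2*pi)")
```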
The following important theorem gives a condition for a subgroup H of a Lie group G to be
a Lie subgroup.
Theorem 10.2.1: Closed subgroup theorem
A subgroup H of a Lie group G which is closed in the topology of G is a Lie subgroup.
This theorem in particular shows that the matrix Lie groups $SL(n, \mathbb{R})$, $SO(n)$, and so on are Lie subgroups of $Gl(n, \mathbb{R})$, as they are defined as inverse images of closed sets of $\mathbb{R}^m$, for appropriate $m$, under continuous maps. For example, $SL(n, \mathbb{R})$ is the inverse image of the set $\{1\} \subseteq \mathbb{R}$ under the function $\det$; $SO(n)$ is the inverse image of a point under the map that sends $A \in Gl(n, \mathbb{R})$ to the $n^2$ entries of $A^T A$ together with $\det(A)$, a closed set in $\mathbb{R}^{n^2+1}$. An analogous discussion applies to $SU(n)$, etc., as Lie subgroups of $Gl(n, \mathbb{C})$.
If H is a Lie subgroup of G, we can define an equivalence relation on elements of G by saying that $g \sim g'$ if and only if there exists $h \in H$ with $g' = gh$. The space of all equivalence classes is called the (left)$^9$ coset space, and any equivalence class is called a (left) coset. The coset space $G/H$ is a manifold of dimension $\dim(G) - \dim(H)$ (see subsection ?? below). It is also a group if H is a normal subgroup, that is, $g h g^{-1} \in H$ for every $h \in H$ and $g \in G$. In this case the product operation can be defined between two equivalence classes as $[g_1][g_2] := [g_1 g_2]$, which is independent of the representatives chosen since
$$[g_1 h_1][g_2 h_2] = [g_1 h_1 g_2 h_2] = [g_1 g_2 g_2^{-1} h_1 g_2 h_2] = [g_1 g_2 h] = [g_1 g_2],$$
where $h := g_2^{-1} h_1 g_2 h_2 \in H$ if $h_1$ and $h_2$ are in H. Also $[g]^{-1} := [g^{-1}]$, independent of the representative, since
$$[gh]^{-1} := [(gh)^{-1}] = [h^{-1} g^{-1}] = [g^{-1} g h^{-1} g^{-1}] = [g^{-1} \tilde{h}] = [g^{-1}],$$
where $\tilde{h} := g h^{-1} g^{-1} \in H$ if $h \in H$.
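The representative-independence computation above can be seen in a toy case. A minimal sketch, with choices that are ours and not from the text: take $G = \mathbb{R}^*$ under multiplication and the normal subgroup $H = \mathbb{R}^+$ (G is abelian, so H is automatically normal); the coset of $g$ is determined by its sign, and $G/H$ has exactly two elements.

```python
import random

def coset(g):
    """Label the coset gH in R*/R+: all positive reals are equivalent."""
    return 1 if g > 0 else -1

random.seed(0)
for _ in range(100):
    g1 = random.uniform(-5, 5) or 1.0   # representatives in G = R* (guard against 0)
    g2 = random.uniform(-5, 5) or 1.0
    h1 = random.uniform(0.1, 5)         # elements of H = R+
    h2 = random.uniform(0.1, 5)
    # [g1 h1][g2 h2] = [g1 g2]: the class product does not depend on representatives.
    assert coset((g1 * h1) * (g2 * h2)) == coset(g1 * g2)
    # [gh]^{-1} = [g^{-1}]: inversion is also representative-independent.
    assert coset(1.0 / (g1 * h1)) == coset(1.0 / g1)
print("coset product and inverse are well defined on R*/R+ ~ {+1, -1}")
```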
10.3 Invariant vector fields and Lie algebras
Given an element $a$ in a Lie group G, the left translation $L_a$ on G is the map $L_a : G \to G$, $L_a(g) := ag$ (that is, multiplication on the left). Analogously, the right translation $R_a$ is the map $R_a : G \to G$, $R_a(g) := ga$ (that is, multiplication on the right).
On G, just like on any other manifold, we can define the Lie algebra of vector fields $\mathcal{X}(G)$. However, a subclass of these vector fields is particularly important.
9 Because we have used multiplication on the left by $g \in G$; otherwise they would be called right coset spaces.
Definition 10.3.1: (Invariant vector field)
A vector field X on G is said to be left invariant (right invariant) if, for every $g \in G$,
$$L_{a*} X_g = X_{L_a(g)} = X_{ag}, \qquad (R_{a*} X_g = X_{R_a(g)} = X_{ga}), \qquad (10.3.1)$$
or, in terms of vector fields,
$$L_{a*} X = X, \qquad (R_{a*} X = X). \qquad (10.3.2)$$
From now on we shall deal only with left invariant vector fields as the theory for right invariant
vector fields can be obtained analogously.
Example 10.1. In a quest to understand what left invariant vector fields look like, we first consider the Lie group $\mathbb{R}^*$ with multiplication as the group operation. We know this is a one-dimensional manifold, so in the standard coordinate $x$ a vector field will look like
$$X = f \frac{d}{dx}, \qquad (10.3.3)$$
where $f$ is a function of the point $x$. Now the vector field $L_{a*}X$ at $ax$ is written as $h \frac{d}{dx}$, where $h$ is $L_{a*}X$ applied to the coordinate function $\phi$. That is,
$$h(ax) = [(L_{a*} X)(\phi)](ax) = f(x)\,\frac{d}{dx}(\phi \circ L_a) = f(x)\,\frac{d}{dx}(ax) = f(x)\,a.$$
However, since $L_{a*}X = X$, we must have $f(ax) = h(ax)$. Therefore $f(ax) = a f(x)$, i.e., $f$ is a linear function, and the left invariant vector fields (10.3.3) are of the form
$$X = a x \frac{d}{dx}. \qquad (10.3.4)$$
We shall generalize this example to $Gl(n, \mathbb{R})$ later.
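The condition $f(ax) = a f(x)$ derived above can be tested numerically. A minimal sketch (the sample coefficient functions and tolerances are arbitrary choices of ours): linear coefficients pass the invariance test, while a quadratic or constant coefficient fails.

```python
import random

# On (R*, multiplication), L_{a*}X at ax has coefficient a*f(x), so the field
# f d/dx is left invariant iff f(a*x) == a*f(x) for all group elements a.
random.seed(1)

def is_left_invariant(f, trials=200):
    for _ in range(trials):
        a = random.uniform(0.1, 3.0) * random.choice([-1, 1])  # group element in R*
        x = random.uniform(0.1, 3.0) * random.choice([-1, 1])  # point of R*
        if abs(f(a * x) - a * f(x)) > 1e-9 * max(1.0, abs(f(a * x))):
            return False
    return True

assert is_left_invariant(lambda x: 2.5 * x)      # X = 2.5 x d/dx is left invariant
assert not is_left_invariant(lambda x: x ** 2)   # f(x) = x^2 is not
assert not is_left_invariant(lambda x: 1.0)      # constant coefficient fails too
print("only linear coefficients give left invariant fields on R*")
```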
Example 10.2. As another simple example, consider $\mathbb{R}^2$ but with the Lie group operation $+$, $(a_1, a_2) + (b_1, b_2) = (a_1 + b_1, a_2 + b_2)$ (notice this group is commutative, so left translation and right translation coincide). Let $a := (a_1, a_2)$. A general vector field, in the standard (Cartesian) coordinates, will have the form
$$X = f \frac{\partial}{\partial x} + g \frac{\partial}{\partial y}, \qquad (10.3.5)$$
for smooth functions $f = f(x, y)$ and $g = g(x, y)$. Write $L_{a*}X$ as $L_{a*}X := h_1(x, y)\frac{\partial}{\partial x} + h_2(x, y)\frac{\partial}{\partial y}$, and recall that $h_1$ is $L_{a*}X$ applied to the first coordinate function (say $\phi$) and $h_2$ is $L_{a*}X$ applied to the second coordinate function (say $\psi$). That is, $h_1(x + a_1, y + a_2) = (L_{a*}X)|_{(x+a_1, y+a_2)}\phi$, $h_2(x + a_1, y + a_2) = (L_{a*}X)|_{(x+a_1, y+a_2)}\psi$.
In particular,
$$(L_{a*}X)|_{(x+a_1, y+a_2)}\phi = X|_{(x,y)}(\phi \circ L_a) = f(x, y)\frac{\partial}{\partial x}(x + a_1) + g(x, y)\frac{\partial}{\partial y}(x + a_1) = f(x, y). \qquad (10.3.6)$$
Recall that $\phi \circ L_a = x + a_1$, so that the derivative with respect to $x$ gives 1. Analogously, applying $L_{a*}X$ to the second coordinate function $\psi$, we get $h_2(x + a_1, y + a_2) = g(x, y)$. Therefore
$$(L_{a*}X)|_{(x+a_1, y+a_2)} = f(x, y)\frac{\partial}{\partial x}\Big|_{(x+a_1, y+a_2)} + g(x, y)\frac{\partial}{\partial y}\Big|_{(x+a_1, y+a_2)}. \qquad (10.3.7)$$
On the other hand,
$$X|_{(x+a_1, y+a_2)} = f(x + a_1, y + a_2)\frac{\partial}{\partial x}\Big|_{(x+a_1, y+a_2)} + g(x + a_1, y + a_2)\frac{\partial}{\partial y}\Big|_{(x+a_1, y+a_2)}. \qquad (10.3.8)$$
Since (10.3.7) and (10.3.8) must be equal because of invariance, we have that $f$ and $g$ must be constant in (10.3.5).
10.3.1 The Lie algebra of a Lie group
From Examples 10.1 and 10.2 we notice that, in these cases, the space of left invariant vector fields is a finite dimensional vector space whose dimension is equal to the dimension of the Lie group G as a manifold. We also observe in the previous two examples that the space of left invariant vector fields is a Lie algebra with the Lie bracket operation, since the commutator of two left invariant vector fields is another left invariant vector field (in fact zero in these cases), and therefore a Lie subalgebra of the Lie algebra of all vector fields $\mathcal{X}(G)$. For example, in the case of $\mathbb{R}^*$ of Example 10.1, the commutator of two left invariant vector fields (in the standard coordinate) gives
$$\left[a x \frac{\partial}{\partial x},\, b x \frac{\partial}{\partial x}\right] = a x \frac{\partial}{\partial x}\left(b x \frac{\partial}{\partial x}\right) - b x \frac{\partial}{\partial x}\left(a x \frac{\partial}{\partial x}\right) = a b x \frac{\partial}{\partial x} + a b x^2 \frac{\partial^2}{\partial x^2} - a b x \frac{\partial}{\partial x} - a b x^2 \frac{\partial^2}{\partial x^2} = 0.$$
We are therefore left wondering whether these features are true in general.
The fact that the set of left invariant vector fields is a vector space follows immediately from the definition, and the fact that it is closed under the Lie bracket also follows from the definition. In fact, take two left invariant vector fields X and Y and check the definition of left invariance for $[X, Y]$: using the distributivity property of the push-forward with respect to the Lie bracket (see Exercise 5.6), we have
$$L_{a*}([X, Y]|_g) = [L_{a*}X, L_{a*}Y]|_{ag} = [X, Y]|_{ag}. \qquad (10.3.9)$$
We shall denote by $\mathfrak{g}$ the Lie algebra of left invariant vector fields on G. The fact that $\mathfrak{g}$ is a vector space of dimension equal to the dimension of the manifold G follows from the following proposition.
Proposition 10.3.1
There exists an isomorphism between the tangent space at the identity $T_e G$ and the space of left invariant vector fields of G. This isomorphism associates to $V \in T_e G$ the left invariant vector field $X_V$ defined by
$$X_V|_a = L_{a*} V. \qquad (10.3.10)$$
Proof. It is clear that the map (10.3.10) is linear. Moreover, it indeed gives left invariant vector fields, since for $b \in G$,
$$L_{b*} X_V|_a = L_{b*} L_{a*} V = (L_b \circ L_a)_* V = L_{ba*} V = X_V|_{ba}. \qquad (10.3.11)$$
Moreover, the map is onto, since for any left invariant vector field X the vector $V = X_e \in T_e G$ produces the vector field X according to (10.3.10). It is also one to one, since different vectors in $T_e G$ give different left invariant vector fields. In fact, the map $X_V \to V = X_V|_e$ provides the inverse of the isomorphism.
The Lie algebra associated with a Lie group is by definition the Lie algebra of left invariant vector fields on the Lie group. As we have seen, it is a vector space isomorphic to the tangent space at the identity. In the following we shall follow the standard notation, denoting by lower case gothic letter(s) $\mathfrak{g}$ the Lie algebra corresponding to a Lie group G denoted by the same upper case letter(s). For instance, $\mathfrak{gl}(n, \mathbb{R})$ denotes the Lie algebra associated with $Gl(n, \mathbb{R})$, and $\mathfrak{so}(n)$ denotes the Lie algebra associated with $SO(n)$.
10.3.2 Lie algebra of a Lie subgroup
In the following subsection, we shall describe the Lie algebra $\mathfrak{gl}(n, \mathbb{R})$ of $Gl(n, \mathbb{R})$ in the standard coordinates (which take as a coordinate each entry of an $n \times n$ matrix). For Lie subgroups of $Gl(n, \mathbb{R})$, such as $SO(n)$, we could in principle choose appropriate local coordinates ($\frac{n(n-1)}{2}$ of them in the case of $SO(n)$) and describe the Lie algebra in these coordinates. However, it is more convenient to see the Lie algebra of the Lie subgroup as a subalgebra of the Lie algebra of the bigger group, and continue to work in the basis of $\mathfrak{gl}(n, \mathbb{R})$, just restricting to a subspace. After all, for matrix groups we still want to work with $n \times n$ matrices. We discuss these Lie subalgebras of Lie subgroups in general terms here.
Let H be a Lie subgroup of G, and let $\mathfrak{h}$ and $\mathfrak{g}$ denote the corresponding Lie algebras. Let $i$ denote the inclusion map, which by definition of submanifold is an immersion. A map from $\mathfrak{h}$ to $\mathfrak{g}$ is obtained as follows. If V is an element of $T_e H$, then $i_* V$ is a vector of $T_e G$. $X_V$ is the element of $\mathfrak{h}$ corresponding to V, and $X_{i_* V}$ is the element of $\mathfrak{g}$ corresponding to $i_* V$. By combining the isomorphisms of (10.3.10) with $i_*$, i.e.,
$$\mathfrak{h} \to T_e H \xrightarrow{\ i_*\ } T_e G \to \mathfrak{g}, \qquad (10.3.12)$$
we obtain a linear map $\Phi$ from $\mathfrak{h}$ to $\mathfrak{g}$ of rank equal to $\dim(H)$, that is, an isomorphism $\Phi : \mathfrak{h} \to \mathfrak{g}$ onto its image.
It remains to show that $\Phi$ is a Lie algebra homomorphism from $\mathfrak{h}$ to $\mathfrak{g}$, that is, it preserves the Lie bracket in $\mathfrak{h}$, i.e., $\Phi([X_V, X_W]) = [\Phi(X_V), \Phi(X_W)]$, where the Lie bracket on the left is a Lie bracket in $\mathfrak{h}$ and the Lie bracket on the right is a Lie bracket in $\mathfrak{g}$, and therefore that $\Phi(\mathfrak{h})$ is a Lie subalgebra of $\mathfrak{g}$ isomorphic to $\mathfrak{h}$. To see this, we show that the vector
corresponding to $[X_{i_*V}, X_{i_*W}] := [\Phi(X_V), \Phi(X_W)]$ in $T_e G$ under the isomorphism (10.3.10), that is $[X_{i_*V}, X_{i_*W}](e)$, is exactly the same as the one corresponding to $\Phi([X_V, X_W])$, that is, $i_*([X_V, X_W](e))$. Therefore we have to show that
$$[X_{i_*V}, X_{i_*W}](e) = i_*([X_V, X_W](e)). \qquad (10.3.13)$$
To see this, notice that on $i(H)$, $X_{i_* V} = i_* X_V$. In fact, on a function $f$ we obtain
$$X_{i_* V}|_{i(a)} f := L_{i(a)*}(i_* V) f, \qquad (10.3.14)$$
$$(i_* X_V)|_{i(a)} f = X_V|_a (f \circ i) = L_{a*} V (f \circ i) = (i_* L_{a*} V) f, \qquad (10.3.15)$$
and (10.3.15) and (10.3.14) coincide since $L_{a*}$ and $i_*$ commute.$^{10}$ Therefore in (10.3.13) we obtain
$$[X_{i_*V}, X_{i_*W}](e) = [i_* X_V, i_* X_W](e) = i_*([X_V, X_W](e)), \qquad (10.3.16)$$
as desired.
10.4 The Lie algebras of matrix Lie groups
We first characterize the elements of $\mathfrak{gl}(n, \mathbb{R})$. Work in the standard coordinates on $Gl(n, \mathbb{R})$, with $x^{jk}$ representing the $jk$-th entry of a matrix in $Gl(n, \mathbb{R})$. General vector fields are of the form $f^{jk}(x) \frac{\partial}{\partial x^{jk}}$, for smooth functions $f^{jk}(x)$, and the question is: what is the form of the functions $f^{jk}(x)$ so that the vector field is left invariant? We determine this here, generalizing what we have done in Example 10.1 for $\mathbb{R}^*$. It is useful, although not necessary, to use the isomorphism (10.3.10) and start with a general tangent vector at $1_n \in Gl(n, \mathbb{R})$,
$$V = V^{jk} \frac{\partial}{\partial x^{jk}}\Big|_{1_n}. \qquad (10.4.1)$$
The corresponding element of $\mathfrak{gl}(n, \mathbb{R})$, $X_V$, is given at $p \in Gl(n, \mathbb{R})$ by $L_{p*} V$. The corresponding functions $f^{jk}(p)$ are given by this vector field applied to the $jk$-th coordinate function, which we denote by $\phi^{jk}$. So we have
$$f^{jk}(p) = L_{p*} V(\phi^{jk}) := V(\phi^{jk} \circ L_p) = V^{lm} \frac{\partial}{\partial x^{lm}}\Big|_{1_n}(\phi^{jk} \circ L_p) = V^{lm} \frac{\partial}{\partial x^{lm}}\Big|_{1_n}\left(\sum_b p^{jb} x^{bk}\right) = V^{lm} \sum_b p^{jb}\, \delta^b_l \delta^k_m = \sum_l p^{jl} V^{lk}. \qquad (10.4.2)$$
Therefore, if we think of V as a matrix, denote it by $\tilde{V}$, and of $p$ as a matrix, which is an element of $Gl(n, \mathbb{R})$, the vector field $X_V$ at $p$ has coordinates given by the matrix $p\tilde{V}$.
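The conclusion $X_V|_p = p\tilde{V}$ can be checked numerically by pushing V forward along a curve through the identity: take $\gamma(t) = 1_n + tV$, so $\gamma(0) = 1_n$ and $\gamma'(0) = V$, and differentiate $L_p(\gamma(t)) = p\,\gamma(t)$ at $t = 0$. A minimal sketch (the sample matrices V and p are arbitrary choices of ours):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
V = rng.standard_normal((n, n))        # a tangent vector at the identity 1_n
p = np.array([[2., 1., 0.],            # a fixed invertible element of Gl(3, R)
              [0., 1., 3.],
              [1., 0., 1.]])           # det(p) = 5, so p is nonsingular

eps = 1e-6
gamma = lambda t: np.eye(n) + t * V    # curve with gamma(0) = 1_n, gamma'(0) = V

# Central finite difference of L_p(gamma(t)) = p @ gamma(t) at t = 0.
pushforward = (p @ gamma(eps) - p @ gamma(-eps)) / (2 * eps)

assert np.allclose(pushforward, p @ V, atol=1e-6)
print("L_{p*} V agrees with the matrix product p V")
```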
It is natural to ask for the expression of $[X_V, X_W]$, given that $V := V^{ij}\frac{\partial}{\partial x^{ij}}|_{1_n}$ and $W := W^{ij}\frac{\partial}{\partial x^{ij}}|_{1_n}$, so that
$$X_V|_p = p^{ik} V^{kj} \frac{\partial}{\partial x^{ij}}, \qquad X_W|_p = p^{ik} W^{kj} \frac{\partial}{\partial x^{ij}}. \qquad (10.4.3)$$
10 We know that $i_* L_{a*} = (i \circ L_a)_*$ and $L_{i(a)*} i_* = (L_{i(a)} \circ i)_*$; however $i \circ L_a = L_{i(a)} \circ i$, since $i$ is a group homomorphism.
To discover the expression of $[X_V, X_W]$, once again we apply $[X_V, X_W] := X_V X_W - X_W X_V$ to the coordinate function $\phi^{rq}$, using (10.4.3) and the result of (10.4.2). We have
$$X_W|_p \phi^{rq} = p^{ik} W^{kj} \frac{\partial}{\partial x^{ij}}(\phi^{rq}) = p^{ik} W^{kj} \delta^r_i \delta^q_j = p^{rk} W^{kq}, \qquad (10.4.4)$$
and analogously
$$X_V|_p \phi^{rq} = p^{ik} V^{kj} \frac{\partial}{\partial x^{ij}}(\phi^{rq}) = p^{ik} V^{kj} \delta^r_i \delta^q_j = p^{rk} V^{kq}. \qquad (10.4.5)$$
Now
$$X_V X_W \phi^{rq} = X_V|_p (x^{rk} W^{kq}) = p^{jl} V^{lm} \frac{\partial}{\partial x^{jm}}(x^{rk} W^{kq}) = p^{jl} V^{lm} \delta^r_j \delta^k_m W^{kq} = p^{rl} V^{lm} W^{mq}, \qquad (10.4.6)$$
which in matrix form gives $P\tilde{V}\tilde{W}$. Analogously,
$$X_W X_V \phi^{rq} = X_W|_p (x^{rk} V^{kq}) = p^{jl} W^{lm} \frac{\partial}{\partial x^{jm}}(x^{rk} V^{kq}) = p^{jl} W^{lm} \delta^r_j \delta^k_m V^{kq} = p^{rl} W^{lm} V^{mq}, \qquad (10.4.7)$$
which, in matrix form, is $P\tilde{W}\tilde{V}$. We obtain the result that $[X_V, X_W]$ at the point $P \in Gl(n, \mathbb{R})$ can be represented by $P\tilde{V}\tilde{W} - P\tilde{W}\tilde{V} = P[\tilde{V}, \tilde{W}]$, i.e., by the commutator in the sense of matrices.
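This matrix-commutator result can be verified numerically. A minimal sketch (the sample matrices and the finite-difference step are arbitrary choices of ours): with the entries of $p$ as coordinates, $X_V$ is the field $p \mapsto p\tilde{V}$, and the coordinate formula for the bracket of two vector fields reads $[X, Y](p) = DY(p)[X(p)] - DX(p)[Y(p)]$; the computation above predicts this equals $p[\tilde{V}, \tilde{W}]$.

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
V, W = rng.standard_normal((n, n)), rng.standard_normal((n, n))
p = np.eye(n) + 0.2 * rng.standard_normal((n, n))   # a sample point near 1_n

X = lambda q: q @ V                                  # X_V in matrix form
Y = lambda q: q @ W                                  # X_W in matrix form

def ddir(F, q, H, eps=1e-6):
    """Finite-difference directional derivative of the field F at q along H."""
    return (F(q + eps * H) - F(q - eps * H)) / (2 * eps)

# Coordinate formula for the Lie bracket of vector fields, evaluated at p.
bracket = ddir(Y, p, X(p)) - ddir(X, p, Y(p))

# The prediction: [X_V, X_W]|_p = p @ [V, W] (matrix commutator).
assert np.allclose(bracket, p @ (V @ W - W @ V), atol=1e-6)
print("[X_V, X_W] at p equals p [V, W]")
```

For these linear fields the central difference is exact up to rounding, since $q \mapsto q\tilde{V}$ has constant derivative.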
11 Exercises
Exercise 10.1 Find the dimensions of O(n), SO(n), and O(p, q).
12 Lie groups Part II
13 Fiber bundles; Part I
14 Fiber bundles; Part II
References
[1] M. Nakahara, Geometry, Topology and Physics (Graduate Student Series in Physics), 2nd Edition, Taylor and Francis Group, New York, 2003.
[2] W. M. Schmidt, Diophantine Approximation, Lecture Notes in Mathematics 785, Springer, 1980.
[3] M. Spivak, A Comprehensive Introduction to Differential Geometry, Vol. 1, 3rd Edition, Publish or Perish, 1999.