
Part IA — Vectors and Matrices

Definitions

Based on lectures by N. Peake


Notes taken by Dexter Chua

Michaelmas 2014

These notes are not endorsed by the lecturers, and I have modified them (often
significantly) after lectures. They are nowhere near accurate representations of what
was actually lectured, and in particular, all errors are almost surely mine.

Complex numbers
Review of complex numbers, including complex conjugate, inverse, modulus, argument
and Argand diagram. Informal treatment of complex logarithm, n-th roots and complex
powers. de Moivre’s theorem. [2]

Vectors
Review of elementary algebra of vectors in R³, including scalar product. Brief discussion
of vectors in Rⁿ and Cⁿ; scalar product and the Cauchy-Schwarz inequality. Concepts
of linear span, linear independence, subspaces, basis and dimension.
Suffix notation: including summation convention, δij and εijk. Vector product and
triple product: definition and geometrical interpretation. Solution of linear vector
equations. Applications of vectors to geometry, including equations of lines, planes and
spheres. [5]

Matrices
Elementary algebra of 3 × 3 matrices, including determinants. Extension to n × n
complex matrices. Trace, determinant, non-singular matrices and inverses. Matrices as
linear transformations; examples of geometrical actions including rotations, reflections,
dilations, shears; kernel and image. [4]
Simultaneous linear equations: matrix formulation; existence and uniqueness of solu-
tions, geometric interpretation; Gaussian elimination. [3]
Symmetric, anti-symmetric, orthogonal, hermitian and unitary matrices. Decomposition
of a general matrix into isotropic, symmetric trace-free and antisymmetric parts. [1]

Eigenvalues and Eigenvectors


Eigenvalues and eigenvectors; geometric significance. [2]
Proof that eigenvalues of hermitian matrix are real, and that distinct eigenvalues give
an orthogonal basis of eigenvectors. The effect of a general change of basis (similarity
transformations). Diagonalization of general matrices: sufficient conditions; examples
of matrices that cannot be diagonalized. Canonical forms for 2 × 2 matrices. [5]
Discussion of quadratic forms, including change of basis. Classification of conics,
cartesian and polar forms. [1]
Rotation matrices and Lorentz transformations as transformation groups. [1]


Contents

0 Introduction

1 Complex numbers
  1.1 Basic properties
  1.2 Complex exponential function
  1.3 Roots of unity
  1.4 Complex logarithm and power
  1.5 De Moivre's theorem
  1.6 Lines and circles in C

2 Vectors
  2.1 Definition and basic properties
  2.2 Scalar product
    2.2.1 Geometric picture (R² and R³ only)
    2.2.2 General algebraic definition
  2.3 Cauchy-Schwarz inequality
  2.4 Vector product
  2.5 Scalar triple product
  2.6 Spanning sets and bases
    2.6.1 2D space
    2.6.2 3D space
    2.6.3 Rⁿ space
    2.6.4 Cⁿ space
  2.7 Vector subspaces
  2.8 Suffix notation
  2.9 Geometry
    2.9.1 Lines
    2.9.2 Plane
  2.10 Vector equations

3 Linear maps
  3.1 Examples
    3.1.1 Rotation in R³
    3.1.2 Reflection in R³
  3.2 Linear Maps
  3.3 Rank and nullity
  3.4 Matrices
    3.4.1 Examples
    3.4.2 Matrix Algebra
    3.4.3 Decomposition of an n × n matrix
    3.4.4 Matrix inverse
  3.5 Determinants
    3.5.1 Permutations
    3.5.2 Properties of determinants
    3.5.3 Minors and Cofactors

4 Matrices and linear equations
  4.1 Simple example, 2 × 2
  4.2 Inverse of an n × n matrix
  4.3 Homogeneous and inhomogeneous equations
    4.3.1 Gaussian elimination
  4.4 Matrix rank
  4.5 Homogeneous problem Ax = 0
    4.5.1 Geometrical interpretation
    4.5.2 Linear mapping view of Ax = 0
  4.6 General solution of Ax = d

5 Eigenvalues and eigenvectors
  5.1 Preliminaries and definitions
  5.2 Linearly independent eigenvectors
  5.3 Transformation matrices
    5.3.1 Transformation law for vectors
    5.3.2 Transformation law for matrix
  5.4 Similar matrices
  5.5 Diagonalizable matrices
  5.6 Canonical (Jordan normal) form
  5.7 Cayley-Hamilton Theorem
  5.8 Eigenvalues and eigenvectors of a Hermitian matrix
    5.8.1 Eigenvalues and eigenvectors
    5.8.2 Gram-Schmidt orthogonalization (non-examinable)
    5.8.3 Unitary transformation
    5.8.4 Diagonalization of n × n Hermitian matrices
    5.8.5 Normal matrices

6 Quadratic forms and conics
  6.1 Quadrics and conics
    6.1.1 Quadrics
    6.1.2 Conic sections (n = 2)
  6.2 Focus-directrix property

7 Transformation groups
  7.1 Groups of orthogonal matrices
  7.2 Length preserving matrices
  7.3 Lorentz transformations


0 Introduction


1 Complex numbers
1.1 Basic properties
Definition (Complex number). A complex number is a number z ∈ C of the
form z = a + ib with a, b ∈ R, where i² = −1. We write a = Re(z) and b = Im(z).
Definition (Complex conjugate). The complex conjugate of z = a + ib is a − ib.
It is written as z̄ or z*.
Definition (Argand diagram). An Argand diagram is a diagram in which a
complex number z = x + iy is represented by the vector p = (x, y)ᵀ. Addition of
complex numbers corresponds to vector addition, and z̄ is the reflection of z in
the x-axis.

(Diagram: z1, z2 and z1 + z2 in the Argand plane, with z̄2 the reflection of z2 in the real axis.)

Definition (Modulus and argument of complex number). The modulus of
z = x + iy is r = |z| = √(x² + y²). The argument is θ = arg z = tan⁻¹(y/x). The
modulus is the length of the vector in the Argand diagram, and the argument is
the angle between z and the real axis. We have

z = r(cos θ + i sin θ).

Clearly the pair (r, θ) uniquely describes a complex number z, but each complex
number z ∈ C can be described by many different θ, since sin(2π + θ) = sin θ
and cos(2π + θ) = cos θ. Often we take the principal value θ ∈ (−π, π].
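
For example, z = 1 + i has modulus √2 and principal argument π/4, so
z = √2 (cos(π/4) + i sin(π/4)).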

1.2 Complex exponential function


Definition (Exponential function). The exponential function is defined as

exp(z) = e^z = 1 + z + z²/2! + z³/3! + · · · = Σ_{n=0}^∞ zⁿ/n!.
Definition (Sine and cosine functions). Define, for all z ∈ C,

sin z = Σ_{n=0}^∞ (−1)ⁿ z^{2n+1}/(2n + 1)! = z − z³/3! + z⁵/5! − · · ·

cos z = Σ_{n=0}^∞ (−1)ⁿ z^{2n}/(2n)! = 1 − z²/2! + z⁴/4! − · · ·
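
Setting z = iθ and splitting the exponential series into even and odd powers
recovers Euler's formula e^{iθ} = cos θ + i sin θ, which gives the polar form
z = re^{iθ} used below.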

1.3 Roots of unity


Definition (Roots of unity). The nth roots of unity are the roots of the equation
zⁿ = 1 for n ∈ N. Since this is a polynomial of degree n, there are n roots of
unity. In fact, the nth roots of unity are exp(2πik/n) for k = 0, 1, 2, . . . , n − 1.
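
For instance, the cube roots of unity are exp(2πik/3) for k = 0, 1, 2, namely 1,
(−1 + i√3)/2 and (−1 − i√3)/2, which form an equilateral triangle on the unit
circle of the Argand diagram.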



1.4 Complex logarithm and power


Definition (Complex logarithm). The complex logarithm w = log z is a solution
of e^w = z. Writing z = re^{iθ}, we have log z = log(re^{iθ}) = log r + iθ.
This can be multi-valued for different values of θ and, as above, we select
the θ that satisfies −π < θ ≤ π.
Definition (Complex power). The complex power z^α for z, α ∈ C is defined as
z^α = e^{α log z}. This, again, can be multi-valued, as z^α = e^{α log |z|} e^{iαθ} e^{2inπα} (there
are finitely many values if α ∈ Q, infinitely many otherwise). Nevertheless, we
make z^α single-valued by insisting −π < θ ≤ π.
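
For example, taking the principal value of the logarithm, i^i = e^{i log i} =
e^{i(iπ/2)} = e^{−π/2}, which is real.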

1.5 De Moivre’s theorem


1.6 Lines and circles in C


2 Vectors
2.1 Definition and basic properties
Definition (Vector space). A vector space over R or C is a collection of vectors v ∈ V,
together with two operations: addition of two vectors, and multiplication of a
vector by a scalar (i.e. a number from R or C, respectively).
Vector addition has to satisfy the following axioms:
(i) a + b = b + a (commutativity)
(ii) (a + b) + c = a + (b + c) (associativity)
(iii) There is a vector 0 such that a + 0 = a. (identity)
(iv) For all vectors a, there is a vector (−a) such that a + (−a) = 0 (inverse)
Scalar multiplication has to satisfy the following axioms:
(i) λ(a + b) = λa + λb.
(ii) (λ + µ)a = λa + µa.
(iii) λ(µa) = (λµ)a.
(iv) 1a = a.
Definition (Unit vector). A unit vector is a vector with length 1. We write a
unit vector as v̂.

2.2 Scalar product


2.2.1 Geometric picture (R² and R³ only)
Definition (Scalar/dot product). a · b = |a||b| cos θ, where θ is the angle
between a and b. It satisfies the following properties:
(i) a · b = b · a
(ii) a · a = |a|² ≥ 0
(iii) a · a = 0 iff a = 0
(iv) If a · b = 0 and a, b ≠ 0, then a and b are perpendicular.

2.2.2 General algebraic definition


Definition (Inner/scalar product). In a real vector space V, an inner product
or scalar product is a map V × V → R that satisfies the following axioms. It is
written as x · y or ⟨x | y⟩.
(i) x · y = y · x (symmetry)
(ii) x · (λy + µz) = λ x · y + µ x · z (linearity in 2nd argument)
(iii) x · x ≥ 0 with equality iff x = 0 (positive definite)
Definition. The norm of a vector, written as |a| or ‖a‖, is defined as

|a| = √(a · a).


2.3 Cauchy-Schwarz inequality


2.4 Vector product
Definition (Vector/cross product). Consider a, b ∈ R³. Define the vector
product

a × b = |a||b| sin θ n̂,

where n̂ is a unit vector perpendicular to both a and b. Since there are two
(opposite) unit vectors that are perpendicular to both of them, we pick n̂ to be
the one that is perpendicular to a, b in a right-handed sense.

(Diagram: a, b and a × b forming a right-handed set.)
The vector product satisfies the following properties:

(i) a × b = −b × a.
(ii) a × a = 0.
(iii) a × b = 0 ⇒ a = λb for some λ ∈ R (or b = 0).
(iv) a × (λb) = λ(a × b).

(v) a × (b + c) = a × b + a × c.
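
For example, for the standard (right-handed) basis of R³, e1 × e2 = e3,
e2 × e3 = e1 and e3 × e1 = e2; geometrically, |a × b| is the area of the
parallelogram with sides a and b.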

2.5 Scalar triple product


Definition (Scalar triple product). The scalar triple product is defined as

[a, b, c] = a · (b × c).
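
For the standard basis, [e1, e2, e3] = e1 · (e2 × e3) = e1 · e1 = 1; more generally,
|[a, b, c]| is the volume of the parallelepiped with sides a, b, c.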

2.6 Spanning sets and bases


2.6.1 2D space
Definition (Spanning set). A set of vectors {a, b} spans R² if for all vectors
r ∈ R², there exist some λ, µ ∈ R such that r = λa + µb.
Definition (Linearly independent vectors in R²). Two vectors a and b are
linearly independent if for α, β ∈ R, αa + βb = 0 iff α = β = 0. In R², a and b
are linearly independent if a × b ≠ 0.

Definition (Basis of R²). A set of vectors is a basis of R² if it spans R² and
is linearly independent.


2.6.2 3D space
2.6.3 Rⁿ space
Definition (Linearly independent vectors). A set of vectors {v1, v2, . . . , vm}
is linearly independent if

Σ_{i=1}^m λi vi = 0 ⇒ λi = 0 for all i.

Definition (Spanning set). A set of vectors {u1, u2, . . . , um} ⊆ Rⁿ is a
spanning set of Rⁿ if

(∀x ∈ Rⁿ)(∃λi) Σ_{i=1}^m λi ui = x.

Definition (Basis vectors). A basis of Rⁿ is a linearly independent spanning
set. The standard basis of Rⁿ is e1 = (1, 0, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . ,
en = (0, 0, 0, . . . , 1).
Definition (Orthonormal basis). A basis {ei} is orthonormal if ei · ej = 0
whenever i ≠ j, and ei · ei = 1 for all i.
Using the Kronecker delta symbol, which we will define later, we can write
this condition as ei · ej = δij.
Definition (Dimension of vector space). The dimension of a vector space is the
number of vectors in its basis. (Exercise: show that this is well-defined)
Definition (Scalar product). The scalar product of x, y ∈ Rⁿ is defined as

x · y = Σ_{i=1}^n xi yi.

2.6.4 Cⁿ space
Definition (Cⁿ). Cⁿ = {(z1, z2, · · · , zn) : zi ∈ C}. It has the same standard
basis as Rⁿ, but the scalar product is defined differently: for u, v ∈ Cⁿ,
u · v = Σ_{i=1}^n ui* vi. The scalar product has the following properties:
(i) u · v = (v · u)*
(ii) u · (λv + µw) = λ(u · v) + µ(u · w)
(iii) u · u ≥ 0 and u · u = 0 iff u = 0

2.7 Vector subspaces


Definition (Vector subspace). A vector subspace of a vector space V is a subset
of V that is also a vector space under the same operations. Both V and {0} are
subspaces of V . All others are proper subspaces.
A useful criterion is that a subset U ⊆ V is a subspace iff
(i) x, y ∈ U ⇒ (x + y) ∈ U .
(ii) x ∈ U ⇒ λx ∈ U for all scalars λ.
(iii) 0 ∈ U .
This can be more concisely written as “U is non-empty and for all x, y ∈ U ,
(λx + µy) ∈ U ”.
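
For example, {(x, y, 0) : x, y ∈ R} is a proper subspace of R³, while the unit
circle in R² is not a subspace: it fails all three criteria (it is not closed under
addition or scalar multiplication, and does not contain 0).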


2.8 Suffix notation


Notation (Einstein's summation convention). Consider a sum x · y = Σ xi yi.
The summation convention says that we can drop the Σ symbol and simply
write x · y = xi yi. If suffixes are repeated once, summation is understood.
Note that i is a dummy suffix and it doesn't matter what it's called, i.e.
xi yi = xj yj = xk yk etc.
The rules of this convention are:

(i) Suffix appears once in a term: free suffix


(ii) Suffix appears twice in a term: dummy suffix and is summed over
(iii) Suffix appears three times or more: WRONG!
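
For example, ai = λbi + µci has one free suffix i (it stands for one equation for
each value of i), while |a|² = ai ai contains i as a dummy suffix, summed over.
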
Definition (Kronecker delta).

δij = 1 if i = j,  and δij = 0 if i ≠ j.

We have

( δ11 δ12 δ13 )   ( 1 0 0 )
( δ21 δ22 δ23 ) = ( 0 1 0 ) = I.
( δ31 δ32 δ33 )   ( 0 0 1 )

So the Kronecker delta represents an identity matrix.
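
A useful consequence is the sifting property δij aj = ai: in the implicit sum
over j, only the j = i term survives.
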
Definition (Alternating symbol εijk ). Consider rearrangements of 1, 2, 3. We
can divide them into even and odd permutations. Even permutations include
(1, 2, 3), (2, 3, 1) and (3, 1, 2). These are permutations obtained by performing
two (or no) swaps of the elements of (1, 2, 3). (Alternatively, it is any “rotation”
of (1, 2, 3))
The odd permutations are (2, 1, 3), (1, 3, 2) and (3, 2, 1). They are the
permutations obtained by one swap only.
Define

εijk = +1 if (i, j, k) is an even permutation of (1, 2, 3),
εijk = −1 if (i, j, k) is an odd permutation,
εijk = 0 otherwise (i.e. repeated suffixes).

εijk has 3 free suffixes.
We have ε123 = ε231 = ε312 = +1 and ε213 = ε132 = ε321 = −1, while
ε112 = ε111 = · · · = 0.
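
In this notation the components of the vector product are (a × b)i = εijk aj bk;
for instance, (a × b)1 = a2 b3 − a3 b2.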

2.9 Geometry
2.9.1 Lines
2.9.2 Plane

2.10 Vector equations


3 Linear maps
3.1 Examples
3.1.1 Rotation in R³
3.1.2 Reflection in R³

3.2 Linear Maps


Definition (Domain, codomain and image of map). Consider sets A and B
and a mapping T : A → B such that each x ∈ A is mapped to a unique
x′ = T(x) ∈ B. A is the domain of T and B is the codomain of T. Typically,
we have T : Rⁿ → Rᵐ or T : Cⁿ → Cᵐ.
Definition (Linear map). Let V, W be real (or complex) vector spaces, and
T : V → W . Then T is a linear map if
(i) T (a + b) = T (a) + T (b) for all a, b ∈ V .
(ii) T (λa) = λT (a) for all λ ∈ R (or C).
Equivalently, we have T (λa + µb) = λT (a) + µT (b).
Definition (Image and kernel of map). The image of a map f : U → V is the
subset {f(u) : u ∈ U} of V. The kernel is the subset {u ∈ U : f(u) = 0} of U.

3.3 Rank and nullity


Definition (Rank of linear map). The rank of a linear map f : U → V , denoted
by r(f ), is the dimension of the image of f .
Definition (Nullity of linear map). The nullity of f , denoted n(f ) is the
dimension of the kernel of f .
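
For a linear map f : U → V between finite-dimensional spaces, these are related
by the rank-nullity theorem: r(f) + n(f) = dim U.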

3.4 Matrices
3.4.1 Examples
3.4.2 Matrix Algebra
Definition (Addition of matrices). Consider two linear maps α, β : Rⁿ → Rᵐ.
The sum of α and β is defined by

(α + β)(x) = α(x) + β(x)

In terms of the matrix, we have

(A + B)ij xj = Aij xj + Bij xj ,

or

(A + B)ij = Aij + Bij .

Definition (Scalar multiplication of matrices). Define (λα)x = λ[α(x)]. So


(λA)ij = λAij .


Definition (Matrix multiplication). Consider maps α : Rˡ → Rⁿ and β :
Rⁿ → Rᵐ. The composition is βα : Rˡ → Rᵐ. Take x ∈ Rˡ ↦ x′′ ∈ Rᵐ.
Then x′′ = (BA)x = Bx′, where x′ = Ax. Using suffix notation, we have
x′′i = (Bx′)i = Bik x′k = Bik Akj xj. But x′′i = (BA)ij xj. So

(BA)ij = Bik Akj.

Generally, an m × n matrix multiplied by an n × ℓ matrix gives an m × ℓ matrix.

(BA)ij is given by the ith row of B dotted with the jth column of A.
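
For example, A = ( 1 1 ; 0 1 ) and B = ( 1 0 ; 1 1 ) give BA = ( 1 1 ; 1 2 )
but AB = ( 2 1 ; 1 1 ), so matrix multiplication is not commutative.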

Definition (Transpose of matrix). If A is an m × n matrix, the transpose Aᵀ
is an n × m matrix defined by (Aᵀ)ij = Aji.
Definition (Hermitian conjugate). Define A† = (Aᵀ)*. As for the transpose,
(AB)† = B†A†.

Definition (Symmetric matrix). A matrix is symmetric if Aᵀ = A.

Definition (Hermitian matrix). A matrix is Hermitian if A† = A. (The diagonal
entries of a Hermitian matrix must be real.)
Definition (Anti/skew symmetric matrix). A matrix is anti-symmetric or skew
symmetric if Aᵀ = −A. The diagonal entries are all zero.

Definition (Skew-Hermitian matrix). A matrix is skew-Hermitian if A† = −A.
The diagonal entries are pure imaginary.
Definition (Trace of matrix). The trace of an n × n matrix A is the sum of its
diagonal entries: tr(A) = Aii.
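
For example, tr(I) = n for the n × n identity matrix, and tr(AB) = Aik Bki =
Bki Aik = tr(BA) whenever both products are defined.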

Definition (Identity matrix). The identity matrix I is defined by Iij = δij.

3.4.3 Decomposition of an n × n matrix


3.4.4 Matrix inverse
Definition (Inverse of matrix). Consider an m × n matrix A and n × m matrices
B and C. If BA = I, then we say B is the left inverse of A. If AC = I, then
we say C is the right inverse of A. If A is square (n × n), then B = B(AC) =
(BA)C = C, i.e. the left and right inverses coincide. Both are denoted by A−1 ,
the inverse of A. Therefore we have

AA−1 = A−1 A = I.
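
For example, a 2 × 2 matrix A = ( a b ; c d ) with ad − bc ≠ 0 has inverse
A⁻¹ = 1/(ad − bc) ( d −b ; −c a ), as can be checked by direct multiplication.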

Definition (Invertible matrix). If A has an inverse, then A is invertible.


Definition (Orthogonal and unitary matrices). A real n × n matrix A is orthogonal
if AᵀA = AAᵀ = I, i.e. Aᵀ = A⁻¹. A complex n × n matrix U is unitary if
U†U = UU† = I, i.e. U† = U⁻¹.
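
For example, the rotation matrix ( cos θ −sin θ ; sin θ cos θ ) is orthogonal,
since cos²θ + sin²θ = 1.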

3.5 Determinants
3.5.1 Permutations
Definition (Permutation). A permutation of a set S is a bijection ε : S → S.


Notation. Consider the set Sn of all permutations of 1, 2, 3, · · · , n. Sn contains
n! elements. Consider ρ ∈ Sn with i ↦ ρ(i). We write

ρ = ( 1    2    · · ·  n    )
    ( ρ(1) ρ(2) · · ·  ρ(n) ).
Definition (Fixed point). A fixed point of ρ is a k such that ρ(k) = k. E.g. in

( 1 2 3 4 )
( 4 1 3 2 ),

3 is the fixed point. By convention, we can omit the fixed point and write the
permutation as

( 1 2 4 )
( 4 1 2 ).
Definition (Disjoint permutation). Two permutations are disjoint if the numbers
moved by one are fixed by the other, and vice versa. E.g.

( 1 2 4 5 6 )   ( 2 6 ) ( 1 4 5 )
( 5 6 1 4 2 ) = ( 6 2 ) ( 5 1 4 ),

and the two cycles on the right hand side are disjoint.
Disjoint permutations commute, but in general non-disjoint permutations do
not.
 
Definition (Transposition and k-cycle).

( 2 6 )
( 6 2 )

is a 2-cycle or a transposition, and we can simply write (2 6).

( 1 4 5 )
( 5 1 4 )

is a 3-cycle, and we can simply write (1 5 4). (1 is mapped to 5; 5 is mapped
to 4; 4 is mapped to 1.)
Definition (Sign of permutation). The sign of a permutation, ε(ρ), is (−1)ʳ,
where r is the number of 2-cycles when ρ is written as a product of 2-cycles. If
ε(ρ) = +1, it is an even permutation. Otherwise, it is an odd permutation. Note
that ε(ρσ) = ε(ρ)ε(σ) and ε(ρ⁻¹) = ε(ρ).
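
For example, (1 5 4) = (1 4)(1 5), so a 3-cycle is even; in general a k-cycle can
be written as k − 1 transpositions and so has sign (−1)^{k−1}.
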
Definition (Levi-Civita symbol). The Levi-Civita symbol is defined by

εj1 j2 ···jn = +1 if j1 j2 · · · jn is an even permutation of 1, 2, · · · n,
            = −1 if it is an odd permutation,
            = 0 if any 2 of them are equal.

Clearly, ερ(1)ρ(2)···ρ(n) = ε(ρ).


Definition (Determinant). The determinant of an n × n matrix A is defined as

det(A) = Σ_{σ∈Sn} ε(σ) Aσ(1)1 Aσ(2)2 · · · Aσ(n)n,

or equivalently,

det(A) = εj1 j2 ···jn Aj1 1 Aj2 2 · · · Ajn n.
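
For example, for n = 2 we have S2 = {id, (1 2)}, so det(A) = A11 A22 − A21 A12,
the familiar 2 × 2 determinant.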

3.5.2 Properties of determinants


3.5.3 Minors and Cofactors
Definition (Minor and cofactor). For an n × n matrix A, define Aij to be the
(n − 1) × (n − 1) matrix in which row i and column j of A have been removed.
The minor of the ijth element of A is Mij = det Aij.
The cofactor of the ijth element of A is ∆ij = (−1)^{i+j} Mij.


Notation. We use ¯ to denote a symbol which has been missed out of a natural
sequence.


4 Matrices and linear equations


4.1 Simple example, 2 × 2
4.2 Inverse of an n × n matrix
4.3 Homogeneous and inhomogeneous equations
Definition (Homogeneous equation). In a system of linear equations Ax = b,
if b = 0, then the system is homogeneous. Otherwise, it is inhomogeneous.

4.3.1 Gaussian elimination

4.4 Matrix rank


Definition (Column and row rank of linear map). The column rank of a matrix
is the maximum number of linearly independent columns.
The row rank of a matrix is the maximum number of linearly independent
rows.
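
In fact the column rank and row rank of any matrix are equal, so we may speak
simply of the rank of a matrix.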

4.5 Homogeneous problem Ax = 0


4.5.1 Geometrical interpretation
4.5.2 Linear mapping view of Ax = 0

4.6 General solution of Ax = d


5 Eigenvalues and eigenvectors


5.1 Preliminaries and definitions
Definition (Multiplicity of root). The root z = ω has multiplicity k if (z − ω)ᵏ
is a factor of p(z) but (z − ω)^{k+1} is not.
Definition (Eigenvector and eigenvalue). Let α : Cⁿ → Cⁿ be a linear map
with associated matrix A. Then x ≠ 0 is an eigenvector of A if

Ax = λx

for some λ; λ is the associated eigenvalue. This means that the direction of the
eigenvector is preserved by the mapping, while its length is scaled by λ.
Definition (Characteristic equation of matrix). The characteristic equation of
A is
det(A − λI) = 0.
Definition (Characteristic polynomial of matrix). The characteristic polynomial
of A is
pA (λ) = det(A − λI).
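
For example, A = ( 2 1 ; 1 2 ) has pA(λ) = (2 − λ)² − 1 = (λ − 1)(λ − 3), so the
eigenvalues are 1 and 3, with eigenvectors (1, −1)ᵀ and (1, 1)ᵀ respectively.
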
Definition (Eigenspace). The eigenspace denoted by Eλ is the kernel of the
matrix A − λI, i.e. the set of eigenvectors with eigenvalue λ.
Definition (Algebraic multiplicity of eigenvalue). The algebraic multiplicity
M(λ) or Mλ of an eigenvalue λ is the multiplicity of λ as a root of pA(λ) = 0.
By the fundamental theorem of algebra,

Σ_λ M(λ) = n.

If M(λ) > 1, then the eigenvalue is degenerate.


Definition (Geometric multiplicity of eigenvalue). The geometric multiplicity
m(λ) or mλ of an eigenvalue λ is the dimension of the eigenspace, i.e. the
maximum number of linearly independent eigenvectors with eigenvalue λ.
Definition (Defect of eigenvalue). The defect ∆λ of eigenvalue λ is
∆λ = M (λ) − m(λ).
It can be proven that ∆λ ≥ 0, i.e. the geometric multiplicity is never greater
than the algebraic multiplicity.
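
For example, A = ( 1 1 ; 0 1 ) has pA(λ) = (1 − λ)², so M(1) = 2, but E1 is
spanned by (1, 0)ᵀ alone, so m(1) = 1 and ∆1 = 1. This is the standard example
of a matrix that cannot be diagonalized.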

5.2 Linearly independent eigenvectors


5.3 Transformation matrices
5.3.1 Transformation law for vectors
5.3.2 Transformation law for matrix

5.4 Similar matrices


Definition (Similar matrices). Two n × n matrices A and B are similar if there
exists an invertible matrix P such that

B = P⁻¹AP,


i.e. they represent the same map under different bases. Alternatively, using the
language from IA Groups, we say that they are in the same conjugacy class.

5.5 Diagonalizable matrices


Definition (Diagonalizable matrices). An n × n matrix A is diagonalizable if
it is similar to a diagonal matrix. We showed above that this is equivalent to
saying that the eigenvectors form a basis of Cⁿ.

5.6 Canonical (Jordan normal) form


5.7 Cayley-Hamilton Theorem
5.8 Eigenvalues and eigenvectors of a Hermitian matrix
5.8.1 Eigenvalues and eigenvectors
5.8.2 Gram-Schmidt orthogonalization (non-examinable)
5.8.3 Unitary transformation
5.8.4 Diagonalization of n × n Hermitian matrices
5.8.5 Normal matrices
Definition (Normal matrix). A normal matrix is a matrix that commutes with
its own Hermitian conjugate, i.e.

NN† = N†N.
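
For example, Hermitian matrices (N† = N), skew-Hermitian matrices (N† = −N)
and unitary matrices (N† = N⁻¹) are all normal.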


6 Quadratic forms and conics


Definition (Sesquilinear, Hermitian and quadratic forms). A sesquilinear form
is a quantity F = x†Ax = xi* Aij xj. If A is Hermitian, then F is a Hermitian
form. If A is real symmetric, then F is a quadratic form.
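
For example, the quadratic form F = x1² + 4 x1 x2 + x2² corresponds to the real
symmetric matrix A = ( 1 2 ; 2 1 ), since xᵀAx = x1² + 4 x1 x2 + x2².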

6.1 Quadrics and conics


6.1.1 Quadrics
Definition (Quadric). A quadric is an n-dimensional surface defined by the
zero set of a real quadratic polynomial, i.e.

xᵀAx + bᵀx + c = 0,

where A is a real n × n matrix, x, b are n-dimensional column vectors and c is a
constant scalar.

6.1.2 Conic sections (n = 2)

6.2 Focus-directrix property


Definition (Conic sections). The eccentricity e and scale a are properties of a
conic section that satisfy the following:
let the foci of the conic section be (±ae, 0) and the directrices be x = ±a/e.
A conic section is the set of points whose distance from a focus is e times the
distance from the directrix closer to that focus (unless e = 1, in which case we
take the distance to the other directrix).


7 Transformation groups
7.1 Groups of orthogonal matrices
Definition (Orthogonal group). The orthogonal group O(n) is the group of
orthogonal matrices.
Definition (Special orthogonal group). The special orthogonal group is the
subgroup of O(n) that consists of all orthogonal matrices with determinant 1.
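
For example, every element of O(2) is either a rotation ( cos θ −sin θ ; sin θ cos θ ),
of determinant 1 (these form SO(2)), or a reflection ( cos θ sin θ ; sin θ −cos θ ),
of determinant −1.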

7.2 Length preserving matrices


7.3 Lorentz transformations
Definition (Minkowski inner product). The Minkowski inner product of two
vectors x and y is

⟨x | y⟩ = xᵀ J y,

where

J = ( 1  0 )
    ( 0 −1 ).

Then ⟨x | y⟩ = x1 y1 − x2 y2.
Definition (Preservation of inner product). A transformation matrix M pre-
serves the Minkowski inner product if

⟨x | y⟩ = ⟨Mx | My⟩

for all x, y.
Definition (Lorentz matrix). A Lorentz matrix or a Lorentz boost is a matrix
of the form

Bv = 1/√(1 − v²) ( 1 v )
                 ( v 1 ).

Here |v| < 1, where we have chosen units in which the speed of light is equal to
1. We have Bv = H_{tanh⁻¹ v}, where Hφ denotes the hyperbolic rotation
( cosh φ sinh φ ; sinh φ cosh φ ).
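
Since cosh²φ − sinh²φ = 1, a direct check gives Bvᵀ J Bv = J, so Lorentz boosts
preserve the Minkowski inner product.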

Definition (Lorentz group). The Lorentz group is the group of all Lorentz
matrices under matrix multiplication.
