Lecture 4 LinAlg


Lecture 4

Linear maps

Linear maps Lecture 4 1 / 40


Matrix multiplication... Again

Definition
Let k, m, n ∈ N, let A be an m × n matrix and let B be an n × k matrix with column vectors b1, ..., bk. We then define the matrix product AB as the matrix with Abj as its j-th column vector, where 1 ≤ j ≤ k.



Matrix multiplication... Again

Let k, m, n ∈ N, let A be an m × n matrix, let B be an n × k matrix with column vectors b1, ..., bk, and consider the systems Ax = b and By = x, where x ∈ Rn, y ∈ Rk and b ∈ Rm.
Then

    b = Ax = A(By) = A(y1 b1 + ... + yk bk) = y1 Ab1 + ... + yk Abk
      = [Ab1 ... Abk] y = (AB)y.

So if we want to find y, rather than first calculating x and then calculating y, we can find y directly by solving (AB)y = b instead.

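The column-by-column construction of AB, and the shortcut for chained systems, can be checked numerically. A small NumPy sketch (the particular matrices A and B are my own examples, not from the lecture):

```python
import numpy as np

# Small example matrices (chosen for illustration).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])   # m x n with m = 2, n = 3
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 1.0]])        # n x k with k = 2

# Build AB column by column: the j-th column of AB is A @ b_j.
AB = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])
assert np.allclose(AB, A @ B)    # matches NumPy's built-in product

# Chained systems: if x = B y, then b = A x = (AB) y,
# so y can be found directly by solving (AB) y = b.
y = np.array([1.0, -1.0])
b = A @ (B @ y)
y_direct = np.linalg.solve(AB, b)  # AB happens to be square and invertible here
assert np.allclose(y_direct, y)
```

Solving (AB)y = b in one step avoids computing the intermediate vector x at all.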


Formal definition of a function
Let A and B be sets. A function (or map) from A to B is a rule that
associates with each element in A exactly one element of B. We denote
this by
f : A → B.
A function is often simply denoted by f , when it is clear from context
what A and B are. The set A is called the domain of f and the set B is
called the codomain of f . Given an element a ∈ A, its associated element
in B is denoted by f(a), and is called its image. We also define the range (or image) of f:
range(f) = {f(x) : x ∈ A} ⊆ B.
Note that range(f) ⊆ B, but B could be larger than range(f).



Formal definition of a function

[Figure: a function f from a set A (the domain) into a set B (the codomain); the range of f is the part of B actually hit by f]



Function / map / mapping / transformation / ...
In the context of linear algebra, we are interested in functions that
transform vectors to other vectors, particularly those that preserve addition
and scaling. Thus we will mostly be referring to functions as
transformations, more rarely as maps, and very rarely as functions (to
avoid confusion with concepts from calculus). A transformation will
typically be labelled by T or T : V → W , where V and W are sets of
vectors, e.g. Rn and Rm for certain m, n ∈ N.



A transformation of R2
Example
Let T : R2 → R2 be a transformation given by T(x, y) = (x + y^2, y).

[Figure: the unit square (left) and its image under T (right)]

T(0, 0) = (0, 0)        T(0, 1/3) = (1/9, 1/3)
T(1, 0) = (1, 0)        T(0, 2/3) = (4/9, 2/3)
T(0, 1) = (1, 1)        T(1, 1/3) = (1 + 1/9, 1/3)
T(1, 1) = (2, 1)        T(1, 2/3) = (1 + 4/9, 2/3)

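This T is in fact not linear, as the next slides will let us verify. A quick numerical check (my own sketch) shows that scaling is not preserved:

```python
import numpy as np

# The slide's transformation T(x, y) = (x + y^2, y).
def T(v):
    x, y = v
    return np.array([x + y**2, y])

u = np.array([0.0, 1.0])
# Homogeneity fails: T(2u) = (4, 2) but 2 T(u) = (2, 2).
assert not np.allclose(T(2 * u), 2 * T(u))
```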


Linear transformations
Definition
A linear transformation is a function T : Rn → Rm satisfying
T (u + v) = T (u) + T (v)
T (c u) = cT (u)
for any real number c and vectors u, v in Rn .



Two obvious examples
Example (the zero map)
The map T : Rn → Rm defined by T (u) = 0 is linear.

Reason:
T (u + v) = 0 = 0 + 0 = T (u) + T (v)
T (cu) = 0 = c 0 = cT (u)

Example (the identity map)


The map T : Rn → Rn defined by T (u) = u is linear.

Reason:
T (u + v) = u + v = T (u) + T (v)
T (cu) = c u = cT (u)
Zero goes to zero
Fact
If T : Rn → Rm is a linear transformation then T (0) = 0.

Indeed, T (0) = T (0 · 0) = 0 · T (0) = 0. Note that in the above equation,


the “0” that appears on the left-hand side is in Rn , and the one that
appears on the right-hand side is in Rm .
Example
The transformation T : R2 → R2 given by T (x1 , x2 ) = (x1 + 1, x2 − 3) is
not linear since
T (0, 0) = (1, −3) ̸= (0, 0).



Linear combinations are preserved
Fact
If T : Rn → Rm is a linear transformation then for all c1 , . . . , ck ∈ R and
u1 , . . . , uk ∈ Rn we have

T (c1 u1 + · · · + ck uk ) = c1 T (u1 ) + · · · + ck T (uk )

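As a quick numerical illustration of this fact (a sketch using a random matrix map of my own choosing; matrix maps are shown to be linear on the next slides):

```python
import numpy as np

# Any matrix map T(v) = A v preserves linear combinations.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))      # a linear T : R4 -> R3
T = lambda v: A @ v

u1 = rng.standard_normal(4)
u2 = rng.standard_normal(4)
c1, c2 = 2.0, -3.0
# T(c1 u1 + c2 u2) = c1 T(u1) + c2 T(u2)
assert np.allclose(T(c1 * u1 + c2 * u2), c1 * T(u1) + c2 * T(u2))
```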


Recall the map x 7→ Ax
If

    A = [ a11 a12 ... a1n ]
        [ a21 a22 ... a2n ]  = [a1 a2 ... an]    and    x = (x1, x2, ..., xn),
        [ ...              ]
        [ am1 am2 ... amn  ]

then Ax = x1 a1 + x2 a2 + ... + xn an.

Note: The domain of x ↦ Ax is Rn and the codomain is Rm. Points in the domain are of the form x = (x1, ..., xn) and Ax must be a vector in Rm. The range of x ↦ Ax is Span{a1, ..., an}.

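The identity Ax = x1 a1 + ... + xn an can be spelled out in NumPy (the example matrix is mine):

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])             # columns a1, a2 in R3
x = np.array([10.0, -1.0])

# Ax as a linear combination of the columns of A.
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.allclose(combo, A @ x)       # both equal (6, 15, 24)
```

In particular, A @ x always lies in Span{a1, a2}, the range of x ↦ Ax.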


It’s all matrices
Theorem
For every linear transformation T : Rn → Rm we can find a unique m × n matrix A so that T(x) = Ax for every x in Rn. We say that A is the matrix of the linear transformation T. In fact, A is the m × n matrix whose j-th column is the vector T(ej):

    A = [ T(e1) T(e2) ... T(en) ]

Proof. Let x = (x1, ..., xn) ∈ Rn. Then x = x1 e1 + x2 e2 + ... + xn en. By linearity of T we have

    T(x) = x1 T(e1) + x2 T(e2) + ... + xn T(en) = [ T(e1) T(e2) ... T(en) ] x.

The matrix is unique: if B is any matrix such that T(x) = Bx for all x, then the j-th column of B is Bej = T(ej), which is the j-th column of A, so B = A. q.e.d.
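The theorem gives a recipe: feed the standard basis vectors to T and stack the results as columns. A sketch (the particular linear map T is my own example, treated as a black box):

```python
import numpy as np

# A "black-box" linear map, chosen for illustration.
def T(v):
    x, y = v
    return np.array([2 * x - y, x + y])

# Recover its matrix: the j-th column of A is T(e_j).
n = 2
E = np.eye(n)
A = np.column_stack([T(E[:, j]) for j in range(n)])

v = np.array([3.0, -1.0])
assert np.allclose(A @ v, T(v))   # A reproduces T on any input
```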
Some observations
The matrix of a linear transformation tells us the image of the basis.
If we know the image of the basis, we can determine the image of any
vector x in Rn :
Simply write x as a linear combination of the basis and use the
linearity of T .
The matrix of the identity map id : Rn → Rn given by id(x) = x is the
identity matrix

    In = [ 1 0 ... 0 ]
         [ 0 1 ... 0 ]
         [ ...       ]
         [ 0 0 ... 1 ]



Rotation
Example
Consider the transformation T : R2 → R2 which rotates vectors
counter-clockwise by an angle θ.
[Figure: a vector u and its image T(u), rotated counter-clockwise by θ]

Is this a linear map?


Rotation
Example (continued)
Let us find the matrix of T. By definition (of sine and cosine), we have:

    T(1, 0) = (cos(θ), sin(θ))

[Figure: e1 rotated by θ; its image T(e1) has coordinates (cos(θ), sin(θ))]



Rotation
Example (continued)
   
    T(0, 1) = (-sin(θ), cos(θ))

[Figure: e2 rotated by θ; its image T(e2) has coordinates (-sin(θ), cos(θ))]


Rotation
Example (continued)
 
Thus the matrix of the rotation by θ is

    A = [ cos(θ)  -sin(θ) ]
        [ sin(θ)   cos(θ) ]

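A quick NumPy check of the rotation matrix (the particular angle is my choice):

```python
import numpy as np

theta = np.pi / 2                  # counter-clockwise rotation by 90 degrees
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# e1 rotates onto e2, and e2 onto -e1.
assert np.allclose(A @ np.array([1.0, 0.0]), [0.0, 1.0])
assert np.allclose(A @ np.array([0.0, 1.0]), [-1.0, 0.0])
# Composing two rotations adds the angles: A @ A is rotation by 180 degrees.
assert np.allclose(A @ A, [[-1.0, 0.0], [0.0, -1.0]])
```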


Horizontal expansion and contraction
Example
What is the matrix A of the linear transformation depicted?
[Figure: the unit square (left) and its image, stretched horizontally to twice its width (right)]

Answer: A = [ 2  0 ]
            [ 0  1 ]

What is the matrix B of the transformation from the right image to the left one?

Answer: B = [ 1/2  0 ]
            [  0   1 ]



Vertical expansion and contraction
Example
What is the matrix A of the linear transformation depicted?

[Figure: the unit square (left) and its image, stretched vertically to twice its height (right)]

Answer: A = [ 1  0 ]
            [ 0  2 ]

What is the matrix B of the transformation from the right image to the left?

Answer: B = [ 1   0  ]
            [ 0  1/2 ]



Shearing
Example
What is the matrix A of the linear transformation depicted?

[Figure: the unit square (left) and its horizontally sheared image (right)]

Answer: A = [ 1  3/4 ]
            [ 0   1  ]



Rotation (around the origin)
Example
What is the matrix A of the linear transformation depicted?

[Figure: a region (left) and its image rotated 90° clockwise about the origin (right)]

Answer: A = [  0  1 ]
            [ -1  0 ]



Reflection through the x-axis
Example
What is the matrix A of the linear transformation depicted?

[Figure: a region above the x-axis (left) and its reflection below the x-axis (right)]

Answer: A = [ 1   0 ]
            [ 0  -1 ]
Reflection through x = y
Example
What is the matrix A of the linear transformation depicted?

[Figure: a region (left) and its reflection in the line x = y (right)]

Answer: A = [ 0  1 ]
            [ 1  0 ]
Projection onto the x-axis
Example
What is the matrix A of the linear transformation depicted?

[Figure: a region (left) and its projection onto the x-axis, a segment on that axis (right)]

Answer: A = [ 1  0 ]
            [ 0  0 ]
Projection onto the y-axis
Example
What is the matrix A of the linear transformation depicted?

[Figure: a region (left) and its projection onto the y-axis, a segment on that axis (right)]

Answer: A = [ 0  0 ]
            [ 0  1 ]
Injectivity and Surjectivity
Definition
A transformation T : Rn → Rm is
one-to-one if each b in Rm is the image of at most one x in Rn ;
onto if each b in Rm is the image of at least one x in Rn .

Theorem
Let T : Rn → Rm be a linear transformation. Then T is one-to-one if and
only if the equation T (x) = 0 has only the trivial solution.

Proof. Since T is linear, we must have T(0) = 0. If T is one-to-one, two distinct vectors cannot have the same image, so T(x) = 0 cannot have any solutions besides x = 0. Conversely, if T is not one-to-one then we can find u ≠ v with T(u) = T(v). By linearity T(u − v) = T(u) − T(v) = 0, so u − v ≠ 0 is a nontrivial solution of T(x) = 0. q.e.d.



Injectivity (one-to-one) and Surjectivity (onto)

Corollary
Let T : Rn → Rm be given by T (x) = Ax. Then:
a) T is one-to-one if and only if the columns of A are linearly
independent;
b) T is onto if and only if the columns of A span Rm .

Remark
The theorem on the previous slide shows that it is enough to check
injectivity at 0. Every other failure of injectivity will just be a translation
of this one, so it is the solution set of T (x) = 0 that tells us how much
“loss of information” there is when T is applied to Rn .
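The corollary can be checked with a rank computation, since the columns of A are linearly independent exactly when rank(A) = n, and span Rm exactly when rank(A) = m. A sketch with an example matrix of my own:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # m = 3, n = 2

rank = np.linalg.matrix_rank(A)
one_to_one = (rank == A.shape[1])     # columns linearly independent
onto = (rank == A.shape[0])           # columns span R3

# Two independent columns cannot span R3: injective but not onto.
assert one_to_one and not onto
```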

