
MIT 18.06 Exam 1, Fall 2018 Solutions

Johnson

Problem 1 (30 points):


You have the matrix

A = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & -1 \\ 1 & 2 & -1 & 0 & 1 \\ 0 & 2 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 & 1 \end{pmatrix}.
(a) Find matrices P, U, L for a P A = LU factorization of A. Hint: look at A
carefully first: if you find the right permutation P (a matrix to re-order
the rows) it will be simple.
 
(b) Compute x = A^{-1} b where b = \begin{pmatrix} -2 \\ 2 \\ 2 \\ 1 \\ -4 \end{pmatrix}.

Solution:
(a) If we look at this matrix, we see that we can put it in upper triangular
form just by reordering the rows. The necessary reordering of the rows
can be described by the permutation matrix
 
P = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix}.
The product P A is then in upper triangular form, so that
   
U = \begin{pmatrix} 1 & 2 & -1 & 0 & 1 \\ 0 & 2 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \qquad L = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} = I.

(b) We want to compute x = A^{-1} b, which is equivalent to finding x such that Ax = b. Multiplying this equation by P gives

P A x = P b = \begin{pmatrix} 2 \\ 1 \\ 2 \\ -4 \\ -2 \end{pmatrix},

which then gives us the upper-triangular system

\begin{pmatrix} 1 & 2 & -1 & 0 & 1 \\ 0 & 2 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 2 \\ -4 \\ -2 \end{pmatrix}.
This is a triangular system of equations we can then solve via back substitution:

x_5 = -2 \implies x_5 = -2
2x_4 + x_5 = -4 \implies x_4 = -1
x_3 - x_5 = 2 \implies x_3 = 0
2x_2 + x_4 = 1 \implies x_2 = 1
x_1 + 2x_2 - x_3 + x_5 = 2 \implies x_1 = 2
The solution is then

x = \begin{pmatrix} 2 \\ 1 \\ 0 \\ -1 \\ -2 \end{pmatrix}.
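These results are easy to sanity-check numerically. Here is a minimal sketch (NumPy is assumed; it is not part of the original solution):

```python
import numpy as np

# The matrices from the problem and from part (a).
A = np.array([[0, 0, 0, 0, 1],
              [0, 0, 1, 0, -1],
              [1, 2, -1, 0, 1],
              [0, 2, 0, 1, 0],
              [0, 0, 0, 2, 1]], dtype=float)
P = np.array([[0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [1, 0, 0, 0, 0]], dtype=float)

U = P @ A            # reordering the rows already gives upper triangular U
L = np.eye(5)        # so L is just the identity
assert np.allclose(np.triu(U), U)   # U is indeed upper triangular
assert np.allclose(P @ A, L @ U)    # PA = LU

# Part (b): solve Ax = b and compare with the back-substitution answer.
b = np.array([-2, 2, 2, 1, -4], dtype=float)
x = np.linalg.solve(A, b)
assert np.allclose(x, [2, 1, 0, -1, -2])
```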

Problem 2 (30 points):


A is a 3 × 5 matrix. One of your Harvard friends performed row operations on A to convert it to rref form, but did something weird: instead of getting the usual R = [I F], they reduced it to a matrix in the form [F I] instead. In particular, their row operations gave:

A \to \begin{pmatrix} 2 & 3 & 1 & 0 & 0 \\ 4 & 5 & 0 & 1 & 0 \\ 6 & 7 & 0 & 0 & 1 \end{pmatrix}.
(a) Find a basis for N(A).

(b) Give a matrix M so that if you multiply A by M (on the left or right?) then the same row operations as the ones used by your Harvard friend will give a matrix in the usual rref form:

either M A or A M \to \begin{pmatrix} 1 & 0 & 0 & 2 & 3 \\ 0 & 1 & 0 & 4 & 5 \\ 0 & 0 & 1 & 6 & 7 \end{pmatrix}.

Solution:
(a) Row operations always preserve the null space N(A), i.e. any solution to Ax = 0 will be preserved by row operations. Let H = [F I] be the weird row-reduced matrix obtained by our Harvard friend. We can still seek special solutions to Hx = 0 using the usual method. Columns 3, 4 and 5 are the pivot columns, while columns 1 and 2 are the free columns. We therefore look for two special solutions:

s_1 = \begin{pmatrix} 1 \\ 0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix}, \qquad s_2 = \begin{pmatrix} 0 \\ 1 \\ y_1 \\ y_2 \\ y_3 \end{pmatrix}.

We can then see that \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} -2 \\ -4 \\ -6 \end{pmatrix} and \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} -3 \\ -5 \\ -7 \end{pmatrix}, i.e. the negatives of the entries of each column of F. This gives us a basis for the null space of A:

\begin{pmatrix} 1 \\ 0 \\ -2 \\ -4 \\ -6 \end{pmatrix}, \qquad \begin{pmatrix} 0 \\ 1 \\ -3 \\ -5 \\ -7 \end{pmatrix}.
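Since row operations preserve the null space, it suffices to check these special solutions against H itself. A quick numerical check (assuming NumPy, which the original solution does not use):

```python
import numpy as np

# The weird row-reduced matrix H = [F I] from the problem statement.
H = np.array([[2, 3, 1, 0, 0],
              [4, 5, 0, 1, 0],
              [6, 7, 0, 0, 1]], dtype=float)

# The two special solutions found above.
s1 = np.array([1, 0, -2, -4, -6], dtype=float)
s2 = np.array([0, 1, -3, -5, -7], dtype=float)

# H s = 0 implies A s = 0, because row operations preserve N(A).
assert np.allclose(H @ s1, 0)
assert np.allclose(H @ s2, 0)
```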

(b) We want to first reorder the columns of H so that it is in the usual rref form. Recall that column operations are equivalent to multiplying on the right by an appropriate matrix. A matrix that will put the columns of H in the correct order is the following permutation matrix:

M = \begin{pmatrix} 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}.

The matrix R = HM will then be in the usual rref form. Remember that
our Harvard friend performed row operations to put our matrix A into
the weird form H, and row operations won’t change the column order. In
particular, recall that row operations are equivalent to multiplying by an
appropriate matrix on the left, so there exists a matrix E so that EA = H.
The product R = HM = EAM = E(AM ) is then in the usual rref form.
So performing the same row operations as our Harvard friend on the matrix
AM will give us a matrix in the usual rref form.
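The column reordering can be verified directly; a short sketch (NumPy assumed):

```python
import numpy as np

H = np.array([[2, 3, 1, 0, 0],
              [4, 5, 0, 1, 0],
              [6, 7, 0, 0, 1]], dtype=float)
M = np.array([[0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)

R = H @ M   # right-multiplication reorders the columns of H
expected = np.array([[1, 0, 0, 2, 3],
                     [0, 1, 0, 4, 5],
                     [0, 0, 1, 6, 7]], dtype=float)
assert np.allclose(R, expected)   # HM is in the usual rref form [I F]
```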

Problem 3 (10 points):
In class, when we derived the LU factorization, we initially found L by multiplying a sequence of elementary elimination matrices, one to eliminate below each pivot. (We later found a more clever way to get L just by writing down the multipliers from the elimination steps, no arithmetic required.)
If A is a non-singular m × m matrix and we compute L in the "naive" way, by directly multiplying the elementary elimination matrices (by the usual rows × columns method, no tricks), how would the cost to compute L (the number of scalar-arithmetic operations) scale with m? (That is, roughly proportional to m, m^2, m^3, m^4, m^5, 2^m, or...?)

Solution:
Suppose A is a non-singular m × m matrix, and we have performed row operations to put A in upper triangular form. To do this, we need to eliminate every entry below each pivot. Eliminating beneath each pivot can be described by an elementary elimination matrix, so in general we will need m − 1 elimination matrices to put A in upper triangular form:

E_{m-1} \cdots E_1 A = U.

We can then find L by calculating the inverse of the product of elimination matrices, i.e. L = (E_{m-1} \cdots E_1)^{-1} = E_1^{-1} \cdots E_{m-1}^{-1}. Finding the inverse of each of the elimination matrices is trivial (we just multiply all off-diagonal elements by −1). So finding L requires us to multiply m − 1 of these m × m inverse elimination matrices together. Multiplying two m × m matrices together requires ∼ m^3 scalar arithmetic operations (assuming we do not use any tricks to make this matrix multiplication more efficient; in particular, we don't exploit the fact that the E_k matrices have a very special form and are mostly zero). So multiplying m − 1 such matrices will require ∼ m^4 scalar arithmetic operations. So finding L in this naive way requires a number of scalar-arithmetic operations that scales proportional to m^4.
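The scaling argument above can be sketched as a simple flop count. This is a rough model, not an exact operation count; the constant factor is an assumption, but the m^4 growth is the point:

```python
def naive_L_flops(m):
    """Rough flop count to form L = E1^{-1} ... E_{m-1}^{-1} naively.

    Each dense m x m matrix product costs about 2*m**3 scalar
    operations (m**2 output entries, ~2m operations each), and
    combining m - 1 factors takes m - 2 such products.
    """
    return (m - 2) * 2 * m**3

# Doubling m multiplies the cost by roughly 2^4 = 16, the
# signature of m^4 scaling.
ratio = naive_L_flops(200) / naive_L_flops(100)
assert 14 < ratio < 18
```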

Problem 4 (30 points):


Here are some miscellaneous questions that require little calculation:

(a) Is V a vector space or not? (For multiplication by real scalars and the usual ± operations.) If not, give a rule of vector spaces that is violated:

(i) A is a 3 × 6 matrix. V = all solutions x to Ax = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}.
(ii) A is a 3 × 6 matrix. V = all 6 × 2 matrices X where AX = 0 (the
3 × 2 zero matrix).
(iii) V = all 3 × 3 singular matrices A.

(iv) V = all 3 × 3 matrices whose diagonal entries average to zero.
(v) V = all differentiable functions f(x) with f'(0) = 2f(0). (f' is the derivative.)
(vi) V = all functions f(x) with f(x + y) = f(x)f(y).
 
(b) Give a matrix A whose null space is spanned by \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}.

(c) Give a nonzero matrix A whose column space is in R^3 but does not include \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}.

Solution:
(a) Is V a vector space or not?
(i) Thisis not a vector space, since it does not contain the zero vector
0
0
 
0
x= 0 .

 
0
0
(ii) This is a vector space (it is closely related to, but is not the same as, the null space of A). It definitely contains the zero vector X = 0 (the 6 × 2 zero matrix), and if we take any two matrices X_1, X_2 in our set V, then the linear combination aX_1 + bX_2 will still be in our set V, since A(aX_1 + bX_2) = aAX_1 + bAX_2 = 0.
(iii) This is not a vector space. The set does contain the zero matrix (the zero matrix is singular), and it is closed under multiplication by scalars. However, consider the two matrices

A_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},

which are both singular (they are in rref and neither has three pivots). Their sum is A_1 + A_2 = I, and the identity matrix is not singular. Therefore the set is not closed under matrix addition.

(iv) This is a vector space. Consider two 3 × 3 matrices A and B with diagonal entries (a_1, a_2, a_3) and (b_1, b_2, b_3), respectively, where a_1 + a_2 + a_3 = b_1 + b_2 + b_3 = 0. Any linear combination λA + µB will have diagonal entries (λa_1 + µb_1, λa_2 + µb_2, λa_3 + µb_3). The average of these diagonal entries is then \frac{1}{3}[(λa_1 + µb_1) + (λa_2 + µb_2) + (λa_3 + µb_3)] = \frac{λ}{3}(a_1 + a_2 + a_3) + \frac{µ}{3}(b_1 + b_2 + b_3) = 0.
(v) This is a vector space. Consider two functions f(x) and g(x) obeying this rule. Then any linear combination λf(x) + µg(x) will also obey this rule, since λf'(0) + µg'(0) = 2[λf(0) + µg(0)].
(vi) This is not a vector space. It contains the zero function, but is not closed under scalar multiplication (or addition): if f(x + y) = f(x)f(y) and g(x) = 2f(x), then g(x + y) = 2f(x + y) ≠ g(x)g(y) = 4f(x)f(y) = 4f(x + y). For example, the function f(x) = e^x is in this set, but the function g(x) = 2e^x is not.

(You might be interested to learn that this is a famous property of exponential functions. In fact, the only real-valued, anywhere-continuous functions that satisfy this rule are the zero function f(x) = 0 and functions of the form f(x) = e^{kx} for any k ∈ R.)
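The failure of closure in part (vi) is easy to confirm numerically at a sample point; a small sketch (NumPy assumed):

```python
import numpy as np

# f(x) = e^x satisfies f(x + y) = f(x) f(y); g = 2f does not,
# since g(x + y) = 2 e^{x+y} while g(x) g(y) = 4 e^{x+y}.
f = np.exp
g = lambda x: 2 * np.exp(x)

x, y = 0.3, 1.1
assert np.isclose(f(x + y), f(x) * f(y))
assert not np.isclose(g(x + y), g(x) * g(y))
```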
 
(b) We want to find a matrix whose null space is spanned by v = \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}. Such a matrix must have three columns, so that this vector can be an element of the null space. It must have rank two, so that there are no other vectors in a basis for its null space: the null space must be one-dimensional to be spanned by v. Examples of possible matrices are:

\begin{pmatrix} -1 & 0 & -1 \\ 0 & 2 & 4 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}.

You might be tempted to write a matrix with one row, like \begin{pmatrix} 1 & 0 & 1 \end{pmatrix}, which indeed has v in its nullspace, but such a matrix has other vectors in its null space also: this matrix is rank 1, so it has a two-dimensional nullspace that is not spanned by v alone. A matrix of all zeros would be even worse: it would have rank 0, with a three-dimensional nullspace that contains v but also contains every other 3-component vector.
(c) We want to find a matrix whose column space is a subspace of R^3, but does not include \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}. Such a matrix must have three rows, but can have any number of columns, provided that \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix} is not in the span of the columns, which means that the matrix necessarily has rank less than or equal to two. Examples of possible matrices are:

\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}.

Note that, to know that a vector is not in the column space, you must be sure that no linear combination of the columns of the matrix can give you that vector. So, for example, the matrix

\begin{pmatrix} 1 & 1 \\ 2 & 2 \\ 1 & 2 \end{pmatrix}

does not work. Even though \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix} does not appear explicitly as one of its columns, this vector is in the column space because you can get it by the linear combination \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix} = 3 \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} - 2 \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}. If you pick a 3 × 2 matrix at random, it is pretty unlikely to have \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix} in its column space. On the other hand, if you pick a 3 × 3 matrix at random, it is probably rank 3 and hence its column space will contain every 3-component vector. Another common point of confusion here is that whether a vector is in C(A) is not directly related to whether it is in N(A), so this problem is quite different from the previous part.
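Column-space membership can be tested numerically with a least-squares solve: b is in C(A) exactly when the least-squares residual is zero. A sketch (NumPy assumed; `in_column_space` is our helper, not a library function):

```python
import numpy as np

v = np.array([1, 2, -1], dtype=float)

def in_column_space(A, b, tol=1e-10):
    """True if b is in C(A): the least-squares residual vanishes."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ x - b) < tol

# The "trap" matrix from the text: v = 3*col1 - 2*col2, so it fails.
trap = np.array([[1, 1],
                 [2, 2],
                 [1, 2]], dtype=float)
assert in_column_space(trap, v)

# One of the valid examples: its columns cannot combine to give v.
ok = np.array([[1, 0],
               [0, 0],
               [0, 1]], dtype=float)
assert not in_column_space(ok, v)
```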
