MIT 18.06 Exam 1, Fall 2018 Solutions (Johnson)

Problem 1 (30 Points):
Solution:
(a) If we look at this matrix, we see that we can put it in upper triangular
form just by reordering the rows. The necessary reordering of the rows
can be described by the permutation matrix
$$P = \begin{pmatrix} 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&1&0&0&0 \\ 0&0&0&0&1 \\ 1&0&0&0&0 \end{pmatrix}.$$
The product $PA$ is then in upper triangular form, so that
$$U = \begin{pmatrix} 1&2&-1&0&1 \\ 0&2&0&1&0 \\ 0&0&1&0&-1 \\ 0&0&0&2&1 \\ 0&0&0&0&1 \end{pmatrix}, \qquad L = \begin{pmatrix} 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \\ 0&0&0&1&0 \\ 0&0&0&0&1 \end{pmatrix} = I.$$
(b) We want to compute $x = A^{-1}b$, which is equivalent to finding $x$ such
that $Ax = b$. Multiplying this equation by $P$ gives
$$PAx = Pb = \begin{pmatrix} 2 \\ 1 \\ 2 \\ -4 \\ -2 \end{pmatrix},$$
which then gives us the upper-triangular system
$$\begin{pmatrix} 1&2&-1&0&1 \\ 0&2&0&1&0 \\ 0&0&1&0&-1 \\ 0&0&0&2&1 \\ 0&0&0&0&1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 2 \\ -4 \\ -2 \end{pmatrix}.$$
This gives us a triangular system of equations we can then solve via back
substitution:
$$\begin{aligned}
x_5 &= -2 &&\implies x_5 = -2 \\
2x_4 + x_5 &= -4 &&\implies x_4 = -1 \\
x_3 - x_5 &= 2 &&\implies x_3 = 0 \\
2x_2 + x_4 &= 1 &&\implies x_2 = 1 \\
x_1 + 2x_2 - x_3 + x_5 &= 2 &&\implies x_1 = 2
\end{aligned}$$
The solution is then
$$x = \begin{pmatrix} 2 \\ 1 \\ 0 \\ -1 \\ -2 \end{pmatrix}.$$
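As a quick numerical check (a sketch added here, not part of the original solution), one can rebuild $A$ from $PA = U$ with $L = I$ and redo the back substitution in NumPy:

```python
import numpy as np

# P and U from part (a); since PA = LU with L = I, we have A = P^T U
P = np.array([[0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [1, 0, 0, 0, 0]], dtype=float)
U = np.array([[1, 2, -1, 0,  1],
              [0, 2,  0, 1,  0],
              [0, 0,  1, 0, -1],
              [0, 0,  0, 2,  1],
              [0, 0,  0, 0,  1]], dtype=float)
A = P.T @ U
b = P.T @ np.array([2., 1., 2., -4., -2.])  # chosen so Pb matches part (b)

# back substitution on U x = P b
c = P @ b
x = np.zeros(5)
for i in range(4, -1, -1):
    x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
print(x)  # [ 2.  1.  0. -1. -2.]
```

The loop resolves the unknowns from the bottom row up, exactly mirroring the hand computation above.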
Solution:
(a) Row operations always preserve the null space $N(A)$, i.e. any solution
to $Ax = 0$ will be preserved by row operations. Let $H = \begin{pmatrix} F & I \end{pmatrix}$ be the
weird row-reduced matrix obtained by our Harvard friend. We can still
seek special solutions to $Hx = 0$ using the usual method. Columns 3, 4,
and 5 are the pivot columns, while columns 1 and 2 are the free columns.
We therefore look for two special solutions:
$$s_1 = \begin{pmatrix} 1 \\ 0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix}, \qquad s_2 = \begin{pmatrix} 0 \\ 1 \\ y_1 \\ y_2 \\ y_3 \end{pmatrix}.$$
We can then see that
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} -2 \\ -4 \\ -6 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} -3 \\ -5 \\ -7 \end{pmatrix},$$
i.e. the negated entries of each column of $F$. This gives us a basis for the null
space of $A$:
$$\begin{pmatrix} 1 \\ 0 \\ -2 \\ -4 \\ -6 \end{pmatrix}, \qquad \begin{pmatrix} 0 \\ 1 \\ -3 \\ -5 \\ -7 \end{pmatrix}.$$
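As a sketch (taking $F$ to have columns $(2,4,6)$ and $(3,5,7)$, the negatives of the special-solution entries above), one can confirm that both vectors lie in the null space of $H$:

```python
import numpy as np

# F recovered from the special solutions above (their entries negated)
F = np.array([[2, 3],
              [4, 5],
              [6, 7]], dtype=float)
H = np.hstack([F, np.eye(3)])  # H = [F I]: pivots in columns 3-5

# set each free variable to 1 in turn; the pivot variables must then be
# -F times the free part, so the special solutions are the columns of [I; -F]
N = np.vstack([np.eye(2), -F])
print(H @ N)  # 3x2 zero matrix: both columns of N solve Hx = 0
```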
(b) We want to first reorder the columns of $H$ so that it is in the usual rref
form. Recall that column operations are equivalent to multiplying on the
right by an appropriate matrix. A matrix that will put the columns of $H$
in the correct order is the following permutation matrix:
$$M = \begin{pmatrix} 0&0&0&1&0 \\ 0&0&0&0&1 \\ 1&0&0&0&0 \\ 0&1&0&0&0 \\ 0&0&1&0&0 \end{pmatrix}.$$
The matrix $R = HM$ will then be in the usual rref form. Remember that
our Harvard friend performed row operations to put our matrix $A$ into
the weird form $H$, and row operations won't change the column order. In
particular, recall that row operations are equivalent to multiplying by an
appropriate matrix on the left, so there exists a matrix $E$ such that $EA = H$.
The product $R = HM = EAM = E(AM)$ is then in the usual rref form.
So performing the same row operations as our Harvard friend on the matrix
$AM$ will give us a matrix in the usual rref form.
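A quick sketch of this column permutation (again assuming the same hypothetical $F$ with columns $(2,4,6)$ and $(3,5,7)$):

```python
import numpy as np

F = np.array([[2, 3],
              [4, 5],
              [6, 7]], dtype=float)
H = np.hstack([F, np.eye(3)])  # H = [F I]

# build the permutation matrix M: column j of HM is column order[j] of H,
# moving the pivot columns (3, 4, 5) to the front
order = [2, 3, 4, 0, 1]
M = np.zeros((5, 5))
for j, i in enumerate(order):
    M[i, j] = 1.0

R = H @ M
print(R)  # [I | F]: the usual rref form
```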
Problem 3 (10 points):
In class, when we derived the LU factorization, we initially found $L$ by
multiplying a sequence of elementary elimination matrices, one to eliminate
below each pivot. (We later found a more clever way to get $L$ just by writing
down the multipliers from the elimination steps, no arithmetic required.)
If $A$ is a non-singular $m \times m$ matrix and we compute $L$ in the "naive" way,
by directly multiplying the elementary elimination matrices (by the usual
rows-times-columns method, no tricks), how would the cost to compute $L$ (the
number of scalar-arithmetic operations) scale with $m$? (That is, roughly
proportional to $m$, $m^2$, $m^3$, $m^4$, $m^5$, $2^m$, or...?)
Solution:
Suppose $A$ is a non-singular $m \times m$ matrix, and we have performed row
operations to put $A$ in upper triangular form. To do this, we need to
eliminate every entry below each pivot. Eliminating beneath each pivot can be
described by an elementary elimination matrix, so in general we will need
$m - 1$ elimination matrices to put $A$ in upper triangular form:
$$E_{m-1} \cdots E_2 E_1 A = U.$$
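The scaling can also be estimated by counting operations. The sketch below (an illustration under standard assumptions, not the exam's own derivation) counts the cost of naively accumulating these elimination matrices: each dense rows-times-columns product of two $m \times m$ matrices costs about $2m^3$ scalar operations, and accumulating $m-1$ matrices takes $m-2$ such products:

```python
def naive_matmul_flops(m):
    # one dense m-by-m product by rows-times-columns:
    # m*m entries, each needing m multiplies and m-1 adds
    return m * m * (2 * m - 1)

def naive_L_cost(m):
    # accumulating m-1 elimination matrices with no tricks
    # requires m-2 full dense products
    return (m - 2) * naive_matmul_flops(m)

for m in [10, 20, 40]:
    # cost grows by roughly 16x each time m doubles, i.e. ~m^4 scaling
    print(m, naive_L_cost(m))
```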
(iv) V = all $3 \times 3$ matrices whose diagonal entries average to zero.
(v) V = all differentiable functions $f(x)$ with $f'(0) = 2f(0)$. ($f'$ is the
derivative.)
(vi) V = all functions $f(x)$ with $f(x + y) = f(x)f(y)$.
(b) Give a matrix $A$ whose null space is spanned by $\begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}$.
(c) Give a nonzero matrix $A$ whose column space is in $\mathbb{R}^3$ but does not
include $\begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}$.
Solution:
(a) Is V a vector space or not?
(i) This is not a vector space, since it does not contain the zero vector
$$x = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}.$$
(ii) This is a vector space (it is closely related to, but is not the same
as, the null space of $A$). It definitely contains the zero vector
$$X = \begin{pmatrix} 0&0 \\ 0&0 \\ 0&0 \\ 0&0 \\ 0&0 \\ 0&0 \end{pmatrix},$$
and if we take any two matrices $X_1, X_2$ in our set $V$, then
the linear combination $aX_1 + bX_2$ will still be in our set $V$, since
$A(aX_1 + bX_2) = aAX_1 + bAX_2 = 0$.
(iii) This is not a vector space. The set does contain the zero matrix
(the zero matrix is singular), and it is closed under multiplication
by scalars. However, consider the two matrices
$$A_1 = \begin{pmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&0 \end{pmatrix} \quad\text{and}\quad A_2 = \begin{pmatrix} 0&0&0 \\ 0&0&0 \\ 0&0&1 \end{pmatrix},$$
which are both singular (they are in rref and neither has three pivots).
Their sum is $A_1 + A_2 = I$, and the identity matrix is not singular.
Therefore the set is not closed under matrix addition.
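As a quick numerical illustration of this counterexample:

```python
import numpy as np

A1 = np.diag([1.0, 1.0, 0.0])  # singular: only two pivots
A2 = np.diag([0.0, 0.0, 1.0])  # singular: only one pivot
print(np.linalg.det(A1), np.linalg.det(A2))  # 0.0 0.0
print(np.linalg.det(A1 + A2))  # 1.0: the sum is the identity, not singular
```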
(iv) This is a vector space. Consider two $3 \times 3$ matrices $A$ and $B$ with
diagonal entries $(a_1, a_2, a_3)$ and $(b_1, b_2, b_3)$, respectively, where
$a_1 + a_2 + a_3 = b_1 + b_2 + b_3 = 0$. Any linear combination $\lambda A + \mu B$
will have diagonal entries $(\lambda a_1 + \mu b_1, \lambda a_2 + \mu b_2, \lambda a_3 + \mu b_3)$.
The average of these diagonal entries is then
$\frac{1}{3}[(\lambda a_1 + \mu b_1) + (\lambda a_2 + \mu b_2) + (\lambda a_3 + \mu b_3)] = 0$.
(v) This is a vector space. Consider two functions $f(x)$ and $g(x)$ obeying
this rule. Then any linear combination $\lambda f(x) + \mu g(x)$ will also obey
this rule, since $\lambda f'(0) + \mu g'(0) = 2[\lambda f(0) + \mu g(0)]$.
(vi) This is not a vector space. It contains the zero function, but is
not closed under scalar multiplication (or addition): if $f(x + y) = f(x)f(y)$
and $g(x) = 2f(x)$, then $g(x + y) = 2f(x + y) \neq g(x)g(y) = 4f(x)f(y) = 4f(x + y)$.
For example, the function $f(x) = e^x$ is in this set, but the function
$g(x) = 2e^x$ is not.
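A small numerical check of the counterexample in (vi):

```python
import math

f = lambda x: math.exp(x)       # satisfies f(x+y) = f(x)f(y)
g = lambda x: 2 * math.exp(x)   # the scalar multiple 2f

x, y = 0.3, 0.7
print(abs(f(x + y) - f(x) * f(y)))  # ~0: f is in the set
print(abs(g(x + y) - g(x) * g(y)))  # clearly nonzero: 2f is not
```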
or equal to two. Examples of possible matrices are:
$$\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} 1&0 \\ 0&0 \\ 0&1 \end{pmatrix}, \qquad \begin{pmatrix} 1&0&1 \\ 0&1&2 \\ 0&0&0 \end{pmatrix}.$$
Note that, to know that a vector is not in the column space, you must be
sure that no linear combination of the columns of the matrix can give
you that vector. So, for example, the matrix
$$\begin{pmatrix} 1&1 \\ 2&2 \\ 1&2 \end{pmatrix}$$
does not work. Even though $\begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}$ does not appear explicitly as
one of its columns, this vector is in the column space because you can get
it by the linear combination
$$\begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix} = 3 \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} - 2 \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}.$$
If you pick a $3 \times 2$ matrix at random, it is pretty unlikely to have
$\begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}$ in its column space. On the other hand, if you
pick a $3 \times 3$ matrix at random, it is probably rank 3 and hence its
column space will contain every 3-component vector. Another common point of
confusion here is that whether a vector is in $C(A)$ is not directly related
to whether it is in $N(A)$, so this problem is quite different from the
previous part.
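That linear combination can be verified with a short NumPy sketch:

```python
import numpy as np

A = np.array([[1, 1],
              [2, 2],
              [1, 2]], dtype=float)
v = np.array([1, 2, -1], dtype=float)

coeffs = np.array([3, -2], dtype=float)
print(A @ coeffs)  # [ 1.  2. -1.]: v is in C(A)

# equivalently, a least-squares solve recovers the combination exactly,
# confirming that v lies in the column space
c, res, rank, _ = np.linalg.lstsq(A, v, rcond=None)
print(c)  # [ 3. -2.]
```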