
9/11/2017 Coursera | Online Courses From Top Universities.

Support Vector Machines


Quiz, 5 questions

1.  (1 point) Suppose you have trained an SVM classifier with a Gaussian kernel, and it learned the following decision boundary on the training set:

You suspect that the SVM is underfitting your dataset. Should you try increasing or decreasing C? Increasing or decreasing σ²?

It would be reasonable to try increasing C. It would also be reasonable to try decreasing σ².

It would be reasonable to try decreasing C. It would also be reasonable to try decreasing σ².

It would be reasonable to try increasing C. It would also be reasonable to try increasing σ².

It would be reasonable to try decreasing C. It would also be reasonable to try increasing σ².
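The trade-off in this question can be sanity-checked numerically. Below is a minimal sketch in plain Python (the helper name gaussian_kernel is ours, not from the course): a smaller σ² makes the similarity fall off faster around a landmark, giving the boundary more flexibility, while a larger C penalizes training mistakes more heavily; both reduce bias, which is what an underfitting model needs.

```python
import math

def gaussian_kernel(x, l, sigma_sq):
    """Gaussian (RBF) similarity between 1-D points x and landmark l."""
    return math.exp(-((x - l) ** 2) / (2 * sigma_sq))

# At a fixed distance from the landmark, a smaller sigma^2 makes the
# similarity decay faster, so features become more local and the learned
# boundary can bend more sharply (higher variance, lower bias).
wide = gaussian_kernel(1.0, 0.0, sigma_sq=1.0)     # exp(-1/2) ~ 0.607
narrow = gaussian_kernel(1.0, 0.0, sigma_sq=0.25)  # exp(-2)   ~ 0.135
assert narrow < wide
```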

2.  (1 point) The formula for the Gaussian kernel is given by

similarity(x, l^(1)) = exp(−||x − l^(1)||² / (2σ²)).

The figure below shows a plot of f1 = similarity(x, l^(1)) when σ² = 1.

Which of the following is a plot of f1 when σ² = 0.25?


[Answer choices shown as plots: Figure 4, Figure 3, Figure 2]
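The effect of shrinking σ² can also be worked out in closed form: f1 drops to 0.5 at distance ||x − l^(1)|| = sqrt(2σ² ln 2), so quartering σ² halves the width of the bump. A small sketch assuming 1-D inputs (f1 and half_width are our names):

```python
import math

def f1(x, landmark, sigma_sq):
    """similarity(x, l) = exp(-||x - l||^2 / (2 sigma^2)) for 1-D x."""
    return math.exp(-((x - landmark) ** 2) / (2 * sigma_sq))

def half_width(sigma_sq):
    """Distance from the landmark at which f1 falls to 0.5."""
    return math.sqrt(2 * sigma_sq * math.log(2))

print(half_width(1.0))   # ~1.177
print(half_width(0.25))  # ~0.589: quartering sigma^2 halves the bump width
assert abs(f1(half_width(0.25), 0.0, 0.25) - 0.5) < 1e-12
```

So the correct plot is the one whose bump is narrower, by a factor of two, than the σ² = 1 plot.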


3.  (1 point) The SVM solves

min_θ  C ∑_{i=1}^{m} [ y^(i) cost1(θ^T x^(i)) + (1 − y^(i)) cost0(θ^T x^(i)) ] + ∑_{j=1}^{n} θ_j²

where the functions cost0(z) and cost1(z) look like this:

The first term in the objective is:

C ∑_{i=1}^{m} [ y^(i) cost1(θ^T x^(i)) + (1 − y^(i)) cost0(θ^T x^(i)) ].

This first term will be zero if two of the following four conditions hold true. Which are the two conditions that would guarantee that this term equals zero?

For every example with y^(i) = 0, we have that θ^T x^(i) ≤ 0.

For every example with y^(i) = 0, we have that θ^T x^(i) ≤ −1.

For every example with y^(i) = 1, we have that θ^T x^(i) ≥ 0.

For every example with y^(i) = 1, we have that θ^T x^(i) ≥ 1.
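The piecewise-linear costs in the plot can be sketched as hinge functions (an assumption matching their shape: cost1(z) is zero for z ≥ 1, cost0(z) is zero for z ≤ −1), which makes it easy to verify which margin conditions zero out the first term:

```python
def cost1(z):
    """Hinge-style cost for y = 1: zero once z >= 1, linear below."""
    return max(0.0, 1.0 - z)

def cost0(z):
    """Hinge-style cost for y = 0: zero once z <= -1, linear above."""
    return max(0.0, 1.0 + z)

def first_term(C, y, z):
    """C * sum_i [ y_i cost1(z_i) + (1 - y_i) cost0(z_i) ], z_i = theta^T x_i."""
    return C * sum(yi * cost1(zi) + (1 - yi) * cost0(zi) for yi, zi in zip(y, z))

# Margins satisfied: z >= 1 whenever y = 1 and z <= -1 whenever y = 0,
# so every cost is exactly zero and the whole term vanishes.
print(first_term(10.0, [1, 0, 1], [1.5, -2.0, 1.0]))  # 0.0
# Merely getting the sign right (z >= 0 for y = 1) is NOT enough:
print(first_term(10.0, [1], [0.5]))                   # 5.0
```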

4.  (1 point) Suppose you have a dataset with n = 10 features and m = 5000 examples.

After training your logistic regression classifier with gradient descent, you find that it has underfit the training set and does not achieve the desired performance on the training or cross-validation sets.

Which of the following might be promising steps to take? Check all that apply.

Increase the regularization parameter λ.

Create / add new polynomial features.

Use an SVM with a linear kernel, without introducing new features.

Use an SVM with a Gaussian kernel.
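Of the remedies listed, creating polynomial features is the most mechanical; a minimal pure-Python sketch (polynomial_features is a hypothetical helper, not anything from the course):

```python
from itertools import combinations_with_replacement

def polynomial_features(x, degree=2):
    """Return the original features plus all products of entries up to `degree`."""
    feats = list(x)
    for d in range(2, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), d):
            p = 1.0
            for i in idx:
                p *= x[i]
            feats.append(p)
    return feats

print(polynomial_features([2.0, 3.0]))  # [2.0, 3.0, 4.0, 6.0, 9.0]
```

With n = 10 original features, the degree-2 terms add 55 more, which often provides enough extra capacity to fix underfitting.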

5.  (1 point) Which of the following statements are true? Check all that apply.

If you are training multi-class SVMs with the one-vs-all method, it is not possible to use a kernel.

Suppose you have 2D input examples (i.e., x^(i) ∈ R²). The decision boundary of the SVM (with the linear kernel) is a straight line.

The maximum value of the Gaussian kernel (i.e., sim(x, l^(1))) is 1.

If the data are linearly separable, an SVM using a linear kernel will return the same parameters θ regardless of the chosen value of C (i.e., the resulting value of θ does not depend on C).
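The statement about the kernel's maximum is easy to verify: the squared distance ||x − l^(1)||² is zero exactly when x = l^(1), the exponent is then 0, and exp(0) = 1; any other x gives a negative exponent and a value below 1. A quick check (sim is our helper name):

```python
import math

def sim(x, l, sigma_sq=1.0):
    """Gaussian kernel for vectors x and l given as plain lists."""
    sq_dist = sum((xi - li) ** 2 for xi, li in zip(x, l))
    return math.exp(-sq_dist / (2 * sigma_sq))

# Maximum of 1 is attained only when x coincides with the landmark.
print(sim([1.0, 2.0], [1.0, 2.0]))  # 1.0
assert sim([1.0, 2.0], [1.5, 2.5]) < 1.0
```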

