
I M. Tech. – I Sem. (CSE)

L T C
3 0 3

Program Elective I
(16CS5010) MACHINE LEARNING
Course Objectives:
 To learn how patterns and concepts are learned from data without being explicitly programmed, across various IoT nodes.
 To design and analyze various machine learning algorithms and techniques with a modern outlook focusing on recent advances.
 To explore the supervised and unsupervised learning paradigms of machine learning.
 To explore deep learning techniques and various feature extraction strategies.

Course Outcomes:
After completion of course, students would be able to:
 Extract features that can be used for a particular machine learning approach in various IoT applications.
 Compare and contrast the pros and cons of various machine learning techniques and gain insight into when to apply a particular approach.
 Mathematically analyze various machine learning approaches and paradigms.

UNIT I
Supervised Learning (Regression/Classification)
 Basic methods: Distance-based methods, Nearest-Neighbours, Decision Trees, Naive Bayes
 Linear models: Linear Regression, Logistic Regression, Generalized Linear Models
 Support Vector Machines, Nonlinearity and Kernel Methods
 Beyond Binary Classification: Multi-class/Structured Outputs, Ranking

UNIT II
Unsupervised Learning
 Clustering: K-means/Kernel K-means
 Dimensionality Reduction: PCA and kernel PCA
 Matrix Factorization and Matrix Completion
 Generative Models (mixture models and latent factor models)

UNIT III
Evaluating Machine Learning algorithms and Model Selection, Introduction to Statistical Learning Theory,
Ensemble Methods (Boosting, Bagging, Random Forests)
UNIT IV
Sparse Modeling and Estimation, Modeling Sequence/Time-Series Data, Deep Learning and Feature Representation
Learning, Scalable Machine Learning (Online and Distributed Learning), and a selection of other advanced
topics, e.g., Semi-supervised Learning, Active Learning, Reinforcement Learning, Inference in Graphical Models,
Introduction to Bayesian Learning and Inference.
UNIT V
Recent trends in various learning techniques of machine learning and classification methods for IoT applications;
various models for IoT applications.

R18 M.Tech – CSE
Text /References:
1. Kevin Murphy, Machine Learning: A Probabilistic Perspective, MIT Press, 2012
2. Trevor Hastie, Robert Tibshirani, Jerome Friedman, The Elements of Statistical Learning,
Springer 2009 (freely available online)

3. Christopher Bishop, Pattern Recognition and Machine Learning, Springer, 2007.

Machine Learning - 15CS73

COURSE DESCRIPTION
Machine Learning is concerned with computer programs that automatically improve their performance through
experience. This course covers the theory and practical algorithms for machine learning from a variety of
perspectives. We cover topics such as Bayesian networks, decision tree learning, statistical learning methods,
unsupervised learning and reinforcement learning. The course covers theoretical concepts such as inductive bias and
Bayesian learning methods. Short programming assignments include hands-on experiments with various learning
algorithms. This course is designed to give a graduate-level student a thorough grounding in the methodologies,
technologies, mathematics and algorithms currently needed by people who do research in machine learning. 

COURSE OBJECTIVES
This course will enable students to,

1. Define machine learning and understand the basic theory underlying machine learning.
2. Differentiate supervised, unsupervised and reinforcement learning.
3. Understand the basic concepts of learning and decision trees.
4. Understand neural networks and Bayesian techniques for problems that appear in machine learning.
5. Understand instance-based learning and reinforcement learning.
6. Perform statistical analysis of machine learning techniques.

COURSE OUTCOMES
After studying this course, the students will be able to

1. Choose the learning techniques and investigate concept learning
2. Identify the characteristics of decision trees and solve the problems associated with them
3. Apply neural networks effectively for appropriate applications
4. Apply Bayesian techniques and derive learning rules effectively
5. Evaluate hypotheses and investigate instance-based learning and reinforcement learning
CONTENT
Module – 1   Introduction, Concept Learning
Module – 2   Decision Tree Learning
Module – 3   Artificial Neural Networks
Module – 4   Bayesian Learning
Module – 5   Evaluating Hypothesis, Instance Based Learning, Reinforcement Learning

Introduction: Computers that see

You're standing at a crowded railway station waiting for a friend. As you wait, hundreds of people
pass you by. Each one looks different, but when your friend arrives you have no problem picking
her out of the crowd. Recognizing people's faces is something we humans do effortlessly, but
how would you program a computer to recognize a person? You could try to make a set of rules.
For example, your friend has long black hair and brown eyes, but that could describe literally
billions of people. What is it about her that you actually recognize? Something about the shape of
her nose or the curve of her chin? But can you put it into words? The truth is that we can
recognize people without ever really knowing how we do it. We cannot describe every detail of
how we recognize someone. We just know how to do it. The trouble is that to program a
computer, we need to break the task down into its little details. That makes it very difficult or even
impossible to program a computer to recognize faces. Face recognition is an example of a task
that people find very easy, but that is very very hard for computers. These tasks are often called
artificial intelligence, or AI. You're now going to learn about the type of computing technique that
can solve a lot of these AI problems and that's revolutionising what computers can do: that
technique is machine learning.

Machine Learning Module -1 Questions:


1. Define the following terms:
o Learning
o LMS weight update rule
o Version Space
o Consistent Hypothesis
o General Boundary
o Specific Boundary
o Concept
2. What are the important objectives of machine learning?
3. Explain the Find-S algorithm with the example given below. Give its applications.
Example   Sky     AirTemp   Humidity   Wind     Water   Forecast   EnjoySport
1         Sunny   Warm      Normal    Strong   Warm    Same       Yes
2         Sunny   Warm      High      Strong   Warm    Same       Yes
3         Rainy   Cold      High      Strong   Warm    Change     Yes
4         Sunny   Warm      High      Strong   Cool    Change     Yes
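Question 3 can be illustrated with a minimal Python sketch of Find-S run on the table above. This is an informal rendering of the algorithm (attribute order and values as in Table 1), not the textbook pseudocode:

```python
def find_s(examples):
    """Find-S: the most specific hypothesis consistent with the positive examples."""
    positives = [x for x, label in examples if label == "Yes"]
    h = list(positives[0])              # initialize to the first positive example
    for x in positives[1:]:
        for i, (hv, xv) in enumerate(zip(h, x)):
            if hv != xv:
                h[i] = "?"              # generalize any attribute that disagrees
    return h

# Table 1 (EnjoySport), exactly as given above
data = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"),   "Yes"),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"),   "Yes"),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), "Yes"),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), "Yes"),
]
print(find_s(data))   # → ['?', '?', '?', 'Strong', '?', '?']
```

Because all four examples here are positive, only Wind survives as a specific constraint; Find-S simply ignores negative examples, which is one of its well-known limitations.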

4. What do you mean by a well-posed learning problem? Explain the important features that are required to
well-define a learning problem.
5. Explain the inductively biased hypothesis space and the unbiased learner.
6. What are the basic design issues and approaches to machine learning?
7. How is the Candidate Elimination algorithm different from the Find-S algorithm?
8. How do you design a checkers learning problem?
9. Explain the various stages involved in designing a learning system.
10. Trace the Candidate Elimination Algorithm for the hypothesis space H’, given the sequence of training
examples from Table 1.
H’ = < ?, Cold, High, ?, ?, ? >
11. Differentiate between training data and testing data.
12. Differentiate between supervised, unsupervised and reinforcement learning.
13. What are the issues in machine learning?
14. Explain the List-Then-Eliminate algorithm with an example.
15. What is the difference between the Find-S and Candidate Elimination algorithms?
16. Explain the concept of inductive bias.
17. With a neat diagram, explain how you can model inductive systems by equivalent deductive systems.
18. What do you mean by concept learning?

Machine Learning Module-2 Questions


1. Give decision trees to represent the following Boolean functions:
o A ∧ ¬B
o A ∨ [B ∧ C]
o A XOR B
o [A ∧ B] ∨ [C ∧ D]
2. Consider the following set of training examples:
Instance   Classification   a1   a2
1          +                T    T
2          +                T    T
3          -                T    F
4          +                F    F
5          -                F    T
6          -                F    T
o (a) What is the entropy of this collection of training examples with respect to the target function
classification?
(b) What is the information gain of a2 relative to these training examples?
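Parts (a) and (b) can be checked numerically. The sketch below is an illustrative Python rendering of the entropy and information-gain formulas, with the six examples encoded as (label, attributes) pairs:

```python
from math import log2

def entropy(labels):
    """H(S) = -sum_c p_c * log2(p_c), over the classes present in S."""
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

def info_gain(rows, attr):
    """Gain(S, A) = H(S) - sum_v |S_v|/|S| * H(S_v)."""
    labels = [label for label, _ in rows]
    gain = entropy(labels)
    for v in {a[attr] for _, a in rows}:
        subset = [label for label, a in rows if a[attr] == v]
        gain -= (len(subset) / len(rows)) * entropy(subset)
    return gain

# The six training examples above
rows = [("+", {"a1": "T", "a2": "T"}), ("+", {"a1": "T", "a2": "T"}),
        ("-", {"a1": "T", "a2": "F"}), ("+", {"a1": "F", "a2": "F"}),
        ("-", {"a1": "F", "a2": "T"}), ("-", {"a1": "F", "a2": "T"})]
print(entropy([label for label, _ in rows]))   # → 1.0 (three +, three -)
print(info_gain(rows, "a2"))                   # ≈ 0.0: a2 splits into 2+/2- and 1+/1-
```

Both branches of a2 are maximally impure, so its information gain is zero; Gain(S, a1) works out to about 0.082 by the same formula.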
3. NASA wants to be able to discriminate between Martians (M) and Humans (H) based on the following
characteristics: Green ∈{N, Y} , Legs ∈{2,3} , Height ∈{S, T}, Smelly ∈{N, Y}
o Our available training data is as follows:

a) Greedily learn a decision tree using the ID3 algorithm and draw the tree.

b)

(i) Write the learned concept for Martian as a set of conjunctive rules (e.g., if (green=Y and legs=2 and height=T and
smelly=N), then Martian; else if ... then Martian;...; else Human).

(ii) The solution of part b)i) above uses up to 4 attributes in each conjunction. Find a set of conjunctive rules using
only 2 attributes per conjunction that still results in zero error in the training set. Can this simpler hypothesis be
represented by a decision tree of depth 2? Justify.

4. Discuss Entropy in the ID3 algorithm with an example.

5. Compare Entropy and Information Gain in ID3 with an example.

6. Describe hypothesis Space search in ID3 and contrast it with Candidate-Elimination algorithm.

7. Relate Inductive bias with respect to Decision tree learning

8. Illustrate Occam’s razor and relate the importance of Occam’s razor with respect to ID3 algorithm.
9. List the issues in Decision Tree Learning. Interpret the algorithm with respect to Overfitting the data.

10. Discuss the effect of reduced Error pruning in decision tree algorithm.

11. What type of problems are best suited for decision tree learning?

12. Write the steps of the ID3 algorithm.

13. What are the capabilities and limitations of ID3?

14. Define (a) Preference Bias (b) Restriction Bias.

15. Explain the various issues in decision tree learning.

16. Describe Reduced Error Pruning.

17. What are the alternative measures for selecting attributes?

18. What is Rule Post Pruning?
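For question 12, the steps of ID3 can be summarised as: compute the information gain of each remaining attribute, split on the best one, and recurse until nodes are pure or attributes run out. The steps can be sketched as a compact recursive Python function (an illustrative implementation, not the textbook pseudocode, reusing the a1/a2 table from question 2):

```python
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def id3(rows, attrs):
    """rows: list of (label, {attr: value}); attrs: candidate attribute names."""
    labels = [lab for lab, _ in rows]
    if len(set(labels)) == 1:                 # pure node: return the class
        return labels[0]
    if not attrs:                             # no attributes left: majority class
        return Counter(labels).most_common(1)[0][0]
    def gain(a):                              # information gain of splitting on a
        rem = sum(len(sub) / len(rows) * entropy(sub)
                  for v in {x[a] for _, x in rows}
                  for sub in [[lab for lab, x in rows if x[a] == v]])
        return entropy(labels) - rem
    best = max(attrs, key=gain)               # greedy: pick the highest-gain attribute
    return {best: {v: id3([(lab, x) for lab, x in rows if x[best] == v],
                          [a for a in attrs if a != best])
                   for v in {x[best] for _, x in rows}}}

# The a1/a2 training set from question 2
rows = [("+", {"a1": "T", "a2": "T"}), ("+", {"a1": "T", "a2": "T"}),
        ("-", {"a1": "T", "a2": "F"}), ("+", {"a1": "F", "a2": "F"}),
        ("-", {"a1": "F", "a2": "T"}), ("-", {"a1": "F", "a2": "T"})]
print(id3(rows, ["a1", "a2"]))   # root splits on a1, then on a2 in each branch
```

On this data ID3 picks a1 at the root (gain ≈ 0.082 versus 0 for a2) and completes each branch with a split on a2.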

 
Machine Learning Module-3 Questions.

1) What is Artificial Neural Network?

2) What are the types of problems to which Artificial Neural Networks can be applied?

3) Explain the concept of a Perceptron with a neat diagram.

4) Discuss the Perceptron training rule.

5) Under what conditions does the perceptron rule fail, making it necessary to apply the delta rule?

6) What do you mean by Gradient Descent?

7) Derive the Gradient Descent Rule.

8) What are the conditions in which Gradient Descent is applied?

9) What are the difficulties in applying Gradient Descent?

10) Differentiate between Gradient Descent and Stochastic Gradient Descent.

11) Define the Delta Rule.

12) Derive the Backpropagation rule, considering the training rule for output unit weights and the training rule for
hidden unit weights.

13) Write the algorithm for Backpropagation.

14) Explain how to learn Multilayer Networks using the Gradient Descent Algorithm.

15) What is a Squashing Function?
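Questions 3–5 revolve around the perceptron training rule, w_i ← w_i + η(t − o)x_i. A minimal sketch on the linearly separable AND function; the dataset, learning rate, and epoch count are illustrative choices:

```python
def train_perceptron(data, eta=0.1, epochs=20):
    """Perceptron training rule: w_i <- w_i + eta * (t - o) * x_i."""
    w = [0.0, 0.0, 0.0]                      # w[0] is the bias weight (x0 = 1)
    for _ in range(epochs):
        for x1, x2, t in data:
            o = 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else -1
            for i, xi in enumerate((1, x1, x2)):
                w[i] += eta * (t - o) * xi   # updates only when o != t
    return w

# Linearly separable AND function with -1/+1 targets
and_data = [(0, 0, -1), (0, 1, -1), (1, 0, -1), (1, 1, 1)]
w = train_perceptron(and_data)
print([1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else -1
       for x1, x2, _ in and_data])          # → [-1, -1, -1, 1]
```

Since AND is linearly separable, the rule converges; on a non-separable target such as XOR it would oscillate forever, which is exactly the situation (question 5) in which the delta rule becomes necessary.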

Machine Learning Module-4 Questions

1) Explain the concept of Bayes theorem with an example.

2) Explain Bayesian belief network and conditional independence with example.

3) What are Bayesian Belief nets? Where are they used?

4) Explain the brute-force MAP hypothesis learner. What is the Minimum Description Length principle?

5) Explain the k-Means Algorithm with an example.

6) How do you classify text using Bayes Theorem?

7) Define (i) Prior Probability (ii) Conditional Probability (iii) Posterior Probability

8) Explain Brute force Bayes Concept Learning

9) Explain the concept of EM Algorithm.

10) What is Conditional Independence?

11) Explain the Naïve Bayes Classifier with an example.

12) Describe the concept of MDL.

13) What are Consistent Learners?

14) Discuss the Maximum Likelihood and Least Squared Error Hypothesis.

15) Describe the Maximum Likelihood Hypothesis for predicting probabilities.

16) Explain Gradient Search to Maximize Likelihood in a neural net.
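Question 1 can be made concrete with a small worked computation of P(h|D) = P(D|h)P(h)/P(D). The numbers below are illustrative assumptions (a rare condition with prior 0.008, a test with a 98% true-positive rate and a 3% false-positive rate), not figures from the syllabus:

```python
# Bayes theorem: P(h | D) = P(D | h) * P(h) / P(D)
p_h = 0.008                 # prior: P(condition)
p_pos_given_h = 0.98        # P(positive test | condition)
p_pos_given_not_h = 0.03    # P(positive test | no condition)

# Total probability: P(positive) summed over both hypotheses
p_pos = p_pos_given_h * p_h + p_pos_given_not_h * (1 - p_h)
posterior = p_pos_given_h * p_h / p_pos
print(round(posterior, 3))  # → 0.209
```

Despite the accurate-looking test, the posterior is only about 21% because the prior is so small; this is the standard intuition that Bayes theorem questions are meant to probe.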

Machine Learning Module-5 Questions

1. What is Reinforcement Learning?

2. Explain the Q function and Q Learning Algorithm.

3. Describe the K-Nearest Neighbour learning algorithm for a continuous-valued target function.

4. Discuss the major drawbacks of the K-Nearest Neighbour learning algorithm and how they can be corrected.

5. Define the following terms with respect to K - Nearest Neighbour Learning :


i) Regression ii) Residual iii) Kernel Function.
6. Explain the Q learning algorithm assuming deterministic rewards and actions.

7. Explain the K-Nearest Neighbour algorithm for approximating a discrete-valued function f : ℝⁿ → V, with
pseudocode.

8. Explain Locally Weighted Linear Regression.

9. Explain the CADET system using Case-Based Reasoning.


10. Explain the two key difficulties that arise while estimating the accuracy of a hypothesis.
11. Define the following terms:
a. Sample Error b. True Error c. Random Variable
d. Expected Value e. Variance f. Standard Deviation

12. Explain Binomial Distribution with an example.

13. Explain Normal or Gaussian distribution with an example.
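For question 6, here is a minimal Q-learning sketch under deterministic rewards and actions, using the update Q(s, a) ← r + γ · max over a′ of Q(s′, a′). The 4-state corridor environment, the reward of 100 at the goal, and γ = 0.9 are all illustrative assumptions:

```python
import random

GAMMA = 0.9
STATES, ACTIONS, GOAL = range(4), (-1, 1), 3     # corridor 0-1-2-3, goal at state 3
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(s, a):
    """Deterministic transition: move left/right, reward 100 on entering the goal."""
    s2 = min(max(s + a, 0), GOAL)
    return s2, (100.0 if s2 == GOAL else 0.0)

random.seed(0)
for _ in range(500):                             # episodes of random exploration
    s = random.choice([0, 1, 2])
    while s != GOAL:
        a = random.choice(ACTIONS)
        s2, r = step(s, a)
        # Deterministic Q update (no learning rate needed)
        Q[(s, a)] = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        s = s2
print(Q[(2, 1)], Q[(1, 1)])   # converges to 100.0 and 90.0 (= 0.9 * 100)
```

The learned values fall off by a factor of γ per remaining step to the goal, i.e. Q*(s, a) = γ^d · 100, where d is the distance left after taking action a.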
