
Artificial Intelligence

RCA-403
Syllabus
UNIT-I INTRODUCTION: - Introduction to Artificial Intelligence, Foundations and
History of Artificial Intelligence, Applications of Artificial Intelligence, Intelligent
Agents, Structure of Intelligent Agents, Computer Vision, Natural Language
Processing.
UNIT-II INTRODUCTION TO SEARCH: - Searching for solutions, uninformed
search strategies, informed search strategies, Local search algorithms and
optimization problems, Adversarial Search, Search for Games, Alpha - Beta
pruning.
UNIT-III KNOWLEDGE REPRESENTATION & REASONING: - Propositional
logic, Theory of first order logic, Inference in First order logic, Forward &
Backward chaining, Resolution, Probabilistic reasoning, Utility theory, Hidden
Markov Models (HMM), Bayesian Networks.
Syllabus
UNIT-IV MACHINE LEARNING: - Supervised and unsupervised learning,
Decision trees, Statistical learning models, learning with complete data - Naive
Bayes models, Learning with hidden data – EM algorithm, Reinforcement
learning.

UNIT-V PATTERN RECOGNITION: - Introduction, Design principles of


pattern recognition system, Statistical Pattern recognition, Parameter estimation
methods - Principal Component Analysis (PCA) and Linear Discriminant Analysis
(LDA), Classification Techniques – Nearest Neighbor (NN) Rule, Bayes Classifier,
Support Vector Machine (SVM), K – means clustering.
UNIT-II
Introduction to Search: Problem solving through AI

Problem solving is the method of reaching a desired goal or finding a solution to a
given situation. Solving a problem through AI involves defining the search space,
deciding the start state and goal state, and then finding a path from the start state
to the goal state through the search space.
The movement from the start state to the goal state is guided by a set of rules
specifically designed for that particular problem, called production rules. The
production rules are the valid moves defined by the problem.
Introduction to Search: Problem solving through AI

State Space representation of problem


• A State is a representation of problem elements at a given moment.
• A State Space is the set of all states reachable from the initial state.
− A state space forms a graph in which the nodes are states and the arcs
between nodes are actions.
− In the state space, a path is a sequence of states connected by a sequence of
actions.
• Problem: It is the question which is to be solved.
• Search is the process of finding a solution in the state space.
• A well-defined problem has three major components: an initial state, a final (goal)
state, and a state space including a transition (or path) function.
• A Solution of the problem is a path from the initial state to the goal state.
Introduction to Search: Problem solving through AI

Example: Vacuum Cleaning Agent


States: The state is determined by
both the agent location and the dirt
locations
Initial State: Any state can be
designated as the initial state.
Actions: Left, Right, and Suck.
Goal Test: This checks whether all the
squares are clean.
Path Cost: Each step costs 1, so the
path cost is the number of steps in
the path.
Introduction to Search: Problem solving through AI

Example: 8 Puzzle Problem


The eight-tile puzzle consists of a 3 × 3 square frame board which holds 8
movable tiles numbered 1 to 8. One square is empty, allowing the adjacent tiles to
be shifted. The objective of the puzzle is to find a sequence of tile movements that
leads from a starting configuration to a goal configuration.

Start:      Goal:
7 4 2       1 2 3
1 5 _       8 _ 4
6 3 8       7 6 5
Introduction to Search: Problem solving through AI
Example: 8 Puzzle Problem
States: A state specifies the location of each of the 8 tiles and the blank in one of
the nine squares.
Initial state: Any state can be designated as the initial state.
Goal: Many goal configurations are possible; one such configuration is shown in the figure.
Legal moves (or operators): They generate the legal states that result from trying the four
actions-
• Blank moves left
• Blank moves right
• Blank moves up
• Blank moves down
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
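The four blank moves above can be sketched in Python (a minimal illustration, not part of the slides; a state is assumed to be a tuple of nine entries with 0 standing for the blank):

```python
# Successor generation for the 8-puzzle: a state is a tuple of 9 entries,
# 0 standing for the blank square.

def successors(state):
    """Return the list of states reachable by one blank move."""
    moves = []
    i = state.index(0)                # position of the blank
    row, col = divmod(i, 3)
    # (row delta, col delta) for blank moves: up, down, left, right
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]   # swap blank with the adjacent tile
            moves.append(tuple(s))
    return moves
```

For instance, a state with the blank in the centre has four successors, while a state with the blank in a corner has only two.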
Introduction to Search: Problem solving through AI
Example: 8 Puzzle Problem (Search/ State Space Representation)
Introduction to Search: Problem solving through AI

Example: Water Jug Problem - (Search/ State Space Representation)


AI Toy Problem vs. Real-world Problem
Toy Problem:
• Concise & exact description
• Used for illustration purposes
• Used for performance comparison
• Examples: Water-Jug Problem, Missionaries & Cannibals Problem,
Cryptarithmetic Problem, 8-Puzzle Problem, 8-Queens Problem, Vacuum Cleaning

Real-World Problem:
• No single agreed-upon description
• People care about the solution
• Examples: Travelling Salesperson Problem, Robot Navigation,
Route Finding, Web Search
Problem Representation in AI
The most common methods of problem representation in AI are:-
1. State Space Representation
2. Problem Reduction
PRODUCTION SYSTEM
The production system is a model of computation that can be applied to implement
search algorithms and model human problem solving. Such problem-solving
knowledge can be packed up in the form of little quanta called productions. It
consists of:
• Set of rules (Condition → Action)
• Database
• Control strategy
• Rule applier
Problem Representation in AI
Water-Jug Problem:
Statement: We are given two jugs, a 4-litre one and a 3-litre one. Neither has any
measuring markers on it. There is a pump that can be used to fill the jugs with water.
How can we get exactly 2 litres of water into the 4-litre jug?

The state space for this problem can be defined as


{(x, y) where x = 0, 1, 2, 3, 4 and y = 0, 1, 2, 3}
‘x’ represents the number of litres of water in the 4-litre jug and ‘y’ represents the
number of litres of water in the 3-litre jug. The initial state is (0, 0), that is, no water
in either jug. The goal state is (2, n) for any value of ‘n’.

PRODUCTION SYSTEM
Problem Representation in AI
Assumptions:
• We can fill a jug from
the pump.
• We can pour water out
of a jug to the ground.
• We can pour water from
one jug to another.
• There is no measuring
device available.
The production rules for
“WATER-JUG” Problem
are formulated as →
PRODUCTION SYSTEM
Problem Representation in AI

One solution is applying the rules in the
sequence 2, 9, 2, 7, 5, 9. The solution is
presented in the following table:
Rule applied    Water in 4-litre jug    Water in 3-litre jug
Start state             0                        0
2                       0                        3
9                       3                        0
2                       3                        3
7                       4                        2
5                       0                        2
9                       2                        0
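The table above can be reproduced mechanically. The sketch below (an illustration; the rule ordering is mine, not the slides' numbering) encodes the six production rules and searches breadth-first for a state with 2 litres in the 4-litre jug:

```python
# Water-jug production rules searched breadth-first. A state is (x, y):
# litres in the 4-litre and 3-litre jugs respectively.
from collections import deque

def water_jug(start=(0, 0), goal_x=2):
    rules = [
        lambda x, y: (4, y),                                  # fill 4-litre jug
        lambda x, y: (x, 3),                                  # fill 3-litre jug
        lambda x, y: (0, y),                                  # empty 4-litre jug
        lambda x, y: (x, 0),                                  # empty 3-litre jug
        lambda x, y: (x + min(4 - x, y), y - min(4 - x, y)),  # pour 3-litre into 4-litre
        lambda x, y: (x - min(3 - y, x), y + min(3 - y, x)),  # pour 4-litre into 3-litre
    ]
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == goal_x:            # goal: 2 litres in the 4-litre jug
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for rule in rules:
            nxt = rule(*state)
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None
```

With this rule ordering the search returns a seven-state path, i.e. six rule applications, matching the six-step solution in the table above.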
PRODUCTION SYSTEM
Problem Representation in AI
Missionaries & Cannibals
Statement: In this problem, three missionaries and three cannibals must cross a river
using a boat which can carry at most two people, under the constraint that, on both
banks, the missionaries present cannot be outnumbered by cannibals. The boat
cannot cross the river by itself with no people on board.

ASSUMPTION: Consider that both the missionaries (M) and cannibals(C) are on
the same side of the river.

PRODUCTION SYSTEM
Problem Representation in AI
The production rules for
“Missionaries &
Cannibals” Problem are
formulated as →

PRODUCTION SYSTEM
Problem Representation in AI
One solution is applying
the rules in the sequence
(5,2,7,10,3,6,3,10,7,10,7).
The solution is presented
in the RHS table →

PRODUCTION SYSTEM
Cryptarithmetic Problem
A Constraint Satisfaction Problem where the digits of some numbers are
represented by letters (or symbols). Each letter represents a unique digit. The goal is
to find the digits such that a given mathematical equation is satisfied.
Constraints are:
• No two letters have the same value
• The sum of the digits must be as shown in the problem
• Each column generates at most one carry forward
Example 1:          Solution:
      T O            T = 2
  +   G O            O = 1
  ---------          G = 8
  O U T              U = 0
 (1) (0) (2)

Example 2:
    S E N D
  + M O R E
  -----------
  M O N E Y

Example 3:
    C R O S S
  + R O A D S
  -------------
  D A N G E R

Example 4:
      B A S E
  +   B A L L
  -----------
  G A M E S
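Example 1 can be checked by brute force — a small sketch (not from the slides) that tries every assignment of distinct digits to the letters T, O, G, U:

```python
# Brute-force check of the cryptarithmetic puzzle TO + GO = OUT by trying
# every assignment of distinct digits to the letters T, O, G, U.
from itertools import permutations

def solve_to_go_out():
    for t, o, g, u in permutations(range(10), 4):
        if t == 0 or g == 0 or o == 0:      # leading digits cannot be zero
            continue
        if (10 * t + o) + (10 * g + o) == 100 * o + 10 * u + t:
            return {'T': t, 'O': o, 'G': g, 'U': u}
    return None
```

It finds the unique assignment T = 2, O = 1, G = 8, U = 0, i.e. 21 + 81 = 102, matching the solution table in Example 1.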
AI & Search Process
Searching can be defined as a sequence of steps that transforms the initial state to
the goal state. The searching process in AI can be broadly classified into two major
types:
1. Brute Force Search or uninformed or blind search.
2. Heuristic Search or informed search.
Measuring Problem-Solving Performance
• Completeness
• Optimality
• Time Complexity
• Space Complexity
AI & Search Process
Uninformed or Blind or Brute Force Search: Uninformed search
algorithms have no additional information about states or the search space other than
how to traverse the tree, which is why they are also called blind search.

Various Uninformed search are:


• Breadth-first search
• Uniform-cost search
• Depth-first search and Depth-limited search
• Iterative deepening depth-first search
• Bidirectional search
AI & Search Process
Heuristic Search or Informed Search: An informed search algorithm uses additional
knowledge such as how far we are from the goal, the path cost, how to reach the goal
node, etc. This knowledge helps agents explore less of the search space and find
the goal node more efficiently.
Various Informed search are:
• Hill Climbing
• Best-First Search
• A* Algorithm
• AO* Algorithm
• Beam Search
• Constraint Satisfaction
• Min-Max Search & Alpha - Beta pruning
Uninformed Search Techniques
Breadth First Search (BFS)

• Breadth First Search (BFS) searches breadth-wise in the problem space.


• BFS was invented in the late 1950s by E. F. Moore, who used it to find the shortest
path out of a maze.
• Breadth-First search is like traversing a tree where each node is a state which
may be a potential candidate for solution.
• It expands nodes from the root of the tree and then generates one level of the
tree at a time until a solution is found.
• It is very easily implemented by maintaining a queue (FIFO) of nodes
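The FIFO-queue implementation can be sketched as follows (the adjacency list here is illustrative, not the slides' figure):

```python
# BFS over an adjacency-list graph using a FIFO queue of paths.
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])          # queue of paths, oldest first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'E': ['G']}
# bfs(graph, 'A', 'G') expands level by level and returns ['A', 'B', 'E', 'G']
```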
Uninformed Search Techniques
Breadth First Search (BFS): Consider the following State Space Search A
B
C
NODE_LIST
D
E
F
G
H
BFS traverse is ABCDEFGHIJ I
(Explore nodes). As J is the GOAL J
node
Uninformed Search Techniques
Breadth First Search (BFS):

8-Puzzle Problem
Uninformed Search Techniques
Breadth First Search (BFS)

Advantages:
• Finds the path of minimal length to the goal - if there is more than one solution,
BFS finds the one that requires the fewest steps.
• If there is a solution, BFS will definitely find it.
Disadvantages:
• The amount of memory required is proportional to the number of nodes stored.
• If the solution is far from the root, breadth-first search consumes a
lot of time.
Exercise 1: Consider the following State
Space Search. Here node ‘A’ is the source or
start or initial node and node ‘G’ is the goal
node. Use a breadth-first search to find G in
the following search tree
Exercise 2:
Uninformed Search Techniques
Performance Measure of Breadth First Search (BFS)

• Completeness: It is easy to see that breadth-first search is complete: it visits
all levels, so given that the depth d of the shallowest solution is finite, it will find a solution.
• Optimality: Breadth-first search is optimal only when all actions have the same
cost (because it always finds the shallowest goal node first).
• Space complexity: O(b^d)
• Time Complexity: O(b^d)

Note: b is the branching factor and d is the depth of the shallowest solution


Uninformed Search Techniques
Uniform Cost Search (UCS)

• Uniform-cost search expands nodes in order of their cost from the root.
• Uniform-cost is guided by path cost rather than path length like in BFS
• The algorithm starts by expanding the root, then expanding the node with the
lowest cost from the root; the search continues in this manner for all nodes.
The nodes are stored in a priority queue.
• Uniform Cost Search can also be used as Breadth First Search if all the edges
are given a cost of 1.
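The priority-queue implementation can be sketched with `heapq`; the edge costs below are reconstructed from the worked trace that follows:

```python
# Uniform-cost search: a priority queue ordered by path cost from the root.
import heapq

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]     # (path cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest path first
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, child, path + [child]))
    return None

graph = {
    'S': [('A', 1), ('G', 12)],
    'A': [('B', 3), ('C', 1)],
    'B': [('D', 3)],
    'C': [('D', 1), ('G', 2)],
    'D': [('G', 3)],
}
# ucs(graph, 'S', 'G') returns (4, ['S', 'A', 'C', 'G'])
```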
Uninformed Search Techniques
Uniform Cost Search (UCS): Consider the following State Space Search

• Initialization: {[S, 0]}


• {[S→A , 1], [S→G, 12]}
• {[S→A→C, 2], [S→A→B, 4], [S→G, 12]}
• {[S→A→C→D, 3], [S→A→B, 4], [S→A→C→G, 4 ],
[S→G, 12]}
• {[S→A→B, 4], [S→A→C→G, 4], [S→A→C→D→G, 6],
[S→G, 12]}
• {[S→A→C→G, 4], [S→A→C→D→G, 6],
[S→A→B→D, 7], [S→G, 12]}

Gives the final output as S→A→C→G.


Final Path is SACG
Exercise 1: Consider the state
space graph given. Using
Uniform Cost Search Algorithm
find out the minimum cost to
reach the goal node (G).
Exercise 2: Consider
the state space graph
given. Using Uniform
Cost Search Algorithm
find out the minimum cost
to reach to any goal node
(G1, G2 or G3).
Uninformed Search Techniques
Uniform Cost Search (UCS)

Advantages:
• Guaranteed to find the least-cost solution. Uniform cost search is optimal
because at every state the path with the least cost is chosen.
Disadvantages:
• Exponential storage required.
• Open list must be kept sorted (as a priority queue).
• It does not care about the number of steps involved in searching and is only
concerned with path cost, due to which this algorithm may get stuck in an
infinite loop.
Uninformed Search Techniques
Performance Measure of Uniform Cost Search (UCS)
• Completeness: UCS is complete if the cost of each step
exceeds some small positive constant ε; this prevents infinite loops.
• Optimality: UCS is always optimal in the sense that the node it
expands next is always the node with the least path cost.
• Space complexity: UCS is guided by path cost rather than path length, so it is
hard to determine its complexity in terms of b and d. If we take C to be
the cost of the optimal solution, and every action costs at least ε, then the
worst case is O(b^(C/ε)).
• Time Complexity: O(b^(C/ε))

Note: C is the cost of the optimal solution and every action costs at least ε


Uninformed Search Techniques
Depth First Search (DFS)

• DFS progresses by expanding the first child node of the search tree that
appears, going deeper and deeper until a goal node is found or until it
hits a node that has no children. The search then backtracks, returning to the
most recent node it hasn’t finished exploring. DFS was investigated in the 19th
century by the French mathematician Charles Pierre Trémaux as a strategy for
solving mazes.
• DFS is implemented using a STACK (Last In, First Out).
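The stack-based implementation can be sketched as follows (the adjacency list is illustrative, not the slides' figure):

```python
# DFS using an explicit stack (LIFO) of paths.
def dfs(graph, start, goal):
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()               # LIFO: deepest path first
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push children in reverse so the leftmost child is expanded first
        for child in reversed(graph.get(node, [])):
            if child not in visited:
                stack.append(path + [child])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G', 'M'], 'G': ['F'], 'F': ['L']}
```

On this graph, `dfs(graph, 'A', 'L')` dives down the A branch, backtracks from the dead end at D, and reaches the goal via `['A', 'C', 'G', 'F', 'L']`.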
Uninformed Search Techniques
Depth First Search (DFS): Consider the following state space search, with root A
and goal L. Following the deepest unexplored branch first, the DFS traversal is
A C G M F L (explored nodes); the search stops at L, the GOAL node.
Uninformed Search Techniques
Depth First Search (DFS)

Advantages:
• DFS consumes very little memory space
Disadvantages:
• There is no guarantee of finding the goal node
Uninformed Search Techniques
Performance Measure of Depth First Search(DFS)

• Completeness: DFS is not complete
• Optimality: DFS is not optimal
• Time complexity: O(b^m)
• Space Complexity: O(bm)

Note: b is branching factor and m is the maximum depth


Uninformed Search Techniques
Depth Limited Search (DLS)
The unbounded-tree problem that appears in DFS can be fixed by imposing a limit on
the depth that DFS can reach; this limit, called the depth limit l, solves the
infinite-path problem. Consider the following state graph with depth limit l = 2 and
goal node J.
Uninformed Search Techniques
Performance Measure of Depth Limited Search (DLS)
• Completeness: The depth limit introduces another problem: when we choose
l < d, DLS will never reach a goal, so in this case DLS is not complete.
• Optimality: One can view DFS as a special case of DLS: DFS
is DLS with l = infinity. DLS is not optimal even if l > d.
• Space complexity: O(bl)
• Time Complexity: O(b^l)

Note: b is branching factor and l is the limited depth (l < d) as d is depth of tree
Uninformed Search Techniques
Depth-First Iterative Deepening Search

• It is a search strategy resulting when you combine BFS and DFS, thus
combining the advantages of each strategy, taking the completeness and
optimality of BFS and the modest memory requirements of DFS.
• IDS works by looking for the best search depth d: starting with depth
limit 0, it runs a depth-limited DFS, and if the search fails it increases the depth
limit by 1 and tries again with depth limit 1, and so on – first l = 0, then 1, then 2,
and so on – until a depth d is reached where a goal is found.
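The restart loop can be sketched as a recursive depth-limited DFS driven by an outer loop over increasing limits (the graph is illustrative, not the slides' figure):

```python
# Iterative deepening: depth-limited DFS restarted with limits 0, 1, 2, ...
def dls(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:                       # depth limit reached: cut off
        return None
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def ids(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):
        result = dls(graph, start, goal, limit)
        if result is not None:
            return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'F': ['G']}
# ids(graph, 'A', 'G') fails at limits 0-2 and finds ['A', 'C', 'F', 'G'] at limit 3
```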
Uninformed Search Techniques
Depth-First Iterative Deepening Search Example:
Uninformed Search Techniques
Performance Measure of Depth-First Iterative Deepening Search (IDS)

• Completeness: IDS, like BFS, is complete when the branching factor b is
finite.
• Optimality: IDS, also like BFS, is optimal when the steps are of the same cost.
• Space complexity: O(bd)
• Time Complexity: O(b^d)

Note: b is branching factor and d is depth of tree


Uninformed Search Techniques
Bi-directional Search
Bidirectional Search, as the name implies, searches in two directions at the same
time: one search proceeds forward from the initial state and the other backward from the goal.

We can consider bidirectional approach when-


• Both initial and goal states are unique and completely defined.
• The branching factor is exactly the same in both directions.
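The two-frontier idea can be sketched with two BFS queues that stop as soon as they meet (the graph is illustrative and assumed undirected, which is what makes the backward search possible):

```python
# Bidirectional search: BFS from the start and from the goal at once,
# returning the path length where the two frontiers first intersect.
from collections import deque

def bidirectional(graph, start, goal):
    if start == goal:
        return 0
    fwd, bwd = {start: 0}, {goal: 0}              # node -> depth reached
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        # advance each frontier by one node per round
        for q, seen, other in ((qf, fwd, bwd), (qb, bwd, fwd)):
            if not q:
                continue
            node = q.popleft()
            for nb in graph.get(node, []):
                if nb in other:                   # frontiers meet
                    return seen[node] + 1 + other[nb]
                if nb not in seen:
                    seen[nb] = seen[node] + 1
                    q.append(nb)
    return None

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
# bidirectional(graph, 0, 4) returns 4: the frontiers meet in the middle
```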
Uninformed Search Techniques
Bi-directional Search: Consider the following state graph, suppose we need to find
if there exists a path from vertex 0 to vertex 14.
Uninformed Search Techniques
Bi-directional Search
Advantages:
• The merit of bidirectional search is its speed. The sum of the time taken by the two
searches (forward and backward) is much less than the O(b^d) complexity of a single search.
• It requires less memory.
Disadvantages:
• Implementation of bidirectional search algorithm is difficult because additional
logic must be included to decide which search tree to extend at each step.
• The goal state must be known in advance.
• The algorithm must be very efficient at finding the intersection of the two search
trees.
• It is not always possible to search backward through possible states.
Uninformed Search Techniques
Performance Measure of Bi-directional Search

• Completeness: Bidirectional search is complete when we use BFS in both


searches, the search that starts from the initial state and the other from the goal
state.
• Optimality: Bidirectional search is optimal when BFS is used and paths are of
a uniform cost – all steps of the same cost.
• Space complexity: O(b^(d/2))
• Time Complexity: O(b^(d/2))

Note: b is branching factor and d is depth of tree


Uninformed Search Techniques
Comparison of Uninformed Search
• b = Branching factor
• d = Depth of the shallowest solution
• m = Maximum depth of the search tree
• l = Depth Limit
Heuristic (Informed) Search Techniques
Generate and Test Search
Generate and Test search is guaranteed to find a
solution if done systematically and a
solution exists. It is the simplest heuristic search
technique, using DFS with backtracking.
Algorithm:
Step 1: Generate a possible solution
Step 2: Test & see if this is the expected
solution
Step 3: If the solution has been found QUIT
else GOTO step 1.
Heuristic (Informed) Search Techniques
Hill Climbing Search:
• Hill climbing algorithm is a local search algorithm which continuously moves in
the direction of increasing elevation/value to find the peak of the mountain or best
solution to the problem. It terminates when it reaches a peak value where no
neighbor has a higher value. Hill climbing does not look ahead beyond the
immediate neighbors of the current state
• Hill Climbing is a variant of the Generate and Test method. The Generate and Test
method produces feedback which helps to decide which direction to move in the
search space.
• Greedy approach: Hill climbing is sometimes called greedy local search because it
grabs a good neighbor state without thinking ahead about where to go next.
• No backtracking: It does not backtrack the search space, as it does not remember
the previous states.
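The loop above can be sketched in its steepest-ascent flavour on a toy one-dimensional objective (a minimal illustration; the objective and neighbour function are mine, not the slides'):

```python
# Steepest-ascent hill climbing: move to the best neighbour while it
# strictly improves on the current state; stop at a peak (or plateau).
def hill_climb(objective, start, neighbours):
    current = start
    while True:
        best = max(neighbours(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current               # no neighbour is strictly better
        current = best

# Toy objective with a single peak at x = 3, over integer states
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
# hill_climb(f, 0, step) climbs 0 -> 1 -> 2 -> 3 and returns 3
```

Note that with no backtracking, the same call started near a local maximum of a bumpier objective would simply stop there, which is exactly the weakness the slides describe next.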
Heuristic (Informed) Search Techniques
Hill Climbing Search:
Example our aim is to find
a path from S to M associate
heuristics with every node.
h(n) = heuristic function as
its evaluation function
Heuristic (Informed) Search Techniques
Hill Climbing Search:
Advantages:
• Useful for AI problems where knowledge of the path is not important; since only
the solution matters, the path to it is not recorded.
• It is also helpful to solve pure optimization problems where the objective is to
find the best state according to the objective function.
Disadvantages:
This technique works, but since it uses only local information it can be
fooled. The algorithm doesn’t maintain a search tree, so the current node data
structure need only record the state and its objective function value. It assumes
that local improvement will lead to global improvement.
Heuristic (Informed) Search Techniques
Hill Climbing Search:
Disadvantages:
Local maximum: A state which is better than its neighboring states, although
there exists a state which is better than it (the global maximum). The local maximum
is better than its neighbors because the value of the objective function there is higher
than theirs.
Plateau: A flat region of the state space where neighboring states have the same
value.
Ridge: A region which is higher than its neighbors but which itself has a slope. It is a
special kind of local maximum.
Heuristic (Informed) Search Techniques
Hill Climbing Search: Disadvantages
Heuristic (Informed) Search Techniques
Hill Climbing Search: Disadvantages (Local Maxima)
Heuristic (Informed) Search Techniques
Types of Hill Climbing Search:
• Simple Hill Climbing
• Steepest-Ascent Hill Climbing
• Stochastic Hill Climbing
• Simulated Annealing Hill Climbing
Simple hill climbing is the simplest way to implement a hill climbing algorithm. It
evaluates only one neighbor node state at a time and selects the first one which
improves the current cost, setting it as the current state. It checks only one
successor state, and if that is better than the current state it moves; otherwise it
stays in the same state. This algorithm has the following features:
• Less time consuming
• Less optimal solution, and the solution is not guaranteed
Steepest-Ascent algorithm
• Evaluate the initial state. Stop and return success if it is a goal state. If not,
set the initial state as the current state.
• Repeat the following until a solution is found or the current state stays the same:
• Let the "best state" initially be the current state.
• Apply each applicable operator to the current state and evaluate each new
state it produces.
• Stop and return success if a new state is a goal state.
• If a new state is superior to the best state, make it the best state; otherwise,
keep going through the remaining new states.
• Set the best state as the current state.
• Exit.
Stochastic hill climbing: Unlike the methods previously described, the agent does not
examine the values of all the nearby nodes. The agent chooses a neighboring node
at random, moves to that node, and then determines whether to continue this path
based on the heuristic value of that node.
1. Evaluate the initial state. If it is a goal state then stop and return success. Otherwise,
make the initial state the current state.
2. Repeat these steps until a solution is found or the current state does not change.
a. Select a state that has not been yet applied to the current state.
b. Apply the successor function to the current state and generate all the neighbor
states.
c. Among the generated neighbor states which are better than the current state
choose a state randomly (or based on some probability function).
d. If the chosen state is the goal state, then return success; else make it the current
state and repeat step 2.
3. Exit from the function.
Simulated annealing algorithm
• A hill-climbing algorithm that never makes “downhill” moves toward states with lower
value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local
maximum.
• In contrast, a purely random walk—that is, moving to a successor chosen uniformly
at random from the set of successors—is complete but extremely inefficient.
• Therefore, it seems reasonable to combine hill climbing with a random walk in
some way that yields both efficiency and completeness.
• Idea: escape local maxima by allowing some “bad” moves but gradually decrease their
size and frequency.
• The simulated annealing algorithm is a version of stochastic hill climbing where
some downhill moves are allowed.
• Annealing: the process of gradually cooling metal to allow it to form stronger
crystalline structures
• Simulated annealing algorithm: gradually “cool” the search from a random walk
to first-choice hill climbing.
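The cooling idea can be sketched on the same kind of toy one-dimensional objective (an illustration, not the slides' listing; the objective, neighbour function, and cooling schedule are all assumptions):

```python
# Simulated annealing: downhill moves are accepted with probability
# e^(delta/T); the temperature T is gradually lowered ("cooled") so bad
# moves become rarer and the walk settles on a peak.
import math
import random

def simulated_annealing(objective, start, neighbours,
                        t0=10.0, cooling=0.95, steps=500):
    random.seed(0)                       # fixed seed, for a repeatable demo
    current = best = start
    t = t0
    for _ in range(steps):
        nxt = random.choice(neighbours(current))
        delta = objective(nxt) - objective(current)
        # uphill moves are always accepted; downhill with prob e^(delta/T)
        if delta > 0 or random.random() < math.exp(delta / max(t, 1e-9)):
            current = nxt
            if objective(current) > objective(best):
                best = current           # remember the best state visited
        t *= cooling                     # cool the temperature
    return best

# Toy objective with a single peak at x = 3, over integer states
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
```

Early on, with T large, almost every move is accepted and the walk is nearly random; as T shrinks, the acceptance test approaches pure hill climbing.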
Heuristic (Informed) Search Techniques
Best First Search:
• In the Best-First Search, “best-first” refers to the method of exploring the node
with the best “score” first.
• The search starts from the root node; the node to be expanded next is selected on
the basis of an evaluation function f(n). The evaluation function assigns a score to
each candidate node. The node having the lowest value of f(n) is selected first, as
this indicates that the goal is nearest from that node.
• It is implemented using a priority queue; the highest priority is given to the node
having the least f(n) value.
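The priority-queue scheme can be sketched in its greedy form, where the score is the heuristic h(n) alone (the graph and h values below are illustrative, not the slides' figures):

```python
# Greedy best-first search: a priority queue ordered by the heuristic h(n).
import heapq

def best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)   # lowest h(n) first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            if child not in visited:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': ['G'], 'D': ['G']}
h = {'S': 6, 'A': 2, 'B': 4, 'C': 1, 'D': 3, 'G': 0}
# best_first(graph, h, 'S', 'G') follows the smallest-h child each time
# and returns ['S', 'A', 'C', 'G']
```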
8 Puzzle Problem – A Solution Case
Example 1: Consider the given state space graph. Find the path and path cost using
Best First Search
Node expanded | Child nodes generated        | Node chosen
S             | {S→A, 4}, {S→B, 1}           | {S→B, 1}
B             | {S→B→E, 3}, {S→B→F, 4}       | {S→B→E, 3}
E             | {S→B→E→G1, 6}, {S→B→E→H, 7}  | {S→A, 4}
A             | {S→A→C, 5}, {S→A→D, 6}       | {S→B→F, 4}
F             | {S→B→F→G1, 5}, {S→B→F→I, 6}  | {S→B→F→G1, 5}

The available-node pool at each step is the union of all generated but
not-yet-expanded paths, and the path with the lowest f(n) value among them is
chosen next. The goal G1 is reached by the path S→B→F→G1 with f = 5.
Heuristic (Informed) Search Techniques
Example 2

Path is SACBHI
Heuristic (Informed) Search Techniques
Example 3: Consider the given state space graph. Find the path and path cost
using Best First Search
Heuristic (Informed) Search Techniques
Example 4: Find the path using Best First Search!!!
Heuristic (Informed) Search Techniques
Example 5: Find the path using Best First Search!!!
Heuristic (Informed) Search Techniques
Best First Search:
Advantages:
• It is more efficient than BFS and DFS.
• The time complexity of Best-first search is much less than that of Breadth-first search.
• Best-first search allows us to switch between paths, gaining the
benefits of both breadth-first and depth-first search: depth-first is
good because a solution can be found without computing all nodes, and
breadth-first is good because it does not get trapped in dead ends.
Disadvantages:
• Sometimes it covers more distance than necessary.
Heuristic (Informed) Search Techniques
A* Algorithm
• Starting from a specific starting node of a graph, it aims to find a path to the given
goal node having the smallest cost
• A* Algorithm is the specialization of Best First Search in which the cost associated
with a node is f(n) = g(n) + h(n), where g(n) is the cost of the path from the initial
state to node n and h(n) is the heuristic estimate or the cost or a path from node n to
a goal. Thus, f(n) estimates the lowest total cost of any solution path going through
node n. At each point, a node with lowest f value is chosen for expansion.
• The A* algorithm guarantees an optimal path to a goal if the heuristic function h(n)
is admissible, meaning it never overestimates the actual cost.
• The * signifies that the algorithm is admissible, as it guarantees an optimal
solution.
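The f(n) = g(n) + h(n) bookkeeping can be sketched as follows (the graph and the admissible h values below are illustrative, not one of the slides' figures):

```python
# A* search: a priority queue ordered by f(n) = g(n) + h(n), where g(n) is
# the cost so far and h(n) an admissible estimate of the cost to the goal.
import heapq

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]    # (f, g, node, path)
    best_g = {start: 0}                           # cheapest g found per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(child, float('inf')):
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h[child], g2, child, path + [child]))
    return None

graph = {
    'S': [('A', 1), ('B', 4)],
    'A': [('B', 2), ('C', 5), ('G', 12)],
    'B': [('C', 2)],
    'C': [('G', 3)],
}
h = {'S': 7, 'A': 6, 'B': 4, 'C': 2, 'G': 0}      # never overestimates
# a_star(graph, h, 'S', 'G') returns (8, ['S', 'A', 'B', 'C', 'G'])
```

Because h here is admissible, the first time the goal is popped its path cost of 8 is guaranteed to be optimal.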
8 Puzzle Problem:
Example of
Heuristic Value/
Function
Heuristic (Informed) Search Techniques
A* Algorithm

Where,
• f(n) = evaluation function.
• g(n) = actual cost of current node from start node.
• h(n) = heuristic value i.e., estimated cost of current node from goal node.
Heuristic (Informed) Search Techniques
A* Algorithm: Consider the following graph, the numbers written on edges
represent the distance between the nodes and the numbers written on nodes
represent the heuristic value.

Find the most cost-effective path to reach from start state A to final state J using A*
Algorithm.
Heuristic (Informed) Search Techniques
A* Algorithm: Consider the following graph, the numbers written on edges
represent the distance between the nodes and the numbers written on nodes
represent the heuristic value.

Find the most cost-effective path to reach from start state S to final state G using
A* Algorithm.
Example 1
Example 2: Apply the steps of A* algorithm to find the shortest path from A to Z
Example 3: Apply the steps of A* algorithm to find the shortest path from S to F or M
Heuristic (Informed) Search Techniques
A* Algorithm:
Advantages:
• It is complete and optimal.
• It is the best algorithm, there is no other optimal algorithm guaranteed to
expand fewer nodes than A*
Disadvantages:
• Although among the best path-finding algorithms, the A* search algorithm
does not always produce the shortest path, because it heavily relies on
heuristics.
Admissibility of A* Algorithm

The evaluation function in A* looks like this:

f(n) = g(n) + h(n)


f(n) = Actual cost + Estimated cost
Here,
n = current node.
f(n) = evaluation function.
g(n) = the cost from the initial node to the current node.
h(n) = estimated cost from the current node to the goal state.
Conditions (h*(n) is the true cost from n to the goal):
h(n) ≤ h*(n) ∴ Underestimation (admissible)
h(n) ≥ h*(n) ∴ Overestimation
Heuristic (Informed) Search Techniques
The AO* Algorithm is based on problem decomposition, i.e. breaking a goal down into
simpler sub-goals. This decomposition generates arcs called AND arcs,
and several arcs may emerge from a single node, called OR arcs; that is why it is
called an AND-OR Graph. It is also called a problem-reduction search algorithm.
Nodes in the graph point to a number of successor nodes, all of which must
be solved in order for the arc to point to a solution up to its parent nodes. Each
node in the graph also has a heuristic value associated with it.
f(n) = g(n)+h(n)
It is an efficient method to explore a solution path.
Example: Consider the AO graph given; which node does the AO* algorithm
expand next?
Heuristic (Informed) Search Techniques

AO* Algorithm Example 1: Consider the following AO graph, which of the


following node(s), identified by their heuristic value, could the algorithm
expand/refine next?
Heuristic (Informed) Search Techniques

AO* Algorithm Example 2:


Graph (RHS) represents an AO
graph with the values labeled in.
The value in a single line circle is
an estimate of cost. The value in a
double lined circle, a SOLVED
node, is the actual value. Each
edge is labeled with a different
cost. What is the value of the root
node for the optimal solution for
the AO graph?
Example 3 (UGC NET CSE, January 2017): Consider the following AO graph.
Which is the best node for the AO* algorithm to expand next?
1. A
2. B
3. C
4. B and C
Note: AO* does not explore all solution paths once it has found a solution.
AO* Algorithm:

Advantages:
• It is an efficient method to explore a solution path.
• A solution is guaranteed when one exists.
Disadvantages:
• In graphs containing unsolvable nodes it may fail to find the optimal path.
Also, AO* does not explore alternative solution paths once it has found a solution.
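The AND-OR cost computation that AO* relies on can be sketched as follows. The node names, heuristic values, and the uniform edge cost of 1 are hypothetical, chosen only for illustration (they are not the figures from the slides):

```python
def ao_cost(node, heuristic, arcs, edge_cost=1):
    """Cost of solving `node` in an AND-OR graph.

    `arcs[node]` is a list of arcs; each arc is a group of children that must
    ALL be solved (an AND arc). A single-child group is an ordinary OR option.
    The node's cost is the minimum over its arcs of sum(edge + child cost).
    """
    if node not in arcs:                 # leaf: use its heuristic estimate
        return heuristic[node]
    return min(
        sum(edge_cost + ao_cost(child, heuristic, arcs) for child in group)
        for group in arcs[node]
    )

# Hypothetical graph: A can be solved via B alone (OR), or via C AND D together.
heuristic = {"B": 5, "C": 2, "D": 3}
arcs = {"A": [["B"], ["C", "D"]]}
# OR option via B: 1 + 5 = 6; AND option via C, D: (1 + 2) + (1 + 3) = 7.
print(ao_cost("A", heuristic, arcs))   # -> 6
```

AO* repeatedly performs exactly this minimisation at the root, then expands a node on the currently cheapest partial solution graph and revises the estimates upward through its ancestors.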
Adversarial Search
It relates to competitive environments in which the agents' goals are in
conflict, giving rise to adversarial search.
There are two methods for game playing:
1. Min-Max Procedure
2. Alpha-Beta Pruning (Cut-offs)
Min-Max Search: Min-Max is a simple strategy for two-player games.
Here, one player is called the "maximizer" and the other the "minimizer".
The maximizer tries to maximize its score, while the minimizer tries to minimize
the maximizer's score. The minimax algorithm performs a depth-first search
to explore the complete game tree.
It is also assumed that the maximizer makes the first move (not essential, as
the minimizer can also move first). The maximizer always tries to move to the
position where the static evaluation function value is the maximum.
Min-Max Search: Consider the following tree graph example

The maximizer, being the player to make the first move, will move to node D
because the static evaluation function value there is maximum. The same figure
shows that if the minimizer had to make the first move, it would go to node B,
because the static evaluation function value at that node is advantageous to it.
Once the static evaluation function has been applied at the leaf nodes, backing
up values can begin: first we compute the backed-up values at the parents of the leaves.
Working of Min-Max Algorithm
Step 1: In the first step, the algorithm
generates the entire game tree and applies the
utility function to get the utility values for the
terminal states. In the given tree diagram,
let A be the initial state of the tree.
Suppose the maximizer takes the first turn, with
worst-case initial value -∞, and the
minimizer takes the next turn, with
worst-case initial value +∞.
Step 2: Now we first find the utility values
for the maximizer. Its initial value is -∞, so
we compare each terminal-state value with it,
keep the larger, and thereby determine each
higher node's value as the maximum among
all of its children:
For node D: max(-1, 4) = 4
For node E: max(2, 6) = 6
For node F: max(-3, -5) = -3
For node G: max(0, 7) = 7

Step 3: In the next step it is the minimizer's turn, so it compares the children's
values with +∞, keeps the smaller, and finds the third-layer node values:
• For node B: min(4, 6) = 4
• For node C: min(-3, 7) = -3
Step 4: Now it is the maximizer's turn again, and it chooses the maximum of all
child values as the value of the root node. In this game tree there are only 4
layers, so we reach the root immediately, but in real games there will be many
more layers.
For node A: max(4, -3) = 4
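The four steps above can be sketched directly in Python, using the same tree as the example (A is MAX, B and C are MIN, and D, E, F, G are MAX, with the leaf utilities from Step 2):

```python
def minimax(node, maximizing):
    """Back up the value of `node` by exhaustive depth-first search."""
    if isinstance(node, (int, float)):   # terminal state: its utility value
        return node
    if maximizing:
        return max(minimax(child, False) for child in node)
    return min(minimax(child, True) for child in node)

# Game tree from the example: A (MAX) -> B, C (MIN) -> D, E, F, G (MAX) -> leaves.
game_tree = [
    [[-1, 4], [2, 6]],    # B -> D, E
    [[-3, -5], [0, 7]],   # C -> F, G
]
print(minimax(game_tree, True))   # -> 4, matching Step 4 at the root
```

Note that minimax visits every leaf; alpha-beta pruning, discussed next, avoids some of this work while returning the same root value.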
Adversarial Search - Alpha-Beta Pruning
The method we look at next is called alpha-beta pruning. If we apply alpha-beta
pruning to a standard minimax algorithm, it returns the same move as the standard
algorithm, but it removes (prunes) all the nodes that cannot possibly affect the
final decision.
Alpha-beta pruning is a search technique that seeks to decrease the number
of nodes evaluated by the minimax algorithm in its search tree.
• α is the best (highest) value found so far for the Max player
• β is the best (lowest) value found so far for the Min player
Each node keeps its α-β values, and pruning is done as follows:
• At a Min node, if its β ≤ the α of a Max ancestor, PRUNE.
• At a Max node, if its α ≥ the β of a Min ancestor, PRUNE.
Working of Alpha-Beta Pruning

Step 1: In the first step, the Max player
starts from node A, where α = -∞ and
β = +∞. These values of alpha and beta
are passed down to node B, where again
α = -∞ and β = +∞, and node B passes
the same values to its child D.
Step 2: At node D, the value of α is
calculated, as it is Max's turn. The value
of α is compared first with 2 and then
with 3; max(2, 3) = 3 becomes the value
of α at node D, and the node value is also 3.
Step 3: The algorithm now backtracks to
node B, where the value of β changes,
as it is Min's turn.
β = +∞ is compared with the
available successor value:
min(+∞, 3) = 3, hence at node B
now α = -∞ and β = 3.
In the next step, the algorithm traverses
the next successor of node B, which
is node E, and the values α = -∞
and β = 3 are passed down.

Step 4: At node E, Max takes its turn, and the value of alpha changes. The
current value of alpha is compared with 5: max(-∞, 5) = 5, hence at node E α = 5
and β = 3. Since α ≥ β, the right successor of E is pruned; the algorithm does not
traverse it, and the value at node E is 5.
Step 5: Next, the algorithm again backtracks
the tree, from node B to node A. At node A,
the value of alpha is updated to the maximum
available value, 3, since max(-∞, 3) = 3, with
β = +∞. These two values are now passed to
the right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same
values are passed on to node F.
Step 6: At node F, the value of α is again
compared with the left child, 0:
max(3, 0) = 3; and then with the right
child, 1: max(3, 1) = 3. α remains 3,
but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node
C. At C, α = 3 and β = +∞; here the value of
beta changes: it is compared with 1, so
min(+∞, 1) = 1. Now at C, α = 3 and β = 1,
which again satisfies the condition α ≥ β, so the
next child of C, which is G, is pruned, and the
algorithm does not evaluate the entire subtree of G.
Step 8: C now returns the value 1 to A, where
the best value for A is max(3, 1) = 3. The final
game tree shows the nodes that were evaluated
and those that were never evaluated. Hence the
optimal value for the maximizer is 3 for this
example.
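The walk-through above can be sketched as follows. The tree shape and the visited leaf values (2, 3, 5, 0, 1) match the example; the values of the leaves the example prunes are unknown from the figure, so placeholders (99) stand in for them:

```python
from math import inf

visited = []   # leaves actually evaluated, to show the effect of pruning

def alphabeta(node, maximizing, alpha=-inf, beta=inf):
    if isinstance(node, (int, float)):       # terminal state
        visited.append(node)
        return node
    value = -inf if maximizing else inf
    for child in node:
        if maximizing:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
        else:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
        if alpha >= beta:                    # cut-off: prune remaining children
            break
    return value

# A (MAX) -> B, C (MIN) -> D, E, F, G (MAX) -> leaves.
# 99 marks leaves the example prunes (their true values are unknown).
tree = [[[2, 3], [5, 99]],    # B -> D, E
        [[0, 1], [99, 99]]]   # C -> F, G
print(alphabeta(tree, True))  # -> 3
print(visited)                # -> [2, 3, 5, 0, 1]  (pruned leaves never visited)
```

Exactly as in Steps 4 and 7, the right leaf of E and the whole subtree of G are cut off, yet the root value 3 agrees with plain minimax.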
Consider the following state space graph: what is the value of the root?
The optimal value of the maximizer will be 3.
Example 1: What nodes could have been pruned from the search
using alpha-beta pruning?
Example 2: What nodes could have been pruned from the search
using alpha-beta pruning?
Example 3: Consider the following state space graph.
1. What is the value at the root, using minimax alone?
2. What nodes could have been pruned from the search using alpha-beta pruning?
Show the values of alpha and beta.
End of UNIT-II
