Figure 52: A generic knowledge-based agent. (Russell and Norvig, 2003, Figure 7.1)
Database (e.g. relational) contains explicit facts, and answers queries by fetching and
combining the required facts into results.
E.g. “list the names of all those students who have registered to both courses TEK
and LAI ”.
Knowledge base also contains sentences written in some knowledge representation language, and answers queries by inferring results from these facts and sentences.
• The idea is to give our agent in Figure 52 a (much) more flexible and powerful
kind of memory than e.g. a database.
• We choose the two logics discussed here as our two languages.
• Tell (KB , φ) adds this sentence φ into the knowledge base KB .
• Ask (KB , ψ) asks whether or not this sentence ψ follows from the sentences φ
in KB .
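In code, the Tell/Ask interface can be sketched roughly as follows. This is a minimal illustration, assuming sentences are represented as Python predicates over possible worlds; the names KnowledgeBase, tell and ask are our own, not from any particular library, and ask here is a naive model-checking procedure.

```python
from itertools import product

class KnowledgeBase:
    """A minimal Tell/Ask sketch: sentences are functions world -> bool,
    where a world assigns True/False to every proposition symbol."""

    def __init__(self, symbols):
        self.symbols = list(symbols)   # all proposition symbols considered
        self.sentences = []            # the told sentences (the phi's)

    def tell(self, sentence):
        """Tell(KB, phi): add the sentence phi into the knowledge base."""
        self.sentences.append(sentence)

    def ask(self, query):
        """Ask(KB, psi): is psi true in every world that satisfies KB?"""
        for values in product([False, True], repeat=len(self.symbols)):
            world = dict(zip(self.symbols, values))
            if all(s(world) for s in self.sentences) and not query(world):
                return False   # found a model of KB where psi fails
        return True

kb = KnowledgeBase(["P", "Q"])
kb.tell(lambda w: w["P"])                    # P
kb.tell(lambda w: (not w["P"]) or w["Q"])    # P => Q
print(kb.ask(lambda w: w["Q"]))              # True: KB |= Q
```

Note that ask enumerates every assignment, so this sketch is only workable for small symbol sets; it exists to make the interface concrete, not to be efficient.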
• Its PEAS description is:
Performance measure:
– +1000 points for picking up the gold — this is the goal of the agent
– −1000 points for dying = entering a square containing a pit or a live
Wumpus monster
– −1 point for each action taken, and
– −10 points for using the arrow trying to kill the Wumpus — so that the
agent should avoid performing unnecessary actions.
Environment: A 4 × 4 grid of squares with. . .
– the agent starting from square [1, 1] facing right
– the gold in one square
– the initially live Wumpus in one square, from which it never moves
– maybe pits in some squares.
The starting square [1, 1] has no Wumpus, no pit, and no gold — so the agent
neither dies nor succeeds straight away.
Actuators: The agent can. . .
– turn 90◦ left or right
– walk one square forward in the current direction
– grab an object in this square
– shoot the single arrow in the current direction, which flies in a straight line until it hits a wall or the Wumpus.
Sensors: The agent has 5 true/false sensors which report a. . .
– stench when the Wumpus is in an adjacent square — directly, not diagonally
– breeze when an adjacent square has a pit
– glitter when the agent perceives the glitter of the gold in the current square
– bump when the agent walks into an enclosing wall (and then the action had no effect)
– scream when the arrow hits the Wumpus, killing it.
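These five sensors and the grid geometry can be represented very directly in code. The following is our own illustrative sketch (the names Percept and adjacent are not from the course material): a record for one percept, and a helper giving the directly (not diagonally) adjacent squares that the stench and breeze rules refer to.

```python
from typing import NamedTuple

class Percept(NamedTuple):
    """The agent's five true/false sensor readings."""
    stench: bool   # the Wumpus is in an adjacent square
    breeze: bool   # an adjacent square has a pit
    glitter: bool  # the gold is in the current square
    bump: bool     # the last walk ran into a wall
    scream: bool   # the arrow just killed the Wumpus

def adjacent(square):
    """Directly (not diagonally) adjacent squares inside the 4 x 4 grid."""
    x, y = square
    candidates = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(i, j) for (i, j) in candidates if 1 <= i <= 4 and 1 <= j <= 4]

# The starting square [1, 1] has only two neighbours:
print(adjacent((1, 1)))   # [(2, 1), (1, 2)]
```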
• Let us consider an example, given as Figure 54, of how the agent might reason and act in this Wumpus world.
[Figure: a 4 × 4 grid with pits in [3, 1], [3, 3] and [4, 4], the gold in [2, 3], the Wumpus in [1, 3], and the agent starting in [1, 1]; stenches and breezes mark the squares adjacent to the Wumpus and the pits.]
Figure 53: An example Wumpus world. (Russell and Norvig, 2003, Figure 7.2)
[Figure: the agent's map of the grid after its first steps, in panels (a) and (b). Legend: A = Agent, B = Breeze, G = Glitter/Gold, OK = Safe square, P = Pit, S = Stench, V = Visited, W = Wumpus.]
Figure 54: A Wumpus adventure. (Russell and Norvig, 2003, Figures 7.3 and 7.4)
Top right: Agent A is cautious, and will only move to OK squares.
– Agent A walks into [2, 1], because it is OK and lies in the direction where agent A is facing, so it is cheaper than the other choice [1, 2].
Agent A also marks [1, 1] Visited.
– Agent A perceives a Breeze but nothing else.
– Agent A infers: “At least one of the adjacent squares [1, 1], [2, 2] and
[3, 1] must contain a Pit. There is no Pit in [1, 1] by my background
knowledge β. Hence [2, 2] or [3, 1] or both must contain a Pit.”
– Hence agent A cannot be certain of either [2, 2] or [3, 1], so [2, 1] is a dead
end for a cautious agent like A.
Bottom left: Agent A has turned back from the dead end [2, 1] and walked to
examine the other OK choice [1, 2] instead.
– Agent A perceives a Stench but nothing else.
– Agent A infers using also earlier percepts: “The Wumpus is in an adjacent
square. It is not in [1, 1]. It is not in [2, 2] either, because then I would
have sensed a Stench in [2, 1]. Hence it is in [1, 3].”
– Agent A infers using also earlier inferences: “There is no Breeze here, so
there is no Pit in any adjacent square. In particular, there is no Pit in
[2, 2] after all. Hence there is a Pit in [3, 1].”
– Agent A finally infers: “[2, 2] is OK after all — now it is certain that it
has neither a Pit nor the Wumpus.”
This reasoning is too complicated for many animals — but not for the logical
agent A.
Bottom right:
1. Agent A walks to the only unvisited OK choice [2, 2]. There is no Breeze here, and since the square of the Wumpus is now known too, [2, 3] and [3, 2] are OK too.
2. Agent A walks into [2, 3] and senses the Glitter there, so he grabs the gold
and succeeds.
• From this viewpoint, the actual syntax in which the logical statements are written
as formulas is just a “print-out” of the underlying data structure.
– Formal semantics for logic(s) uses the term interpretation or model = a possible world w plus a “dictionary” telling how the different parts of the statement are to be understood in w.
We rarely need to make this distinction.
(The meaning of a modal logic involves not just one but several possible worlds —
the different viewpoints.)
• Then a given statement φ stands for the set of all those possible worlds w such
that φ is true in w.
Formal semantics calls these w the models of φ and denotes this set as Mod(φ).
• E.g. Mod(KB ) after agent A has walked into [2, 1] and perceived only the Breeze there is shown in Figure 55 and consists of all the worlds such that
– there is a Breeze in [2, 1], and
– the Pits are placed around it in any of the ways permitted by his background knowledge β, but
– the other squares of the worlds are not drawn, because agent A does not know anything more than β about them (yet).
• Here KB is considered to be the combined statement
β ∧ α1 ∧ α2 ∧ α3 ∧ · · · ∧ αt
where these αi are the individual statements that have been added into KB using Tell until now.
Formal semantics often considers the theory {β, α1 , α2 , α3 , . . . , αt } instead, to permit also infinite theories.
• We say that
γ entails δ
which is written as
γ |= δ
and means that “if γ is true, then δ must also be true”. For worlds, this is the
same thing as
Mod(γ) ⊆ Mod(δ).
E.g.
KB |= α1
in Figure 55. Figure 56 shows another example where KB does not entail α2 .
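When the set of worlds is finite, the definition Mod(γ) ⊆ Mod(δ) can be checked directly by enumeration. The following is our own small sketch of the Figure 55/56 situation: only the three still-unknown pit squares are modelled, and α1 (“no Pit in [1, 2]”) and α2 (“no Pit in [2, 2]”) follow the book's examples.

```python
from itertools import product

SYMBOLS = ["P12", "P22", "P31"]   # the possible Pits around the visited squares

def mod(formula):
    """Mod(phi) = the set of worlds (truth assignments) in which phi is true."""
    worlds = (dict(zip(SYMBOLS, values))
              for values in product([False, True], repeat=len(SYMBOLS)))
    return {tuple(sorted(w.items())) for w in worlds if formula(w)}

# KB after visiting [1,1] and [2,1]: no Breeze in [1,1] rules out a Pit in [1,2],
# while the Breeze in [2,1] forces a Pit into [2,2] or [3,1].
kb = lambda w: (not w["P12"]) and (w["P22"] or w["P31"])
alpha1 = lambda w: not w["P12"]   # "there is no Pit in [1,2]"
alpha2 = lambda w: not w["P22"]   # "there is no Pit in [2,2]"

print(mod(kb) <= mod(alpha1))   # True:  KB |= alpha1
print(mod(kb) <= mod(alpha2))   # False: KB does not entail alpha2
```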
Figure 55: The possible worlds according to KB and an entailed α1 . (Russell and Norvig,
2003, Figure 7.5(a))
Figure 56: The possible worlds according to KB and a non-entailed α2 . (Russell and Norvig, 2003, Figure 7.5(b))
Figure 57: The world and its representation. (Russell and Norvig, 2003, Figure 7.6)
• These sets Mod(φ) are the same belief sets Bφ as in the agent’s belief state space B.
• If you think it is strange that the agent’s knowledge is a belief set, then recall the philosopher Plato’s (“Platon” in Finnish) (Greece, 429–347 BCE) dictum that knowledge is justified true belief.
• When the answer to Ask (KB , ψ) is “yes”, the agent should not only verify that KB |= ψ but also provide some appropriate answer explaining why.
This agent view is constructive, even when it uses a classical logic: a blunt “because it just is” would not be a sufficient answer for action.
• The idea is to compute whether
KB |= ψ
holds, and an answer explaining why, by manipulating the data structures for KB and ψ instead of considering the possible worlds w.
There are just too many w to consider — in general, even infinitely many!
• We say “statement ψ can be derived (or inferred ) from KB by the proof system P”,
in symbols
KB ⊢P ψ
when this computation can be done using a certain specialized kind of search prob-
lem P not discussed yet.
• An early (1931) celebrated deep logical result is Kurt Gödel’s 1st incompleteness theorem:
There cannot be any proof system N with all three of these properties (soundness, completeness and effectiveness) for the natural numbers N.
• On the other hand, e.g. neither “Is it the case that loves?” nor “Is it the case that
Juliet?” is a meaningful question.
We shall later present another logic (the predicate logic) in which we can also address such parts of sentences.
• Note also that “Is it the case that Romeo loves?” is another meaningful question
— but this is a different sentence than Eq. (23).
• The syntax used in this course and its book Russell and Norvig (2003, Figure 7.7)
for propositional logic is:
Sentence → Atomic | Complex
Atomic → true | false | Symbol
Complex → ¬ Sentence
| (Sentence ∧ Sentence)
| (Sentence ∨ Sentence)
| (Sentence ⇒ Sentence)
| (Sentence ⇔ Sentence)
– The proposition Symbol s are variable names which stand for different sentences.
E.g. we might choose the name X to stand for Eq. (23) when we are writing
the background knowledge β, and so on.
– We suppress writing nested parentheses by stipulating that the connectives
¬, ∧, ∨, ⇒, ⇔ are in descending binding power.
– We also stipulate that ∧, ∨ are associative, but ⇒, ⇔ are not.
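This grammar can be turned into a small recursive-descent parser almost line by line. The sketch below is our own (the ASCII stand-ins ~ & | => <=> for the connectives and the nested-tuple output format are assumptions, not course conventions); it parses exactly the fully parenthesized form of the grammar above, before any precedence shortcuts.

```python
import re

TOKEN = re.compile(r"\s*(<=>|=>|[()~&|]|true|false|[A-Za-z][A-Za-z0-9]*)")

def tokenize(text):
    """Split a sentence into tokens, with ~ & | => <=> for the connectives."""
    pos, tokens = 0, []
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if not m:
            raise SyntaxError(f"bad input at {text[pos:]!r}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def parse(tokens):
    """Parse one Sentence of the grammar into a nested tuple."""
    tok = tokens.pop(0)
    if tok == "~":                      # Complex -> not Sentence
        return ("not", parse(tokens))
    if tok == "(":                      # Complex -> ( Sentence op Sentence )
        left = parse(tokens)
        op = tokens.pop(0)              # one of & | => <=>
        right = parse(tokens)
        assert tokens.pop(0) == ")"     # the grammar requires the parentheses
        return (op, left, right)
    return tok                          # Atomic -> true | false | Symbol

print(parse(tokenize("(~B11 => ~(P12 | P21))")))
```

Because every binary connective carries its own parentheses in this grammar, no precedence table is needed; implementing the binding-power shortcuts would require the usual precedence-climbing extension.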
– At the level of truth values, ‘⇔’ is ‘=’.
(This connection between logical proofs and calculating with truth values is
behind the calculational proofs approach, which is handy in Computer Science,
e.g. in formal program construction (Backhouse, 2003).)
• The part of the background knowledge β for our Wumpus world which deals with
pits can be written in propositional logic as follows:
– Let the Symbol Pi,j (or Bi,j ) stand for “There is a Pit (or Breeze) in square
[i, j]”.
– “There is no pit in [1, 1]” is then simply
¬P1,1 . (24)
– “A square has a Breeze if and only if a Pit is in an adjacent square” becomes, e.g. for square [1, 1],
B1,1 ⇔ (P1,2 ∨ P2,1 ) (26)
and similarly for the other squares.
All these sentences go into the background knowledge β, and they are true in all Wumpus worlds.
– Then we add into KB also the Breeze percepts at the top right Wumpus world
of Figure 54:
¬B1,1 (29)
B2,1 . (30)
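Since the pattern “a square is breezy if and only if an adjacent square has a Pit” repeats for every square, these β sentences can be generated mechanically. A sketch, assuming the string output format and function names are our own choices:

```python
def adjacent(i, j, n=4):
    """Directly adjacent squares inside the n x n grid."""
    steps = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) for (a, b) in steps if 1 <= a <= n and 1 <= b <= n]

def breeze_axioms(n=4):
    """One sentence B_ij <=> (disjunction of the neighbours' P_ab) per square."""
    axioms = []
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            pits = " | ".join(f"P{a}{b}" for (a, b) in adjacent(i, j, n))
            axioms.append(f"(B{i}{j} <=> ({pits}))")
    return axioms

print(breeze_axioms()[0])   # (B11 <=> (P21 | P12))
```

This yields 16 sentences for the 4 × 4 grid, which is exactly the price propositional logic pays for having no variables: every square needs its own copy of the same rule.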
• For our classical propositional logic, we can even select this P to be complete and effective as well.
• Here we take this P to be some program which the agent is running, and consider
some approaches to writing such a program.
• We say that a formula α is valid if and only if it is true in every possible world — e.g.
φ ∨ ¬φ
is valid.
• By the deduction theorem, γ |= δ holds exactly when the formula γ ⇒ δ is valid.
– That is, the implication ‘⇒’ is at the language level the same concept as entailment ‘|=’ is at its semantic level.
– It is a starting point for developing new logics: the concept “entails” can be
understood in several ways (such as “γ must be somehow relevant to δ”) —
what kind of “implication” arises if we understand it like this?
– It also means that we get our program P for answering the question “γ |= δ?”
if we develop an algorithm for the question “Is γ ⇒ δ valid?”.
– This question can in turn be answered by looking only at the logical form of
this formula γ ⇒ δ.
• For propositional logic, this question “Is γ ⇒ δ valid?” is in turn the complement
of a search problem:
Satisfiability (SAT) = trying out all different assignments of true or false to the Symbol s appearing in the given formula — the other Symbol s not in it don’t need any value — and stopping with the answer yes if we get the formula to be true.
• Validity can be solved in the same way, but stopping with no if we get the formula
to be false.
• This method collects into an initially empty data structure v the values currently
assigned to Symbol s.
– If a symbol X has not been assigned any value yet, then v[X] = ⊥.
– Then RowValue(α, v) = the value which the given formula α gets with these currently assigned values v[Y ] — even though not all Symbol s Y have been assigned a value yet.
– Hence we must permit this ‘⊥’ also in our truth tables. Writing other for an operand whose value is still ⊥, the binary connectives become

α      β      α ∧ β   α ∨ β   α ⇒ β   α ⇔ β
false  other  false   other   true    other
other  false  false   other   other   other     (33)
true   other  other   true    other   other
other  true   other   true    true    other

where a result other means that the value is still ⊥; operands whose values are both known use the classical truth tables, and two ⊥ operands give ⊥. Negation becomes
¬true = false
¬false = true
¬⊥ = ⊥.
• This method can take exponentially many steps, simply because it might try all the 2^n distinct assignments v to the n Symbol s in α.
• However, there are no SAT algorithms which could always guarantee a yes/no
answer in less than exponentially many steps in n (unless P = NP).
• One way to present a proof system P is as a collection of inference rules, like the
following Modus Ponens:
α    α ⇒ β
―――――――――――  (34)
β
Premisses above the line list all that must hold before this rule can be applied.
Conclusion below the line gives what can then be inferred.
TruthTable(α, v)
1 r ← RowValue(α, v)
2 if r = ⊥
3 then X ← some Symbol in α such that v[X] = ⊥
4 return TruthTable(α, v extended with v[X] ← true) or
TruthTable(α, v extended with v[X] ← false)
5 else return r

RowValue(β, v)
1 if β is a Symbol
2 then return v[β]
3 if β is of the form ¬φ
4 then return ¬RowValue(φ, v) (using the three-valued negation, so ¬⊥ = ⊥)
5 if β is of the form φ ⊗ ψ
6 then return the entry for RowValue(φ, v), RowValue(ψ, v) and ‘⊗’
in Eq. (33)
7 return β (since β must now be one of the constants true or false)
Figure 58: SAT via truth-table enumeration. Modified from Russell and Norvig (2003,
Figure 7.10).
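Figure 58 can also be run directly. Below is a Python sketch of the same algorithm under our own encoding assumptions: formulas are nested tuples such as ("and", "P", ("not", "P")), None plays the role of ⊥, and the connective cases implement the three-valued tables of Eq. (33), including ¬⊥ = ⊥ in the ¬φ case.

```python
def row_value(beta, v):
    """RowValue: the three-valued (True/False/None) value of beta under v."""
    if isinstance(beta, bool):           # the constants true / false
        return beta
    if isinstance(beta, str):            # beta is a Symbol
        return v.get(beta)               # None = not assigned yet, i.e. bottom
    if beta[0] == "not":
        r = row_value(beta[1], v)
        return None if r is None else not r
    op, phi, psi = beta
    a, b = row_value(phi, v), row_value(psi, v)
    if op == "and":
        return False if a is False or b is False else (True if a is True and b is True else None)
    if op == "or":
        return True if a is True or b is True else (False if a is False and b is False else None)
    if op == "=>":
        return True if a is False or b is True else (False if a is True and b is False else None)
    # op == "<=>": undetermined until both operands are known
    return None if a is None or b is None else a == b

def symbols(beta, acc=None):
    """All Symbols occurring in beta."""
    acc = set() if acc is None else acc
    if isinstance(beta, str):
        acc.add(beta)
    elif not isinstance(beta, bool):
        for part in beta[1:]:
            symbols(part, acc)
    return acc

def truth_table(alpha, v=None):
    """TruthTable: is alpha satisfiable? Splits on one unassigned Symbol at a time."""
    v = {} if v is None else v
    r = row_value(alpha, v)
    if r is None:
        x = next(s for s in sorted(symbols(alpha)) if s not in v)
        return truth_table(alpha, {**v, x: True}) or truth_table(alpha, {**v, x: False})
    return r

print(truth_table(("and", "P", ("not", "P"))))   # False: unsatisfiable
print(truth_table(("or", "P", "Q")))             # True
```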
Forward direction reads “if I have already inferred these premisses, then I can infer
this conclusion too”.
Backward direction reads “I want to infer this conclusion, so I will try to infer
these premisses next”.
• Soundness of P requires that its inference rules like Eq. (34) satisfy the condition that the premisses together entail the conclusion: for Eq. (34), that α ∧ (α ⇒ β) |= β.
This can in turn be checked (e.g.) by computing the corresponding truth table.
• E.g. our agent could infer as follows in the initial top left situation of the Wumpus world shown as Figure 54:
1. background Eq. (26): B1,1 ⇔ (P1,2 ∨ P2,1 )
2. ⇔ -elimination: (B1,1 ⇒ P1,2 ∨ P2,1 ) ∧ (P1,2 ∨ P2,1 ⇒ B1,1 )
3. ∧-elimination: P1,2 ∨ P2,1 ⇒ B1,1
4. contraposition: ¬B1,1 ⇒ ¬(P1,2 ∨ P2,1 )
5. Eq. (34) with percept Eq. (29) ¬B1,1 : ¬(P1,2 ∨ P2,1 )
6. de Morgan: ¬P1,2 ∧ ¬P2,1 (36)
Here ∧-elimination (on the left or on the right) is the rule
α ∧ β        α ∧ β
―――――        ―――――
α            β
and contraposition (or flipping ‘⇒’ around with ‘¬’) is
α ⇒ β
――――――――
¬β ⇒ ¬α
The associativity and commutativity of ‘∨’ lets us pick any disjunct in these reso-
lution steps.
• In both of these two resolution steps, the common disjunct β is just a single Symbol .
– We are going to rewrite our KB so that this will always be the case.
– The reason is that an unmodified KB would not contain very many possible
choices for β.
• The form into which we are going to rewrite our KB can be defined as follows:
Literal is Symbol or ¬ Symbol .
Clause is a (possibly empty) disjunction of literals.
Then our KB is in Conjunctive Normal Form (CNF) if it is a conjunction of clauses.
• The full resolution rule takes the form “you can form the resolvent of two clauses if the same Symbol X occurs positively in one of them and negatively in the other”:
p1 ∨ p2 ∨ p3 ∨ · · · ∨ pm ∨ X        ¬X ∨ q1 ∨ q2 ∨ q3 ∨ · · · ∨ qn
――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
p1 ∨ p2 ∨ p3 ∨ · · · ∨ pm ∨ q1 ∨ q2 ∨ q3 ∨ · · · ∨ qn
By the associativity and commutativity of ‘∨’ this common symbol X can appear
anywhere inside these two clauses.
• We must also factor the resolvent: if some literal would appear more than once,
only one copy is retained.
• Or alternatively we can think that a clause is a set of literals:
{p1 , p2 , p3 , . . . , pm , X}        {¬X, q1 , q2 , q3 , . . . , qn }
――――――――――――――――――――――――――――――――――――――――――――
{p1 , p2 , p3 , . . . , pm , q1 , q2 , q3 , . . . , qn }
This form includes the factoring rule implicitly.
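The set form of the rule is direct to implement. The following sketch uses our own literal encoding (a string, with a leading "~" marking negation); the demonstration clauses are the clause ¬P2,1 ∨ B1,1 from Eq. (42) and the percept Eq. (29).

```python
def negate(literal):
    """~X <-> X."""
    return literal[1:] if literal.startswith("~") else "~" + literal

def resolvents(c1, c2):
    """All clauses obtainable by resolving the clauses c1 and c2 (frozensets
    of literals). Factoring is implicit in the set representation."""
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

c1 = frozenset({"~P21", "B11"})   # the clause for P21 => B11, from Eq. (42)
c2 = frozenset({"~B11"})          # the percept Eq. (29): no Breeze in [1,1]
print(resolvents(c1, c2))         # the single resolvent {~P21}: no Pit in [2,1]
```

Note how this single resolution step recovers the ¬P2,1 half of the earlier conclusion Eq. (36) in one move.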
• A given formula can be converted into CNF with the following 4 steps:
1. Replace each occurrence of ‘⇔’ with the corresponding two occurrences of ‘⇒’
as in Eq. (37).
E.g. Eq. (26) of our Wumpus world background knowledge becomes
(B1,1 ⇒P1,2 ∨ P2,1 ) ∧ (P1,2 ∨ P2,1 ⇒B1,1 ). (39)
2. Replace each occurrence of α ⇒ β with the (classically) equivalent ¬α ∨ β.
E.g. Eq. (39) becomes
(¬B1,1 ∨P1,2 ∨ P2,1 ) ∧ (¬(P1,2 ∨ P2,1 )∨B1,1 ). (40)
3. Move each ‘¬’ towards the Symbol s using Eq. (38). If you get a double negation
like ¬¬α, then erase them, leaving only α.
E.g. Eq. (40) becomes
(¬B1,1 ∨ P1,2 ∨ P2,1 ) ∧ ((¬P1,2 ∧¬P2,1 ) ∨ B1,1 ). (41)
4. Finally move each ‘∧’ from under any ‘∨’ by using their distributivity, which permits replacing α ∨ (β ∧ γ) with (α ∨ β) ∧ (α ∨ γ) and so on.
E.g. Eq. (41) becomes
(¬B1,1 ∨ P1,2 ∨ P2,1 ) ∧ (¬P1,2 ∨B1,1 ) ∧ (¬P2,1 ∨B1,1 ). (42)
If we now undo step 2, then we see that Eq. (42) does indeed say the same thing as the original Eq. (26), but in a different way:
(B1,1 ⇒P1,2 ∨ P2,1 ) ∧ (P1,2 ⇒B1,1 ) ∧ (P2,1 ⇒B1,1 ).
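The 4 conversion steps can also be written as one recursive function. This is a sketch over the same nested-tuple encoding used above (our own assumption, with "<=>", "=>", "and", "or", "not" as tags); each branch is labelled with the step it implements.

```python
def to_cnf(f):
    """Steps 1-4: eliminate <=>, eliminate =>, push ~ inward, distribute | over &."""
    if isinstance(f, str):
        return f
    if f[0] == "<=>":                                 # step 1: two implications
        a, b = f[1], f[2]
        return to_cnf(("and", ("=>", a, b), ("=>", b, a)))
    if f[0] == "=>":                                  # step 2: a => b  ~~>  ~a | b
        return to_cnf(("or", ("not", f[1]), f[2]))
    if f[0] == "not":
        g = f[1]
        if isinstance(g, str):
            return f                                  # already a literal
        if g[0] == "not":                             # step 3: erase double negation
            return to_cnf(g[1])
        if g[0] == "and":                             # step 3: de Morgan
            return to_cnf(("or", ("not", g[1]), ("not", g[2])))
        if g[0] == "or":
            return to_cnf(("and", ("not", g[1]), ("not", g[2])))
        return to_cnf(("not", to_cnf(g)))             # negated => or <=>: expand first
    op, a, b = f[0], to_cnf(f[1]), to_cnf(f[2])
    if op == "or":                                    # step 4: distribute | over &
        if isinstance(a, tuple) and a[0] == "and":
            return to_cnf(("and", ("or", a[1], b), ("or", a[2], b)))
        if isinstance(b, tuple) and b[0] == "and":
            return to_cnf(("and", ("or", a, b[1]), ("or", a, b[2])))
    return (op, a, b)

# Eq. (26): B11 <=> (P12 | P21), converted exactly as in the worked example.
print(to_cnf(("<=>", "B11", ("or", "P12", "P21"))))
```

Running this on Eq. (26) reproduces the three clauses of Eq. (42), only grouped by nested binary connectives instead of a flat conjunction.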