

STIN1013

Introduction to Artificial
Intelligence

 EXPERT SYSTEM

Definition

From the Father of Expert Systems:

An intelligent program that uses knowledge and reasoning procedures to solve problems that require significant human expertise for their solutions.

– Edward Feigenbaum

Definition
From others:
• "A system that uses human knowledge captured in a computer to solve problems that ordinarily require human expertise." – Turban & Aronson (2001)
• "A computer program that represents and reasons with knowledge of some specialist subject with a view to solving problems or giving advice." – Jackson (1999)
• "A computer program designed to model the problem-solving ability of a human expert." – Durkin (1994)
• "A computer program that emulates the reasoning of human experts in a problem domain." – Awad (1996)
Some facts about ES
 As a field,
 It is a branch of AI

 As a technology,
 It is the most widely applied AI technology
 Among the first to be commercialized

 As an application,
 It is a computer program
 It ‘transfers’ (i.e. it acquires and represents)
practical knowledge (i.e. expertise, rules of
thumb, heuristics) from a human expert to a
computer
Some other facts about
ES
 It reasons (or it ‘thinks’) with what it
‘transfers’

 It can either
 support decision makers (by
recommending decisions) or
 ‘replace’ them (by making decisions
on behalf of experts, releasing
them from routine tasks).

Some other facts about
ES
 After all, an ES replicates a human
expert.
An expert is one who possesses specialized skill, experience, and knowledge that most people do not have, along with the ability to apply this knowledge using tricks, shortcuts, and rules-of-thumb to resolve a problem efficiently.
– Harmon & King (1985)
Some other facts about
ES
• Therefore, an ES possesses expertise.

Expertise is
  ▪ extensive, task-specific knowledge held by experts
  ▪ hard to capture. Capturing it is a major issue in ES development and has become a major concern of Knowledge Acquisition researchers.

• With the expertise stored in its knowledge base, an ES can provide expertise-based solutions which are imaginative, accurate and efficient.
.. and some facts about an
expert
Expert personnel are a valuable asset for any organization (as they are good at solving organizational problems, planning etc.) … yet they are
• Perishable and irreplaceable
• Geographically static
• Not available 24/7
• Emotionally affected
  ▪ they feel fear, stress etc., like other human beings
• Costly
  ▪ to train and to consult
Reasons for building ES
So, we have good reasons to build an ES!!

3 main reasons:

 To replace human expert


 To assist human expert
 To gain competitive advantage

Reason 1: To replace
expert
 To replace  to eliminate

 Reasons for replacing experts


 To preserve their expertise
 To disseminate their expertise in less
expensive manner
 To make expertise available after hours
 To make expertise available at several
locations
 To free experts from routine tasks so they can
focus on other critical tasks
 To keep experts out of danger
Reason 2: To assist
experts
• An ES serves as an aiding tool to
  ▪ improve the human expert’s productivity
  ▪ maintain consistency in their decisions
  ▪ deal with the complexity of the tasks
  ▪ make available information that is difficult for experts to recall
Reason 3: To gain
competitive advantage
Due to the benefits this technology
can offer
 Exemplar:
 Digital Equipment Corporation
▪ R1/XCON
 American Express
▪ Authorizer’s Assistant
 Coopers & Lybrand
▪ ExperTax

ES Component

[Diagram: the user communicates with the ES through the interface; the inference engine links the interface with the knowledge base and the working memory.]

• Interface
• Knowledge base
• Inference engine
• Working memory
… in comparison with
humans
The components actually mimic what is in humans:
• Long-term memory = Knowledge base
• Brain = Inference engine
• Senses = Interface
• Short-term memory = Working memory
• Environment (people, sensors etc. that provide input to our brain) = the users and data sources that provide input to the ES
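To make the mapping concrete, here is a minimal Python sketch of how these components could be laid out in code. It is illustrative only: the class and attribute names are assumptions, not from the slides, and the inference engine is filled in by the chaining sketches later in this section.

```python
# Minimal, illustrative layout of the classic ES components (not from the slides).
class ExpertSystem:
    def __init__(self, rules):
        self.knowledge_base = rules       # long-term memory: rules and facts about the domain
        self.working_memory = set()       # short-term memory: facts for the current session

    def tell(self, fact):
        """Interface: the user (or a sensor/database) supplies a fact."""
        self.working_memory.add(fact)

    def infer(self):
        """Inference engine: apply the rules to the working memory (see chaining sketches)."""
        raise NotImplementedError
```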
Component 1:
Knowledge Base
• Knowledge base → “contains the domain knowledge”:
  ▪ Facts
  ▪ Heuristics or rules that direct the use of knowledge to solve specific problems in a particular domain.
• Typical representation:
  ▪ Rules (IF x AND y THEN z, i.e. x ∧ y → z)
• Example (for predicting weather):
  ▪ IF cloudy = yes AND temperature = low AND humidity = high THEN it will rain.
  ▪ IF cloudy = no AND temperature = high AND humidity = low THEN it will be sunny.
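As an illustration only, the two weather rules above could be encoded as plain data structures like the following (the attribute names and the if/then encoding are assumptions for this sketch, not part of the slides):

```python
# Hypothetical encoding of the weather-prediction rules as data.
weather_rules = [
    {"if": {"cloudy": "yes", "temperature": "low", "humidity": "high"},
     "then": ("forecast", "rain")},
    {"if": {"cloudy": "no", "temperature": "high", "humidity": "low"},
     "then": ("forecast", "sunny")},
]
```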
Component 2: Working
Memory
 A storage area for current data
 i.e. facts entered by user during consultation
with ES (e.g. symptoms of a disease)
 Input data can also be loaded from
external storage such as databases,
spreadsheets or sensors.
 Also a place where intermediate
conclusions or the new facts inferred by
ES are stored
 Non-permanent – content will be deleted
when the session ends.
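A working memory along these lines can be sketched as a simple attribute-value store; the snippet below is a hypothetical illustration that matches the rule encoding used in the earlier sketch:

```python
# Working memory as a simple attribute-value store (illustrative).
working_memory = {}
working_memory["cloudy"] = "yes"         # fact entered by the user during consultation
working_memory["temperature"] = "low"    # could also be loaded from a database or sensor
working_memory["humidity"] = "high"

# Intermediate conclusions inferred by the ES are stored here too.
working_memory["forecast"] = "rain"

working_memory.clear()                   # non-permanent: wiped when the session ends
```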
Component 3: Inference
Engine
• Known as the rule interpreter in rule-based ES.
• Modelled after the human expert’s reasoning.
• Typically, the inference engine uses two control strategies:
  ▪ Backward Chaining (goal-driven)
    ▪ works backward from a goal (conclusion), seeking the facts that prove the conclusion true.
  ▪ Forward Chaining (data-driven)
    ▪ when a rule’s premise clauses match the current situation, its conclusion is asserted.

Component 4: User
Interface
• Facilitates all communication between the user and the ES.
• Communication is in a natural-language style, interactive, and closely follows a conversation between humans.

• Two types of interaction:
  ▪ The ES asks for information through questions, provides the results and displays explanations.
  ▪ The user supplies answers, receives the results or queries the system (i.e. asks for explanations).
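A minimal sketch of this question-and-answer style, assuming a simple console interface (the function name is illustrative, not from the slides):

```python
# Ask the user for a fact the inference engine needs but cannot derive (sketch).
def ask_user(attribute):
    answer = input(f"What is the value of '{attribute}'? ")
    return answer.strip().lower()
```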

Component 5: Explanation
sub-system
• Two types of explanation:
  ▪ WHY
    ▪ Explains why the system asked the question.
  ▪ HOW
    ▪ Explains how the ES arrived at the conclusion.

• Justifies the validity of the system’s findings → increases user confidence and trust.
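One common way to support HOW explanations is to keep a trace of the rules that fired; the following is a rough sketch of that idea (names are illustrative, and this is not necessarily how the slides' ES records explanations):

```python
# Keep a trace of fired rules so the ES can answer HOW questions (sketch).
explanation_trace = []

def record_firing(rule_name, premises, conclusion):
    explanation_trace.append((rule_name, premises, conclusion))

def explain_how(conclusion):
    for rule_name, premises, concl in explanation_trace:
        if concl == conclusion:
            return f"{conclusion} was concluded by {rule_name} because {premises} were true."
    return f"{conclusion} was supplied by the user, not inferred."
```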
ES Development Process
[Diagram: roles in knowledge-system development (Source: Schreiber et al. (2000)). The knowledge manager defines the knowledge strategy, initiates knowledge development projects and facilitates knowledge distribution. The knowledge engineer/analyst elicits knowledge from the expert, elicits requirements from the user, and delivers analysis models to the knowledge-system developer, who designs and implements the knowledge system. The expert validates the knowledge system, the user uses it, and the project manager manages the development project.]
ES Development Process
 Knowledge engineering
 Methodology for building an ES
6 phases of knowledge engineering:
 Problem assessment
 Knowledge acquisition
 Design
 Testing
 Documentation
 Maintenance
ES Development Tool
 Choice of tools and approaches for developing
ES includes:
 Programming languages
 Support aids and tools
 Ready-to-use customized packages for
industry and government
 ES shells

 Which tools to adopt depends on:


 The nature of the problem
 The skill of the builder
 The function the ES is expected to perform
(e.g. diagnosis or monitoring)
Main players in ES
Development
Domain expert
• Provides the knowledge or method to solve the problem

Knowledge engineer
• Gains knowledge from the expert
• Transfers/represents the knowledge in a computer

User
• Can be the end user or the expert himself
Task/Paradigm
• An ES has been applied to perform/solve the following tasks/problems:
  ▪ Control – meeting certain standards/specifications
  ▪ Design – configuring objects under specific constraints
  ▪ Diagnosis – inferring malfunctions/diseases and recommending solutions/treatments
  ▪ Planning – designing actions
  ▪ Monitoring – comparing observations to expectations
  ▪ Selection – identifying the best choice(s) from a list of actions
Task/Paradigm
• An ES has been applied to perform/solve the following tasks/problems:
  ▪ Interpretation – inferring a situation description from observations
  ▪ Prediction – inferring the likely consequences of a given situation
  ▪ Debugging – prescribing remedies for malfunctions
  ▪ Repair – executing a plan to administer a prescribed remedy
  ▪ Instruction – diagnosing, debugging and correcting a student’s misconceptions

Benefits & Limitations

BENEFITS:
• Reduce decision-making time
• Improve production operations
• Increase output and productivity
• Can be used as tools for staff training
• Retention of scarce expertise
• Upgrade performance
• Relatively affordable expertise
• Improve quality of products/services

LIMITATIONS:
• Work well only within a narrow domain of knowledge
• Can make mistakes
• Risk of knowledge quickly becoming obsolete
• Ongoing reliance on experts
• Knowledge is not always available
• Difficult to extract expertise from human experts
• Users’ lack of trust can impede use
ES .. How it works?
• In general, an ES works by matching the facts in working memory against its knowledge base content, and displays the output to the user.

[Diagram: the inference engine applies modus ponens. With rule R12: A → B in the knowledge base and fact A supplied through the interface into working memory, it concludes B and reports the result back to the user.]
ES .. How it works?
• In detail, it depends on which control strategy each ES’s inference engine uses: forward chaining, backward chaining, or both.
• The principle of chaining is governed by modus ponens:
  ▪ A ∧ B → C
  ▪ A
  ▪ B
  ▪ ∴ C
• Chaining signifies the linking of a set of pertinent rules.
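As a toy illustration of a single modus ponens step on a rule (using the same flat rule encoding assumed in the earlier sketches):

```python
# Modus ponens: if all premises of a rule are known facts, assert its conclusion.
rule = {"if": {"A", "B"}, "then": "C"}
working_memory = {"A", "B"}

if rule["if"] <= working_memory:          # premises A and B are both in working memory
    working_memory.add(rule["then"])      # therefore conclude C

print(working_memory)                     # {'A', 'B', 'C'}
```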
Some important
concepts
• Goal rule
  ▪ A rule whose conclusion is not a premise of any other rule in the knowledge base.
  ▪ E.g.
    R1: (A ∧ B) ∨ C → D
    R2: D ∧ G → T
    R3: P ∧ Q → B
  ▪ R2 is the goal rule, as its conclusion, T, is NOT a premise of any other rule (i.e. R1 or R3).

Some important
concepts
• Sub-goal rule
  ▪ A rule whose conclusion is also a premise of another rule or of the goal rule.
  ▪ E.g.
    R1: (A ∧ B) ∨ C → D
    R2: D ∧ G → T
    R3: P ∧ Q → B
  ▪ R1 and R3 are sub-goal rules, as their conclusions are premises of other rules:
    ▪ The conclusion of R1, i.e. D, is a premise of R2.
    ▪ The conclusion of R3, i.e. B, is a premise of R1.
Some important
concepts
• Primitive premise
  ▪ A premise that is not a conclusion of any other rule.
  ▪ E.g.
    R1: (A ∧ B) ∨ C → D
    R2: D ∧ G → T
    R3: P ∧ Q → B
  ▪ A, C, G, P and Q are primitive premises. WHY? Look at the THEN part of R1, R2 and R3:
    ▪ None of these rules has A, C, G, P or Q as a conclusion.
Some important
concepts
• Non-primitive premise
  ▪ A premise that is also a conclusion of another rule (or rules).
  ▪ E.g.
    R1: (A ∧ B) ∨ C → D
    R2: D ∧ G → T
    R3: P ∧ Q → B
  ▪ B and D are non-primitive premises. WHY? They appear as premises (in the IF parts of R1 and R2) and also as conclusions:
    ▪ R1 has D as a conclusion, while R3 has B as a conclusion.
    ▪ Since B and D are premises and, at the same time, conclusions, they are non-primitive premises.
Some important
concepts
• “Rule fire”
  ▪ A “rule fire” means the rule is concluded. In other words, it refers to the state where the conclusion of that rule is proved true because its premise(s) are true.
  ▪ E.g.
    R1: (A ∧ B) ∨ C → D
    ▪ If A and B are true, or if C is true, then we say “R1 fires”, with the conclusion D true.

• “Rule not fire”
  ▪ The reverse of “rule fire”: the rule’s conclusion cannot be established because its premise(s) are false or cannot be proven.
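These definitions can be checked mechanically. The sketch below derives the goal rules, sub-goal rules and primitive/non-primitive premises for R1–R3, again using the flat encoding assumed earlier (the OR in R1 is split into two rules with the same conclusion, which preserves its logic):

```python
# Classify rules and premises for R1: (A and B) or C -> D, R2: D and G -> T, R3: P and Q -> B.
rules = {
    "R1a": {"if": {"A", "B"}, "then": "D"},
    "R1b": {"if": {"C"},      "then": "D"},
    "R2":  {"if": {"D", "G"}, "then": "T"},
    "R3":  {"if": {"P", "Q"}, "then": "B"},
}

conclusions = {r["then"] for r in rules.values()}
premises = set().union(*(r["if"] for r in rules.values()))

goal_rules = [name for name, r in rules.items() if r["then"] not in premises]
sub_goal_rules = [name for name, r in rules.items() if r["then"] in premises]
primitive = premises - conclusions        # never concluded by any rule
non_primitive = premises & conclusions    # premises that are also conclusions

print(goal_rules)             # ['R2']
print(sub_goal_rules)         # ['R1a', 'R1b', 'R3']
print(sorted(primitive))      # ['A', 'C', 'G', 'P', 'Q']
print(sorted(non_primitive))  # ['B', 'D']
```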
Backward Chaining
• Backward chaining overview
  ▪ An inference strategy that attempts to prove a hypothesis by gathering supporting information.
  ▪ The system works from the goal, chaining rules together to reach a conclusion or achieve the goal.
  ▪ In other words, it starts with the goal and then looks for all relevant, supporting premises that lead to achieving the goal.
Backward Chaining
Steps
1. Identify the goal.
2. Search for the goal rule.
3. If found, check its premise(s).
   ▪ If a premise is primitive, check whether it is in the working memory; ask the user a question if it is not there.
   ▪ If a premise is non-primitive, jump to the rule in which it appears as a conclusion (a sub-goal rule) and repeat Step 3.
4. Repeat Step 3, jumping to and firing all relevant sub-goal rules.
5. Finally, fire the goal rule.
Backward Chaining
Steps
• With example:
  ▪ R1: (A and B) or C implies D
  ▪ R2: D and G implies T ….. goal rule
1. Identify the goal …. T.
2. Identify the goal rule ….. R2.
3. Check R2’s first premise, i.e. D.
   ▪ D is non-primitive …. it appears as the conclusion of R1, so jump to R1.
   ▪ Check R1’s premises.
     ▪ A is primitive. If it is in working memory, continue checking B. If not, ASK the user a question. If the answer is ‘yes’, continue checking B. If ‘no’, check C.
4. If either (A and B) are true or C is true, fire R1.
5. Jump back to R2.
6. Repeat Step 3 with R2’s second premise, i.e. G.
   ▪ G is primitive. If it is in working memory, fire R2. Otherwise R2 fails to fire and, therefore, the goal cannot be proven.
7. End of steps.
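The steps above can be sketched in code as follows. This is a simplified, hypothetical implementation: it uses the flat rule encoding from the earlier sketches (the OR in R1 is split into R1a and R1b), asks the user about primitive premises, and fires a rule by adding its conclusion to the set of facts.

```python
# Backward chaining sketch for R1: (A and B) or C -> D and R2: D and G -> T.
rules = {
    "R1a": {"if": ["A", "B"], "then": "D"},
    "R1b": {"if": ["C"],      "then": "D"},
    "R2":  {"if": ["D", "G"], "then": "T"},
}

def prove(goal, facts, asked=None):
    """Try to prove `goal`, asking the user about primitive premises when needed."""
    if asked is None:
        asked = set()
    if goal in facts:                              # already in working memory
        return True
    supporting = [r for r in rules.values() if r["then"] == goal]
    if not supporting:                             # primitive premise: ask the user
        if goal not in asked:
            asked.add(goal)
            if input(f"Is {goal} true? (yes/no) ").strip().lower() == "yes":
                facts.add(goal)
                return True
        return False
    for rule in supporting:                        # sub-goal rule: try each rule concluding it
        if all(prove(p, facts, asked) for p in rule["if"]):
            facts.add(goal)                        # the rule fires; record its conclusion
            return True
    return False

facts = set()                                      # working memory
print("Goal T proven?", prove("T", facts))
```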

Forward Chaining
• Forward Chaining overview
  ▪ An inference strategy that begins with a set of known facts, derives new facts using rules whose premises match the known facts, and continues until the goal is reached or no more rules match.
  ▪ It begins with known data and works forward to see whether any conclusions (new information) can be drawn.
Forward Chaining Steps

1. Get the initial data and place it in working memory.
2. Scan the rules, searching for matched premises.
3. If a match is found:
   ▪ fire the rule
   ▪ add its conclusion to working memory.
4. Repeat Steps 2 & 3 until there are no more matches or the goal is achieved.
Forward Chaining Steps
• With example:
  ▪ R1: (A and B) or C implies D
  ▪ R2: D or G implies T
1. Get the initial data.
2. Scan the rules in sequence.
   ▪ If A and B are true, or C is true, R1 fires and D is inserted into working memory. D will cause R2 to fire when R2 is scanned. The process then terminates, as all rules have been scanned and no more matches can be made.
   ▪ If none of A, B and C is true, continue scanning with the next rule, i.e. R2. If G is true, R2 fires and T is inserted into working memory. The process then terminates, as all rules have been scanned and no more matches can be made.
Forward Chaining Steps
• What if T is in R1?
  ▪ R1: (A and B) or T implies D
  ▪ R2: H or G implies T
1. Get the initial data.
2. Scan the rules in sequence.
   ▪ If none of A, B and T is true, scanning continues with R2.
   ▪ If H or G is true, R2 fires and T is inserted into working memory. End of the 1st cycle, with conclusion T.
   ▪ The 2nd cycle of scanning and firing rules begins. T is now in working memory, therefore R1 fires; D is concluded and inserted into working memory. End of the 2nd cycle, with conclusions T and D. The most recent, i.e. D, becomes the final conclusion.
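A matching sketch of forward chaining with repeated scan cycles, under the same assumed rule encoding (OR premises split into separate rules):

```python
# Forward chaining sketch for R1: (A and B) or T -> D and R2: H or G -> T.
rules = {
    "R1a": {"if": {"A", "B"}, "then": "D"},
    "R1b": {"if": {"T"},      "then": "D"},
    "R2a": {"if": {"H"},      "then": "T"},
    "R2b": {"if": {"G"},      "then": "T"},
}

def forward_chain(initial_facts):
    facts = set(initial_facts)                    # working memory
    fired = set()
    while True:                                   # one scan cycle per iteration
        matching = [
            name for name, r in rules.items()
            if name not in fired and r["if"] <= facts
        ]
        if not matching:                          # no more matches: stop
            return facts
        for name in matching:
            fired.add(name)
            facts.add(rules[name]["then"])        # fire the rule: add its conclusion

print(forward_chain({"G"}))   # {'G', 'T', 'D'}: R2b fires in cycle 1, then R1b in cycle 2
```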
Conflict Resolution
• Conflict resolution
  ▪ A process to determine which rule to fire (when the contents of the WM can cause more than one rule to fire).
  ▪ Resolution strategies:
    ▪ Establish the goal and stop the system when the goal is attained.
    ▪ The order of the rules that conclude the goal is important (the engine will fire the first one located).
    ▪ Assign priority values to rules (reflecting rule preferences).
    ▪ The system scans the rules, determines which rules could fire, and fires the one with the highest priority.
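As a small illustration of the priority-based strategy (the rules and priority values here are made up for the example):

```python
# Conflict resolution by rule priority (illustrative).
rules = {
    "R1": {"if": {"A"}, "then": "X", "priority": 1},
    "R2": {"if": {"A"}, "then": "Y", "priority": 5},
}
facts = {"A"}

# Both R1 and R2 match the working memory, giving a conflict set of two rules.
conflict_set = [name for name, r in rules.items() if r["if"] <= facts]

# Fire only the rule with the highest priority (here R2).
chosen = max(conflict_set, key=lambda name: rules[name]["priority"])
facts.add(rules[chosen]["then"])
print(chosen, facts)                              # R2 {'A', 'Y'}
```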
Backward vs Forward
Chaining
Attribute           | Backward Chaining                                                                              | Forward Chaining
Also known as       | Goal-driven                                                                                    | Data-driven
Starts from         | A possible conclusion                                                                          | New data
Processing          | Efficient                                                                                      | Somewhat wasteful
Aims for            | The necessary data                                                                             | Any conclusion(s)
Approach            | Conservative/cautious                                                                          | Opportunistic
Practical if        | The number of possible final answers is reasonable, or a set of known alternatives is available | Combinatorial explosion creates an infinite number of possible right answers
Appropriate for     | Diagnostic applications                                                                        | Scheduling and monitoring
Example application | Selecting a specific type of investment                                                        | Making changes to a corporate pension fund
