Reasoning using rules and logic

We can use the knowledge specified by the rules and logic sentences to infer new knowledge by a process called ‘reasoning’. There are various ‘rules of inference’ that allow us to do this. For example, if we know that an animal A1 has pointed teeth, has forward-facing eyes and has claws, i.e. Has_pointed_teeth(A1), Has_forward_eyes(A1) and Has_claws(A1) are all true, then we can infer from Rule R4 that animal A1 is a carnivore, i.e. Carnivore(A1). This particular rule of inference is called ‘Modus Ponens’, or Implication Elimination, as it creates a new sentence by removing the implication symbol, =>.

Some rules of inference are listed in Table below. These rules are self-evident and follow directly from the meanings of the different symbols. For example, the Modus Ponens rule follows directly from the meaning of the => symbol. The And Elimination, And Introduction, Or Introduction and Double Negation Elimination rules follow directly from the meanings of the ∧, ∨ and ¬ symbols. Similarly, the other three rules result from the meaning of the symbols. Further rules and more details can be found in Russell and Norvig.
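As an aside, Modus Ponens is straightforward to mechanise. The sketch below is a minimal, hypothetical Python helper (not part of the Knowledge Representation model) that applies the rule to a set of known facts, using the predicates from the animal-classification example:

```python
# A minimal, illustrative sketch of Modus Ponens (Implication Elimination):
# if every antecedent of a rule is a known fact, the consequent can be
# inferred.  Hypothetical helper, not code from the model.

def modus_ponens(facts, rule):
    """rule is (antecedents, consequent); return the consequent if it fires."""
    antecedents, consequent = rule
    if all(a in facts for a in antecedents):
        return consequent
    return None

# Rule R4 from the text: pointed teeth, forward-facing eyes and claws
# together imply that the animal is a carnivore.
r4 = ({"Has_pointed_teeth(A1)", "Has_forward_eyes(A1)", "Has_claws(A1)"},
      "Carnivore(A1)")
facts = {"Has_pointed_teeth(A1)", "Has_forward_eyes(A1)", "Has_claws(A1)"}
print(modus_ponens(facts, r4))  # Carnivore(A1)
```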

Table Some rules of inference for first order logic (from Russell and Norvig).


An example proof showing how some of these rules of inference can be applied to the animal classification problem is shown in Table below.

Table A sample proof in first order logic.


At the beginning, some observations have been noted concerning an animal being observed on the African savannah. The goal is to determine which animal is being observed. The proof starts by converting the observations into first order logic form using the existential quantifier (P1). P2 to P5 use Existential Elimination to infer individual predicates concerning an animal labelled A1. The remaining part of the proof uses Universal Elimination and Modus Ponens to arrive at the conclusion that the animal is a zebra, i.e. Zebra(A1) is proven to be true.

Rule-based expert systems mainly make use of two forms of reasoning based on the automated application of the Modus Ponens rule of inference. ‘Forward reasoning’ (also called ‘forward chaining’) works in a forward direction by first starting with the known facts and then trying to derive a specific goal using a chain of inference by repeated application of Modus Ponens. ‘Backward reasoning’ (also called ‘backward chaining’) works in the opposite direction, starting with the goal first, then working backwards until it has been proved.

In the Knowledge Representation model, clicking on the go-reasoning button starts the reasoning process. If the KR-type slider is set to “rules”, the user will be asked to select which type of reasoning they wish to use, whether forward or backward. Both types of reasoning use search to explore an n-dimensional space that represents the knowledge. This space is shown by the event map depicted in the NetLogo environment when the setup-knowledge button has been clicked in the Interface. As the reasoning process proceeds, the event map is animated with walker agents depicted using the person shape (in the same manner that walker agents were depicted exploring the search spaces related to the different search problems described in the previous chapter). This animation of the reasoning process clearly reinforces the analogy of reasoning as a search process.

Figure below provides two screenshots of how this looks for the New Zealand birds classification problem. The left hand image shows a screenshot at the completion of forward reasoning, which has led to the conclusion that the bird being classified is a Kea. The screenshot shows the path the reasoning process has taken to get to the goal using thicker links. In this case the path has been direct. The first state visited was the [can fly = “Yes”] state in the middle right of the image. This caused the following question to be asked of the user: “Can the bird fly?”. The path then goes to the next state labelled [parrot = “Yes”]. This is shown by the link being slightly wider than other links that have not been traversed. This caused the question “Is the bird a parrot?” to be asked. The path then goes to the state labelled [alpine = “Yes”] which generates the question “Is the bird an alpine bird?”.

Eventually, the path ends up at the final state labelled [bird = “Kea”] which is a goal state, and so the conclusion is reached that the bird is a Kea. This path is followed because a single rule is being matched; in this case, the rule is the first one in the knowledge-base as defined in NetLogo Code, with antecedents list ["can fly" "Yes"] ["parrot" "Yes"] ["alpine" "Yes"] and consequents list ["bird" "Kea"].

How the event map for the New Zealand birds rules-base is animated using forward reasoning (left image) and backward reasoning (right image).


The right hand image in Figure shows how the backward reasoning proceeds. At the beginning, the user is asked which goal they wish to prove. If the user selects [bird Kea], then a walker agent is drawn at the state labelled [bird = “Kea”] to show that it is the goal state. The reasoning process then tries to prove the goal by asking the same questions as for the forward reasoning process. In the image, the path has been traversed as far as the [alpine = “Yes”] state, with one more link still to be traversed before the goal is proven.

This particular example is relatively straightforward. For a more complicated example, such as classifying a zoo animal as a cheetah, the reasoning process will result in the animation jumping around the event map. The forward reasoning process, in particular, can result in many queries being asked before a conclusion is reached, as shown in Figure below.

How the event map for the zoo animals rules-base is animated using forward reasoning.


For forward reasoning, the process starts from the known facts and proceeds forward by trying to find a rule in the rules-base for which the antecedents match against the facts. Each time a rule matches, it is ‘fired’ and its consequents are added as new facts to the knowledge-base. A rule can be fired at most once. A complete ‘match-fire cycle’ occurs when all the rules in the rules-base have been processed. If at least one rule has been fired, then a new cycle is started at the first rule in the rules-base. The process continues until no further rules are fired during a cycle.
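The match-fire cycle just described can be sketched in Python. This is an illustrative analogue of the model's approach, not its NetLogo code; the function name and data layout are hypothetical, with rules represented as (antecedents, consequents) pairs over attribute–value facts:

```python
# Illustrative sketch of the match-fire cycle for forward chaining.
# Facts are an attribute -> value mapping; each rule pairs a list of
# antecedent (attribute, value) tests with a list of consequents.

def forward_chain(facts, rules, goal_attribute):
    fired = set()                      # a rule can be fired at most once
    while True:
        fired_this_cycle = False
        for i, (antecedents, consequents) in enumerate(rules):
            if i in fired:
                continue
            if all(facts.get(a) == v for a, v in antecedents):
                facts.update(dict(consequents))   # fire: add new facts
                fired.add(i)
                fired_this_cycle = True
            if goal_attribute in facts:           # goal reached?
                return facts[goal_attribute]
        if not fired_this_cycle:       # no rule fired this cycle: give up
            return None

# The Kea rule from the New Zealand birds example.
rules = [([("can fly", "Yes"), ("parrot", "Yes"), ("alpine", "Yes")],
          [("bird", "Kea")])]
facts = {"can fly": "Yes", "parrot": "Yes", "alpine": "Yes"}
print(forward_chain(facts, rules, "bird"))  # Kea
```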

NetLogo Code: The procedure that performs forward reasoning.

to go-rules-forward-reasoning
  ;; performs forward reasoning on the problem
  let goal-attrib [goal-attribute] of selected-problem
  loop
  [
    let fired-count length fired-rules ; snapshot taken at the start of each cycle
    set cycle-count cycle-count + 1
    output-number "Cycle " cycle-count
    foreach sort rules
    [ ; for each rule in the rules base
      if not member? ? fired-rules
      [
        if (match-rule? selected-problem ?)
        [
          fire-rule ?
          output-fired-rule ?
        ]
        if (attribute-found? goal-attrib)
        [ user-message (word "The " goal-attrib " is a "
            fact-get goal-attrib ".")
          stop ]
      ]
    ]
    if (fired-count = length fired-rules)
    [ user-message
        "No new rules fired this cycle. Sorry - I can't help you."
      stop ]
  ]
end

The go-rules-forward-reasoning procedure has a loop that performs the match-fire cycle. This loop processes in sorted order each rule in the rules-base (i.e. the rules agentset). It first checks whether the rule has been fired previously, and if it hasn’t, then it tries to match the rule by calling the match-rule? reporter (shown in NetLogo Code). If this reporter returns true, then it will fire the rule by calling the fire-rule procedure. The procedure output-fired-rule will output information to the Output area in the Interface as a trace of the reasoning process if the switch trace-on has been set to On.

NetLogo Code: The code that defines how the rules are matched and fired.

to-report match-rule? [problem this-rule]
  ; tries to match this-rule to facts in the facts-base;
  ; otherwise asks the user for data if the attribute is not in the
  ; non-input-attributes list
  let attribute ""
  let val ""
  let value ""
  let question ""
  let choices []
  let this-walker nobody
  set this-walker start-new-walker nobody "" ""
  foreach [antecedents] of this-rule
  [
    set attribute first ?
    set value last ?
    update-walker this-walker attribute value
    if not attribute-found? attribute
    ; not found, so need to ask the user if we can
    [ ifelse (member? attribute [non-input-attributes] of problem)
      [ report false ] ; no, we can't ask the user
      [ ; ask the user a question
        set question get-input-question problem attribute value
        set choices get-input-choices problem attribute
        set val user-one-of question choices
        fact-put attribute val
      ]
    ]
    if not fact-found? attribute value
    [ report false ]
  ]
  report true
end
to fire-rule [this-rule]
  ; fires the matched rule this-rule
  let attribute ""
  let value ""
  let new-walker nobody
  foreach [consequents] of this-rule
  [
    set attribute first ?
    set value last ?
    ask active-walker
    [ hatch-walkers 1 [ set new-walker self ] ] ; clone the active walker
    update-walker new-walker attribute value
    fact-put attribute value
  ]
  set fired-rules fput this-rule fired-rules ; record the rule as fired once only
end

The match-rule? reporter processes each of the antecedents of the rule. It first extracts the attribute and its value from the current item in the antecedents list; for example, for [“can fly” “Yes”], the attribute is “can fly” and the value is “Yes”. The attribute combined with its value represents a specific proposition that needs to be true; in this case, the proposition that the bird can fly. Representing the proposition as an event in an event map, the stream name is set to the attribute, and the event that occurs on the stream is set to the value.

The match-rule? reporter then checks whether the attribute has already been allocated a value in the facts-base. If it hasn’t, and the attribute is an input attribute, the reporter asks the user what the value is. Finally, the reporter checks whether the attribute’s value matches the one specified by the rule. The fire-rule procedure processes each of the consequents of the rule. First it hatches a new walker that moves to the new event state in the event map using the update-walker procedure. This is done to animate the event map visualisation to show how the reasoning process is proceeding. Then the procedure puts a new fact consisting of the attribute and its value into the facts-base, and finally adds the rule to the list of fired rules so that it won’t be fired again.

The facts-base is implemented using the table extension as shown in NetLogo Code.

NetLogo Code: The code that defines various procedures and reporters for the facts-base.

to-report attribute-found? [attribute]
  ; reports true if the attribute is in the facts database
  ; reports false otherwise
  report (table:has-key? facts-base attribute)
end

to-report fact-found? [attribute value]
  ; reports true if the attribute and value are in the facts database
  ; reports false otherwise
  report (value = table:get facts-base attribute)
end

to-report fact-get [attribute]
  ; gets the value associated with the attribute from the facts-base
  report table:get facts-base attribute
end

to fact-put [attribute value]
  ; puts the attribute-value pair into the facts-base
  table:put facts-base attribute value
end

The code implements three reporters: attribute-found?, which reports true if the attribute by itself is found in the facts-base (i.e. regardless of whatever value is associated with it); fact-found?, which reports true if both the attribute and the specific value are found in the facts-base; and fact-get, which returns the value associated with the attribute. The procedure fact-put inserts a specific attribute and its value into the table.
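Since the facts-base is essentially a key–value table, the same four operations can be mirrored with an ordinary dictionary. The following is an illustrative Python analogue (the function names mirror the NetLogo reporters but are otherwise hypothetical):

```python
# Illustrative Python analogue of the facts-base helpers, built on a dict.
facts_base = {}

def attribute_found(attribute):
    """True if the attribute is present, regardless of its value."""
    return attribute in facts_base

def fact_found(attribute, value):
    """True if the attribute is present with this specific value."""
    return facts_base.get(attribute) == value

def fact_get(attribute):
    """Return the value associated with the attribute."""
    return facts_base[attribute]

def fact_put(attribute, value):
    """Insert the attribute-value pair into the facts-base."""
    facts_base[attribute] = value

fact_put("can fly", "Yes")
print(attribute_found("can fly"), fact_found("can fly", "No"))  # True False
```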

A problem with forward reasoning is that the search to find a particular goal can be inefficient as it can fire many rules unrelated to the goal. In this case, backward or goal-driven reasoning may be more appropriate. Backward reasoning employs a ‘find-prove’ process rather than the ‘match-fire’ process of forward reasoning. Given a specific goal, the backward reasoning process first searches the rules-base to find rules that have the goal in their THEN parts (i.e. as consequents). If a rule is found, and the IF part matches, then the rule is fired and the goal is proved. Otherwise, a new sub-goal is set up to try to prove the IF part that does not match. The process then recursively repeats the reasoning process with the new sub-goal, which may generate further sub-goals that need to be proved, until it has proven all the IF parts for the original goal and subsequent sub-goals. The code for backward reasoning implemented in the Knowledge Representation model is shown in NetLogo Code.

NetLogo Code: The procedure that performs backward reasoning.

to go-rules-backward-reasoning
  ;; performs backward reasoning on the problem
  let goal-choices []
  let goal-attrib ""
  let goal-value ""
  let attribute ""
  let value ""
  set goal-attrib [goal-attribute] of selected-problem
  foreach sort rules
  [ ; add each goal in all the rules to the goal-choices
    foreach [consequents] of ?
    [
      set attribute first ?
      set value last ?
      if (attribute = goal-attrib) and
         (not member? ? goal-choices)
      [ set goal-choices lput ? goal-choices ]
    ]
  ]
  let goal user-one-of "Which goal do you wish to prove?" goal-choices
  set goal-value (last goal)
  start-new-walkers goal-attrib goal-value
  ifelse not find-prove? selected-problem goal-attrib goal-value
  [ user-message "No more rules found. The goal is not proven." ]
  [ update-walker active-walker goal-attrib goal-value
    user-message (word "The goal is proven! The " goal-attrib " is a "
      goal-value ".") ]
end

The code first generates goal-choices, which is the list of all the goals from all of the rules. The user is then asked which goal they wish to prove. A walker is then drawn on the event map at the chosen goal state, and the reasoning process starts by calling the find-prove? reporter. This returns true if a rule can be found to prove the goal, or false if not, and the procedure then informs the user of the result. The code for the find-prove? reporter is shown in NetLogo Code.
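Gathering the goal choices amounts to collecting every distinct consequent for the goal attribute across the rules. An illustrative Python sketch of that collection step (names and data layout are hypothetical, not the model's own):

```python
# Illustrative: gather distinct goal values from the rules' consequents,
# mirroring how goal-choices is built for the goal attribute.
rules = [([("can fly", "Yes")], [("bird", "Kea")]),
         ([("can fly", "No")], [("bird", "Kiwi")]),
         ([("swims", "Yes")], [("bird", "Penguin")])]

goal_attribute = "bird"
goal_choices = []
for _, consequents in rules:
    for attribute, value in consequents:
        if attribute == goal_attribute and value not in goal_choices:
            goal_choices.append(value)
print(goal_choices)  # ['Kea', 'Kiwi', 'Penguin']
```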

NetLogo Code: The code that defines how the rules are found and proven.

to-report find-goal? [this-rule goal-attrib goal-val]
  ;; reports true if the goal is in the consequents of this-rule
  let attribute ""
  let value ""
  foreach [consequents] of this-rule
  [
    set attribute first ?
    set value last ?
    if (attribute = goal-attrib) and (value = goal-val)
    [ report true ]
  ]
  report false
end

to-report prove-goal? [problem this-rule goal-attrib goal-val]
  ; tries to prove this-rule from facts in the facts-base;
  ; otherwise asks the user for data if the attribute is not in the
  ; non-input-attributes list, or recursively calls find-prove? to see
  ; if the non-input attribute can be proved
  let attribute ""
  let val ""
  let value ""
  let question ""
  let choices []
  let this-walker nobody
  set this-walker start-new-walker nobody "" ""
  foreach [antecedents] of this-rule
  [
    set attribute first ?
    set value last ?
    update-walker this-walker attribute value
    if not attribute-found? attribute
    ; not found, so need to ask the user if we can
    [ ifelse (member? attribute [non-input-attributes] of problem)
      [ ifelse not find-prove? problem attribute value
        [ report false ] ; we can't ask the user, and the sub-goal can't be proved
        [ fact-put attribute value ]
      ]
      [ ; ask the user a question
        set question get-input-question problem attribute value
        set choices get-input-choices problem attribute
        set val user-one-of question choices
        fact-put attribute val
      ]
    ]
    if not fact-found? attribute value
    [ report false ]
  ]
  update-walker this-walker goal-attrib goal-val
  report true
end

to-report find-prove? [problem goal-attrib goal-val]
  ;; recursive reporter called by the go-rules-backward-reasoning procedure
  output-goal goal-attrib goal-val
  foreach sort rules
  [ ; for each rule in the rules base
    if (find-goal? ? goal-attrib goal-val)
    [
      if (prove-goal? problem ? goal-attrib goal-val)
      [
        start-new-walkers goal-attrib goal-val
        output-fired-rule ?
        report true
      ]
    ]
  ]
  report false
end

The find-prove? reporter calls two further reporters, find-goal? and prove-goal?. For each rule in the rules-base, it first checks whether the goal is found by calling the find-goal? reporter, and then, if found, tries to prove it by calling the prove-goal? reporter. The find-goal? reporter returns true if the attribute and value specified by the goal-attrib and goal-val parameters are found in one of the consequents of the specific rule this-rule passed as a parameter. The prove-goal? reporter first tries to prove the specific rule this-rule is true from facts in the facts-base. Otherwise it checks each of the antecedents of the rule to see if they are true, asking the user for data if the attribute is an input attribute; otherwise it recursively calls the reporter find-prove? to see if the non-input attribute can be proved. The start-new-walker, start-new-walkers and update-walker procedures are used to animate the event map to show how the reasoning process is proceeding.
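The find-prove recursion can also be sketched compactly in Python. This is an illustrative analogue, not the model's NetLogo code: user questions are replaced by a prepared dictionary of answers (a hypothetical stand-in for user input), and attributes with no prepared answer play the role of non-input attributes that must be proved recursively:

```python
# Illustrative sketch of the find-prove process for backward chaining.
# Rules pair antecedent (attribute, value) tests with consequents; the
# 'answers' dict stands in for asking the user questions.

def backward_chain(goal, rules, facts, answers):
    """Try to prove goal = (attribute, value); return True if proven."""
    attribute, value = goal
    if facts.get(attribute) == value:
        return True
    for antecedents, consequents in rules:
        if goal in consequents:                       # find: goal in THEN part
            if all(prove(a, rules, facts, answers) for a in antecedents):
                facts.update(dict(consequents))       # prove: fire the rule
                return True
    return False

def prove(subgoal, rules, facts, answers):
    attribute, value = subgoal
    if attribute not in facts:
        if attribute in answers:                      # input attribute: "ask"
            facts[attribute] = answers[attribute]
        else:                                         # non-input: recurse
            return backward_chain(subgoal, rules, facts, answers)
    return facts.get(attribute) == value

# The Kea rule again, with the user's answers prepared in advance.
rules = [([("can fly", "Yes"), ("parrot", "Yes"), ("alpine", "Yes")],
          [("bird", "Kea")])]
answers = {"can fly": "Yes", "parrot": "Yes", "alpine": "Yes"}
print(backward_chain(("bird", "Kea"), rules, {}, answers))  # True
```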

Which type of reasoning should be used when designing a rule-based expert system? The best answer to this question is for the knowledge engineer to observe the expert to see which type of reasoning they are using. If the expert looks at the data first, and then infers new knowledge from it, then a forward reasoning process is most appropriate. On the other hand, if the expert begins with a hypothetical solution first, and then tries to find facts that will help prove the hypothesis, then backward reasoning is likely to be the most appropriate. Analysis and interpretation type problems lend themselves to forward reasoning solutions, whereas diagnosis type problems lend themselves to backward reasoning. DENDRAL, an expert system that determines the molecular structure of unknown soil samples, is an example of the former, whereas MYCIN, an expert system for the diagnosis of infectious blood diseases, is an example of the latter (Negnevitsky).

Most expert systems today use both forward and backward reasoning. The primary method of inference used is the latter as it minimises the number of pointless queries, a fault of the forward reasoning approach. However, an expert system can switch to forward reasoning when a new fact is added to the facts-base to make maximum use of the new data. Expert systems such as DENDRAL and MYCIN took considerable expense and effort to develop (estimated at between 20 and 40 person-years to complete). A contributory factor to this is the widely acknowledged difficulty of acquiring the knowledge from human experts; this problem is called the ‘knowledge acquisition bottleneck’.

However, there are now sophisticated expert system shells and toolboxes for intelligent systems that use technologies such as machine learning, neural networks and evolutionary computation that dramatically reduce the development time from years down to months. For a more comprehensive discussion of rule-based expert systems and intelligent systems in general, the reader should refer to Negnevitsky.


