Rather than trying to define what knowledge is, we can instead try to define different types of knowledge, which is sometimes easier. For example, a cheetah and a tiger are two types of animal with distinct characteristics that are relatively easy to define compared to defining what an animal is.
Defining different types of knowledge can help provide us with some insight into how we might build agents that exhibit knowledge (including knowledge-based systems considered as agents).
We can define an agent as having declarative knowledge if it declares that some statement is true. To determine whether specific knowledge is declarative, an agent can ask the following question: “Can this knowledge be true or false?”; or, put another way: “Is it true or false that X?”, where X is the statement in question. If the question is grammatical, then the knowledge can be deemed declarative. For example, the following question makes sense: “Is it true or false that the New Zealand moa is extinct?”; therefore the knowledge that the New Zealand moa is extinct is declarative. The following questions do not make sense: “Can riding a bicycle be true or false?” and “Is it true or false that riding a bicycle?” Contrast these with the following question, which is grammatical and therefore identifies declarative knowledge if an agent knew it: “Is it true or false that the New Zealand moa can ride a bicycle?” In other words, when an agent declares that something is either true or false, that knowledge is said to be declarative knowledge; the fact that the knowledge may be false, or may not make sense in the real world, still does not stop it from being declarative knowledge.
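The idea above can be illustrated with a minimal sketch (the class and statements below are illustrative assumptions, not from the text): declarative knowledge can be modelled as a store of statements that an agent has declared true or false, where even a false or real-world-nonsensical statement still counts as declarative.

```python
class DeclarativeKB:
    """Toy store of declarative statements and their declared truth values."""

    def __init__(self):
        self.declarations = {}  # statement text -> bool

    def declare(self, statement, truth_value):
        # The agent declares the statement true or false; a false (or
        # real-world-nonsensical) statement is still declarative knowledge.
        self.declarations[statement] = truth_value

    def query(self, statement):
        # True/False if already declared; None means the agent would have
        # to search (consult another agent, or observe the environment).
        return self.declarations.get(statement)


kb = DeclarativeKB()
kb.declare("the New Zealand moa is extinct", True)
kb.declare("the New Zealand moa can ride a bicycle", False)
```

Here `kb.query("the New Zealand moa is extinct")` returns `True`, while a statement the agent has never declared returns `None`, signalling that a searching action would be needed.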
Concerning truth (whether a declaration is either true or false), it should be noted that, in reality, absolutes are rare and very little of our knowledge is ever completely true or completely false (as was pointed out in the previous section).
How does this definition of declarative knowledge fit with knowledge defined above as the absence of the need for search? Consider the following: an agent without prior knowledge of whether a declaration is true or false must first consult another agent, or must explore or make observations of the environment, to find out what is true or false. Both are actions that the agent must take in order to find the answer. The agent is said to know the answer already if it does not need to perform a searching action.

We can define an agent as having procedural knowledge if it knows how to perform a sequence of actions in order to ensure that a declaration X will become true. To determine whether specific knowledge is procedural, an agent can ask the following question: “What actions do I need to perform in order that I can declare that X is true?” If the question makes sense, then the knowledge can be deemed procedural. For example, the following question makes sense: “What actions do I need to perform in order that I can declare that ‘I am riding a bicycle’ is true?”
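Procedural knowledge of this kind can be sketched as a named sequence of actions that, executed in order, makes the goal declaration true. The actions and the toy transition model below are illustrative assumptions, not part of the text.

```python
GOAL = "I am riding a bicycle"
REQUIRED_ACTIONS = {"mount the bicycle", "push off", "pedal", "balance"}


def apply_action(state, action):
    # Toy transition model: record the action as done; once every required
    # action has been performed, the goal declaration becomes true.
    new_state = state | {action}
    if REQUIRED_ACTIONS <= new_state:
        new_state.add(GOAL)
    return new_state


def ride_bicycle(state):
    # The procedural knowledge itself: the ordered action sequence that
    # answers "What actions do I need to perform so that the declaration
    # 'I am riding a bicycle' becomes true?"
    for action in ["mount the bicycle", "push off", "pedal", "balance"]:
        state = apply_action(state, action)
    return state
```

After running the procedure from an empty state, the declaration holds: `GOAL in ride_bicycle(set())` is true, which is exactly the link between procedural and declarative knowledge described above.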
The question of whether knowledge in knowledge-based systems should be declarative or procedural has been a furious debate within the field of Artificial Intelligence. As is often the case with these “Is a platypus a mammal?” type debates, there is merit on both sides of the argument, as some knowledge is inherently declarative and other knowledge is inherently procedural – for example, when humans count they are applying a procedure, but factual information concerning the names of people and the names of locations on a map is declarative. The debate has provided insight into some of the issues concerning the categorization of knowledge, but from a pragmatic, design-based perspective, AI researchers focus more on building useful systems, and therefore will choose the most appropriate tool for the task at hand.
Task knowledge is sometimes distinguished from procedural knowledge. Task knowledge can be considered a specialised form of procedural knowledge where the purpose of the actions is to solve a task (e.g. find answers to a specific question). Behavioural knowledge is knowledge that an agent has about the likely outcomes of behaviours (of itself and other agents). An agent has episodic knowledge if it knows when a statement X became true. An agent has explanatory knowledge if it can explain what caused the sequence of actions that led to the statement X becoming true. Finally, we can state that an agent has inferred knowledge if it has used existing knowledge to determine new knowledge that was not available by any other means.
We can distinguish between the different types of knowledge by the types of questions an agent can answer correctly using its knowledge. Declarative knowledge can be used to answer “What is …?” and “Where is …?” questions; episodic knowledge can be used to answer “When did … occur?” questions; procedural and task knowledge can be used to help answer “How can I/you …?” questions; behavioural knowledge can be used to answer “What if I/you …?” questions; and inferred knowledge can be used to answer “If … is true, then is … true?” and “What if … were true?” questions. We can also broaden the meaning of inferred knowledge beyond the traditional logic-based definition that involves the inference of whether some statement is true or false.
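The mapping from question forms to knowledge types can be sketched as a simple classifier. The prefix patterns below are simplified assumptions taken from the question forms listed above; a real system would need proper language understanding.

```python
# Question-form prefix -> knowledge type, per the taxonomy above.
# "What is" must be checked before "What if" would match; dicts preserve
# insertion order, and the prefixes here do not overlap in practice.
QUESTION_TO_KNOWLEDGE = {
    "What is": "declarative",
    "Where is": "declarative",
    "When did": "episodic",
    "How can": "procedural/task",
    "What if": "behavioural/inferred",
    "If": "inferred",
}


def classify_question(question):
    """Return the knowledge type needed to answer a question, or 'unknown'."""
    for prefix, knowledge_type in QUESTION_TO_KNOWLEDGE.items():
        if question.startswith(prefix):
            return knowledge_type
    return "unknown"
```

For example, `classify_question("When did the moa become extinct?")` returns `"episodic"`, and `classify_question("How can I ride a bicycle?")` returns `"procedural/task"`.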
Consider the situation where an agent is having a conversation with another agent, such as when one person is talking to another, or perhaps when a person is talking to a chatbot. Let us say that this conversation is being observed by an outside agent (a third person, say, or perhaps a person observing the person–chatbot conversation within a Turing Test situation). This observer will be able to judge how well she thinks each of the agents has done in maintaining their side of the conversation. This will be determined by whether the observer feels that the responses are appropriate. In a sense, the observer has used her own knowledge of what is appropriate to make this judgement.
The agents have also done the same thing in attempting to maintain the conversation. This can be considered an example of inferred knowledge. In this case, the knowledge of what is an appropriate response cannot be directly obtained by consulting some lookup table of appropriate responses, since the number of language statements can be considered to be unbounded (the number of things a person can say, and the number of responses, is essentially infinite). Instead, the appropriate response must be constructed (i.e. inferred) in some manner.
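The contrast between looking knowledge up and constructing it can be illustrated with a minimal forward-chaining sketch: new facts are derived from existing facts via rules rather than retrieved from a table. The facts and the single rule below are illustrative assumptions.

```python
# Existing knowledge: a small set of facts and one inference rule.
facts = {"the moa is a bird", "birds are animals"}
rules = [
    # (set of premises, conclusion)
    ({"the moa is a bird", "birds are animals"}, "the moa is an animal"),
]


def infer(facts, rules):
    """Forward chaining: repeatedly apply rules until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived
```

Running `infer(facts, rules)` yields a knowledge base containing "the moa is an animal" – a fact that was never stored directly, only constructed, which is the essence of inferred knowledge.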