What, however, are some good design objectives for Artificial Intelligence? To devise suitable objectives, we first need to propose an overall design goal. One design goal might be that we strive to mimic human intelligence. The motivation for doing this is two-fold–first, so that we can build intelligent artefacts that might aid us in some manner, and secondly, so that we might develop a better understanding of our own intelligence as a result.
Yet our own intelligence is complex, puzzling and in many respects hidden from us. We have labels for aspects of our intelligence that we often use to distinguish ourselves from other animals, such as knowledgeable behaviour, intelligent behaviour, rationality, self-awareness, thoughtfulness and consciousness, and perhaps we can use these for our design objectives, as shown below.
Design for an Artificial Intelligence system.
Design Principle: An AI system should be an agent-oriented system.
Design Goal: An AI system should mimic human intelligence.
Design Objective: An AI system should act in a knowledgeable way.
Design Objective: An AI system should act intelligently.
Design Objective: An AI system should act rationally.
Design Objective: An AI system should act as if it is self-aware.
Design Objective: An AI system should act as if it thinks.
Design Objective: An AI system should act as if it is conscious.
An obvious failing with these design objectives is that they clearly break the SMARTER objectives mnemonic on all the major attributes.
So, clearly, these do not seem to be very good objectives. Yet the objective that a system act in a knowledgeable way is the implicit objective of a knowledge-based system, the objective that it act intelligently is the implicit objective of intelligent systems, and the others are key aspects of human intelligence that we will need to mimic in order to achieve the overall Design Goal.
Clearly, we need to be much more specific with our design objectives. In addition, it is not clear how achievable each of these objectives is. At first glance, regarding the first objective, that of acting in a knowledgeable way, one could argue that such systems already exist today. Therefore, this objective might seem to be more readily achievable than the others. But what does it really mean to act in a knowledgeable way? And what do we mean by ‘knowledge’, for that matter? If we mean that the agent-oriented system must have sufficient knowledge of its environment, itself and other agents in order to act in a knowledgeable manner, and demonstrate understanding of that knowledge, then achieving knowledge may be as difficult as achieving any of the other objectives.
It may in fact be the key to the other objectives, and once it is achieved, the others may perhaps be achieved more easily. One could also argue that the objective that the system exhibit intelligent behaviour already covers the other objectives–a system must exhibit knowledge, rationality, self-awareness, thoughtfulness and consciousness if it is to mimic intelligent human behaviour. However, it depends on how we define these properties. If we wish to use the term ‘intelligence’ in a manner similar to how we use it in English, this would suggest that a narrower definition might be more appropriate.
For example, we can say (in English) that a mathematician exhibits intelligence when solving an equation, and an inventor exhibits intelligence when creating a new system design that is patentable. Yet computer systems have already demonstrated the ability to do both tasks in particular domains. Hence, one can claim that computers have already exhibited intelligence, at least in the narrow sense in which the term is being used in this context. However, although everyone would agree that the mathematician and the inventor are thoughtful and conscious, very few people would agree that these computer systems exhibit such properties.
An important aspect of intelligence is the ability to solve problems. AI systems have demonstrated a wide variety of problem-solving capabilities, as described in Section, with varying degrees of success. However, AI systems have not yet demonstrated the ability to make a decent attempt at solving all of these problems unaided, without the benefit of solutions devised by humans–that is, by learning how to solve them from scratch, either by being taught by an external teacher or through progressive improvement by trial and error. A mark of human intelligence is that we have the ability to solve complex tasks by starting out as novices and learning through experience how to become experts. Importantly, we can also adapt solutions from one problem domain to another, innovating as a result, and we also have the ability to come up with completely novel solutions.
One way of making our design objectives more specific is to state clearly how we are going to measure when they have been achieved. We can perhaps use the Turing Test as a candidate test for conversational intelligence to make the Design Objective more specific. But what about the other design objectives? Are there other tests we can use, or invent, that might help us out? Indeed there are–for example, there exists a well-known test for self-awareness.
What about the other design objectives? Rationality is an attribute often assigned to intelligent agents–but what exactly do we mean when we use this term? Since our overall design goal is to mimic humans, we can look at ourselves for inspiration on how to define what might be rational behaviour as opposed to irrational behaviour. For example, it would not be rational for a person to harm himself or herself.
Neither would it be rational for that person, after discovering a cure for cancer, to fail to tell other people about it. That is, we can regard (by common use of the term in natural language) the sharing of knowledge as a rational thing to do. Rationality is also associated with personal preferences–for example, one person might think that being a vegetarian is irrational, yet a vegetarian might think the opposite: that someone who eats meat is irrational.
We could initially define that an agent acts rationally if it always acts to ensure its own survival and the survival of its own family or others of its own kind. Then we could devise tests to see how the agent acts in situations where it must decide between various courses of action. We can create these situations in some virtual environment, or in a real environment for robotic agents. For example, does the robotic agent act in a rational manner similar to humans when it is confronted with a choice between safely exiting from a burning building and going through a wall of fire to find out whether other people are still alive in the building, knowing that such an action will almost certainly lead to its own destruction? A rational being might consider its own survival first, whereas a robot without the ability to think in such a manner is simply following a programmed sequence of actions (i.e. it is neither rational nor irrational, just a program).
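The burning-building dilemma can be phrased, in one narrow sense of the word ‘rational’, as a choice that maximises expected utility. The sketch below is purely illustrative–the action names, utilities and outcome probabilities are all invented, and expected-utility maximisation is only one possible formalisation of the rationality objective discussed here.

```python
# A toy sketch of 'rational' choice as expected-utility maximisation.
# All utilities and probabilities here are invented for illustration.

def expected_utility(outcomes):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical outcomes as (probability, utility) pairs:
# exiting safely almost guarantees survival; searching the building
# risks destruction for a chance of rescuing others.
actions = {
    "exit_safely": [(0.99, 10), (0.01, -100)],
    "search_for_survivors": [(0.20, 50), (0.80, -100)],
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # with these numbers: exit_safely
```

A robot that simply executes whichever action its program dictates, without any such valuation of outcomes, remains neither rational nor irrational in this sense.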
Thoughtfulness and consciousness are considered by some to be the “holy grails” of Artificial Intelligence. It is not at all clear how we might go about measuring these attributes. Perhaps the best thing we can do at the moment is to acknowledge the problem by leaving the design objectives for these attributes as vague as they are in the design above, and put the problem aside until we have a better understanding of them and of how we might measure when they have been achieved.
We can also consider an alternative test for intelligence. A term often heard in the games and movie industries is “suspension of disbelief”. That is, the goal of the creators is to suspend, in the mind of the person playing the game or watching the movie, their belief that it is not real–the longer the suspension of disbelief, the better the entertainment. In a sense, the games and movie designers are telling a story–they want people to be immersed in the narrative they have created, just as the author of a novel wishes her readers to be immersed in the story she has created. For an adequate AI test for believability, however, suspension is not sufficient. We need to go further and insist that the observer is not able to tell the difference from real-life behaviour–even though, as with the Turing Test, they know in advance that at least one of the agents they are observing is artificial rather than real.
Hence, we can use these insights to propose another candidate test for intelligence, one based on whether what is being observed is believable or not. If, in a multi-player game, say–or a movie–the animation of a virtual agent is so good that you cannot tell the difference from a real agent, even though you know you are playing against or observing at least one computer agent, then the virtual agent is said to have passed the test.
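One way to make this believability test concrete is to score an observer’s ability to tell artificial behaviour from real behaviour: if the observer does no better than chance, the artificial agent passes. The harness below is a hypothetical sketch–the trace format, the labels and the pass criterion are all invented for illustration.

```python
import random

# Hypothetical harness for a believability test: an observer labels
# behaviour samples as "human" or "artificial"; accuracy near chance
# (0.5) means the artificial agent is indistinguishable, so it passes.

def believability_score(traces, observer, trials=1000, seed=0):
    """Fraction of trials in which the observer labels a sample correctly."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        label = rng.choice(["human", "artificial"])
        sample = rng.choice(traces[label])
        if observer(sample) == label:
            correct += 1
    return correct / trials

# If the artificial traces are identical to the human ones, even the
# best observer can only guess; this one always answers "human".
traces = {
    "human": ["move", "wait", "speak"],
    "artificial": ["move", "wait", "speak"],
}
score = believability_score(traces, lambda sample: "human")
print(score)  # close to 0.5: the artificial agent passes
```

A real version of such a test would replace the toy traces with recorded game or animation behaviour, and the fixed-answer observer with human judges.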
We can now consider drafting variations to the original design objectives based on the above insights. Note that the design objectives listed below are more a work in progress (we are trying to make them measurable in some manner as a first step) than cast in stone; we should alter them using further insights gained during the design process. Other designers will craft different objectives to meet their specific needs. The real purpose of listing them here is to highlight that, as potential AI designers ourselves, we need to make such design objectives explicit–and state them upfront at the beginning of any AI design project–rather than leave them unclear, as has been a failing of many AI projects to date. We will explore various agent technologies in the next volume of this book series to see how realistic they are.
Design: Modified design objectives for an Artificial Intelligence system.
Design Objectives for Believable Agents:
Design Objective 1:
An AI system should pass the believability test for acting in a knowledgeable way: it should have the ability to acquire knowledge; it should also act in a knowledgeable manner, by exhibiting knowledge–of itself, of other agents, and of the environment–and demonstrate understanding of that knowledge.
Design Objective 2:
An AI system should pass the believability test for acting in an intelligent and reasoning manner. It should be able to solve problems for itself, through observation and learning, and through reasoning. It should also be able to apply solutions from one problem domain to another without being shown how to do it.
Design Objective 3:
An AI system should pass the believability test for acting in a rational manner: firstly, by ensuring the best chances for survival of itself and its own family or others of its kind; secondly, by sharing the knowledge it has gained with other agents; and thirdly, by choosing to act according to its own personal preferences.
Design Objective 4:
An AI system should pass the Mirror Test and believability test for acting as if it is self-aware.
Design Objective 5:
An AI system should pass the believability test for acting as if it thinks and is conscious.
Design Objectives for Conversational Agents:
Design Objective 6:
An AI system should pass the Turing Test for intelligence, including a variation of the test outlined in Thought Experiment to test for rationality, thoughtfulness and consciousness.