Some approaches to Knowledge Representation and AI

Knowledge representation concerns the problem of how to express knowledge in a knowledge base. A primary purpose of knowledge representation is to model intelligent behaviour, with the assumption that intelligent behaviour in an agent requires knowledge: of itself and other agents, of objects and their relationships, of how to solve tasks, and of the laws that govern the environment the agent finds itself in.

The ‘Knowledge Representation Hypothesis’ (attributed to Smith) supposes that, for intelligent behaviour, agents make use of a knowledge base that represents knowledge about the world in some manner (Brachman and Levesque). The ‘Knowledge Representation Controversy’ concerns how that knowledge should be represented: primarily by a symbol-based approach, or by a non-symbolic approach.

Approaches to Knowledge Representation in Artificial Intelligence

The symbolic approach is the classical approach to AI, sometimes called “Good Old Fashioned AI” or GOFAI. It is based on the ‘physical symbol system hypothesis’, which states that a system based on the processing of symbols has the “necessary and sufficient means for general intelligent action” (Newell and Simon). Symbols in this case are things that represent or stand for something else by association. Human language consists of symbols; for example, the word ‘moa’ is a symbol that represents the concept of the extinct New Zealand bird of that name. In the field of knowledge representation, symbols are usually denoted by an identifier in a programming language.
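
A minimal sketch may make this concrete. Using Python as the illustration language (the text’s own models use NetLogo), symbolic facts about the moa can be stored as (subject, relation, object) triples of identifiers; the relation names, facts and query helper below are invented purely for illustration:

    # Symbolic knowledge: each fact is a triple of identifiers.
    # The relations and facts here are illustrative only.
    knowledge_base = [
        ("moa", "is-a", "bird"),
        ("moa", "native-to", "New Zealand"),
        ("bird", "is-a", "animal"),
    ]

    def query(subject, relation):
        """Return every object related to `subject` by `relation`."""
        return [o for (s, r, o) in knowledge_base if s == subject and r == relation]

    print(query("moa", "is-a"))  # ['bird']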

The symbolic approach to AI is a “knowledge-based” approach requiring the building of knowledge bases with substantial knowledge of each problem domain. It uses a top-down design philosophy consisting of several levels: the “knowledge level” at the top, which specifies all the knowledge that the system needs; the “symbol level”, where the knowledge is specified in symbolic structures and identifiers in some programming language (for example, using lists or tables in NetLogo); and the “implementation level”, where the symbol-processing operations are actually implemented (Nilsson). There are immediate problems with a symbolic approach to representing knowledge. For example, try using symbols (e.g. words) to describe the following:

  • what the Mona Lisa painting looks like (to a person blind from birth);
  • the sound of bagpipes (to a person born deaf);
  • the taste of milk;
  • what sandpaper feels like;
  • what coffee smells like.

Words seem inadequate for the task. Note that all five senses are represented here – vision, hearing, taste, touch and smell – and using words to describe what we sense can be problematic: being left “struggling for words” is a common English expression, as is “a picture paints a thousand words”. Concerning the problem of expressing a person’s face using only symbols – the face in the painting of the Mona Lisa, for example – no amount of words seems to suffice, although the painting has been the subject of countless essays and books. An opposing approach, called ‘connectionism’, rejects the classical symbolic approach and instead advocates that intelligent behaviour is the result of sub-symbolic processing – that is, the processing of stimuli rather than symbols – for example, using artificial neural networks that comprise interconnected networks of simple processing units.

Here the knowledge is stored as a pattern of weights on the connections between neurons. This approach uses a bottom-up design philosophy, or ‘animat’ approach: it first tries to duplicate the stimulus-processing abilities and control systems of simpler animals such as insects, and then proceeds gradually up the evolutionary ladder with increasing complexity. This approach highlights the ‘symbol grounding problem’ – the problem of how symbols get their meaning – and postulates the ‘physical grounding hypothesis’, which states that the meaning of symbols needs to be grounded within an agent’s physical, embodied experience through its interaction with the environment.
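
By way of contrast with the symbolic sketch above, here is a toy Python sketch of sub-symbolic representation; the task (learning the logical AND of two stimuli) and the learning rate are invented for illustration. After training, the perceptron’s “knowledge” exists only as numbers, with no human-readable symbols anywhere:

    # Perceptron learning rule on a toy task (logical AND of two stimuli).
    weights = [0.0, 0.0]
    bias = 0.0
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    for _ in range(20):
        for (x1, x2), target in examples:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output          # zero when the prediction is right
            weights[0] += 0.1 * error * x1   # adjust the connection weights
            weights[1] += 0.1 * error * x2
            bias += 0.1 * error

    print(weights, bias)  # the learned "knowledge" is just these three numbers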

The issue of whether knowledge and intelligent behaviour should be represented symbolically or non-symbolically has been an ongoing debate within the field of Artificial Intelligence ever since the sub-symbolic approach to AI emerged with the revival of connectionism in the mid-1980s and with Brooks’ ideas on a bottom-up, embodied, situated, behaviour-based approach to AI. As is often the case with this type of debate, much like the declarative-versus-procedural-knowledge debate mentioned above, there is merit on both sides of the argument: some tasks, such as answering queries and rule-based reasoning, are more naturally suited to a symbolic approach, while some types of knowledge present difficulties for either approach – facial recognition for symbolic processing, for example, and natural language information for sub-symbolic processing using artificial neural networks.

A number of other approaches have been devised, some proposing a combination of symbolic and sub-symbolic processing, such as situated automata (Kaelbling & Rosenschein) and conceptual spaces (Gärdenfors). We will examine the latter approach in more detail, as it is useful for highlighting some important issues concerning knowledge representation. Gärdenfors postulates that what is fundamental to our human cognitive abilities is our capacity for processing concepts; these emerge from a distributed connectionist representation at the lowest level, where stimuli from receptors are processed, and combine to form symbolic structures at the highest level, as shown in the Table below.

Table: The conceptual spaces cognitive model (Gärdenfors).


Concepts form the basis of knowledge. A concept is a unit of meaning that represents an abstract idea or category. We can think of concepts as being analogous to atoms – atoms are the basic building blocks of matter, just as concepts are the basic building blocks of knowledge. In Gärdenfors’ approach, concepts are represented as regions in an n-dimensional space, analogous to the way a topographical map represents terrain, with similar concepts located in geometric regions that are spatially near to each other, as shown in the Figure below.

Figure: An example of concepts represented geometrically (Gärdenfors).


In this example, different types of animals, such as mammals, reptiles and birds, are located in different regions of the space. Gärdenfors uses a wide range of experimental evidence from many fields to support his arguments. For example, from the field of cognitive psychology, there is empirical evidence that humans make use of prototypes to provide a cognitive reference point for each concept.
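
A small Python sketch can illustrate the geometric idea; the two quality dimensions and all the coordinates below are invented purely for illustration:

    import math

    # Concepts as locations in a 2-dimensional quality space
    # (hypothetical dimensions: body size, body temperature).
    concepts = {
        "sparrow": (0.10, 0.90),
        "robin":   (0.15, 0.90),
        "emu":     (0.80, 0.90),
        "lizard":  (0.20, 0.30),
    }

    def similarity(a, b):
        """Nearby concepts are more similar: inverse of Euclidean distance."""
        return 1.0 / (1.0 + math.dist(concepts[a], concepts[b]))

    print(similarity("sparrow", "robin"))   # high: the regions are adjacent
    print(similarity("sparrow", "lizard"))  # lower: the regions are far apart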

Gärdenfors’ approach naturally lends itself to the representation of prototypes. A prototypical instance used as a reference point for the concept of a bird – a robin, for example – will be located centrally within the parent concept’s geometric region (as in the Figure above), whereas non-prototypical instances such as penguins and emus end up further from the centre.

We can do a simple thought experiment to illustrate this. Think of a bird. Which bird did you imagine? Experiments have shown that most people will think of a robin, a sparrow or a similar bird rather than an emu or a penguin. The mechanism Gärdenfors proposes for the construction of concepts is a process based on Voronoi tessellation, in which prototypes are used to break the space up into convex regions, as shown by the straight lines in the Figure above.
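
Here is a minimal Python sketch of that mechanism, with invented prototype coordinates: assigning every point in the space to its nearest prototype is exactly what carves the space into convex Voronoi cells.

    import math

    # Hypothetical prototype locations in a 2-dimensional quality space.
    prototypes = {"bird": (0.2, 0.8), "mammal": (0.5, 0.3), "reptile": (0.8, 0.7)}

    def categorise(point):
        """Return the category whose prototype lies nearest to `point`."""
        return min(prototypes, key=lambda name: math.dist(prototypes[name], point))

    print(categorise((0.25, 0.75)))  # 'bird': deep inside the bird cell
    print(categorise((0.52, 0.76)))  # 'reptile', but only just: near the bird/reptile border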

The multi-dimensional geometry of a conceptual space is constructed from what Gärdenfors calls ‘quality dimensions’, which correspond to the different ways an agent’s stimuli are judged to be similar or different. The primary function of the quality dimensions is to represent various qualities or features of objects; some examples are temperature, weight, brightness, pitch, and the spatial dimensions of height, width and depth. Gärdenfors groups related quality dimensions into ‘domains’. He uses the concept of an ‘apple’ to illustrate the distinction between a domain and a region:

Table: A representation of the concept ‘apple’ (Gärdenfors).


As a further example, experiments with human perception of the colour domain show that the colour conceptual space is described using three quality dimensions – brightness, hue and saturation. (Taste for humans, in comparison, has four quality dimensions – sweetness, sourness, bitterness and salinity.) The Colour Cylinder NetLogo model illustrates what this space looks like. The model draws a colour circle comprising the various hue and saturation values for a specific brightness value, which can be adjusted using an Interface slider over the integer range 0 to 255, as shown in the Figure below. For lower brightness values (see the left image in the figure), the colours become progressively darker, with the entire circle becoming black when the brightness value is set to 0, whereas the colours taper towards white at the centre of the circle when the brightness slider is set to 255 (as in the right image).

Figure: Screenshots of the colour circles produced by the Colour Cylinder NetLogo model.


Settings: hue-increment = 0.05 and saturation-increment = 0.05; brightness = 50 (left image), brightness = 100 (middle image), brightness = 255 (right image).
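
The NetLogo model itself is not reproduced here, but a short Python sketch using the standard colorsys module shows how the same hue–saturation–brightness space behaves, assuming the 0–255 scale for each component that the model’s slider uses:

    import colorsys

    def hsb_to_rgb(hue, saturation, brightness):
        """Convert 0-255 HSB components to 0-255 RGB components."""
        r, g, b = colorsys.hsv_to_rgb(hue / 255, saturation / 255, brightness / 255)
        return round(r * 255), round(g * 255), round(b * 255)

    print(hsb_to_rgb(42, 255, 255))  # a saturated yellow at full brightness
    print(hsb_to_rgb(42, 255, 50))   # same hue at low brightness: nearly black
    print(hsb_to_rgb(42, 0, 255))    # zero saturation: white, the circle's centre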

The Colour Cylinder displays the full range of possible colours across all hue, saturation and brightness values. Humans use a wide range of words in language to describe colour. Empirical evidence shows that, across different cultures and ethnic backgrounds, humans agree on similar areas of colour space for the basic colour terms such as red, green and blue. However, the Colour Cylinder model clearly shows that there are no sharp boundaries between the colours, with one colour gradually merging into another. It is therefore impossible to demarcate exactly the region associated with a specific colour.

For example, try drawing the boundary for the colour yellow in the right image of the Figure above. We can clearly see where the colour yellow is, but its boundaries with the adjacent colours – red and green on either side of it, and white in the centre – are fuzzy. The task of determining what is yellow gets even more difficult in three dimensions, when we include the variation in brightness as well. This is why disagreements occur, since each human will have a slightly different perception of what the yellow region is.
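
One way to model such fuzzy boundaries, sketched below in Python with invented prototype hues and an invented fall-off width, is to make membership in a colour term a graded quantity that decays with distance from the term’s prototype, rather than a hard yes-or-no region:

    import math

    prototype_hues = {"red": 0, "yellow": 60, "green": 120}  # degrees on the hue circle

    def membership(hue, term, width=30.0):
        """Graded membership: Gaussian fall-off with angular distance from the prototype."""
        d = abs(hue - prototype_hues[term])
        d = min(d, 360 - d)                  # distances wrap around the circle
        return math.exp(-(d / width) ** 2)

    for hue in (60, 75, 90, 105):            # walking from yellow towards green
        print(hue, round(membership(hue, "yellow"), 2), round(membership(hue, "green"), 2))

At a hue of 90 degrees the two memberships are equal: the point sits on the fuzzy yellow–green border, with no sharp cut-off anywhere.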

Empirical evidence shows that variations in conceptual regions occur not just for colour categories but also for a variety of other basic categories such as taste. Gärdenfors also allows for abstract concepts, and provides a compelling explanation of why defining categorical knowledge is so difficult, and why a purely symbolic approach will never be completely satisfactory. The platypus, the bat and the Archaeopteryx depicted in the Figure, for example, all present difficulties for categorization, the latter being particularly difficult as it straddles the border between the category of reptiles and the category of birds.

Gärdenfors also explains how the meaning of a combination of concepts such as ‘wooden spoon’ is determined from the correlations between the domains that are common to the separate concepts ‘wood’ and ‘spoon’. In this case the relevant domains are size and material, and the correlation between them results in us thinking of a wooden spoon as being large rather than small when visualising what it may look like. In some cases, the meaning of a concept is determined by the context in which it occurs. For example, tap water at the same lukewarm temperature can be perceived as hot if placed in a glass, and cold if placed in a bathtub.
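
A toy Python sketch of the ‘wooden spoon’ point, with entirely invented sizes: because material and size are correlated across the spoons we have encountered, fixing the material domain to wood shifts the expected value on the size domain.

    # Hypothetical observations: (material, length in cm) of spoons.
    spoons = [
        ("metal", 15), ("metal", 17), ("metal", 16),
        ("wood", 30), ("wood", 28), ("wood", 33),
    ]

    def expected_length(material=None):
        """Mean length over all spoons, or over spoons of a given material."""
        lengths = [l for (m, l) in spoons if material is None or m == material]
        return sum(lengths) / len(lengths)

    print(round(expected_length(), 1))        # 'spoon' alone: about 23 cm
    print(round(expected_length("wood"), 1))  # 'wooden spoon': about 30 cm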

A patch of blue sky will vary in ‘blueness’ depending on the time of day, how bright it is, the cloudiness of the sky, whether it has been raining for the last month, and so on. Gärdenfors also illustrates the important role context plays in determining meaning with the following example. The colour red has different meanings in the following concept combinations: ‘red book’ (the colour we think of is close to a standard definition of the colour red); ‘red wine’ (close to the colour purple?); ‘red hair’ (close to the colour copper?); ‘red skin’ (tawny?); ‘red soil’ (ochre?); and ‘redwood’ (pinkish brown?).

The conceptual spaces model overcomes some of the shortcomings of the previously disparate symbolic and sub-symbolic approaches to knowledge representation and AI by postulating a middle level of representation. It provides a plausible explanation for aspects of human knowledge representation that present difficulties for the other models, such as categorization using prototypes, concept combination, and the role of context in moulding concept meaning.

It is also relevant to the design of embodied, situated agents, as it shows how we can build knowledge without the need for symbol-grounding semantics. And it relates knowledge to locations in n-dimensional spaces, so that we can characterize intelligent behaviour (thinking, for example) as movement within such a space, in a manner analogous to using maps to represent and navigate topographical terrain.

