Implementing Behaviour of Turtle Agents in NetLogo

In NetLogo, the behaviour of an agent is specified explicitly by the ask command. This defines the series of commands that each agent or agentset executes, in other words, the procedure that the agent is to perform. A procedure in a computer program is a specific series of commands that are executed in a precise manner in order to produce a desired outcome. However, we have to be careful to distinguish between the actual behaviour of the agent and the mechanics of the NetLogo procedure used to define that behaviour. Much of the purpose of the procedural commands is to manipulate internal variables, including global variables and the agent's own variables.

The latter reflect the state of the agent, which can be represented as a point in an n-dimensional space. However, this state alone is insufficient to describe the behaviour of the agent. Its behaviour is represented by the actions the agent performs, which result in some change to its own state, to the state of other agents or to the state of the environment. The type of change that occurs represents the outcome of the behaviour.

Some example behaviours that we have already seen exhibited by agents in NetLogo models are: the food foraging behaviour of the ant agents in the Ants model, which results in food being returned efficiently to the nest as an outcome; the nest building behaviour of the termite agents in the Termites and State Machine Example models, which results in the wood chips being placed in piles as an outcome; and the wall following behaviour of the turtle agents in the Wall Following Example model, which results in the turtle agents all following walls in a particular direction as an outcome.

The Models Library in NetLogo comes with many more examples where agents exhibit very different behaviours. In most of these models, the underlying mechanism is the mechanical application of a few local rules that define the behaviour. For example, the Fireflies model simulates how a population of fireflies, using only local interactions, manages to synchronise its flashing as an outcome. The Heatbugs model demonstrates how several kinds of emergent behaviour can arise as an outcome of agents applying simple rules in order to maintain an optimum temperature around themselves.

The Flocking model mimics the flocking behaviour of birds, which is also similar to the schooling behaviour of fish and the herding behaviour of cattle and sheep. This outcome is achieved without a leader, with each agent executing the same set of rules. The compactness of the NetLogo code in these models reinforces the point that complexity of behaviour does not necessarily correlate with complexity of the underlying mechanisms.

Behaviour can be specified in various alternative ways, such as by NetLogo procedures and commands, or by finite state automata. The latter is an abstract model of behaviour with a limited internal memory. In this format, behaviour can be considered as the result of an agent moving from one state to another (points in an n-dimensional space), and it can be represented as a directed graph with states, transitions and actions. In order to make the link between a procedure implemented in a programming language such as NetLogo and a finite state automaton, the wall following behaviour from the NetLogo code, repeated below, has been converted to an equivalent finite state machine, shown in the figure below.
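To make the directed-graph view concrete, here is a minimal sketch (in Python rather than NetLogo, with illustrative state and action names) of a finite state machine stored as a dictionary of states and labelled transitions:

```python
# A minimal sketch of a finite state machine as a directed graph.
# States are nodes; each transition carries a condition, an action
# performed while taking it, and a destination state. All names
# here are illustrative, not taken from the book's model.

fsm = {
    # state: list of (condition, action, next_state) transitions
    "sensing":  [("always", "look-for-wall", "thinking")],
    "thinking": [("wall-seen", "note-wall", "acting"),
                 ("no-wall",   "note-gap",  "acting")],
    "acting":   [("always", "move-or-turn", "sensing")],
}

def step(state, condition):
    """Follow the first transition out of `state` whose condition
    matches; return (action, next_state)."""
    for cond, action, nxt in fsm[state]:
        if cond in ("always", condition):
            return action, nxt
    return None, state  # no matching transition: stay where we are
```

Traversing the graph is then just repeated calls to step, which mirrors how an agent moves from state to state as it behaves.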

NetLogo Code: the wall following behaviour.

The code has been converted to a finite state machine by organising the states into the ‘sense – think – act’ mode of operation outlined earlier. Note that we are not restricted to doing the conversion in this particular way – we are free to organise the states and transitions in whatever manner we wish. In this example, the states and transitions shown in the figure have been organised to reflect the type of action (sensing, thinking or acting) the agent is about to perform during the next transition out of the state. Also, regardless of the path chosen, the order in which the states are traversed is always a sensing state, followed by a thinking state, then an acting state, followed by another sensing state, and so on. For example, the agent's behaviour starts in a sensing state (labelled Sensing State 1) at the middle left of the figure. There is only one transition out of this state, and the particular sense being used is vision, as the action being performed is to look for a wall on the preferred side (that is, the right side if following right-hand walls, and the left side if following left-hand walls).

The agent then moves to a thinking state (Thinking State 1) that considers the information it has just sensed. The thinking action the agent performs is to note whether there is a wall nearby or not. If there is not, the agent moves to an acting state (Acting State 1) that consists of turning 90° in the direction of the preferred side. If there is a wall, then no action is performed (Acting State 2); note that doing nothing is still considered an action, as it is a movement of zero length. The agent then moves to a new sensing state (Sensing State 2) that involves the sensing action of looking for a wall ahead. It will repeatedly loop through the acting state (Acting State 3) of turning 90° in the opposite direction to the preferred side and back to Sensing State 2 until there is no longer a wall ahead. Then it will move to the acting state (Acting State 4) of moving forward 1 step, and back to the start.
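The walk through the states just described can be sketched in code. The following is a hypothetical Python analogue of one pass through the cycle for a right-hand wall follower on a grid; the compass headings, the wall_at sensing helper and all other names are invented for illustration and are not part of the NetLogo model:

```python
# One sense-think-act pass of a right-hand wall follower (sketch).
# wall_at(direction) stands in for the agent's vision: it reports
# whether a wall is adjacent in that compass direction.

HEADINGS = ["N", "E", "S", "W"]  # clockwise order

def right_of(h):
    return HEADINGS[(HEADINGS.index(h) + 1) % 4]

def left_of(h):
    return HEADINGS[(HEADINGS.index(h) - 1) % 4]

def wall_following_step(heading, wall_at):
    """Returns (new_heading, moved?) after one full cycle."""
    # Sensing State 1 -> Thinking State 1: look for a wall on the right.
    if not wall_at(right_of(heading)):
        # Acting State 1: no wall on the preferred side, turn 90° right.
        heading = right_of(heading)
    # Acting State 2 is the do-nothing case: fall through unchanged.
    # Sensing State 2 <-> Acting State 3: turn away until the way is clear.
    turns = 0
    while wall_at(heading) and turns < 4:
        heading = left_of(heading)  # 90° away from the preferred side
        turns += 1
    # Acting State 4: move forward 1 step (the caller is assumed to
    # update the agent's position; a fully walled-in agent cannot move).
    return heading, turns < 4
```

Calling this function once per tick reproduces the fixed sensing, thinking, acting order of the state machine.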

The wall following behaviour of the NetLogo code converted to a finite state machine.


The ‘Sense – Think – Act’ method of operation has limitations when applied to modelling real-life intelligent or cognitive behaviour, and an alternative approach embracing embodied, situated cognition was suggested. However, a question remains concerning how to implement such an approach, since it effectively entails sensing, thinking and acting all occurring at the same time, i.e. concurrently, rather than sequentially. Two NetLogo models have been developed to illustrate one way this can be simulated. The first model is a modification of the Wall Following Example model described in the previous chapter. The modified interface provides a chooser that allows the user to select either the standard wall following behaviour or a modified variant. The modified code is shown in the NetLogo code below.

NetLogo Code defining the modified wall following behaviour in the Wall Following Example model.

In order to simulate the concurrent nature of the modified behaviour, the original wall following behaviour has been split into three sub-behaviours – these are specified by the walk-modified-1, walk-modified-2 and walk-modified-3 procedures in the above code. The first procedure checks whether the agent is still following a wall, and turns to the preferred side if necessary. It then sets an agent variable, checked-following-wall?, to true to indicate that it has done this. The second procedure checks whether there is a wall ahead, turns in the opposite direction to the preferred side if there is, and then sets the new agent variable way-is-clear? to indicate whether there is a wall ahead or not. The third procedure moves forward 1 step, but only if the way ahead is clear and the check for wall following has been done.
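As a sketch of how the three sub-behaviours interact through the agent's own variables, here is a hypothetical Python analogue; the Agent class and its sensing stand-ins are invented for illustration (the real model uses NetLogo procedures and turtle variables):

```python
# Sketch: three independent sub-behaviours that communicate only
# through the agent's own flag variables, mirroring the roles of
# checked-following-wall? and way-is-clear? described in the text.

class Agent:
    def __init__(self):
        self.checked_following_wall = False
        self.way_is_clear = False
        self.position = 0      # distance travelled (simplified)
        self.heading = "E"

    def wall_on_preferred_side(self):  # stand-in for vision
        return True

    def wall_ahead(self):              # stand-in for vision
        return False

    def walk_modified_1(self):
        """Check we are still following a wall; turn toward it if not."""
        if not self.wall_on_preferred_side():
            self.heading = "S"  # turn 90° to the preferred side (sketch)
        self.checked_following_wall = True

    def walk_modified_2(self):
        """Check for a wall ahead; turn away if there is one."""
        if self.wall_ahead():
            self.heading = "N"  # turn 90° away from the preferred side
            self.way_is_clear = False
        else:
            self.way_is_clear = True

    def walk_modified_3(self):
        """Move forward only once both other checks have succeeded."""
        if self.way_is_clear and self.checked_following_wall:
            self.position += 1
            self.checked_following_wall = False  # reset for next cycle
            self.way_is_clear = False
```

Because the third sub-behaviour is gated on the flags rather than on its position in a sequence, the three procedures can safely run in any order.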

Essentially the overall behaviour is the same as before, since all we have done is split the original behaviour into three sub-behaviours – in other words, doing this by itself does not achieve anything new. The reason for doing it is to allow us to execute the sub-behaviours in a non-sequential manner, independently of each other, in order to simulate ‘sensing & thinking & acting’ behaviour, where ‘&’ indicates that each is done concurrently, in no particular order. This can be done in NetLogo using the ask-concurrent command, as shown in the go procedure in the code. This ensures that the agents take turns executing the walk-modified procedure's commands. The main difference compared to the standard behaviour is evident in this procedure. The interface to the model provides another chooser that allows the user to set a choose-sub-behaviours variable that controls how the sub-behaviours are executed. If this variable is set to ‘Choose-all-in-random-order’, then all three sub-behaviours will be executed as with the standard behaviour, but this time in a random order; otherwise, if the variable is set to ‘Choose-one-at-random’, only a single sub-behaviour is chosen each tick.
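The two chooser settings can be sketched as follows. This is an illustrative Python analogue of the scheduling logic only, not the model's NetLogo code; the stand-in sub-behaviours simply record that they ran:

```python
import random

# Sketch of the chooser logic: either run all sub-behaviours in a
# random order each tick, or pick just one at random. The setting
# names mirror the chooser values described in the text.

def run_tick(sub_behaviours, choose_sub_behaviours, rng=random):
    executed = []  # record of which sub-behaviours ran this tick
    if choose_sub_behaviours == "Choose-all-in-random-order":
        order = list(sub_behaviours)
        rng.shuffle(order)          # same work, shuffled order
        for behaviour in order:
            behaviour(executed)
    else:  # "Choose-one-at-random"
        rng.choice(sub_behaviours)(executed)
    return executed

# Trivial stand-in sub-behaviours that just log an identifier.
subs = [lambda log: log.append(1),
        lambda log: log.append(2),
        lambda log: log.append(3)]
```

Either setting leaves the flag-gated forward movement intact, which is why the overall wall following outcome survives the scrambled execution order.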

Clearly the way the modified behaviour is executed is now discernibly different from the standard behaviour – although the former executes the same sub-behaviours as the latter, this is either done in no particular order, or only one of the three sub-behaviours is chosen each tick. And yet when running the model, the same overall results are achieved regardless of which variant is chosen – each agent successfully manages to follow the walls that it finds in the environment. There are minor variations between the variants, such as the modified variants repeatedly going back and forth down short cul-de-sacs. The ability of the modified variants to achieve a similar result to the original is nevertheless interesting, since the modified method is both effective and robust – regardless of when, and in what order, the sub-behaviours are executed, the overall result is still the same.

A second NetLogo model, the Wall Following Events model, has been created to conceptualise and visualise the modified behaviour. This model considers that an agent simultaneously recognises and processes multiple streams of ‘events’ that reflect what is happening to itself and in the environment, in a manner similar to that adopted in Event Stream Processing (ESP) (Luckham). These events occur in any order and have different types, but are treated as being equivalent to each other in terms of how they are processed. Behaviour is defined by linking together series of events into a forest of trees (one or more acyclic directed graphs), as shown in the figure below. The trees link together series of events (represented as nodes in the graph) that must occur in conjunction with each other. If a particular event is not recorded in the tree, then that event is not recognised by the agent (i.e. it is ignored and has no effect on the agent's behaviour). The processing of the events is done in a reactive manner – that is, a particular path in the tree is traversed by successively matching the events that are currently happening to the agent against the outgoing transitions from each node. If there are no outgoing transitions, or none match, then the path is a dead end, at which point the traversal stops. This is done simultaneously for every event; in other words, there are multiple starting points and therefore simultaneous activations throughout the forest network.
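One way to sketch this reactive matching is shown below: a hypothetical Python analogue in which each node is a (stream, event) pair and traversal advances only along edges whose events are currently occurring. The tree contents are illustrative, loosely echoing the wall following events:

```python
# Sketch of reactive matching over a forest of event trees. A path
# dies out (dead end) as soon as no child matches the set of events
# that are currently occurring.

def make_node(stream, event, children=None):
    return {"stream": stream, "event": event,
            "children": children or []}

def match_step(node, occurring):
    """Return the children of `node` reachable given the set of
    currently occurring (stream, event) pairs; an empty result
    means this path is a dead end and traversal stops."""
    return [child for child in node["children"]
            if (child["stream"], child["event"]) in occurring]

# An illustrative fragment of an event forest.
forest = [
    make_node("sensing-event", "use-sight", [
        make_node("motor-event", "look-ahead", [
            make_node("sensed-object-event", "wall"),
        ]),
        make_node("motor-event", "look-to-right"),
    ]),
]

root = forest[0]
active = match_step(root, {("motor-event", "look-ahead")})
```

Running match_step from every root at once corresponds to the multiple simultaneous activations throughout the forest.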

Screenshot of the Wall Following Events model defining the modified wall following behaviour.


In the figure, the event trees have been defined to represent the modified wall following behaviour described above. Each node in the graph represents an event and is labelled by a stream identifier followed by an event identifier, separated by an “=”. For example, the node labelled [motor-event = move-forward-1] identifies the motor event of moving forward 1 step. For this model of the behaviour, there are four types of events: sensing events, where the agent begins actively sensing on a specific sensory input stream (such as sight, as in the figure); motor events, where the agent is performing some motion or action; sensed-object events, which occur when a particular object is recognised by the agent; and abstract events, which are abstract situations that are the result of one or more sensory, motor and abstract events, and which can also be created or deleted by the agent from its internal memory (which records which abstract events are currently active). If a particular abstract event is found in memory, then it can be used for subsequent matching by the agent along a tree path.

For example, the node labelled [sensing-event = use-sight] towards the middle right of the figure represents an event where the agent is using the sense of sight. Many events can occur on this sensory input channel, but only two are relevant for defining the wall following behaviour – these are both motor events, one being the action of looking ahead, and the other the action of looking to the right. Then, depending on which path is followed, different sensed-object events are encountered in the tree: either a wall object is sensed, or nothing is sensed. These paths continue until either a final motor event is performed (such as turning 90° to the non-preferred side, at the top right of the figure) or an abstract event is created (such as noting that the check for wall following has been done, at the bottom of the figure).

Note that unlike the Sense – Think – Act model depicted earlier, this model of behaviour is not restricted to a particular order of events. Any type of event can ‘follow’ another, and two events of the same type are also possible – for example, in the path that starts on the left of the figure there are two abstract events one after another. Also note that the use of the word ‘follow’ is misleading in this context. Although it adequately describes that one link comes after another on a particular path in the tree model, the events may in fact occur simultaneously, and the order specified by the tree path is arbitrary – it just describes the order in which the agent will recognise the presence of multiply occurring events. For example, there is no reason why the opposite order cannot also be present in the tree, or an alternative order that leads to the same behaviour (e.g. swapping the two abstract events at the bottom of the left-hand path in the figure will have no effect on the agent's resultant behaviour). The code used to create the screenshot is shown in the NetLogo code below.

NetLogo Code for the Wall Following Events model used to produce the screenshot above.

The code first defines two breeds, states and paths; the paths represent the transitions between states. Each state agent has three variables associated with it: depth, which is the distance from the root state of the tree; stream, which identifies the specific type of event it is; and event, which is the name of the event. The event type is called a ‘stream’ because we are using an analogy in which the arrival of events is similar to the flow of objects down a stream. Many events can ‘flow’ past, and some appear simultaneously, but there is also a specific order of arrival, in the sense that if we choose to ignore a particular event, it is lost – we need to deal with it in some manner.

The setup procedure initialises the event trees by calling the add-events procedure for each path. This procedure takes a single parameter as input: a list of events, specified as pairs of stream names and event names. For example, for the first add-events call, the list contains five events: the first is a use-sight event on the sensing-event stream; the second is a look-to-right event on the motor-event stream; and so on. A directed path containing all the events in the event list is added to the event trees. If the first event in the list does not occur at the root of any existing tree, then the root of a new tree is created, and a non-branching path from the root is added to include the remaining events in the list. Otherwise, the first events in the list are matched against an existing path, with new states added at the end once the events no longer match.
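The prefix-sharing insertion just described can be sketched as follows. This is a hypothetical Python analogue of add-events operating on a forest of nested dictionaries rather than on NetLogo state and path agents:

```python
# Sketch: insert a directed path of (stream, event) pairs into a
# forest of event trees, reusing any prefix that already matches
# from a root downward and branching where the events stop matching.

def add_events(forest, events):
    siblings = forest  # current level: the roots, then children
    for stream, event in events:
        for node in siblings:
            if node["stream"] == stream and node["event"] == event:
                break  # event already present at this level: descend
        else:
            # No match at this level: start a new branch here.
            node = {"stream": stream, "event": event, "children": []}
            siblings.append(node)
        siblings = node["children"]
    return forest

forest = []
add_events(forest, [("sensing-event", "use-sight"),
                    ("motor-event", "look-to-right")])
add_events(forest, [("sensing-event", "use-sight"),
                    ("motor-event", "look-ahead")])
```

After the two calls, the shared use-sight event becomes a single root with two motor-event children, which is exactly the branching structure the paragraph describes.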

