Boids - Artificial Intelligence

In 1986, Craig Reynolds devised a distributed model for simulating animal behaviour that involves coordinated motion such as flocking for birds, schooling for fish and herding for mammals. Reynolds observed the flocking behaviour of blackbirds, and wondered whether it would be possible to get virtual creatures to flock in the same way in a computer simulation in real-time. His hypothesis was that there were simple rules responsible for this behaviour.

The model he devised uses virtual agents called boids that have a limited form of embodiment similar to that used by the agents in the Vision Cone model described in Section. The behaviour of the boids is divided into three layers – action selection, steering and locomotion – as shown in Figure. The highest layer, action selection, controls behaviours such as strategy, goal setting and planning. These are made up from steering behaviours at the next level down that relate to more basic path determination tasks such as path following, seeking and fleeing. These in turn are made up of locomotion behaviours related to the movement, animation and articulation of the virtual creatures. To describe his model, Reynolds uses the analogy of cowboys tending a herd of cattle out on the range when a cow wanders away from the herd. The trail boss plays the role of action selection – he tells a cowboy to bring the stray back to the herd.

The cowboy plays the role of steering, decomposing the goal into a series of sub-goals that relate to individual steering behaviours carried out by the cowboy-and-horse team. The cowboy steers his horse by control signals such as vocal commands and the use of the spurs and reins that result in the team moving faster or slower or turning left or right. The horse performs the locomotion that is the result of a complex interaction between the horse’s visual perceptions, the movements of its muscles and joints and its sense of balance.

The hierarchy of motion behaviours used for the Boids model (Reynolds).


Note that the layers chosen by Reynolds are arbitrary and more of a design issue reflecting the nature of the modelling problem. Reynolds himself points out that alternative structures are possible and the one chosen for modelling simple flocking creatures would not be appropriate for a different problem such as designing a conversational agent or chatbot.

Just as for real-life creatures, what the boids can see at any one point in time is determined by the direction they are facing and the extent of their peripheral vision, as defined by a cone with a specific angle and distance. The cone angle determines how large a ‘blind’ spot they have – i.e. the part directly behind their head, opposite the direction they are facing, that is outside their range of vision. If the angle of the cone is 360°, then they will be able to see all around them; if it is less than that, then the size of the blind spot is 360° minus the cone angle.
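The geometry of this cone test can be sketched in Python (a hypothetical `in_cone` helper that follows NetLogo's heading convention of 0° pointing north and increasing clockwise – an illustrative sketch, not the NetLogo primitive itself):

```python
import math

def in_cone(boid_x, boid_y, heading, target_x, target_y,
            cone_distance, cone_angle):
    """Return True if the target lies inside the boid's vision cone.

    heading and cone_angle are in degrees; heading 0 points north
    (up), increasing clockwise, matching NetLogo's convention.
    """
    dx, dy = target_x - boid_x, target_y - boid_y
    distance = math.hypot(dx, dy)
    if distance > cone_distance:
        return False
    if distance == 0:
        return True  # the boid's own position is always visible
    # Bearing from boid to target, NetLogo-style (0 = north, clockwise).
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    # Smallest angular difference between heading and bearing.
    diff = abs((bearing - heading + 180) % 360 - 180)
    # Inside the cone if within half the cone angle on either side.
    return diff <= cone_angle / 2
```

With a cone angle of 300° and length 8, a target directly behind the boid falls inside the 60° blind spot and is not seen.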

A boid in NetLogo with an angle of 300°


A boid can be easily implemented in NetLogo using the in-cone command, as for the Vision Cone model. Figure is a screenshot of a boid implemented in NetLogo (see the Obstacle Avoidance 1 model, for example). The image shows the vision cone coloured sky blue with an angle of 300° (the size of the blind spot is therefore 60°). The turtle is drawn using the “directional-circle” shape at the centre of the image and coloured blue, with the white radius line pointing in the same direction as the current heading of the turtle. The width of the cone depends on the length parameter passed to the in-cone command and the patch size for the environment.

We will now see how some of these behaviours can be implemented in NetLogo. Note that, as with all implementations, there are various ways of producing each of these behaviours. For example, we have already seen wall following behaviour demonstrated by the Wall Following Example model described in the previous chapter, and by the Wall Following Example model described in this chapter. Although the behaviour is not exactly the same for both models, the outcome is effectively the same. Both models have agents that use the vision cone method of embodiment shown in Figure, which is at the heart of the boids behavioural model.

Two models have been developed to demonstrate obstacle avoidance. Some screenshots of the first model, called Obstacle Avoidance 1, are shown in Figure. They show a single boid moving around an environment trying to avoid the white rows of obstacles – an analogy would be a moth trying to avoid bumping into walls as it flies around. The extent of the boid’s vision is shown by the sky coloured halo surrounding the boid – it has been set at length 8 in the model with an angle of 300°. The image on the left shows the boid just after the setup button in the interface has been pressed, heading towards the rows of obstacles. After a few ticks, the edge of the boid’s vision cone bumps into the tip of the middle north-east-pointing diagonal obstacle row (depicted by the change in the colour of the obstacle at the tip from white to red); the boid then turns to its left by approximately 80° and heads towards the outer diagonal. Its vision cone hits near the tip of this diagonal as well, and finally the boid turns again and heads away from the obstacles in a north-east direction, as shown in the second image on the right.

Screenshots of the obstacle avoidance 1 model


The setup procedure places the boid at a random location in the environment, and calls the drawobstacles procedure to draw the white obstacles in the bottom half of the environment. The ask wanderers command in the go procedure defines the behaviour of the boid. The boid will do a right and left turn of a random amount, then move forward a certain amount as specified by the variable boid-speed defined in the interface.

Then the boid calls the avoid-patches procedure to perform the collision avoidance. In this procedure, first the sky coloured halo surrounding the boid is erased by setting sky coloured patches to black. Next, the vision halo is redrawn around the boid based on its current location – the rapid erasure followed by redrawing causes the boid to flicker, much like a butterfly rapidly flapping its wings. The boid then performs the collision avoidance by backing away a distance equal to boid-speed, and does a left turn. The last part of the procedure sets the patches that have been collided with to red.
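One tick of this wander-and-avoid behaviour might be sketched in Python as follows (the dictionary representation, the `move` helper and the 90° avoidance turn are illustrative assumptions, not the model's actual NetLogo code):

```python
import math
import random

def wander_and_avoid(boid, obstacle_ahead, boid_speed, max_turn=30):
    """One tick of the wander-plus-avoid behaviour described above.

    boid is a dict with 'x', 'y' and 'heading' (degrees, 0 = north,
    clockwise).  obstacle_ahead is a flag the caller computes with a
    vision-cone test against the white obstacle patches.
    """
    # Semi-random wandering: a right turn then a left turn of random size.
    boid["heading"] = (boid["heading"]
                       + random.uniform(0, max_turn)
                       - random.uniform(0, max_turn)) % 360
    if obstacle_ahead:
        # Collision avoidance: back away and turn left, as in the
        # avoid-patches procedure (the 90 degree turn is an assumption).
        move(boid, -boid_speed)
        boid["heading"] = (boid["heading"] - 90) % 360
    else:
        move(boid, boid_speed)

def move(boid, dist):
    """Move dist units along the current heading (cf. NetLogo fd/bk)."""
    rad = math.radians(boid["heading"])
    boid["x"] += dist * math.sin(rad)
    boid["y"] += dist * math.cos(rad)
```

Setting max_turn to 0 disables the wandering, which makes the avoidance step easy to observe in isolation.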

The behaviour of the three breeds of agents in the Follow and Avoid model is defined by the wanderers, followers and avoiders procedures. The first defines the behaviour of the wanderer agent so that it wanders around in a semi-random fashion. The second defines the behaviour of the follower agent, which first moves forward a user-defined amount according to the interface variables boid-speed and speed-scale. It then uses the NetLogo in-radius reporter to detect whether the wanderer is in its circular field of vision, whose size is defined by the radius-detection interface variable. If it is, then the follower will move towards it. The avoider agent’s behaviour is defined in a similar manner, the only difference being that it heads in the opposite direction (180°) away from the wanderer instead of towards it.
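The follower and avoider heading updates can be sketched in Python (hypothetical helper names; the in-radius test becomes a simple distance check):

```python
import math

def bearing_to(x, y, tx, ty):
    """NetLogo-style bearing (0 = north, clockwise) from (x, y) to (tx, ty)."""
    return math.degrees(math.atan2(tx - x, ty - y)) % 360

def follower_heading(agent, wanderer, radius_detection):
    """Follower breed: face the wanderer when it falls inside the
    circular detection field (cf. NetLogo's in-radius reporter)."""
    if math.hypot(wanderer["x"] - agent["x"],
                  wanderer["y"] - agent["y"]) <= radius_detection:
        return bearing_to(agent["x"], agent["y"],
                          wanderer["x"], wanderer["y"])
    return agent["heading"]

def avoider_heading(agent, wanderer, radius_detection):
    """Avoider breed: identical test, but the agent turns 180 degrees
    away from the wanderer instead of towards it."""
    if math.hypot(wanderer["x"] - agent["x"],
                  wanderer["y"] - agent["y"]) <= radius_detection:
        return (bearing_to(agent["x"], agent["y"],
                           wanderer["x"], wanderer["y"]) + 180) % 360
    return agent["heading"]
```

Outside the detection radius both breeds simply keep their current heading, which is what produces the semi-random drifting seen in the model.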

The Flocking With Obstacles model is a modification of the Flocking model provided in the NetLogo Models Library. The library model uses the standard method for implementing flocking behaviour devised by Craig Reynolds. In this approach, the flocking emerges from the application of three underlying steering behaviours. These are: separation, where the boid tries to avoid getting too close to other boids; alignment, where the boid tries to move in the same direction as nearby boids; and cohesion, where the boid tries to move towards other boids unless they are too close. With the modified model, the user has the extra option of adding various objects into the environment, such as a coral reef, sea grass and a shark. This is in order to simulate what happens when the flock encounters one or more objects, and to better simulate the environment for a school of fish. Some screenshots of the modified model are shown in Figure.

Screenshots of the Flocking With Obstacles model.


The top left image shows the model at the start, just after the setup button in the interface has been pressed. The top middle image shows the model after it has been run for a short while and a school of turtle agents has formed. The top right image shows the model immediately after collision patches in the shape of a shark have been loaded into the middle of the environment.

These patches cause the turtle agents to move away when they collide with them. The bottom left image shows the background image overlaid onto the same scene. The bottom middle image shows the school approaching the object from a different direction. The bottom right image shows the scene not long after – the school has now split into two sub-schools after the collision, which are heading away from the object.

The collision patches work by the model stipulating that any patch that is not black is to be avoided; that is, any colour other than black forces the boids to turn around 180° in order to avoid a collision. Another change to the model is that the speed of the boids can now be controlled from the interface, which enables greater testing of individual movements and also provides a means of analysing the reactions of the boids.

The code for the relevant parts of the modified model that define the behaviour of the agents is listed in NetLogo Code.

NetLogo Code The code for the Flocking With Obstacles model shown in Figure (Screenshots of the Flocking With Obstacles model)

The setup procedure creates a random population of turtle agents. The ask command in the go procedure defines the behaviour of the agents – it simply calls the flock procedure. Here the agent first checks to see if there are any other agents within its cone of vision, then if there are any, it looks for the nearest neighbour, and then applies the separation steering behaviour as defined by the separate procedure if it is too close. Otherwise it applies the alignment steering behaviour as defined by the align procedure followed by the cohesion steering behaviour as defined by the cohere procedure. These three procedures make use of either the turn-away or turn-towards procedures that make the boid turn away from or towards a particular reference heading given the boid’s current heading. The reference heading for the separation steering behaviour is the heading of the boid’s nearest neighbour, for the alignment steering behaviour it is the average heading of the boid’s flock mates, and for the cohesion steering behaviour it is the mean heading towards the boid’s flock mates.
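This dispatch logic can be sketched in Python (a simplified, dictionary-based approximation of the NetLogo procedures; the helper names mirror the ones described above, but the details are assumptions):

```python
import math

def mean_heading(headings):
    """Circular mean of a list of headings in degrees."""
    x = sum(math.sin(math.radians(h)) for h in headings)
    y = sum(math.cos(math.radians(h)) for h in headings)
    return math.degrees(math.atan2(x, y)) % 360

def turn_towards(heading, target, max_turn):
    """Turn at most max_turn degrees from heading towards target."""
    diff = (target - heading + 180) % 360 - 180
    return (heading + max(-max_turn, min(max_turn, diff))) % 360

def turn_away(heading, target, max_turn):
    """Turn at most max_turn degrees away from target."""
    return turn_towards(heading, (target + 180) % 360, max_turn)

def flock(boid, flockmates, minimum_separation,
          max_separate_turn, max_align_turn, max_cohere_turn):
    """One tick of the flock procedure: separate from the nearest
    neighbour when too close, otherwise align then cohere."""
    if not flockmates:
        return
    nearest = min(flockmates,
                  key=lambda m: math.hypot(m["x"] - boid["x"],
                                           m["y"] - boid["y"]))
    dist = math.hypot(nearest["x"] - boid["x"], nearest["y"] - boid["y"])
    if dist < minimum_separation:
        # Separation: reference heading is the nearest neighbour's heading.
        boid["heading"] = turn_away(boid["heading"], nearest["heading"],
                                    max_separate_turn)
    else:
        # Alignment: turn towards the average heading of the flockmates.
        boid["heading"] = turn_towards(
            boid["heading"],
            mean_heading([m["heading"] for m in flockmates]),
            max_align_turn)
        # Cohesion: turn towards the mean heading towards the flockmates.
        towards = [math.degrees(math.atan2(m["x"] - boid["x"],
                                           m["y"] - boid["y"])) % 360
                   for m in flockmates]
        boid["heading"] = turn_towards(boid["heading"],
                                       mean_heading(towards),
                                       max_cohere_turn)
```

Because each boid makes only a bounded turn per tick towards these reference headings, the flocking emerges gradually rather than all boids instantly snapping to the same direction.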

In the simulation, a number of emergent phenomena can be witnessed. The flock quickly forms at the beginning when no obstacles have been loaded. A noticeable spinning effect of the boids within the flock can also be observed if the initial interface parameters are set as minimum-separation = 1.25 patches, max-align-turn = 15.00 degrees, max-cohere-turn = 15.00 degrees and max-separate-turn = 4.00 degrees. When the school encounters an obstacle, it changes direction as a group, with individual boids usually reacquiring the flock very quickly if they become separated. When enough of the boids have altered their course, the remainder of the school follows suit without ever having collided with the obstacle. Occasionally, the school will split into two separate schools heading in different directions, as shown in the bottom right image of Figure.

Screenshots of the Follow and Avoid model

