

A Murmuration of Starlings - Image Credit: John Holmes

The behaviour of swarms of insects and birds has always fascinated scientists because it appears to demonstrate collective intelligence way beyond that of an individual member. A classic example of this in nature is starling ‘murmuration’, when thousands of the birds fly in formation, executing rapid turns with incredible precision.

What makes a Swarm?

A swarm is a large group of identical organisms, natural or artificial, each possessing limited intelligence, that work together to achieve a higher goal beyond their individual capabilities. The key phrases here are identical organisms with limited intelligence. Obvious candidates in the natural world include ants, bees and termites. Each of these insects appears to have no powers of reasoning – they just react in a simple way to contact with their immediate neighbours in the group and the local environment. And yet working as a group, they are able to construct the most elaborate nests for the purpose of breeding. However, direct interaction with each other is not required: for example, ants mark a trail from a food source back to the nest by dropping pheromones for their colleagues to follow. This process of changing the environment (leaving a coded marker) to influence the action of subsequent agents is called Stigmergy. It leads to the emergence of a powerful collective intelligence that requires no central leadership or planning – swarm intelligence.
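The pheromone-trail mechanism can be sketched in a few lines of Python. This is a minimal, illustrative model – the class and method names are invented for this example, and real ant-colony algorithms are considerably more sophisticated – but it captures the essence of Stigmergy: agents never talk to each other directly, they only read and write markers in the environment.

```python
# Minimal sketch of stigmergy: agents communicate only by marking the
# environment. All names here are illustrative, not from a real library.

class PheromoneGrid:
    def __init__(self, evaporation=0.1):
        self.evaporation = evaporation
        self.levels = {}  # (x, y) -> pheromone strength

    def deposit(self, cell, amount=1.0):
        # A scout ant marks a cell on its way back from a food source.
        self.levels[cell] = self.levels.get(cell, 0.0) + amount

    def evaporate(self):
        # Pheromone decays each time step, so stale trails fade away.
        self.levels = {c: v * (1 - self.evaporation)
                       for c, v in self.levels.items() if v > 1e-3}

    def best_neighbour(self, x, y):
        # A following ant simply moves toward the strongest local marker.
        nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return max(nbrs, key=lambda c: self.levels.get(c, 0.0))

grid = PheromoneGrid()
for cell in [(1, 0), (2, 0), (3, 0)]:   # a scout marks a trail to food
    grid.deposit(cell)
grid.evaporate()
print(grid.best_neighbour(0, 0))        # → (1, 0)
```

Note that the follower needs no map, no memory and no knowledge of the other ants: the intelligence lives in the environment, not in the individual.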

Natural versus Artificial

The concept of swarm intelligence arose out of a need to explain the apparent ability of large insect colonies to ‘think’ as a single entity. Given the simple behaviour patterns of the natural individuals, scientists felt they could be modelled by a relatively small set of mathematical ‘rules’. The rules were simple enough for swarm simulations to be run as programs on the relatively low-performance computers of the time. In 1970 a mathematician called John Conway devised a simulation of a colony of imaginary single-cell organisms. The Game of Life, as it became known, triggered a whole line of research into the way sophisticated group behaviour can emerge when pretty dumb artificial individuals interact, as long as they are all nearly identical and obey the same rules – a homogeneous group.
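For a taste of just how simple the rules can be, here is a compact Python version of the Game of Life, storing only the set of live cells. Every cell obeys the same two rules: a live cell survives with two or three live neighbours, and a dead cell comes to life with exactly three.

```python
# Conway's Game of Life: identical cells, identical local rules,
# yet structured group behaviour emerges.

from itertools import product

def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    # Count how many live cells border each candidate cell.
    counts = {}
    for cell in live:
        for n in neighbours(cell):
            counts[n] = counts.get(n, 0) + 1
    # Birth with exactly 3 neighbours; survival with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 1), (1, 1), (2, 1)}     # a horizontal bar of three cells
print(step(blinker))                   # flips to a vertical bar
print(step(step(blinker)) == blinker)  # period-2 oscillator: True
```

The ‘blinker’ above is the simplest oscillating pattern; gliders, guns and even self-replicating structures all emerge from exactly the same two rules.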

Researchers had seen the possibility of using robots to emulate swarm behaviour back in the 1950s, but making enough units for a sensibly-sized swarm was just too difficult and expensive. Thanks to microelectronics, it’s now possible to build artificial swarms of very low-cost robots. They tend to be mobile, with the ability to detect and communicate directly with their immediate neighbours. Of course, they cannot model all the functions of a living being: just certain specific behaviours, like clustering to form specific patterns or shapes, following a leader, and surrounding a ‘threat’. In 2011 researchers at Harvard University demonstrated their design for a simple, cheap-to-make swarm robot – the Kilobot, so-called because it was designed to help create artificial swarms of more than a thousand units.


In the following video, you can see a swarm of Kilobots programmed to create a variety of cluster shapes. Each robot is programmed with the same set of rules and only interacts with its immediate neighbours: there is no central control (except for group programming and start/stop).


Swarm Engineering

What are the advantages of using a large number of relatively primitive robots operating as a swarm to perform a given task, rather than a small number of much more powerful, fully-autonomous but collaborative machines? It depends on the application. Consider these practical benefits of swarm engineering:

  • Simplicity: A swarm robot, while autonomous, is capable of performing only a limited number of actions. That makes for a smaller, cheaper processor, which suits small-robot functionality.
  • Scalability: Need a bigger workforce? Just add more identical robots. Because all interaction is local, they become effective immediately.
  • Fault-tolerance: If robots stop working, the workforce shrinks, but the group task continues to be performed, perhaps taking slightly longer.
  • Parallelism: The same task may be performed in different places by separate ‘teams’.


It’s easy enough to simulate Stigmergy in software; it is much harder to create the hardware equivalent of the ants’ indirect pheromone communication. Direct wireless communication based on Infrared, Bluetooth or WiFi is found on most robots designed for swarm research. As a result, research into swarm mechanisms has divided roughly into two streams, based on the ‘natural’ indirect and ‘artificial’ direct processes. See reference [1] below. A novel method of indirect communication based on a grid of floor-mounted RFID tags is described in reference [2].

Small Group or Large Swarm?

Implementing an artificial swarm system for an application that requires ‘intelligence’ is an attractive proposition, given the practical advantages described above. In some situations, though, a swarm is no substitute for one big autonomous robot, or even for a group of such machines acting in cooperation.

Take a wide-area disaster site such as an earthquake zone, for example. One could imagine a large number, perhaps thousands, of bird- or insect-sized drones flying in widely-spaced formation, searching for pockets of survivors. The first drone to spot something dives down and its immediate neighbours follow, to carry out a more detailed search. Meanwhile, the others close up and continue the high-altitude search. Given their physical size, the drones are limited to a sensing and reporting role, but that’s ideal for swarm operation over a large area. A long-running robotic-insect research programme, the RoboBee Collective at Harvard University, aims to create just such a swarm. This video shows the latest iteration of RoboBee in action:


On the ground, a much smaller group of heavy-duty robots may be directed to climb over the damaged terrain, heading towards the ‘hot-spot’ with supplies and rescue equipment. This group won’t work as a swarm; instead, individuals will collaborate as necessary, perhaps with human operator supervision.

Swarm Learning

It might seem that swarm programming should be fairly easy, as it just involves the same few lines of code for each robot. The difficulty lies in ensuring that the few simple rules governing each robot’s interaction with its neighbours generate the required swarm behaviour to complete the task. The point is that a robot can be programmed to respond in a predictable manner to its sensory inputs, but the inputs themselves cannot be predicted with any certainty. That means the timing of individual actions, and hence the overall swarm behaviour, is hard to determine in advance. Take our flock of starlings in the heading picture: the ‘rules’ governing the movement of each bird may be simple:

  • Fly in the same direction as the bird ahead.
  • Maintain a constant distance from each other, in all three dimensions.
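These two rules can be sketched as a per-bird velocity update, in the spirit of Craig Reynolds’ classic ‘boids’ model. The function below is purely illustrative – the gains and names are invented for this example, and real flocking models add further terms – but it shows how each bird needs only the positions and headings of its immediate neighbours.

```python
# A simplified, boids-style update for one bird: align heading with
# neighbours (rule 1) and hold a target spacing from them (rule 2).
# Gains k_align and k_space are illustrative, not from any real model.

def flock_update(pos, vel, neighbours, spacing=1.0, k_align=0.5, k_space=0.5):
    """neighbours: list of (position, velocity) pairs, each a 2-D tuple."""
    if not neighbours:
        return vel                      # no flock in sight: fly on unchanged
    # Rule 1: steer toward the average heading of the neighbours.
    avg_vx = sum(v[0] for _, v in neighbours) / len(neighbours)
    avg_vy = sum(v[1] for _, v in neighbours) / len(neighbours)
    # Rule 2: move toward or away from each neighbour to hold the spacing.
    sx = sy = 0.0
    for (nx, ny), _ in neighbours:
        dx, dy = nx - pos[0], ny - pos[1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
        error = dist - spacing          # positive means too far: close the gap
        sx += error * dx / dist
        sy += error * dy / dist
    return (vel[0] + k_align * (avg_vx - vel[0]) + k_space * sx,
            vel[1] + k_align * (avg_vy - vel[1]) + k_space * sy)

# One bird at the origin flying right; a neighbour directly ahead but too
# far away: the bird accelerates forward and copies the neighbour's turn.
new_vel = flock_update((0, 0), (1.0, 0.0), [((2.0, 0.0), (1.0, 0.5))])
print(new_vel)                          # → (1.5, 0.25)
```

Run over thousands of birds at once, purely local updates like this produce the sweeping, coordinated turns of a murmuration – no bird ever computes the shape of the whole flock.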


Straight away you can see a problem with this: what governs the motion of the ‘leading’ birds? And what lies behind any decision to take off or land? It suggests that when they are not swarming, the birds are individually autonomous, for example strutting about on a lawn, digging their beaks into the ground in search of grubs. But when one flies off, the others follow. Once all are airborne, it may be that the sudden, seemingly choreographed changes in direction are due to random air currents causing slight changes in flight path, which are then magnified by the swarm rules. At any rate, it suggests that creatures can switch at will from swarm to individual behaviour and back again.

Attempting to write hard and fast rules for a complex robot swarm task is extremely difficult because of the random influences on the swarm as a whole. It can be as bad as trying to calculate by hand all the weighting factors for a Deep Neural Network (DNN) used to recognise objects in a photographic image. In each case, there is no certain or deterministic outcome: only a list of probable outcomes from what’s called a stochastic system. A technique called Turing Learning can be used to ‘train’ a DNN to recognise objects. It can also be used to characterise swarm behaviour. See reference [3] below for more details.

Nanobots

We humans tend to regard swarms of anything in a negative way, particularly flying insects able to inflict painful stings. Alfred Hitchcock’s 1963 movie The Birds, although fictional, undoubtedly stoked fears of large flocks of birds at the time! Swarms of robots trigger the same reaction in movies such as I, Robot. Technically, the robots in the latter are centrally controlled and do not rely on swarm intelligence, but… What would be your nightmare scenario? Perhaps massive swarms of molecular-sized robots able to penetrate the skin and get into your brain via the blood vessels. Yuck. But what if they could save your life?

The idea has been around for a long time: in 1959 the physicist Richard Feynman wrote an essay essentially about making things by directly manipulating atoms. It included a reference to the idea of using injected nanoscale robots to perform surgery inside the human body. Nothing much happened until the 1980s, when the concepts of nanotechnology and micromachines as practical realities started to gain traction. Is it possible for devices so small to have enough intelligence? Well, we have the natural models of viruses and bacteria to work from. So far, though, not much progress has been made: ‘molecular motors’ and gears have been produced in the lab, but the only practical realisation of microchip-scale mechanisms has been MEMS technology, which has miniaturised gyroscopes, accelerometers and other sensors.

That leaves the science fiction writers to feed our imagination with what might happen in a future when nanobots with advanced swarm intelligence go bad. Read Michael Crichton’s 2002 novel Prey and be afraid. Or perhaps just a little worried.

References

[1] Swarm Robotics and Minimalism – Amanda J. C. Sharkey

[2] From Ants to Service Robots: an Exploration in Stigmergy-Based Navigation Algorithms – A. Khaliq

[3] Turing Learning: a Metric-Free Approach to Inferring Behaviour and its Application to Swarms – Wei Li, Melvin Gauci and Roderich Groß


Engineer, PhD, lecturer, freelance technical writer, blogger & tweeter interested in robots, AI, planetary explorers and all things electronic. STEM ambassador. Designed, built and programmed my first microcomputer in 1976. Still learning, still building, still coding today.