
Is Number Five really Alive? (Number One is still pretty Dumb)

Finally, after months of late nights working in the lab, this collection of servomotors, microcontrollers and bits of bent metal is ready to come alive. All it needs now is a brain to give it intelligence, to allow it to think for itself, to operate autonomously: to be a robot. But what is ‘intelligence’ and can it be programmed in computer code? I have a ‘Smart’ phone, but what does ‘Smart’ mean? One thing’s for sure, a lot of research is required just to work out where to start.

The first thing to establish is the level of apparent intelligence required for my application: will a few basic sensors driving a handful of yes/no logical decisions suffice, or will I need many sensors with complex outputs and a big database of prior knowledge to decide on the next move? Well, if all I want is a toy robot, or an educational one to teach basic control principles, then the former will do. In the case of a wheeled robot, all you need are a few ‘bump’ switch sensors and some low-level PID control loops (a few lines of code) that drive the motors using feedback from wheel rotation sensors. Small size is also assumed, so if it goes berserk, no lives are put at risk! Make no mistake – this robot does have some ‘intelligence’ because it changes its behaviour in response to sensory inputs.
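Those low-level loops really are only a few lines of code. Here is a minimal sketch of a single-wheel PID speed controller; the gains and the motor/encoder interface are hypothetical placeholders, not values for any particular robot.

```python
class PID:
    """Textbook PID controller for one wheel's speed loop."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        # setpoint: desired wheel speed; measured: speed from the
        # wheel rotation sensor. Returns the motor drive command.
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a real robot, `update()` would be called at a fixed rate from a timer interrupt, with the measured speed derived from encoder counts.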

Understanding the environment

No, my robot is a rescue robot, able to clamber over very rough terrain and carry a heavy load. If it makes a mistake it could flatten the very people it’s trying to help, or indeed the operator. Obviously I’m going to need some sophisticated sensors, and computers able to interpret the sensor data. In other words, my robot must not only be aware of objects around itself: it must know what they are and their significance to its mission. The toy robot can just bump into things and back off; the rescue robot must not ‘bump’ into anything. OK, so how far up a scale of intelligence have we climbed? Not very far. The processing requirements are getting pretty huge nevertheless: a spinning Lidar sensor will provide 3D data of the surrounding environment, but this will need heavy-duty processing before objects can be ‘recognised’ and commands generated for those low-level motor control loops.
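As a taste of the processing involved, here is a deliberately simplified sketch that reduces the spinning Lidar to a single 2D slice: one range reading per degree of rotation, from which the nearest return is extracted so the motor control loops have something to steer around. The function name and scan format are my own assumptions, not any particular sensor’s interface.

```python
def nearest_obstacle(ranges, angle_step_deg=1.0, max_range=10.0):
    """Return (distance, bearing_deg) of the closest return in a scan.

    ranges: list of distances in metres, one reading per
    angle_step_deg of rotation. Readings of zero (no return) and
    readings at or beyond max_range are ignored.
    """
    best_dist, best_bearing = max_range, None
    for i, r in enumerate(ranges):
        if 0.0 < r < best_dist:
            best_dist, best_bearing = r, i * angle_step_deg
    return best_dist, best_bearing
```

A real rescue robot would of course need far more than the nearest point – clustering, ground-plane removal and object classification all sit on top of this – but even this trivial pass has to run once per revolution of the sensor.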

Object recognition

So now we have a very clever and expensive non-contact ‘bump’ sensor, when what’s needed is object recognition – and it must be done in ‘real time’. No problem. There have been major advances in pattern recognition in recent years and many industrial robots can select the correct parts they need from a storage rack (most of the time). Now my rescue robot has got to recognise far more complex objects from any angle. Not too difficult, but for one thing: we’re in a disaster zone and the objects are damaged or partially concealed. In other words, structures are partially demolished and survivors may be partly or wholly buried. This goes way beyond matching images from a database – there are just too many possibilities, and yet the human brain can remember fine details from something ‘learned’ decades before!
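A toy example shows why naive database matching falls apart in the rubble. This hypothetical pixel-agreement score (not any real vision library’s method) works perfectly on an intact object but collapses as soon as half of it is hidden:

```python
def match_score(template, patch):
    """Fraction of pixels that agree between a stored template and an
    observed patch -- the crudest possible database match."""
    agree = sum(1 for t, p in zip(template, patch) if t == p)
    return agree / len(template)
```

Occlude half the object and the score halves, regardless of how distinctive the visible half is – which is exactly why recognising a partly buried survivor needs something far cleverer than template lookup.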

Memory and Recall

In theory, a memory will ‘remember’ everything that gets put into it and, crucially, be able to retrieve that information on demand. We call this random-access memory (RAM) and all current computing technology uses it. Hard disk drives (not strictly random-access) now have capacities up into the terabyte range – that’s one million megabytes. This sounds a lot, but you won’t get, say, 50 years of audio and video input data stored for instant recall – the kind of performance you expect from a human brain. Or not. Try to remember what happened on a particular day last month, for example, or replay in your head part of a movie you watched last year. The fact is the brain seems to remember very little detail about anything unless special learning techniques have been used. Worse, it will ‘invent’ memories if the pressure to recall is intense: a dozen witnesses to a violent criminal incident will come up with a dozen different descriptions of the attacker – ask any police detective.

Whenever I try to remember the name of the famous Reading professor in Cybernetics, Kevin Warwick, I get Richard Dawkins instead. What’s wrong with my memory ‘address decode logic’? In this case, both names are stored, but the wrong one always comes back first. The brain can be very frustrating in the way important data is seemingly lost after a couple of days, while some trivial childhood incident can be recalled in vivid detail. When you ‘store’ a memory, links are formed and these can be reinforced by repeated access, forming the basis of the educational technique known as learning by rote. Even so, the expression ‘Use it or Lose it’ applies: I can’t remember large chunks of the mathematics I was taught when I was 17.

Memory loss

Enabling my rescue robot to remember everything it needs for the relatively narrow function of locating survivors and aiding their continued survival or removal from danger may mean operating a human-style ‘fading memory’. Some sophisticated DSP algorithms use ‘old’ data whose importance degrades as time passes. For our purposes, though, ‘forgetting’ cannot just be a function of time, as some early data may be of vital importance. For instance, as the robot moves through the disaster area it must keep a record of each survivor location or threat (the position of a radiation leak, say). This implies some complex processing at each key location to come up with a level of importance. Now this is where the real agonising over machine versus human-like intelligence begins.
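One way to sketch such an importance-weighted ‘fading memory’: every entry decays over time, but high-importance entries (a survivor location, a radiation leak) decay far more slowly and so survive long after trivial observations have been forgotten. The class, its half-life figure and its threshold are all illustrative assumptions, not tuned values.

```python
class FadingMemory:
    """Importance-weighted forgetting: entries fade with age, but
    vital ones fade far more slowly and effectively never drop out."""

    def __init__(self, half_life_s=600.0, keep_threshold=0.05):
        self.half_life_s = half_life_s
        self.keep_threshold = keep_threshold
        self.entries = []  # (timestamp, importance, payload)

    def store(self, payload, importance, now):
        # importance: 0.0 (trivial) to 1.0 (vital)
        self.entries.append((now, importance, payload))

    def recall(self, now):
        alive = []
        for t, imp, payload in self.entries:
            # Effective half-life stretches with importance, so vital
            # data decays an order of magnitude more slowly.
            decay = 0.5 ** ((now - t) / (self.half_life_s * (1 + 10 * imp)))
            if imp * decay >= self.keep_threshold:
                alive.append(payload)
        return alive
```

After an hour, a vital survivor location is still recalled while a low-importance note about loose rubble has faded below the retention threshold.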

Statistical choices

In the 2004 movie ‘I, Robot’, the leading character, Del Spooner, likes retro-technology and hates the now ubiquitous humanoid robots. This loathing goes back to an incident in which his car crashes into a river after a collision with a truck and he is trapped while the car fills with water. Another car involved in the accident, sitting on the riverbed beside his, contains a little girl, also trapped and about to drown. A passing robot sees the accident and dives into the river. It quickly calculates that it can only rescue one of the two humans in time and it chooses Del. Del is angry because he feels the girl should have been rescued. He knows that the robot made a cold calculation that there was a much higher probability of success taking Del first, but that wasn’t the ‘human’ choice that should have been made. So what do we want from our Artificial Intelligence – logical decisions or emotion-inspired behaviour?
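The robot’s cold calculation amounts to a one-line policy: rescue whoever has the highest probability of being saved. A sketch, with purely illustrative numbers:

```python
def choose_rescue(candidates):
    """Pick the candidate with the highest probability of a successful
    rescue -- the purely statistical policy, with no 'human' weighting.

    candidates: list of (name, p_success) pairs."""
    return max(candidates, key=lambda c: c[1])[0]
```

Encoding the ‘human’ choice instead would mean adding weights for things like the victim’s age – and deciding those weights is exactly the ethical quagmire discussed below.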

Machine Intelligence

The android that performed that rescue in ‘I, Robot’ demonstrated Machine Intelligence, relying on statistical algorithms to determine a logical course of action. If it had used Del’s (human) decision-making process the outcome would have been different. If the captain of the Titanic had been a robot with machine intelligence only, then it is unlikely that the principle of ‘women and children first’ would have been applied: pure logic at that time might have dictated that leaving the men to drown, creating a lot of widows and orphans unable to support themselves, made no sense. However, my rescue robot would seem to be justified in leaving a child that has clearly received a lethal radiation dose and instead removing a man who has a chance of recovery. Now things get really complicated: having determined that a victim has no chance of survival and is suffering considerable pain, would it be ethical for the robot to ‘put them out of their misery’? We as humans make this decision all the time with sick pets – so why shouldn’t an emotionless robot? These are the kind of issues that fuel endless and heated debates amongst humans, so it’s probably best that the robot designer steer clear of them – nobody needs a discussion of ethics in the middle of a disaster zone when quick reactions are required. So the trick is to isolate those areas of human intelligence which can supplement machine intelligence to achieve a better outcome than either can achieve on its own – without ethics becoming an issue.

Machine teaching and learning

How do you teach a robot? Programming the controlling device (usually a microcontroller chip) using traditional languages such as BASIC or C constitutes ‘teaching’. Although a great deal of ‘knowledge’ can be imparted, intelligence is limited to the conditional branch instructions built into the program. Typically the robot’s immediate behaviour is determined by sensor inputs (bump switches, ultrasonic range sensors, etc). Motor control algorithms (PID controllers) will make limited use of sensor ‘history’, but it can be said that the electronic brain does not learn very much after its initial programming. Industrial robot arms can learn a sequence of movements from the human operator, who moves the arm manually while the controller monitors the joint position sensors. The controller can then drive the joint motors using this stored data. You can try this yourself on a small robot driven by servomotors modified for analogue output. Naturally, artificial intelligence (AI) and machine learning are the subject of much research and a lot of impenetrable mathematics has been produced – the Wikipedia entry is a good starting point for a study of techniques, philosophy and ethics. For getting your hands dirty quickly with some practical work, without becoming bogged down in all the maths, I can recommend the book ‘Artificial Intelligence: 23 projects to bring your microcontroller to life’ by Bert van Dam, published by Elektor. A recent article in The Engineer magazine called ‘Almost Human’ provides an interesting discussion on the ethics of making machines act like humans.
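That lead-through teaching of a robot arm boils down to record-and-replay of joint positions. A minimal sketch, with the sensor and motor interfaces passed in as hypothetical callables:

```python
class TeachAndReplay:
    """Lead-through programming: sample joint angles while a human
    moves the arm, then replay the stored trajectory."""

    def __init__(self, read_joints, drive_joints):
        self.read_joints = read_joints    # callable -> tuple of joint angles
        self.drive_joints = drive_joints  # callable(tuple of joint angles)
        self.trajectory = []

    def record_step(self):
        # Called periodically while the operator moves the arm by hand.
        self.trajectory.append(self.read_joints())

    def replay(self):
        # Drive the joint motors back through the recorded poses.
        for pose in self.trajectory:
            self.drive_joints(pose)
```

On real hardware `record_step()` would be called on a timer while the joints are back-drivable, and `replay()` would hand each pose to those low-level PID position loops.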

Is it Alive?

The big question: when can a robot claim to be alive? Here are a couple of thoughts:

  1. Has it become ‘self-aware’? A deeply philosophical and lengthy debate might be curtailed by considering the question of survival. If the robot develops a non-preprogrammed concern over the state-of-charge of its batteries or the fuel level in a tank, and starts to abandon its primary task to try and address the problem, then it might be said to have developed a survival instinct. By itself this could just be an example of simple Emergent Behaviour, but it becomes more significant taken with other unprogrammed effects such as:
  2. The robot attempts to reproduce itself. It will not have the physical hardware to do this, or to refuel itself, but start getting concerned when you find yourself discussing these issues with it over the communications channel.

This sounds rather like the Turing Test, which can be passed (relatively) easily by a computer with access to a huge database of human experience and knowledge. In this case it becomes interesting if the robot initiates the discussion of life and mortality without access to a database containing these concepts. And if a rescue robot gets to this stage, when is it going to decide that a task is too dangerous for its own survival instinct?

Finally…

So we’ve programmed the rescue robot with some sophisticated learning algorithms and spent a lot of time supporting it during its education phase, but when will it be ready to be let loose on its own? The fact is, what we have done at best is to create a machine that is capable of making decisions in a human-like way. Unfortunately the downside is the introduction of human fallibility and the tendency to make mistakes. It could be argued that for our disaster rescue scenario, a real human being with an artificial exoskeleton would combine the best of both worlds – human intelligence with a machine’s physical strength.

In case you can’t recall the 1986 movie ‘Short Circuit’ and are mystified by the title of this blog post, it refers to a military robot (Number 5) that develops benign human characteristics after being struck by lightning. If only it were that easy.

If you're stuck for something to do, follow my posts on Twitter. I link to interesting new electronics components and development kits (some of them available from my employer!) and retweet posts I spot about robots, space exploration and other issues.

Engineer, PhD, lecturer, freelance technical writer, blogger & tweeter interested in robots, AI, planetary explorers and all things electronic. STEM ambassador. Designed, built and programmed my first microcomputer in 1976. Still learning, still building, still coding today.