
Just How Smart is Artificial Intelligence?

Over the last few months, I’ve noticed a significant reduction in the heady optimism of articles and opinion pieces in the press on the subject of artificial intelligence. In fact, many are now questioning the timescales given by manufacturers for the introduction of high-profile applications such as autonomous vehicles based on AI. If you are of a certain age, you might feel we’ve been here before…

Atlas the running robot.
Image credit: Boston Dynamics

The Hype Cycle

Unlike most other new technologies, artificial intelligence and robotics have gone through more than one ‘hype cycle’ – overexcitement at the possibilities and rapid development, followed by a reality check and a period of disappointment (the so-called AI Winter) before the next ‘breakthrough’. Most new technologies, the airplane for example, work from the outset. The Wright brothers’ first effort actually flew; not very far, it’s true, but it did achieve what its designers wanted and it impressed the general public. Sure, visionaries saw the potential and I’ve no doubt there was much talk about a future of mass travel over large distances – early 20th-century newspaper hype – long before it became a reality. Aircraft development has been an almost continuous process since then, with dreams becoming reality at a breathless pace, culminating in the supersonic Concorde and vertical take-off/landing (VTOL) military jets. Communication is another example: we’ve gone from a basic voice telephone network in the 1950s to the smartphone and the Internet in less than 50 years. Yet robots with the capabilities imagined in the 1920s have yet to appear.

Robot Intelligence Then and Now

The first humanoid robot in the UK was demonstrated in 1928. Unfortunately, the original Eric the Robot has been lost, but here is a video of a replica built for the London Science Museum a few years ago:

Eric’s original motors, pulleys and clutch mechanisms have been replaced by modern electronics and servo motors, but the replica gives a good idea of what the original could do. Bear in mind that the original had neither an electronic brain nor a speech synthesiser. Eric couldn’t even walk. So how far have we come with humanoid robots? The following video shows Boston Dynamics’ Atlas and friends in action at the end of 2020:

Atlas is obviously a major advance on Eric – just look at those moves, they’re so… human. Unfortunately, this is not an example of machine intelligence: all those dance moves are the work of a human programmer/choreographer. That’s not to say there isn’t a huge amount of computer processing power at work here. Robots generally use computer software to drive the sensor-motor control loops for each joint, perhaps using a PID algorithm. The sensor may be a basic shaft-rotation (quadrature) encoder or, for direct monitoring of limb motion, a 3-axis accelerometer/gyroscope device known as an Inertial Measurement Unit (IMU). Atlas has 28 joints and is able to move its heavy bulk smoothly and accurately using hydraulic actuators. But getting that natural movement takes some very sophisticated kinematic algorithms and, yes, more powerful computers – all working in real time, of course. There are even cameras and Lidar fitted so Atlas can ‘interact’ with its environment. Atlas is clearly far more advanced than Eric, and yet if you could transport him back to 1928, would Eric’s audience be overwhelmed by nearly 100 years of progress in robotics? Or, after a while, would they say: “Is that all it can do?”
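
To make that more concrete, here is a minimal sketch, in Python, of the kind of PID position loop that might run for a single joint. The gains, the one-millisecond update rate and the one-line ‘joint’ model are invented for illustration – this is not Atlas’s real control software.

```python
# Minimal single-joint PID position loop. All gains and the crude joint model
# below are invented illustration values, not taken from any real robot.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        """Turn the position error into an actuator command."""
        error = setpoint - measured          # e.g. degrees, from an encoder or IMU
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Toy example: drive one joint from 0 to 45 degrees, updating every millisecond.
pid = PID(kp=10.0, ki=2.0, kd=0.05, dt=0.001)
angle = 0.0
for _ in range(1000):
    command = pid.update(setpoint=45.0, measured=angle)
    angle += command * 0.001                 # crude model: the command sets joint velocity
print(f"Joint angle after 1 second: {angle:.1f} degrees")
```

A robot like Atlas would run a loop of this general shape for each of its 28 joints, at a much higher rate and with far more sophisticated models in between, but the basic sense-compare-correct structure is the same.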

Artificial Intelligence

We humans have a somewhat ambivalent attitude towards robots. On the one hand, we look at Atlas dancing and think: “Great moves, but you’re never going to be the Terminator”. On the other hand, attach the phrase ‘Artificial Intelligence’ and suddenly we believe we’ll all be driven around in perfect safety by a robotic chauffeur next year. AI is one area of modern technology where, for the last century at least, both layman and scientist have succumbed to unrealistic expectations. Perhaps it’s because it involves a concept everybody can superficially understand: creating an artificial version of ourselves.

In the 1930s only a handful of people knew anything about nuclear physics and atomic fission. Even fewer realised that within a decade engineers would use this new knowledge to deliver a massively destructive weapon: the atomic bomb. And the world was changed forever. The creation of a mechanical human being has been on everybody’s wish-list (or dread-list) since Eric performed on stage. How hard could it be? As it turned out, very hard, and the effort exposed just how little we know about the workings of our own natural brain. Attempts to create an artificial version of something as complex as natural intelligence, with little understanding of how it does what it does, are ultimately doomed to fail.

First Cycle

In 1932, Edward Thorndike, a psychologist at Columbia University in the USA, suggested that human learning consists of some unknown property of connections between neurons in the brain. Later, in 1949, Donald Hebb, another psychologist, postulated that learning specifically involves strengthening certain patterns of neural activity by increasing the probability (weight) of induced neuron firing across the associated connections. So, by now, researchers with an interest in creating an artificial brain believed they had a model of the human version they could mimic in hardware. This approach to AI, based on a network of connected neurons, became known as Connectionism and is often referred to as a ‘bottom-up’ method. An alternative ‘top-down’ approach to describing the way a brain works became popular at the same time. Known as the Computational Theory of Mind (CTM), this theory starts with the assumption that the human ‘mind’ is a computational system with thought processes analogous to what we would now recognise as software running on a digital computer. In 1936 Alan Turing invented his ‘Turing Machine’, a mathematical model of a physical device that could perform any calculation. Computationists used it as the platform to support their view of natural intelligence; Turing saw it as a way of achieving artificial intelligence.

Through the 1950s, much work was done on CTM, principally because theoretical computational machinery had evolved into practical hardware – digital computers – on which the first ‘intelligent’ programs could be run. The first ‘AI’ program ran on a Ferranti Mark 1 at Manchester University in 1951. It could eventually play a fair game of Draughts against a human opponent at a reasonable pace. Game playing against human opposition has become the standard way of demonstrating advances in computer ‘intelligence’ to the public with Deep Blue playing chess in 1996 and latterly AlphaGo playing Go in 2015.

Meanwhile, as all this computational activity was going on, Connectionism became focussed on the Perceptron, invented in the late 1950s. AI hype really took off with the Perceptron machine, which was based on an artificial neuron. My article on DSP describes a simple implementation of a single Perceptron neuron. It’s known as a linear binary classifier as it provides a True output if the binary pattern applied to its inputs matches the pattern it has ‘learned’ previously. The memory of that pattern is stored in the values of the input ‘weights’. The clever bit is that the device can teach itself – that is, derive the weight values – if the desired pattern is applied to the inputs first. An error signal – the difference between the expected and actual outputs – is used to adjust the weights until the error is reduced to zero.

An obvious task for a Perceptron machine is optical character recognition (OCR). The machine would require a minimum of 26 neurons, one for recognising each letter of the alphabet, plus ten more for numbers. An optical scanner could consist of an 8 x 8 matrix of phototransistors, requiring a total of 64 weighted inputs to each neuron. It would make a nice DSP project!
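
To put rough numbers on that, here is a minimal sketch in Python/NumPy of the classic perceptron learning rule applied to a bank of 36 neurons, each with 64 weighted inputs from a hypothetical 8 x 8 scanner. The ‘training images’ are random placeholder patterns rather than real scanned characters.

```python
import numpy as np

# A bank of 36 perceptron neurons (26 letters + 10 digits), each fed by 64
# binary inputs from a hypothetical 8 x 8 photodetector array. The training
# patterns below are random placeholders, purely for illustration.

N_INPUTS, N_CLASSES, LEARNING_RATE = 64, 36, 0.1

rng = np.random.default_rng(0)
patterns = rng.integers(0, 2, size=(N_CLASSES, N_INPUTS)).astype(float)  # one fake 'glyph' per class
targets = np.eye(N_CLASSES)                  # neuron k should fire only for pattern k

weights = np.zeros((N_CLASSES, N_INPUTS))    # one row of input weights per neuron
bias = np.zeros(N_CLASSES)

def step(z):
    """Hard-threshold activation: 1 if the weighted sum is positive, else 0."""
    return (z > 0).astype(float)

# Classic perceptron rule: adjust each weight by (expected - actual) * input.
for epoch in range(50):
    total_error = 0.0
    for x, t in zip(patterns, targets):
        y = step(weights @ x + bias)
        error = t - y                        # per-neuron error signal
        weights += LEARNING_RATE * np.outer(error, x)
        bias += LEARNING_RATE * error
        total_error += np.abs(error).sum()
    if total_error == 0:                     # every neuron now classifies every pattern
        print(f"Converged after {epoch + 1} epochs")
        break
```

With real character bitmaps in place of the random patterns, the same structure would be the basis of the simple OCR machine described above.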

During this period, up until the early 1970s, government-funded AI research focussed on language processing. Perceptron networks were seen as the best way forward for automated language translation. A vast quantity of money was spent before it was realised that the complexity of language was just too much for what amounted to an image pattern recognition system. Interest in the connectionist approach to AI faded, and the first AI Winter set in, lasting until the 1980s.

Second Cycle

The next hype cycle was based around a method of programming a conventional computer so that it behaves as an ‘Expert System’, able to emulate the diagnostic capabilities of a human medical doctor, for example. The program consists of two parts: the knowledge base, containing all the facts, assertions and rules, such as symptoms linked to diseases and other medical conditions, and the inference engine, which takes in the patient data and interrogates the knowledge base. Programming languages such as LISP and Prolog were used to create these expert systems, which operated in a completely different manner to conventional procedural code. There was even a version of Prolog written for the Sinclair Spectrum computer! Development of expert or knowledge-based systems continued into the 1990s, but it was becoming clear that they did not represent artificial intelligence, and commercial usage petered out because of the difficulty and time involved in transferring human expertise to the knowledge base. The knowledge bases, despite their vast size in terms of memory space, were limited to narrow subject areas – a RoboGP to replace your local doctor never appeared. Another problem was administration: careful auditing was needed to remove wrong facts and dodgy rules ‘learned’ automatically. Once again, an AI winter set in, this time until the mid-2000s.
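
For a flavour of the knowledge-base/inference-engine split, here is a toy sketch in Python using a handful of invented medical-style rules; real expert systems were written in languages such as LISP and Prolog and held vastly more knowledge.

```python
# Toy expert system: a tiny knowledge base of (conditions -> conclusion) rules
# plus a forward-chaining inference engine. The 'medical' rules are invented
# examples for illustration, not real diagnostic knowledge.

KNOWLEDGE_BASE = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"possible flu", "shortness of breath"}, "refer to doctor"),
]

def infer(facts, rules):
    """Keep applying any rule whose conditions are all known facts,
    adding its conclusion to the fact set, until nothing new is deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

patient_data = {"fever", "cough", "shortness of breath"}
print(infer(patient_data, KNOWLEDGE_BASE))
# 'possible flu' is deduced first, which then triggers 'refer to doctor'.
```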

Third Cycle

Back in the 1970s, it was realised that a single ‘layer’ of artificial neurons was never going to be able to classify anything more than a handful of simple objects, each well-defined and checked to make sure that the weighted sum feeding each activation function was unique. The addition of another, ‘hidden’ layer of neurons allowed far more complex classifications. In fact, many such layers can be stacked, each feeding the next through its own activation function. A neural network with multiple layers becomes a platform for what is now called ‘Deep Learning’ (DL). At last, everybody thought (again), we can build a machine with the complexity of the human brain, a sentient robot and so on. Larger and larger networks are being built on the new generation of multi-core processors: connectionism is back in fashion. In spite of all the improvements in hardware technology, the usual AI disillusionment has begun to set in once more as it becomes apparent that Deep Learning is still not enough to create a ‘thinking’ robot; it falls well short of Artificial General Intelligence. But the successful demonstration of prototype autonomous vehicles in the DARPA Urban Challenge of 2007 resulted in massive amounts of research funding becoming available to develop a production driverless car. An essential part of a robotic car is its vision system, which must be able to ‘see’ objects and classify them correctly, in real time – an obvious job for the advanced object recognition and classification abilities of an artificial neural network (ANN) based on Deep Learning.
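
As a minimal illustration of why the hidden layer matters, here is a sketch in Python/NumPy of a tiny two-layer network trained by gradient descent to learn XOR – a pattern that a single layer of perceptrons provably cannot classify. The layer sizes, learning rate and iteration count are arbitrary choices for the demo.

```python
import numpy as np

# Tiny two-layer network learning XOR -- the classic pattern a single layer
# of perceptrons cannot classify. All sizes and constants are arbitrary.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 4))                 # input -> hidden (4 hidden neurons)
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))                 # hidden -> output
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2).ravel())   # should approach [0, 1, 1, 0]; an unlucky seed may need more iterations
```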

Winter Approaches

Since 2015, car manufacturers have been promising to introduce a driverless car ‘within the next year or two’. It’s now 2021 and nothing much has appeared. The project has fallen at the first hurdle: coming up with a computer vision system able to classify objects outside the car correctly, say, 99% of the time or better – an obvious safety requirement. At the moment, DL networks for vision are ‘trained’ using enormous quantities of images, each tagged with what the pictured object is: car, pedestrian, bicycle, etc. The output file of this training program contains all the weights for every neuron connection in the network. Usually, the network is then tested in the lab by asking it to classify objects from a completely different set of images. Often 95% success rates are recorded, and so the system is taken on the road with a live image feed from a video camera. Guess what? Correct classification falls below 50% within minutes. The reason is clear when you think about it: the training algorithm acts like an image compression algorithm, trying to extract and code the minimum of information needed to recognise any car, of any colour, viewed from any angle at any distance, and so on. Except it can’t. The views out of a car window are infinite in their variety, and it’s probably true to say that to achieve near-100% classification accuracy the training data set would have to contain something approaching an infinite number of images. A deep learning AI computer may seem very impressive beating human experts at games such as Go, but it can’t do anything else. Even worse, it isn’t even aware that it’s a machine playing a board game. Oh boy. It seems to be getting a little chilly in here…

What does a current AI robot brain lack? Common-Sense

The human brain doesn’t just react to what it senses at the time. Sensor information is combined with previously obtained knowledge – what we call experience – to make a decision on what to do next. We all know what common-sense is: don’t touch a newly boiled kettle or you’ll be scalded; while driving past a school, slow down. In fact, your memory is packed full of these pieces of wisdom, and retrieval is automatic.

How do you give a computer ‘common-sense’? No doubt part of the natural version is built in at birth via genetic inheritance. We already have the machine equivalent: it’s the operating system that gets boot-loaded on power-up. But experience is accumulated (learned) over time by a process of trial and error. You can’t wait for your new autonomous car to learn the hard way how to deal with real-world driving conditions. If the car must gain common-sense on the road, how can you ensure that it learns the right things? Will it pick up bad driving habits just like us humans? Perhaps the answer is to give each newly-made auto-driver brain a lifetime’s experience of driving via simulation, followed by a driving test. The trouble is, that’s how DL image systems are trained now – and we know that’s not good enough. Artificial common-sense is the subject of a lot of research activity; if it can be cracked and combined with DL, it might just get us into the next AI summer.

Neuromorphic Computing and Spiking Neural Networks

The very gradual process of learning how the human brain works is carrying on in the background, and engineers are studying new discoveries to see if any can be used to move AI forward. One such discovery is the way natural neurons communicate with each other. It seems that messages consist of current pulses or ‘spikes’, where the time separation between the spikes carries the information. A neuromorphic computer built on this principle will require a completely different method of programming from a ‘normal’ digital computer. Indeed, it may see the return of analogue components in place of binary digital logic. A key component is likely to be the Memristor, a device whose resistance is determined by the timing of the current pulse sequence applied to it, and which retains that resistance as a form of non-volatile memory. If artificial neurons start to look and behave like their natural counterparts, we should see a massive improvement in processing performance and power consumption over the traditional Von Neumann architecture of digital computers. The physical size/performance ratio of such a machine should also become comparable with a natural human brain. Until that time comes, we can use prototypes like SpiNNaker, with simulated spiking neurons, to work out how to program, if that’s the right word, the new generation of AI.

SpiNNaker: a 1 million neuron Spiking Neural Network computer based on artificial spiking neurons emulated by conventional ARM9 processor cores.
Image credit: Manchester University
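
To give a feel for the spiking idea, here is a rough sketch in Python of a single leaky integrate-and-fire neuron, one of the simplest spiking models of the kind simulated on machines like SpiNNaker. All the constants are arbitrary illustration values.

```python
import numpy as np

# Single leaky integrate-and-fire neuron. Incoming spikes nudge the membrane
# potential up, it leaks back towards rest, and crossing the threshold emits
# an output spike and resets the potential. All constants are illustrative.

dt = 1.0              # time step, ms
tau = 20.0            # membrane leak time constant, ms
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
spike_weight = 0.3    # jump in potential per incoming spike

rng = np.random.default_rng(2)
input_spikes = rng.random(200) < 0.1          # random input spike train (~10% of steps)

v = v_rest
output_spike_times = []
for t, spike_in in enumerate(input_spikes):
    v += (v_rest - v) * (dt / tau)            # leak towards the resting potential
    if spike_in:
        v += spike_weight                     # an input spike charges the 'membrane'
    if v >= v_thresh:
        output_spike_times.append(t)          # fire...
        v = v_reset                           # ...and reset

print("Output spikes at times (ms):", output_spike_times)
```

Notice that the information here is carried entirely by when the spikes occur, not by continuously valued signals – which is why the usual weight-matrix style of programming does not map directly onto this kind of hardware.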

Are there any applications for today’s AI?

Of course, there are. Forget all the hype about robots taking everybody’s jobs, dominating the world or rapidly improving our lifestyles with new ideas thanks to their quick-thinking superior brainpower. That all lies far in the future. If at all. Small-scale AI has been solving seemingly trivial design problems for years. For instance, a smartphone that comes to life when it’s picked up uses a small trained neural network to recognise the sequence of data coming from the on-board accelerometers and gyroscopes that indicates such a movement. Notice the word ‘indicates’ – the phone infers from the sensor data what is happening. It might be wrong and fail to switch on the main processor when you pick it up, perhaps on 1 in 20 occasions. That error rate is small, and when it does fail, you probably won’t even notice. AI is ideal for making yes/no decisions based on ‘fuzzy’ input data, where the latter can have a range of acceptable values. I remember an application from the early 1980s. Designed for the London Underground railway, it took the video feed from a station camera looking along the platform and had three outputs: platform full of people, half-full and empty. The network probably consisted of three Perceptron-type neurons. I’m not sure what the outputs controlled; maybe station entrance gates. You can see how the AI makes a flexible ‘judgement’ on each of the three platform states. It really doesn’t matter if there are a few people on a platform flagged as empty or, at the other extreme, if a ‘full’ platform is not quite full.
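
As a rough sketch of the sort of small-scale inference involved, here is a toy Python example: a single pre-‘trained’ neuron deciding picked-up/not-picked-up from two hand-picked accelerometer features. The features, weights and sample data are all invented for illustration; a real phone would run a small trained network on a low-power sensor coprocessor.

```python
import numpy as np

# Toy 'wake on pick-up' decision: one pre-trained neuron looking at two simple
# features of a short accelerometer window. The weights, threshold and sample
# data below are all invented for illustration.

def features(accel_window):
    """Reduce a window of 3-axis accelerometer samples (in m/s^2) to two
    features: overall movement energy and the change in tilt (z-axis)."""
    magnitude = np.linalg.norm(accel_window, axis=1)
    energy = magnitude.var()
    tilt_change = abs(accel_window[-1, 2] - accel_window[0, 2])
    return np.array([energy, tilt_change])

weights = np.array([4.0, 2.5])    # placeholder 'learned' weights
bias = -1.0

def picked_up(accel_window):
    score = weights @ features(accel_window) + bias
    return score > 0              # fuzzy sensor data in, yes/no decision out

# Fake data: a phone lying still versus one being lifted and tilted.
rng = np.random.default_rng(3)
still = np.tile([0.0, 0.0, 9.8], (50, 1)) + rng.normal(0, 0.02, (50, 3))
lifted = still + np.linspace(0, 1, 50)[:, None] * np.array([0.5, 0.3, -3.0])

print(picked_up(still), picked_up(lifted))    # expect: False True
```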

These examples suggest that applications for the Internet of Things may benefit from AI – anything, in fact, that does not have an obvious safety issue. Driverless vehicles, with their potential to behave dangerously, are clearly not possible – yet. Small-scale AI can be implemented on quite modest computer platforms and there are a number of affordable systems on the market, available from RS Components: the NVIDIA Jetson Nano (204-9968) and the BeagleBone AI (190-1825). Adafruit Industries have just brought out an expansion board for the Raspberry Pi 4 called the BrainCraft Machine Learning HAT.

A last-minute update: the Raspberry Pi Pico, featuring an in-house-designed dual-core Cortex-M0+ microcontroller, has just gone on sale for about 4 GBP (212-2162). A port of TensorFlow Lite is already available for it!

The concept of working with Machine Learning on a small scale has become known as TinyML and is becoming popular with engineers working on ‘Edge’ applications.

Caenorhabditis elegans

C. elegans is a transparent nematode (worm) about 1mm long that lives in temperate soil environments. This insignificant creature gets an incredible amount of attention from AI scientists because its entire ‘brain’ and nervous system have been fully mapped. The adult consists of precisely 959 cells, 302 of which are neurons: a complete biological entity that we should be able to simulate on a computer. The OpenWorm open-source project was set up in 2011 to see if it would be possible to make a simulation behave like the real thing. It’s still an ongoing project and anyone can get involved via the OpenWorm website. What are the chances of mapping the human brain with its 100 billion neurons?

Finally

Are we on the brink of another AI Winter as the limitations of Deep Learning become apparent? Perhaps not, if there is a breakthrough in our understanding of the way a brain works. Even if that knowledge is forthcoming, is current semiconductor technology up to the task of using it? It may be that the answer lies in the world of quantum mechanics. If so, then to use an expression frequently found in Rodney Brooks’ prediction blog posts: NIML (Not In My Lifetime).

Further Reading

Rodney Brooks, one of the pioneers of AI and robotics, has a very informative website featuring blog posts that read more like tutorials. One, in particular, is a regular update on predictions he made for technology in 2018. Very much an antidote to hype.

Alan Turing’s prophetic paper on Intelligent Machinery. 1948.

A useful review of security, reliability, and vulnerability problems of current Machine Learning applications. Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead. 2021

If you're stuck for something to do, follow my posts on Twitter. I link to interesting articles on new electronics and related technologies, retweeting posts I spot about robots, space exploration and other issues.

Engineer, PhD, lecturer, freelance technical writer, blogger & tweeter interested in robots, AI, planetary explorers and all things electronic. STEM ambassador. Designed, built and programmed my first microcomputer in 1976. Still learning, still building, still coding today.