
There is nothing artificial about intelligence, but AI may have its own opinion.

The idea of creating a machine that could be thought of as ‘intelligent’ is really quite abstract. Intelligence is subjective. Is a flower intelligent because it opens when the sun rises, or is that simply a biochemical reaction that requires no ‘intelligence’ and that the flower has no ability to control? The answer may differ depending on who you ask. A child may say ‘yes’, plants are clever, while an adult may say ‘no’, it’s just a reaction. A growing number of biologists and neurologists are now saying ‘maybe’, because plants reportedly demonstrate the ability to communicate, learn, solve problems and even remember.

This raises an important point that has been present since the dawn of the Enlightenment: we don’t know everything and, sometimes, we don’t even know that. Science often focuses on proving what can be demonstrated empirically, but we are getting much better at reasoned supposition. This is important in the field of AI because there is a lot that we don’t really know about it.

In modern terms, AI can trace its origins back to the dawn of the age of the integrated circuit, in the 1950s. Back then, the idea of making machines intelligent seemed entirely attainable within a few short years. As the scale of the challenge became more apparent, timescales moved outwards. Alan Turing’s eponymous test, originally named the imitation game, was formulated in 1950. It proposed that if a human questioner cannot tell from the answers alone whether they are conversing with a machine or a person, then the machine can be considered intelligent. Later, in 1980, American philosopher John Searle developed a thought experiment called the Chinese room, which exposes a flaw in the Turing test: a person inside the room follows a set of rules to produce convincing answers in Chinese without understanding a word of the language. The answers may all be right, but there is no original thought involved and so no demonstrable intelligence.

However, defining intelligence as original thought very quickly becomes a matter of interpretation. People have thoughts all the time, but they are strongly influenced by previous experiences, so are they original? Can any thought be original once a person has been exposed to any kind of information? If you drill down far enough, the foundation for any thought must be built on something solid, something already known. How can that be original? There is a term for it: a priori, meaning knowledge that is deduced or inferred rather than observed. This is the very opposite of empirical and, perhaps not surprisingly, a term adopted by the AI community in the 1990s.

Defining intelligence in terms of original thought means the concept of an intelligence that is artificial is probably flawed to begin with. Intelligence can be shown in so many ways, so what makes it artificial? The antonyms of artificial include genuine and sincere, both of which can be applied to electronic circuits and software.

But doesn’t that also mean that a system built on trained models should simply be considered intelligent? That may be the philosophy behind AI, but the way in which the models are trained and then implemented is very much where the science comes in.

Deconstructing AI

During the 1970s and 1980s, there were AI systems in many ways comparable to today’s examples, including self-driving vehicles and chatbots, but this work built on a period of research that had largely focused on the physical aspect of AI: making mechanical objects operate with some level of intelligence.

What followed was a period of disillusionment and a corresponding reduction in funding. But in the 90s came the Internet and this arguably helped generate an interest in natural language processing. A flurry of start-ups made it seem for a while like a new era had begun. It didn’t necessarily turn out that way, but more importantly, it encouraged the development of new processing algorithms, enabled by the availability of cheap, powerful computers.

Today’s equivalent ‘killer app’ is probably image recognition, while cloud processing resources have undoubtedly enabled the development of AI. Server farms are able to run AI algorithms across massively parallel platforms, providing all of the heavy lifting needed to make it accessible to many. It has now become about providing AI as a Service.

[Image: Image recognition]

As soon as we enter into a discussion about providing a service, the context narrows. In fact, this kind of AI is often referred to as Narrow AI, because it focuses on a specific task. We can liken this to the Turing test or Searle’s Chinese room. In contrast, General AI is a system that doesn’t need to be told what the task is; it will use intelligence to work that out and come up with an answer. Right now, we have expert systems being used in service industries to accelerate and automate decision making. That kind of AI is buried deep within data centres and it is immensely scalable. We can only guess at how prolific AI is today in these services, but its use is only growing.

In contrast, the deployment of AI at the edge, in small devices, is still an emerging area. This is where we start to talk about neural networks on a chip, machine learning embedded within image sensors, inference engines running on microcontrollers, and edge processing.

AI is often referred to as a superset of machine learning, which is itself a superset of deep learning. At this level, the line between technology and biology really starts to blur.

The lexicon of AI

There are many terms used in the theoretical world of artificial intelligence, some of which have leaked out into the practical world. One of the more recognisable is the neural network, and a growing number of semiconductor companies are developing and marketing integrated circuits that embed a neural network of some kind.

A neuron is a cell that can be stimulated through an electrical signal. As the cells work this way in biology, it shouldn’t be surprising that they have been mimicked in microcircuits. When several of these cells are interconnected, they become a circuit and when many such circuits are interconnected, they become a network. Biological neurons are classified as sensory, motor and interneurons, the latter providing the basis for a neural circuit.
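To make the parallel concrete, here is a minimal sketch of a single artificial neuron in Python; the function name, weights, bias and inputs are arbitrary choices for illustration, not part of any particular library.

```python
# A minimal artificial neuron: weighted inputs, a bias and an activation
# function. All numbers here are arbitrary, chosen only for illustration.

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs (the 'stimulation' of the cell)
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    # A simple step activation: the neuron 'fires' (outputs 1) only when
    # the stimulation exceeds zero, loosely mimicking a biological cell
    return 1 if activation > 0 else 0

# Three input signals, each with its own weight
print(neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.8], bias=-0.5))  # -> 1
```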

In the domain of artificial neural networks, several approaches have become popular. The most popular is probably the convolutional neural network (CNN), which can map to a conventional microprocessor architecture with varying levels of efficiency. Another is the spiking neural network (SNN), which typically employs an entirely new kind of processing architecture that more closely mimics the biological world.
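To give a flavour of what ‘convolutional’ means in practice, the sketch below slides a small kernel over a tiny image using NumPy. The image and kernel values are made up for the example; in a real CNN the kernel values are learned during training.

```python
import numpy as np

# A tiny 5x5 'image' and a 3x3 kernel (a vertical edge detector). A CNN
# layer is essentially many such kernels applied in parallel.
image = np.array([
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
], dtype=float)

kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

# Slide the kernel over the image, taking a weighted sum at each position
h, w = image.shape
k = kernel.shape[0]
feature_map = np.zeros((h - k + 1, w - k + 1))
for y in range(feature_map.shape[0]):
    for x in range(feature_map.shape[1]):
        feature_map[y, x] = np.sum(image[y:y+k, x:x+k] * kernel)

print(feature_map)  # strongest responses where the vertical edge sits
```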

If we assume CNNs to be the ‘conventional’ approach, it isn’t surprising that many companies are now developing a new class of processors that are capable of executing them more efficiently. These NPUs (neural processing units) can integrate alongside other, conventional architectures, such as DSPs and MCUs. The appeal is clear, as it brings neural networks into the design flow with minimal disruption.

Spiking NNs, or SNNs, could be described as the more revolutionary approach. Weighting is used extensively in neural networking; it can be likened to holding two objects in your hands and trying to judge which one weighs the least. That one may be discarded in favour of the ‘heavier’ object, which is then compared with a new object. This iterative process, much like a bubble sort program, will eventually return a result based on the selection criteria, such as which object weighs the most. An SNN introduces the element of time into its reasoning, which is why these networks are seen as more closely representative of the biological world and, probably, also what makes them more difficult to implement in microcircuits.
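To show how time enters the picture, here is a highly simplified, hypothetical leaky integrate-and-fire neuron in Python. The threshold and leak values are arbitrary and chosen purely to demonstrate the behaviour.

```python
# A very simplified leaky integrate-and-fire (spiking) neuron. Unlike the
# weighted-sum neuron of a CNN, its output depends on *when* inputs arrive:
# the membrane potential leaks away between time steps, so only inputs that
# arrive close together in time push it over the threshold and cause a spike.
# All constants are arbitrary, chosen purely for illustration.

def simulate(input_current, threshold=1.0, leak=0.7):
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = potential * leak + current   # integrate, with leakage
        if potential >= threshold:               # fire and reset
            spikes.append(t)
            potential = 0.0
    return spikes

# The same total input arrives twice: the closely spaced burst causes a
# spike at t=1, while the widely spaced inputs leak away before the
# threshold is ever reached.
print(simulate([0.6, 0.6, 0, 0, 0, 0.6, 0, 0, 0, 0.6]))  # -> [1]
```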

Learning to learn

[Image: AI learning]

The way AI learns is also developing. In general, there are three ways a neural network learns: supervised, semi-supervised or unsupervised. In very simple terms, the difference between them relates to how the data is presented. If all of the data is labelled, it is normally a supervised learning system; if some of the data is labelled, it is semi-supervised; and if none of the data is labelled, it is said to be unsupervised.
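A minimal sketch of the difference is shown below, using scikit-learn purely as one illustrative library; the feature values and labels are made up.

```python
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Made-up data: two features per sample
features = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
labels   = [0, 0, 1, 1]   # labels only exist in the supervised case

# Supervised: every sample is presented together with its label
clf = LogisticRegression().fit(features, labels)

# Unsupervised: only the features are presented; the algorithm must find
# structure (here, two clusters) on its own. Semi-supervised learning sits
# in between, with labels for only some of the samples.
km = KMeans(n_clusters=2, n_init=10).fit(features)

print(clf.predict([[0.85, 0.85]]))  # predicts a label it was taught
print(km.labels_)                   # groupings it discovered by itself
```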

Reinforcement learning is normally dealt with as a separate approach, because it uses the concepts of rewards and punishments to modify behaviour, rather than a simpler right or wrong outcome to an action.
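As a sketch of the reward-and-punishment idea, the snippet below applies the core Q-learning update rule; the states, actions and numbers are invented for the example.

```python
# The heart of reinforcement learning: a value table updated from rewards.
# States, actions and all numbers here are invented for illustration.
alpha, gamma = 0.1, 0.9          # learning rate and discount factor
q = {("door_locked", "unlock"): 0.0, ("door_locked", "wait"): 0.0}

def update(state, action, reward, best_next_value):
    # Nudge the estimated value of (state, action) towards the reward
    # received plus the discounted value of the best follow-on action.
    q[(state, action)] += alpha * (reward + gamma * best_next_value
                                   - q[(state, action)])

update("door_locked", "unlock", reward=1.0,  best_next_value=0.0)  # rewarded
update("door_locked", "wait",   reward=-0.1, best_next_value=0.0)  # punished
print(q)  # 'unlock' now has a higher value than 'wait'
```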

One of the flow control techniques used extensively in software, the If…Then statement, maps well to another technology being used in AI: the inference engine. We all infer, based on past experiences. Those experiences can be our own or learnt behaviour. For example, we may scald ourselves with hot water once or twice as children, but we soon learn not to, even when the water isn’t in a saucepan. We can infer that all hot water will scald us. But we can also infer from lessons taught, rather than experienced. For example, we don’t need to fall out of a tree to know it may lead to a broken ankle, particularly if we saw our older brother do it when we were five.
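The link between If…Then statements and inference can be sketched as a toy forward-chaining rule engine; the facts and rules below are invented to mirror the hot-water example.

```python
# A toy forward-chaining inference engine: keep applying If...Then rules
# until no new facts can be inferred. Facts and rules are invented to
# mirror the examples in the text.
facts = {"water_is_hot", "hand_in_water"}
rules = [
    ({"water_is_hot", "hand_in_water"}, "scalded"),
    ({"scalded"}, "avoid_hot_water"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        # If all the conditions hold and the conclusion is new, infer it
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # the engine has inferred 'scalded' and 'avoid_hot_water'
```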

Machines use inference engines to implement AI, and the models they run are often trained using far greater computing resources than are needed to deploy them. This ability to generate an inference model that can be deployed using fewer resources is behind the move to put AI at the very edge of the IoT.

These devices, although undoubtedly unique in their own way, share common features. For example, an endpoint in the IoT will probably feature some form of wireless interface, such as Bluetooth, Wi-Fi or Zigbee. Although one endpoint may be a smart lock and another may be a smart thermostat, they may both use the same wireless interface and so can benefit from sharing the effort needed to develop that interface.

The same is now true for AI models. Transfer learning takes those parts of a trained model that are common across different applications and allows them to be shared. This means the effort (time, computing resources, data sets) spent training the smart lock can be put to further use by transferring some of that knowledge to the smart thermostat. Transfer learning coupled with pruning – paring down the model to provide just those results it needs to focus on – means AI can now fit into embedded processors and even microcontrollers. This will be extremely relevant as AI reaches deeper into the IoT.
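As an illustration of the idea, the sketch below uses Keras and a pretrained MobileNetV2 as one possible choice of framework and base model: the shared layers are frozen and only a small task-specific head is trained. The input size and class count are arbitrary.

```python
import tensorflow as tf

# Transfer learning sketch: reuse a model pretrained on a large dataset as a
# frozen feature extractor, then train only a small task-specific head.
# Keras and MobileNetV2 are purely an illustrative choice; the input size
# and number of classes are arbitrary.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False                     # the shared, transferred knowledge

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. two device states
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(new_task_images, new_task_labels, epochs=5)  # train the head only
# Pruning the resulting model would shrink it further, towards something an
# embedded processor or microcontroller could run.
```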

Still so much to learn

Depending on how you look at it, AI is either older than science itself, or the latest technology on the circuit. Either way, it is here to stay, and it will continue to develop at a rapid rate.

Its use will expand and as it does so it will evolve. It is hard to believe that there will be any application in the future that doesn’t employ some form of AI at some level. That is what makes it such an exciting topic, and why we will strive to bring you the latest developments and insights into AI.
