
Some people, many of them very influential in their fields, believe that AI will evolve to become ‘conscious’. If so, it could reach a point where it fulfils the science fiction novelist’s prophecy of taking over the world and enslaving humankind. But how realistic is this view and what kind of evolutionary path would it need to follow in order to achieve it?

At some point along that path, we would lose control of the technology that empowers and enables AI. To avoid that, we would need to know, with some certainty, when we are approaching that critical inflexion point.

But is this just a classic case of chicken and egg? Of course, we know what came first: us. But without us, AI could not continue to evolve. It would never reach an evolutionary point where it no longer needs us. The egg would not reach a point where it no longer needed the chicken.

If we sail too close to that point, should we kill the chicken or break the egg? Could we turn AI off without hesitation, if we needed to? Or would we be so dependent on it by then that we couldn’t bring ourselves to do it? Do we need an autonomous kill switch and, if so, could we trust it to work in our favour?


These are no longer hypothetical questions. They are serious concerns and are being treated as such.

The technological singularity

We have a long and successful track record of taming technology. What we haven't faced before is a technology that knows it is being tamed. The challenge, simply put, is how we can be sure an intelligence is not just faking compliance. It isn't easy with people, because we have unseen motivations. It is probably easier with domestic pets, because their motivations are simple. Perhaps the answer is to ensure AI can never be motivated by an unseen agent.

We may only be a few short years away from the technological singularity, the point at which the intelligence of technology outperforms human intelligence. The expectation is that this will happen sometime between 2030 and 2045; Elon Musk believes it could be as early as 2025. The technological singularity would be the result of a period of increasingly short cycles, during which AI improves on itself and replicates ever more intelligent versions of itself until, within a few cycles, it becomes more intelligent than us.

What happens then is anyone's guess. A single AI operating alone may not be so hard to disarm, and we shouldn't assume that the IoT will hand AI unassailable, omnipotent reach, either. The key is not to let things go that far in the first place, by building in some failsafe features.

Interestingly, right now, very few people are looking at how those failsafe features might work. In fact, given the distributed nature of AI, it may not even be possible to implement a kill switch at all.

This is one of the potential problems being addressed by an ETSI Industry Specification Group (ISG) looking at Securing Artificial Intelligence: the ETSI SAI ISG. The group's first report, released in January 2021, was the first produced by any organisation or initiative dedicated to securing AI. It gives an overview of the problem statement and identifies some of the basic and broader challenges that exist with AI today, including something as fundamental as being able to understand how it works.

The report also highlights how very different AI systems are from conventional IT systems. For example, because of the way they operate, it is virtually impossible to roll back an AI system to a last known good state. The way an AI system evolves is not incremental and transactional, as it is with other IT systems, so if an AI goes 'wrong', it would be very difficult to roll it back to a version that was 'right'. For the same reasons, backing up an AI system is also difficult. From this point of view, AI is already a living entity of sorts, one that needs to keep moving forward in order to stay active.

And because AI is not like conventional IT, regular security measures such as firewalls are not expected to be effective in securing it. This is where the ETSI SAI ISG is focusing its efforts: identifying the security weaknesses in AI and in the way machine learning develops, and then agreeing ways to address those weaknesses. It has already concluded that conventional IT security solutions are not appropriate, so entirely new approaches will be required. These will cover the way AI is trained, how AI can defend itself from conventional or AI-based attacks, and how AI might itself be used to launch such attacks.
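
To make the idea of an 'AI-based attack' concrete, here is a minimal sketch of one well-studied class of attack, the adversarial example, run against a toy linear classifier in Python. The weights, input and step size are all invented for illustration; real attacks of this family (such as FGSM) work the same way against deep networks, nudging an input just enough to flip the model's decision.

```python
import numpy as np

# Toy linear classifier: positive score => class A, negative => class B.
rng = np.random.default_rng(0)
w = rng.normal(size=16)       # stand-in for trained weights
x = 0.3 * np.sign(w)          # an input the model scores confidently as A

def score(v):
    return float(w @ v)

# FGSM-style attack: move each feature by at most epsilon in the
# direction that most reduces the score. For a linear model, the
# gradient of the score with respect to the input is simply w.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print("clean score:      ", score(x))      # > 0: classified as A
print("adversarial score:", score(x_adv))  # < 0: now classified as B
```

A firewall inspecting this traffic would see nothing unusual, which is exactly why conventional IT security tools do not map onto this kind of threat.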

The weak from the strong

Researchers tend to refer to AI as being either weak or strong. Today, most AI systems would be classified as weak, which simply means the system has been trained to do some things well, but not all things. This covers expert systems such as those used in image recognition; an example might be a system trained to examine medical images for signs of cancer.
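
As a rough illustration of what 'weak' means in practice, the sketch below applies a pretrained image classifier to a single picture. The model (torchvision's ResNet-18) and the filename are just convenient stand-ins for the kind of narrow expert system described above: it does one task, image recognition, and nothing else.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# A "weak" AI: trained for one narrow task (1000-class image recognition)
# and useless outside it. ResNet-18 stands in for any expert system here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("scan.jpg")            # any RGB image (hypothetical file)
batch = preprocess(img).unsqueeze(0)    # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

top = torch.topk(probs, k=3)
print(top.values, top.indices)          # the three most likely classes
```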

Strong AI, also known as artificial general intelligence, is an AI that doesn't need to be trained or given expert knowledge; it learns to do everything, essentially on its own. Some believe that only strong AI can become sentient. This would not necessarily coincide with a technological singularity, but we can probably assume the two are closely linked.

Robots are the human face of AI, and there are moves to make robots more accountable, based on the premise that if something looks human it must be as intelligent as a human. There is no real logic to this, but the idea has taken hold. The indications are that the availability of robots with enough AI to make them human-like will be the real turning point.

Examples such as Sophia, the humanoid robot developed by Hanson Robotics and granted citizenship by Saudi Arabia in 2017, are beacons pointing to the future. The idea of a Robot Ethics Charter, instigated by South Korea in 2007, has propagated around the world; such charters aim to protect both robots and humans from abuse or malicious intent. In 2020, NXP launched an AI ethics initiative that aims to promote an ethical approach to using AI in edge devices. It is built on a framework of five guiding principles, covering non-maleficence, the protection of human autonomy, trust through transparency, risk mitigation, and privacy and security.

Although they are parallel development paths, there seems to be an inextricable link between AI and robotics, making their ultimate convergence not only expected but practically inevitable.

Hardware in the loop

Beneath the polymer skin of a robot, or any device with artificial intelligence, will be a lot of hardware and software. While quantum computing is yet another technology that will change everything, it is still many years away from being the predominant form of processing.

This means that, whatever it looks like to us, it is hard to conceive of any AI that would not be dependent on silicon-based integrated circuits for many decades to come. Not surprisingly, there is a lot of research and development going on at this low level. Software provides the framework for intelligent decision making, but it is hardware that gives that framework structure.

As well as more powerful and more parallel processors based on conventional architectures, such as GPUs and DSPs, there is increasing interest in AI accelerators and new architectures that implement AI in entirely different ways.

Conventional CPU architectures are inherently serial, and although various tricks are used to parallelise the execution path, regular software is still procedural. Hardware is not: FPGAs and ASICs are designed using hardware description languages, such as VHDL and Verilog, that take this concurrent execution into consideration. We don't yet have a new software language for programming AI architectures, but the architectures are coming.
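
The gap between procedural software and concurrent hardware can be hinted at even in Python. In the sketch below, the nested loop mirrors a serial CPU stepping through one multiply-accumulate at a time, while the single vectorised call stands in, very loosely, for the many parallel MAC units in an accelerator or an HDL-described design. It is an analogy, not a hardware model.

```python
import time
import numpy as np

n = 128
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# Serial, procedural style: one multiply-accumulate after another,
# the way a single CPU thread steps through instructions.
def matmul_serial(a, b):
    c = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            acc = 0.0
            for k in range(n):
                acc += a[i, k] * b[k, j]
            c[i, j] = acc
    return c

t0 = time.perf_counter(); c_slow = matmul_serial(a, b); t1 = time.perf_counter()
t2 = time.perf_counter(); c_fast = a @ b;               t3 = time.perf_counter()

print(f"serial loop : {t1 - t0:.3f} s")
print(f"parallel op : {t3 - t2:.5f} s")
print("results match:", np.allclose(c_slow, c_fast))
```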

It is worth remembering that while AI is being used in data centres everywhere to process big data, it is also being used in edge devices, where low-power operation is a key driver. One of the most eagerly anticipated uses of AI at the edge is to make technology easier for people to use. There are great expectations for the role voice will play in this future; using AI to interpret voice commands is perhaps now the archetypal example of an intelligent future.
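
A wake-word detector boils down to a very small always-on loop: grab a frame of audio, extract features, score them with a tiny model, and only wake the main processor on a hit. The sketch below fakes the audio and the model with NumPy, purely to show the shape of the pipeline; a real device runs a trained neural network, often in dedicated low-power silicon like the part described next.

```python
import numpy as np

FRAME = 1600  # 100 ms of audio at 16 kHz, a typical edge sample rate

def features(frame):
    # Stand-in feature extractor: log energy in 8 frequency bands.
    # Real systems typically use MFCCs or mel spectrograms.
    spectrum = np.abs(np.fft.rfft(frame))
    bands = np.array_split(spectrum, 8)
    return np.log1p(np.array([band.sum() for band in bands]))

def wake_score(feats, weights):
    # Stand-in for a tiny neural network; here just a dot product.
    return float(feats @ weights)

rng = np.random.default_rng(1)
weights = rng.normal(size=8)   # invented "trained" model for the sketch
THRESHOLD = 5.0                # set high so false wakes stay rare

# Always-on loop: a real device reads frames from a microphone, and the
# scoring runs in low-power hardware, waking the main CPU only on a hit.
for _ in range(10):
    frame = rng.normal(scale=0.1, size=FRAME)   # synthetic "audio"
    if wake_score(features(frame), weights) > THRESHOLD:
        print("wake word detected -> wake the application processor")
```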

One start-up has developed a neural network for just this application; in fact, it has developed two. Syntiant's Neural Decision Processor™ is based on the company's neural network processor IP. Syntiant claims it can run all of the most popular neural network architectures while consuming very little power, which means it can provide always-on wake-word detection in portable, battery-powered devices.

Almost all of the established processor companies, including Intel, Nvidia and Arm, as well as technology companies such as IBM and Amazon, have made statements about their developments in this area. For the larger companies, the focus seems to be on accelerating AI in the data centre, where the training takes place. This makes sense, because training accounts for the majority of the processing effort needed. Competition reigns here, each vying to prove whose approach is best. Generally speaking, the battle is likely to continue for many generations of processor and accelerator technologies, and we can expect the top slot to be won and lost on smaller and smaller margins over those generations.
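
A back-of-envelope calculation shows why training dominates the compute budget. A widely used rule of thumb puts one training step at roughly three times the cost of a forward pass (forward plus backward), paid for every example in every epoch, whereas deployment pays one forward pass per query. All the numbers below are invented purely to illustrate the ratio.

```python
# Rough compute comparison, using the rule of thumb that one training
# step costs ~3x a forward pass (forward + backward).
params = 100e6               # a 100M-parameter model (illustrative)
flops_forward = 2 * params   # ~2 FLOPs per parameter per example

examples = 10e6              # training-set size (illustrative)
epochs = 10
train_flops = 3 * flops_forward * examples * epochs

queries_per_day = 1e6        # inference requests served (illustrative)
infer_flops_per_day = flops_forward * queries_per_day

print(f"training : {train_flops:.2e} FLOPs, paid once")
print(f"inference: {infer_flops_per_day:.2e} FLOPs per day")
print(f"training equals ~{train_flops / infer_flops_per_day:.0f} days of serving")
```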


At the edge, where the majority of AI and machine learning will take the form of inferencing, there are far more propositions vying for position. Start-ups abound here, as market analysts' forecasts show. These are not household names and may never be, as the business model for many probably includes being acquired by a larger, more recognisable brand.

What is common among them is their focus on specific applications. This is what we may refer to as weak AI, or expert systems. Machine vision is one such application, particularly when applied to emerging trends such as autonomous vehicles.

Predicting the near future


Based on what we know, it shouldn't be too difficult to predict what is likely to happen with AI in the near future. We can be the 'AI' in this scenario: examining what we can observe, correlating it with what we know, and inferring what will happen as a result.

Weak AI and inferencing will become more common at the network's edge, implemented using accelerators and optimised processing architectures. These accelerators will probably work in conjunction with more conventional embedded processor architectures. At the same time, we can expect those general-purpose architectures to evolve to include more AI-centric features and instructions, such as support for low-precision arithmetic.
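
One concrete example of an 'AI-centric feature' is low-precision integer arithmetic, which lets edge devices trade a little accuracy for much cheaper silicon and lower power. The sketch below shows symmetric int8 quantisation of a dot product, the basic operation such instructions accelerate; it is a simplified illustration, not any particular vendor's scheme.

```python
import numpy as np

# Symmetric int8 quantisation: the reduced-precision arithmetic that
# edge accelerators and AI-centric CPU instructions are built around.
def quantise(x):
    scale = np.abs(x).max() / 127.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale

rng = np.random.default_rng(2)
weights = rng.normal(size=256).astype(np.float32)
activations = rng.normal(size=256).astype(np.float32)

qw, sw = quantise(weights)
qa, sa = quantise(activations)

# Integer multiply-accumulate (cheap in silicon), then one float rescale.
int_dot = int(np.dot(qw.astype(np.int32), qa.astype(np.int32)))
approx = int_dot * sw * sa

print("float32 dot:", float(np.dot(weights, activations)))
print("int8 approx:", approx)
```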

Right now, those features are numerous and specific, creating huge fragmentation in the market. The electronics industry has a history of smoothing out bumps like this, so we can expect those fragments to congeal, either through acquisition or general consensus. At some point, all edge processors should support at least a common, basic level of AI or machine learning in much the same way, creating a certain amount of compatibility.

Things may be different in the data centre, where the architectural protagonists are less likely to play nicely. Often in these situations the loudest voice gets heard most, but that doesn't always mean the technically best solution dominates; history shows us that it isn't always (or even usually) the best technology that wins. However, in the case of AI, where the stakes are so high, we can probably anticipate more prudent discrimination. In this scenario, it is more likely that there will be space for at least three dominant players.

In terms of weak AI and strong AI, the landscape is unlikely to change drastically in the near future. Expert systems able to infer based on specific data will dominate in edge processing applications, while strong AI will develop within the confines of the data centre and server farm. That means we are unlikely to see robots that can perfectly imitate people any time soon. But we may get more annoying phone calls from automated services that pass as people, trying to sell us services we don’t need at amazingly low prices.
