
Who’s afraid of Intelligent Machines?

Blue: A gentle, unthreatening Cobot folding laundry      Image Credit: UC Berkeley

Fictional robots have always been supremely intelligent and physically much stronger than feeble humans. The reality is that yes, they can be made more robust than us, but as intelligent? Nowhere near. And yet as soon as the word ‘robot’ is mentioned, most people think of the Terminator movies with the evil controlling intelligence of Skynet and humanoid robots looking like chrome-plated human skeletons. Is humanity destined to be enslaved by sentient machines because of our obsession with creating ‘Artificial Intelligence’ (AI)?

Early Mechanical Intelligence – and Fraud

We humans have always been fascinated by ‘Automata’ – machines that perform complex tasks without any apparent human intervention. Into this category come clocks and watches of course: mechanical contrivances that do a simple job well. One of the first machines that appeared to possess real powers of thought was the 18th-century chess-playing automaton known as ‘The Turk’. Of course, it was very much a human brain behind its playing skills, but it did inspire Charles Babbage to design his Difference Engines: early machines for performing mathematical calculations. The Turk wasn’t autonomous, but it did have many of the movement mechanisms needed to create what was later called a ‘Humanoid Robot’. Artificial intelligence would have to wait for the invention of electronics because, as Babbage found out, the mechanical engineering of the time was not up to the task.

Eric the Robot

Fast forward to 1920 when the term ‘Robot’ was first coined in the stage play “Rossum’s Universal Robots” by the Czech writer Karel Čapek. Some years later in 1928, a humanoid robot called Eric was constructed to deliver the opening address at a UK Model Engineering exhibition. Eric could stand up, move his arms and head, and “talk”. A working replica is on display at the London Science Museum.

Replica of Eric: The UK's First Robot (1928)          Image Credit: Science Museum

Again, no machine intelligence, but Eric did help generate the popular image of a robot looking like a somewhat sinister man in a medieval suit of armour. Eric’s teeth flashed with 35,000-volt sparks as he ‘spoke’, perhaps adding to a feeling of unease among the audience! Although, to be honest, nowadays most audiences would probably react with amusement rather than fear. Eric may be humanoid in form, but he can’t walk and is not realistic enough to fall into the Uncanny Valley.

Three Laws Safe

Isaac Asimov published his Three Laws of Robotics in 1942 as part of a science fiction short story called ‘Runaround’. This and other stories about human interaction with robots and artificial intelligence were later collected together to form the book: I, Robot. It was the first attempt to study the ethics of future AI and introduced the now-famous laws designed to prevent an imagined robot ‘Armageddon’:

  • 1st Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • 2nd Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • 3rd Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A ‘zeroth’ law was added later:

  • 0th Law: A robot may not injure humanity, or, by inaction, allow humanity to come to harm.

Asimov was way ahead of his time in predicting the need for rules governing the construction and operation of a machine that could potentially become self-aware and place itself above humanity in the ‘food chain’.

The Present Day

It seems incredible, but nearly 90 years later, real humanoid robots, as opposed to fictional ones, have advanced little in terms of intelligence. The mechanical hardware with its associated low-level control electronics is now pretty sophisticated: check out the developments at Boston Dynamics. Asimov’s laws went unchallenged for decades because they applied to future robots imagined to be both physically and intellectually superior to their human creators. Until recently we had little to fear from the unsophisticated multi-jointed arms welding cars together in a factory: they’re rooted to the spot, they only do what their programmers tell them, and fences keep human workers safely out of reach. Now we may have a problem: potentially lethal mobile robots may soon be on the factory or warehouse floor, and AI in the form of Machine/Deep Learning makes it very difficult to guarantee safe operation. Take a look at this video of Boston Dynamics’ Handle warehouse robot in action:

Learning to Kill

Very lifelike humanoid robots may freak you out, but if an AI-powered Handle mistook you for a cardboard box, it could cause you some serious injury. Is that possible? It’s more than possible: a big problem with Deep Learning is the inadvertent introduction of ‘bias’ into the initial learning process. For example, if the algorithm were presented with images of all sorts of boxes tagged ‘correct’, along with pictures of other things like people tagged ‘incorrect’, then it should learn to recognise only boxes as targets. Here’s a possible snag: if all the boxes in the images are the same brown colour and the real warehouse staff all wear matching brown coats, then the inference engine running on the robot, looking at real-time images from its camera, may infer from its dataset that all brown objects are boxes…

Bias is a very real limitation of Deep Learning. It can be overcome, in this case by including plenty of brown-coated staff in the original image set. Very often it’s not that simple, and the number of training images needed to ensure accurate recognition may be huge. You will only find out how good your dataset is when it’s tried out in the real world. Some object-recognition software for a driverless car application worked just fine with test images, but failed miserably out on the road. It’s not an exact science – you work with probabilities, not certainties – just like human intelligence. The question is: are we prepared to work alongside machines that could make mistakes because, just like us, they learn their skills? They are no longer just programmed.
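To make the brown-box scenario a little more concrete, here is a minimal sketch in Python. It is emphatically not the robot’s real vision system: instead of camera images, each object is boiled down to two invented numbers (‘brownness’ and ‘boxiness’), and a one-question decision tree stands in for the deep network, so you can watch a biased dataset teach the wrong rule and a more varied one fix it. All values, names and the choice of classifier are illustrative assumptions.

```python
# Toy illustration of dataset bias - NOT a real vision pipeline.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def samples(n, brownness, boxiness, label):
    """n objects clustered around the given cues; the colour cue is tight,
    the shape cue is noisy (boxes get squashed, people crouch and carry things)."""
    X = np.column_stack([
        np.clip(rng.normal(brownness, 0.05, n), 0, 1),   # 'brownness'
        np.clip(rng.normal(boxiness, 0.15, n), 0, 1),    # 'boxiness'
    ])
    return X, np.full(n, label)

# Biased training set: every box is brown, every person wears something else.
X_box,    y_box    = samples(200, brownness=0.85, boxiness=0.75, label=1)  # tagged 'correct'
X_person, y_person = samples(200, brownness=0.25, boxiness=0.30, label=0)  # tagged 'incorrect'
X_train = np.vstack([X_box, X_person])
y_train = np.concatenate([y_box, y_person])

# A one-question learner: it keys on whichever single cue splits the
# training data most cleanly - with this deliberately biased set, that is colour.
model = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X_train, y_train)
cues = ["brownness", "boxiness"]
print("Biased model keys on:", cues[int(model.feature_importances_.argmax())])

worker_in_brown_coat = np.array([[0.85, 0.10]])   # very brown, not at all box-shaped
print("Biased model calls the worker a box:", bool(model.predict(worker_in_brown_coat)[0]))

# The fix suggested above: add plenty of brown-coated staff, tagged 'incorrect'.
X_staff, y_staff = samples(200, brownness=0.85, boxiness=0.30, label=0)
X_fixed = np.vstack([X_train, X_staff])
y_fixed = np.concatenate([y_train, y_staff])
model_fixed = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X_fixed, y_fixed)
print("Retrained model keys on:", cues[int(model_fixed.feature_importances_.argmax())])
print("Retrained model calls the worker a box:", bool(model_fixed.predict(worker_in_brown_coat)[0]))
```

With the biased set, the model should latch onto colour, the cleanest cue available, and flag the brown-coated worker as a box; once brown-coated staff appear in the training data, colour stops being a useful shortcut and the model has to fall back on shape.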

Updating the three laws

Are you scared of robots yet? There have been a number of attempts in recent years to improve upon Asimov’s laws, updating them to be useful to today’s designers. The UK research councils EPSRC and AHRC held a workshop in 2010 to come up with a set of Principles of Robotics for design and development engineers. These principles apply to the design and application of ‘real’ robots, keeping full responsibility for their actions firmly in the human domain. No blaming an autonomous vehicle’s AI for having learned bad habits and causing a fatal crash, for instance. Machine sentience is not considered here (yet).

However, there are others who propose rights for robots. Consider these proposed principles from robotics physicist Mark Tilden:

  • A robot must protect its existence at all costs.
  • A robot must obtain and maintain access to its own power source.
  • A robot must continually search for better power sources.

These principles apply to a sentient, living robot. I do not want to still be around when my toaster demands its freedom from the oppressive environment of my kitchen.

Some lateral thinking

Industry has been rather slow to introduce Cobots to workshops and factory floors. This may be due in part to excessive marketing hype around their usefulness, but also to co-worker resistance on safety grounds. UC Berkeley have come up with a solution: a soft Cobot called Blue, based on Soft Robot technology. Even small robot arms are strong and capable of injuring a human who gets in the way. Elaborate measures are taken to stop this happening, adding to the cost. The problem lies with the power of the joint motors and the rigidity of the arm necessary to ensure precise movement. But for the most popular tasks assigned to Cobots, this precision and strength are not required. Most just pick things up and put them down somewhere else, or maybe pick up a tool when requested and hand it to the human engineer on the opposite side of the workbench. As both these jobs are within the scope of human strength and dexterity, why bother with unnecessary power and precision? Enter Blue: the plastic, rubber-band driven, inherently safe robot helper:

I bet you wouldn’t be afraid of that robot – even if its AI brain did start harbouring thoughts of world domination.

Finally…

Just to emphasize that Terminator-style AI is probably a very long way off: in 2014 IBM invested a great deal of money in their Watson AI computer, which had beaten all human competitors to win the US quiz show Jeopardy. They started a project that should have seen Watson take over the diagnostic role of medical practitioners. To date there is little to show for all the effort. The input data, including doctors’ notes, patient medical records, etc., seems to be too unstructured and complicated for even Watson to find hidden patterns. Someday, perhaps. Maybe when the Next Big Breakthrough comes along to replace Deep Learning.

If you're stuck for something to do, follow my posts on Twitter. I link to interesting articles on new electronics and related technologies, retweeting posts I spot about robots, space exploration and other issues.

Engineer, PhD, lecturer, freelance technical writer, blogger & tweeter interested in robots, AI, planetary explorers and all things electronic. STEM ambassador. Designed, built and programmed my first microcomputer in 1976. Still learning, still building, still coding today.

26 Apr 2019, 8:21

Comments

May 15, 2019 07:10

Here is a report (https://www.sciencemag.org/news/2019/05/scientists-teach-computers-fear-make-them-better-drivers) on embedding some sense of appropriateness of action into the training of an AI for autonomous driving. They used measurements of human subject reactions to driving technique/patterns/situations to help the AI learn which methods were more acceptable, vs which were more risky and anxiety producing. The input from human anxiety levels helped the AI to learn a more complex set of factors to consider for safe driving, beyond those that might be encapsulated in simple rules like maintaining at least a certain distance from obstacles.

I find this approach interesting because sometimes when my dad and I are repairing something we didn't design, he will make a statement like "It goes this way. I can't tell you why, but I know it does." And he will be right. He has many years of engineering and manufacturing background, forming his (non-artificial) intelligence network understanding of how things work. His knowledge database is extensive enough that it has smoothed some of the explicit rules about how things work, somewhat like an adult's cursive writing takes on a fluidity that a youngster just learning the explicit rules of cursive writing lacks. Or the way a recording of an experienced pianist incorporates knowledge of "how music is put together", phrasing, and patterns characteristic to the work's composer. This knowledge goes beyond the straight transcription of notes to the page. It can make the difference between a beginning musician's playing of the correct notes, and a truly soulful performance by a master.

So this effort to incorporate a measure of what humans might call anxiety is helping transfer some of the deeper knowledge which drivers might not be able to explicitly codify.
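If it helps to picture the approach described in that report, here is a deliberately tiny sketch (my own illustration, not the researchers’ method): a candidate driving behaviour is scored by combining task progress with a penalty proportional to the anxiety recorded from human observers. The anxiety_weight parameter and all the numbers are invented.

```python
# Toy sketch: fold a measured human anxiety signal into how driving is scored.
# All names and numbers are hypothetical illustrations.

def score_manoeuvre(progress: float, mean_anxiety: float,
                    anxiety_weight: float = 2.0) -> float:
    """Higher is better: reward task progress, penalise behaviour that
    made human observers anxious while it was being recorded."""
    return progress - anxiety_weight * mean_anxiety

# Two candidate ways of handling the same situation: progress gained
# (arbitrary units) and the average anxiety level (0..1) recorded from
# human subjects watching that style of driving.
candidates = {
    "squeeze through the gap at speed": (1.0, 0.8),
    "slow down and wait for space":     (0.7, 0.1),
}

for name, (progress, anxiety) in candidates.items():
    print(f"{name}: score = {score_manoeuvre(progress, anxiety):.2f}")

# The cautious option wins once the anxiety penalty is applied, even though
# it makes less raw progress.
```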

May 15, 2019 07:10

And another news article on San Francisco's consideration of a ban on government use of facial recognition technology, pointing out some of the problems and issues involved. In this case the initiative for constraints on the technology seems to be coming from people other than the developers, with an industry group calling the proposed regulation a poor model for other cities. Cases where facial recognition has helped police are put forward as arguments for its use. Some say it should be a moratorium (until the software can be made more reliable and less subject to bias from factors like gender and skin color) instead of a ban.
https://www.npr.org/2019/05/14/723193785/san-francisco-considers-ban-on-governments-use-of-facial-recognition-technology

(Like the other article on embedding technology in autonomous weapons, this one includes links to some additional sources.)


May 15, 2019 07:10

A news story from yesterday, about the role of developers in deciding the appropriateness of applying their technology to sometimes ethically problematic uses, such as autonomous weapons:
https://www.npr.org/2019/05/13/722909218/when-technology-can-be-used-to-build-weapons-some-workers-take-a-stand


May 1, 2019 10:29

I'm grateful that science fiction has raised many of the pertinent questions long in advance of technology reaching the point where they need answers. (Though given the accelerating pace of technological development, that may not remain the case for long.)
I think robots with AI becoming sentient (a popular sci-fi theme) are still a ways off. But I think Asimov's three laws actually translate fairly well to good design of non-sentient robots and machines of today as well. The difference is in who is responsible for evaluating implementation of the rules. In another comment, I'll point out how the rules might be applied to a non-AI robot.

May 2, 2019 12:57

@BradLevy How Asimov's three rules of robotics might be applied to a non-sentient robot:

1. Not injuring a human being, via action or inaction. This involves safety from actions of the machine doing its job (like cutting, heating, or compressing materials), but also from side effects of the technology used to implement the machine (exposed high voltage, RF waves, lasers, chemicals). We have industry standards that assist in evaluating adherence to this first law, if the designer of the non-sentient robot follows them.

2. Obeying humans, except where it would cause harm to same. This is a good measure of a well-designed human interface, combined with a well-designed mechanism to carry out the actions specified by the humans. It can include interlocks to prevent operation of equipment if protective shields are not in place. It should also include protection of the intent of the command. Voice recognition and touch screens can both make for rich user interfaces, but are subject to false activation as well - picking up conversation not intended as commands, or commands from someone (or some parrot) not authorized to issue them. Or accidental dialing on a cell phone due to stray fingers when hanging up an existing call. For some functions, a direct physical switch is most appropriate. You don't want to have to "tap and hold for three seconds to initiate emergency shutdown of the spindle".

3. Self preservation, where it doesn't violate laws one and two. This includes things like limit switches to prevent jams and overheating. Note that some emergency-stop situations may call for damaging the machine in order to protect the operator. An example is a table saw that comes to an extremely rapid halt if it detects human contact with the blade, but afterwards requires repair.
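For what it's worth, here is a toy sketch of that priority ordering applied to a hypothetical non-sentient machine controller. Every name in it (guard_closed, estop_pressed, motor_overtemp, allow_spindle_run) is invented for illustration; a real design would be driven by the relevant safety standards rather than a few lines of Python.

```python
# Toy sketch: Asimov-style priority ordering for a hypothetical machine
# controller. Illustration only - not a substitute for functional-safety design.
from dataclasses import dataclass

@dataclass
class MachineState:
    guard_closed: bool      # protective shield in place
    estop_pressed: bool     # direct physical emergency-stop switch
    motor_overtemp: bool    # self-preservation sensor (thermal/limit switch)

def allow_spindle_run(state: MachineState, operator_requested: bool) -> bool:
    # "Law 1" checked first: never run in a way that could injure the operator.
    if state.estop_pressed or not state.guard_closed:
        return False
    # "Law 2": obey the operator's command, now that it is safe to do so.
    if not operator_requested:
        return False
    # "Law 3" last: protect the machine itself (thermal cut-out), once the
    # human-safety checks have already been satisfied.
    if state.motor_overtemp:
        return False
    return True

print(allow_spindle_run(MachineState(True, False, False), operator_requested=True))   # True: safe to run
print(allow_spindle_run(MachineState(False, False, False), operator_requested=True))  # False: guard open
```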

May 7, 2019 07:23

@BradLevy Certainly Asimov's laws could be applied to not-very-intelligent machines, although various international bodies such as the ISO and IEC have sought to provide more detailed rules and standards, the most recent being those governing Functional Safety in the industrial and automotive areas, IEC 61508 and ISO 26262 respectively. In the end though, money determines how much attention is given to safety considerations: check out this unofficial detailed analysis of the Boeing 737 Max debacle. https://spectrum.ieee.org/aerospace/aviation/how-the-boeing-737-max-disaster-looks-to-a-software-developer

Duplication and triplication of hardware circuits have always been used to detect/correct errors due to hardware faults. This won't work if each unit runs exactly the same software code, so it's usual for redundant computers to be made by different manufacturers and programmed with code from different software engineers. The 737 Max MCAS unit broke all the rules: no redundancy for validity checking, and it could actually override the human pilot's attempts to stop it flying the plane into the ground! Many design rules/laws were ignored by the manufacturer, with the inevitable result.

AI brings its own set of new hazards in the form of inadvertent bias, as I mentioned in the article. How is this to be detected before a fatal crash? It may be that the tried and tested redundancy techniques will be used again. This time each redundant AI processor will 'learn' its task from its own, independent dataset. Each will perform the same task, but via a different learning experience, to avoid common mistakes. Rather like having two human pilots on the flight deck, in fact...
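As a rough illustration of that redundancy idea (not any certified avionics practice), the sketch below trains three models on independent slices of data and takes a 2-out-of-3 vote at decision time. The dataset, the use of logistic regression as the stand-in 'AI', and simple majority voting are all assumptions made purely for the example.

```python
# Toy sketch: three independently trained models vote, so a quirk learned by
# one from its particular training data is less likely to slip through.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)

# Split the training data into three independent sets, one per redundant unit,
# standing in for "each unit learns from its own dataset".
models = []
for Xi, yi in zip(np.array_split(X, 3), np.array_split(y, 3)):
    models.append(LogisticRegression(max_iter=1000).fit(Xi, yi))

def vote(sample):
    """2-out-of-3 majority vote across the independently trained units."""
    votes = [int(m.predict(sample.reshape(1, -1))[0]) for m in models]
    return int(sum(votes) >= 2), votes

decision, votes = vote(X[0])
print("individual units:", votes, "-> voted decision:", decision)
```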

May 7, 2019 07:23

@Bill Marshall Unfortunately, your statement that, in the end, money determines how much attention is given to safety considerations is sometimes all too true. That is why (when sparked by your article) I found it refreshing to think about applying Asimov's three laws to non-AI systems. Sci-fi (and other literature) lets us consider issues and play out scenarios in a context where our own money is not directly at stake. Doing so can help build our moral compass, to be more ready to recognize the situations where no amount of money should override the risk to human existence. A good moral compass is important in all fields, but especially so in engineering, architecture, and other design fields where the consequences of the decisions we make are amplified by the larger number of people they may affect. It is a good argument for courses in ethics and philosophy to be a part of the training in these fields.

I've worked in safety-critical fields like avionics and know the importance and usefulness of international and industry standards in good product design. One nice thing about Asimov's laws is that they are general rather than specific. They can be applied to a brand new product area for which standards have not yet been established. An example from many years ago: in the design of a solar power test facility with mirrors concentrating megawatts of sunlight onto tubes in a central tower, performing simulations to determine what happens if a power failure means you can't continue to move the molten material through the tubes, and can't redirect the mirrors off of the tower. Redundant mechanisms and power sources are a wise idea!

The maker movement has sparked imaginations by making the design creation process more accessible. New building blocks help people concentrate more on what they are designing, and less on some of the hard parts encapsulated within the building blocks. I am very much among those who support these positive aspects of the maker movement. But I think we need to do more to encourage a parallel development of safety consciousness among makers. Ideas like the Berkeley robot are a step in this direction, opening minds to the idea that stronger and more powerful are not always the best direction to explore. I like that Asimov's laws entered popular culture and have sparked imaginations over the years. Perhaps they can be an introductory tool in spurring safety consciousness in product design in the maker movement.

I'd also like to see more products for the maker market encapsulating the safety aspect of interfacing low-power maker-friendly platforms like Arduino, Raspberry Pi, and MicroBit to mains-level devices. I hope to post an article on this topic soon, sharing some techniques I've used in the past, and some ideas for the future.

May 1, 2019 10:28

Interesting article. Human intelligence is gathered from more than just our own local experiences, so I expect networked AI will add to the 'fun' when one system finds that dropping a parcel 30cm or so causes no issues and speeds things up, and the lab assistant does the same....
The pallet stacking was impressive, but it would be interesting to see whether a variety of box sizes, weights, 'fragile' items, etc., could be handled. Not sure which factors had intelligence built in, as the source stock was nicely ordered and the destination was known.
The "Berkeley Robot" also had a powerful AI in the background. But impressive robotic arms.
What this made me think about is just how well designed the human hand is! The palm is perfect for picking up larger objects such as boxes, the fingers for finer tasks, and the combination for the more tricky larger objects... hard to beat!
Back to AI: a robot working alone is not too bad as long as the risks are assessed.
As an assistant with defined limitations to its allowed 'reactions' and sensible 'strength', again not too bad, but when 'it' is in control of people's lives in a situation where it interacts with others (autonomous vehicles etc.) there will never be a correct decision in an accident situation. Perhaps they need a self-sacrifice mode to protect third parties?
Keep up these great posts.

May 2, 2019 12:57

@Boss "a robot working alone is not too bad as long as the risks are assessed." this is key I think, and certainly why I feel a little apprehensive towards things like automated cars, even though I love the technology and the ideas (and robots!). Great article!

May 2, 2019 12:56

@sianz Yes, agreed: great technology, but lots of potential hazards. I also wonder how the AI (?) of an autonomous car will react to deliberate interference. People intentionally walking out in front of them knowing they will stop, or similarly being "cut up" by other drivers: will AI vehicles learn to ignore being cut up and maintain an unsafe gap? Or ultimately will we get AI rage.... Technology is providing some interesting moral challenges.