Who’s afraid of Intelligent Machines?
Bill Marshall
Engineer, PhD, lecturer, freelance technical writer, blogger & tweeter interested in robots, AI, planetary explorers and all things electronic. STEM ambassador. Designed, built and programmed my first microcomputer in 1976. Still learning, still building, still coding today.

Comments

May 29, 2019 07:05

I believe that the next step is analogous to sexual reproduction in biology. If genetic algorithms are employed to specify the network's synaptic weights, number of layers, number of synapses for each neuron, perhaps different types of neurons, etc., its DNA analog, then sexual reproduction of child networks allows changes without losing too many of the parents' abilities. The "guided" part amounts to a breeding program, just as is used with stock animals, selecting the children that are to be the next generation breeders based on their abilities. The environment in which each generation is tested is then purposefully made as hostile as possible to select only those breeders who survive. This breeding can be performed in simulations prior to deploying it in robots to greatly reduce the risk of a robot mistaking a brown-clothed human for a box.
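
A minimal sketch of that breeding loop, assuming each network's "DNA" is just a flat vector of synaptic weights (layer counts, neuron types and so on could be encoded the same way); the fitness function, population size and mutation rate below are illustrative placeholders, not anyone's actual training setup:

```python
# Toy "breeding program" for network genomes: selection, crossover, mutation.
import random

GENOME_LEN = 64        # number of weights in this toy network
POP_SIZE = 50          # breeding population
MUTATION_RATE = 0.02   # per-gene probability of a random change

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def crossover(parent_a, parent_b):
    """Sexual reproduction: the child takes each 'gene' from one parent or the other."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(genome):
    """Small random changes, so children differ without losing most parental traits."""
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def fitness(genome):
    """Placeholder: in practice this would run the network in a deliberately
    hostile simulated environment and score how well it survives."""
    return -sum(abs(g) for g in genome)   # toy objective only

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(100):
    # Selection: keep the best half as the next generation's breeders.
    population.sort(key=fitness, reverse=True)
    breeders = population[:POP_SIZE // 2]
    # Breeding: pairs of survivors produce mutated children.
    children = [mutate(crossover(*random.sample(breeders, 2)))
                for _ in range(POP_SIZE - len(breeders))]
    population = breeders + children
```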


May 15, 2019 07:10

Here is a report (https://www.sciencemag.org/news/2019/05/scientists-teach-computers-fear-make-them-better-drivers) on embedding some sense of appropriateness of action into the training of an AI for autonomous driving. They used measurements of human subject reactions to driving technique/patterns/situations to help the AI learn which methods were more acceptable, vs which were more risky and anxiety producing. The input from human anxiety levels helped the AI to learn a more complex set of factors to consider for safe driving, beyond those that might be encapsulated in simple rules like maintaining at least a certain distance from obstacles.

I find this approach interesting because sometimes when my dad and I are repairing something we didn't design, he will make a statement like "It goes this way. I can't tell you why, but I know it does." And he will be right. He has many years of engineering and manufacturing background, forming his (non-artificial) intelligence network understanding of how things work. His knowledge database is extensive enough that it has smoothed some of the explicit rules about how things work, somewhat like an adult's cursive writing takes on a fluidity that a youngster just learning the explicit rules of cursive writing lacks. Or the way a recording of an experienced pianist incorporates knowledge of "how music is put together", phrasing, and patterns characteristic to the work's composer. This knowledge goes beyond the straight transcription of notes to the page. It can make the difference between a beginning musician's playing of the correct notes, and a truly soulful performance by a master.

So this effort to incorporate a measure of what humans might call anxiety is helping transfer some of the deeper knowledge which drivers might not be able to explicitly codify.
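
For illustration only, here is one way such an anxiety signal might be folded into a training reward; the function names, the weighting and the idea of a separate anxiety-prediction model are assumptions on my part, not details taken from the linked report:

```python
# Illustrative reward shaping: a measured/predicted human anxiety signal is
# combined with the usual task reward used to train a driving policy.

def shaped_reward(progress, collision, predicted_anxiety, anxiety_weight=0.5):
    """Combine task reward with a penalty for states humans found stressful.

    progress          -- distance gained this step (task reward)
    collision         -- True if the vehicle hit something (hard penalty)
    predicted_anxiety -- 0..1 output of a model fitted to human physiological data
    """
    reward = progress
    if collision:
        reward -= 100.0
    # The anxiety term penalises near-misses, hard braking, small gaps, etc. -
    # factors that are hard to capture with simple distance-based rules.
    reward -= anxiety_weight * predicted_anxiety
    return reward

# Example: good progress, no collision, but a stressful close pass costs reward.
print(shaped_reward(progress=1.0, collision=False, predicted_anxiety=0.8))
```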

May 15, 2019 07:10

And another news article on San Francisco's consideration of a ban on government use of facial recognition technology, pointing out some of the problems and issues involved. In this case the initiative for constraints on the technology seems to be coming from people other than the developers, with an industry group calling the proposed regulation a poor model for other cities. Cases where facial recognition has helped police are put forward as arguments for its use. Some say it should be a moratorium (until the software can be made more reliable and less subject to bias from factors like gender and skin color) instead of a ban.
https://www.npr.org/2019/05/14/723193785/san-francisco-considers-ban-on-governments-use-of-facial-recognition-technology

(Like the other article on embedding technology in autonomous weapons, this one includes links to some additional sources.)

May 15, 2019 07:10

A news story from yesterday, about the role of developers in deciding the appropriateness of applying their technology to sometimes ethically problematic uses, such as autonomous weapons:
https://www.npr.org/2019/05/13/722909218/when-technology-can-be-used-to-build-weapons-some-workers-take-a-stand

May 1, 2019 10:29

I'm grateful that science fiction has raised many of the pertinent questions long in advance of technology reaching the point where they need answers. (Though given the accelerating pace of technological development, that may not remain the case for long.)
I think robots with AI becoming sentient (a popular sci-fi theme) are still a ways off. But I think Asimov's three laws actually translate fairly well to good design of non-sentient robots and machines of today as well. The difference is in who is responsible for evaluating implementation of the rules. In another comment, I'll point out how the rules might be applied to a non-AI robot.

May 2, 2019 12:57

@BradLevy How Asimov's three laws of robotics might be applied to a non-sentient robot:

1. Not injuring a human being, via action or inaction. This involves safety from the actions of the machine doing its job (like cutting, heating, or compressing materials), but also from side effects of the technology used to implement the machine (exposed high voltage, RF radiation, lasers, chemicals). We have industry standards that assist in evaluating adherence to this first law, if the designer of the non-sentient robot follows them.

2. Obeying humans, except where it would cause harm to them. This is a good measure of a well-designed human interface, combined with a well-designed mechanism to carry out the actions specified by the humans. It can include interlocks to prevent operation of equipment if protective shields are not in place. It should also include protection of the intent of the command. Voice recognition and touch screens can both make for rich user interfaces, but are subject to false activation as well - picking up conversation not intended as commands, or commands from someone (or some parrot) not authorized to issue them. Or accidental dialing on a cell phone due to stray fingers when hanging up an existing call. For some functions, a direct physical switch is most appropriate: you don't want to have to "tap and hold for three seconds to initiate emergency shutdown of the spindle". (A sketch of this kind of interlock logic follows below.)

3. Self-preservation, where it doesn't violate laws one and two. This includes things like limit switches to prevent jams and overheating. Note that some emergency-stop situations may call for damaging the machine in order to protect the operator. An example is a table saw that comes to an extremely rapid halt if it detects human contact with the blade, but afterwards requires repair.
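As a rough illustration of point 2 above, here is a minimal sketch of interlock and emergency-stop logic for a hypothetical spindle controller; the input names and the I/O layer are made up for the example and would be hard-wired switches in a real machine:

```python
# Commands are obeyed only when interlocks are satisfied, and a direct
# physical e-stop overrides everything else (law 2 yields to law 1).

class SpindleController:
    def __init__(self):
        self.running = False

    def read_inputs(self):
        # In a real machine these would come from hard-wired switches,
        # not from a touch screen or voice interface.
        return {"estop_pressed": False, "guard_closed": True}

    def request_start(self):
        inputs = self.read_inputs()
        if inputs["estop_pressed"]:
            self.stop()                            # never run while e-stop is latched
        elif not inputs["guard_closed"]:
            print("Start refused: guard open")     # interlock blocks the command
        else:
            self.running = True
            print("Spindle running")

    def stop(self):
        self.running = False
        print("Spindle stopped")

controller = SpindleController()
controller.request_start()
```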

May 7, 2019 07:23

@BradLevy Certainly Asimov's laws could be applied to not-very-intelligent machines, although various international bodies such as the ISO and IEC have sought to provide more detailed rules and standards. The most recent are those governing Functional Safety in the industrial and automotive areas, IEC 61508 and ISO 26262 respectively. In the end though, money determines how much attention is given to safety considerations: check out this unofficial detailed analysis of the Boeing 737 Max debacle. https://spectrum.ieee.org/aerospace/aviation/how-the-boeing-737-max-disaster-looks-to-a-software-developer

Duplication and triplication of hardware circuits have always been used to detect/correct errors due to hardware faults. This won't work if each unit runs exactly the same software code, so it's usual for redundant computers to be made by different manufacturers and programmed with code from different software engineers. The 737 Max MCAS unit broke all the rules: no redundancy for validity checking, and it could actually override the human pilot's attempts to stop it flying the plane into the ground! Many design rules/laws were ignored by the manufacturer, with the inevitable result.

AI brings its own set of new hazards in the form of inadvertent bias, as I mentioned in the article. How is this to be detected before a fatal crash? It may be that the tried and tested redundancy techniques will be used again. This time each redundant AI processor will 'learn' its task from its own, independent dataset. Each will perform the same task, but via a different learning experience, to avoid common mistakes. Rather like having two human pilots on the flight deck, in fact...
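As a rough sketch of that idea - and only a sketch, the 'models' and data below are stand-ins, not any real avionics or automotive implementation - several channels trained on independent datasets could vote on each decision, with disagreement flagged for a human or a safe fallback:

```python
# Redundant AI channels, each trained on its own dataset, vote on every decision
# so that a mistake learned by one is unlikely to be shared by the others.
from collections import Counter

def train_model(dataset):
    """Placeholder for training one redundant channel on its own data."""
    majority_label = Counter(label for _, label in dataset).most_common(1)[0][0]
    return lambda sample: majority_label   # toy 'model' for illustration only

def redundant_decision(models, sample):
    """Each independently trained channel votes; disagreement can be flagged
    for a human or a safe fallback - like a second pilot speaking up."""
    votes = Counter(model(sample) for model in models)
    label, count = votes.most_common(1)[0]
    agreed = count > len(models) // 2
    return label, agreed

datasets = [[("x", "box"), ("y", "box")],
            [("x", "box"), ("z", "human")],
            [("y", "box"), ("z", "box")]]
models = [train_model(d) for d in datasets]
print(redundant_decision(models, "some sensor frame"))
```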

May 7, 2019 07:23

@Bill Marshall Unfortunately, your statement that, in the end, money determines how much attention is given to safety considerations is sometimes all too true. That is why (when sparked by your article) I found it refreshing to think about applying Asimov's three laws to non-AI systems. Sci-fi (and other literature) lets us consider issues and play out scenarios in a context where our own money is not directly at stake. Doing so can help build our moral compass, making us more ready to recognize the situations where no amount of money should override the risk to human existence. A good moral compass is important in all fields, but especially so in engineering, architecture, and other design fields where the consequences of the decisions we make are amplified by the larger number of people they may affect. It is a good argument for courses in ethics and philosophy to be a part of the training in these fields.

I've worked in safety-critical fields like avionics and know the importance and usefulness of international and industry standards in good product design. One nice thing about Asimov's laws is that they are general rather than specific. They can be applied to a brand-new product area for which standards have not yet been established. An example from many years ago: in the design of a solar power test facility with mirrors concentrating megawatts of sunlight onto tubes in a central tower, performing simulations to determine what happens if a power failure means you can't continue to move the molten material through the tubes, and can't redirect the mirrors off of the tower. Redundant mechanisms and power sources are a wise idea!

The maker movement has sparked imaginations by making the design creation process more accessible. New building blocks help people concentrate more on what they are designing, and less on some of the hard parts encapsulated within the building blocks. I am very much among those who support these positive aspects of the maker movement. But I think we need to do more to encourage a parallel development of safety consciousness among makers. Ideas like the Berkeley robot are a step in this direction, opening minds to the idea that stronger and more powerful are not always the best direction to explore. I like that Asimov's laws entered popular culture and have sparked imaginations over the years. Perhaps they can be an introductory tool in spurring safety consciousness in product design in the maker movement. I'd also like to see more products for the maker market encapsulating the safety aspects of interfacing low-power maker-friendly platforms like Arduino, Raspberry Pi, and MicroBit to mains-level devices. I hope to post an article on this topic soon, sharing some techniques I've used in the past, and some ideas for the future.

May 1, 2019 10:28

Interesting article. Human intelligence is gathered from more than just our own local experiences; I expect networked AI will add to the 'fun' when one system finds that dropping a parcel 30cm or so causes no issues and speeds things up, and the lab assistant does the same....
The pallet stacking was impressive, but it would be interesting to see whether a variety of box sizes, weights, 'fragile' items, etc., could be handled. I'm not sure which factors had intelligence built in, as the source stock was nicely ordered and the destination was known.
The "Berkeley Robot" also had a powerful AI in the background. But impressive robotic arms.
What this made me think about is just how well designed the human hands are! The palm is perfect for picking up larger objects such as boxes, the fingers for finer tasks, and the combination for the more tricky larger objects... hard to beat!
Back to AI: a robot working alone is not too bad as long as the risks are assessed.
As an assistant with defined limitations to its allowed 'reactions' and sensible 'strength', again not too bad, but when 'it' is in control of people's lives in a situation where it interacts with others (autonomous vehicles etc.) there will never be a correct decision in an accident situation. Perhaps they need a self-sacrifice mode to protect third parties?
Keep up these great posts.

May 2, 2019 12:57

@Boss "A robot working alone is not too bad as long as the risks are assessed" - this is key, I think, and certainly why I feel a little apprehensive towards things like automated cars, even though I love the technology and the ideas (and robots!). Great article!

May 2, 2019 12:56

@sianz Yes, agreed - great technology, but lots of potential hazards. I also wonder how the AI (?) of an autonomous car will respond to deliberate interference: people intentionally walking out in front of it knowing it will stop, or being "cut up" by other drivers. Will AI vehicles learn to ignore being cut up and maintain an unsafe gap? Or will we ultimately get AI rage... Technology is providing some interesting moral challenges.
