Soft Humans, Hard Robots
There’s no getting away from it: robots are taking over the world. People have been saying this for decades, but somehow the intelligence of Robby the Robot in the film Forbidden Planet (1956) still eludes us. In an article for the RS magazine eTech following the Fukushima disaster in 2011, I attempted to match what we thought rescue robots could do against the reality. The result was distinctly depressing. Over two years later, have things moved on?
For a few months now I’ve been working on a project to write a version of the FORTH programming language for the Microchip PIC24/dsPIC 16-bit microcontroller ranges, intended for real-time robot control. Back in 1983 I’d written one for the Z80 microprocessor, when FORTH was very popular, and had used a book by Alan Winfield, then a lecturer at Hull University, as an essential reference. The book was dusted off and proved useful again, 30 years later! Now at the Bristol Robotics Laboratory, Professor Winfield is a world expert in robotics and writes an excellent blog. His latest post has the intriguing title ‘Soft Robotics in Space’. It seems there is a whole new field of robot design using literally soft materials rather than traditional ‘hard’ plastics and metal. There’s a whiff of ‘Health & Safety’ about it: surely real robots need to be tough and rugged, capable of climbing over rubble, digging for disaster survivors and shutting down overheating nuclear reactors despite radiation levels that would be lethal to humans? Well, yes, to some extent, but what happens when the robot encounters a survivor? The robot may be hard, but the human is soft and easily damaged by something with the weight of a tank and the grip of a steel-jawed crocodile. If this insensitive robotic would-be rescuer is not too clever either (which will be the case), then the outcome just doesn’t bear thinking about. Part of the answer, to the grip problem at least, may lie with the Jaeger-Lipson coffee-balloon gripper discussed in Alan’s blog – a great piece of lateral thinking.
The ‘hard men’ of the robotics world get to slug it out this week in a preliminary stage of the DARPA Robotics Challenge. Just take a look at the line-up and remember that these machines are intended to help in disaster zones. I think it’s fair to say that any injured survivor would have mixed feelings on catching sight of Atlas or Valkyrie heading purposefully towards them. Inevitably, the cynical view will surface that, being sponsored by DARPA, these are really robot warriors, not rescuers. It’s a depressing fact that most rapid technological advances during the last century were driven by political and military needs. I don’t mean incremental change: I mean paradigm shifts. Think rockets, digital computing, the atomic bomb and, more recently, the Internet (the World Wide Web protocols may have their origins at CERN, but the original US network was called ARPAnet – spot the clue in the name). Each took massive investment on a scale that only governments can afford. Is this why robot technology has advanced only in the fictional world? Perhaps – until now, that is.
The original DARPA Grand Challenge took autonomous vehicle development to the point where private capital took over and now, largely thanks to Google, self-driving cars are a reality. The current Challenge appears to have galvanised Amazon and Google into spending large sums on robotics. Google has gone on a spending spree, culminating in the purchase of Boston Dynamics, makers of BigDog and the humanoid robot Atlas. So the mechanical hardware is shaping up nicely and the low-level control algorithms work (just watch the video of BigDog recovering from a sideways shove while on ice or loose ground). What these robots lack, whatever their intended application, is a human-like brain to assess a developing situation and act appropriately. Given that the major Western military powers are now committed to the widespread use of drones on the ‘battlefield’, are we now at the tipping point for serious Artificial Intelligence research and development? There could be a repetition of the situation in the 1950s, when the public was ‘sold’ on the idea of free energy from nuclear power stations which were in fact initially constructed to produce bomb-grade plutonium. This time around, AI-controlled robots could be presented as entirely benign rescuers and harmless explorers of hazardous environments when their real purpose is military. Some might argue that if modern wars are being fought against ‘terrorists’ who can harness the power of the extreme believer, then why shouldn’t conventional armies play their trump card, extreme technology, in response? Apart from the ethical issues, there may be some potentially terrifying practical problems: if humans can be ‘turned’, that is, persuaded to switch their allegiance, then so can the AI-equipped not-so-drone. Only this month, a big item in the news concerned the publication of technical details on how to grab control of an AR.Drone helicopter from its rightful owner.
True, this is hardly a war machine, but if it can be done that easily by a lone programmer…
At this point we run into the perennial argument of what constitutes a robot. Interestingly, both the layman and the expert visualise an intelligent machine that can operate autonomously. The ‘media’ usually, and incorrectly, include anything that can be operated by remote control. Currently, most airborne drones or UAVs are remotely piloted by human operators. If and when they become true robots, it may open up a whole new field of ‘hacking’. On the one hand, the new intelligence of the drone should improve security of control, because it will be able to spot crude hijacking attempts. On the other hand, and this is where we may be entering the world of science fiction, it may be possible to disrupt its ‘thinking’ process by suggestion, in the same way a human can be brainwashed. A recent book on this subject gives some disturbing insights. We are safe for the time being, though, as scientists have yet to crack how a brain ‘works’, let alone be in a position to design an artificial version. That may change in the near future thanks to the intervention of Google. Conventional computer design continues apace though, and we now have the Parallella, with 100 GFLOPS of performance from a 64-core array consuming 5 W of power on a palm-sized board.
Whenever this sort of leap forward in performance occurs, resulting in more talk of artificial brains, I think of a garden spider: the type that has a pea-sized body and long legs as thin as single human hairs. Its tiny brain enables it to control eight legs which appear to have no joints and can move in any direction. It can feed (refuel) itself by using sensors to locate food, and use other sensors to identify danger and hide. It can replicate itself. Try making an artificial version of that with current technology. We are still waiting for the Great Leap Forward in AI but, who knows, it might be here soon courtesy of Google Labs. Let’s just hope we get all the bugs out by developing intelligent ‘soft’ robots before letting it loose on the ‘hard’ variety. I like the idea of a 3D printer creating a ‘brain’ by linking artificial neurons together with extruded synaptic material, able to modify those links so that the structure adapts and learns. Well, it’s amazing just how many daft ideas a natural brain can have. Hang on, how do you create an artificial idea?
If you’ve not seen it, I can thoroughly recommend John Carpenter’s 1974 low-budget sci-fi film Dark Star. The special effects are cheap and dated, but many of the concepts are not: the end sequence, in which the spaceship’s crew try to reason with an accidentally armed, AI-equipped, planet-destroying bomb, suddenly doesn’t look so fantastic. Hilarious at the time, but perhaps not any more.
If you’re stuck for something to do, follow my posts on Twitter. I link to interesting new electronics components and development kits (some of them available from my employer!) and retweet posts I spot about robots, space exploration and other issues.