
Your Car Will Soon Be Driving You to Work

Last week a major car company announced it was spending $1 billion setting up an artificial intelligence and robotics R&D lab in California. I think it’s now fairly certain that production driverless cars will soon be appearing on our roads.

Toyota is the latest company to throw huge amounts of money into creating intelligent robots, in particular the driverless car. Fear of the unknown (and product liability law) has limited most car makers’ efforts to developing driver assistance electronics. It took a non-car maker, Google, to push ahead with the research and bring the dream close to reality. It was only 11 years ago that DARPA in the US really started the ball rolling with the Grand Challenge in 2004. That year nobody even got close to achieving the goal of an autonomous vehicle driving 150 miles over desert terrain. By 2007 teams had cars successfully moving around an ‘urban’ landscape. Then Google got in on the act with a seemingly money-no-object project to create a practical robotic car that would be safe to let loose in the jungle that is the modern motoring environment. Progress has been pretty rapid, and Google has been running its Lexus-based prototypes in real traffic for some years now (albeit with a human driver ready to take over in the event of failure). They are now at the stage of testing prototypes of an actual production car.

Image credit: Wikipedia / Michael Shick

The scale of Google’s achievements has not gone unnoticed by car manufacturers, who are pouring money into their own designs, and governments seem keen to relax regulations and allow these vehicles onto real roads. How is this happening in spite of the endless discussions in the press about safety and the morality of an ‘artificial intelligence’ making life-and-death decisions?

Reversing the Logic

The problem is one of perception: suggest to any motorist that a robot could match or even exceed their driving skills and you will probably get a lot of cobblers about the importance of ‘instinct’ and ‘intuition’, which of course they alone possess, unlike all the other idiots on the road. The idea that driving a car requires hardly any prior training under real-world conditions, and that once you’ve passed a rather feeble test you can go out and learn the rest the hard way, is unique in the world of humans controlling dangerous machinery. The reality is that the average human brain is ill-equipped to control a modern car safely, lacking the knowledge and ‘processing power’ to make correct decisions fast enough in unexpected situations. Now consider the necessary sensory inputs: for the human driver the main sensory organ is the eye. Actually the human eye is a pretty rubbish sensor, and the brain has to work very hard to turn the rather limited real-time information available into a useful mental ‘image’ of the scene beyond the windscreen – and only a tiny part of that is actually in focus. The only reason people are allowed to drive cars at all is that when cars were invented the conditions were far less challenging than they are now. The cars have evolved; we haven’t. So if the motor car were invented now, who would think of letting a human driver near it? Health and safety regulations would rightly kill that idea at once. Hence, reverse the logic: forget trying to create a robot that drives like a human; that way lies carnage. Instead, create a machine to do the job properly, without all the human ‘features’ which are mostly negative: impatience, ignorance, not forgetting arrogant stupidity. Then add a 360° vision system and other road-condition sensors and suddenly you have it – a genuinely safe driver.

The Red-Herring of Artificial Intelligence

It has been generally assumed that a robot tasked with driving a car will require sophisticated Artificial Intelligence. However, once it is accepted that the human mind should not be used as a basis for the design, for the reasons described above, then it can be seen that current technology should be up to the job. Indeed, that is what Google and others have proved to be the case. There is a lot of discussion around the ‘morality’ of decisions taken by the robot driver in no-win situations: given the choice of swerving and hitting a pedestrian or crashing into another vehicle, for example. This sort of ethical argument always assumes that a human driver has some special power to make the right decision (whatever that is). The most likely outcome in this situation is that no decision will be made, because the human brain is often far too slow to register the danger when travelling at even moderate speeds. Even if there is time to consider options, it is likely that instinctive self-preservation will dominate. The machine should be able to determine the least-worst outcome. At this point, of course, more complex ethical conundrums can be introduced: what if the collision choice is between a bus-load of children and a coach full of pensioners? This sort of philosophical argument may be of interest to scholars, but in the meantime people are being crippled and killed as part of the daily routine on the roads. My own view, as an engineer, is that development of technology likely to reduce the dreadful cost in deaths and injuries incurred daily just to maintain our ‘lifestyle’ should be pursued with vigour.
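
As a purely illustrative sketch (not how any real autodriver works), the ‘least-worst outcome’ idea boils down to minimising an estimated harm cost over the manoeuvres that are still physically possible – something a machine can do in microseconds, long before a human has even registered the danger. All the names and numbers here are invented:

```python
# Illustrative sketch only: choose the "least-worst" manoeuvre by
# minimising an estimated harm cost. All values are made up.

def least_worst_manoeuvre(options):
    """options: dict mapping manoeuvre name -> estimated harm cost (0 = harmless)."""
    return min(options, key=options.get)

# Hypothetical no-win situation, evaluated far faster than human reaction time.
options = {
    "emergency_brake_in_lane": 0.40,   # probable rear-end shunt
    "swerve_left_towards_kerb": 0.25,  # mounts kerb, no pedestrians detected
    "swerve_right_into_traffic": 0.90, # head-on collision likely
}
print(least_worst_manoeuvre(options))  # -> swerve_left_towards_kerb
```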

The Near Future?

A timetable for the introduction of driverless cars could look like this, assuming that spurious moral/ethical arguments and tabloid newspaper scare stories don’t prevail.

5 years from now: Optional autodrive will be available on top-end (expensive) cars. Autodriving will be legal, but there will always have to be a qualified driver in the driving seat able to take over manual control. Drink-driving legislation will still apply, even in autodrive mode.

10 years from now: Autodrive is increasingly standard on medium price cars. Insurance premiums for these cars will fall (alright, that should be ‘insurance premiums will rise steeply for all other vehicles’).

20 years from now: Autodrive is a legal requirement for all new cars; an emergency safe-stop button is the only manual control. No driving qualifications are required. Manual driving will be optional, but a licence will have to be obtained, involving driver training in a simulator with yearly updates.

This progression to full automation of all traffic flow actually suggests a reduction in the sophistication of the autodriver software over time: the early models will have to deal with the unpredictable actions of the remaining human drivers on the road. When nearly all vehicles are driverless, they will be communicating with each other, and the pages of code for dealing with erratic human behaviour can be drastically reduced.
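
If that prediction holds, much of the defensive guesswork about what the car in front might do gets replaced by explicit messages. Here is a minimal sketch of what a vehicle-to-vehicle ‘intent’ broadcast could contain – the field names are illustrative assumptions, not any real V2V standard:

```python
# Minimal sketch of a vehicle-to-vehicle "intent" message.
# Field names are illustrative assumptions, not a real V2V standard.

from dataclasses import dataclass
import json

@dataclass
class IntentMessage:
    vehicle_id: str
    position: tuple            # (latitude, longitude)
    speed_mps: float
    heading_deg: float
    intent: str                # e.g. "lane_change_left", "braking", "turning_right"

    def to_json(self) -> str:
        return json.dumps(self.__dict__)

msg = IntentMessage("EV-1234", (52.205, 0.119), 13.4, 90.0, "lane_change_left")
print(msg.to_json())
```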

Where are we now?

Most of the hardware and software is already available, with much of it in action on the latest cars. All the necessary actuators for steering, brakes and throttle became electric years ago, initially to provide mechanical assistance and now able to work without human input. Think of Electronic Stability Control (often still called Traction Control, although it’s now much cleverer than that), hands-off car parking, station-keeping (a modern version of cruise control that maintains the distance between your car and the car in front), lane assist that stops the car wandering sideways, and so on. Then there is SatNav or GPS, an obvious essential component in any automated transport system. Finally, everything can be networked together with the automotive CAN bus, which appeared as long ago as 1986. All these gadgets are known as Advanced Driver Assistance Systems, or ADAS. Modern high-end cars are full of them but, as the name suggests, they only assist the driver in certain specific tasks.
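
To give a flavour of how simple the core of one of these assistance functions can be, here is a toy sketch of station-keeping as a proportional controller on the gap to the car in front. The gains and limits are invented, and a real system adds sensor filtering, proper PID control and functional-safety checks, none of which is shown here:

```python
# Toy sketch of "station-keeping" (adaptive cruise control):
# adjust our speed in proportion to the error between the measured
# gap and the desired gap. Gains and limits are illustrative only.

DESIRED_GAP_M = 40.0     # target distance to the car in front
GAIN = 0.5               # speed change (m/s) per metre of gap error
MAX_SPEED_MPS = 31.0     # roughly a 70 mph legal ceiling

def station_keeping_speed(current_speed_mps, measured_gap_m):
    """Return a new speed demand based on the gap error."""
    error = measured_gap_m - DESIRED_GAP_M
    demand = current_speed_mps + GAIN * error
    return max(0.0, min(MAX_SPEED_MPS, demand))

print(station_keeping_speed(28.0, 30.0))  # gap too small -> slow down (23.0)
print(station_keeping_speed(28.0, 50.0))  # gap too large -> speed up, clipped to 31.0
```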

What components have yet to be perfected?

I had a ride in a top-of-the-range SUV the other day; it was packed with ADAS, and it was obvious that ‘only’ two more components needed to be added to make the car autonomous: a vision/radar system to detect and map objects around the car, and of course a central ‘brain’. The Google car uses laser radar (lidar) in the form of the rotating scanner in that ‘bubble’ seen on the roof. Its major problem is that it gets blinded by rain and fog, and I’m not convinced that many beams flashing around on a crowded road will work either. An entirely passive system based on digital cameras placed around the vehicle, giving the autodriver full 360° vision, is the real answer. The fusion of data from four or more vision sensors and the interpretation of the resulting image in real time is a tall order, though. Anticipating the demand for such systems in the automotive market, chip manufacturers are already delivering the goods: the Renesas R-Car H2 ADAS can process inputs from four cameras and is available now from RS Components. If you feel ten cameras would be more appropriate, then there is the NVIDIA DRIVE PX board featuring a staggering two teraflops of processing power.
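
As a rough illustration of what that fusion involves (and nothing to do with the actual Renesas or NVIDIA software), detections from four cameras, each covering a 90° sector, have to be merged into a single object list in vehicle-centred coordinates. The camera names, offsets and detection format below are all assumptions:

```python
# Rough sketch of fusing detections from four cameras (front, right, rear, left),
# each covering a 90-degree sector, into one vehicle-centred 360-degree object list.
# Camera names, mounting offsets and the detection format are illustrative assumptions.

CAMERA_BEARING_OFFSET = {"front": 0.0, "right": 90.0, "rear": 180.0, "left": 270.0}

def fuse_detections(per_camera):
    """per_camera: dict camera_name -> list of (bearing_in_camera_deg, range_m, label)."""
    fused = []
    for camera, detections in per_camera.items():
        offset = CAMERA_BEARING_OFFSET[camera]
        for bearing, range_m, label in detections:
            fused.append(((offset + bearing) % 360.0, range_m, label))
    # Sort by bearing so the planner sees one continuous 360-degree sweep.
    return sorted(fused)

detections = {
    "front": [(-5.0, 22.0, "car")],
    "left":  [(10.0, 3.5, "cyclist")],
}
print(fuse_detections(detections))
# [(280.0, 3.5, 'cyclist'), (355.0, 22.0, 'car')]
```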

An interesting item on the list of things still to be sorted out is how to recognise people with authority, police officers for example, directing traffic with hand signals!

Safety

It’s easy to get carried away with the possibilities of all this technology and forget about the question: what happens if it develops a fault, transient or permanent, while on the road? Functional Safety has been the subject of legislation for some years now, and any designer of automotive electronics needs to be familiar with the requirements laid down in the international standard IEC 61508 and the automotive-specific ISO 26262. Back in 2011 Texas Instruments launched the first of their Hercules range of microcontrollers, designed to meet these Functional Safety standards. I published a DesignSpark blog featuring the Hercules on the theme of trusting your life to a computer, based on what was obviously the start of a trend.
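
One central idea behind those standards, and behind lockstep parts like the Hercules, is redundancy with comparison: compute every safety-relevant result through independent channels and refuse to act if they disagree. A very loose sketch of the pattern in software (real lockstep devices do this in hardware, cycle by cycle, and the brake maths here is invented):

```python
# Very loose sketch of the dual-channel comparison pattern behind
# functional-safety microcontrollers. Illustration only: real lockstep
# parts compare in hardware, every clock cycle.

def channel_a_brake_demand(gap_m):
    return max(0.0, (40.0 - gap_m) / 40.0)   # crude: brake harder as the gap shrinks

def channel_b_brake_demand(gap_m):
    return max(0.0, 1.0 - gap_m / 40.0)      # independent implementation, same maths

def safe_brake_demand(gap_m, tolerance=1e-6):
    a = channel_a_brake_demand(gap_m)
    b = channel_b_brake_demand(gap_m)
    if abs(a - b) > tolerance:
        raise RuntimeError("Channel mismatch: enter safe state (controlled stop)")
    return a

print(safe_brake_demand(12.0))  # both channels agree -> 0.7
```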

Ford is currently allowing its autonomous test vehicle to drive itself around a fake ‘town’ constructed for the purpose. The area under investigation is how to respond if the autodriver is overwhelmed by a complex traffic situation and, well, panics. It’s a good job that never happens with a human driver.
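
One plausible engineering answer (not necessarily what Ford is doing) is a minimum-risk manoeuvre: if the planner cannot deliver a confident decision within its deadline, indicate, pull over and stop rather than press on. A hand-wavy sketch of that fallback logic, with invented thresholds and names:

```python
# Hand-wavy sketch of a "minimum-risk manoeuvre" fallback: if the planner
# cannot deliver a confident plan within its deadline, bring the vehicle
# to a controlled stop instead of pressing on. Thresholds are invented.

PLAN_DEADLINE_S = 0.1        # planner must answer within 100 ms
MIN_CONFIDENCE = 0.8         # below this, don't trust the plan

def choose_action(plan, elapsed_s):
    """plan: dict with 'action' and 'confidence', or None if the planner produced nothing."""
    if plan is None or elapsed_s > PLAN_DEADLINE_S or plan["confidence"] < MIN_CONFIDENCE:
        return "minimum_risk_stop"      # indicate, pull over, brake gently
    return plan["action"]

print(choose_action({"action": "continue_in_lane", "confidence": 0.95}, 0.04))  # continue_in_lane
print(choose_action({"action": "overtake", "confidence": 0.55}, 0.04))          # minimum_risk_stop
```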

Limited machine learning will undoubtedly be incorporated in the autodriver firmware, but full Artificial Intelligence? Best beware the Law of Unintended Consequences: AI could learn bad habits. After the autodriver has seen the umpteenth BMW overtake it in dense fog on a blind bend, it might just think it’s a good idea. On the other hand, it could read the number plate, determine that it has a human driver and report the incident to the police.

Finally, for safety’s sake, it would probably be a good idea to maintain an ‘air-gap’ firewall between the autodriver and the Internet – in other words, drive off-line. Exciting times ahead.

If you're stuck for something to do, follow my posts on Twitter. I link to interesting new electronics components and development kits (some of them available from my employer!) and retweet posts I spot about robots, space exploration and other issues.

Engineer, PhD, lecturer, freelance technical writer, blogger & tweeter interested in robots, AI, planetary explorers and all things electronic. STEM ambassador. Designed, built and programmed my first microcomputer in 1976. Still learning, still building, still coding today.