
Driverless Cars: Are We Nearly There Yet?

Concept driverless car with built-in AR. Credit: Panasonic

“Are we nearly there yet?” Now there’s a familiar cry from the back seat of many a family car heading off on a long journey. Senior managers at car manufacturers around the world may be asking the same thing of the engineers attempting to design the first driverless car. In November 2015 I suggested in a DesignSpark article that ‘Your Car will soon be Driving You to Work’. After nearly three years of frantic activity by engineers and marketing people, are autonomous cars any nearer to becoming a reality? As with most things in life, the answer is possibly, maybe even probably, depending on just how strictly you define the word ‘autonomous’. Despite the achievements of competitors in the DARPA Grand Challenges of the last decade, nothing resembling a commercial driverless car appeared until a few years ago. The following video summarises the advances in technology made by the end of the last Challenge:

By 2015 Google (now Waymo) had prototype ‘driverless’ cars clocking up mileage on US roads and gathering valuable data. Mainstream car manufacturers began pouring money into research, many promising self-driving cars on the road within only a few years. Car-maker Tesla had gone a step further, offering buyers of its electric cars a somewhat misleadingly named option called Autopilot. In May 2016 a Tesla Model S crashed into a truck crossing its path and the car driver was killed (NHTSA Accident Report 812481). This incident, together with a previous little-publicised crash in China, suddenly jolted everyone back to reality.

Degrees of Autonomy

It became apparent that owners had a rather grander interpretation of the word ‘Autopilot’ than Tesla intended. Many appeared to think it meant they had bought a genuinely driverless car, posting pictures of themselves on social media doing crazy things like reading a book while travelling at speed. In fact, Autopilot is just an assistant to a human driver, requiring frequent ‘intervention’ from the latter depending on road and traffic conditions. The Society of Automotive Engineers (SAE) had come up with a standard (J3016) in 2014 based upon six levels of increasing automation, from Level 0, fully manual operation, to Level 5, fully automated with no human intervention required. By this standard, Autopilot actually manages Level 2, somewhat less than many customers’ expectation of Level 5. Although the human driver was at fault for not being alert at the wheel, the fact is that Autopilot ‘saw’ the truck but misinterpreted the image, ‘thinking’ it was an overhead sign.
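For quick reference, the J3016 ladder is compact enough to paraphrase in a few lines of Python. The one-line descriptions below are my own summaries, not the standard's official wording:

```python
# Paraphrased summary of the SAE J3016 driving-automation levels.
SAE_LEVELS = {
    0: "No automation: the human does all the driving",
    1: "Driver assistance: steering OR speed assisted, driver does the rest",
    2: "Partial automation: steering AND speed assisted, driver must monitor",
    3: "Conditional automation: system drives, driver must take over on request",
    4: "High automation: system drives itself, but only within a limited domain",
    5: "Full automation: system drives anywhere, no human intervention required",
}

def describe(level: int) -> str:
    """Return a one-line description of an SAE automation level."""
    return f"Level {level}: {SAE_LEVELS[level]}"

print(describe(2))   # where Autopilot actually sits
print(describe(5))   # what many customers thought they had bought
```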

In March 2018 an Uber test vehicle killed a pedestrian pushing a bicycle across an unlit road at night. According to the preliminary accident report, the human safety driver seemed to be alert, but crucially looked down at the instruments seconds before impact and couldn’t react in time. In this case, the Lidar sensor had spotted an ‘object’, but the system didn’t classify it as a danger until a few seconds later. Tragically, the emergency braking system had been disabled because false positives were giving passengers a rough ride; the human driver was expected to take action instead. The system should at least have sounded a driver warning alert, but no such facility had been provided.

Also in March 2018, a Tesla Model X under Autopilot control appeared to drift off the road at a slip-road and crashed into a barrier, fatally injuring the driver. The preliminary accident report also highlighted a serious problem with the crash resistance of the on-board Li-Ion traction battery: it caught fire after the impact. Fire crews thought they had extinguished the fire at the site, but five days later the battery burst into flames again in an impound yard.

Stalling Progress

It doesn’t take many fatal incidents to convince an increasingly risk-averse public that some new technology or new drug must be banned, despite the benefit its use brings to the majority. Users want 100% guaranteed safety, even under conditions of misuse, and legislators insist that complex systems must ‘fail-safe’. Unfortunately, humans have yet to construct anything that meets either of these conditions. This unpleasant fact is accepted by engineers, which is why the value ‘100%’ does not appear even in the most stringent functional safety standards such as ISO 26262. Perhaps more disturbing to engineers is the recent levelling-off in the graph of auto-driving ‘disengagements’. A disengagement occurs when the automatic control system fails in some way and the car returns to manual driving. The return of control may be instigated by the computer system itself, or by the human driver, who is always expected to be vigilant. The fact is, this failure rate amongst all the test cars on the road is still high, and although reliability had been steadily improving, progress may now have plateaued. Why? What’s going on (if anything)?
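The headline metric behind those graphs is simply miles driven per disengagement, as published in filings such as the California DMV's annual disengagement reports. A minimal sketch of the calculation, using invented figures purely to illustrate the plateau:

```python
# Miles per disengagement from (hypothetical) monthly test logs.
# All numbers are invented for illustration; real figures come from
# reports such as the California DMV disengagement filings.
monthly_logs = [
    (10_000, 40),   # (miles driven, disengagements)
    (12_000, 30),
    (14_000, 27),
    (14_500, 28),   # note the improvement levelling off
]

for month, (miles, disengagements) in enumerate(monthly_logs, start=1):
    rate = miles / disengagements
    print(f"Month {month}: {rate:,.0f} miles per disengagement")
```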

Powerful Sensors, Big Data

When the DARPA Challenge ended, many felt that all the major design issues facing the designer of an autonomous car had been identified, and that most were well on the way to being solved. This was certainly true in the area of sensor development. The competition spawned the 360° rotating Lidar scanner, and showed how existing technologies such as microwave radar and digital cameras would play a part. It became obvious that electronic sensing systems are superior in every way to a human driver’s eyes: see my DesignSpark article on this subject. Read about new FIR (Far Infrared) camera technology here, which may prove critical in the design of effective car vision systems. So, it looks like our autonomous car may at least be able to detect things in its environment. The catch is that all these sensors generate vast amounts of data which must be processed in real-time in order to identify or classify the detected objects. That’s a job the human brain does supremely well, and it’s why there’s so much confidence (hope?) that Artificial Intelligence has advanced sufficiently to cope with the task.
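To get a feel for 'vast', here is a back-of-envelope estimate of the raw data flow. Every figure is an illustrative assumption, not the specification of any real vehicle:

```python
# Rough sensor bandwidth estimate for a hypothetical prototype.
cam_rate   = 1920 * 1080 * 3 * 30   # bytes/s per 1080p camera: 24-bit pixels, 30 fps
lidar_rate = 1_200_000 * 16         # bytes/s per Lidar: ~1.2M points/s, 16 B per point
radar_rate = 1_000_000              # bytes/s: a generous allowance per radar unit

total = 8 * cam_rate + 2 * lidar_rate + 4 * radar_rate
print(f"Raw sensor data: ~{total / 1e9:.1f} GB per second")
print(f"...or ~{total * 3600 / 1e12:.1f} TB per hour of driving")
```

All of that has to be boiled down to a short list of classified, tracked objects within the car's reaction-time budget, which is where the real difficulty lies.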

Artificial Not-So-Intelligence

The hype surrounding AI has reached stratospheric levels recently. Advances in ‘Deep Learning’ algorithms based on multi-layer artificial neural networks (ANN) have massively speeded up information extraction from very large data sets. It’s what makes Google searches so fast, with an uncanny ability to ‘know’ what you’re looking for (most of the time). Deep Learning AI seemed to be the obvious solution to the problem of object classification within vehicle vision data, whether from optical cameras or Lidar. Deep Learning systems need to be ‘taught’ how to recognise objects by presenting them with a huge quantity of example data, not unlike a novice human driver gaining experience on the road under the supervision of an instructor. The difference is that a person starts learning to drive with most of the object recognition capability already in place: they just have to learn how to manipulate the controls correctly in response to sensory input. The machine is like a newborn baby in comparison. A major problem is determining when sufficient experience has been gained to allow the novice (human or machine) to run without further ‘tuition’. High disengagement rates with current prototypes suggest that we have a long way to go, literally, before the machine can be seen as a replacement for the human driver. And that’s just for accurately determining what’s happening in the world around the car, let alone deciding how best to deal with a particular situation!
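In Deep Learning practice, the nearest thing to an answer to the 'end of tuition' question is early stopping: keep training while performance on held-out data improves, and stop when it plateaus. A minimal sketch of the idea, with placeholder training and evaluation functions standing in for a real framework; note that a plateau only tells you further tuition isn't helping, not that the pupil is safe to drive:

```python
import random

def train_one_epoch(model):   # placeholder: one pass over labelled examples
    model["skill"] = min(1.0, model["skill"] + random.uniform(0.0, 0.05))

def evaluate(model):          # placeholder: accuracy on unseen examples
    return model["skill"] * random.uniform(0.9, 1.0)

model, best, stalled, patience = {"skill": 0.0}, 0.0, 0, 3
for epoch in range(100):
    train_one_epoch(model)
    score = evaluate(model)
    if score > best + 0.01:
        best, stalled = score, 0
    else:
        stalled += 1
    if stalled >= patience:   # no recent improvement: more 'tuition' isn't helping
        print(f"Stopping after epoch {epoch}: held-out accuracy ~{best:.2f}")
        break
```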

The publication of a recent research paper on the robustness of Deep Learning in object classification may have thrown a spanner in the AI works. The problem actually surfaced some time ago in the context of facial recognition and cybersecurity. It was found that when a previously recognised face in an image was only slightly altered, the algorithm went from, say, 95% certainty to less than 10%, with the high figure transferring to a completely irrelevant image! For law enforcement agencies searching for known felons on surveillance video, this meant that a feeble disguise like a pair of colourful sunglasses could fool the system. The recent paper has a different take on the same issue: a small change in a classified object’s position between successive video frames from an autonomous car’s camera could have the same effect. This may mean, for example, that a pedestrian correctly classified with 90% certainty on one frame might be seen as, say, a seagull with 90% certainty on the next. Within a split-second the driving algorithm could become totally confused and potentially make a fatal decision. Either this ‘generalisation’ weakness in DL algorithms will have to be removed, or fusion of other sensor data, from Lidar for example, will need to be employed to verify the camera data. If neither of these solutions proves possible in the short term, it may mean that the start of Level 5, or even Level 4, operation has receded even further into the future.
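One hedge against these frame-to-frame flips is exactly the cross-checking described above: refuse to act on a camera classification until it is stable across successive frames and corroborated by a second sensor. A minimal sketch, with invented labels and confidence figures:

```python
def stable_and_corroborated(camera_frames, lidar_label, min_conf=0.8):
    """camera_frames: list of (label, confidence) from successive frames.

    Returns the agreed label, or None if the camera flickers between
    classes or disagrees with the Lidar-derived label.
    """
    labels = {label for label, conf in camera_frames if conf >= min_conf}
    if len(labels) != 1:
        return None              # classification flickering between frames
    label = labels.pop()
    return label if label == lidar_label else None

flicker = [("pedestrian", 0.90), ("seagull", 0.90), ("pedestrian", 0.88)]
steady  = [("pedestrian", 0.92), ("pedestrian", 0.90), ("pedestrian", 0.91)]
print(stable_and_corroborated(flicker, "pedestrian"))  # None: treat as unknown, slow down
print(stable_and_corroborated(steady, "pedestrian"))   # 'pedestrian': safe to act on
```

The price, of course, is latency: waiting for agreement across frames and sensors costs precious reaction time.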

After the Hype: Will we ever get Driverless Cars?

A literally fully-autonomous car will probably never exist. The infinite variety of situations on the road needing to be detected, analysed and acted upon in real time with 100% reliability is clearly way beyond current computer technology, even with AI. It is often said that an auto-driver will need to be as good as a human before anyone will trust it. But is that what we really want? To achieve comparability, there will have to be a major breakthrough in our understanding of how the brain actually works. At the moment, measurement of brain activity is largely limited to seeing which areas ‘light up’ on an MRI scan when the brain is subjected to known stimuli via the ears and eyes. The resolution really is that bad: rather like trying to work out how a microprocessor chip works by watching a PC screen while tapping the keys. If a truly artificial brain is eventually created, theoretically capable of driving a car, we may find ourselves presented with a paradox. This new AI will exhibit the same qualities of unpredictability and imperfect memory as human intelligence. In short, it will be just as bad at driving a car as a human driver, because it will have the same failings reproduced in silicon instead of living flesh. This suggests that AI development may just get us back where we started, but with an error-prone machine in the driving seat instead. Like the brain, AI works with statistical probabilities, not logical certainties, and that’s why it too can be unpredictable.

But all is not lost. Two areas of technology currently under development may ensure that our family car at least appears to be autonomous. Firstly, it will become just one node in a vast network of wirelessly interconnected vehicles, each able to exchange data directly with others nearby (V2V) and with a roadside or cellular radio infrastructure (V2I). Such an infrastructure does not exist yet: arguments are still taking place over standards and protocols. Should it be based on 5G or on a modified form of WiFi, for example? And, of course, who will pay for it?
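Whichever radio layer wins, the payload will be some form of periodic status broadcast. A minimal sketch of such a message follows; the field names are invented for illustration, though a real system would follow a standard such as the SAE J2735 Basic Safety Message:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class V2VStatus:
    vehicle_id: str
    lat: float           # WGS84 position
    lon: float
    speed_mps: float     # metres per second
    heading_deg: float   # 0 = north
    braking: bool        # hard-braking flag: the car behind wants this early!
    timestamp: float

msg = V2VStatus("demo-car-42", 52.3900, -1.5600, 27.0, 184.5, True, time.time())
payload = json.dumps(asdict(msg))    # would be broadcast over 5G or 802.11p-style WiFi
print(payload)
```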

Secondly, providing the auto-driver with pre-prepared, detailed 3D terrain maps will ease the burden of precise position-finding. Some cities have already been mapped in this way, and it might just allow the safe operation of driverless taxis over a restricted area using current technology.
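The idea is that one recognised, surveyed landmark pins down the car's position far more tightly than GPS alone. A minimal sketch, assuming invented landmark coordinates on a local grid measured in metres:

```python
import math

# Landmark positions from the pre-prepared 3D city map (x, y in metres).
surveyed = {
    "postbox":       (105.2, 340.7),
    "traffic_light": (98.4, 352.1),
}

def corrected_position(gps_fix, seen_label, seen_offset):
    """Refine a coarse GPS fix using one recognised, surveyed landmark.

    seen_offset: (dx, dy) of the landmark relative to the car, measured
    by Lidar/camera. The GPS fix only sanity-checks the result.
    """
    lx, ly = surveyed[seen_label]
    pos = (lx - seen_offset[0], ly - seen_offset[1])
    assert math.dist(pos, gps_fix) < 10.0, "map-matched fix disagrees wildly with GPS"
    return pos

gps = (100.0, 338.0)   # coarse satellite fix, good to several metres
print(corrected_position(gps, "postbox", (4.0, 2.5)))   # -> (101.2, 338.2)
```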

Cockpit of a Tesla Model X with Autopilot. Credit: Ian Maddox

The Road Ahead: Driver Assistance

Car makers generally have been very careful not to imply that their Autopilot-like offerings are anything more than ADAS (Advanced Driver-Assistance Systems), giving them names like Pilot Assist, ConnectedDrive and ProPILOT. Note the absence of an ‘auto’ prefix. These gadgets are really only usable on long motorways with clearly visible road markings. So, no assistance in heavy rain or snow then. Those who had been promising ‘full driverless functionality’ by this year or next are quietly updating their forecasts. Instead, new models feature many ADAS functions, but responsibility for safe driving remains firmly with the person in the front seat. I don’t know about you, but I’m not quite sold on the idea of switching on the Adaptive Cruise Control with Lane Keeping Assist, only to sit with my hands on the steering wheel, eyes focussed on the road ahead with an expression of grim concentration on my face. Don’t I do that now, without the tech? Do I heck.

If you're stuck for something to do, follow my posts on Twitter. I link to interesting articles on new electronics and related technologies, retweeting posts I spot about robots, space exploration and other issues.

Engineer, PhD, lecturer, freelance technical writer, blogger & tweeter interested in robots, AI, planetary explorers and all things electronic. STEM ambassador. Designed, built and programmed my first microcomputer in 1976. Still learning, still building, still coding today.

22 Aug 2018, 14:45

Comments

September 12, 2018 07:52

Fully-autonomous vehicles will certainly be universal at some point in the future. 2050? Probably not. 2100? Almost certainly. Who would have predicted in 1918 that the telephone and early radio experiments would lead to the communication technology available today? The concept didn't even exist. But we do have the concept, and the need, for autonomous vehicles. The problems will be identified and solved; mistakes and disasters will occur; and the pattern of civilisation will adapt around them. But it will happen.


September 12, 2018 07:52

Hi Bill and thank you for an excellent article. I would agree that AI is limited at this point in time and can certainly be improved upon, especially with LIDAR, Radar and other sensor tech as you have mentioned. I must also add that the advancement in technology will, in my opinion, allow driverless cars that are generally or statistically safer than human drivers in years to come.

Another thing: one could perhaps create a set of general rules for using driverless cars, such that in built-up areas either the road user may not engage the autopilot, or the user must be alert and ready to take control at any moment, while on the freeway or main roads one need not be so alert. So the vehicle could advise: "Leaving city boundary area. You may engage autopilot." or alternatively, when approaching a built-up area: "You are approaching a built-up area, please take control in 5, 4, 3, 2, 1." The idea is to get the user to understand the level of alertness required for the situation and not presume that the AI can handle every possible situation. In some places it wouldn't make sense to use the autopilot at all, such as in Delhi, which is so congested with pedestrians and tuktuks. Snow and heavy rain are also exceptions, and anyone engaging autopilot in such conditions seriously needs to think twice. Interesting topic.


September 10, 2018 14:45

I disagree with your conclusion. We *will* have fully autonomous cars, and without an AI breakthrough. Sufficient technology will eventually mean that cars recognise hazards sufficiently well. This, combined with them communicating with each other and with the road infrastructure, will mean that accidents occur less frequently than they do today and (perhaps) with lower injury levels, though this is not essential. That is, they'll be "good enough".

There will undoubtedly be some interesting discussions about ethics and responsibility and accountability but it will happen eventually. The awkward time will be the transition between now and then.

August 28, 2018 10:01

Whenever I think of driverless cars my imagination skips to the Tom Cruise movie, Minority Report...vast networks of weaving road lanes with vehicles travelling in all directions, vertical, horizontal, parallel, everywhere...from your door to your destination. No people out there...no cyclists, no deer, no jaywalkers (unless they have Spiderman abilities). This imagined scenario is, in my mind, how it would have to be: a complete rebuild of the entire road-based network. The idea of trying to create a fool-proof self-driving car on our existing roads and infrastructure, however noble, is like trying to turn lead into gold.


August 28, 2018 10:00

@Ten22 Lead into gold? No problem - a few more years of quantum mechanics research and we'll be there! :-) Fully-autonomous cars running on today's roads with no external assistance? No way. Get to grips with QM, and teleportation will be just around the corner. We won't need driverless cars then.....

August 23, 2018 08:28

Another great in-depth article Bill - thanks.
At the end of the day, with our current roads and the range of vehicles and technologies on them, I agree it can only be an "assist" function. But this would be of great value on its own, providing the human driver with better information.

Just thinking of driving in an automated and relaxed way and then getting an alarm "call to action": how long would it take the driver to gather what's going on around him and safely take control for any required avoidance....

Perhaps inner-city transport, with its huge issues, is where automated driverless vehicles would be of benefit. Some (huge) investment in route and lane guidance, and only specific high-tech connected vehicles allowed in the zone. Special cycle lanes with bikes carrying a transponder of some sort to identify them, and people only allowed to cross roads at specific points - no jaywalkers! Could be highly efficient and increase safety.


September 12, 2018 07:58

@Boss We should already have transponders in our vehicles, with a collision avoidance system as used in aircraft and shipping! I saw a demonstration of one problem with driverless technology: if the vehicle in front of the driverless vehicle suddenly swerves around a stationary vehicle, the driverless vehicle cannot react in time. This can happen even with an alert driver in the same circumstances, but if a transponder had alerted the driver to the vehicle they could not see, the accident would have been avoided.

September 14, 2018 09:56

@J Dan G A recent report: https://www.iihs.org/externaldata/srdata/docs/sr5304.pdf highlights problems with current-model cars fitted with Adaptive Cruise Control and Lane Keeping. It finds the latter distinctly unreliable on all of them, to a greater or lesser degree. Not funny, but I couldn't help laughing at one example of unintended action. The system designers have at least recognised that real-world lane markings get worn away or covered up, and have built in a back-up: in slow-moving traffic, in the absence of lane markings, the ACC radar can 'lock on' to the car ahead and follow it. Unfortunately - bet you can see what's coming - if the car in front turns off down a slip-road, the car behind follows. It conjures up images of whole convoys of cars with their enraged owners following each other off the motorway. I can see most owners of ADAS-fitted cars turning off these systems after their first 'scare'. It would be a shame if people were put off using potentially life-saving technology because it was introduced too early, before even basic bugs were ironed out.

August 28, 2018 10:01

@Boss Thanks! You're dead right about the problem of 'handover' from auto back to manual control. In the early days of development, it was rather glossed over in the general excitement. How will the auto-driver 'know' it's making a mistake? In the second Tesla crash when Autopilot seemingly drove the car into a crash barrier, what did the human driver think was going on? Perhaps they noticed the movement but hesitated to take back control because, after all, it is an autopilot and must know what it's doing. The requirement that drivers should be vigilant at all times is clearly ridiculous as I suggested at the end - what's the point of automation if you effectively still have to drive the car? It's a cop-out and it won't wash. This is why most car-makers are skipping Level 3 automation and aiming for production of Level 4 cars. Eventually.