
Driverless Cars: Are We Nearly There Yet?


Concept driverless car with built-in AR Credit: Panasonic

“Are we nearly there yet?” Now there’s a familiar cry from the back seat of many a family car heading off on a long journey. Senior managers at car manufacturers around the world may be asking the same thing of their engineers attempting to design the first driverless car. In November 2015 I suggested in a DesignSpark article that ‘Your Car will soon be Driving You to Work’. After nearly three years of frantic activity by engineers and marketing people, are autonomous cars any nearer to becoming a reality? As with most things in life, the answer is possibly, maybe even probably, depending on just how strictly you define the word ‘autonomous’. Despite the achievements of competitors in the DARPA Grand Challenges of the last decade, nothing resembling a commercial driverless car appeared until a few years ago. The following video summarises the advances in technology made by the end of the last Challenge:

By 2015 Google (now Waymo) had prototype ‘driverless’ cars clocking up mileage on US roads and gathering valuable data. Mainstream car manufacturers began pouring money into research, many promising self-driving cars on the road within only a few years. Car-maker Tesla had gone a step further, offering buyers of its electric cars a somewhat misleadingly named option called Autopilot. In May 2016 a Tesla Model S crashed into a truck crossing its path and the car’s driver was killed (NHTSA Accident Report 812481). This incident, together with a previous, little-publicised crash in China, suddenly jolted everyone back to reality.

Degrees of Autonomy

It became apparent that owners had a rather grander interpretation of the word ‘Autopilot’ than Tesla intended. Many appeared to think it meant they had bought a genuinely driverless car, posting pictures of themselves on social media doing crazy things like reading a book while travelling at speed. In fact, Autopilot is just an assistant to a human driver, requiring frequent ‘intervention’ from the latter depending on road and traffic conditions. The Society of Automotive Engineers (SAE) had published a standard (J3016) in 2014 defining six levels of increasing automation, from Level 0 (manual operation) to Level 5 (fully automated, with no human intervention required). By this standard, Autopilot actually manages Level 2, somewhat less than many customers’ expectations of Level 5. Although the human driver was at fault for not being alert at the wheel, the fact is that Autopilot ‘saw’ the truck but misinterpreted the image, ‘thinking’ it was an overhead sign.
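For reference, here is the J3016 scale reduced to a minimal sketch; the one-line summaries are my paraphrases, not the standard’s official wording.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Paraphrased summary of the six SAE J3016 driving-automation levels."""
    NO_AUTOMATION      = 0  # human does everything
    DRIVER_ASSISTANCE  = 1  # steering OR speed assisted, e.g. cruise control
    PARTIAL_AUTOMATION = 2  # steering AND speed assisted; driver must supervise
    CONDITIONAL        = 3  # system drives, but driver must take over on request
    HIGH_AUTOMATION    = 4  # no driver needed, within a limited operating domain
    FULL_AUTOMATION    = 5  # no driver needed, anywhere, in any conditions

# Tesla's Autopilot sits at Level 2: it steers and controls speed,
# but a vigilant human remains responsible at all times.
print(SAELevel.PARTIAL_AUTOMATION)  # SAELevel.PARTIAL_AUTOMATION
```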

In March 2018 an Uber taxi killed a pedestrian pushing a bicycle across an unlit road at night. According to the preliminary accident report, the human safety driver seemed to be alert but, crucially, looked down at the instruments seconds before impact and couldn’t react in time. In this case, the Lidar sensor had spotted an ‘object’, but the system took several more seconds to classify it as a danger. Tragically, the automatic emergency braking system had been disabled because false positives were giving passengers a rough ride; the human driver was expected to take action instead. The system should at least have sounded a driver warning alert, but no such facility had been provided.
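A hypothetical sketch of the missing fallback logic makes the point: even with automatic braking handed over to the human, a classified hazard should at minimum trigger an alert. None of the names below come from Uber’s actual software.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A tracked object reported by the sensor suite (hypothetical fields)."""
    seconds_to_impact: float
    classified_as_hazard: bool

def respond(det: Detection, emergency_braking_enabled: bool) -> str:
    if not det.classified_as_hazard:
        return "monitor"            # keep tracking until classification settles
    if emergency_braking_enabled:
        return "brake"              # the system acts on its own
    # Braking has been delegated to the human, so the minimum safe response
    # is an immediate, loud driver alert -- the facility the Uber car lacked.
    return "sound_driver_alert"

print(respond(Detection(seconds_to_impact=1.3, classified_as_hazard=True),
              emergency_braking_enabled=False))
```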

Also in March 2018, a Tesla Model X under Autopilot control appeared to drift off the road at a slip-road and crashed into a barrier, fatally injuring the driver. The preliminary accident report also highlighted a serious problem with the crash resistance of the on-board Li-Ion traction battery: it was breached in the impact and caught fire. Fire crews thought they had extinguished the blaze at the site, but five days later the battery burst into flames again at an impound yard.

Stalling Progress

It doesn’t take many fatal incidents to convince an increasingly risk-averse public that some new technology or new drug must be banned, despite its apparent benefit to the majority. Users want 100% guaranteed safety, even under conditions of misuse, and legislators insist that complex systems must ‘fail-safe’. Unfortunately, humans have yet to construct anything that meets either of these conditions. Engineers accept this unpleasant fact, which is why the value ‘100%’ does not appear even in the most stringent functional safety standards such as ISO 26262. Perhaps more disturbing to engineers is the recent levelling-off in the graph of auto-driving ‘disengagements’. A disengagement is when the automatic control system fails in some way and the car returns to manual driving. The return of control may be instigated by the computer system itself, or by the human driver, who is always expected to be vigilant. The fact is that this failure rate across the test cars on the road is still large, and although reliability had been improving steadily, it may now have bottomed out. Why? What’s going on (if anything)?
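To be concrete about the metric: manufacturers testing in California must report disengagements and miles driven annually, and the usual headline figure is miles per disengagement. A minimal sketch, with made-up numbers purely for illustration:

```python
def miles_per_disengagement(miles_driven: float, disengagements: int) -> float:
    """Crude reliability metric behind the California DMV reports:
    average miles between returns to manual control."""
    if disengagements == 0:
        return float("inf")
    return miles_driven / disengagements

# Illustrative, invented figures: a fleet logging 350,000 test miles with
# 60 disengagements averages under 6,000 miles between failures -- good,
# but far short of the safety record expected of a career human driver.
print(f"{miles_per_disengagement(350_000, 60):,.0f} miles per disengagement")
```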

Powerful Sensors, Big Data

When the DARPA Challenge ended, many felt that all the major design issues facing the designer of an autonomous car had been identified and most were well on the way to being solved. This was certainly true in the area of sensor development. The competition spawned the 360° rotating Lidar scanner and showed how existing technologies such as microwave radar and digital cameras would play a part. It became obvious that electronic sensing systems are superior in every way to a human driver’s eyes: see my DesignSpark article on this subject. Read here about new FIR (Far Infrared) camera technology, which may prove critical in the design of effective car vision systems. So, it looks like our autonomous car may at least be able to detect things in its environment. The catch is that all these sensors generate vast amounts of data that must be processed in real time in order to identify or classify the detected objects. That’s a job the human brain does supremely well, and it’s why there’s so much confidence (hope?) that Artificial Intelligence has advanced sufficiently to cope with the task.
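To get a feel for the scale of the problem, here is some back-of-envelope arithmetic; the sensor counts and rates are round, plausible assumptions of mine, not any particular car’s specification.

```python
# Rough data-rate estimate for one autonomous car's sensor suite (assumed figures).
CAMERAS       = 8
CAM_BYTES_S   = 1920 * 1080 * 3 * 30   # one 1080p RGB camera at 30 fps, uncompressed
LIDAR_PTS_S   = 1_200_000              # points/second from a rotating Lidar unit
LIDAR_BYTES_S = LIDAR_PTS_S * 16       # x, y, z, intensity stored as 4-byte floats

total = CAMERAS * CAM_BYTES_S + LIDAR_BYTES_S
print(f"{total / 1e9:.2f} GB/s of raw sensor data")  # roughly 1.5 GB/s,
# all of which must be turned into classified objects within milliseconds.
```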

Artificial Not-So-Intelligence

The hype surrounding AI has reached stratospheric levels recently. Advances in ‘Deep Learning’ algorithms based on multi-layer artificial neural networks (ANN) have massively speeded up information extraction from very large data sets. It’s what makes Google searches so fast, with an uncanny ability to ‘know’ what you’re looking for (most of the time). Deep Learning AI seemed to be the obvious solution to the problem of object classification within vehicle vision data, whether from optical cameras or Lidar. Deep Learning systems need to be ‘taught’ how to recognise objects by presenting them with a huge quantity of example data, not unlike a novice human driver gaining experience on the road under the supervision of an instructor. The difference is that a person starts learning to drive with most of the object recognition capability already in place; they just have to learn how to manipulate the controls correctly in response to sensory input. The machine is like a newborn baby in comparison. A major problem is determining when sufficient experience has been gained to allow the novice (human or machine) to run without further ‘tuition’. High disengagement rates with current prototypes suggest that we have a long way to go, literally, before the machine can be seen as a replacement for the human driver. And that’s just for accurately determining what’s happening in the world around the car, let alone deciding how best to deal with a particular situation!
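For readers unfamiliar with what that ‘teaching’ looks like in practice, the sketch below shows the bare bones of supervised training using PyTorch, with random tensors standing in for labelled camera images; a real perception network is vastly larger and trains on millions of examples.

```python
import torch
import torch.nn as nn

# Toy convolutional classifier: the 'novice' that learns from labelled examples.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),           # 10 object classes, assumed
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 3, 32, 32)        # stand-in for real camera crops
labels = torch.randint(0, 10, (32,))       # stand-in for human-made labels

for _ in range(5):                         # real training runs for days, not steps
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong were the guesses?
    loss.backward()                        # propagate the error back...
    optimiser.step()                       # ...and nudge the weights to improve
```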

The publication of a recent research paper on the robustness of Deep Learning in object classification may have thrown a spanner in the AI works. The problem actually surfaced some time ago in the context of facial recognition and cybersecurity. It was found that when a previously recognised face in an image was only slightly altered, the algorithm went from, say, 95% certainty to less than 10%, with the high figure transferring to a completely irrelevant image! For law enforcement agencies searching for known felons on surveillance video, this meant that a feeble disguise like a pair of colourful sunglasses could fool the system. The recent paper has a different take on the same issue: a small change in a classified object’s position between successive video frames from an autonomous car’s camera could have the same effect. This may mean, for example, that a pedestrian correctly classified with 90% certainty on one frame might be seen as, say, a seagull with 90% certainty on the next. Within a split second the driving algorithm could become totally confused and potentially make a fatal decision. Either this ‘generalisation’ weakness in DL algorithms will have to be removed, or fusion of other sensor data from, for example, Lidar will need to be employed to verify camera data. If neither of these solutions proves possible in the short term, the start of Level 5 or even Level 4 operation may have receded even further into the future.
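One plausible mitigation, hinted at above, is a temporal sanity check backed by sensor fusion. The sketch below is my own illustration of the idea, not anything from the paper: if the camera’s verdict on a tracked object flips between consecutive frames while remaining confident, distrust it and fall back on Lidar.

```python
def stable_class(prev, curr, lidar_sees_object: bool, threshold: float = 0.5):
    """Hypothetical sanity check on a tracked object.

    prev, curr: (label, confidence) pairs from successive video frames.
    Distrust a classifier whose verdict flips while staying confident."""
    prev_label, prev_conf = prev
    curr_label, curr_conf = curr
    flipped = prev_label != curr_label and min(prev_conf, curr_conf) > threshold
    if flipped:
        # Camera says 'pedestrian' then 'seagull', both with high confidence:
        # verify with a second sensor rather than trusting either frame alone.
        return "verify_with_lidar" if lidar_sees_object else "treat_as_hazard"
    return curr_label

print(stable_class(("pedestrian", 0.9), ("seagull", 0.9), lidar_sees_object=True))
```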

After the Hype: Will we ever get Driverless Cars?

A literally fully-autonomous car will probably never exist. The infinite variety of situations on the road needing to be detected, analysed and acted upon in real time with 100% reliability is clearly way beyond current computer technology, even with AI. It is often said that an auto-driver will need to be as good as a human before anyone will trust it. But is that what we really want? To achieve comparability, there will have to be a major breakthrough in our understanding of how brains actually work. At the moment, measurement of brain activity is largely limited to seeing which areas ‘light up’ on an MRI scan when the brain is subjected to known stimuli via the ears and eyes. The resolution really is that bad: rather like trying to work out how a microprocessor chip works by watching a PC screen while tapping the keys. If a truly artificial brain is eventually created, theoretically capable of driving a car, we may find ourselves presented with a paradox. This new AI will exhibit the same qualities of unpredictability and imperfect memory as human intelligence. In short, it will be just as bad at driving a car as a human driver, because it will have the same failings reproduced in silicon instead of living flesh. This suggests that AI development may just get us back where we started, but with an error-prone machine in the driving seat instead. Like the brain, AI works with statistical probabilities, not logical certainties, and that’s why it too can be unpredictable.

But all is not lost. Two areas of technology currently under development may ensure that our family car at least appears to be autonomous. Firstly, it will become just one node in a vast network of wirelessly interconnected vehicles, each able to exchange data with vehicles nearby (V2V) and with roadside infrastructure (V2I) over a familiar cellular radio network. Such an infrastructure does not exist yet; arguments are still taking place over standards and protocols. Should it be based on 5G or on a modified form of WiFi, for example? And, of course, who will pay for it?
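What would each node actually broadcast? The sketch below is loosely inspired by the ‘Basic Safety Message’ concept used in V2V trials; the field names and the ten-per-second rate are illustrative assumptions, not drawn from any finished standard.

```python
from dataclasses import dataclass

@dataclass
class SafetyBeacon:
    """Illustrative V2V broadcast: each car announces its state to neighbours
    several times a second. Fields are assumptions, not a standard's schema."""
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float
    timestamp_ms: int

# At ~10 beacons/second per vehicle, even a modest junction with 50 cars
# generates 500 messages a second for the V2I infrastructure to relay --
# one reason the choice of radio technology matters so much.
beacon = SafetyBeacon("car-42", 51.5074, -0.1278, 13.4, 87.0, 1_530_000_000_000)
print(beacon)
```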

Secondly, providing the auto-driver with pre-prepared, detailed 3D terrain maps will ease the burden of precise positioning. Some cities have already been mapped in this way, and that might just allow the safe operation of driverless taxis over a restricted area using current technology.
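Roughly speaking, the live Lidar scan is matched against the stored map to correct a coarse GPS fix. The toy 2D sketch below illustrates the principle only; real systems match full 3D point clouds with proper scan-matching algorithms such as ICP or NDT, and the crude `match_score` here is a stand-in of my own.

```python
import numpy as np

def match_score(scan_xy: np.ndarray, map_xy: np.ndarray) -> float:
    """Crude stand-in for real scan matching: count how many live scan
    points fall within 0.5 m of any point in the pre-built map."""
    dists = np.linalg.norm(scan_xy[:, None, :] - map_xy[None, :, :], axis=2)
    return float((dists.min(axis=1) < 0.5).sum())

def refine_position(gps_xy: np.ndarray, scan_xy: np.ndarray,
                    map_xy: np.ndarray, radius: float = 2.0) -> np.ndarray:
    """Try small offsets around the GPS fix; keep the one where the live
    scan lines up best with the stored map (flattened to 2D for the toy)."""
    best, best_score = gps_xy, -np.inf
    for dx in np.linspace(-radius, radius, 9):
        for dy in np.linspace(-radius, radius, 9):
            score = match_score(scan_xy + [dx, dy], map_xy)
            if score > best_score:
                best, best_score = gps_xy + [dx, dy], score
    return best
```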


Cockpit of a Tesla Model X with Autopilot Credit: Ian Maddox

The Road Ahead: Driver Assistance

Car makers generally have been very careful not to imply that their Autopilot-like offerings are anything more than ADAS (Advanced Driver-Assistance Systems), giving them names like Pilot Assist, ConnectedDrive and ProPILOT. Note the absence of an ‘auto’ prefix. These gadgets are really only usable on long motorways with clearly visible road markings. So, no assistance in heavy rain or snow then. Those who had been promising ‘full driverless functionality’ by this year or next are quietly updating their forecasts. Instead, new models feature many ADAS functions, but responsibility for safe driving remains firmly with the person in the front seat. I don’t know about you, but I’m not quite sold on the idea of switching on the Adaptive Cruise Control with Lane Keeping Assist, only to sit with my hands on the steering wheel, eyes focussed on the road ahead and an expression of grim concentration on my face. Don’t I do that now, without the tech? Do I heck.
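Under the bonnet, these assist functions owe more to classical control than to exotic AI. The sketch below shows the flavour: a toy proportional lane-keeping law and a crude adaptive-cruise speed rule; all gains and distances are illustrative assumptions, not any manufacturer’s tuning.

```python
def lane_keep_steering(lateral_offset_m: float, heading_error_rad: float,
                       kp_offset: float = 0.4, kp_heading: float = 1.2) -> float:
    """Toy proportional lane-keeping law: steer back toward the lane centre.
    Production ADAS adds damping, rate limits and driver-attention checks."""
    return -(kp_offset * lateral_offset_m + kp_heading * heading_error_rad)

def acc_speed(set_speed_mps: float, gap_m: float,
              safe_gap_m: float = 30.0) -> float:
    """Crude adaptive cruise: hold the set speed unless the gap to the car
    ahead shrinks, then slow in proportion to the shortfall."""
    if gap_m >= safe_gap_m:
        return set_speed_mps
    return set_speed_mps * max(gap_m / safe_gap_m, 0.0)

# Drifting 0.5 m right of centre with a slight heading error:
print(lane_keep_steering(0.5, 0.05))   # small corrective steer to the left
print(acc_speed(30.0, gap_m=15.0))     # half the safe gap, so half the speed
```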

If you're stuck for something to do, follow my posts on Twitter. I link to interesting articles on new electronics and related technologies, retweeting posts I spot about robots, space exploration and other issues.

Engineer, PhD, lecturer, freelance technical writer, blogger & tweeter interested in robots, AI, planetary explorers and all things electronic. STEM ambassador. Designed, built and programmed my first microcomputer in 1976. Still learning, still building, still coding today.