The road to autonomous driving is not like the M50 on a quiet Sunday morning – fast-flowing, on a surface smooth as silk, with perfect visibility all around. As we’re learning, it’s more akin to a trip along a secondary country route on a dark winter’s evening. There are many bumps – maybe potholes – and turns on the way, and we have far from perfect sight of what lies ahead.
The tragic death of Joshua Brown in Florida in 2016, when his Tesla Model S collided with a truck while in “Autopilot” mode, was a wake-up call, a moment when the misgivings around autonomous travel really picked up momentum. Not least, unsurprisingly, within the insurance industry. The recently published Insurer Requirements for Highway Automation makes clear how big a change self-driving cars represent from the insurer’s point of view “because the liability for any accident caused will shift from the driver to the car.” That is unlikely to be a seamless shift.
An increasingly evident bump on the journey has been the potential for hackers to trick the Advanced Driver Assistance Systems (ADAS) of the car, with a very concrete example coming to light only this month.
As www.futurism.com reported, security researchers in Israel have found a way to ‘fool’ autonomous vehicles into braking just by flashing an image of a stop sign on a digitally controlled billboard. They say this has the potential to cause traffic jams or even serious accidents.
“The attacker just shines an image of something on the road or injects a few frames into a digital billboard, and the car will apply the brakes or possibly swerve, and that’s dangerous,” Ben-Gurion University researcher Yisroel Mirsky told Wired magazine. “The driver won’t even notice at all. So somebody’s car will just react, and they won’t understand why.”
Apparently the researchers started out by projecting images onto the road surface to make the car’s ADAS react in a manner that could cause an accident. Only afterwards did it occur to them that an electronic billboard would be a far easier way to trick the car: the stop sign image would need to be flashed on screen for only a fraction of a second. (Hackers aside, the same thing could conceivably happen with a legitimate billboard advertisement containing a stop sign.)
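Why is a split-second flash enough? It comes down to how the perception system treats each camera frame: if a single-frame detection is allowed to trigger braking, a one-frame phantom is indistinguishable from a real sign. The sketch below is purely illustrative – the detector, frame rate and persistence threshold are all assumptions of ours, not details of the researchers’ setup or of any real ADAS – and contrasts a naive per-frame reaction with one that demands the detection persist across consecutive frames.

```python
# Illustrative sketch only: why a one-frame "phantom" stop sign can trigger
# a naive per-frame ADAS, and how a simple persistence check blunts it.
# All numbers here are made-up assumptions, not real ADAS parameters.

FRAME_RATE_HZ = 30                              # assumed camera frame rate
PERSISTENCE_FRAMES = int(0.25 * FRAME_RATE_HZ)  # require ~0.25 s of agreement

def naive_adas(detections):
    """Brake the instant a stop sign is detected in any single frame."""
    for i, seen in enumerate(detections):
        if seen:
            return f"BRAKE at frame {i}"
    return "no action"

def persistent_adas(detections, threshold=PERSISTENCE_FRAMES):
    """Brake only if the sign persists for `threshold` consecutive frames."""
    run = 0
    for i, seen in enumerate(detections):
        run = run + 1 if seen else 0
        if run >= threshold:
            return f"BRAKE at frame {i}"
    return "no action"

# A phantom stop sign flashed on a billboard for a single frame (~33 ms),
# versus a real roadside sign that stays in view frame after frame.
phantom = [False] * 10 + [True] + [False] * 19
real_sign = [False] * 5 + [True] * 25

print("naive, phantom:     ", naive_adas(phantom))        # brakes on the flash
print("persistent, phantom:", persistent_adas(phantom))   # ignores it
print("persistent, real:   ", persistent_adas(real_sign)) # still brakes
```

Of course, a persistence check this crude only raises the bar – an attacker could simply leave the image up for longer – so detecting phantoms in practice is a far harder problem than this toy example suggests.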
The researchers were able to fool the most recent version of Tesla’s Autopilot, according to Wired. It said anyone mounting such a phantom attack would not need very advanced hardware to pull it off, and the attack itself would be almost impossible to trace.
None of which should lead us to demonise the self-driving car as dangerous. The insurance industry readily concedes that “automated driving in the right contextual conditions is expected to be much safer than manual driving”. The autonomous car will be a life saver. We’re just not there yet.
Source: https://futurism.com/the-byte/hackers-billboards-self-driving-cars-slamming-brakes