Laser hack of self-driving cars can ‘delete’ pedestrians

Although a city filled entirely with self-driving cars is still the stuff of science fiction, more and more cars are shipping with 'self-driving' features, so it's a little alarming to learn that lasers can be used to interfere with the technology these cars rely on to detect their surroundings.

In a study uploaded to arXiv, a team of researchers in the US and Japan was able to trick a 'victim vehicle' (their words, not ours) into not seeing a pedestrian or other object in its path.

Most self-driving cars use LIDAR to 'see' their surroundings: the sensor emits pulses of laser light and records the reflections from objects in the area. The time it takes for the light to reflect back tells the system how far away each object is.
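The time-of-flight calculation behind this is simple: since the pulse travels to the object and back, the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (our illustration, not the researchers' code):

```python
# Toy LIDAR time-of-flight calculation (illustrative only).
C = 299_792_458.0  # speed of light in a vacuum, m/s


def distance_from_echo(round_trip_s: float) -> float:
    """Distance to an object, given the echo's round-trip time in seconds."""
    # The pulse travels out and back, so halve the total path length.
    return C * round_trip_s / 2


# A pulse returning after ~66.7 nanoseconds indicates an object ~10 m away.
print(round(distance_from_echo(66.7e-9), 2))
```

Real LIDAR units repeat this measurement millions of times per second across a sweep of angles to build a 3D point cloud of the scene.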

This new 'hack', or spoof, works because a perfectly timed laser shone onto a LIDAR sensor can create a blind spot large enough to hide an object the size of a pedestrian.

“We mimic the LIDAR reflections with our laser to make the sensor discount other reflections that are coming in from genuine obstacles,” said University of Florida cybersecurity researcher Professor Sara Rampazzi.

A schematic of the attack, showing the deletion of LIDAR data for a pedestrian in front of a vehicle: visible at left, invisible at right. Credit: Sara Rampazzi/University of Florida

“The LIDAR is still receiving genuine data from the obstacle, but the data are automatically discarded because our fake reflections are the only one perceived by the sensor.”
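One simple way to picture the effect: many LIDAR pipelines keep only the first return they receive for each pulse, so a fake pulse timed to arrive before the genuine echo can displace it entirely. A toy model of this idea (our own hypothetical sketch, assuming a first-return filter, not the paper's actual method):

```python
# Toy model of LIDAR spoofing: a first-return filter keeps only the
# earliest echo per pulse, so a well-timed fake pulse that arrives
# before the genuine reflection causes the real obstacle to be dropped.

def perceived_return(echoes):
    """Return the echo the sensor keeps: the earliest one to arrive."""
    return min(echoes, key=lambda e: e["time_s"])


genuine = {"source": "pedestrian", "time_s": 66.7e-9}  # real echo, ~10 m away
spoofed = {"source": "attacker", "time_s": 20.0e-9}    # fake pulse, arrives first

print(perceived_return([genuine, spoofed])["source"])
```

In this simplified model the pedestrian's echo still reaches the sensor, but the filtering step throws it away, matching the behaviour Rampazzi describes.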

Although the technology is relatively simple, the attack isn't easy to carry out. The team demonstrated it from up to 10 metres away, but the device must be perfectly timed and must move with the car to keep the laser pointed the right way.


Read more: LIDAR: how self-driving cars ‘see’ where they’re going


The researchers have already informed manufacturers about this potential exploit and have suggested ways to minimise the problem. Manufacturers might, for example, teach the software to look for the tell-tale signatures of the spoofed reflections added by the laser attack.

“Revealing this liability allows us to build a more reliable system,” said first author Yulong Cao, a computer scientist at the University of Michigan.

“In our paper, we demonstrate that previous defence strategies aren’t enough, and we propose modifications that should address this weakness.”

This unfortunately isn't the first time researchers have found vulnerabilities in the LIDAR sensors on self-driving cars, but as more of these problems are uncovered and fixed, the technology should end up safer in the long run.

The research is due to be presented at the 2023 USENIX Security Symposium.
