Self-driving cars can be stopped with a laser pointer

Self-driving cars may be the future of transportation – eventually they could even prove safer than vehicles with human drivers.

But autonomous vehicles are going to need more and better defenses against the threat of hackers and cyberattacks.

Uber has just hired a pair of renowned car hackers to help the ride-hailing company develop security for its self-driving cars.

It’s a good move by Uber because, as a security researcher using a simple laser pointer has proved, self-driving cars can be hacked.

Jonathan Petit was able to launch a denial-of-service attack against a self-driving car by overwhelming the car’s sensors with images of fake vehicles and other objects.

As Petit describes in a paper he will present at Black Hat Europe, he recorded the pulses emitted by a commercial lidar (light detection and ranging) unit – the same kind of sensor self-driving cars use to detect objects.

By beaming the pulses back at a lidar on a self-driving car with a laser pointer, he could force the car into slowing down or stopping to avoid hitting phantom objects.
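To see why replayed pulses fool the sensor, remember that a lidar measures range by time of flight. The sketch below (my illustration, not Petit's actual code) shows how an attacker-chosen echo delay maps directly to a phantom distance:

```python
# Sketch: how a replayed lidar pulse spoofs distance (illustrative, not Petit's code).
# A lidar computes range from round-trip time of flight: d = c * dt / 2.
# An attacker who replays a recorded pulse with a chosen delay makes the
# sensor infer a phantom object at the corresponding distance.

C = 299_792_458.0  # speed of light, m/s

def range_from_echo(delay_s: float) -> float:
    """Distance the lidar infers from a round-trip echo delay."""
    return C * delay_s / 2

def spoof_delay(phantom_distance_m: float) -> float:
    """Delay an attacker must add so the lidar 'sees' an object at that range."""
    return 2 * phantom_distance_m / C

# Replaying a pulse roughly 133 ns after the lidar fires makes it report
# an obstacle about 20 m ahead.
delay = spoof_delay(20.0)
print(f"delay = {delay * 1e9:.1f} ns -> inferred range {range_from_echo(delay):.1f} m")
```

The nanosecond timescales explain why cheap hardware suffices: the attacker doesn't need to generate realistic pulses, only to replay recorded ones with the right timing.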

In an interview with IEEE Spectrum, Petit explained that spoofing objects like cars, pedestrians or walls was fairly simple, and that his attack could be replicated with a kit costing just $60.

But it’s not just self-driving cars that are at risk. We’ve also seen ample evidence that today’s human-driven cars have security vulnerabilities – including ones that could allow an attacker to force a vehicle to stop or drive off the road.

Fiat Chrysler had to recall more than 1 million cars and trucks to patch them against a security vulnerability in the entertainment system that could allow a remote attacker to take over the vehicle’s brakes, steering and other systems.

That vulnerability was widely publicized in July when hackers Charlie Miller and Chris Valasek (the guys hired by Uber) demonstrated how they hacked into a Jeep Cherokee’s entertainment system to take over the vehicle from 10 miles away.

Petit’s attack on a self-driving car’s lidar requires an attacker to be within about 100 meters, but he was able to jam the lidar from the front, side or rear.

He said lidar manufacturers could fix the issue by cross-checking data to filter out objects that aren’t plausible.
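One simple form that cross-checking could take – my own sketch of the idea, not any vendor's algorithm – is rejecting detections whose implied motion between consecutive lidar frames is physically impossible for a real road object:

```python
# Sketch of plausibility filtering (illustrative only, not a vendor algorithm):
# reject a detection whose apparent radial speed between consecutive lidar
# frames is physically implausible for a real object on the road.

MAX_SPEED_MS = 70.0  # assumed plausibility bound, roughly 250 km/h

def plausible(prev_range_m: float, curr_range_m: float,
              frame_dt_s: float, max_speed: float = MAX_SPEED_MS) -> bool:
    """True if the object's apparent radial speed is physically possible."""
    speed = abs(curr_range_m - prev_range_m) / frame_dt_s
    return speed <= max_speed

# A spoofed object that "teleports" 60 m closer during a 0.1 s frame gap
# implies 600 m/s and gets filtered out; a real car closing at 20 m/s passes.
print(plausible(80.0, 20.0, 0.1))  # -> False (likely spoofed)
print(plausible(80.0, 78.0, 0.1))  # -> True  (plausible vehicle)
```

A real system would also fuse lidar data with radar and cameras, but even this single consistency check would force an attacker to craft a physically coherent sequence of pulses rather than isolated phantom echoes.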

But the manufacturers haven’t done it yet, which is why Petit hopes his research will be a wake-up call.

Self-driving car manufacturers – including tech giants Google and Apple – are hopefully paying attention too.

