Researchers trick Tesla Autopilot into steering into oncoming traffic
Last updated on: 02 April, 2019, 08:49 am
The researchers painted three tiny squares in the traffic lane to mimic merge striping.
(Web Desk) – A prolific cybersecurity research firm says it managed to make Tesla’s self-driving feature veer off course by placing three small stickers on the road surface.
Keen Lab, a two-time honoree of Tesla’s "bug bounty" hall of fame program, said in a research paper on Saturday that it found two ways to trick Autopilot’s lane recognition by altering the physical road surface.
The first attempt to confuse Autopilot used blurring patches on the left-lane line, an approach the team said would be too difficult for an attacker to deploy in the real world and too easy for Tesla’s computer to recognise.
"It is difficult for an attacker to deploy some unobtrusive markings in the physical world to disable the lane recognition function of a moving Tesla vehicle," Keen said.
The researchers said they suspected Tesla also handled this situation well because it has already added many "abnormal lanes" to the training set built from Autopilot miles. This gives Tesla vehicles a good sense of lane direction even in poor lighting or inclement weather, they said.
Not deterred by the low plausibility of the first idea, Keen then set out to make Tesla’s Autopilot mistakenly think there was a traffic lane when one wasn’t actually present.
The researchers painted three tiny squares in the traffic lane to mimic merge striping and cause the car to veer into oncoming traffic in the left lane.
"Misleading the autopilot vehicle to the wrong direction [of traffic] with some patches made by a malicious attacker is sometimes more dangerous than making it fail to recognise the lane," Keen said.
"If the vehicle knows that the fake lane is pointing to the reverse lane, it should ignore this fake lane and then it could avoid a traffic accident."
In response to Keen’s findings, Tesla said the issues did not represent real-world risks and that no drivers had encountered any of the problems identified in the report.
"In this demonstration the researchers adjusted the physical environment (e.g. placing tape on the road or altering lane lines) around the vehicle to make the car behave differently when Autopilot is in use," the company said.
"This is not a real-world concern given that a driver can easily override Autopilot at any time by using the steering wheel or brakes and should be prepared to do so at all times."
Tesla’s Enhanced Autopilot supports a variety of capabilities, including lane-centering, self-parking, and the ability to automatically change lanes with the driver’s confirmation. The feature set is now mostly sold simply as "Autopilot" after Tesla reshuffled its pricing structure. The system relies primarily on cameras, ultrasonic sensors, and radar to gather information about the vehicle’s surroundings, including nearby obstacles, terrain, and lane changes, and it feeds that data into onboard computers that use machine learning to make judgements in real time about the best way to respond.
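As a rough illustration of that sense-then-decide loop, the hypothetical sketch below reduces it to a single proportional steering rule; none of the names or numbers come from Tesla, and a production system would use far richer learned models.

```python
# Hypothetical, heavily simplified sketch of the pipeline described above:
# fused sensor readings go into a decision step that emits a steering
# command. The class, field names, and gain are invented for illustration
# and do not come from Tesla's software.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    camera_lane_offset_m: float   # lateral offset from the detected lane centre (positive = car is right of centre)
    radar_obstacle_dist_m: float  # distance to the nearest obstacle ahead
    ultrasonic_clear: bool        # whether the space beside the car looks clear

def decide_steering(frame: SensorFrame) -> float:
    """Return a steering correction in degrees (positive = steer right).

    A real system would run learned models here; this toy rule just nudges
    the car back toward the detected lane centre, which is why a spoofed
    lane detection can translate directly into a bad steering command.
    """
    gain_deg_per_m = 2.0  # invented proportional gain
    return -gain_deg_per_m * frame.camera_lane_offset_m

frame = SensorFrame(camera_lane_offset_m=0.8,
                    radar_obstacle_dist_m=50.0,
                    ultrasonic_clear=True)
print(decide_steering(frame))  # -1.6: steer left toward the (possibly fake) lane centre
```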
In a detailed, 37-page report, the researchers wrote:
"Tesla autopilot module’s lane recognition function has a good robustness in an ordinary external environment (no strong light, rain, snow, sand and dust interference), but it still doesn’t handle the situation correctly in our test scenario. This kind of attack is simple to deploy, and the materials are easy to obtain. As we talked in the previous introduction of Tesla’s lane recognition function, Tesla uses a pure computer vision solution for lane recognition, and we found in this attack experiment that the vehicle driving decision is only based on computer vision lane recognition results. Our experiments proved that this architecture has security risks and reverse lane recognition is one of the necessary functions for autonomous driving in non-closed roads. In the scene we build, if the vehicle knows that the fake lane is pointing to the reverse lane, it should ignore this fake lane and then it could avoid a traffic accident."