The Washington Post has an excellent article about a crash involving Tesla’s autopilot. It is astonishing and depressing to me how much we have allowed Tesla to escape accountability for their actions. It is clear from reading the article that Tesla deliberately attempts to make people believe that their autopilot is safer than the evidence supports and tries to avoid blame by relying upon software licenses and warning screens to elude responsibility.
This specific accident, in 2018, involved a man who set his autopilot to 69 MPH in a 55 MPH zone on a highway with cross traffic, an area Tesla knew its system was unsafe in, and then took his hands off the wheel. A truck pulled out in front of him and neither he nor the car braked. The car drove under the semi-trailer and he was decapitated. The car didn’t stop for almost a third of a mile. Tesla warns drivers that they must keep their hands on the wheel and that they are ultimately responsible, but here is what Tesla also does and does not do:
- Keeping a video on its website showing a car navigating city streets with the driver’s hands off the wheel while a voiceover says “the car is driving itself.” The video makes no mention of the fact that the route was pre-mapped specifically for the video, nor that an early attempt ended with the car crashing into a fence.
- Allowing the speed to be set at 69 MPH in a 55 MPH zone while the driver’s hands are off the wheel.
- Noticing the driver’s hands were off the wheel, but only warning the driver after 25 seconds (not taking any remedial action like braking or slowing down, mind you, just a warning). The accident in question played out in less than 3 seconds, but just 1.6 seconds of braking would have saved the driver’s life.
- Allowing the system to be activated in areas it is not suited for. (This is arguable, but in the era of ubiquitous GPS, it is entirely possible for the car to know it is in such an area.)
- Having its CEO, Elon Musk, claim that “It’s [the autopilot system] probably better than a person right now” to reporters three years before this crash took place.
- Failing, to this day, to correct the camera placement that leaves the system unable to correctly identify semi-trailers as obstacles in the road.
Tesla clearly markets its system as much safer, or at least much more autonomous, than it actually is. It compounds that misinformation with a series of choices that encourage drivers to think the system has more control and margin for error than it actually does. As a result, people treat it not as the “fancy cruise control” Tesla’s lawyers claim it is in court, but as the self-driving system its marketing and engineering decisions imply it is.
That may be legal in this fallen country of ours, but there is no universe in which it is moral.
Fans of self-driving cars insist that we must allow them on public roads so that, even if they are not safer than humans today, they can eventually become so. The problem with that mentality is that it requires us to trust that every producer of self-driving cars actually cares about making them safer. That assumes facts not in evidence. Given the disconnect between what Tesla says in court and in its user manuals, what it says in its marketing, and what it allows drivers to get away with, one can make a strong claim that Tesla, at least, cares only about using the perception of driving automation to sell more cars.
I, for one, do not want my family to be put at risk so Elon Musk can sell another car.