In perhaps the least surprising news ever, Cruise, the automated vehicle arm of General Motors (though GM apparently grants it a great deal of autonomy), was caught lying to regulators about a serious accident. A Cruise vehicle ran over a pedestrian it could not avoid and then dragged them for about twenty feet. Cruise didn’t show the entire video of the accident to regulators, hiding a portion that made the car’s behavior look worse, until regulators learned of it through other channels and specifically asked for it. Given the way most companies treat ethics in artificial intelligence, this, as I noted, is entirely unsurprising.
Microsoft gutted its AI ethics team. Google fired its most prominent AI ethics researcher. Marc Andreessen just wrote a manifesto that called regulators and AI ethicists, among others, the enemy. Facebook has been a key contributor to genocide and just committed a massive, industry-destroying fraud. I could go on, but it is all so tiresome. These companies show no signs of being concerned about ethics or human beings. No one should be surprised that they lied to regulators about potentially contributing to the serious injury or death of a pedestrian (as I write this, the pedestrian’s ultimate fate is unknown).
The argument for self-driving cars is always, always that they are or will soon be better drivers than humans. That even if they cause some accidents or harm some people or prevent some rescue vehicles from getting to, you know, rescue people, the cost is worth it because these vehicles ultimately save lives. Even if you want to argue that people have made the democratic decision to allow these experiments on their roads (something that is manifestly not true in San Francisco, where this happened), it is incredibly difficult to argue that the citizens made an informed decision.
Almost every study that purports to show that self-driving vehicles do better than human drivers relies on data provided by the self-driving companies. It wasn’t until 2021 that the NHTSA even required manufacturers to report certain kinds of crashes. Tesla appears to have been caught lying about its crash data in public and perhaps to regulators — the government is having to sue to get the data it requires to determine that once and for all (among other possible issues and/or crimes). And that is before we even get into the general history of corporate malfeasance, such as hiding the cancer-causing effects of smoking, the global warming effects of fossil fuels, or the fact that the Pinto was a rolling bomb. What, in this history, would lead us to possibly believe that such companies can be trusted with our lives?
We have ceded far too much ground to the people who run these companies and their cheerleaders. Before a normal car is allowed on the road, it has to adhere to defined safety standards, and if its maker makes a claim about crash safety, that claim has to be independently verified. No one forced these companies to build realistic courses and try the cars out there under supervised conditions and stringent tests before unleashing them on innocent people. Instead, we were told to just trust them and that the risk was worth the payoff. It was amoral nonsense at the time and looks even worse in light of these kinds of apparently deliberate lies about accidents. We simply cannot rely on people whose dreams of wealth depend upon a specific action to tell the rest of us the truth about the results of those actions. It is madness.
People should not be products, and they should certainly not be guinea pigs. I have no faith that self-driving cars will ever be useful in complex urban environments. Such environments rely far too much on context and inter-human communication to be reducible to machine learning prompts. Making them amenable to algorithms would require reconfiguring them significantly, and if you are going to do that, then you should spend the money on massive public transportation and walkable neighborhood projects instead. Those will serve more people more efficiently and with a greater likelihood of improving the environment.
But even if I am wrong, even if these vehicles can be made to work, the route to success does not lie in trusting these companies to test them as they see fit. Only stringent government oversight can ever lead to an ethical, safe, equitable self-driving future. Relying on these companies to be honest and forthright, to keep us safe? That is just going to get people unnecessarily killed.