As I get older, despite Churchill’s supposed wisdom, I find myself getting more and more radical about most subjects. From civil rights of all kinds to capitalism to labor rights to the bloody loser point in hockey, I am more and more in favor of maximalist solutions. I think this is in part because we live in such a reactionary time (c’mon, Bettman. The loser point only serves the needs of cowardly GMs!), but I also think in part it is because we are surrounded by so much bullshit, intentional or not. And since I read a couple of articles that highlight some of the weak thinking around AI regulation, you, lucky person that you are, get a rant-y little screed about, well, AI regulation.
This could take a bit. You might want to get a snack.
Both this Tim B. Lee article and this interview with Stuart Russell express a certain reluctance to engage with the plausibility of regulating AI. Now, Russell himself is much better on this point than Lee — he comes right out and says that there are regulations that can be put into place today that can control some of the worst aspects of what imitative or generative AI can do right now. But even he says that he cannot think of many examples of creatures controlling other creatures who are smarter than them — giving in to the paranoia about so-called general-purpose AI.
Lee, though? Lee’s point is that since many aspects of regulating AI will be hard, there is no real point in trying.
First, he uses a recently passed law in California, AB 316, to try and paint regulation as inherently harmful. In his words, the bill “… could lead to a future where California becomes something of an economic backwater. Perhaps driverless trucks will carry freight to the California border before a human driver hops into the cab and drives it on to the final destination.” This is hyperbole to the point of dishonesty, a recurring theme in Lee’s article.
The bill actually does two things: it requires safety drivers to be present in all large trucks, and it requires a report no later than 2029 on the safety of such vehicles. A real report, mind you, not one created by the companies themselves or dependent upon the data that companies deign to release, as almost all such reports are today. Both of those sound like perfectly reasonable regulations to me — in fact, the vast, vast majority of heavy truck autonomous testing is done with safety drivers. The argument that you cannot design testing protocols that leverage well-trained test drivers and still get to fully autonomous vehicles is ludicrous.
Driving a large truck, especially in an urban environment, is hard, skilled labor. I worked in warehouses in the center of cities — I have seen for myself the difference between good drivers and bad. Input from these drivers is, in fact, if used correctly, more likely to speed up the process of developing reasonably safe vehicles than to retard it. But that means paying such drivers, something tech companies are loath to do. And why not? Lee himself doesn’t seem to think of drivers as deserving of respect. His comment about having to have drivers haul material around California (again, not what this law does, mind you) certainly doesn’t seem to find value in their work, or value in their concerns. Which is another massive blind spot in his outlook.
Technological advances do not automatically make society better off. There are whole books about this, books that Lee apparently has never read, or he would not be so blasé about people wanting to protect their families from AI. There is no guarantee that an AI future is a better future if we simply allow companies to do what they want. No technological revolution has ever shared its spoils without a fight. If Lee really does believe that AI is going to make driving safer, for example, then he owes it to the people whose lives are going to be ruined by AI to come up with a way to prevent them from being immiserated. Why shouldn’t people fight tooth and nail against AI if it means hunger and homelessness for them and their families? The lives of truck drivers matter just as much as the lives of programmers or AI journalists.
Lee does suggest something approaching regulations, to be fair, but they aren’t in the AI world. His response to the notion that AI makes it easier to damage things in the real world is to harden certain targets. First, that is woefully inadequate. Putting aside the universal cyber-security rule that defending is always harder than attacking: if AI is really that world-altering, then how can we expect biolabs, for instance, to stay ahead of whatever attacks AIs come up with? Second, his response does nothing to deal with AIs that spread disinformation, or deep fakes, or discriminate against people, or get in the way of emergency vehicles on the road. In his world, apparently, we shouldn’t do anything about those dangers because we just aren’t collectively smart enough, and never have been:
Suppose you took a time machine back to 2001 to warn George W. Bush that there was about to be a thing called social media that would worsen teenage depression, destabilize governments in the Middle East, and aid the election of a nativist demagogue in the US in 2016 (or, if you prefer, engage in large-scale censorship of right-leaning political speech).
Do you think Congress could have passed legislation that would have averted these outcomes? I don’t. Even with everything we know today, it’s hard to think of a regulatory framework that would have led to a better outcome.
That is a level of incorrect that approaches deliberate disingenuousness. We could have prevented these companies from selling personalized targeted ads. We could have prevented them from collecting the information needed to sell those ads. We could have forced companies to show only a reverse chronological feed instead of feeds designed to drive engagement. We could have forced them to make their algorithms public so we could have known when they dialed up the outrage to get people to stay on the sites. We could have prevented them from forming ad and social media monopolies. We could have held them responsible for their moderation failures. Nothing would have been perfect, but there were things we could have done to make things turn out better, and many of those things were discussed at the time these companies were wreaking havoc. I know — I was there.
To Ministry-of-Truth those ideas and that history away is just the worst kind of intellectual erasure. We have a past so that we know what not to do in the future, not so that we smugly assert “Welp, too bad we could not have ever stopped someone from building the Destruction Engine. Let’s see if it works better this time!”
Either Lee genuinely believes that no regulation is worth the cost in terms of AI advancements, in which case he is wrong. Or he believes that the people, collectively through their government, cannot be trusted to regulate AI, and so it should all be left to the companies — in which case he is wrong.
In the first case, there is no point to AI if it is not going to benefit society. We have a thousand years of history that shows us that unregulated technological change almost always makes most of society worse off, that the benefits have to be wrested away from the monopolizers and the powerful. Without meaningful regulations, you will either face a backlash that will derail progress or a society worse off than the one we have today.
In the second case: there is so much potential in these systems. They can help us do so much good, from medicine to climate change to personal assistants. But we will not get those benefits relying solely on unrestrained capitalism to provide them to us. Look around at the extractive, exploitative economy the social media titans have built. Look at how the AI companies are focused not on fields such as medical research but on driving out of business writers, artists, and therapists. That is what shareholder value means — extracting as much money from as many people as possible, damn the consequences, for the people at the top of these companies. Without regulations reining them in, we get a repeat of the social media era. Except the enshittification probably does even more damage.
Lee titles his article “Regulating AI Won’t be Easy” and then goes on to essentially argue that because it’s hard we shouldn’t bother to try. Well, no shit it is going to be hard. It is a complex area with a lot of competing interests that can do real good and real harm. But that it is hard is not an excuse not to try. Lots of things are hard. Being the starting center for the Blackhawks is hard. Writing well is hard. Being a good person can be hard. Preserving democracy is hard. To quote the preeminent moral philosopher of our era: “It’s supposed to be hard … The hard is what makes it great.”
We can do great things with AI, but only if we stop acting like children, stop shirking our responsibilities, stop pretending that some Big Tech Daddy knows best and will save us. Of course it’s hard. That is what adults do — the hard things. If we act like adults, if we collectively take responsibility for our society the way countless other generations have done in the past, then we can shape AI to benefit us all. But if we give in to the petty little whine of “it’s too hard”, to the allure of doing nothing and expecting others to save us? Well, then we will wake up one day and find we have traded in our futures for a bit of misdirected calm.
Rant over. Hope you enjoyed your snacks.