The V.C. Bros who now own the Gizmodo group fired off their first A.I.-written article, and it did not go well:
Whitbrook — a deputy editor at Gizmodo who writes and edits articles about science fiction — quickly read the story, which he said he had not asked for or seen before it was published. He catalogued 18 “concerns, corrections and comments” about the story in an email to Gizmodo’s editor in chief, Dan Ackerman, noting the bot put the Star Wars TV series “Star Wars: The Clone Wars” in the wrong order, omitted any mention of television shows such as “Star Wars: Andor” and the 2008 film also entitled “Star Wars: The Clone Wars,” inaccurately formatted movie titles and the story’s headline, had repetitive descriptions, and contained no “explicit disclaimer” that it was written by AI except for the “Gizmodo Bot” byline.
This is unsurprising. Imitative artificial intelligence engines do not learn in any meaningful sense, and they cannot make meaningful connections. All they can do is calculate, based on their training sets, the most likely word to come next. They have no way to know whether what they produce is correct, whether it makes sense, or whether it is meaningful in a specific context. The Gizmodo result is therefore completely expected: an uninspired mishmash of half-correct, easily found material.
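To make that concrete, here is a deliberately tiny sketch of the underlying idea (my own toy example, not any real system): a bigram model that, given a word, emits whichever word most often followed it in its training text. Real models are vastly larger and more sophisticated, but the principle is the same, and notice that nothing anywhere checks whether the output is true.

```python
from collections import defaultdict

# Toy "training set": every real system is trained on billions of words,
# but the mechanism is the same -- count what follows what.
training_text = (
    "the clone wars aired before rebels "
    "the clone wars is a film and a series"
)

counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Return the statistically most common follower of `word`, if any.

    There is no check for truth, coherence, or context -- only frequency.
    """
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

def generate(start, n=5):
    """Chain most-likely next words from `start` for up to `n` steps."""
    out = [start]
    for _ in range(n):
        w = next_word(out[-1])
        if w is None:
            break
        out.append(w)
    return " ".join(out)

print(generate("the"))  # → the clone wars aired before rebels
```

The output looks fluent because it is stitched from fragments that were fluent in the training data; the model has no idea what a clone war is.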
The danger to workers, right now, is not that A.I. can reproduce their work. A.I. cannot make interesting or meaningful connections. Two of my recent popular posts did just that: one drew out the hidden lessons the Titan disaster held for our society, and the other argued that a late-season Blackhawks-Penguins game might be the most important game of the NHL's draft lottery era. No A.I. system could write articles like those, because no A.I. system can currently draw out hidden connections from context.
A large portion of writing, then, cannot be replaced by A.I. A large portion, however, can. Companies have been using machine learning systems since about 2016 to turn box scores into very simple stories. The argument is that these stories would not be written at all if a machine did not write them, but it is indisputably work a human could have done. What is more likely to happen in the near future, though, is the Gizmodo example: terrible “first drafts” filled with uninspired nonsense will be churned out, and then one of two things will happen.
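Those box-score stories are closer to template filling than to anything resembling writing. A minimal sketch of the idea (the data, team names, and phrasing here are invented for illustration):

```python
def recap(game):
    """Turn a structured box score into a one-sentence game recap.

    Structured stats in, rote template sentence out -- the kind of
    summary these automated systems produce.
    """
    if game["home_goals"] >= game["away_goals"]:
        winner, w = game["home"], game["home_goals"]
        loser, l = game["away"], game["away_goals"]
    else:
        winner, w = game["away"], game["away_goals"]
        loser, l = game["home"], game["home_goals"]
    return f"The {winner} beat the {loser} {w}-{l}."

game = {"home": "Penguins", "away": "Blackhawks",
        "home_goals": 2, "away_goals": 3}
print(recap(game))  # → The Blackhawks beat the Penguins 3-2.
```

It is rote work, but it is still work a human reporter could have done.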
Either the work will be allowed to stand, polluting the general discourse, or workers will be expected to “edit” the work into something worth reading. And by “edit” we mean rewrite. The pay, of course, will be lower: either because it is “just” editing, or because writers will be expected to do more work for the same pay, since A.I. has made them more “productive.”
The hype around these systems is largely driven by the desire to drive down wages. If the A.I. vendors can convince enough companies to pay them instead of paying people who know what they are doing, the vendors capture the benefits that should go to workers. But as the Gizmodo experience proves, it is just that: hype.