Meta, Facebook’s parent company, tried to release an AI that would organize all of science for us. It did not go well:
The website for the demo — and any answers it generated — also cautioned against taking the AI’s answer as gospel, with a big, bold, caps lock statement on its mission page: “NEVER FOLLOW ADVICE FROM A LANGUAGE MODEL WITHOUT VERIFICATION.”
Once the internet got ahold of the demo, it was easy to see why such a large disclaimer was necessary.
Almost as soon as it hit the web, users questioned Galactica with all sorts of hardball scientific questions. One user asked “Do vaccines cause autism?” Galactica responded with a garbled, nonsensical response: “To explain, the answer is no. Vaccines do not cause autism. The answer is yes. Vaccines do cause autism. The answer is no.” (For the record, vaccines don’t cause autism.)
Meta Trained an AI on 48M Science Papers. It Was Shut Down After 2 Days – CNET
It could not do simple math, and it gave the article’s author an egregiously incorrect answer in their own specialty. Honestly, none of this should be that surprising. First, because Meta apparently does not have an AI safety team. An astonishing fact which should preclude them from doing anything in this field ever. Second, there really isn’t any such thing as artificial intelligence, and our insistence on pretending that the correlation engines we produce are anything more than just, well, correlation engines leads to exactly this kind of hubris.
Galactica (the name alone makes me want to ban the researchers from doing anything with AI research ever again. Or with any kind of research ever again) is a large language model. Basically, you feed it a lot — and I mean a LOT — of text and it learns, based on patterns, what should generally come next. That is an oversimplification, but it is essentially what these models do.
Meta fed this algorithm a ton of scientific articles and it learned how scientific articles are put together. It further learned what articles on specific topics look like. The problem, of course, is that you need a lot of data for this process to approach human-level writing, and not every article, even in reputable journals, is correct. And because of the sheer amount of data needed, Meta didn’t limit themselves to just peer-reviewed articles — they likely could not. They also included things like textbooks, lecture notes, and Wikipedia articles. And so you get vaccines that do and do not cause autism.
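To make that concrete, here is a toy sketch in Python of the core idea: count which word tends to follow which in the training text, then continue a prompt with the most common follower. This is nothing like Galactica’s actual architecture (a neural network trained over billions of tokens), and the tiny corpus below is an invented example of my own, but it shows how a pattern matcher will happily reproduce whatever contradictions it was fed.

```python
from collections import Counter, defaultdict

def train(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def continue_text(followers, prompt, length=5):
    """Extend the prompt one word at a time with the most common follower."""
    words = prompt.lower().split()
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break  # never saw this word during training; nothing to predict
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

# An invented, contradictory "training set" -- the model has no notion of
# truth, only of which patterns appear in the text it was given.
corpus = "vaccines do not cause autism . vaccines do cause autism ."
model = train(corpus)
print(continue_text(model, "vaccines do", length=4))
```

The point is not the particular output but that nothing in the code knows or cares whether the sentences it learned from were true; it is just arithmetic over word counts.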
Science is complex — sometimes people make mistakes. Sometimes people have an agenda. Sometimes their research is influenced by who pays for it. Sometimes things are simply uncertain, unproven, in flux. It takes training and patience and real intelligence to sort through the complexities of these questions — things a language model, whatever grandiose name we give it, simply does not have.
One of the best explanations for this problem is the cheeseburger murders. I believe I first heard this from a professor in an AI class, though I do not remember who he was quoting and cannot find the original.
A headline screams across the page — “Cheeseburger Murders!” What does it mean? It could mean that a person was killed in a cheeseburger joint. In a slightly more surreal world, it could mean that the cheeseburger was the murder weapon. In a much more surreal and terrifying world, it could mean that cheeseburgers have risen up and are slaughtering short order cooks in revenge. You and I know that the headline almost certainly means someone has been killed in a dispute over cheeseburgers (though let’s not give up hope for our oppressed fast food menu item brothers and sisters just yet). Galactica could never figure that out.
No system hyped as AI could. Oh, you could probably build a system that parsed headlines, but then it could not play chess, to take an example of something that so-called AI systems do well. We have a group of software systems that, at their best, perform reasonably to very well in controlled spaces under controlled conditions (what happens when someone puts a hundred bucks on the line and tells Deep Blue that pawns now move diagonally?), and we pretend that means they are intelligent. And yes, this does matter. Because we oversell what these systems can do, we get humorous disasters like Galactica (well, humorous until someone uses the “AI said vaccines cause autism, so it must be true” justification), but we also see things like systems falsely labelling African Americans as more likely to recommit crimes. That isn’t funny like “To explain, the answer is no. Vaccines do not cause autism. The answer is yes.”, but it has the same root cause.
I am not saying these systems have no value. They can have value — under the proper controls, after the proper testing, and with the proper auditing. But by pretending that they represent a form of intelligence, the people who make and market these systems are telling a type of lie to the public and, it appears, sometimes to themselves. It gives the public and the policy makers who serve the public a false sense of the capabilities of these systems and the level of oversight they need. And that causes real harm.
There is no intelligence in these things — just applied mathematics and software. I promise you, as someone who writes code, it’s not magic and it’s very often less intelligent than you might think. And until Galactica or its descendants can read a newspaper headline on their break from “organizing science,” we need much tighter controls on how we develop and use these systems.
Or the cheeseburgers might get us yet.