
Generative AI Won’t Cure Your Cold


One of the claims for generative AI tools like ChatGPT is that they could provide legal, medical, or therapeutic services to people who either cannot afford those services or who lack access to them because of language, proximity, or other barriers. On its face, this sounds like a reasonable potential use — a case that would be an almost unalloyed good. Unfortunately, because of the nature of these programs, they would in fact take something we largely know how to do today and make it worse, or would provide false assistance to the people who could least afford it.

The problem becomes clear when we understand that generative AI is a terrible name for these programs. First, there is no intelligence involved, artificial or otherwise. Second, and more importantly, they do not generate; they imitate. To simplify, ChatGPT calculates what word is most likely to come next based on its training. It ingests a ton of material and calculates, for the context provided to it — say a rental contract or a text-parsing program in the PERL programming language (I miss PERL) — what word is most likely to fit in the flow of text it is producing. It is imitating, in other words, what it already knows. It can generate nothing new; it can create nothing it has not already seen. It merely calculates the odds of a word appearing after a previous word. This is likely why it hallucinates (lies, to you and me) so readily. To oversimplify: it calculates that in a legal context, for example, a case citation should most likely appear at point X, and it knows the format of case citations since it has been trained on them. If its training doesn't provide a valid case for a given situation, well, a case cite is still the most likely result, so a case cite it shall provide.
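
To make that concrete, here is a toy sketch of the idea (not how ChatGPT is actually built; real systems use neural networks over tokens, not word-count tables): a tiny model that learns which words follow which from a scrap of training text, then "generates" by repeatedly picking a likely next word. The training text and wording are invented for illustration.

```python
import random
from collections import defaultdict

# Invented scrap of legal-sounding training text.
training_text = (
    "the court held that the contract was void "
    "the court held that the claim was barred"
)

# Record which words follow which; duplicates make frequent
# successors proportionally more likely to be picked later.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def next_word(word):
    """Pick a next word weighted by how often it followed `word` in training."""
    candidates = follows.get(word)
    if not candidates:
        return None  # never seen this word: the model can only imitate
    return random.choice(candidates)

# Generate by repeatedly asking "what word most likely comes next?"
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Note that nothing in this loop checks whether the output is true; it only checks whether each word is plausible given the one before it. That is the hallucination problem in miniature.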

There are a lot of problems, then, with using a system that works on probabilities to provide critical legal or medical assistance. First, these systems lie — I'm sorry: hallucinate. You cannot use them if they cannot provide rock-solid information. Just recently, a lawyer used ChatGPT in his research and it provided him with false case cites. Now he is facing disciplinary action. If a lawyer makes that kind of mistake, how is a regular person supposed to know when they are getting false advice?

Second, outside of very basic tasks, they can never really replace professionals. A lawyer knows, for example, the relative resources of the parties to a suit and can tailor their strategy accordingly. A medical professional may have years of experience with a specific patient and be able to ferret out things the patient may not realize are important or may be reluctant to discuss, leading to a fuller understanding of the problem. Human professionals have context and history, in other words, that will be impossible to provide to a generative AI system. Without that context, any advice, even non-hallucinatory advice, will be at best substandard and at worst actively harmful.

Well, I can hear proponents of these systems ask, what about the simple cases? What about the routine, the repetitive, the easy? Surely tasks exist that can be automated? Of course there are. Simple wills. Simple rental contracts. Simple intake forms. Simple health questionnaires as the first step in triage. The issue for generative AI systems is that we already know how to automate these cases. In programming, we call the means to this automation templates, but the idea is the same — a repeatable set of rules applied to a known set of data which produces a known, acceptable set of results. But generative AI systems cannot produce a repeatable set of rules.
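
Here is a minimal sketch of what templated automation looks like, with hypothetical field names and wording: a fixed rule applied to known data, producing the same reviewable output every single time.

```python
from string import Template

# A fixed, lawyer-reviewed clause with blanks to fill in.
rental_clause = Template(
    "This agreement is between $landlord (landlord) and $tenant (tenant) "
    "for the property at $address, at a monthly rent of $rent."
)

# Known data collected from an intake form.
data = {
    "landlord": "A. Smith",
    "tenant": "B. Jones",
    "address": "123 Main St.",
    "rent": "$1,200",
}

# Identical input always yields identical output; there are no odds involved.
print(rental_clause.substitute(data))
```

The key property is determinism: a professional can vet the template once and trust every document it produces, which is exactly the guarantee a probabilistic system cannot give.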

Remember, generative AI systems calculate the odds that a given word should appear after the previous one. Again, I realize this is an oversimplification, but the concept remains: there is a non-zero chance that the same input will produce a different outcome. And as long as that is the case, these systems are not suitable for templated interactions. They are, in fact, taking something we already know how to do well and making that process worse.
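
A toy illustration of that non-repeatability, with invented odds: because each next word is drawn from a probability distribution rather than fixed by rule, the same prompt run twice can come back different.

```python
import random

# Hypothetical next-word odds following the prompt "the tenant shall".
odds = {"pay": 0.6, "vacate": 0.3, "sublet": 0.1}

def generate(prompt):
    # Draw one word at random, weighted by the odds above.
    choice = random.choices(list(odds), weights=list(odds.values()), k=1)[0]
    return f"{prompt} {choice}"

print(generate("the tenant shall"))  # e.g. "the tenant shall pay"
print(generate("the tenant shall"))  # may differ on this run
```

Contrast this with the template above: same input, possibly different output, which is precisely what you do not want in a will or a lease.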

There are things generative AI systems can help with. They are decent at starting simple to moderately complex programs (though you need to be very cognizant of the security implications of the code they produce), likely because of the structured nature of programming and the sheer amount of open-source code available in training sets. Some people have had success breaking writer's block by messing around with them. They appear to be decent at producing alt text for images in controlled environments.

But they are not and likely never will be reliable ways to provide scarce services to those who need them the most. And if we pretend that they are, then we are likely to damage the people who most need our help.
