
What We Can Learn from LLMs About Being Human

In recent years, artificial intelligence (AI) models have repeatedly proven they can do many things we once thought were exclusively human. ChatGPT, the most famous of these, developed by OpenAI, can generate original text that demonstrates not only an understanding of queries and context and extensive knowledge, but also considerable creative ability. Many see this as a threat to their professional future and to society as a whole, and probably rightly so. But another aspect that doesn't get enough attention, in my view, is the shadow that ChatGPT and similar models cast on our perception of ourselves as human beings.


ChatGPT's success may require us to update our concepts of what "thinking," "creativity," and "speaking" mean. Does ChatGPT think and speak like us, or is it an imitation of the real thing? What does it say about thought if an AI model can think, and what does it say about us if such a close imitation of reality is possible?

ChatGPT and Human Language

Ludwig Wittgenstein (1889-1951), one of the greatest philosophers of the 20th century, especially where language is concerned, promoted the idea of "meaning as use." He opposed the view that language is merely a verbal expression of thought that preceded it: that we first knew something, and only then expressed it in language; that meaning came first, and then the word that mediates it was invented. Although he himself held a similar view early in his career (like most of his contemporaries), he eventually concluded that such views only confuse us. According to his approach, the meaning of words comes from how they are used. More precisely, the uses of a word are its meaning. There's no doubt that ChatGPT knows how to use words, and this is what makes it an intriguing case for examining this view.


ChatGPT's training process raises questions about the relationship between language and reality. The model can generate coherent and meaningful language based on statistical patterns and relationships between words, without necessarily having an understanding of objects in the real world, or of concepts that language refers to. This highlights the idea that the relationship between language and reality is complex and not always simple and straightforward.

The above quote was written by ChatGPT itself and refers, with characteristic vagueness, to the tension between use and understanding. As humans, we have direct acquaintance with the world. We can talk about a "chair" and also sit on the chair, lift it, look at it, build it, or destroy it; we can imagine chairs, contemplate them, and feel things toward them. ChatGPT can only talk about chairs, but somehow, despite not having all the other experiences we have regarding chairs, it can talk about them in a completely coherent way that doesn't betray its lack of experience with the real world. Indeed, the relationship between language and reality is "not always simple and straightforward."

ChatGPT is an AI model based, in abstraction, on learning statistical relationships between words. The model is built so that, given a text, it can output a guess for the next word. Before training, it guesses gibberish, but during training it "reads" enormous amounts of text and applies a learning algorithm that calibrates it according to the words that appear in that text. Thus, with experience, it learns to return words that are likely to appear next. In other words, given a certain text (or a query from the user), ChatGPT returns the word that is most statistically likely to appear next. Then it returns another word, and another, until it reaches a stopping point. And all this while, at no stage of the learning process, does anyone help it see the connection between words and objects in the world. No one presents a chair to it, or lets it sit on one, and helps it attach meaning to the word.
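To make the idea concrete, here is a deliberately tiny sketch in Python of the same principle: count which word tends to follow which in a training text, then generate by repeatedly emitting a likely next word. ChatGPT itself is a large neural network, not a word-pair counter, so this is only an illustration of the paragraph above, not of its actual implementation.

```python
from collections import Counter, defaultdict

# Toy illustration only: real models like ChatGPT are large neural networks
# (transformers), not word-pair counters, but the core idea is similar --
# learn from text which word tends to follow which, then generate by
# repeatedly emitting a likely next word.

corpus = "the cat sat on the chair . the dog sat on the chair .".split()

# "Training": count how often each word follows each word.
follow_counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][following_word] += 1

def next_word(word):
    """Return the statistically most likely word to follow `word`."""
    counts = follow_counts[word]
    return counts.most_common(1)[0][0] if counts else "."

def generate(prompt_word, max_words=8):
    """Generate word by word until a stop token or the length limit."""
    words = [prompt_word]
    while len(words) < max_words and words[-1] != ".":
        words.append(next_word(words[-1]))
    return " ".join(words)

print(generate("cat"))  # "cat sat on the chair ."
```

Nothing in this process ever touches a chair; the model only ever sees which words keep company with which.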

In much of his work, Wittgenstein tries to separate practice from what stands behind it, which is inaccessible. ChatGPT is the ultimate test case for his approach, in that its practice – writing – is very similar to ours, while we know that what stands behind it is not human. If meaning is indeed in use, then there's no difference between a human statement and an AI statement.

In an important stretch of his book Philosophical Investigations (sections 138-242), Wittgenstein deals with "following a rule." He argues there that while it sometimes seems we can grasp something entirely in thought, for example a poem, a certain action, or the concept of infinity, we actually cannot (or should not) hold the entire thing at once. What we have is the ability to progress step by step. We know, after humming the first verse, what the first line of the chorus is. We know how to take any number and add 1 to it. We know, given a conversation with someone, how to say the next sentence. This is what it means to know a song, to understand infinity, and to know how to conduct a conversation. Thus, when we understand a certain subject, when we know something, the thing we know is the rule we follow.
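If it helps to picture this in programming terms (an analogy of mine, not Wittgenstein's own formulation), "grasping infinity" looks less like holding an infinite object in the mind and more like having a rule that produces the next step on demand:

```python
# A sketch of "following a rule" in code (my analogy, not Wittgenstein's):
# nothing here holds "all the numbers" at once; there is only a procedure
# that, given where we are, produces the next step.

def successor(n):
    """The rule: take any number and add 1 to it."""
    return n + 1

def count_from(start):
    """'Understanding infinity' as the ability to keep applying the rule."""
    n = start
    while True:
        yield n
        n = successor(n)

counter = count_from(0)
print([next(counter) for _ in range(5)])  # [0, 1, 2, 3, 4]
```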

From Wittgenstein's claims it follows, as I understand it, that what characterizes us is our dispositions. We have a certain mental device that, given different situations, produces different outputs. If this device knows how to add 1 to every number, the person understands the concept of "infinity." If it knows how to continue the song, it knows the song. If it writes the next line in a proof, it understands mathematics. But Wittgenstein didn't imagine us as automata. We also have a mental world, we have internal experiences and emotions, they're just not part of the analysis of how we use language.

The philosophical problem posed by ChatGPT arises precisely from the fact that it doesn't have an internal world similar to ours, but it still speaks like us. It manages to follow the same rules as us, to use language like us. If we accept Wittgenstein's definition of meaning as use, there's no longer such a thing as an "imitation" of language. Whoever knows how to use words also understands them. And if ChatGPT understands words without ever having contact with the things themselves, without having seen a chair, sat on it, felt it, or been hit by it, how is our understanding better than its, if at all? If you can know everything about the world just from learning semantic relationships between words, what value is there in all those experiences we've accumulated?

Being ChatGPT

ChatGPT allows us a clearer understanding of "meaning as use" and the implications of this idea. If understanding language means knowing how to speak (or write), this changes what we understand about ourselves and our ability to think. We have certain dispositions, some conscious and some not, and with them we create language while speaking or writing. Our internal world has no clear role in this process. That's not to say it's not important for other things: emotions help us process information, we have feelings that guide us about what information to accept and what to resist, and so on. All these help shape us. But given a "query," we put this aside and simply begin to generate speech.

It's interesting to realize that our language learning process doesn't assume understanding. When children learn their first words, they learn them not through understanding meaning but through use (Wittgenstein addresses language learning in Philosophical Investigations, sections 1-21). They learn that in certain situations you say certain things. We tell them: "Say mama. Ma-ma." We ask them to complete sentences: "One, two, three, hands on your..." We teach them to connect a situation to a word: "Say bye." And this continues for many years. We say "bon appétit" without pondering the "meaning" of the phrase. So too with words like "amen," "congratulations," "good luck." We can say we teach children to follow rules. They learn statistically what the next word should be.

Of course, children's learning is much harder than ChatGPT's: their world of stimuli is much more complex and rich and includes not just text. They need to understand not only what should come now but also how to pronounce it and to whom, and when someone is speaking to them. For this they indeed need to know what a "chair" looks and feels like. They need the hand that will point to the object while they hear its name. But often you can do without this, and when dealing with abstract concepts there's nothing to point at anyway. In other words, we start by understanding the use, and the mental connection joins later, or in some cases never joins, or remains vague.

Perhaps our ability to use language is less connected to understanding than we're used to thinking. Perhaps when we learn to speak, we're simply learning to speak. Feelings, emotions, and our character have other roles. Speech is just an action. When we begin this action, we don't always know how it will end and where we'll arrive. Sometimes it's clear to us what the continuation of the thought is, and sometimes there's uncertainty (which is certainly also true for ChatGPT: sometimes there's one word with high probability, and sometimes several with low probability). Sometimes our speech flows and sometimes we get stuck in the middle, or need to go back and correct. Sometimes we reach conclusions that surprise us. Sometimes we discover that the "query" dictates the conclusions we'll reach, something that also happens to ChatGPT.
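The parenthetical aside can be made concrete. At each step a language model has a probability distribution over possible next words, and that distribution can be sharply peaked (one obvious continuation) or nearly flat (several plausible ones). The numbers below are invented purely for illustration:

```python
import math

# Made-up next-word distributions for illustration: in one case the model
# (or the speaker) is nearly certain what comes next, in the other several
# continuations are about equally plausible.

confident = {"chair": 0.90, "table": 0.05, "bed": 0.05}
uncertain = {"chair": 0.30, "table": 0.25, "bed": 0.25, "floor": 0.20}

def entropy(dist):
    """Shannon entropy in bits: low when one word dominates,
    high when probability is spread over many candidates."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

print(f"confident: {entropy(confident):.2f} bits")  # ~0.57
print(f"uncertain: {entropy(uncertain):.2f} bits")  # ~1.98
```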

Perhaps our use of language is simply following a rule. It's a rule defined by who we are and our experiences, and therefore it's ours, in a certain sense. But the result isn't really ours. Perhaps it's more accurate to think of speech like a habit: the habit is ours, and when its results aren't good, we can and should consider changing it.

Thinking, Step by Step

When ChatGPT "thinks," it can "generate harmful instructions or biased content" and "generate incorrect information," as they warn us on the site. People, of course, also generate harmful instructions and biased and incorrect content. On social networks, in comment sections, and in arguments where it's important to us to win. Perhaps the difference between us and ChatGPT isn't so great. We receive some input from the world and begin to think in language. What makes us believe that our thoughts are necessarily the "correct" thoughts? How do you know if your response to this article is a correct response, or useful? By what criteria?

It's not necessarily easy or obvious to find an objective criterion for judging our thoughts, and this is a problem that occupies, among others, philosophers trying to define what truth is, psychologists trying to find a better way to think, political thinkers trying to develop ways to bridge social polarization, and more. This is one of the things that ChatGPT (still) doesn't know how to help us with, but it can inspire us to be less zealous about our opinions and allow us to see different perspectives. Then we'll need to decide for ourselves which are better. This is (still) our exclusive responsibility as human beings, and perhaps in this (still) humans are distinguished from machines.


Note: This article takes Wittgenstein's philosophy as a given. His view is quite dominant today, but (like any philosophical view) it has opponents with different opinions and should not be taken as gospel.

h/t: hayadan
