LLMs and Consciousness
Uninformed discussions revolving around LLMs and consciousness are mostly overreactions.
The fact that human beings are conscious seems self-evident: we take ourselves to be obviously conscious. However, providing a definition of consciousness is much trickier. What exactly is consciousness? Is it the combination of conscious thoughts and perceptions? What about sensations and emotions? What about the mental states and processes that work “in the background”? These are hard questions for neuroscientists and cognitive scientists to answer. Nevertheless, this much is certain: we clearly have consciousness.
ChatGPT and other large language models have become popular. Most people are amazed by how closely these LLMs can mimic human linguistic behavior. Because of this strong performance, some people have come to suspect that LLMs already have consciousness, while others believe that LLMs are inevitably evolving toward becoming conscious systems.
How plausible is the claim that LLMs already have, or eventually will have, consciousness? I think that LLMs are unlikely to ever have consciousness.
One reason for doubting LLMs’ capacity for consciousness is the absence of multimodal integration. Our brain is a multimodal system, which means that our conscious experience consists of information gathered through multiple channels of our nervous system. This includes visual information (visible light is captured by the retina, stimulates the rods and cones, and causes electrical signals to be transmitted all the way up to the visual cortex, and so on), auditory information, tactile information, and more.
LLMs, on the other hand, are essentially predictive models. They take in huge quantities of language data and learn the statistical patterns of how words come together. When they learn the word “red” or “apple”, they do not form a mental image of redness or of an apple, nor do they ever have a sensation of red or the experience of seeing, smelling, and chewing an apple.
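To make the contrast concrete, here is a minimal sketch of what an LLM does at each step (assuming the Hugging Face transformers library, with GPT-2 as a small stand-in for larger models; both are my own illustrative choices). Given a string of text, the model produces nothing but a probability distribution over possible next tokens.

```python
# A sketch of next-token prediction, assuming the Hugging Face `transformers`
# library and GPT-2 as a small stand-in for larger LLMs.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The model only ever sees token IDs, not colors, tastes, or smells.
inputs = tokenizer("The apple is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the next token after "The apple is".
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.3f}")
```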
Some might argue that LLMs are conscious on the grounds that they have knowledge, and that the extent of their knowledge surpasses that of the most intelligent non-human animals. It makes sense to say that LLMs don’t feel. But does it make sense to say that LLMs don’t know? When I ask ChatGPT who the current president of the USA is, it correctly answers that it is Joe Biden. Does that mean ChatGPT has knowledge?
In a narrow, more technical sense of the word “knowledge”, maybe we have to concede that ChatGPT does have knowledge. But this does not justify attributing consciousness to ChatGPT, or to any LLM for that matter.
A typical technical definition of knowledge is “justified true belief”. Does ChatGPT have a justified true belief that Joe Biden is the current president of the USA? To answer this question, we have to clarify further what “justified”, “true”, and “belief” mean. The first two terms are unproblematic, but “belief” is controversial.
Does ChatGPT have beliefs? Well, if we simply define beliefs in a behavioristic way, such that X has the relevant belief if X produces appropriate behavioral responses that support X’s having that belief, then ChatGPT does have beliefs, and in turn, ChatGPT has knowledge. This is a highly controversial point, and many people would argue that ChatGPT does not in fact have any knowledge, but for the sake of argument, I will allow that it does.
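To see how thin this behavioristic criterion is, here is a minimal sketch of the “test” it amounts to (assuming the OpenAI Python SDK and a chat model name such as gpt-4o-mini, both of which are my own illustrative choices). The belief is attributed purely on the basis of whether the verbal output matches the expected answer.

```python
# A sketch of a purely behavioristic "belief test", assuming the OpenAI
# Python SDK (openai >= 1.0) and an illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model would do
    messages=[{"role": "user", "content": "Who is the current president of the USA?"}],
)
answer = response.choices[0].message.content
print(answer)

# The behavioristic criterion: attribute the belief if and only if the verbal
# behavior matches the expected answer. Nothing about inner states is examined
# here, which is exactly the point.
has_belief = "Biden" in answer
print("Attribute the belief?", has_belief)
```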
Going back to the knowledge argument above: even if we grant that ChatGPT has knowledge, can we claim that it has consciousness? Well, no. One reason is that the way ChatGPT comes to know something is fundamentally different from how we come to know something. And if that is true, then we do not have strong evidence that ChatGPT, or any LLM, is conscious.
What do I mean when I say that ChatGPT comes to know something in a fundamentally different way from us? Needless to say, LLMs do not have the same kind of knowledge as we do when it comes to things like visual art and music. So, to be fair, let’s focus on propositional knowledge that can be captured in purely descriptive terms. My point is that even when it comes to learning propositional knowledge, such as the English language, physics, or mathematics, LLMs learn in fundamentally different ways from us. This is because our conscious experience is unified: our knowledge of mathematics, for example, is connected with our spatiotemporal perceptions. Additionally, I would contend that much of our propositional knowledge has phenomenal content: there is something that it is like for us to know that 2+2=4. It is highly contentious to claim that LLMs have phenomenality.
All of what I’ve said about LLMs may not apply to future multimodal models, such as vision-language models and language-action models.
For reference, this talk by David Chalmers is a good overview.

