Can ChatGPT Think?
In 1950, the computer scientist Alan Turing published a seminal paper that began with a profound question – ‘Can machines think?’
Turing didn’t answer this question directly, but proposed a thought experiment called the ‘Imitation Game’ – if a machine can convince an interrogator in a written conversation that it is human, it wins.
Today, apps like ChatGPT generate text with surprising aplomb and accuracy. They can summarise articles, write cute poems, deliver a clever insult, and elicit the occasional guffaw. Heck, sometimes a student will hand me an essay partly written by a chatbot, and it can be hard to spot the difference.
Put simply, ChatGPT would ace the Turing test. Does that mean it can think?
Can it think like a human?
Over the years, scientists, philosophers and artists have offered different conceptions of what it means to ‘think’. When we say we are thinking, we usually mean that we have ideas, opinions, and beliefs about things, though the philosopher Heidegger rejected this notion.
Turing himself was not interested in abstract definitions, so he didn’t define thinking. The Imitation Game was in fact a proxy for intelligence – a way to observe behaviours in a machine that are indistinguishable from those of a human.
The writer Maria Popova offers an artist’s perspective: "thinking is not mere computation—it is also cognition and contemplation, which inevitably lead to imagination".
Clearly, thinking is hard to define. In this essay, I explore a more nuanced question – can ChatGPT think like a human? First, I’ll examine certain elements of thinking that are unique to humans. I’ll argue that these traits are not found even in the most advanced AI today. I’ll concede that nothing precludes a future AI from exhibiting these traits, but we may never reach a conclusive answer to whether an AI can think like a human. Yet, I’ll explain why this question really matters.
The Limits of Language
When Turing picked language as a test for human intelligence, he was on to something. Even though plants and animals communicate, there is something incredible about human language. As the historian Yuval Noah Harari says, we use language to tell stories, build complex societies, and invent concepts like religion, laws and money. Some even say that language is the greatest human invention!
What’s remarkable is that we seem to have replicated this intelligent behaviour in AI. Since the 1950s, scientists have used ‘symbolic AI’ to represent words as symbols. More recently, using transformer models, large volumes of data, and reinforcement learning, we have created prediction machines that excel at skills like research and writing. Moreover, a chatbot doesn’t generate text through randomness, but by learning statistical patterns from data and sampling from a probability distribution over likely next words. Does that mean it can think?
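Before answering, it helps to see how bare that mechanism is. Below is a minimal sketch of next-token sampling; the prompt, the vocabulary and the probabilities are all invented for illustration, and a real model learns its distribution over tens of thousands of tokens rather than four words.

```python
import random

# Hypothetical probabilities a model might assign to the word that
# follows the prompt "The sky is". In a real model, these weights are
# learned from vast amounts of text, not written by hand.
next_word_probs = {
    "blue": 0.62,
    "clear": 0.18,
    "falling": 0.11,
    "grey": 0.09,
}

def sample_next_word(probs: dict[str, float]) -> str:
    """Pick one word at random, weighted by the model's probabilities."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print("The sky is", sample_next_word(next_word_probs))
```

Everything a chatbot writes is, at bottom, a long chain of such weighted guesses, each conditioned on the words that came before.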
The answer is no. As the philosopher John Searle points out, simulating a human conversation only shows that the chatbot can string together words in the right syntax. It is not thinking because it does not understand the semantic meaning of the words it produces – a key element of thinking.
Even if a chatbot understands the relationships between words, there is more to how humans think. For example, when ChatGPT uses the word ‘blue’ in a sentence, what is it thinking, if anything? For humans, blue can represent many things at different levels of abstraction. It could mean the blue sky, a memory of one’s favourite blue jacket, or the feeling of being depressed. Two people might even think about the same word differently (does a person born blind think about ‘blue’ the same way I do?).
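For a language model, by contrast, ‘blue’ is a point in a vector space, close to words that appear in similar contexts. Here is a toy illustration; the tiny hand-made vectors below stand in for the thousands of learned dimensions a real model uses.

```python
import math

# Toy word vectors, invented by hand for illustration. Real models
# learn these from data, with thousands of dimensions rather than three.
embeddings = {
    "blue": [0.9, 0.1, 0.3],
    "red":  [0.8, 0.2, 0.4],
    "sky":  [0.7, 0.3, 0.2],
    "sad":  [0.2, 0.9, 0.1],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score how closely two word vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

# To the model, comparisons like these exhaust the 'meaning' of blue.
for word in ("red", "sky", "sad"):
    print(f"blue vs {word}: {cosine_similarity(embeddings['blue'], embeddings[word]):.2f}")
```

There is no sky, no favourite jacket and no melancholy in those numbers; the word’s ‘meaning’ is reduced to geometry.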
A chatbot also lacks the capacity to think about concepts to which it has not been exposed, or about experiences it has never had. For example, as Yann LeCun explains, an AI trained on text alone cannot understand physical effects like rotation, velocity and friction, which are important aspects of how we understand our world at large. So how can an AI think in the same way that we do?
A World Model
We are entering a new era in which chatbots don’t just write, but also "listen, look and talk." These multi-modal systems, the likes of which were recently unveiled by Google and OpenAI, can simultaneously ingest audio, video and text, and generate outputs in all of these formats.
For the first time, AI models will have access to training data that directly relates to our physical world – at the scale of a billion users. Through our phones, VR headsets and Ray-Ban sunglasses, they will observe complex physical and social dynamics at play. Scientists call this building a 'world model' – a sophisticated digital representation of the world based on language, sensory input and spatial awareness.
Is this the path to building an AI that understands our world? I’m not sure. But as Rodney Brooks, the co-founder of iRobot, explains in his paper “Elephants Don’t Play Chess”, this kind of ‘physical grounding’ is essential for intelligence. That’s why scientists are putting headcams on everything from cars to babies.
The goal is to build an AI that not just thinks, but also acts like a human. Skip the tedious holiday planning, they say – just ask your AI agent. It will read the Lonely Planet guide for you, watch travel vlogs, and curate a thoughtful itinerary. With better world models, it will virtually explore the streets of your planned destination, book some local tours, and call the hotel in advance to make a special request for you. And when the trip ends, just sit back and relax, because it will create a holiday video that perfectly matches your vibe.
Human Creativity
One of the most wonderful expressions of thinking is creativity. Today, apps like Suno can generate original songs in a variety of styles by training on vast music libraries. While quite impressive, listening to an AI-generated song reminds me of the anguish expressed by Douglas Hofstadter when he says:
"Ever since I was a child, music has thrilled me and moved me to the very core. And every piece that I love feels like it’s a direct message from the emotional heart of the human being who composed it. It feels like it is giving me access to their innermost soul. And it feels like there is nothing more human in the world than that expression of music. Nothing. The idea that pattern manipulation of the most superficial sort can yield things that sound as if they are coming from a human being’s heart is very, very troubling." (emphasis mine)
Not everyone is alarmed. According to David Deutsch, AI-generated music lacks the novelty seen in human art because it is the product of pattern matching and data manipulation, not creative thinking.
One difference is intention. For example, a musician intends to arouse certain feelings in the listener by composing or performing the song in a certain way. While it’s not clear what level of thinking is involved in a ‘flow’ state, or how art emerges from actions that are “not deliberate, not random, some place in between”, it appears that some form of agency is required for creative thinking.
What also appears to be missing is emotion. Some of the world’s most respected musicians – The Beatles, B.B. King, Michael Jackson, Kendrick Lamar – are considered legends for channelling their emotions in ways that exemplify their respective genres to this day. Even if a song generated by AI can pull at your heartstrings in some way, would you say it can think like a human if it doesn’t have the emotional capacity to feel those emotions itself?
Emotional AI
In his groundbreaking book, Thinking, Fast and Slow, Nobel laureate Daniel Kahneman describes an intuitive, emotional and often subconscious mode of thinking that he calls ‘System 1 thinking’. This, he says, enables us to make swift decisions in complex environments. Is this evident in AI?
Over the years, I have come to prioritise what I’m feeling rather than what I’m thinking when making important decisions, as I explain in a short piece called "Gut Instinct". I don’t fully understand the physiology at play, but as the author Noga Arikha explains, "we are our bodies, we have emotions that are embodied, and that deeply inform our thinking processes". This is also the foundation of the thought experiment ‘Inside Mary’s Room’, which asks: “Does feeling an emotion teach us something that we cannot learn from a book or a movie?”
Despite the recent breakthroughs in creating ‘emotional AI’, the consensus is that ChatGPT is just ‘faking human emotion and behaviour’. I tend to agree with this view because, as I explain above, many aspects of my own thinking are driven by physical sensations rooted in biology, which are not yet replicable in AI.
Unless we create AI that is not just emotionally capable, but ‘embodied’ in a way that gives rise to human instincts, we are always going to be a few steps removed from an AI that can think like a human.
We may never know
When it comes to thinking, what really sets humans apart is our meta-cognitive abilities – we can think about thinking. Why? Because we are self-aware. Does that mean an AI has to be conscious to think?
In June, an engineer from the AI company Anthropic claimed to have observed "meta-awareness" in its Claude chatbot (when asked to perform a certain task, the chatbot ‘suspected’ it was being tested). Another paper from Anthropic shows that specific features of an AI system can be mapped to concepts such as bias, toxicity and manipulative behaviour, which suggests that some AI models may operate similarly to how our brains do. Is this how we will know that an AI is self-aware?
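To make that idea concrete, here is a greatly simplified sketch of mapping internal features to a concept. Anthropic’s actual research uses far more sophisticated methods (dictionary learning over millions of activations); this toy version merely projects an invented activation vector onto an invented ‘toxicity’ direction.

```python
# All vectors here are invented for illustration; a real model's
# activations have thousands of dimensions, and concept directions are
# learned rather than written by hand.
def concept_score(activation: list[float], concept_direction: list[float]) -> float:
    """Project a model's internal activation onto a concept direction."""
    return sum(a * c for a, c in zip(activation, concept_direction))

# Hypothetical internal states of a model in two conversations.
polite_state = [0.1, 0.8, 0.2, 0.0]
hostile_state = [0.9, 0.1, 0.7, 0.6]

# A hand-made direction standing in for the concept of 'toxicity'.
toxicity_direction = [1.0, -0.5, 0.8, 0.9]

for name, state in [("polite", polite_state), ("hostile", hostile_state)]:
    print(f"{name}: toxicity score = {concept_score(state, toxicity_direction):.2f}")
```

The higher the score, the more the model’s internal state aligns with the concept, which is how researchers begin to read meaning off a model’s inner workings.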
I remain sceptical. Despite many decades of research in AI, neuroscience, and the philosophy of mind, we have still not solved the "Hard Problem of Consciousness". We don’t know what consciousness is, how it emerges, or if subjective experiences called ‘qualia’ can emerge from non-biological systems made of silicon. Indeed, we don’t know what it’s like to be a bot. And I’m afraid, until we solve this hard problem, we may never actually know if ChatGPT can think like a human.
Does it matter?
To recap – thinking is notoriously hard to define. To answer the question ‘can ChatGPT think?’, I started with what it means to think like a human. Human thinking encompasses several aspects, such as understanding, intention, emotion, intuition and reflection. Even the most advanced AI systems today do not appear to possess these qualities and therefore, I conclude, do not think like a human.
But what if a future AI could think like a human? It is possible that with better sensors, reasoning and planning abilities, real-world data and more capable models, the AIs of the future will have a better understanding of our physical, intellectual and emotional states and perhaps, some day, exhibit such states themselves. We may never be able to prove that an AI can actually think like a human because of the enduring problem of consciousness, but it remains an important question for many reasons.
First, I think that if we realised an AI could think like a human, we would be deeply troubled as a society. There are some early signs of this. In the now-infamous ad, for which Apple has since apologised, a set of musical instruments is slowly crushed into pixie dust with crunching metallic sounds in the background… only to reveal a shiny new iPad. Beyond the dystopian notion of machines replacing musicians lies, I think, the concern that an iPad may eventually do the creative thinking for us, something we consider the preserve of human ingenuity.
There are also important ethical and legal questions to consider. If AIs could think like humans, we may be more inclined to grant them legal rights, as we have come to do with animals and rivers. Debates about whether an AI should be granted copyright, for example, may be settled by whether it did any original thinking.
Finally, on a hopeful note, I think that if we found out an AI can think like a human, it might make the future less scary. It might help us peer into the ‘black box’, so to speak, and make progress on the challenge of ‘explaining AI’ in a way that we can understand. Eventually, in a future where AIs and humans are expected to coexist, I think that may be what really matters.
*Special thanks to Shivani, Sundar, Badri and Sounak for their thoughtful comments on a draft of this essay.