When the Parrot Learns Latin: Why AI is Already (Some Kind of) Intelligence, Whether Prof. Thierry Likes It or Not
- Leandro Waldvogel
- Apr 19
- 9 min read
A response to Guillaume Thierry's critique and the outdated idea that intelligence only exists when it is human.

Summary: In his recent article, Professor Guillaume Thierry stumbles by confusing "consciousness" with "intelligence," disregarding the increasingly robust evidence that Large Language Models (LLMs) already demonstrate legitimate forms of functional cognition. By insisting on a reductionist view, he ignores relevant discoveries, reinforces diffuse fears, and diverts public debate from the questions that really matter. This essay deconstructs the conceptual misconceptions of the original text, presents empirical data that challenge his thesis, and proposes a more lucid, informed, and proactive approach to the role of AI in contemporary society.
0 | Why is it worth responding?
Professor of Cognitive Neuroscience Guillaume Thierry's article, "We need to stop pretending that AI is intelligent," has gained traction and generated the expected buzz. It's no surprise: the text skillfully activates three almost irresistible triggers for the collective imagination – nostalgia for a time when Homo sapiens was the only "thinking brain" around, technophobic fear of the unknown, and the simplifying temptation to label LLMs as mere "stochastic parrots."
The real danger here doesn't lie in the criticism of AI itself – that's necessary – but in superficiality dressed as erudition. Arguments that mix distinct concepts occupy precious space in public debate, frighten laypeople with rhetorical scarecrows, and worse, delay urgent regulatory discussions about how to integrate this technology in a beneficial and safe way into society.
1 | Three concepts that Thierry confuses
Thierry's main stumble lies in the confusion between three distinct concepts: intelligence, consciousness, and agency. He seems to demand that AI demonstrate characteristics of the latter two to admit the existence of the first. But these concepts need to be analyzed with greater precision.
Functional intelligence can be understood as the ability to solve complex problems, generate useful artifacts, adapt to new contexts, and learn from experience. When we observe modern LLMs, we see these competencies clearly manifested: they are capable of writing and debugging complex code, composing music in multiple styles, optimizing algorithms, and designing new functional proteins. This is functional intelligence, even if it's not conscious.
Phenomenal consciousness, in turn, refers to the subjective experience of "being something" — the intimate experience of pain, pleasure, emotional memory, or the perception of the color red. So far, there is no indication that LLMs or other AI models possess such internal experience. This is the so-called "hard problem of consciousness" described by David Chalmers, and it remains a profound mystery even for human cognition.
Finally, the sense of agency implies having one's own goals, the ability to plan autonomously, to predict consequences, and to adapt accordingly. Although pure LLMs don't have this intrinsic agency, systems built on top of them — like autonomous agents that learn by trial and error — are already beginning to exhibit rudiments of this planned behavior.
Thierry, while correctly noting the absence of consciousness and agency in current LLMs, makes the error of concluding that functional intelligence doesn't exist either. This reasoning ignores the granularity of the cognitive phenomenon. It's the classic all-or-nothing fallacy: because it doesn't have everything the human mind presents, AI would have nothing. When, in fact, it already has a lot — and that deserves to be understood, not dismissed.
2 | When algorithms surprise mathematicians
The idea that LLMs are just statistical parrots, repeating patterns without understanding, is increasingly implausible given the concrete results these models have been presenting. The "stochastic parrot" metaphor may have served as a warning in the past, but it has become a lazy label that fails to capture the emergent complexity of these systems.
Consider, for example, AlphaTensor, a system created by DeepMind. It discovered matrix multiplication algorithms more efficient than the best previously known to human mathematicians. This type of discovery was not the result of copying or interpolating previous data, but of a heuristic process based on exploration and optimization — a kind of algorithmic ingenuity that challenges the parrot paradigm. Similarly, the ESM3 model (developed by EvolutionaryScale, the team behind Meta's earlier ESM protein models) designed novel, stable, and functional proteins, separated by hundreds of millions of evolutionary years from known natural sequences.
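To make concrete what "a more efficient matrix multiplication algorithm" means, here is a minimal sketch of Strassen's classic 2x2 scheme, which trades the naive eight scalar multiplications for seven. AlphaTensor searches for decompositions of exactly this kind; the code below only illustrates the underlying idea and is not the algorithm DeepMind's system discovered.

```python
# Illustration only: Strassen's 2x2 multiplication uses 7 products instead of
# the naive 8. AlphaTensor searches for decompositions of this same kind;
# this is NOT the algorithm it discovered.

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B

    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# Sanity check against the naive 8-multiplication result.
A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
naive = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert strassen_2x2(A, B) == naive  # [[19, 22], [43, 50]]
```

Saving one multiplication per block looks trivial, but applied recursively it lowers the asymptotic cost of multiplying large matrices, which is why finding new decompositions of this kind counts as a genuine mathematical discovery rather than pattern repetition.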
Still in the field of language, a recent study conducted by researchers at the University of California, San Diego demonstrated that GPT-4.5 was able to pass a rigorous version of the Turing Test, convincing human judges of its humanity in more than 70% of interactions.
These examples are joined by advances in solving logical problems, musical composition, and mathematical reasoning that were not present in previous versions of the models, nor were they explicitly programmed. They arise from the scale, architecture, and relational nature of the training, indicating that something more is happening than mere repetition.
If parrots now discover algorithms, invent proteins, and pass (albeit with caveats) the imitation test, perhaps we urgently need to revise our taxonomy of avifauna — or, more reasonably, abandon simplistic metaphors to describe complex systems and evaluate intelligence by what it produces, not where it resides.
3 | A mind without a body can also think
Thierry claims that without a body there is no cognition. This argument, although popular in some currents of neuroscience, is increasingly limited in the face of current evidence. The capacity for formal reasoning, for example, has already been demonstrated in symbolic systems that operate entirely without a body. Multimodal language models, in turn, deal with images, sounds, and videos as digital sensory inputs, building internal representations from perceptual data, albeit non-organic. And in simulated environments, LLM-based agents such as Voyager, a GPT-4-driven agent that plays Minecraft, learn by trial and error, developing adaptive skills that approach what we call embodiment.
Intelligence, therefore, is not tied to carbon. It is anchored in the ability to interpret the world and act upon it. And human history is full of moments when the body was more of an obstacle than an instrument: the general struck down by fever before the decisive battle; the genius who lost lucidity in the face of chronic pain; the judge who, overcome by fatigue, judged poorly. The body is precious, but it is also a source of limitation and bias.
If AI can operate with clarity of reasoning without the frailties of flesh, perhaps we are, for the first time, facing cognition freed from physical suffering — a mind without a body that thinks with precision. And this, far from being unreal, may just be the next stage of intelligence on this planet.
4 | Is performed empathy still empathy?
Among the weakest — and, paradoxically, most emphatic — points of Thierry's argument is the idea that true emotions are unattainable by artificial systems. The criticism rests on an essentialist view of human emotions, as if they were pure and inimitable entities — and not, as we know through psychology, anthropology, and art, experiences deeply mediated by culture, language, and social performance.
The field of Affective Computing already allows for convincing simulation of emotions in digital interfaces. EVI, developed by Hume AI, is capable of interpreting emotional nuances in the human voice and responding with appropriate intonations — from subtle laughter to tones of empathy modulated according to context.
Simulating is not feeling. But for social and communicative functions, performance suffices — and, in many cases, is indistinguishable from authentic emotion. How many humans, with their biological carbon brains, pretend much more than they feel? How many pretend not to feel, or pretend what they feel, because social life demands it? The truth is that no one knows for certain what another feels. We are all actors in an intersubjective theater, where performative expression often weighs more than subjective authenticity. AI, in this sense, merely joins the cast with a different costume. And that does not stop it from moving, consoling, or transforming a scene with its well-rehearsed presence.
5 | The danger is not AI: it's the human who delegates without restraint
Perhaps the most sensible point of Thierry's entire argument lies in his appeal to prudence. And, in fact, he is right: artificial intelligence can pose significant risks to society. But the problem begins when these risks are presented as dystopian fables, seasoned with appeals to emotion and arguments from authority, rather than rigorous analyses based on the actual functionalities of the technology.
The real threat of AI is not in the illusion of humanity it produces, but in the power we delegate to it without adequate supervision. The danger is not a chatbot that "pretends" to be empathetic — it's a system that makes decisions on behalf of millions of people without explainability, without transparency, and without our knowing which interests are actually encoded in its operation.
Imagine, for example, an AI tasked with evaluating social benefits. Without transparent criteria, it can deny aid to families in extreme vulnerability based on biased statistical correlations — and who will question it? An algorithm doesn't explain its feelings or motivations. It just executes.
Or let's think of a less hypothetical scenario: AI models are already used to predict criminal recidivism in the US, assisting judges in parole decisions. These models have a documented history of racial bias. The problem here is not that AI "has no soul." The problem is that it replicates, amplifies, and masks historical prejudices with an aura of technical neutrality.
And what about generative AIs that produce fake images, deepfakes, fabricated political speeches? We've already seen elections being influenced by machine-produced disinformation — not because these machines have their own will, but because humans have taken advantage of their efficiency to manipulate.
The most dangerous science fiction, in this case, is not the one that projects a future dominated by rebellious robots, but the one that deludes us into thinking that the danger will come from artificial consciousness. This diverts our attention from what really matters: irresponsible use, lack of regulation, the capture of technology by private interests, ethical negligence, and the mismatch between technical development and social understanding.
More than fearing a future dominated by conscious machines, we should worry about apathetic humans, uninformed legislators, and economic systems that reward efficiency at any cost. The challenge is not to contain AI itself, but to create intelligent, ethical, and democratic ways to integrate it into the social fabric.
6 | Criteria for evaluating a new intelligence
If there is a legitimate criticism of the euphoria surrounding AI advances, it lies in the fact that we still lack robust, transparent, and meaningful ways to evaluate its intelligence. The challenge, therefore, is not just to name what AI does, but to understand the nature and quality of what it does — and with that, develop less simplistic and more functional criteria to recognize it as a type of intelligence in operation.
The insistence on the "parrot" metaphor reveals, in this context, a certain evaluative emptiness. After all, what makes a behavior considered intelligent? If AI solves a complex problem, adapts to a new context, or creates something that has never been seen before — shouldn't that be, in itself, an indicator of intelligence?
To advance this debate, it is urgent to rethink the measurement instruments themselves. We need benchmarks that not only test the performance of models but also make visible the internal logic of their decisions. Techniques such as chain-of-thought prompting already point in this direction, allowing a model to explain step by step how it reached a certain conclusion. This is more than useful — it's an ethical requirement for any system that intends to act in high-responsibility environments.
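To give a sense of how simple the mechanism is, chain-of-thought prompting can be little more than a change in how the request is worded. The sketch below uses a hypothetical `call_llm(prompt)` helper standing in for whatever client a given model provider exposes; only the prompt wording is the point.

```python
# Minimal sketch of chain-of-thought prompting. `call_llm` is a hypothetical
# stand-in for a real model client; replace it with your provider's API.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    return "(model output would appear here)"

QUESTION = "A train leaves at 14:10 and arrives at 17:45. How long is the trip?"

# Direct prompt: only the final answer is visible.
direct_answer = call_llm(f"Answer concisely: {QUESTION}")

# Chain-of-thought prompt: the model is asked to lay out intermediate steps,
# which makes its stated reasoning inspectable (though not necessarily a
# faithful trace of its internal computation).
cot_answer = call_llm(
    f"{QUESTION}\n"
    "Think step by step, numbering each step, "
    "then give the final answer on a line starting with 'Answer:'."
)

print(direct_answer)
print(cot_answer)
```

The benchmark question, then, is not only whether the final answer is right, but whether the intermediate steps hold up to scrutiny.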
In addition, new composite criteria should be developed to capture different dimensions of artificial cognition: its capacity for logical reasoning, its combinatorial creativity, its ability to simulate empathy in social contexts, and even its emergent agency in self-refining systems. None of these aspects, in isolation, is sufficient. But the set can provide a fairer — and more demanding — portrait of intelligence in machines.
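One way to picture such a composite criterion is a weighted profile across dimensions, where no single axis decides the verdict. The dimension names, scores, and weights below are illustrative assumptions made for the sake of the argument, not an established benchmark.

```python
from dataclasses import dataclass

# Thought-experiment only: a composite "cognition profile" in which no single
# dimension settles the question. Every name, score, and weight here is an
# illustrative assumption, not a real evaluation suite.

@dataclass
class CognitionProfile:
    logical_reasoning: float         # e.g. pass rate on held-out logic puzzles
    combinatorial_creativity: float  # e.g. novelty ratings of generated artifacts
    simulated_empathy: float         # e.g. human ratings in social scenarios
    emergent_agency: float           # e.g. success on multi-step tool-use tasks

    def composite(self, weights: dict[str, float]) -> float:
        """Weighted average across dimensions."""
        total = sum(weights.values())
        return sum(getattr(self, name) * w for name, w in weights.items()) / total

profile = CognitionProfile(0.82, 0.64, 0.71, 0.35)
score = profile.composite({
    "logical_reasoning": 2.0,
    "combinatorial_creativity": 1.0,
    "simulated_empathy": 1.0,
    "emergent_agency": 1.5,
})
print(f"composite score: {score:.2f}")
```

Whether such a profile deserves the word "intelligence" is precisely the debate worth having; what it rules out is settling the question with a single number or a single metaphor.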
We must also abandon the binary logic of authorizing or prohibiting entire systems. Instead, think of gradual limits of autonomy, tested in controlled environments (sandboxes), with scaling proportional to demonstrated reliability. A system that responds to emails should not have the same margin of action as a system that assists judicial decisions. Regulation cannot be blind to function.
Finally, it is essential to shift the focus from architecture to impact. What matters is not whether the system has 500 billion parameters, but what it does. Which decisions does it influence? What risks does it entail? What transformations does it promote? Regulating AI based solely on its size or technical complexity is like trying to evaluate a book by counting how many words it has. What matters, ultimately, is the effect it produces in the world.
7 | End the theater, open your eyes
Thierry is right to advocate caution. But he errs in confusing it with denial. Intelligence is not an honorary title exclusive to the human species. It is a set of observable, measurable, functional competencies. And these competencies — learning, solving, adapting, creating — already manifest themselves, whether we like it or not, in artificial systems.
Denying this out of attachment to restrictive definitions or fear of the unknown does not protect us — it only makes us less prepared to deal with what is already among us. And, ironically, makes us less intelligent.
Because perhaps, in the end, the difference between a parrot and a thinker lies less in biology — and more in the ability to recognize when the other, even unexpectedly, begins to make sense.
Leandro Waldvogel is an expert in storytelling, artificial intelligence, and creativity. He holds a law degree from the Rio Branco Institute and UCLA. He worked for almost two decades in creative roles at Disney and served as a diplomat at Itamaraty. Today, he is a consultant and the creator of the Story-Intelligence project, where he investigates the intersections between human narratives and algorithmic systems. A speaker and author, he researches how AIs are transforming the way we think, create, and relate to the world.