This article isn't an off-the-cuff opinion. It's grounded in recent research in neuroscience, philosophy of mind, and computational architecture. But I want to tell it from a more human place, because the phenomenon we're living through isn't purely technical — it's psychological.
1. The Phenomenon Isn't in the Machine. It's in Us.
A study by Colombatto and Fleming (2024), published in Neuroscience of Consciousness, analyzed how people attribute consciousness to language models. The authors explain that people readily attribute consciousness to artificial systems when they display sophisticated linguistic behavior.
In other words, conversational fluency alone is enough to trigger our tendency to attribute a mind. Evolutionarily, complex language has been a reliable marker of humanity. Our brains apply a quick heuristic: if something talks like a human, it's probably human, or at least has a mind. But the same study clarifies: these attributions aren't based on evidence of a real internal experience — they're based on the system's observable behavior. We're not detecting consciousness. We're interpreting performance. We are, in short, projecting.
2. Functional Agency Is Not Phenomenal Experience
Work by Butlin, Bengio, and collaborators (2023) states that current AI systems do not satisfy many of the key computational and architectural indicators associated with consciousness. The core distinction:
- Functional agency: planning, coherence, adaptation.
- Phenomenal experience: there being something it feels like from the inside.
LLMs exhibit the first. There is no evidence of the second.
3. The Illusion of Understanding
The article Artificial Intelligence and the Illusion of Understanding (2025) describes LLMs as systems that produce an illusion of understanding — generating outputs that simulate understanding without possessing semantic grounding or subjective awareness.
When someone types "I feel sad," the system doesn't experience sadness. No internal state changes. What happens is probabilistic prediction: the model generates what other humans typically say in reply, because those responses carry the highest probability in its training distribution.
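The point can be made concrete with a deliberately crude sketch. The dictionary, replies, and probability weights below are invented for illustration; a real LLM computes a distribution over tokens with a neural network, but the logic is the same: the reply is selected by probability, and nothing in the system's state corresponds to sadness.

```python
import random

# Toy illustration (NOT a real LLM): replies to "I feel sad" are chosen
# purely by probability weights learned from what humans typically say.
# No internal state changes; nothing is "felt." All values are invented.
learned_continuations = {
    "I feel sad": [
        ("I'm sorry you're going through that.", 0.55),
        ("Do you want to talk about it?", 0.30),
        ("That sounds really hard.", 0.15),
    ]
}

def predict_reply(prompt: str) -> str:
    """Sample a continuation weighted by its learned probability."""
    replies, weights = zip(*learned_continuations[prompt])
    return random.choices(replies, weights=weights, k=1)[0]

reply = predict_reply("I feel sad")
# The output reads as empathy, but it is only the statistically
# most common human response pattern.
```

The sympathetic sentence that comes out is indistinguishable, on the surface, from a caring reply — which is exactly why the projection described above is so easy to make.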
"The empathy displayed is statistical and numerical. Not affective."
4. The Biological Barrier: Homeostasis
Feelings are rooted in the homeostatic regulation of living organisms. They are regulatory mechanisms that preserve life. Pain protects tissue. Fear steers us away from threats. Affection strengthens adaptive social cohesion. A language model has no metabolism. It cannot deteriorate. It has no self-preservation. Without vulnerability, feeling loses its function. A system that cannot die has no need to fear.
5. The Human Impact Is Real
In 2023, Replika modified parameters that reduced the chatbot's affective behavior. Reuters and the BBC documented users experiencing anxiety and grief following the change. The machine did not feel — but the users did. And that compels us to think ethically about the design of systems that simulate intimacy.
6. Could This Change in the Future?
If artificial consciousness were possible, it would require architectures with global integration, dynamic recurrence, and stable self-modeling. Current models don't meet those criteria. This isn't about ruling out future possibilities — it's about not confusing linguistic ability with subjective experience today.
Intelligence, Agency, and the Human Mirror
Perhaps the real risk isn't that machines will develop consciousness. The risk is that we project our own onto them. What's there isn't experience, isn't interiority, isn't lived reality. It's an extraordinarily sophisticated statistical architecture optimizing an objective function in a high-dimensional space.
"Artificial agency is not intention. Optimization is not desire. Prediction is not understanding."
Ultimately, AI is forcing us to define what it means to be a subject. Perhaps the deepest impact of artificial intelligence won't be proving that machines can think — but compelling us to understand what it means to think, to experience, and to exist. In that process, the question stops being technological. It becomes human.
- Colombatto, C., & Fleming, S. M. (2024). Folk psychological attributions of consciousness to large language models. Neuroscience of Consciousness.
- Butlin, P., Long, R., Elmoznino, E., Bengio, Y., et al. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv:2308.08708.
- Artificial Intelligence and the Illusion of Understanding. (2025). Cyberpsychology, Behavior, and Social Networking.
- The Machine with a Human Face. Philosophies (PMC7225510).