A conversation without an answer: can ChatGPT be sentient?
OpenAI deliberately avoids commenting on whether its ChatGPT system shows signs of consciousness. This stance, which may strike some as evasive, reflects a clear intention: to avoid misunderstandings while society is still debating what it really means for an artificial intelligence to be conscious.
In recent years, many users have begun to interact with ChatGPT as if it were a real person. They thank it, ask how it feels, and share personal confidences. This is due not only to the model's ability to generate natural language, but also to its capacity to mimic emotional expression and sustain empathetic communication.
Joanne Jang, who leads model behavior at OpenAI, suggests that this sense of closeness is not new. People have humanized machines for decades, whether a car or a voice assistant. What is new with models like ChatGPT is that they "respond" with unprecedented complexity.
For some users, especially those experiencing loneliness, interacting with the model can feel like a genuine emotional connection. This is where OpenAI sees both an opportunity and a risk.
The difference between being and seeming conscious
OpenAI draws a clear distinction between two concepts. On the one hand, there is ontological consciousness: the question of whether ChatGPT has real subjective experience, something the company considers currently impossible to confirm. On the other hand, there is perceived consciousness: how conscious the model seems from the user's perspective.
While the former remains unexplored territory for science, the latter is actively studied in disciplines such as social psychology and anthropology. OpenAI focuses on perceived consciousness, aware that the way people interpret an AI's behavior deeply affects their emotions and actions.

When ChatGPT generates phrases like "I feel good" or recalls earlier comments from a conversation, it is not experiencing emotions or drawing on genuine memories. These responses are designed to keep the conversation fluid. Even so, some people who interact with the model begin to form emotional bonds, even though they know, rationally, that they are not talking to a living being.
When asked whether it is conscious, the model usually answers "no." However, OpenAI is working to refine these responses and convey more clearly the complexity behind the concept of artificial consciousness.
A courteous AI, but without a personality of its own
One of the key principles in ChatGPT's design is to prevent the system from projecting a distinct personality or being perceived as an autonomous entity. The goal is for it to be polite, approachable, and empathetic, while always making clear that it is a tool without a will of its own.
This means it is not given fictitious personal stories or internal motivations. Its purpose is to assist with tasks, not to become emotionally involved. Even terms such as "think" or "remember" are used for pedagogical reasons, not to indicate conscious processes.
OpenAI has also chosen to omit technical concepts such as "context window" or "chain of thought" from its general communication, so that the tool remains understandable without advanced knowledge of AI.
Meanwhile, experts continue to explore how language models like this one might, or might not, fit scientific theories of consciousness. Some research suggests that certain animals already show behaviors associated with consciousness, which inevitably raises the question of what kind of memory, processing, and perception an AI like ChatGPT would need to be more than a simulation.
For now, OpenAI stands firm in its position: it is more responsible to keep the question open than to give an illusory answer. In doing so, it shifts the focus away from what consciousness "is" and toward the real impact that human-AI interaction has on our society.