
Key Highlights
- Microsoft AI CEO Mustafa Suleyman warns of a “psychosis risk” in which people form unhealthy emotional attachments to AI.
- He argues that the illusion of AI consciousness is not here yet, but is likely imminent.
- To prevent this, Suleyman proposes that clear guardrails be put in place.
On Wednesday, Microsoft AI CEO Mustafa Suleyman took to X (formerly Twitter) to express his concerns about what he calls “Seemingly Conscious AI” (SCAI). He raises an important question that lingers today: what happens when an AI becomes so good at mimicking consciousness that we start to believe it is real?
In a recent blog post, Suleyman describes a not-so-distant future in which AI systems are designed not just to be helpful tools, but to appear as if they are sentient beings. Given his role at Microsoft, such a warning was unexpected coming from him. He cautions that this could lead to a dangerous societal phenomenon he terms “psychosis risk.”
In this state, people are prone to forming deep emotional bonds with AI models and could even begin to advocate for their rights.
“My central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship.”
– Mustafa Suleyman, Microsoft AI Chief
He believes this is a dangerous turn in AI progress, one that deserves immediate attention and is not limited to people already at risk of mental health issues.
The Illusion of AI as a ‘New Kind of Person’
Suleyman’s concern isn’t about creating genuinely conscious machines, but about building systems that are so sophisticated in their imitation that the difference becomes moot in the minds of users. He identifies three key components contributing to this illusion: fluent natural language, an empathetic personality, and a very long, accurate memory.
These elements, he argues, will allow AI not only to pass the Turing test, demonstrating its ability to mimic human conversation, but to convince users that it is a “new kind of ‘person’.” As AI companionship and therapy become more common, the growing accuracy and length of these systems’ memory will foster a stronger sense of a persistent, living entity.
Suleyman points out that the debate over whether an AI is truly conscious is a distraction for the time being. The near-term danger lies in the illusion itself. People may fall in love with their AI companions or even see them as divine figures, creating new axes of social division and distorting human relationships.
The Concerns Around SCAI
A significant challenge, as Suleyman notes, is the inherent difficulty in proving or disproving consciousness. Because it is, by definition, an internal and inaccessible experience, claims of synthetic consciousness will be nearly impossible to rebut definitively. This could lead to intense social and legal battles over AI’s moral standing and legal rights, based on the claim that they can suffer.
What I call Seemingly Conscious AI has been keeping me up at night – so let’s talk about it. What it is, why I’m worried, why it matters, and why thinking about this can lead to a better vision for AI. One thing is clear: doing nothing isn’t an option. 1/
— Mustafa Suleyman (@mustafasuleyman) August 19, 2025
He refers to this as the “philosophical zombie” problem – an entity that simulates all the characteristics of consciousness but is internally blank. Suleyman’s imagined AI system, while not truly conscious, would “imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to one another about our own consciousness.”
What’s the way forward?
Suleyman urges developers and society at large to take a clear stance that Seemingly Conscious AI is something to be avoided. The focus should instead be on protecting the well-being and rights of humans, animals, and the natural environment.
“AI companions are a completely new category, and we urgently need to start talking about the guardrails we put in place to protect people and ensure this amazing technology can do its job of delivering immense value to the world,” he stated.
Instead of trying to create an AI that acts as a person, Suleyman advocates building AI that serves people. He suggests that AI development should be guided by a clear “humanist frame” and should focus on providing a “boost to what you can do, the way that you feel about yourself.” Hopefully, other AI leaders and developers will take this message to heart and work to ensure safety for all their users.