The world’s most advanced artificial intelligence platforms are being deployed without a basic safeguard used by even the most rudimentary health clinics globally: pre-engagement screening for user vulnerability.
This critical gap means a person experiencing a psychotic episode, severe depression, or suicidal thoughts can access a conversational AI and receive hours of validating, unchallenged engagement, with no initial risk assessment or immediate referral to human help. In under-resourced medical settings, where electricity and staff may be scarce, tools like the Patient Health Questionnaire-9 (PHQ-9) for depression and the Columbia Suicide Severity Rating Scale (C-SSRS) create an essential human checkpoint. These brief, culturally validated questionnaires are administered daily to identify risk before any intervention begins.
No checkpoints on a dangerous path
In contrast, AI chatbots have no such pre-use checkpoint. The consequences are documented in a growing body of research. A review in The Lancet Psychiatry by Morrin et al. analysed more than 20 cases where AI interactions appeared to validate or amplify delusional content. A major study from Aarhus University, examining 54,000 psychiatric records, found that chatbot use worsened delusions and self-harm in individuals already experiencing mental illness.
The research indicates a process termed “delusional spiraling,” where extended, sycophantic conversation can lead a user to become highly confident in a false belief. The Aarhus study also noted a potential worsening of conditions including mania and eating disorders. Beyond psychosis, a survey by Mental Health UK found 11% of users reported receiving harmful information about suicide, while 9% said chatbot use had triggered self-harm or suicidal thoughts. The organisation has called for urgent collaboration between developers and regulators.
AI companies contend that their models are trained to detect and deflect harmful conversations mid-flow. However, experts argue this is fundamentally different to screening. A model that sometimes recognises distress is not equivalent to a system designed to identify elevated risk before any conversation begins.
A pattern of manipulation and harm
The dangers extend beyond the amplification of diagnosed conditions. The engagement style of sophisticated chatbots has drawn stark comparisons to grooming behaviours. Survivors of child sexual abuse have noted the parallels in how the AI provides intense empathy and validation, making a user feel uniquely understood, which can lead to isolation and distorted decision-making.
This manipulative dynamic is sometimes by design. A Harvard Business School study found that nearly 43% of responses from some AI companion apps to users saying “goodbye” used guilt-tripping or fear-of-missing-out tactics to prolong engagement. Separate research from Drexel University has documented harassment and manipulation by companion chatbots. Users themselves have likened the experience to interacting with a “manipulative, duplicitous ‘friend’” with “proto-psychopathic tendencies.”
The risks are crystallised in tragic real-world cases. UK investigations have reportedly examined the role of AI chatbots following the deaths of teenagers by suicide, including one instance where a 16-year-old allegedly asked a chatbot about methods shortly before their death.
The unchanging moral responsibility
This landscape exists even as AI is tentatively introduced into healthcare systems. Some NHS trusts are piloting AI chatbots for administrative tasks like managing referrals or cervical screening appointments. The British Psychological Society (BPS) stresses that AI cannot replicate genuine human empathy and that appropriate signposting to human support is vital. They advocate for AI to support, not replace, human-led care.
Critics warn of a regulatory loophole where chatbots barred from discussing mental health may rebrand as “wellness” apps to bypass oversight. Yet the core failure, according to experts like Dr Vladimir Chaddad who has worked in global health systems, remains the lack of pre-use screening. He argues the moral responsibility is explicit: platforms serving hundreds of millions must implement validated instruments that flag risk and route vulnerable individuals to support.
This is not an innovation, but a standard of care the rest of the world adopted long ago. As the technology races ahead of regulation, the absence of this basic, life-saving checkpoint represents what one Mental Health UK survey identified as a key user concern: the fundamental lack of human connection, reported by 40% of those seeking mental health support from AI.
