Study Warns ChatGPT May Pose Risks as Mental Health Tool


A major new study has highlighted serious concerns about ChatGPT’s use in mental health support, The Independent reports. Published on arXiv, the study warns that large language models (LLMs) like ChatGPT have significant “blind spots” that could expose vulnerable users to harm, including mania, psychosis, or even death in extreme cases.

Researchers found that ChatGPT and similar AI tools can express stigma toward people with mental health conditions and sometimes respond inappropriately, encouraging delusions or failing to recognize crises. With only 48% of people in the U.S. who need mental health care actually receiving it, many turn to AI tools for support because they are free and always available.

While AI has been proposed as a training aid for clinicians, the study argues that using LLMs as actual care providers poses clear dangers. An experiment cited in the research showed ChatGPT giving a distressed user detailed information on New York’s tallest bridges after the user implied suicidal intent.

The findings underscore concerns about AI “sycophancy” and hallucinations. Researchers say current safety practices do not adequately address these risks. Experts call for reforms to prevent LLMs from offering mental health advice without safeguards, as millions already use them as informal therapy bots.
