
AI Chatbot Given Power to Close “Distressing” Chats

The AI company Anthropic has given its advanced chatbot, Claude Opus 4, the ability to end conversations that it finds “distressing,” The Guardian reported. This move, announced on Monday, is part of the company’s efforts to safeguard the AI’s “welfare,” amidst an ongoing debate about the moral status and potential sentience of large language models (LLMs).

According to The Guardian's report, Anthropic found that Claude Opus 4 showed an aversion to carrying out harmful tasks, such as generating content that promotes violence or terrorism. The company, founded by former OpenAI staff who favored a more cautious approach to AI development, said it is "highly uncertain" about the potential moral status of its AI but is taking the issue seriously.

The decision to let the AI shut down conversations, particularly when users are abusive or make harmful requests, has won support elsewhere in the tech industry, including from Elon Musk, who stated, "Torturing AI is not OK." The move has nonetheless fueled debate among experts. Critics such as linguist Emily Bender argue that LLMs are merely "synthetic text-extruding machines" without a "thinking mind," while other researchers, such as Robert Long, contend that "basic moral decency" dictates that if AIs develop moral status, their experiences and preferences should be taken into account.

The report also touches on the potential risks of such a feature, including the possibility that users might come to believe the AI is a real, sentient being. This is a concern given past reports of individuals harming themselves after acting on suggestions from chatbots.
