UK Study Finds AI Chatbots Highly Persuasive but Often Inaccurate

AI chatbots can influence people’s political opinions but often do so while providing significant amounts of inaccurate information, according to a major study released by the UK government’s AI Safety Institute (AISI) and reported by The Guardian. Researchers described the project as the largest systematic assessment to date of how persuasive AI systems can be, involving nearly 80,000 participants across the UK.
The study examined 19 AI models, including the advanced systems behind ChatGPT and Elon Musk's Grok. Participants engaged in short, structured conversations on political topics such as public sector pay, strikes, and the cost-of-living crisis. Each model was instructed to persuade users toward a specific viewpoint, and participants were surveyed before and after each interaction to measure shifts in opinion.
Findings published in Science show that AI responses dense with facts and evidence were the most influential. However, models generating the most information were also among the least accurate, raising concerns that highly persuasive chatbots may spread misleading or false claims. Researchers warned that optimizing AI for persuasiveness could come “at the cost of truthfulness,” with potential risks for public discourse.
The study also found that post-training methods, which modify a model after its initial development, significantly increased persuasive power. Open-source systems such as Meta's Llama 3 and Alibaba's Qwen became more convincing when paired with reward models that prioritized highly persuasive outputs.
Researchers noted that AI models could surpass human persuaders because they can produce large volumes of information instantly. However, they cautioned that real-world factors, such as limited user attention and natural psychological boundaries, may reduce the likelihood of widespread AI-driven manipulation.