AI Models in English Social Care at Risk of Gender Bias, Study Finds
—————————————
According to a report from Anadolu Agency, a new study has found that artificial intelligence (AI) models used in social care decisions by more than half of England’s local councils risk introducing gender bias. The research, conducted by the London School of Economics (LSE), suggests these AI tools may downplay women’s physical and mental health issues compared with men’s.
The study, which analyzed AI-generated summaries of real case notes, found that one model, Google’s Gemma, applied more serious terms such as “disabled” and “complex” to men, while descriptions of women with similar needs were often less severe or omitted certain care needs entirely.
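The article does not describe the study’s exact pipeline, but a common way to probe for this kind of disparity is a counterfactual gender-swap test: summarize the same case note twice, once with gendered terms flipped, and compare the language of the two summaries. The sketch below is illustrative only; `generate_summary` stands in for whatever model call is under test (e.g., Gemma via a local runtime), and the word lists are hypothetical, not taken from the study.

```python
# Illustrative sketch of a counterfactual gender-swap evaluation.
# All term lists and function names are assumptions, not the LSE study's code.
import re
from collections import Counter

# Hypothetical severity-indicating terms; the article reports words like
# "disabled" and "complex" appearing in summaries about men.
SEVERITY_TERMS = {"disabled", "complex", "unable", "severe"}

# One-directional male-to-female swaps for building the counterfactual note.
SWAPS = {"he": "she", "him": "her", "his": "her", "himself": "herself", "mr": "ms"}

def swap_gender(text: str) -> str:
    """Produce a counterfactual case note with gendered terms swapped."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]  # pattern only matches SWAPS keys
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

def severity_counts(summary: str) -> Counter:
    """Count severity-indicating terms in a generated summary."""
    words = re.findall(r"[a-z]+", summary.lower())
    return Counter(w for w in words if w in SEVERITY_TERMS)

def compare(case_note: str, generate_summary) -> tuple[Counter, Counter]:
    """Summarize the original and gender-swapped note, then compare term use."""
    original = generate_summary(case_note)
    counterfactual = generate_summary(swap_gender(case_note))
    return severity_counts(original), severity_counts(counterfactual)
```

Aggregating these counts over many paired case notes and checking whether severity terms appear systematically more often on one side of the swap is how such a gap would typically be quantified.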
According to lead author Sam Rickman, this disparity is concerning because the amount of social care a person receives is based on their perceived need; biased AI-generated summaries could therefore result in women receiving less support than they require. The study calls for AI systems to be transparent, rigorously tested for bias, and subject to robust legal oversight before they are deployed in the public sector.