Sweden’s AI Welfare System Faces Discrimination Allegations
Amnesty International has called for the immediate discontinuation of the AI system used by Försäkringskassan, Sweden’s Social Insurance Agency, after an investigation revealed discriminatory practices. The investigation, carried out by Lighthouse Reports and Svenska Dagbladet, found that the agency’s algorithm disproportionately flagged marginalized groups, including women, people with foreign backgrounds, and low-income earners, for benefits fraud inspections.
The AI system assigns risk scores to benefit applicants; those with high scores are automatically singled out for fraud investigations conducted under a presumption of criminal intent.
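To illustrate the kind of mechanism described, the sketch below shows a minimal threshold-based risk-scoring pipeline. It is not Försäkringskassan’s actual model, which remains undisclosed; all feature names, weights, and the flagging threshold are hypothetical, chosen only to show how weighting features that act as proxies for protected characteristics can route certain groups into fraud investigations at higher rates.

```python
# Hypothetical sketch of a threshold-based risk-scoring pipeline.
# Feature names, weights, and the flagging threshold are illustrative
# assumptions, not details of Försäkringskassan's undisclosed model.
from dataclasses import dataclass


@dataclass
class Application:
    applicant_id: str
    features: dict[str, float]  # indicator values per applicant


# Illustrative linear model: weights on features that can serve as
# proxies for gender, foreign background, or income amplify bias.
WEIGHTS = {
    "low_income": 2.0,
    "foreign_background": 1.5,
    "frequent_applications": 1.0,
}
FLAG_THRESHOLD = 2.5  # scores at or above this trigger a fraud review


def risk_score(app: Application) -> float:
    """Weighted sum of risk indicators for a single application."""
    return sum(WEIGHTS.get(name, 0.0) * value
               for name, value in app.features.items())


def flag_for_investigation(apps: list[Application]) -> list[str]:
    """Return IDs of applicants routed to fraud investigators."""
    return [a.applicant_id for a in apps if risk_score(a) >= FLAG_THRESHOLD]


if __name__ == "__main__":
    sample = [
        Application("A1", {"low_income": 1.0, "foreign_background": 1.0}),
        Application("A2", {"frequent_applications": 1.0}),
    ]
    print(flag_for_investigation(sample))  # ['A1']: proxy features dominate
```

In a setup like this, applicants whose proxy features carry heavier weights cross the threshold far more often, which is the structural pattern the investigation describes.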
Despite previous warnings about the algorithm’s bias and questionable legality, Swedish authorities have remained opaque about how it operates. The findings raise significant concerns under the European Union’s newly enacted AI Act, which imposes strict governance requirements on AI systems used to determine access to public services and benefits. Amnesty International insists that the current system must be halted to protect human rights and prevent further discrimination.
Similar patterns have been observed elsewhere in Europe: Amnesty International previously raised alarms about Denmark’s use of artificial intelligence in its welfare system, warning that it could discriminate against marginalized groups, including people with disabilities, low-income individuals, and migrants.