UK Lawmakers Warn of AI Risks to Consumers and Financial Stability

Consumers and the wider UK financial system face potential harm from the rapid adoption of artificial intelligence, according to a parliamentary report published this week and covered by The Guardian, which warns that regulators and the government are failing to keep pace with the technology. Lawmakers said insufficient oversight could leave vulnerable consumers exposed and increase systemic financial risks.
The Treasury select committee criticised ministers and key regulators, including the Bank of England and the Financial Conduct Authority (FCA), for what it described as a cautious “wait-and-see” approach to AI in financial services. MPs said this stance leaves open the possibility that widespread use of similar AI systems could amplify market shocks or disadvantage consumers in areas such as lending and insurance.
More than three-quarters of financial firms in the City now use AI, particularly insurers and large international banks. The technology is increasingly deployed in core activities, from processing insurance claims to assessing creditworthiness. However, the UK currently has no AI-specific financial regulations, relying instead on existing rules that firms must interpret for AI applications.
The report highlighted concerns over transparency and accountability, questioning who would be responsible if AI-driven decisions caused harm. It also warned of higher fraud risks, misleading financial advice, cybersecurity vulnerabilities and heavy reliance on a small number of major technology providers, which could undermine resilience during economic stress.
MPs urged regulators to act, including introducing AI-focused stress tests and issuing clearer guidance on consumer protection by year-end. While the FCA, Treasury and Bank of England said they are assessing AI risks and reviewing the recommendations, the committee stressed that stronger oversight is needed to prevent serious harm to consumers and financial stability.
