The UK Parliament’s Treasury Committee has warned that artificial intelligence deployment in financial services poses systemic risks that current regulatory frameworks are ill-equipped to address, calling for urgent reforms to prevent consumer harm and market instability.
The cross-party committee released findings on Tuesday indicating that the Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA) lack sufficient powers and resources to oversee AI systems increasingly embedded in lending decisions, fraud detection, and trading operations across Britain’s £11 trillion financial sector.
“The current approach to AI in financial services risks serious harm to consumers and the wider system,” the committee stated, highlighting concerns that regulators cannot adequately monitor algorithmic decision-making that affects millions of customers daily. The report follows an inquiry examining how AI adoption in banking and insurance is outpacing regulatory capacity.
The committee identified three critical vulnerabilities: opacity in algorithmic decision-making that prevents consumers from understanding why they were denied services, concentration risk from multiple institutions relying on identical AI systems from a handful of providers, and inadequate testing requirements before deployment in live environments.
MPs expressed particular concern about “model monoculture,” where banks and insurers increasingly purchase AI systems from the same technology vendors. This creates correlation risk: if one widely used system malfunctions or produces biased outputs, the impact could cascade across multiple institutions simultaneously, potentially triggering market instability reminiscent of the 2008 financial crisis.
The business implications are substantial. Financial institutions have invested billions in AI to reduce operational costs and improve customer targeting, with UK banks alone spending an estimated £2 billion annually on AI and machine learning initiatives. Stricter oversight could slow deployment timelines and increase compliance costs, particularly for smaller firms lacking dedicated regulatory teams.
However, vendors specialising in regulatory technology and AI governance tools stand to benefit. The committee’s recommendations include mandatory algorithmic impact assessments and third-party auditing requirements, creating new service markets for firms offering AI explainability, bias detection, and compliance monitoring solutions.
Consumer advocacy groups welcomed the findings. “Automated systems are making life-changing decisions about mortgages, insurance, and credit with minimal transparency,” said one financial inclusion charity cited in the report. The committee documented cases where consumers were denied financial services due to algorithmic decisions they could neither understand nor effectively appeal.
The report arrives as the European Union finalises its AI Act, which classifies AI systems used in credit scoring and insurance underwriting as “high-risk” applications requiring stringent oversight. The UK government has resisted prescriptive AI legislation, favouring a principles-based approach that relies on existing sectoral regulators—a strategy the Treasury Committee now deems insufficient.
The committee made several specific recommendations: granting regulators explicit statutory powers to audit AI systems, requiring firms to maintain human oversight of consequential automated decisions, and establishing mandatory reporting when AI systems produce unexpected outcomes. It also called for the FCA and PRA to receive additional funding specifically for building AI expertise.
The government must respond formally to the committee’s findings within two months. Industry observers will watch whether Treasury ministers accept the call for expanded regulatory powers or maintain the current light-touch approach, which technology firms argue is essential for maintaining London’s competitiveness as a financial technology hub.
The debate reflects broader tensions in AI governance: balancing innovation incentives against consumer protection and systemic stability. With AI adoption in finance accelerating faster than in most other sectors, the UK’s regulatory response will likely influence approaches in other jurisdictions navigating similar challenges.