
Unlocking growth in UK financial services: why clear AI regulation is needed

Prime Minister Keir Starmer has made growth the overriding priority of his government, which has described artificial intelligence as “the single biggest lever” to achieve that goal.

At the same time, the government has made lightening the regulatory burden a central feature of its industrial strategy.

It acknowledges that complexity, duplication and excessive caution in regulation are holding back private sector investment. It wants to cut the cost of compliance by 25%.

This recognition matters. Well-designed regulation can drive innovation and bolster confidence. Indeed, in the case of AI, it is critical: without effective and responsible regulation, ambition may turn to hesitation.

This article looks at the importance of AI guardrails in financial services. In a sector built on trust and robust risk management, ensuring that AI applications are secure and trustworthy will be fundamental. If the UK wants AI to power sustainable growth, it must prioritise regulatory clarity and consumer confidence, not deregulation for its own sake.

Seizing the opportunities

As Frontier has previously shown, the UK is especially well placed to lead in the development of generative AI applications in financial services.

The UK boasts a world-class fintech ecosystem and deep technical expertise. What is more, 72% of financial services firms are already deploying or developing machine learning applications.

But AI is mostly being used to strengthen operations in areas such as fraud prevention and cyber security. If the financial services sector is to become a flagship for responsible AI innovation, it needs to move beyond these relatively low-risk use cases.

A particular danger is that the UK will lose its AI talent. Only a third of AI graduates from UK universities who go into banking work for UK-based firms; the rest move abroad, according to Evident Insights’ Responsible AI Report. The report notes that US banks are surging ahead in creating specialist AI positions. In short, the UK is in danger of becoming a training ground for expertise that fuels growth elsewhere.

Managing risk is central to financial services. Mishaps erode customer confidence and can seriously damage companies’ bottom lines. So firms are understandably cautious about adopting AI in the absence of clear regulatory guidance. European and UK institutions are especially wary: they employ more than 70% of specialists in AI ethics, according to Evident Insights.

Among the unanswered questions firms are asking are:

  • How explainable do AI decisions need to be? The Bank of England has flagged model complexity, hidden models and data bias as fast-growing risks in financial services. Without clear expectations around explainability, firms risk falling short on transparency or burdening themselves with cumbersome processes that stifle innovation.
  • Where does responsibility lie when using third-party AI models? Third parties already underpin a third of AI use cases in financial services, yet a Bank of England survey shows that firms only partially understand how these models work. Without guidance on accountability, firms face legal and consumer protection risks that may deter them from adopting third-party AI tools that could otherwise drive innovation.
  • What data can be used, who can access it, and under what conditions? The Bank of England found that four of the five top risks identified by firms relate to data, while data protection and privacy are seen as the biggest regulatory barrier to AI adoption. Without guidance on data usage and governance, firms must either hold back on innovation or assume excessive risk. In practice, this favours large firms with proprietary data and legal firepower; without clear, accessible rules, smaller players risk being left behind, undermining innovation and the UK’s fintech ambitions.

What the FCA needs to do

Without timely, practical regulatory guidance on these issues, firms will hesitate, innovation will slow and the UK will lose ground to more agile, better-prepared markets.

Despite uncertainty, financial services firms are already deploying AI in areas like fraud detection, risk modelling and customer service. These examples show that the sector is ready to innovate. Scaling into more complex, customer-facing applications will require clear rules that support experimentation without fear of regulatory missteps.

To build that confidence, we believe the Financial Conduct Authority’s recent consultation on AI should lead it to take a number of actions as a matter of urgency:

  • Issue unambiguous, principles-based guidance on explainability; fairness and bias monitoring; third-party accountability; and data use, privacy and security.
  • Invest in high-quality, publicly available datasets to ensure a level playing field.
  • Maintain active collaboration between regulators, industry leaders and academia to future-proof regulation.
  • Maintain a commitment to regulatory sandboxes and explore how lessons learned can be applied to improve data access and to help scale-ups and start-ups innovate at pace.

The UK has all the ingredients needed to lead in responsible AI adoption for financial services. But there is no time to lose. Unless clear regulatory guidance arrives soon – and keeps pace with advances in the technology – that potential risks being wasted.

The prize is huge. If the UK strikes the right regulatory balance between clarity and adaptability it can spur domestic innovation and secure a comparative advantage globally. Starmer will be a step closer to finding his holy grail of faster economic growth.