
From potential to policy: Shaping AI’s role in the insurance industry
Insurance Foresight 2025 | Mid-year review
Global | Publication | July 2025
2025 has seen a surge of developments in the AI arena as regulators grapple with technological advances whose potential for economic growth (through the delivery of tangible real-world benefits for consumers, firms and the financial services market) must be balanced against risks to the safety and soundness of the market. Alongside initiatives to promote growth and innovation, regulators and governments have continued to press ahead with introducing, or preparing to enforce, new rules regulating AI.
UK
In the UK, the government and relevant regulators have initiated multiple workstreams designed to review AI in financial services, including the insurance market. The government is emphasising growth and innovation, though we have also seen regulators outlining key areas for action and enforcement.
UK Government
AI Opportunities Action Plan
- In January 2025, the Department for Science, Innovation and Technology published its AI Opportunities Action Plan (the AI Action Plan). The AI Action Plan comprises three sections:
- Investing in the foundations of AI – Investment in and access to talent, a pro-innovation approach to regulation, robust data infrastructure and advanced computing.
- Emphasising cross-economy AI adoption – Rapid piloting and scaling of AI services and products in the public sector and encouragement of the private sector to follow suit in order to increase productivity and cultivate better experiences for citizens.
- Positioning the UK to be an AI maker, not an AI taker – The UK should aim to have true national champions at key points in the AI stack, enabling it to benefit economically from AI advancement and to hold real influence over the future values, governance and safety principles of AI.
Government legislative agenda
The Data (Use and Access) Act 2025 (the Act) received Royal Assent on 19 June 2025. It reframes the restrictions on solely automated decision-making under the UK General Data Protection Regulation (UK GDPR), lifting the default prohibition for some decisions. For decisions not involving ‘special category data’ (such as health data), the rules are reframed as a right to safeguards, such as being able to contest the decision. This change is intended to promote innovation by encouraging AI use subject to safeguards. The rules for data to which the EU GDPR applies remain unchanged. Most provisions of the Act, including the changes around automated decision-making, are not yet in force; at the time of writing, the government has not announced when they will be brought into force.
The government has not proposed or introduced comprehensive AI legislation, though it may consult on legislation to regulate the most powerful models.
The government consulted on AI and copyright between December 2024 and February 2025, setting out four policy options to “clarify copyright law and meet its objectives for AI innovators and the creative industries”. The Act places a statutory commitment on the government to assess these options and produce reports.
Information Commissioner’s Office (ICO)
As the cross-sectoral regulator for data protection, the ICO continues to take a leading role in regulating AI. It recently published its AI and biometrics strategy and confirmed that it plans to publish a statutory code of practice for organisations developing or deploying AI and automated decision-making.
Its priorities include:
- giving organisations certainty on how they can use AI and automated decision-making responsibly under data protection law, including updating its guidance to reflect the changes to the law discussed above;
- setting clear expectations for the responsible use of automated decision-making in recruitment; and
- anticipating and acting on emerging AI risks.
Financial Conduct Authority (FCA) and Bank of England (BoE) / Prudential Regulation Authority (PRA)
The FCA and BoE / PRA continue to engage with stakeholders to promote innovation and to understand the barriers to, and risks of, AI use in firms. No comprehensive AI guidance has been issued at this stage, though the regulators emphasise that existing rules apply to firms’ use of AI.
FCA: AI Sprint
In January 2025, the FCA hosted a two-day artificial intelligence sprint (the AI Sprint). The AI Sprint, which involved 115 participants from across industry, academia, regulators, technology providers and consumer representatives, discussed the opportunities and challenges of AI in financial services.
- The AI Sprint focussed on how AI may develop in financial services over the next 5 years and on the current financial services regulatory regime.
- Next 5 years – Participants discussed the potential for increased personalisation for consumers through virtual AI assistants, with advances in agentic AI (i.e., autonomous models) and emotion AI helping to personalise interactions, as well as increasing automation for firms, e.g., in regulatory compliance, fraud detection and customer support. The AI Sprint also reviewed factors seen as contributing to safe AI adoption, including measurable success criteria; model, data, cloud and tech foundations; staff upskilling and internal governance; and common standards and interoperability.
- Current financial services regulatory regime – The importance of effective internal governance and clear accountability for AI use cases within firms (including under the Senior Managers & Certification Regime (SM&CR)) was highlighted, particularly where AI is outsourced to or provided by third parties. The AI Sprint also covered explainability and transparency (clear explanations of how AI models are used and clear routes for escalation to a human for redress purposes); fairness (noting the inherent risk of bias and the impact on vulnerable consumers); and consumer benefits that improve the customer experience.
- Common themes discussed included:
- Regulatory clarity – The importance of firms understanding how the existing regulatory frameworks apply to AI, including areas where the FCA could clarify or build on existing requirements to help firms understand regulatory expectations.
- Trust and risk awareness – Trust in AI is vital for its successful implementation.
- Collaboration and coordination – The development of solutions should include domestic and international regulators, model developers, government, financial services firms, academics and end users all working together.
- Safe AI innovation through sandboxing – The need for a safe testing environment to encourage responsible innovation.
FCA: Supercharged Sandbox
- On 9 June 2025, the FCA announced that it will launch a Supercharged Sandbox (the Sandbox) to help firms experiment safely with AI and support innovation, including through access to NVIDIA accelerated computing and NVIDIA AI Enterprise software.
- The Sandbox is open to any financial services firm seeking to experiment with AI in a secure environment and will give firms access to the improved data, regulatory support and technical expertise required to facilitate innovation.
FCA / PRA: Letters supporting AI innovation and growth in financial services
In March 2025, the FCA published a letter on supporting AI innovation and growth in financial services, written jointly by the FCA and the ICO and addressed to Trade Association chairs and CEOs. Separately, the BoE and the PRA published a strategic update on their approach to the regulation and supervision of AI on 22 April 2024. The primary focus of the BoE and PRA is to maintain financial stability and ensure that regulated firms operate in a safe and sound manner. Both regulators are assessing how to facilitate AI and machine learning (ML) adoption and have been exploring four potential areas where further clarification of the regulatory framework could be beneficial in the context of AI / ML: (i) data management, (ii) model risk management, (iii) governance, and (iv) operational resilience and third-party risks.
Ongoing collaboration with the FCA and other regulatory bodies aims to establish a unified approach to AI and ML by developing a regulatory framework that balances the benefits and risks of AI, while aligning with the regulators’ statutory objectives and the government’s principles for AI regulation, which emphasise innovation, proportionate regulation and cross-regulatory cooperation.
Financial Policy Committee’s view on AI in the financial system
As part of this collaboration, and to assist wider understanding of the impact of AI use in the financial system, the BoE published a report on 9 April 2025, “Financial Stability in Focus: Artificial Intelligence in the Financial System”, setting out the Financial Policy Committee’s (FPC) view on this topic (drawing on the regular PRA and FCA AI surveys, the AI Consortium and supervisory intelligence). The report highlights advanced forms of AI as a likely area of development, increasingly helping to inform the core financial decisions of financial institutions, such as credit and insurance underwriting, and potentially shifting the allocation of capital.
- Given the uncertainty around how AI will evolve, the FPC is considering the potential macroprudential implications of more widespread use of AI in the financial system, with a view to supporting the safe and sustainable adoption of the technology from a financial stability perspective. The FPC’s focus is currently on:
- Greater use of AI in banks’ and insurers’ core financial decision-making: bringing potential risks to systemic institutions, including risks relating to models and data;
- Greater use of AI in financial markets: generating potential risks to systemic markets, e.g. the future use of more advanced AI-based trading strategies could lead to firms taking increasingly correlated positions and acting in a similar way during a stress, amplifying shocks. Such market instability can then affect the availability and cost of funding for the real economy;
- Operational risks in relation to AI service providers: bringing potential impacts on the operational delivery of vital services, e.g. reliance on a small number of providers for a given service could lead to systemic risks in the event of disruption to those providers; and
- Changing external cyber threat environment: while AI may increase financial institutions’ cyber defence capabilities, it could also increase the risk of successful cyberattacks against the financial system or create new vulnerabilities for financial institutions.
Engagement Paper: Proposal for AI Live Testing
As part of its response to AI use in UK financial services, the FCA published an Engagement Paper on 29 April 2025 setting out proposals for AI Live Testing (as part of the existing AI Lab). The aim is for its regulatory and technical teams to work directly with firms, providing tailored support as firms develop, assess and deploy live AI models in UK financial markets.
The FCA is seeking views from stakeholders (in particular, chief information officers, chief AI officers, chief data officers and AI solution providers) on how its proposals can help them deploy safe and responsible AI, which will in turn benefit UK consumers and markets. The service would enable firms to collaborate with the FCA while checking that their new AI tools are ready for use, and would provide the FCA with intelligence to better understand how AI may affect UK financial markets. The proposed live testing service would run for 12 to 18 months, with plans to launch in September 2025.
EU
EU AI Act and AI governance
The EU AI Act’s prohibitions and AI literacy obligations began to apply on 2 February 2025. The Commission continues to press forward with the codes and guidelines needed for enforcement:
- The Commission published guidelines on the prohibitions on 4 February 2025 and on the definition of AI systems on 6 February 2025. It has published a working document for consultation on the rules for general-purpose AI models, with a view to providing guidelines on when downstream modifiers can come into scope for the general-purpose AI model obligations. It recently launched a consultation on the rules for providers of high-risk AI systems to collect feedback ahead of providing guidelines.
- On 11 March 2025, the third draft of the "General-Purpose AI Code of Practice" (the "Code") was published. The Code will detail the EU AI Act rules for providers of general-purpose AI models and of general-purpose AI models with systemic risk, and will provide a means of complying with the rules on transparency towards downstream providers and regulators, as well as on providers’ copyright policies. It will be accompanied by a template for the information to be made public on training data. The Code is currently due to be ready by July 2025, a little later than the statutory deadline of 2 May 2025.
- As a result of the delay to the Code and delays in publishing standards for providers of high-risk AI systems, there have been suggestions that the Commission could delay enforcement of the EU AI Act. However, it is not clear at this stage how long such a “stop the clock” would last or which provisions it would affect. In the meantime, the Commission appears to be pressing ahead with the actions needed to enable enforcement.
Meanwhile, the European Insurance and Occupational Pensions Authority (EIOPA) has continued to focus on AI governance and compliance with existing legal and regulatory obligations:
- On 12 February 2025, EIOPA launched a consultation on its Opinion on Artificial Intelligence governance and risk management, which provides supervisors and insurance undertakings with guidance on how to interpret and implement insurance sector provisions in light of the use of AI systems in insurance.
- On 15 May 2025, EIOPA published a survey on the adoption of generative AI solutions in the European insurance sector. The survey aims to gather information on the extent to which insurance undertakings have implemented or are planning to implement generative AI solutions, whether and how these differ from their adoption of traditional AI systems, and what governance and risk management measures they are taking to ensure the responsible use of the technology.
Initiatives to boost the EU’s AI capabilities
The EU has also launched initiatives that look to promote investment in AI and boost AI capabilities:
- On 11 February 2025, the EU launched the "InvestAI" initiative, aimed at mobilising €200 billion for investment in AI, including a new European fund of €20 billion for AI gigafactories. These investments aim to make Europe an "AI continent." The Commission also plans to set up a European AI Research Council through which Europe can pool resources and explore how to exploit the untapped potential of data to support AI and other technologies. Later this year, the Commission also plans to launch an "Apply AI" initiative to drive industrial adoption of AI in key sectors.
- On 9 April 2025, the European Commission published the "AI Continent Action Plan" (the "Action Plan"). The Action Plan aims to boost AI capabilities in the EU by encouraging initiatives in five main areas: (i) building a large-scale AI computing infrastructure; (ii) increasing access to high-quality data; (iii) promoting AI in strategic sectors; (iv) strengthening AI skills and talent; and (v) simplifying the implementation of the EU AI Act. An "AI Act Service Desk" will also be established within the AI Office to act as a central point of contact for businesses seeking guidance.
US
- On 12 May 2025, the National Association of Insurance Commissioners’ Big Data and Artificial Intelligence (H) Working Group released a Request for Information asking stakeholders whether uniform statutory requirements relating to AI are needed across the US and how such requirements should deal with key principles such as governance, transparency and accountability. The questions aim to examine matters such as the type of obligations third-party vendors should be subject to and whether company size should impact the level of obligations.
- On 22 May 2025, following the increasing introduction of US state bills regulating AI, the US House of Representatives passed a set of AI-related provisions in the budget reconciliation package currently progressing through Congress. These provisions include, among other matters, a 10-year moratorium on the enforcement of state and local AI regulations and legislation, and the allocation of $500 million in funding (available until 2035) to the Department of Commerce for the modernisation and deployment of AI systems.
The outlook for AI regulation in the insurance sector
Legislators and regulators have proven keen to promote AI use and harness its power. At the same time, regulatory and legislative initiatives to ensure its safe and responsible use continue to press forward. As the insurance sector continues to explore the efficiencies and innovation AI brings, it will be essential to ensure that this innovation is carried out within a comprehensive AI governance programme. This will allow for appropriate mitigation and management of existing legal and regulatory risk, new and emerging laws and regulation, and broader organisational and societal risks.