EMEA | Publication | May 2025
Forgive us as we take a small detour into the meaning and history of the terms. ‘Hype’, whether used as a noun or a verb, denotes an overselling, an exaggeration of benefits. ‘AI’ encapsulates a variety of applications whose purpose is to emulate human intelligence (such as learning, decision-making and problem-solving). ‘GenAI’ is a subset of this, the distinctive feature of which is that it involves content creation. ChatGPT was, at the time, the fastest app to reach one million downloads and took a mere two months to reach 100 million users. The humble telephone, by contrast, took some 900 months (75 years) to reach that same level of saturation. So, has GenAI lived up to its initial hype?
Possibly. It was a bit of a slow start. In April 2023, around a year after GenAI tools were introduced, figures suggested that approximately 22% of workers were using GenAI tools in their workplace on a regular basis2. By the middle of 2024 this figure had increased to 71%3. Whilst we may have been quick to download the tools personally, it took some time to get comfortable with them and to incorporate them into our working lives.
The reason for that is really quite simple. GenAI isn’t an off-the-shelf app you download onto your desktop (well, it can be, but that is, we submit, not only missing the point but potentially dangerous). It can, and should, rewire the way in which a business works. GenAI may mimic human intelligence and the ability to create, but it isn’t a simple replacement for a human employee. It has to be woven into the fabric of the workflow, and risks and controls need to be adjusted to account for this change. That takes time and a deep level of understanding.
GenAI use in business can range in significance from the percussion triangle to first violin, and there is no music sheet detailing how it should be played. Determining how to incorporate GenAI into the existing ‘orchestra’ requires careful planning and most importantly, consistent conducting.
It should all start with a clear, overarching vision from the top down of what you want to achieve by integrating GenAI. Starting with this vision allows the correct data to be captured, and that initial data drives which tools are integrated and how. Is the intention to make workflows more efficient, safer, cheaper, more customer friendly? Possibly a combination of all of these.
That initial design will then highlight the areas of concern: what controls are needed, what new risks are introduced and what monitoring needs to be carried out.
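To make that concrete, below is a minimal sketch of what one entry in a firm’s internal GenAI use-case register might look like. The structure, field names and the deployment gate are our own hypothetical illustration (in Python), not a regulatory template or any firm’s actual framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class Objective(Enum):
    """Hypothetical top-down objectives a GenAI deployment might serve."""
    EFFICIENCY = "more efficient"
    SAFETY = "safer"
    COST = "cheaper"
    CUSTOMER_EXPERIENCE = "more customer friendly"


@dataclass
class GenAIUseCase:
    """One illustrative entry in a firm's GenAI use-case register."""
    name: str                       # e.g. "first-pass document triage"
    objectives: list[Objective]     # the 'why' agreed at the top
    activities_affected: list[str]  # human tasks replaced or supplemented
    new_risks: list[str]            # risks surfaced by the initial design
    controls: dict[str, str]        # identified risk -> control mapped to it
    monitoring: list[str] = field(default_factory=list)  # what is measured, how often

    def unmitigated_risks(self) -> list[str]:
        """Risks the design review identified but no control yet addresses."""
        return [r for r in self.new_risks if r not in self.controls]

    def ready_to_deploy(self) -> bool:
        """A simple gate: a stated objective, no unmitigated risks and at
        least one monitoring measure before the tool goes live."""
        return (
            bool(self.objectives)
            and not self.unmitigated_risks()
            and bool(self.monitoring)
        )
```

The point of the sketch is simply the ordering: the ‘why’ (the objectives) is captured first, and the gate refuses deployment until every identified risk has a control and a monitoring measure attached to it.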
The potential risks introduced by GenAI (and AI more broadly) are inevitably a key focus point for regulators. The Bank of England’s Financial Policy Committee recently released its report on the financial stability implications of AI for the financial system4. The Financial Conduct Authority (FCA) meanwhile hosted a two-day ‘AI Sprint’ in January, with a focus on discussing the challenges and opportunities for AI in financial services, with the results of this published in April5.
Both of these reports, as you would expect, highlight the need for controls, accountability and risk management. The UK has to date shied away from introducing AI-specific legislation, preferring to focus on an ‘activities-based approach’6. This is an intentional, pro-innovation approach (as it should arguably allow flexibility in application). However, where are businesses to look for guidance on what good governance in the context of GenAI looks like?
The reports don’t specifically answer that question, but they do both point to the existing Senior Managers and Certification Regime (SM&CR) as a potential source for managing risk.
The SM&CR, as a reminder, is designed to:
“reduce harm to consumers and strengthen market integrity by creating a system that enables firms and regulators to hold people to account. As part of this, the SM&CR aims to:

- encourage a culture of staff taking personal responsibility for their actions;
- improve conduct at all levels; and
- make sure firms and staff clearly understand and can demonstrate who does what.”
Accountability is at the very centre of this regime. Intuitively this also feels correct – if we hold people accountable (and tell them we are doing so), surely we achieve better outcomes.
But does it?
It is beyond the scope of this article to delve into the psychology of ‘accountability’, but long before the introduction of GenAI, research showed that it is not a simple equation in which adding accountability on one side yields better decisions on the other:
“Two decades of research now reveal that (a) only highly specialized subtypes of accountability lead to increased cognitive effort; (b) more cognitive effort is not inherently beneficial; it sometimes makes matters even worse; and (c) there is ambiguity and room for reasonable disagreement over what should be considered worse or better judgment when we place cognition in its social or institutional context. In short, accountability is a logically complex construct that interacts with characteristics of decision makers and properties of the task environment to produce an array of effects — only some of which are beneficial.”7
It is too early to tell how accountability, as a social construct, will change following the introduction of GenAI. But if we are to draw any sort of conclusion from the previous research then adding ‘accountability’ at the end of the process does little to create better outcomes. Accountability has to be tailored and adapted if it is to achieve greater cognitive effort and better outcomes.
In the context of GenAI ‘accountability’ as a concept becomes even more blurred. Is it realistic for one person to understand the full implications of the particular GenAI product that is being used? The SM&CR regime already acknowledges that delegation will be required in larger firms, and deals with the ‘accountability’ part of this by requiring the delegator to retain effective controls and oversight. Is using GenAI for certain work processes a form of ‘delegation’ and how (in particular for black-box applications) does that translate to effective control and oversight? Most people would agree that there can be no defence of ‘the AI tool did it’ but what might a reasonable defence look like?
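By way of illustration only, here is one way the ‘effective control and oversight’ expected of a delegator might translate into a workflow, sketched in Python. The function names, the sign-off step and the audit log are our own hypothetical construction, not the SM&CR’s requirements or any vendor’s API.

```python
import logging
from datetime import datetime, timezone
from typing import Callable, Optional

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai.oversight")


def with_oversight(
    generate: Callable[[str], str],        # the delegated GenAI step (any tool)
    sign_off: Callable[[str, str], bool],  # the accountable human's review
    owner: str,                            # the named individual retaining oversight
) -> Callable[[str], Optional[str]]:
    """Wrap a GenAI call so delegation never escapes oversight: every
    prompt/output pair is logged against a named owner, and no output
    is released downstream without explicit human sign-off."""

    def overseen(prompt: str) -> Optional[str]:
        output = generate(prompt)
        # Audit trail: who is accountable, for what, and when.
        audit_log.info(
            "owner=%s time=%s prompt=%r output=%r",
            owner,
            datetime.now(timezone.utc).isoformat(),
            prompt,
            output,
        )
        if not sign_off(prompt, output):
            audit_log.info("owner=%s rejected the output; nothing released", owner)
            return None  # a rejected output never reaches the workflow
        return output

    return overseen
```

Even a sketch this simple shows where the hard questions sit: the wrapper records who is accountable, but it cannot make the reviewer understand a black-box model’s output, which is precisely the gap described above.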
The first cars arrived in the UK in the 1890s. It wasn’t until 1935 that driving tests became mandatory. Since then, the driving test has changed dramatically and has ultimately become harder8. As the technology and our understanding of it have advanced, so have the requirements to be considered a safe road user.
The analogy here is hopefully apparent. New technology requires new tests – be that in the form of guidance or an actual assessment that needs to be passed. It would seem madness when viewed through today’s lens to allow someone to drive a car (with only the threat of punishment if they run someone over) without also mandating that they first pass a specific, detailed test. We don’t let drivers self-select and certify that they have considered all risks and are appropriately trained. GenAI needs a human ‘driver’ who is accountable, and that driver needs to very clearly understand not only the metrics against which they are to be held accountable but also the complexities of the tools they are deploying. Accountability, in itself, isn’t enough.
Whilst there is discussion of sector-specific guidelines on what good governance looks like in the AI age, such guidance is lagging far behind the implementation of the technology. Harking back to the figures at the start of the article – ChatGPT was lightning fast in reaching 100 million users compared to the telephone, and although businesses have been slower to follow suit, adoption rates by business are increasing year on year. What we shouldn’t have is a situation where the technology is evolving and being implemented at rates that far exceed the rate of regulatory change. Or, to put it another way, you don’t want your regulatory response to move at the speed of adoption of the telephone.
How does any of this fit into capital markets? Figures suggest GenAI is being used by financial institutions in a variety of ways. The recent report from IOSCO9 highlights various existing use cases across a range of financial institutions, though interestingly responses from trade groups query whether some of those use cases are accurate, based on their own member feedback10. Whatever the exact adoption rate is, do we, as lawyers, have the necessary understanding of how it might be used?
Realistically the only way to get that understanding is to start asking the questions, to start the conversation. Playing catch-up is always harder but with the speed of technological advancement it becomes almost impossible.
To date, risk factors in offering documents have focused only in a limited way on risks created through the use of AI. They are predominantly concerned with cyber-attack risks or the risk of competition if other businesses gain greater efficiencies through the use of AI. What they haven’t focused on are the risks inherent in the use of AI (and GenAI in particular) itself.
Risk factors in offering documents are designed to ensure that investors are aware of, and can assess, the relevant risks related to their investment, enabling them to make informed investment decisions. Taking securitisation as an example, if any part of the origination or servicing is reliant upon GenAI processes, wouldn’t an investor reasonably want to know about this?
It surely can only be a matter of time before we also see AI risks being incorporated into rating methodologies. One rating agency has noted it as an emerging risk11 but we have yet to see a more widespread, and documented, inclusion of this as part of the ratings process.
The FCA has recently announced the launch of a new AI testing service, to allow firms to explore AI tools in a safe environment and also, crucially, to help fill a knowledge gap on how AI will potentially be used by financial institutions12. In fairness to the regulators, it is difficult to provide guidance on, and parameters for, the safe use of technology that is constantly evolving and of which they have little first-hand knowledge. We need them to catch up fast, though, if we are to avoid the proverbial horse having already bolted.
In the meantime, starting with a clear understanding of ‘why’ a particular GenAI tool is proposed to be used allows most other concerns to be addressed more easily. The ‘why’ should flush out which human activities are being replaced, or supplemented, and how. The ‘how’ then drives the risk assessment and the ultimate monitoring and auditing processes. The understanding of those risks then ultimately drives the disclosure process and, hopefully, the successful and safe integration of these tools.
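Continuing the earlier illustration, the chain this paragraph describes (‘why’ → activities affected → risks → controls and monitoring → disclosure) can be sketched as a single derivation. The wording produced is placeholder text of our own, not suggested risk-factor language.

```python
def derive_disclosure_points(
    why: str,
    activities_affected: list[str],
    risks_and_responses: dict[str, str],  # identified risk -> control/monitoring
) -> list[str]:
    """Illustrative only: walk the chain from the 'why' to candidate
    disclosure points, so nothing surfaced in the risk assessment is
    silently dropped before the disclosure stage."""
    points = [f"Purpose of GenAI use: {why}."]
    points += [
        f"Human activity replaced or supplemented: {activity}."
        for activity in activities_affected
    ]
    points += [
        f"Identified risk: {risk}. Control and monitoring: {response}."
        for risk, response in risks_and_responses.items()
    ]
    return points


if __name__ == "__main__":
    # Hypothetical securitisation example, echoing the question above.
    for point in derive_disclosure_points(
        why="faster first-pass review of servicing reports",
        activities_affected=["initial triage of servicer data tapes"],
        risks_and_responses={
            "hallucinated figures": "human sign-off on all extracted numbers",
        },
    ):
        print(point)
```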
Let’s start that conversation now.
4. https://www.bankofengland.co.uk/financial-stability-in-focus/2025/april-2025