Publication
Global Asset Management Review: Issue 4
Welcome to the fourth issue of Global Asset Management Review.
Global | Publication | December 2025
AI has moved from the margins to the mainstream of asset management. Firms are deploying AI to accelerate research, modernize operations, sharpen risk surveillance, and enhance client service. Most managers are already using, or plan to use, AI to inform asset class research and portfolio decisions.1 A Mercer survey found that 91 percent of managers are engaged with AI – 54 percent are currently using it, while 37 percent plan to incorporate it into their investment strategies or asset class research.2 An EY survey of 100 firms found that 95 percent have scaled GenAI adoption across multiple use cases, with 78 percent already exploring agentic AI for deeper strategic benefits.3
At the same time, regulators have made clear that longstanding rules governing supervision, marketing, recordkeeping, and fiduciary duty apply equally to AI-enabled activity. The result is an environment of significant promise but heightened scrutiny, in which managers who balance innovation with strong governance are most likely to gain a competitive advantage.
Across the investment and operating lifecycle, AI is already proving useful as a companion rather than an autonomous decision-maker. On the investment side, managers are using AI to synthesize large unstructured data sets, summarize earnings calls and filings, compare investment guidelines to portfolio rules, and prioritize signals for human review. These tools can significantly shorten research and quality-control cycles, enabling professionals to focus on strategic decision-making.
In risk and compliance, AI-powered surveillance assists with scale and consistency. Email and communications review, trade surveillance, and behavioral analytics can be triaged by models that flag outliers for escalation, supporting timely detection of potential issues. In marketing and client service, AI can pre-screen communications, including social content and performance claims, to help identify statements that may be unfair, unbalanced, or unsubstantiated. In legal and operations, AI expedites document review, contract clause extraction, privilege tagging, and e-discovery tasks – areas where speed and accuracy can translate into meaningful cost savings.
These use cases share two common traits. First, AI is most effective when constrained to tasks with clear objectives and well-understood boundaries. Second, the strongest results come when human experts remain firmly “in the loop”, supervising inputs and validating outputs.
While there are few prescriptive AI-specific rules for US asset managers today, existing frameworks apply with full force. The Investment Advisers Act prohibits investment advisers from making false or misleading statements about a firm’s capabilities, including claims about AI use. The SEC’s Marketing Rule requires fair, balanced presentation and substantiation of performance claims. That extends to AI-generated materials and to representations about AI-driven forecasts. The Financial Industry Regulatory Authority’s communications standards similarly apply to AI-enabled content and chatbots; guidance emphasizes pre-deployment evaluation, explainability, and ongoing supervision.
Recent enforcement actions underscore two themes. First, “AI washing” – overstating or fabricating the role of AI in investment processes – will attract antifraud scrutiny. Second, hypothetical or AI-enhanced performance claims, if unsubstantiated or improperly presented to the public, can violate the Marketing Rule.
Regulators are also modernizing their own toolkits. Units focused on cyber and emerging technologies have highlighted AI-related risks and are using analytics to detect market abuse. Their posture remains technology-neutral: innovative tools are permitted, but fiduciary duty, supervision, and recordkeeping expectations remain unchanged. Managers must still be able to explain the “why” behind decisions and the “how” behind models sufficient to satisfy oversight inquiries.
Even as AI accelerates workflows, it introduces distinct risks that require active management. Model risk, including errors, bias, or weak explainability, can undermine outcomes and erode trust. Hallucinations and overconfident summaries can produce inaccurate or misleading outputs, especially when models are applied outside their training domain. Over-reliance on AI for nuanced judgment can miss context that experienced professionals would catch.
Data governance is equally central. Using public or consumer-grade tools for sensitive inputs can jeopardize confidentiality or privilege; in some configurations, user prompts and documents may be retained and used to train third-party models. Discovery and recordkeeping obligations also extend to AI-generated content and prompt histories in many contexts; if AI is used within a decision process subject to books-and-records rules, the inputs and outputs should be captured.
Finally, integration risk is real. Poorly specified implementations, weak vendor diligence, or unclear user policies can result in inconsistent practices across business lines. The result can be a perception of disorder, even when the firm’s actual risk controls are sound.
A pragmatic governance approach can unlock AI’s benefits while mitigating downside risk. The most effective programs begin with an inventory of use cases: what tools are in use across research, trading, client service, compliance, legal, and operations; what data they touch; and where human review sits in the workflow. Clear scoping helps identify high-impact, low-risk opportunities and highlights areas requiring tighter controls.
From there, firms should align policies and procedures to existing obligations rather than reinvent the rulebook. Communications generated or screened with AI must meet the same standards as traditional content. Where AI informs investment decisions, contemporaneous documentation should describe data inputs, key prompts or parameters, backtesting or validation steps where applicable, and the human rationale for the final decision.
Tool selection and configuration matter. Enterprise-grade solutions that offer tenant isolation, data control options, and audit logs are generally preferable to public tools for sensitive workflows. Contracting and vendor diligence should evaluate retention settings, model training practices, security controls, export and logging capabilities, and support for prompt/output capture. Where feasible, enable features that preserve an audit trail of inputs and outputs.
Human oversight remains the foundation. Users should be trained to craft precise prompts, anticipate common failure modes, and sanity-check answers against known facts or source documents. For critical outputs such as marketing claims, investor communications, and legal conclusions, AI should augment, not replace, professional review. Periodic testing of models against known benchmarks, plus spot checks for bias or drift, helps sustain confidence over time. Firms should also maintain thorough records of all testing activities, including methodologies, results, and any remedial actions taken, to support oversight and demonstrate compliance.
Finally, governance should be right-sized and iterative. Some firms convene cross-functional committees; others designate an accountable owner within compliance or risk with input from technology and the business. What matters is that someone is paying attention, policies are consistent across disclosures and channels, and the program can evolve as tools and use cases mature.
AI is rapidly becoming an essential component of asset management. Used thoughtfully, it enhances speed, consistency, and insight across the enterprise. Deployed carelessly, it can amplify old risks and create new ones, from misleading claims to data leakage and inadequate documentation. Success lies in pairing ambition with accountability: candid, accurate disclosures about how AI is used; enterprise-grade tooling with appropriate data controls; robust human oversight and recordkeeping; and a governance framework that is practical today and adaptable tomorrow. Managers who strike that balance will be best placed to capture AI’s upside while staying on the right side of evolving regulatory expectations.
© Norton Rose Fulbright LLP 2025