Publication
Global | Publication | May 2024
While AI can help resolve data issues in sustainable investing, it can create problems such as information breaches and inherent bias in data.
Artificial intelligence (AI) has the potential to transform sustainable investing, given its ability to streamline processes and manage large quantities of data. However, there are concerns regarding AI’s application which potential users need to be cautious of.
Imperfect and inaccessible information can plague sustainable investors, favouring either those with the resources to review and understand reams of technical data or those who outsource to certification and ratings providers. Compounding this issue, collection technologies remain under-developed.
A lot of data collection is undertaken manually, meaning that it is time consuming, prone to human error and can lead to important information being overlooked.
Once collected, data must be analysed and reported. Reporting comes in varying formats as global investors strive to comply with a patchwork of overlapping and conflicting legislative frameworks across different jurisdictions. Investment portfolios often span more than one industry, so performance indicators differ between investments, introducing further variation to reporting frameworks.
Partly because of these issues, sustainable investing is arguably facing a backlash in some circles.
Alongside the difficulties outlined above, the backlash is attributed (non-exclusively) to short-termism, negative externalities and political attitudes. Moreover, greenwashing concerns continue to dissuade some investors from transactions labelled as sustainable, even though such investment is badly needed to effect the just transition.
The above issues could be partially resolved with well-placed AI solutions relating to deep learning, specifically large language models (LLMs) and generative AI (GAI).
Arguably, the best use case for AI in sustainable investing is data verification and analysis. Such technologies can identify, analyse, summarise and translate vast amounts of data from multiple structured and unstructured sources into reliable, comparable data in a comparatively short time.
Considering the large amount of data that needs to be collected and presented in multiple formats (each acceptable to stakeholders and regulators), and the increasing demand for more complex data, simplifying the data reporting cycle is crucial. Without accessible and comparatively affordable technologies, only the largest entities can afford AI-powered reporting. Service providers are already pioneering the use of LLMs and GAI to help corporates and investors streamline compliance processes and to provide a platform for investors and companies to build more effective sustainability strategies.
Even managing unmanageable data requires some data to begin with. Data gaps occur where a company cannot ascertain real-life data; such gaps are currently 'filled' using data from a comparable company or a third-party provider. A different form of GAI, predictive modelling, can use existing data to provide more accurate substitutes for data gaps than those currently available. Such modelling also has a wider use case in sustainability: more accurate climate modelling and identifying environmental and physical risks to investments. Paired with satellite technology, predictive modelling can also monitor deforestation, qualitatively assess reforestation carbon credit projects and monitor live greenhouse gas leaks. Providing factually accurate data using GAI can start to address short-termism and negative externalities by connecting unsustainable behaviour with its physical and/or financial consequences.
LLMs can also guard against greenwashing through verification and sustainability diligence. Natural language processing and sentiment analysis can conduct a comprehensive survey of all available media in real time, identifying information which has been missed in reporting, or which is oversimplified or misleading. These technologies can compare the consistency of these results against reported data for a broader, more realistic appraisal.
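The consistency check at the heart of this approach can be sketched very simply: flag external coverage that mentions adverse events absent from the company's own reporting. The cue list and texts below are invented for illustration; production systems would use full NLP or LLM pipelines rather than keyword matching:

```python
# Illustrative only: a toy consistency check between reported claims
# and external media coverage, using a hypothetical list of adverse cues.
NEGATIVE_CUES = {"fine", "fined", "spill", "violation", "lawsuit", "breach"}

def flag_inconsistencies(reported_claims, media_snippets):
    """Return media snippets whose adverse cues are absent from the report."""
    report_words = set(" ".join(reported_claims).lower().split())
    flags = []
    for snippet in media_snippets:
        cues = {w.strip(".,").lower() for w in snippet.split()} & NEGATIVE_CUES
        if cues and not cues & report_words:
            flags.append((snippet, sorted(cues)))
    return flags

claims = ["Zero environmental incidents were recorded in 2023."]
media = ["Regulator issues fine over chemical spill at plant."]
flags = flag_inconsistencies(claims, media)
```

A mismatch between the sunny report and the flagged coverage is exactly the kind of signal a diligence team would then investigate.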
These AI processes make the sustainability reporting lifecycle less cumbersome, and a combination of such technologies is increasingly being implemented in internal processes.
While the use of AI has great potential to increase transparency and accessibility in sustainable investing, it is not a panacea.
AI can remove some of the human effort in complying with multiple deadlines and frameworks but does not solve the fact that there are multiple, complex regulatory regimes. Without a standardised set of certifications, reports or ratings, investors are rarely comparing like with like, let alone appreciating the differences. Any confusion is compounded when, with the assistance of GAI, different reporting obligations are fulfilled using the same data, just with different labels. Broad adoption of fewer frameworks, such as those implementing guidance provided by market bodies such as the International Sustainability Standards Board and non-profits such as the Task Force on Climate-Related Financial Disclosures and the Task Force on Nature-Related Financial Disclosures (and future iterations), could address such issues.
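The "same data, different labels" problem can be made concrete with a small sketch. The framework names and metric labels below are hypothetical placeholders, not the actual ISSB or TCFD disclosure line items, but they show how one internal dataset gets relabelled per regime without its substance changing:

```python
# Hypothetical label mappings; real frameworks define their own
# metrics, units and disclosure structures.
FRAMEWORK_LABELS = {
    "frameworkA": {"ghg_total": "A-1 Gross GHG emissions (tCO2e)"},
    "frameworkB": {"ghg_total": "B.3.2 Total greenhouse gas output"},
}

def render_report(data: dict, framework: str) -> dict:
    """Relabel one internal dataset for a given framework's disclosure."""
    labels = FRAMEWORK_LABELS[framework]
    return {labels[k]: v for k, v in data.items() if k in labels}

internal = {"ghg_total": 4650.0}
```

Two reports built this way contain the identical figure under different headings, which is precisely why readers comparing them across regimes can be misled into seeing differences, or sameness, that are artefacts of labelling.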
Predictive modelling still relies on a minimum level of 'real world' data in order to produce viable results and cannot alone solve the missing data issue; it requires some time-consuming manual data collection for faithful modelling. Data collection methodologies are ripe for robotic process automation using the Internet of Things and process intelligence to streamline the process and improve the veracity of results, while remaining cautious of the governance issues that arise.
Finally, AI does not (yet) have decision-making capability akin to human intelligence, and is still prone to making mistakes when working with language. These mistakes could go undetected, leaving data open to manipulation in any verification process and errors extrapolated in analysis, multiplying reporting and compliance issues. Human oversight to correct any errors, combined with a blockchain network, would maintain verification integrity and help counter the risk of unauthorised data manipulation.
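The integrity mechanism referred to here can be illustrated with a minimal hash chain, the core primitive behind blockchain-style verification: each record's hash incorporates the previous record's hash, so silently altering any earlier entry breaks every later link. This is a toy sketch, not a production ledger:

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a data record together with the previous hash, forming a chain."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Compute the chained hash for each record in order."""
    chain, prev = [], ""
    for rec in records:
        prev = record_hash(rec, prev)
        chain.append(prev)
    return chain

def verify_chain(records, chain) -> bool:
    """Re-derive every hash; a tampered record invalidates all later links."""
    prev = ""
    for rec, expected in zip(records, chain):
        prev = record_hash(rec, prev)
        if prev != expected:
            return False
    return True

records = [{"metric": "scope1", "value": 2400},
           {"metric": "scope1", "value": 2900}]
chain = build_chain(records)
```

Because verification re-derives every link from the raw records, an unauthorised edit cannot pass unnoticed, which is the property that complements human review of AI outputs.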
The moral and ethical issues of using AI have been well documented, but for sustainable investors a few such issues are more prevalent than others.
Using AI technology can be both environmentally and financially costly, with AI's lifecycle and the requisite specialist hardware being expensive and energy-intensive. While training LLM technology consumes the most energy, in one test the whole lifecycle of an LLM powered by low-carbon energy emitted carbon dioxide equivalent to 60 London-to-New-York flights. As the prices of energy and carbon-intensive activity increase, more energy-efficient AI models are needed. Pooling resources in AI development would also be more cost-effective, fairer, and introduce greater competition.
Data privacy is a major concern which is, currently, largely unquantified in financial terms. Machine learning algorithms require masses of personal data, raising questions about data collection, protection and storage. Analysis of data and inferences drawn by AI risk sharing personal data without consent. The EU's Artificial Intelligence Act, on which political agreement has been reached, is ground-breaking legislation specifically regulating the deployment of AI; however, the majority of its provisions will likely not be enforceable until at least two years after the consolidated text is finalised.
AI will not avoid inherent bias in data: if its algorithm is trained on biased data, the technology will learn and reinforce that bias. Users of AI need robust processes to ensure their use of AI is necessary and responsible, as well as to monitor results. Maintaining transparency will help the technology's development and facilitate swift rectification and re-programming of the affected algorithm.
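One simple form the result-monitoring mentioned above can take is comparing outcome rates across groups in a model's outputs; a large gap between groups is a prompt for human review. The groups and outcomes below are invented for illustration, and real bias audits use far more sophisticated fairness metrics:

```python
from collections import Counter

def group_rates(predictions):
    """Share of positive outcomes per group; large gaps suggest learned bias."""
    totals, positives = Counter(), Counter()
    for group, outcome in predictions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical (group, binary outcome) pairs from a model's decisions.
preds = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
rates = group_rates(preds)
```

A disparity like the one above does not by itself prove bias, but it is the kind of signal that should trigger the rectification and re-programming the paragraph describes.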
Some may also argue that the existence of AI poses threats to society itself. Is it ethical to replace a human workforce with technology? Can we control a machine with general intelligence greater than any human? There is a clear case for independent governance of AI and its stakeholders. Global multi-stakeholder institutions including the Partnership on AI, the Institute of Electrical and Electronics Engineers’ Global Initiative on Ethics of Autonomous and Intelligent Systems and United Nations' Multistakeholder Advisory Body on Artificial Intelligence promote civil, democratic and global supervision and avoid a single point of failure.
AI has huge potential to help streamline sustainable reporting and in turn increase confidence and scale sustainable investing. However, a guided and nuanced approach is required to ensure secure and effective AI deployment. While the C-suite’s optimism for AI is already evident, caution should be liberally applied to keep AI’s involvement in sustainable investing clean.
This article was first published in IFLR 1000.
© Norton Rose Fulbright LLP 2023