|Alan Bainbridge||Artificial intelligence remains in the headlines for banks these days, and I’m joined today by Peter McBurney, who heads up our technology consulting group, to try to answer some of the questions that are arising in the market. Peter, how is AI being applied in banks?|
|Peter McBurney||It’s a big and very hot topic across a whole range of operations of banks and financial institutions. Most of us have heard about what happens at the customer-facing end, with programmes like robo-advisors or programmes that use AI techniques to automate recruitment. But AI is also being applied inside bank operations, for instance in high-frequency trading or for cross-selling between different parts of a bank, and even in the support services that underpin those operations. I’ve recently heard of a bank that has developed an AI system that can automatically decide the best order, the best priority, for software upgrades. Software often depends on other software, and the order you do things in can have a big impact on business operations. So it’s across the spectrum of banking activities.|
|Alan Bainbridge||What about ethical or legal challenges for the banking sector?|
|Peter McBurney||There are ethical challenges, particularly with some of the AI methods that use machine learning. One issue that might arise is that the AI system might simply encode a human process and thereby ‘embed’ a bias that exists in that process. If decision-makers for mortgages, for example, have some human bias, then that bias might be embedded in the machine learning that imitates them. But AI also has the capability to acquire its own biases, and there are lots of cases where this has happened without anyone intending it. Another issue is that increasingly we are seeing regulators ask or require that banks and other institutions who use AI programmes be able to explain those programmes’ decisions. Under the MiFID II regulation, for example, any automated decision process that materially impacts consumers or small enterprises has to be able to provide an explanation of how that decision was made, and we are increasingly finding banks and other institutions saying you need a human in the process in order to be able to make that explanation.|
|Alan Bainbridge||What about senior managers in banks, what should they be doing about these kinds of risks?|
|Peter McBurney||I think the biggest danger is to think there is no risk. We would encourage banks and institutions to set up a governance process: a committee that decides which AI systems they will proceed with and which prototypes they will build, and that monitors their development and deployment. In the pharmaceutical industry this is common for drug trials, and the experience there has been that it’s important to have people on that committee who are not part of the internal culture; it is important to have outsiders on that committee, and we recommend the same here. The danger that any individual AI system might have a bias is one issue, but once there are multiple AI systems, the outputs of one system become the inputs of another, and if one of them has a bias, that bias may ricochet all the way through these different systems. So overseeing the whole process, and looking at the flows between these multiple AI systems, becomes a big issue.|
Welcome to Banking Horizons. With insights and commentary from across our global banks group, the series will explore some of the hot topics that we see as being front of mind for our bank clients.
This episode sees Alan Bainbridge, partner and global head of banks, and Peter McBurney, head of technology consulting, discuss artificial intelligence and answer some of the questions that are arising in the market.