AI: Getting your house in order – planning for an intelligent future

Global Publication June 2018

The insurance industry is exploring ways to develop new business models that rely on mining large data sets in order to identify customers, price risk and analyse claims. Not only does the application of artificial intelligence (AI) have the potential to cut costs by reducing headcount, it also has the potential to improve the accuracy and speed of decision-making and to transform business processes. Ultimately, the benefits of AI should be significant for the insurance industry’s customers.

What do we mean by AI? In broad terms, AI is the field of computer science that includes machine learning, natural language processing, speech processing, robotics and similar automated decision-making. AI enables machines to carry out tasks that would otherwise be dependent upon grey matter. These processes span a spectrum of sophistication, from simple automation through to decisions of considerable complexity.

Applications of AI might include the use of ‘chatbots’ to assist customers with insurance applications online and guide them towards tailored products and services. Over time, chatbots can learn from each interaction to enable them to provide better and more sophisticated products as they develop in a real-world environment. Machines can hold far more information about the product and its suitability for the customer than a sales agent can. Customer on-boarding, claims documentation and customer records can be easily stored, searched and analysed with minimal human interaction. This means existing insurance approaches to customer information can be transformed to deliver a much faster and more streamlined experience, often bringing new insights into the customer’s risk profile. Through Big Data analytics, data about the customer can be sourced from a far greater number of sources and analysed with limited or no human intervention.

Connected devices or the ‘Internet of Things’ enable insurers to obtain risk information that helps underwriters understand the risks their customers face with far greater depth and accuracy, because the use of the insured asset (whether a life, vehicle or property) can be monitored over the period of cover. Such devices allow insurers to develop a ‘scorecard’ of customers’ risk profiles, thereby providing more accurate pricing at inception and renewal. Claims handling can be undertaken without human intervention: evidence can be scanned and assessed, and claims paid out (or denied), by machines. Trends in claims can be identified by AI in conjunction with Big Data analytics, leading to far more accurate risk management information and more effective fraud prevention.
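
To make the idea of a telematics ‘scorecard’ concrete, the sketch below (in Python) turns a handful of hypothetical driving measurements into a score and a premium loading. It is a minimal illustration only: the feature names, weights and thresholds are assumptions made for this example, not any insurer’s actual rating model.

    # Illustrative only: a toy telematics 'scorecard' for motor cover.
    # The feature names, weights and thresholds are invented for this example.

    from dataclasses import dataclass

    @dataclass
    class TelematicsSummary:
        miles_driven: float              # miles recorded over the period of cover so far
        harsh_braking_per_100mi: float   # harsh braking events per 100 miles
        night_driving_share: float       # share of driving between 11pm and 5am (0.0 to 1.0)
        avg_speed_over_limit_mph: float  # average speed above the limit when speeding

    def risk_score(t: TelematicsSummary) -> float:
        """Return a score between 0 (lowest risk) and 100 (highest risk)."""
        score = 0.0
        score += min(t.harsh_braking_per_100mi * 4.0, 40.0)   # frequent harsh braking
        score += t.night_driving_share * 20.0                 # late-night driving
        score += min(t.avg_speed_over_limit_mph * 3.0, 30.0)  # habitual speeding
        score += 10.0 if t.miles_driven > 15000 else 0.0      # high mileage
        return min(score, 100.0)

    def renewal_premium(base_premium: float, score: float) -> float:
        """Scale a base premium by the scorecard result (illustrative loading only)."""
        return round(base_premium * (0.8 + 0.6 * score / 100.0), 2)

    driver = TelematicsSummary(9000, 2.5, 0.1, 1.0)
    print(risk_score(driver), renewal_premium(500.0, risk_score(driver)))  # 15.0 445.0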

All of these new applications will rely on access to data. This might be customer data of varying degrees of sensitivity and might also be data gleaned from third party sources, such as social media or public records. In themselves, such data access and analysis would seem to be a good thing, as in theory they lead to better products for customers which can be purchased at a keener price.

Why, therefore, has Tesla and SpaceX founder Elon Musk called work on AI applications “summoning the demon”?1 Musk believes that, without regulation, some AI applications will be harmful to humanity. So, what are the risks, and how might they be managed by firms in advance of prescriptive regulatory requirements?

In 2016 the UK’s Financial Conduct Authority (FCA) launched a Call for Inputs on Big Data in retail general insurance to better understand how large data sets and Big Data analytics were being used in the retail general insurance market, and to understand the risks such applications might pose to consumers. By Big Data, the FCA means the use of new and expanded data sets; the adoption of new technologies to generate, collect and store data; advanced data processing and sophisticated analytical techniques; and the application of this data in business decisions and activities.

The feedback from the Call for Inputs identified increasing risk segmentation, alongside greater price differentiation between customers, as a potential problem. The increasing use of third party data sources could also be a cause for concern, especially in the light of recent data privacy scandals linked to social media companies.2

Risk segmentation occurs where customers who would currently be in the same risk categories are split into an increasing number of risk groups (as access to greater information enables more nuanced risk profiles to be drawn). The result is that, rather than customers sharing risks in a large pool of people, they share risks among a much smaller cohort. Customers who are identified as posing higher risks will pay a much higher premium, as the burden of their risk is not shared among a sufficiently large number of people.
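
A simple worked example, with invented figures, shows the arithmetic behind this. In a single pool everyone pays the same pure premium; once the pool is segmented, the higher-risk group’s premium rises sharply even though total expected claims are unchanged.

    # Toy example of risk segmentation; all figures are invented for illustration.
    pool = [0.02] * 900 + [0.10] * 100   # claim probabilities: 900 low-risk, 100 high-risk customers
    claim_cost = 10_000                  # assumed average cost of a claim

    # One large pool: everyone pays the same pure premium.
    pooled_premium = sum(p * claim_cost for p in pool) / len(pool)

    # Segmented pools: each group pays its own expected claims cost.
    low_premium = 0.02 * claim_cost
    high_premium = 0.10 * claim_cost

    print(f"Single pool premium per customer: {pooled_premium:.0f}")  # 280
    print(f"Segmented low-risk premium:       {low_premium:.0f}")     # 200
    print(f"Segmented high-risk premium:      {high_premium:.0f}")    # 1000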

Furthermore, as risks are increasingly segmented, more people may be priced out of the market. This could happen for a variety of reasons: there is insufficient data available about a person or asset to enable them to benefit from pricing based on large data sets; they refuse to share certain types of personal information; or information about them (whether accurate or not) identifies them as a high risk. The opposite can also be true: those who currently struggle to find cover may benefit from the opportunity to provide more information about their risk profile (for example, young drivers who are willing to use telematics devices to demonstrate that they are careful drivers).

A real-world example of risk segmentation is flood risk, where postcode-based underwriting has left some people unable to obtain insurance cover for their homes; the UK Government responded by establishing Flood Re to ensure that no-one was without access to cover.

Increasing price differentiation occurs where firms are able to charge customers different premiums because of factors other than risk. Access to greater sources of data increases firms’ ability to find price sensitivities. For example, dual pricing exists where long-standing customers who do not switch are charged more than more price-sensitive customers. There is also the potential to analyse customers’ data sets and infer the price a customer is willing to pay, which may differ between two customers who present identical risks. Customers who enter a number of different variables into a price comparison website may be offered cheaper premiums than customers who do not. AI in conjunction with Big Data analytics enables firms to get a much better understanding of how price-sensitive a customer is, and to use this and other data sets to their advantage.
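
As a minimal sketch of how this could play out, the example below applies a hypothetical behavioural margin on top of a pure risk premium, so that two customers with identical risk receive different quotes. The factors and loadings are assumptions made purely for illustration, not a description of any firm’s pricing.

    # Illustrative only: two customers with identical risk can receive different
    # quotes once a hypothetical price-sensitivity margin is applied.

    def quoted_premium(risk_premium: float, years_with_firm: int,
                       used_comparison_site: bool) -> float:
        """Apply an illustrative margin on top of the pure risk premium."""
        margin = 0.15                              # base margin
        margin += 0.05 * min(years_with_firm, 5)   # long-standing customers assumed less price-sensitive
        if used_comparison_site:
            margin -= 0.10                         # active shoppers get sharper pricing
        return round(risk_premium * (1 + max(margin, 0.0)), 2)

    # Same underlying risk, different behaviour, different price.
    print(quoted_premium(400.0, years_with_firm=6, used_comparison_site=False))  # 560.0
    print(quoted_premium(400.0, years_with_firm=0, used_comparison_site=True))   # 420.0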

There are also a number of ethical and legal issues that the insurance industry will need to work through as AI is used in products and services that the insurance industry’s customers themselves offer up for use by consumers or businesses. This will include liability considerations in relation to AI-enabled products (for example, hardware or software, or AI-enabled professional services such as legal services, in addition to more obvious cases such as autonomous vehicles).

Should the application of AI in insurance be regulated? If so, would such regulation be sector-specific in relation to the use of the technology in undertaking particular activities, or would there be generally applicable regulation of the technology itself, regardless of sector or specific application? There is currently no consensus on the issue globally.

From a UK perspective, in October 2017 the UK Government published recommendations from an independent review written by Professor Dame Wendy Hall and Jérôme Pesenti, Growing the Artificial Intelligence Industry in the UK.3 The recommendations did not include the creation of a national regulator for the application of AI, instead preferring that a number of organisations take responsibility for different aspects of the challenges posed by the use of AI. Among the recommendations were:

  • the establishment of a UK AI Council to help co-ordinate strategic growth of AI-based businesses in the UK;
  • the creation of ‘data trusts’ overseen by a Data Trusts Support Organisation, which would lead on best practice in terms of templates, tools and guidance for those who wish to use data; and
  • the role of national institute for AI to be given to The Alan Turing Institute, which would develop a framework to increase accountability and transparency for uses of AI.

In April 2018 the House of Lords Select Committee on AI published a report, AI in the UK: Ready, Willing and Able?, in which the Committee proposed five principles that could become the basis for a shared “ethical AI framework”, and concluded that “while AI-specific regulation is not appropriate at this stage, such a framework provides clarity in the short term, and could underpin regulation, should it prove to be necessary, in the future. Existing regulators are best placed to regulate AI in their respective sectors.”

Clearly, how data is processed and questions of consent are matters for the UK Information Commissioner’s Office, which the UK Government recognises will be tasked with monitoring how personal data is used, in line with its role as the body responsible for the protection of information rights in the UK.

The FCA has statutory authority to ensure that markets work well, with specific focus on the protection of consumers, market integrity and the promotion of competition. Where the application of AI impacts upon consumers (for example if AI makes it harder for vulnerable customers to get cover) the FCA can act to ensure that firms’ use of AI does not conflict with the obligation to treat customers fairly. Similarly, the FCA can take action against the application of AI where it results in competition concerns, for example should the concentration of data sources result in monopolies.

The UK’s Prudential Regulation Authority, which is responsible for regulating insurers from a solvency perspective, has largely been silent on how it views the risks AI might pose to the wider UK economy. It is clear that AI, if deployed in strategic thinking at board level and for capital structuring and solvency purposes (such as reinsurance purchasing), could give rise to systemic issues if it makes poor decisions that humans do not understand.

What is evident is that AI will be applied in many areas of insurance for different purposes. Until guidance is produced by the various organisations with a remit for regulating how AI is applied, firms would be well advised to develop their own internal governance structures for the use of AI in their business.

The following are some possible elements of an internal governance policy for the application of AI within insurance:

Internal AI best-practice

  • ‘Ethical audits’ of algorithms4 – firms should regularly review how their algorithms are performing to ensure that biases are not developing which could cause harm to groups of vulnerable customers or lead AI to make decisions humans do not understand (a minimal example of such a check is sketched after this list).
  • Ethics / conduct committees – in order to provide robust challenge to the AI business model, firms should review the outcomes of the ethical audits and ensure that appropriate action is taken to address emerging concerns. They should also question the use of AI in particular processes to ensure that it is appropriate and will not cause harm to vulnerable customers. Ethics committees should review decisions to apply new AI systems so that they are comfortable that they are designed with human impact in mind.
  • Consider the diversity of the AI designers – before either purchasing off-the-shelf AI or asking for new systems to be designed, firms should review the conduct practices of the provider, and in particular take into account the diversity of those individuals who will be responsible for designing and training the algorithms. Without awareness of how the perspective of the AI designer (and training data) might determine its in-built prejudices (for example, racism, sexism and other conscious or unconscious biases), significant reputational risks can emerge.
  • Disclosure of data sources and decision-making ‘gateways’ – firms should ensure that they understand what information they can and should disclose about their algorithm, whether required under the General Data Protection Regulation (GDPR) or according to their own internal policy. The EU data protection authorities working together as the Article 29 Working Party (WP29)5 have clarified how the GDPR will affect disclosure in this respect. The WP29 recommends that data controllers find simple ways to explain the rationale or the criteria relied upon in reaching automated decisions, without necessarily being required to explain the algorithm (the second sketch after this list illustrates one way such a rationale might be produced). Customers should be able to request information about the ‘gateways’ through which decisions are made in relation to risk profiling in underwriting or claims handling, even if simply in terms of the types of information that can determine the customer’s profile (this may be challenging when opaque AI ‘black boxes’ are used). If the algorithm relies heavily upon data from customers’ social media profiles, for example, this should be transparent. Over the past few years credit reference agencies have published more information about which factors influence a customer’s score; the same should be true for insurance, so that customers are able to challenge the application of the algorithm to their particular set of facts where necessary. For more information on data protection and AI, see Norton Rose Fulbright’s blogpost, Data Privacy: AI and the GDPR.
  • ‘Data governance’ should be part of ‘product governance’ – we have seen over the past few years how regulation has embraced product governance. For example, the Insurance Distribution Directive requires that the manufacturer of a policy monitors how the product works for its target audience. The application of AI and its effects on a target customer base should be included within product governance. Where negative outcomes emerge for particular groups of customers from the use of AI, this should be addressed.
  • Rights to appeal AI decision-making – if a customer believes that a decision made by the algorithm is wrong, they should be able to request that the decision be reviewed by a human. This should bring complaints about AI decisions within the existing complaints handling procedure and reassure customers that AI is there to assist, not to control, the business.
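
To illustrate the ‘ethical audit’ suggested above, the following minimal sketch compares model outcomes across two customer groups and flags a gap above a chosen tolerance for escalation. The group labels, the tolerance and the decision data are assumptions for this example; a real audit would use a fuller set of fairness measures and proper statistical testing.

    # A minimal sketch of an 'ethical audit' check: compare outcomes across customer
    # groups and flag large gaps for review. Group labels, the tolerance and the
    # decision data are assumptions made for this example.

    from collections import defaultdict

    def outcome_rates(decisions):
        """decisions: iterable of (group_label, accepted) pairs."""
        counts = defaultdict(lambda: [0, 0])   # group -> [accepted, total]
        for group, accepted in decisions:
            counts[group][0] += int(accepted)
            counts[group][1] += 1
        return {g: accepted / total for g, (accepted, total) in counts.items()}

    def audit(decisions, max_gap=0.10):
        """Flag if the gap between the best- and worst-treated group exceeds max_gap."""
        rates = outcome_rates(decisions)
        gap = max(rates.values()) - min(rates.values())
        return rates, gap, gap > max_gap

    sample = [("group_a", True)] * 90 + [("group_a", False)] * 10 \
           + [("group_b", True)] * 70 + [("group_b", False)] * 30
    rates, gap, flagged = audit(sample)
    print(rates, round(gap, 2), "escalate to ethics committee" if flagged else "no action")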
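
And to illustrate the kind of decision ‘gateway’ disclosure discussed above, this second sketch turns a toy additive scoring model into a plain-language rationale that could be shared with a customer on request. The factor names, weights and threshold are invented for the example and do not reflect any particular firm’s model or the WP29’s prescribed approach.

    # Illustrative only: turning a toy additive scoring model into a plain-language
    # rationale for a customer. Factor names, weights and the threshold are invented.

    FACTORS = {
        "years_claim_free":    ("Years without a claim", -15),
        "previous_claims":     ("Claims in the last five years", 40),
        "property_flood_zone": ("Property located in a flood risk area", 60),
    }

    def explain_decision(customer: dict, decline_threshold: int = 100) -> str:
        contributions = [(label, weight * customer.get(key, 0))
                         for key, (label, weight) in FACTORS.items()]
        score = sum(impact for _, impact in contributions)
        outcome = "declined" if score >= decline_threshold else "accepted"
        # Report the two factors with the largest effect on the decision.
        drivers = sorted(contributions, key=lambda c: abs(c[1]), reverse=True)[:2]
        reasons = "; ".join(f"{label} (impact {impact:+d})" for label, impact in drivers)
        return f"Application {outcome} (score {score}). Main factors: {reasons}."

    print(explain_decision({"years_claim_free": 2, "previous_claims": 1, "property_flood_zone": 1}))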

There is limited available guidance on how best to ensure that customers are treated fairly by the application of AI to insurance business models. The FCA is yet to make any recommendations. Until official guidance is published, firms should determine their own internal policies to ensure that their usage of AI is appropriate for their target market and delivers fair outcomes for customers.


Footnotes

1 Elon Musk was interviewed in October 2014 at the MIT AeroAstro Centennial Symposium.

2 ‘Facebook’s Zuckerberg responds to Cambridge Analytica scandal’ (Financial Times, March 21, 2018).

3 Growing the artificial intelligence industry in the UK, Professor Dame Wendy Hall and Jérôme Pesenti (October 15, 2017).

4 An idea proposed by data scientist Cathy O’Neil in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Penguin, 2016).

5 Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, p. 14.


