Constrained by unfavourable macroeconomic factors (including historically low interest rates), lower premiums, more claims, the harsher regulatory capital requirements imposed by Solvency II and the Prudential Regulation Authority, and a soft market inundated with excess capital, insurers are under increasing pressure to price more competitively and estimate reserves more precisely – which means assessing risks more accurately.
These broader market and regulatory conditions, coupled with changing consumer preferences and wider cultural trends, have led insurers to harness technology to reduce risks, cut costs, and drive profit and growth. These objectives are at the centre of the industry's attempts to harness the power of “Big Data” – the aggregation of large amounts of data derived from exploitable data sources – to analyse consumer behaviour. The sources of such data, in particular social media platforms, and the ways that information could be used to fulfil customer expectations and to set premiums, are matters which regulators and lawmakers are struggling to address in a coherent and modern way.
Computer algorithms can be used as proxies for underwriting calculations. Previously untapped data can be analysed to obtain a richer and more accurate assessment of an individual as a risk. For example, a potential insured’s social media posts can be analysed digitally to determine whether that person is likely to be conscientious behind the wheel. The concept is simple: if the data retrieved from social media and other sources indicate that the prospective insured is a careful driver, the premium can be adjusted to reflect that risk. “Posts” and “likes” can give the insurer insight into personality traits that it associates with good driving. But where should the boundary lie between consensual use of personal data and intrusive use of personal information? Certainly, falling on the wrong side of this line might expose insurers and brokers that use personal data to a barrage of public criticism – not least from human rights campaigners and data privacy and digital rights groups, but also from consumers and regulators – the combined effect of which could sink Big Data initiatives involving social media and dent profits.
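The pricing concept described above can be sketched in a few lines of code. This is an illustrative toy only – no insurer's actual model – and the function name, discount and loading caps are hypothetical assumptions for the sake of the example: a behavioural risk score between 0 (very careful driver) and 1 (high risk) scales a base premium between a maximum discount and a maximum loading.

```python
# Illustrative sketch only (hypothetical function and parameters, not
# any insurer's actual pricing model): a behavioural risk score in
# [0, 1] scales the base premium between a discount and a loading.

def adjust_premium(base_premium: float, risk_score: float,
                   max_discount: float = 0.2, max_loading: float = 0.3) -> float:
    """Scale the premium linearly between a maximum discount
    (risk_score = 0) and a maximum loading (risk_score = 1)."""
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk_score must be between 0 and 1")
    factor = (1 - max_discount) + risk_score * (max_discount + max_loading)
    return round(base_premium * factor, 2)

print(adjust_premium(500.0, 0.0))  # careful driver: 400.0
print(adjust_premium(500.0, 1.0))  # risky driver: 650.0
```

On this sketch, a neutral score of 0.4 leaves a £500 premium unchanged; everything of substance – how the score itself is derived from “posts” and “likes” – is precisely where the legal and ethical questions below arise.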
The appetite within the insurance industry to see how Big Data might be mined to price products more accurately attuned to individual consumer behaviour is inescapable. However, the notion of harvesting and using social media data – which may include “personal data” heavily regulated by national and EU legislation – to determine insurance eligibility and set premiums raises a number of legal (and policy) concerns. Given the exponential pace of technological advancement, law and regulation can address these issues only up to a point, begging the question: what regulatory boundaries and ethical parameters will, in future, be applied to insurers in the deployment of data derived from unconventional sources like Facebook, Twitter or Instagram? This article seeks to answer this question.
Big Data is part of the wider ecosystem of “InsurTech”, the developing union between insurance and technology. The most prevalent sources for Big Data are the ubiquitous “connected” devices like telematic (or “black-box”) sensors installed in vehicles, location-based sensors fitted in offices and homes, and wearable devices such as watches and step counters. These devices, now regularly offered alongside motor, home and life policies, are capable of collecting, storing and transmitting vast quantities of real-time, objective and unfiltered data, which can be used to construct an individual profile of a policyholder’s behaviour and the risks associated with his habits.
In addition to the information received from telematic devices, which provide data on physical hazard, insurers are naturally interested in data sources that also provide information on moral hazard – what makes the insured tick, how they perceive risk, their honesty and their likelihood of committing fraud. Accordingly, data sources such as social media can be used to produce a “personality-based risk assessment” constructed from an applicant’s interests, purported level of organisation and other characteristics. “Liking” a particular athlete, writing in concise sentences, publishing lists and using exclamation marks are among the signals that one UK insurer considered using to generate a personality type for a prospective policyholder and then predict how safe his driving habits would be – for example, a customer’s posts featuring superlatives like “always” or “never” might imply overconfidence, a trait at least one study has concluded is emblematic of risky driving. Insurers could seek to rely on such data (and the science purportedly linking personality traits to driving) to adjust premiums. A number of major UK insurers have taken similar steps by offering reduced premiums (a) in return for more data derived from connected devices and (b) as a reward for good driving and healthier lifestyles.
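The linguistic signals described above – superlatives, exclamation marks, sentence brevity – are straightforward to extract mechanically, which is part of what makes this kind of profiling so tempting. The sketch below is a hypothetical illustration (the word list, feature names and any thresholds built on them are assumptions, not any insurer's actual methodology) of how simply such features can be pulled from a post:

```python
import re

# Hypothetical feature extractor mirroring the signals described in the
# text (superlatives such as "always"/"never", exclamation marks,
# sentence brevity). The word list and feature names are illustrative
# assumptions only, not any insurer's actual methodology.

SUPERLATIVES = {"always", "never", "definitely", "best", "worst"}

def extract_features(post: str) -> dict:
    words = re.findall(r"[a-z']+", post.lower())
    sentences = [s for s in re.split(r"[.!?]+", post) if s.strip()]
    return {
        "superlative_count": sum(w in SUPERLATIVES for w in words),
        "exclamation_count": post.count("!"),
        "avg_sentence_words": (len(words) / len(sentences)) if sentences else 0.0,
    }

print(extract_features("I always drive fast! Never had a crash!"))
```

The ease of the extraction is exactly the point: the contested question is not whether such counts can be computed, but whether they are robust evidence of anything about driving.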
Although consumers are likely to be drawn to the prospect of the reduced premiums that access to Big Data has the potential to offer, there are a number of legal concerns which merit careful consideration, principally in the areas of data privacy, confidentiality, cyber security, intellectual property and even competition. Though these matters are outside the scope of this article, it is worth noting that some of the data that would be analysed might constitute “personal data” under the Data Protection Act 1998 (the DPA). As data controllers under the DPA, insurance companies must take care in how they obtain the data and be transparent about its usage. The market will soon be subject to a stricter regime in the form of the EU General Data Protection Regulation, which sets out more robust obligations on companies harvesting data, including from digital sources like Facebook.
Beyond data protection law, use of social media raises regulatory concerns over pricing practices and risk segmentation. Extensive usage of Big Data (as well as the transmission of data over connected devices) in underwriting has the potential to segregate the risk pool, resulting in certain consumers being unable to obtain or afford insurance. Big Data initiatives could also penalise loyal (and inert) customers, who, satisfied with their existing policies, may be less likely to “shop around” for more competitive quotes. To the extent that gathering and using social media data were to become commonplace in underwriting, the transfer of such data between firms could become a logistical and regulatory nightmare, for example, on a portfolio transfer.
Data mining from social media, in particular, could result in indirect discrimination unwittingly creeping into underwriting decisions. Scouring through social media and classifying a Facebook post or tweeted information as “high risk” could reproduce unconscious bias against structurally disadvantaged groups of people with protected characteristics like race, sex, religion, disability, sexual orientation or gender identity and exacerbate existing inequalities facing those groups. There is also the potential for data derived from social media to be used to charge a certain category of customer higher premiums which do not reflect his actual risk profile or the cost of providing the insurance (e.g., simply because the customer has the willingness or ability to pay more for his insurance).
From the insurer’s perspective, there is also an inherent risk in using social media data: customers may eventually “game” the system, learning what types of information to publish on social media in order to procure a lower premium, which would seriously undermine the underwriting process.
The Financial Conduct Authority (the FCA) considered several of these issues in its September 2016 Feedback Statement on the use of Big Data in the retail general insurance sector. However, with the exception of this publication, the corresponding “Call for Inputs” from stakeholders on the issue and various speeches, there has been a palpable lack of responsiveness from the regulator on the increasing use of Big Data by insurers, which, initially, seems surprising, given the complex and heavily regulated nature of insurance. This should not, however, imply a lack of regulatory interest in the area. Rather, it is reflective of the inability of law and regulation to match the pace of technological innovation. The FCA’s full-scale regulatory market studies, which take months to produce, are simply not viable options. As they continue to develop and launch their Big Data and other InsurTech plans, insurance firms are looking for sufficient comfort now that they are not falling foul of the rules to which they are subject. This need for expediency could result in the publication of industry-specific Codes of Practice from the ABI as guidance on the parameters for using social media (and Big Data more generally) in assessing and pricing risks, an approach adopted by the market in relation to predictive genetic testing a few years ago.
The current uncertainties and concerns set out in this article are not entirely dissimilar to those identified at the time predictive genetic test results were increasingly being used in the market to price life insurance. Predictive genetic tests can be used to predict future illness – such data is valuable to insurers, as it enables them to assess the policyholder’s risk more accurately and set a level of cover more closely aligned to that risk. However, as with social media data, there is an inherent danger of risk segmentation and disparate pricing practices.
In its joint 2014 Concordat and Moratorium on Genetics and Insurance with the Government, the ABI promulgated an overarching policy framework for cooperation between the Government and insurers on the use of genetic information within the underwriting process. The Code, though not legally binding, set boundaries which the industry agreed not to cross – in particular that (a) predictive genetic tests should not be taken into account when deciding cover unless it was to the insured’s advantage and (b) use of such tests must be transparent, fair and subject to regular review to ensure that consumers retain rights of access to life insurance. The Code further addresses the delicate balance between this right of the insured and the right of the insurer to information material and relevant to underwriting the risk (i.e., disclosure). Of particular application to Big Data social media initiatives is the Code’s approach to the relationship between data and insurance underwriting – namely that it must be proportionate and based on robust evidence. Therein lies a primary concern: dubious signals, such as whether a policyholder prefers Beyoncé to Dolly Parton or uses “LOL” in the social media vernacular, could be used to determine whether the insured is a risky driver. Despite the alleged link between personality traits and driving, the data which firms may be tempted to use cannot predict policyholder habits with the same level of certainty as a genetic test, or even as the level of exercise transmitted to an insurer from a wearable device. It is this lack of sound evidence, together with the regulatory and ethical concerns highlighted above, that may sink attempts to rely on algorithms for character type and risk in underwriting.
The use of data from social media could enable long-term and general insurers to offer more personalised coverage to consumers at a premium aligned more closely to the customer’s actual risk profile. Given the regulatory uncertainties, it is not surprising that insurers are treading carefully and, at present, are only offering reduced premiums to reward good habits. Indeed, there are numerous challenges to navigate before insurers can even think of requiring consumers to pass on more data as a condition precedent to underwriting the risk at a normal premium, or of using the data to impose unjustified exclusions. As technology continues to advance, the boundaries are likely to be set by a Code of Practice and each insurer’s own policies. In the interim, insurers will continue to launch their Big Data initiatives; however, they will need to ensure that such plans are fair, reasonable and transparent; otherwise, they risk losing customers, regardless of how admirable or beneficial to the insured those initiatives may initially appear.
© Norton Rose Fulbright LLP 2021