“Artificial Intelligence” is defined as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”
When I first signed on to write this article, I (with human ignorance) employed all my human cognitive brain cells to author and shape its form. However, knowing what I know now, I ought in hindsight to have deployed artificial intelligence (such as an IBM Watson system) so that I could simply input the title of the article and flick the switch, and the masterpiece of an article would have been written for me with my name at the end. Soon we will be replaced: our brains will atrophy; and the administration of justice will be achieved by remote robotic pieces of machinery. Until then, however, I am burdened with the task of having to write this article and, unless you have already been replaced by artificial intelligence, you will likewise be burdened by the reading of it.
According to Stephen Hawking, “[t]he development of full artificial intelligence could spell the end of the human race.” If so, we may only have a limited time before we are replaced by machines and relegated to tending our gardens and clipping our rhododendron bushes, but hopefully that will not come in our lifetime. For insurers and insureds, delving into speculation or spheromancy as to our existential purpose once we have been displaced by machines will not pay dividends. Nevertheless, the deployment of artificial intelligence in underwriting and claims handling operations might yield significant benefits.
For example, in December 2016, a Japanese insurer reported that it was planning to reduce its payment assessment department’s human staff by 30% after introducing an artificial intelligence system to improve efficiency. It was reported that the “cognitive technology can think like a human” and “can analyze and interpret all of your data, including unstructured text, images, audio and video.” It was proposed that the artificial intelligence system would read medical certificates written by doctors, together with other documents and information necessary for paying claims, and check the coverage clauses of the insurance contracts issued so as to prevent overpayments.
As novel, avant-garde artificial human simulation systems are developed and manufactured for sale, insurers and insureds are increasingly considering their deployment in underwriting and claims handling processes in order to streamline costs and improve efficiency.
In the underwriting process:
- Insurance companies are considering the possible introduction and use of artificial cognitive technology in order (among other things) to: (i) evaluate loss and other underwriting information, (ii) decide whether to underwrite a particular risk, and (iii) calculate the appropriate premium for a particular risk.
- Insureds are considering the possible introduction and use of artificial intelligence systems in order (among other things) to: (i) collate loss, financial and other relevant company information as part of their underwriting submission materials, (ii) evaluate their prior loss history and future likely exposures, and (iii) determine the necessary limits and type of coverage which are required.
In the claims handling process:
- Insurance companies are considering the possible introduction and use of artificial intelligence systems in order to administer claims from the first notice of loss through to resolution, including but not limited to: (i) the review/ analysis of documentation and information submitted by the insured, (ii) the evaluation of potential coverage issues and defences, (iii) the setting of appropriate reserves, and (iv) the assessment of the impact of particular insurance clauses upon the claim.
- Insureds are considering the possible introduction and use of artificial intelligence in order (among other things) to: (i) determine when to give notice of claims, occurrences, and integrated or batch occurrences, (ii) provide notice to the insurer of a claim, occurrence or integrated or batch occurrences, and (iii) determine whether to settle underlying claims and if so, for what amount.
In the event that artificial intelligence is deployed by an insured or an insurer in underwriting and claims handling operations, a number of insurance coverage issues might arise in light of the fact that artificial cognitive technology is making decisions which are typically made by human beings (e.g., an executive officer or risk manager of an insured, or, in the case of an insurer, an underwriter and/or claims professional). As a result, facts and matters that would typically be within the subjective knowledge of a particular individual are instead within the “knowledge” of artificial cognitive technology.
This article explores: (i) the potential insurance coverage issues that might arise under excess casualty insurance policies such as the Bermuda Form Policy in the event that artificial intelligence is deployed in claims handling or underwriting activities by an insured or insurer, (ii) the difficulties involved in dealing with those issues, and (iii) a practical analysis as to how those issues might be dealt with.
I should note, by way of caveat, that this article is not intended to provide an exhaustive list of, or answer to, the insurance coverage issues that might be implicated. Rather, it is intended to provide an overview of some of the potentially challenging issues that might arise and how they might be approached by insurers and insureds in what is still very much an innovative and developing area. Indeed, the ultimate impact of artificial intelligence upon the insurance world remains to be seen in relation to matters such as: (i) the insurance coverage issues that might arise, (ii) new insurance products and/or policies that are written, and (iii) more generally, the landscape of the insurance industry and whether human involvement will be replaced by artificial intelligence machinery, as Stephen Hawking predicts.
Summary of insurance coverage issues
Some of the key insurance coverage issues that might arise may be summarized as follows:
- Misrepresentation / non-disclosure: Can an insurer seek rescission of an insurance policy for material misrepresentation and/or non-disclosure in circumstances where artificial intelligence evaluated the answers to the questions on an excess liability application form? In particular, can an insurer establish that a particular misrepresentation was material and that it would not have written the risk either at all or on the terms upon which it was written in circumstances where the decision to write the risk was made by artificial intelligence?
- Expected / intended: Can insurance coverage be denied on the basis that the insured expected and/or intended the injuries or damage (or a level or rate of injuries or damages) in circumstances where an artificial intelligence system had or was deemed to have “knowledge” of the historical facts and information relating to the injuries and damages that the insured might expect or intend?
- Notice issues: Can an insured be denied coverage for late notice of an occurrence and/or an occurrence likely to involve the policy in circumstances where artificial intelligence was responsible for evaluating when to give notice of (and for giving notice of) an occurrence likely to involve the policy to the insurer?
Each of these issues is considered, in turn, below. It is assumed, for the purposes of this article, that the governing law of the policy is (as is typically the case in Bermuda Form policies) that of New York.
The intrinsic problem underpinning the issues identified above is that each involves either a subjective inquiry as to facts and matters that are within the actual knowledge and understanding of the insured or the insurer, or alternatively, an objective inquiry as to matters of which the insured or insurer ought to have been aware and how a prudent insured or insurer ought, therefore, to have acted. The fait accompli presented to any tribunal is that it is the artificial intelligence machinery which has those facts and matters within its “knowledge” and/or “understanding.” The question is: how can one prove the subjective knowledge of an artificial human simulation system? Unless technology advances significantly in the very near future, it is doubtful that artificial intelligence can be called upon to give testimony to explain the facts and matters which were within its knowledge, how it acted and why. Does this mean that an insured or insurer can, therefore, never be responsible for facts and matters that are within the knowledge of a machine? Or will the software of the artificial intelligence machine provide the answers by the results of simulated operations (the accuracy of which might have to be checked and evaluated by another artificial intelligence machine employed by the opposite party)?
Moreover, the question arguably becomes more difficult still if one were to apply an objective test: how can an objective test be applied to artificial intelligence? Against what objective standard would one judge the artificial intelligence?
The starting point in determining each of the issues identified above (i.e., issues which involve an element of subjectivity or objectivity) is likely to be to personify the artificial intelligence and treat it as though it were a human being or the individual person of the insured or the insurer. The questions one has to ask are: (i) what facts and matters were within the “knowledge” of the artificial intelligence system at the time that it made the relevant decisions; (ii) what processes did it undertake in order to arrive at its ultimate decision (e.g., was a computer programming sequence or cognitive process of analysis deployed); and (iii) what caused it to ultimately act in the manner that it did?
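By way of illustration only, the following sketch (in Python; all names, figures and thresholds are invented assumptions rather than a description of any actual system) shows how an artificial intelligence underwriting system might record, at the time of each decision, the three matters identified above: the facts within its “knowledge,” the process it undertook, and what caused it to act as it did, so that those matters could later be reconstructed and proved.

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: an underwriting decision wrapper that preserves an
# audit trail of (i) the facts before the system, (ii) the analytical steps
# taken, and (iii) the stated basis for the ultimate decision.

def underwrite(submission: dict) -> dict:
    """Evaluate a risk and return a decision plus a reviewable audit record."""
    steps = []
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "facts_known": submission,   # (i) everything provided on placement
        "steps": steps,              # (ii) same list object, filled in below
    }

    # An illustrative rule sequence (a real system would be far more complex).
    reported_losses = submission["reported_losses"]
    steps.append(f"reported losses of ${reported_losses:,} reviewed")

    if reported_losses > 40_000_000:     # hypothetical declinature threshold
        steps.append("losses exceed declinature threshold")
        record["decision"] = "decline"
        record["basis"] = "loss history"          # (iii) cause of the decision
    else:
        premium = 1_000_000 + reported_losses * 0.07   # hypothetical rating rule
        steps.append(f"premium rated at ${premium:,.0f}")
        record["decision"] = "write"
        record["premium"] = premium
        record["basis"] = "loss history within appetite"

    return record

# The stored record is what could later be produced in evidence:
print(json.dumps(underwrite({"reported_losses": 7_000_000}), indent=2))
```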
Against this background, I proceed to address each of the issues mentioned above, in turn.
Rescission for misrepresentation
New York law on misrepresentation is governed by Insurance Law § 3105 which provides, in pertinent part, as follows:
- A representation is a statement as to past or present fact, made to the insurer by, or at the authority of, the applicant for insurance or the prospective policyholder, at or before the making of the insurance contract as an inducement to the making thereof. A misrepresentation is a false representation, and the facts misrepresented are those facts which make the representation false. See N.Y. Ins. Law § 3105(a).
- No misrepresentation shall avoid any contract of insurance or defeat recovery thereunder unless such misrepresentation was material. No misrepresentation shall be deemed material unless knowledge by the insurer of the facts misrepresented would have led to refusal by the insurer to make such contract. See N.Y. Ins. Law § 3105(b) (emphasis added).
- In determining the question of materiality, evidence of the practice of the insurer which made such contract with respect to the acceptance or rejection of similar risks shall be admissible. See N.Y. Ins. Law § 3105(c).
A non-disclosure (or partial disclosure) by the insured in response to an inquiry by the insurer constitutes a misrepresentation as well as a non-disclosure. See Chicago Ins. Co. v. Kreitzer & Vogelman, 265 F. Supp. 2d 335, 343 (S.D.N.Y. 2003) (citing Mutual Benefit Life Ins. Co. v. Morley, 722 F. Supp. 1048, 1051 (S.D.N.Y. 1989) (“Morley”) (“[t]he failure to disclose is as much a misrepresentation as a false affirmative statement”)).
New York law entitles an insurer to rescind an insurance policy (which is then void ab initio) “if it was issued in reliance on material misrepresentations.” See Fid. & Guar. Ins. Underwriters, Inc. v. Jasam Realty Corp., 540 F.3d 133, 139 (2d Cir. 2008) (emphasis added); see also Interboro Ins. Co. v. Fatmir, 89 A.D.3d 993 (N.Y. App. Div. 2011).
The burden of establishing the existence of a material misrepresentation is on the insurer. In order to demonstrate materiality:
- An insurer must show that the misrepresentation induced it to accept an application that it might otherwise have refused. See Vella v. Equitable Life Assur. Soc., 887 F.2d 388, 392 (2d Cir. 1989); Mut. Benefit Life Ins. Co. v. JMR Elecs. Corp., 848 F.2d 30, 32 (2d Cir. 1988).
- The insurer need not show that it would have refused to issue any policy at all, but only that it would not have issued the particular policy in question on the same terms. Morley, 722 F. Supp. at 1051; Chicago Ins. Co., 265 F. Supp. 2d at 343. In this regard, a fact is material as a matter of law if it could reasonably be considered as affecting the insurer’s decision to enter into the particular policy at issue. See Geer v. Union Mut. Life Ins. Co., 7 N.E.2d 125, 127, 129 (N.Y. 1937).
- The relevant inquiry may involve a question of fact in the particular circumstances of each case, to be determined as necessary by reference to the views and practices of the particular underwriter who issued the policy. See N.Y. Ins. Law § 3105(c).
- Evidence regarding the insurer’s underwriting practices is admissible including underwriting manuals, bulletins or rules pertaining to similar risks, to establish that the insurer would not have issued the same policy if the correct information had been disclosed in the application. See Cont'l Cas. Co. v. Marshall Granger & Co., LLP, 6 F. Supp. 3d 380, 390 (S.D.N.Y. 2014) (citing Curanovic v. N.Y. Cent. Mut. Fire Ins. Co., 307 A.D.2d 435, 438 (N.Y. App. Div. 2003)).
Hypothetical 1: Artificial Intelligence is the underwriter
In order to conceptualize the issues that might arise, let us assume the following hypothetical:
- The insurer “Bright Light” procures an artificial intelligence system to assist it in evaluating and underwriting insurance policies on its behalf. The artificial intelligence system is responsible for, among other things: reviewing the underwriting submission materials, evaluating the insured’s potential exposures (including the loss information submitted), and pricing the risk.
- In 2000, an insured “Rainbow” sought excess liability insurance coverage up to a limit of $50 million excess of $50 million from Bright Light. Question 5 on the excess liability application form (“Application Form”) asked whether Rainbow had incurred defense costs or damages in excess of $5 million in relation to any particular occurrence or claim and if so, to give details thereof.
- Rainbow reported that it had incurred defense costs and damages in relation to a number of personal injury and property claims arising from a chemical that it manufactured called “DCM” (an organochlorine compound) in the sum of $7 million.
- Bright Light’s artificial intelligence system reviewed the underwriting submission materials and priced the premium for the policy at $1.5 million. Bright Light issued the insurance policy to Rainbow.
- From 2000 onwards, the artificial intelligence system continues to underwrite the same risk on behalf of Bright Light, on an annual basis, up to the present date. Rainbow advises Bright Light each year that the claims arising out of DCM (“DCM Claims”) are not likely to reach the attachment point of the policy.
- In 2017, the DCM Claims amount to $150 million (i.e., well in excess of the insurance policy’s full limits). It also transpires that Rainbow had in fact incurred defense costs and damages in the sum of $45 million at the time that the Application Form was completed in 2000 and that Rainbow had annually reported an incorrect (and substantially lower) figure in response to Question 5 on the Application Form from 2000 onwards.
- Rainbow’s representation as to the defense costs and damages that had been incurred by it in 2000 (i.e., $7 million) and all subsequent answers to Question 5 on the Application Form were therefore false.
- Bright Light contends that the figures that had been reported by Rainbow on the Application Form from 2000 onwards were false and that it would not have written the risk on the terms on which it had been written and/or would have charged a higher premium had it known: (i) that the defense costs and damages had in fact amounted to $45 million as early as 2000, and (ii) what the correct figures had been from 2000 onwards.
- Bright Light seeks to rescind the insurance policies for misrepresentation.
As a matter of New York law, the critical question will be whether Bright Light can satisfy the test of materiality (i.e., inducement). In other words, can Bright Light prove that it would have acted differently (e.g., by not writing the policy at all or by imposing different terms or charging a higher premium) had the true figures been disclosed and the misrepresentation had not been made in circumstances where artificial intelligence evaluated and made the decision to underwrite the risk?
In a typical Bermuda Form arbitration, the actual underwriter who underwrote the risk would give evidence as to: the documentation and information that was provided upon placement of the policy; his or her evaluation of the risk that was being written, including the pricing of the risk; and whether he or she would have written the risk and, if so, on what terms. For the reasons set out above, this is likely to be impossible where artificial intelligence underwrote the risk and there was no specific human involvement.
In this event, in order to determine materiality/ inducement, the following factors (which are similar to those one would consider if one were dealing with an actual underwriter) should be considered:
- Firstly, what facts, documentation and/or information were provided to the artificial intelligence system and thus could be said to have been within its “knowledge”?
- Secondly, what cognitive or analytical processes were utilized by the artificial intelligence system in order to evaluate the risk and on what terms? Although an actual underwriter would be able to say how he or she would have evaluated the risk, an artificial intelligence system likely employed some form of cognitive technology in order to evaluate the risk (e.g., computer software or other manuals might exist to show how the cognitive technology worked and evaluated risks).
- Thirdly, if in response to Question 5 on the Application Form, the figure of $45 million had been reported in 2000 and the correct figures had been reported thereafter, would the artificial intelligence program have declined to write the risk, imposed other terms or conditions, or charged a higher premium? If a computer program or algorithm was designed to evaluate risks based upon a particular set of facts, figures and/or information, then presumably evidence could be given (query, by a human) to show whether, if it had been given the correct information, the artificial intelligence system would have rejected the risk, charged a higher premium for the policy and/or agreed to other terms or conditions of the policy (see the illustrative sketch following this list).
- Fourthly, what other similar risks were underwritten by the artificial intelligence system? As highlighted above, as a matter of New York law and § 3105(c) of the New York Insurance Law, despite persuasive evidence by an actual underwriter, materiality is unlikely to be satisfied unless documentary evidence of the insurer’s underwriting practices including underwriting manuals, guidelines or other information pertaining to other similar risks is also proffered. In the same vein, similar documentary evidence could be proffered in relation to an artificial intelligence system in respect of other risks which were similarly evaluated by it.
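The following minimal sketch (Python; the rating rule, declinature threshold and figures are invented for illustration and do not purport to reflect any insurer’s actual algorithm) illustrates the third factor above: if the underwriting program is deterministic, it can simply be re-run with the figures as represented and as corrected, and the two outcomes compared in order to test materiality/ inducement.

```python
# Hypothetical sketch: re-running a deterministic rating algorithm with the
# figures as represented and as corrected, to test materiality/inducement.

def rate_risk(reported_losses: float) -> dict:
    """Invented rating rule standing in for the insurer's actual algorithm."""
    if reported_losses > 40_000_000:        # hypothetical declinature threshold
        return {"decision": "decline"}
    return {"decision": "write",
            "premium": 1_000_000 + reported_losses * 0.07}

as_represented = rate_risk(7_000_000)    # the figure given on the 2000 form
as_corrected = rate_risk(45_000_000)     # the true figure in 2000

# If the two outcomes differ, then, on the program's own logic, the
# misrepresentation induced the terms on which the risk was written.
print("as represented:", as_represented)  # write, premium of roughly $1.5m
print("as corrected:  ", as_corrected)    # decline
```

On these invented figures, the program would have declined the risk had the true position been disclosed, which is precisely the kind of showing that § 3105(b) contemplates.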
In light of the above, although the deployment of artificial intelligence to underwrite risks may make it difficult for insurers to prove materiality/ inducement, these challenges may be overcome, and for the most part New York law is helpful in this regard. This is because, under New York law, evidence of an underwriter’s other similar underwriting practices is often required for the purposes of proving materiality/ inducement pursuant to § 3105(c) of the New York Insurance Law. Instead of involving a subjective inquiry as to what the actual underwriter would have done, the impact of artificial intelligence would principally involve a factual inquiry into what the artificial intelligence, programmed as it was, had done in the past and would have done according to that program in relation to the specific risk in question.
In this regard, insurers may face greater challenges in proving rescission under English law, pursuant to which, in addition to proving that the misrepresentation or non-disclosure was material (in the same sense as under New York law), the insurer has to prove that the notional prudent insurer would have been influenced in its decision-making processes by the misrepresentation / non-disclosure. This is unlikely to be provable other than by reference to expert evidence given by a human.
The real problem would arise if artificial intelligence had replaced all human underwriters: so that no human could give cogent evidence as to how a prudent (non-human) insurer would have reacted to the misrepresentation / non-disclosure. Will it be possible to say how a prudent artificial intelligence system should have acted? What standard should apply to artificial intelligence? Can an objective standard be applied if each artificial intelligence underwriting system is unique to each insurer?
It follows that the analysis under English law might in some respects be much more challenging.
That said, the analysis might in fact prove challenging under whichever law, New York or English, is applied. In considering materiality, it might be asked:
- what would have happened if, absent the misrepresentation / non-disclosure, the insurer’s artificial intelligence system had generated a different set of terms for underwriting the risk, but, in response, the insured or, more likely, the insured’s artificial intelligence system which was undertaking the submission would have declined the different terms and negotiated for better terms (i.e., the terms on which the risk was actually written)?
- In the world of artificial intelligence, would the opposing systems have the intelligence to negotiate with each other as though they were humans?
- What would be the position if the market was a soft market: would the insurer’s system recognize the imperative of writing a risk in a soft market even on less favorable terms? Would the insured’s system recognize that it was in a stronger bargaining position than the insurer and so could demand better terms?
Had these issues arisen at the underwriting stage, would they have required human intervention for the ultimate underwriting decision?
Rescission for non-disclosure
Let us assume that, in hypothetical 1 above, an artificial intelligence system filled out the Application Form on behalf of Rainbow but inadvertently (e.g., due to a software glitch or an error in the algorithm) omitted to report something which, while not asked about on the Application Form, an executive officer of Rainbow knew Bright Light needed or wanted to be told if it existed.
Under New York law, rescission for pure non-disclosure requires the insurer to prove, by clear and convincing evidence, fraudulent concealment of the material facts or bad faith with intent to mislead the insurer. See Home Ins. Co. of Illinois (New Hampshire) v. Spectrum Info. Techs., Inc., 930 F. Supp. 825, 840 (E.D.N.Y. 1996).
In these circumstances, where artificial intelligence is deployed by insureds in completing Application Forms, it will be exceedingly difficult to prove actionable non-disclosure on the part of the insured and thereby rescind an insurance policy. Of course, it is conceptually possible that an insured designed an artificial intelligence system with the specific intent to deceive insurers and be selective in the information submitted to insurers as a means of concealing material information. In that event, there is no reason to suppose that a case for non-disclosure would not be made out.
Notice of claims/ occurrence
Relevant policy provisions
The Bermuda Form Policy makes it a condition precedent to an insured’s rights under the policy that, if any of its managers or equivalent level employees of its risk management, insurance or law departments, or any of its executive officers, become aware of any occurrence “likely to involve” the policy, then the insured should “as soon as practicable” thereafter give notice in writing, directed to the insurer’s Claims Department at its specified address (Articles V(A) and V(D)).
It is established as a matter of New York law that:
- the insured’s obligation arises when the relevant notifying individual is aware of an occurrence giving rise to a “reasonable possibility of the policy’s involvement” based upon an objective assessment of the information available to it at that time. This is even though some factors may suggest the opposite. See Century Indem. Co. v. Keyspan Corp., 15 Misc. 3d 1132(a), 7 (N.Y. Sup. Ct. 2007) (emphasis added); see also Christiana Gen. Ins. Corp. of New York v. Great Am. Ins. Co., 979 F.2d 268, 276 (2d Cir. 1992). The word “likely” does not require that it is “more likely than not” or a probable certainty that the policy will be involved.
- It may be argued (and this would be right under English law) that, while the insured must be subjectively aware of the relevant facts, it is not necessary that the insured should also subjectively believe that those facts give rise to a real possibility of the policy being involved. It would be sufficient for the insured’s obligation to arise that the insured should objectively have realized that the facts were such as to give rise to a real possibility of the policy’s involvement.
However, it has been argued that, under New York law, the inquiry is a purely subjective one: in other words, when did the relevant notifying individual of the insured in fact become aware of an occurrence, and become aware that the occurrence was one which (it believed) was likely to involve the policy?
Timing of notice
Under the Bermuda Form Policy, the notice must be given “as soon as practicable,” which means a reasonable time in all the circumstances. Under New York law, a reasonable time has been held to be a matter of days in some cases and, in other cases, a matter of months. See Am. Ins. Co. v. Fairchild Indus., Inc., 56 F.3d 435, 440 (2d Cir. 1995) (holding that “delays for two months are routinely held unreasonable” and that the notice given violated the requirement that notice be given as soon as practicable).
Since the notice provision is a condition precedent to coverage under the policy, non-compliance with it by the insured would result in a forfeiture of any of its rights to coverage under the policy. This reflects the strict approach which the New York courts have taken to “notice of occurrence” provisions in insurance contracts. See Olin Corp. v. Ins. Co. of N. Am., 743 F. Supp. 1044, 1053 (S.D.N.Y. 1990) (“Under New York law, compliance with a notice-of-occurrence provision in an insurance contract is a condition precedent to an insurer’s liability under the policy. . . . Compliance with notice-of-occurrence requirements promotes important policy goals.”).
Hypothetical 2: Artificial Intelligence notifies an occurrence
Let us assume that the facts of hypothetical 1 apply. In addition:
- Rainbow deploys artificial intelligence to determine when there is an occurrence which is likely to involve a policy and to give the relevant notice of that occurrence to Bright Light.
- Rainbow reports the DCM Claims as an occurrence to Bright Light in 2017 when the costs incurred in relation to the DCM Claims have fully exhausted the limits of the policy.
- However, as noted above, Rainbow had in fact incurred $45 million in defense costs and damages in respect of the DCM Claims as early as 2000 and there was, thus, a reasonable possibility that the policy would be implicated at that time.
In this scenario, can Bright Light establish a late notice defense against Rainbow? Rainbow might argue that the artificial intelligence system was not aware in 2000 of both an occurrence and that it was likely to involve the policy. It only became so aware in 2017, at the time that it gave notice of the occurrence. Thus, it will say, notice was promptly and timely given and a defense based upon late notice cannot be made out.
However, this is unlikely to be right. Otherwise, an insured would be given a license to excuse its failure to give proper and timely notice based upon the “incompetence” or errors of the artificial intelligence system which it deploys.
The first question one has to ask is: what facts and information did the artificial intelligence have access to, and thus what was it deemed to know? If the artificial intelligence had within its system a repository (and was thus deemed to have knowledge) of all relevant documents and information relating to the DCM Claims, including the fact that $45 million in defense costs and damages had been incurred, then it might be argued that the artificial intelligence system was subjectively aware of both the occurrence and, if it had been programmed to consider the matter, the fact that it was an occurrence which was likely to involve the policy. Perhaps one has to assume that, if artificial intelligence has replaced humans, it is deemed to have the subjective awareness of a reasonable human being, and that the human insured cannot hide behind the fact that the artificial intelligence system has not been designed to have all the same cognitive characteristics of humans.
However, let us assume the issue of late notice is determined by reference to a subjective-objective standard, i.e., when did the artificial intelligence system become aware of an occurrence which, objectively viewed, was one which was likely to involve the policy?
Based on the facts above, it might be argued that it is sufficient that the artificial intelligence system became aware of the occurrence in 2000 because it was aware, at that time, that $45 million had been incurred in defense costs and damages (even though it might not have been programmed to work out that the occurrence was one which was likely to involve the policy). The reason why proof of the system’s actual knowledge would suffice is that, based upon an objective analysis of the facts, one could show through expert evidence that the occurrence was one which was likely to involve the policy in the future, without reference to what the system thought or did not think. Therefore, Rainbow ought to have given notice of the occurrence in 2000. The objective analysis would not depend upon any artificial intelligence system – unless, of course, human expertise is also to be replaced by machines in the administration of justice.
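To make the objective limb concrete, the sketch below (Python; the attachment point is taken from the hypothetical, but the “reasonable possibility” proxy and the growth assumption are invented for illustration) shows how the inquiry can be run against the facts held in the system’s repository without asking what the system itself “thought”:

```python
# Hypothetical sketch: an objective "likely to involve the policy" screen
# applied to the facts held in the system's repository in a given year.

ATTACHMENT_POINT = 50_000_000   # from the hypothetical: $50m excess of $50m
POSSIBILITY_FACTOR = 0.5        # invented proxy for "reasonable possibility"

def notice_due(incurred_to_date: float, trend_factor: float = 2.0) -> bool:
    """Do the known facts, objectively viewed, give rise to a reasonable
    possibility of the policy's involvement? (All parameters illustrative.)"""
    projected = incurred_to_date * trend_factor   # crude growth assumption
    return projected >= ATTACHMENT_POINT * POSSIBILITY_FACTOR

# Facts within the system's "knowledge" in 2000: $45m already incurred.
print(notice_due(45_000_000))   # True: notice arguably due in 2000, not 2017
```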
Expected/ intended defence
Relevant policy provisions
The occurrence definition of the Bermuda Form Policy contains the proviso that, “any actual or alleged Personal Injury or Property Damage or Advertising Liability which is expected or intended by any Insured shall not be included in any Occurrence.” (Article III(V)(2)).
Article III(L)(1) further defines the nature of expectation or intent as follows:
- The actual or alleged injury or damage must be expected or intended by the Insured. (Article III(L)(1)(a)).
- As respects an integrated occurrence, injury or damage is expected or intended if: (i) the insured “has historically experienced a level or rate of actual or alleged personal injury or property damage” (Article III(L)(1)(b)), or (ii) if the insured “expects or intends a level or rate of actual or alleged personal injury or property damage.” (Article III(L)(1)(c)).
The requirements in Articles III(L)(1)(b) and (c) of the Bermuda Form Policy above are subject to the further proviso that, “if actual or alleged personal injury or property damage fundamentally different in nature or at a level or rate vastly greater in order of magnitude occurs, all such actual or alleged fundamentally different in nature or vastly greater Personal injury or Property Damage shall not be deemed ‘expected or intended.’”
In order to determine whether the defense of expected or intended might apply, one must first ask whether the insured had the relevant expectation or intent.
Key inquiries that emerge are: (i) what injury or damages were expected or intended by the insured? (ii) what historical level or rate of actual or alleged injuries or damages was experienced by the insured? (iii) what level or rate of actual or alleged injuries does the insured expect or intend? and (iv) what injuries or damages are “fundamentally different in nature or at a level or rate vastly greater in order of magnitude”?
The New York courts previously held that the question of expectation and intention required the application of both an objective test and a subjective test. However, the recent trend appears to be towards a purely subjective test:
- See City of Johnstown v. Bankers Standard Ins. Co., 877 F.2d 1146, 1150 (2d Cir. 1989) (“In general, what make injuries or damages expected or intended rather than accidental are the knowledge and intent of the insured. . . . It is not enough that an insured was warned that damages might ensue from its actions, or that, once warned, an insured decided to take a calculated risk and proceed as before. . . . Recovery will be barred only if the insured intended the damages . . . or it can be said that the damages were, in the broader sense, ‘intended’ by the insured because the insured knew that the damages would flow directly and immediately from its intentional act.”) (citing McGroarty v. Great Am. Ins. Co., 36 N.Y.2d 358, 358 (N.Y. 1975)) (emphasis added).
- See also Cont’l Cas. Co. v. Rapid-American Corp., 80 N.Y.2d 640, 649 (N.Y. 1993) (“The injury must be unexpected and unintentional. We have read such policy terms narrowly, barring recovery only when the insured intended the damages. Resulting damage can be unintended even though the act leading to the damage was intentional. . . . A person may engage in behavior that involves a calculated risk without expecting that an accident will occur. . . ordinary negligence does not constitute an intention to cause damage; neither does a calculated risk amount to an expectation of damage.”) (emphasis added); Union Carbide Corp. v. Affiliated FM Ins. Co., 101 A.D.3d 434, 435 (N.Y. App. Div. 2012) (The Supreme Court, Appellate Division found that the record showed that the plaintiff (manufacturer of asbestos-containing products) was merely aware that asbestos could cause injuries and that claims could be filed. Plaintiff’s “calculated risk” in manufacturing and selling its products despite its awareness of possible injuries and claims does not amount to an expectation of damage.)
A subjective construction is arguably supported by the language of the occurrence definition and by the Bermuda Form Policy’s further definition of the nature of expectation or intent as personal injury or damage which is “expected or intended by the insured” (Article III(L)(1)(a)), the focus arguably being on what “the insured” in fact expected or intended as opposed to that which it ought to have expected or intended.
Hypothetical 3: Artificial Intelligence underwrites risks, collates claims and other financial information
Let us assume the following facts:
- The insured, Rainbow, deploys artificial intelligence to prepare its Underwriting Submissions including the completion of the Application Form itself from 1998 to 2018.
- As part of its program, the artificial intelligence collated, among other things: financial information pertaining to the insured’s risks, including historical loss information pertaining to claims, the defense costs and damages incurred, and outstanding damages and expenses; and details of the nature and number of the claims.
- In 2018, Rainbow notifies the insurer, Bright Light, of an integrated occurrence in relation to claims made by individuals arising out of their exposure to a drug called “Cherry,” which was manufactured from the 1970s through to 1997 and which caused them to develop various cancers (“Cherry Claims”).
- The Cherry Claims arose out of injuries which, according to Rainbow, it did not expect or intend although, at the time of the Cherry sales and thereafter, Rainbow knew from its clinical trials and post-sales reports that there was a risk that certain types of individuals consuming Cherry might develop cancer as a result.
The issue that arises is whether the artificial intelligence system can be deemed to have expected or intended actual or alleged injuries such that it can be argued that the level or rate of injuries actually experienced was expected or intended by the insured and thus not within the scope of coverage.
The obvious starting point is: what would the artificial intelligence be deemed to know? Unless and until it can be established what the artificial intelligence was deemed to know, one cannot proceed to ask whether it expected or intended the injuries and if so, at what level or rate.
In order to answer this question, one must ascertain what access the artificial intelligence had, and to what information. If, as in hypothetical 3 above, the artificial intelligence system was collating and/or evaluating all relevant underwriting materials as part of the insured’s submissions, then presumably it would have had access to all documentation and information in respect of the Cherry clinical trials, post-sales reports and claims. This is especially so if the artificial intelligence system acted as a repository for all relevant pharmaceutical and medical material, historical underwriting documentation and loss information, including that relating to historical claims, the severity of the claims, the costs that had been incurred and the potential future costs that might be incurred. In this event, it is arguable that the artificial intelligence system is deemed to have all of this information within its knowledge.
The next question thus presented is, assuming that the artificial intelligence was deemed to have the relevant knowledge, did it expect or intend the injuries and if so, at what level or rate?
This might be more difficult to answer and will be contingent upon whether the nature of expectation or intention is an objective or subjective test.
It might be thought that a subjective inquiry will be more difficult to satisfy, for reasons similar to those set out above in relation to satisfying the test of materiality for misrepresentation. In other words, how can an artificial intelligence system give evidence as to what injuries it did expect or intend? However, one can assume that the artificial intelligence is (as its name suggests) “intelligent” in that it has a program which is designed to perform some level of cognitive analysis (as an individual would). Thus, it is conceivable that evidence could be given as to the program deployed and the analyses performed, or at least capable of having been performed, by the artificial intelligence system in order to establish what injuries it would or could be said to have expected or intended in respect of the Cherry Claims.
Notably, the term “level or rate” is not defined in the Bermuda Form Policy, and there is debate as to what those words mean. For example, does one solely take into account injuries, or does one also take into account the existence and number of claims, their severity, the liabilities that might flow from the injuries, and the potential damages that have been and might be incurred? If one were to take into account these other factors, then presumably they would all be deemed to be within the knowledge of the artificial intelligence system. To this end, it might be easier to perform an analysis as to: what the system is deemed to have known in terms of past injuries and the risk of existing and future injuries, and what it may be deemed, as an intelligent system, to have expected in terms of a level or rate of injuries to which Cherry would (or could) give rise (a sketch of such an analysis follows below).
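By way of a hedged illustration, the sketch below (Python; the claims figures and the ten-fold proxy for “vastly greater in order of magnitude” are invented, the policy language itself supplying no definition) shows how a historical “level or rate” might be derived from the repository the system is deemed to know and compared with later experience:

```python
from statistics import mean

# Hypothetical sketch: deriving a historical "level or rate" of claims from
# the repository the system is deemed to know, and testing whether later
# experience is "vastly greater in order of magnitude" (test invented here).

historical_claims_per_year = {1998: 12, 1999: 15, 2000: 11}  # invented data
later_claims_per_year = {2016: 240, 2017: 310}               # invented data

baseline = mean(historical_claims_per_year.values())   # historical rate
observed = mean(later_claims_per_year.values())        # later rate

# An invented proxy for "vastly greater in order of magnitude": one order
# of magnitude (10x) above the historically experienced rate.
vastly_greater = observed >= 10 * baseline

print(f"baseline {baseline:.1f}/yr, observed {observed:.1f}/yr, "
      f"vastly greater: {vastly_greater}")
```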
By contrast, let us assume that the nature of expectation or intention is to be determined by reference to an objective standard, i.e., whether the injury ought to have been expected or intended by the artificial intelligence system as a substantial certainty. The issue that arises is: what objective standard should apply to the artificial intelligence system? Can one apply the test of a reasonably prudent individual, given that one cannot readily anticipate what injuries an artificial intelligence system ought to have expected or intended, since presumably each artificial intelligence system has its own particular design and program which would dictate what information it would have access to, and thus be deemed to have knowledge of? Or is one bound to apply a test by reference to a reasonably designed artificial intelligence system? Arguably, the objective standard should be that of a reasonable insured in the position of the actual insured: the de-personification of the insured by its deployment of an artificial intelligence system should not alter the basic principle or modify the application of the objective test either in its favor or against it.
Conclusion
It is likely that the deployment of artificial intelligence in the insurance industry will be riddled with complexities, in particular in relation to the insurance coverage issues that are, or might be, implicated. The synopsis above is just a glimpse of some of the issues that might arise.
In the sixth century BC, Parmenides viewed the world as being divided into polar opposites, e.g., light/ darkness, being/ non-being, warmth/ cold: one half of the opposition being positive, the other negative. For example, he viewed light as positive and darkness as negative.
In a similar vein, the questions for the insurance industry in the twenty-first century will be: whether the world will be divided by the opposition of artificial intelligence versus human intelligence; and if so, which part of that dichotomy will be positive, and which negative.
However, as Pope Benedict XVI pointed out, “[a]rtificial intelligence, in fact, is obviously an intelligence placed in equipment. It has a clear origin, in fact, in the human creators of such equipment.” Thus, perhaps after all, human intelligence will conquer and, more importantly in the insurance context, will ultimately be determinative.
 Artificial Intelligence, Oxford English Dictionary (2d ed. 1991).
 Insurance Firm to Replace Human Workers with AI System, Mainichi, December 30, 2016, https://mainichi.jp/english/articles/20161230/p2a/00m/0na/005000c (emphasis added).
 “Automating the Underwriting of Insurance Applications,” Kareem S. Aggour, William Cheetham (General Electric Global Research), American Association for Artificial Intelligence (2005), https://www.aaai.org/Papers/IAAI/2005/IAAI05-001.pdf.
 “XL Catlin considers how Artificial Intelligence can assist Risk Managers,” http://youtalk-insurance.com/news/xl-catlin/xl-catlin-considers-how-artificial-intelligence-can-assist-risk-managers?da7bfc41=618d105d.
 Any references to provisions of the Bermuda Form Policy are to the current XL-004 Policy.
 N.Y. Ins. Law § 3105 (McKinney 2011).