Could large language models make coverage disputes a thing of the past?

Contacts: Joe Cunningham

17-08-23

Kennedys IQ recently wrote about the pragmatic and promising application of large language models, which enable the IQ Platform to deliver further tangible value to our clients. This article looks at one use case, policy review and data collection to identify risk and mitigate coverage disputes, and at the work Kennedys IQ has undertaken to address it.

Insurance policies are designed to enable a positive and swift response at a time of loss; the purpose of the policy wording is to record the terms of the contractual bargain between the insured and insurer. Coverage disputes arise when the insurer is asked to provide cover for a loss that it believes the policy does not cover. This non-coverage scenario can arise from differing interpretations of wording or intention, or from breach of express or implied policy terms.

As any seasoned insurance professional will agree, coverage issues arise in nuanced, unexpected, and sometimes alarming ways. After a dispute is resolved, both underwriters and claims teams often ask:

“How many other claims, today or tomorrow, are affected by this issue?”

A stark example of this was the FCA’s COVID-19 business interruption (BI) test case, where a multitude of spreadsheets, e-mails and hasty system changes were put in place to support manual data capture and analysis. In this example, some property insurance policies focused on property damage and had only basic cover for BI as a consequence of property damage. But some policies also covered BI from other causes, such as infectious or notifiable diseases, prevention of access, and public authority closures or restrictions. To understand exposure, insurers faced the gargantuan task of categorising a multitude of claim scenarios and cross-referencing numerous factual scenarios against the content of the policy wording itself.

The technology landscape has changed dramatically since the FCA’s case came before the courts in 2021. Large language models have emerged as powerful tools that can help insurers identify and quantify risk. Is it now systematically possible to establish the content and context of a clause, thereby reducing our COVID-19 example from weeks of work to hours? Kennedys IQ thinks so: LLM-enabled systems can quickly accomplish tasks that are practically impossible for an individual, and with a high degree of efficacy. But how is this possible?

The problem with policy language

The language of policies requires interpretation. Policies consist of words, formed into sentences, formed into paragraphs, which so often reference each other to provide clarity (or create ambiguity). These words have meaning, but the policy is not some self-contained object ready to be fed to a machine. It requires the reader to understand the law, the context and the regulatory environment, and to know how to turn those words into something that has real-life meaning.

Our research and development work in the past months has led to the creation of models: not technical algorithmic models, but models of human-driven language interpretation, embedded into software that shapes the LLM’s input and output. Here are some examples:

Cyber exclusions

General liability policies, as an example, may contain endorsements related to cyber exclusions. These exclusions may differ in construction: the exclusion may apply to claims ‘directly or indirectly caused by…’, ‘for’, ‘resulting from’, ‘attributable to’, ‘arising out of’ or ‘in connection with’, or use none of these words at all. Our models seek to computationally classify how each formulation is interpreted in law, and then encode that interpretation. It is instructing the language model in how to interpret the language, rather than simply querying the existence of the cyber exclusion itself, that enables the best results.
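
To make that concrete, here is a minimal sketch of how a causal-nexus classification might be encoded. It is illustrative only: the categories and guidance below are simplified assumptions rather than our production models, and the OpenAI Python client stands in for whichever LLM backend is actually deployed.

```python
# Illustrative sketch only: the categories and guidance are examples,
# not Kennedys IQ's actual encoded interpretations.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Encoded interpretation: instructions on how each causal nexus is
# construed in law, rather than a bare keyword search for "cyber".
NEXUS_GUIDANCE = """You classify the causal nexus of an exclusion clause.
Categories (illustrative):
- PROXIMATE: "caused by", "resulting from" - proximate cause required
- INDIRECT: "directly or indirectly caused by" - extends to remote causes
- BROAD: "arising out of", "in connection with" - loose causal connection
- NONE: no causal nexus wording at all
Reply with the category name only."""

def classify_causal_nexus(clause_text: str) -> str:
    """Ask the model how a clause's causal language is likely construed."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": NEXUS_GUIDANCE},
            {"role": "user", "content": clause_text},
        ],
        temperature=0,  # keep the classification as deterministic as possible
    )
    return response.choices[0].message.content.strip()

clause = ("The insurer shall not be liable for any claim directly or "
          "indirectly caused by the failure of any computer system.")
print(classify_causal_nexus(clause))  # expected: INDIRECT
```

The point of the sketch is the guidance: the model is told how each form of words is construed, not merely asked whether a cyber exclusion exists.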

Conflicting jurisdiction clauses

It is not unusual for jurisdiction clauses to be confused with territorial scope. Traditional natural language processing techniques are well equipped to identify this error, but less so when attempting to cross-reference terms of the policy and associated documentation. Take, for example, a combined public and products liability policy with a product section that changes the governing law from jurisdiction A to jurisdiction B. An LLM can identify the context in which each clause applies and highlight conflicting terms. This is particularly useful for facultative reinsurance, where risk lies buried in long, bespoke wordings alleged to be back-to-back.
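
By way of illustration, such a cross-referencing check might be sketched as follows. The section names, extraction prompt and helper functions are assumptions made for this example, not the platform’s actual implementation, and the OpenAI client again stands in for any comparable LLM API.

```python
# Illustrative sketch: extract the governing law stated in each section of a
# combined policy, then flag sections that depart from the general conditions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXTRACTION_PROMPT = (
    "From the policy section below, state the governing law it specifies "
    "(e.g. 'England and Wales'), or 'NONE' if the section is silent.\n\n"
)

def governing_law(section_text: str) -> str:
    """Ask the model which governing law, if any, a section specifies."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": EXTRACTION_PROMPT + section_text}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

def flag_conflicts(sections: dict[str, str]) -> list[str]:
    """Compare each section's governing law against the general conditions."""
    baseline = governing_law(sections["General conditions"])
    warnings = []
    for name, text in sections.items():
        law = governing_law(text)
        if law not in ("NONE", baseline):
            warnings.append(
                f"{name}: governing law '{law}' conflicts with the "
                f"general conditions ('{baseline}')"
            )
    return warnings
```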

Additional insured endorsements

Endorsements are often added to include additional insureds, yet the original policy may well have excluded claims arising in the territory where the additional insured is located. This gives rise to ambiguity. Does the policy cover the additional insured for losses arising outside the territory in which they are located, or was the intent to change the territorial scope by way of endorsement? LLMs can effectively identify this unintended ambiguity, avoiding a potential coverage dispute at a later date.

Our work in deploying LLMs has illustrated that they not only help us extract information from policies but, when embedded within the right software, also systematically quantify risk. It is this embedding of LLMs within the right software and processes that lets us utilise large language models beyond simply ‘chatting’ with ChatGPT. As the technology arm of a law firm, Kennedys IQ is in a unique position to combine the expertise of our own data scientists with that of a global firm of lawyers: eliciting the judgement of experts, be they underwriters, claims professionals or lawyers, and carefully encoding that expertise into LLM-enabled systems.

This assists not only underwriters, but also underwriting management, conduct risk, compliance, claims and wording teams. These systems complement internal feedback loops and allow a greater degree of agility by scaling solutions to meet existing and unforeseen risks. By scaffolding the decision-making and review process, a higher degree of consistency is achievable at the time it really matters: before a dispute.

Like every lawyer, technologist and client, we find our personal lives shaped by rapid technological advancement. For us, the promise of LLMs lies less in chatbots than in incorporating generative AI into symbolic and rule-based systems. Such systems provide the process and compliance frameworks for generative AI to be used safely and securely. At Kennedys IQ, we are excited about delivering client-focused innovation in this area.

Are wording issues continuing to distract you? Do you want a deeper understanding of what is contained in your portfolio of policies? Challenge our data science and claims experts to apply LLM-enabled systems to your problems. Get in touch to find out more.
