The future of professional services in the Large Language Model era

Author: Karim Derrick
Contact: Joe Cunningham

19-02-25

Part 1 of the Kennedys IQ SmartRisk Series

The professional services industry is at an inflection point. The rise of Large Language Models (LLMs) is challenging long-standing traditions of expert judgment, decision-making, and risk assessment. But can AI truly replicate professional judgment? Or does it offer an opportunity to enhance decision-making while mitigating inconsistencies? In this six-part LinkedIn series, leading up to the launch of Kennedys IQ SmartRisk on 19 March, we will explore how LLMs, Evidential Reasoning, and Belief Rule Base (BRB) methodologies are shaping the future of insurance underwriting and claims handling.

The inconsistency of human expertise

For over a century, research has shown that expert judgment – whether in education, medicine, law, or insurance claims handling – is often inconsistent. Studies have demonstrated that:

  • Doctors diagnose identical symptoms differently.
  • Judges issue different sentences for similar cases.
  • Claims handlers reach different conclusions on comparable claims.

In our own research into professional judgment across multiple clients, expert agreement was found to have a Krippendorff's alpha of less than 0.4, indicating substantial variance in professional judgment. This lack of consistency can impact risk assessment, claim outcomes, and underwriting efficiency.
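To make the agreement measure concrete, here is a minimal sketch of Krippendorff's alpha for nominal categories. The handler names, outcome labels, and scores are illustrative inventions, not data from the study; they are chosen so the resulting alpha falls just below the 0.4 threshold mentioned above.

```python
from collections import Counter

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.

    ratings: list of units (e.g. claims), each a list of category
    labels, one per rater. Units need >= 2 ratings to be pairable.
    Alpha = 1 - observed disagreement / expected disagreement.
    """
    units = [u for u in ratings if len(u) >= 2]
    n = sum(len(u) for u in units)  # total pairable values
    if n <= 1:
        return 1.0
    # Observed disagreement: mismatched value pairs within each unit
    d_o = 0.0
    for u in units:
        counts = Counter(u)
        disagreeing_pairs = sum(counts[c] * counts[k]
                                for c in counts for k in counts if c != k)
        d_o += disagreeing_pairs / (len(u) - 1)
    d_o /= n
    # Expected disagreement from overall category frequencies
    totals = Counter(v for u in units for v in u)
    d_e = sum(totals[c] * totals[k]
              for c in totals for k in totals if c != k) / (n * (n - 1))
    return 1.0 - d_o / d_e if d_e else 1.0

# Two hypothetical claims handlers triaging ten claims
scores = [["accept", "accept"], ["reject", "accept"], ["refer", "refer"],
          ["accept", "refer"], ["reject", "reject"], ["accept", "accept"],
          ["refer", "reject"], ["accept", "accept"], ["reject", "refer"],
          ["accept", "accept"]]
print(round(krippendorff_alpha_nominal(scores), 3))  # 0.392
```

With six agreements out of ten pairs, the alpha comes out at 0.392 – well short of the levels usually considered reliable.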

Large Language Models: A game changer?

The recent surge in LLMs like OpenAI’s GPT has redefined what AI can achieve. These models have passed the Bar Exam, medical licensing tests, and auditing assessments. They are already boosting productivity in professional services:

  • Developers using AI coding assistants see a 56% increase in efficiency.
  • Writers experience a 40% boost in speed.
  • Lawyers complete tasks faster with AI-assisted research.

Yet, when it comes to professional judgment, challenges remain. LLMs struggle with decision-making, consistency, and contextual reasoning. They generate outputs based on probability, but true professional judgment requires structured reasoning, domain expertise, and real-world context.

The limits of AI in professional judgment

While LLMs are powerful, they have limitations:

  1. Inconsistency – Their outputs vary unpredictably.
  2. Hallucination – They can generate inaccurate or misleading information.
  3. Bias – They inherit biases from their training data.
  4. Lack of explainability – Their reasoning process is a “black box.”

For insurers, these issues pose significant risks. Imagine an LLM inaccurately assessing a policy’s coverage or misinterpreting a legal clause – leading to incorrect claim decisions. The insurance sector cannot afford such unpredictability.

The Kennedys IQ SmartRisk approach: Beyond LLMs

Kennedys IQ SmartRisk takes a hybrid neuro-symbolic approach, combining:

  • LLMs for attribute extraction – Identifying key facts in documents.
  • Evidential Reasoning (ER) and Belief Rule Base (BRB) – Ensuring structured, explainable decision-making.

This approach mitigates bias, improves decision consistency, and enhances professional judgment in claims handling and underwriting.
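As a toy illustration of the BRB idea, the sketch below encodes belief-weighted rules that map extracted claim attributes to a distribution over outcomes. Everything here is hypothetical – the attribute names, outcome labels, rule weights, and the simple weighted aggregation (which stands in for the full Evidential Reasoning algorithm) are illustrative assumptions, not the SmartRisk implementation.

```python
# Toy Belief Rule Base: rules map antecedent attributes to belief
# distributions over claim outcomes. A simple normalised weighted
# aggregation stands in for the full Evidential Reasoning algorithm.

OUTCOMES = ["pay", "investigate", "decline"]

# Hypothetical rules: (antecedents, rule weight, beliefs over OUTCOMES)
RULES = [
    ({"coverage_confirmed": True,  "fraud_indicators": False}, 1.0, [0.9, 0.1, 0.0]),
    ({"coverage_confirmed": True,  "fraud_indicators": True},  0.8, [0.1, 0.8, 0.1]),
    ({"coverage_confirmed": False, "fraud_indicators": False}, 1.0, [0.0, 0.3, 0.7]),
    ({"coverage_confirmed": False, "fraud_indicators": True},  0.9, [0.0, 0.2, 0.8]),
]

def assess(case):
    """Combine the belief distributions of all rules, each weighted
    by its rule weight times how well its antecedents match the case."""
    combined = [0.0] * len(OUTCOMES)
    total_w = 0.0
    for antecedents, weight, beliefs in RULES:
        match = sum(case.get(k) == v for k, v in antecedents.items()) / len(antecedents)
        w = weight * match  # activation weight of this rule
        total_w += w
        for i, b in enumerate(beliefs):
            combined[i] += w * b
    return {o: round(c / total_w, 3) for o, c in zip(OUTCOMES, combined)}

# Attributes as an LLM extraction step might supply them
print(assess({"coverage_confirmed": True, "fraud_indicators": False}))
```

Unlike a raw LLM answer, the output is traceable: every belief in the final distribution can be attributed to specific rules and their activation weights, which is the explainability property the hybrid approach is after.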

Join us for the Kennedys IQ SmartRisk launch

Over the next five weeks, we will delve deeper into how AI and structured reasoning are transforming professional judgment, risk assessment, and claims decisioning.

On 19 March, Kennedys IQ will unveil SmartRisk, the world’s first neuro-symbolic AI system for professional services. Join us at our launch event to see how it can redefine insurance underwriting and claims handling.

Register today

An unmissable event at The Steel Yard: the official launch of Kennedys IQ SmartRisk
