Language models and professional judgment

Author: Karim Derrick
Contact: Joe Cunningham

26-02-25

Part 2 of the Kennedys IQ SmartRisk Series

The rise of Large Language Models (LLMs) has transformed approaches to Artificial Intelligence, but how effective are they at replicating professional judgment? As we continue our six-part Kennedys IQ SmartRisk Series, leading up to the launch event on March 19th, we explore the limits of LLMs in making complex decisions and how a hybrid approach can overcome these challenges.

The challenge: Can AI replace professional judgment?

LLMs like GPT-4 and other AI-driven technologies have demonstrated impressive capabilities, from passing professional exams to automating administrative tasks. Yet, there is a stark difference between answering structured queries and making nuanced, high-stakes professional decisions.

Key concerns include:

  • Probabilistic nature – LLMs generate responses by sampling from statistical probabilities rather than by logical reasoning.
  • Inconsistency – The same input may yield different outputs (both points are illustrated in the sketch after this list).
  • Hallucinations – AI can fabricate facts, which poses significant risks in legal and insurance applications.
  • Bias – Models inherit biases from their training data, leading to unpredictable and potentially discriminatory decisions.
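
To make the first two concerns concrete, the toy sketch below (a minimal Python illustration, not any production model) samples a "next word" from a fixed probability table, the same mechanism, vastly simplified, that underlies LLM text generation. Running it twice on an identical prompt can yield different outputs.

    import random

    # Toy next-word distribution for the prompt "The claim is".
    # A real LLM computes such probabilities over ~100,000 tokens;
    # the sampling principle is the same.
    next_word_probs = {
        "covered": 0.45,
        "excluded": 0.35,
        "ambiguous": 0.20,
    }

    def sample_next_word(probs):
        """Draw one word according to its probability weight."""
        words = list(probs)
        weights = list(probs.values())
        return random.choices(words, weights=weights, k=1)[0]

    # The same "prompt" twice: identical input, possibly different output.
    print("Run 1:", sample_next_word(next_word_probs))
    print("Run 2:", sample_next_word(next_word_probs))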

For insurers and claims handlers, these shortcomings raise fundamental questions about reliability and accountability in AI-driven decision-making. LLMs alone are not the solution—they must be integrated with structured methodologies to ensure transparency, consistency, and explainability.

The importance of multi-attribute decision making

Professional judgment involves weighing multiple attributes simultaneously. In any decision, claims handlers, underwriters, and legal professionals weigh up a range of factors, including policy coverage, legal precedents, and case-specific evidence.
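
As a concrete illustration, one classical multi-attribute technique is simple additive weighting: score each attribute, weight it by importance, and sum. The attribute names, scores, and weights below are hypothetical, chosen purely to show the shape of the calculation.

    # Simple additive weighting over hypothetical claim attributes.
    # Assessments on a 0-1 scale (1 = strongly favours paying the claim).
    attribute_scores = {
        "policy_coverage": 0.9,   # clause clearly covers the loss
        "legal_precedent": 0.6,   # mixed case law
        "evidence_quality": 0.8,  # well-documented loss
    }

    # Relative importance of each attribute (weights sum to 1).
    attribute_weights = {
        "policy_coverage": 0.5,
        "legal_precedent": 0.3,
        "evidence_quality": 0.2,
    }

    overall = sum(
        attribute_scores[name] * attribute_weights[name]
        for name in attribute_scores
    )
    print(f"Overall assessment: {overall:.2f}")  # 0.79

A single weighted score is transparent but crude: it hides uncertainty and how attributes interact, which is precisely the gap that belief-based methods aim to close.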

Traditional rule-based expert systems attempted to codify professional judgment, but failed due to their rigidity and inability to adapt to evolving knowledge.
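
That rigidity is easy to see in miniature. The hypothetical rule set below handles only the cases its authors anticipated; anything else falls through, and every change in law or policy wording means rewriting the rules by hand.

    def assess_claim(cause, days_since_incident):
        """A brittle, hand-coded rule set (hypothetical, for illustration)."""
        if cause == "fire" and days_since_incident <= 30:
            return "accept"
        if cause == "flood":
            return "refer to underwriter"
        # Any unanticipated cause or timing falls through to rejection,
        # however reasonable the claim may be.
        return "reject"

    print(assess_claim("fire", 10))        # accept
    print(assess_claim("subsidence", 5))   # reject: no rule was ever written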

SmartRisk’s hybrid neuro-symbolic approach

Kennedys IQ’s forthcoming SmartRisk product takes a hybrid neuro-symbolic approach by combining:

  • LLMs for attribute extraction – AI identifies relevant clauses, legal arguments, and contextual factors.
  • Evidential reasoning and a Belief Rule Base (BRB) – A structured decision-making framework that ensures logic-driven, transparent, and explainable conclusions (sketched in simplified form below).
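
To give a flavour of the symbolic half, the sketch below is a deliberately simplified, hypothetical illustration, not the SmartRisk implementation: each activated rule distributes belief across possible outcomes, and the beliefs are combined by rule weight. The full evidential reasoning algorithm uses a recursive combination rule rather than this weighted average.

    OUTCOMES = ("accept", "refer", "reject")

    # (rule weight, belief distribution over outcomes) for rules that an
    # LLM-extracted set of attributes has activated. Numbers are invented.
    activated_rules = [
        (0.6, {"accept": 0.7, "refer": 0.2, "reject": 0.1}),  # coverage clause matched
        (0.4, {"accept": 0.3, "refer": 0.5, "reject": 0.2}),  # precedent partially applies
    ]

    combined = {outcome: 0.0 for outcome in OUTCOMES}
    total_weight = sum(weight for weight, _ in activated_rules)
    for weight, beliefs in activated_rules:
        for outcome in OUTCOMES:
            combined[outcome] += (weight / total_weight) * beliefs[outcome]

    for outcome in OUTCOMES:
        print(f"{outcome}: {combined[outcome]:.2f}")
    # accept: 0.54, refer: 0.32, reject: 0.14

Crucially, the output is not just a score: the record of which rules fired, and with what weight, is what makes the conclusion transparent and explainable.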

This neuro-symbolic approach enables AI to support, rather than replace, human judgment, ensuring consistency, accuracy, and accountability in risk assessment and claims handling.

Looking ahead: From theory to application

In the next article, we will explore how Evidential Reasoning and Belief Rule Base (BRB) methodologies create structured, explainable AI decision systems. We’ll show how these techniques bridge the gap between AI automation and human professional judgment, ensuring optimal outcomes for insurers and legal professionals.

Join the SmartRisk launch event!

On March 19 we will unveil SmartRisk, the world’s first hybrid AI system for professional services. Join us to see how it is redefining risk assessment and claims decisioning.

Register today

An unmissable event at The Steel Yard: the official launch of Kennedys IQ SmartRisk
