Algorithmic Aversion: overcoming resistance to AI in professional judgment

Contacts: Karim Derrick, Joe Cunningham, Harvey Maddocks, Lisa Liu

12-03-25

Part 4 of the Kennedys IQ SmartRisk Series

As artificial intelligence (AI) continues to transform professional services, many experts remain reluctant to trust AI-driven decision-making. This phenomenon, known as algorithmic aversion, presents a significant challenge to the adoption of AI in claims handling, underwriting, and risk assessment. In this fourth instalment of our Kennedys IQ SmartRisk Series, we explore why professionals hesitate to rely on AI and how SmartRisk addresses these concerns.

Why do professionals distrust AI decision-making?

Research has shown that even when AI outperforms humans, professionals often resist adopting AI-driven recommendations. Several key factors contribute to algorithmic aversion:

  1. Perceived infallibility of human judgment – Professionals tend to overestimate the reliability of their own decisions while underestimating AI’s ability to learn from vast datasets.
  2. Lack of transparency – Many AI models operate as “black boxes,” providing answers without clear explanations.
  3. Error magnification – When AI makes a mistake, it is often judged more harshly than similar human errors.
  4. User control and comfort – Professionals want to feel in control of decision-making processes rather than relying on automated recommendations.

The SmartRisk approach: enhancing trust in AI

Kennedys IQ SmartRisk is designed to overcome algorithmic aversion by integrating explainability, transparency, and expert-driven validation. Our neuro-symbolic AI model addresses key concerns in the following ways:

  • Explainability – Unlike black-box AI, SmartRisk provides detailed explanations for every decision, allowing users to see the reasoning behind each output.
  • Human-AI collaboration – AI augments, rather than replaces, human expertise, ensuring professionals remain in control.
  • Error calibration – SmartRisk continuously learns from experts, refining its decision models to improve accuracy and build user confidence.
  • Structured decision logic – The Evidential Reasoning and Belief Rule Base (BRB) methodology ensures structured, traceable, and consistent AI reasoning.
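To make the structured decision logic above concrete, the sketch below shows how a belief rule base can combine rules into a traceable risk assessment. The rule names, attributes, and values are hypothetical, and the aggregation is a simplified normalized weighting rather than the full analytical evidential reasoning algorithm; it illustrates why each output can be explained in terms of which rules fired and how strongly.

```python
# Illustrative belief rule base (BRB) sketch. Each rule maps antecedent
# conditions to a belief distribution over outcomes. All names and numbers
# are hypothetical; the aggregation is a simplified weighted average, not
# the full analytical evidential reasoning (ER) algorithm.

def evaluate_brb(rules, inputs):
    """Combine rule belief distributions, weighted by how well each
    rule's antecedents match the (possibly fuzzy) inputs."""
    activations = []
    for rule in rules:
        # Matching degree: product of per-attribute match scores in [0, 1].
        match = 1.0
        for attr, expected in rule["if"].items():
            match *= inputs.get(attr, {}).get(expected, 0.0)
        activations.append(match * rule.get("weight", 1.0))

    total = sum(activations)
    if total == 0:
        return {}  # no rule fires: flag the case for human review

    # Normalized activation weights make each rule's contribution explicit,
    # which is what makes the final belief distribution traceable.
    combined = {}
    for act, rule in zip(activations, rules):
        for outcome, belief in rule["then"].items():
            combined[outcome] = combined.get(outcome, 0.0) + (act / total) * belief
    return combined


rules = [
    {"if": {"severity": "high", "evidence": "weak"},
     "then": {"high_risk": 0.8, "medium_risk": 0.2}, "weight": 1.0},
    {"if": {"severity": "low", "evidence": "strong"},
     "then": {"low_risk": 0.9, "medium_risk": 0.1}, "weight": 1.0},
]

# Fuzzy inputs: the claim is mostly high severity with mostly weak evidence.
inputs = {"severity": {"high": 0.7, "low": 0.3},
          "evidence": {"weak": 0.6, "strong": 0.4}}

print(evaluate_brb(rules, inputs))
```

Because every output is a weighted blend of explicitly stated rules, a user can always ask "which rules drove this score, and by how much?", which is the property that distinguishes this style of reasoning from a black-box model.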

Building confidence in AI-assisted decisions

The key to overcoming algorithmic aversion lies in user experience and engagement. SmartRisk ensures professionals feel empowered, not replaced, by AI. By providing transparent explanations and incorporating human expertise, we make AI-driven decisions actionable, understandable, and trustworthy.

Next: introducing SmartRisk’s neuro-symbolic approach

In our next article, we’ll introduce the neuro-symbolic AI approach that powers SmartRisk. This unique methodology ensures AI-assisted professional judgment is accurate, consistent, and explainable, addressing long-standing challenges in risk assessment and claims handling.

Join the SmartRisk launch event!

On March 19, Kennedys IQ will unveil SmartRisk, the first hybrid AI system for professional services. Don’t miss this opportunity to see how AI and human expertise combine to redefine insurance decision-making.

Register today

An unmissable event at The Steel Yard: the official launch of Kennedys IQ SmartRisk
