Automating hallucination detection with chain-of-thought reasoning

Novel three-pronged approach combines claim-level evaluations, chain-of-thought reasoning, and classification of hallucination error types.

When a large language model (LLM) is prompted with a request such as “Which medications are likely to interact with St. John’s wort?”, it doesn’t search a medically validated list of drug interactions (unless it has been trained to do so). Instead, it generates a list based on the distribution of words associated with St. John’s wort in its training data.

The result will likely be a mix of real and potentially fictional medications with varying degrees of interaction risk. These types of LLM hallucinations — assertions or claims that sound plausible but are verifiably incorrect — still impede the commercial adoption of LLMs. And while there are ways to reduce hallucinations in domains such as health care, the ability to identify and measure hallucinations remains key to the safe use of generative AI.


In a paper we presented at the last Conference on Empirical Methods in Natural Language Processing (EMNLP), we describe HalluMeasure, an approach to hallucination measurement that employs a novel combination of three techniques: claim-level evaluations, chain-of-thought reasoning, and linguistic classification of hallucinations into error types.

HalluMeasure first uses a claim extraction model to decompose the LLM response into a set of claims. Using a separate claim classification model, it then sorts the claims into five key classes (supported, absent, contradicted, partially supported, and unevaluatable) by comparing them to the context — retrieved text relevant to the request, which is also fed to the classification model.

Additionally, HalluMeasure classifies the claims into 10 distinct linguistic-error types (e.g., entity, temporal, and overgeneralization) that provide a fine-grained analysis of hallucination errors. Finally, we produce an aggregated hallucination score by measuring the rate of unsupported claims (i.e., those assigned classes other than supported) and calculate the distribution of fine-grained error types. This distribution provides LLM builders with valuable insights into the nature of the errors their LLMs are making, facilitating targeted improvements.
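The aggregation step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name and the assumption that only non-supported claims carry an error type are our own.

```python
from collections import Counter

def aggregate(claim_labels, error_types):
    """Turn per-claim classifications into a response-level
    hallucination score plus an error-type distribution."""
    # Any class other than "supported" counts as unsupported.
    unsupported = [label for label in claim_labels if label != "supported"]
    score = len(unsupported) / len(claim_labels)  # rate of unsupported claims
    distribution = Counter(error_types)           # e.g. {"entity": 2, "temporal": 1}
    return score, distribution

labels = ["supported", "contradicted", "supported", "partially supported"]
errors = ["entity", "temporal"]  # one error type per hallucinated claim
score, dist = aggregate(labels, errors)
print(score)  # 0.5
```

The score alone tells a model builder *how much* the model hallucinates; the distribution tells them *how*, which is what enables targeted fixes.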

[Figure] The HalluMeasure process for automatically identifying LLM hallucinations.

Decomposing responses into claims

The first step in our approach is to decompose an LLM response into a set of claims. An intuitive definition of “claim” is the smallest unit of information that can be evaluated against the context; typically, it’s a single predicate with a subject and (optionally) an object.


We chose to evaluate at the claim level because classification of individual claims improves hallucination detection accuracy, and the higher atomicity of claims allows for more precise measurement and localization of hallucinations. We deviate from existing approaches by directly extracting a list of claims from the full response text.

Our claim extraction model uses few-shot prompting and begins with an initial instruction, followed by a set of rules outlining the task requirements. It also includes a selection of example responses accompanied by their manually extracted claims. This comprehensive prompt effectively teaches the LLM (without updating model weights) to accurately extract claims from any given response. Once the claims have been extracted, we classify them by hallucination type.
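The prompt structure described above — instruction, rules, worked examples, then the target response — might be assembled along these lines. The instruction text, rules, and example below are illustrative placeholders, not the actual prompt from the paper.

```python
# Hypothetical few-shot prompt assembly for claim extraction.
INSTRUCTION = "Decompose the response into atomic, verifiable claims."
RULES = [
    "Each claim should contain a single predicate.",
    "Preserve the original wording where possible.",
    "Do not add information that is not in the response.",
]
EXAMPLES = [
    {
        "response": "St. John's wort may reduce the effectiveness of warfarin.",
        "claims": ["St. John's wort may reduce the effectiveness of warfarin."],
    },
]

def build_extraction_prompt(response: str) -> str:
    parts = [INSTRUCTION, "Rules:"]
    parts += [f"- {rule}" for rule in RULES]
    for example in EXAMPLES:  # worked examples teach the format in-context
        parts.append(f"Response: {example['response']}")
        parts.append("Claims:")
        parts += [f"- {claim}" for claim in example["claims"]]
    parts.append(f"Response: {response}")
    parts.append("Claims:")  # the LLM completes from here
    return "\n".join(parts)
```

Because the examples demonstrate the task in-context, no model weights are updated; the same extractor can be pointed at any response text.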

Advanced reasoning in claim classification

We initially followed the traditional method of directly prompting an LLM to classify the extracted claims, but this did not meet our performance standards. So we turned to chain-of-thought (CoT) reasoning, in which an LLM is asked not only to perform a task but to justify each action it takes. This has been shown to improve not only LLM performance but also model explainability.


We developed a five-step CoT prompt that combines curated examples of claim classification (few-shot prompting) with steps that instruct our claim classification LLM to thoroughly examine each claim’s faithfulness to the reference context and document the reasoning behind each examination.
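A CoT classification prompt of this shape might look as follows. The five reasoning steps below are our own illustrative reconstruction, not the exact steps from the paper; only the five class labels come from the source.

```python
# Hypothetical five-step chain-of-thought prompt for claim classification.
COT_STEPS = [
    "1. Restate the claim in your own words.",
    "2. Find the sentences in the context most relevant to the claim.",
    "3. Compare the claim against those sentences.",
    "4. Explain whether the context supports, contradicts, or omits the claim.",
    "5. Output one label: supported, absent, contradicted, "
    "partially supported, or unevaluatable.",
]

def build_classification_prompt(claim, context, few_shot_examples=""):
    """Combine CoT instructions, optional worked examples, and the
    claim/context pair into a single prompt."""
    prompt = (
        "Classify the claim against the reference context. "
        "Reason step by step:\n" + "\n".join(COT_STEPS)
    )
    if few_shot_examples:  # curated examples of classified claims
        prompt += "\n\n" + few_shot_examples
    prompt += f"\n\nContext: {context}\nClaim: {claim}\nReasoning:"
    return prompt
```

Requiring the model to document its reasoning before emitting a label is what improves both classification accuracy and explainability: the reasoning trace can itself be audited.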

Once HalluMeasure was implemented, we compared its performance against other available solutions on the popular SummEval benchmark dataset. The results clearly demonstrate improved performance with few-shot CoT prompting (2 percentage points, from 0.78 to 0.8), taking us one step closer to the automated identification of LLM hallucinations at scale.

[Figure] Area under the receiver-operating-characteristic curve across hallucination identification solutions on the SummEval dataset.

Fine-grained error classification

HalluMeasure enables more-targeted solutions that enhance LLM reliability by providing deeper insights into the types of hallucinations produced. Beyond binary classifications or the commonly used natural-language-inference (NLI) categories of support, refute, and not enough information, we propose a novel set of error types developed through analyzing linguistic patterns in common LLM hallucinations. For example, one proposed label type is temporal reasoning, which would apply to, say, a response stating that a new innovation is in use, when the context claims that the new innovation will be used in the future.

[Figure] Examples of hallucinations and their linguistic-error types.

Additionally, understanding the distribution of error types across an LLM’s responses allows more-targeted hallucination mitigation. For example, if a majority of wrong claims contradict a particular assertion in the context, a common cause — say, allowing a large number (e.g., >10) of turns in a dialogue — could be explored. If fewer turns decrease this error type, restricting the number of turns or using summaries of previous turns could mitigate hallucination.
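The triage logic described above — flag an error type for investigation when it dominates the distribution — can be sketched simply. The function name and the 50% threshold are illustrative assumptions, not values from the paper.

```python
from collections import Counter

def dominant_error_type(error_types, threshold=0.5):
    """Return the error type to investigate first, if any single type
    accounts for more than `threshold` of all hallucinated claims."""
    counts = Counter(error_types)
    total = sum(counts.values())
    label, n = counts.most_common(1)[0]  # most frequent error type
    return label if n / total > threshold else None

# Three of four errors are contradictions: worth exploring a common
# cause, such as overly long dialogues.
errors = ["contradiction", "contradiction", "contradiction", "entity"]
print(dominant_error_type(errors))  # contradiction
```

If no single type dominates, mitigation effort is better spread across causes rather than spent on one fix.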

While HalluMeasure can provide scientists with insights into the sources of a model’s hallucinations, the risks of generative AI continue to evolve. As a consequence, we continue to drive innovation in the space of responsible AI by exploring reference-free detection, employing dynamic few-shot prompting techniques tailored to specific use cases, and incorporating agentic AI frameworks.
