LoCal: Logical and Causal Fact-Checking with LLM-Based Multi-Agents

Jiatong Ma, Linmei Hu*, Rang Li, Wenbo Fu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

With the development of social media, people are exposed to a vast amount of unverified information, making fact-checking particularly important. Existing fact-checking methods primarily encourage breaking down claims into more easily solvable sub-tasks and deriving final answers through reasoning over external evidence. However, these models face logical issues concerning whether and how the sub-tasks can be logically combined to reconstruct the original claim, and they encounter causal errors in the reasoning process due to insufficient evidence or LLM hallucinations. In addition, they often lack interpretability. In this paper, we propose Logical and Causal fact-checking (LoCal), a novel fact-checking framework based on multiple LLM-based agents. We adopt a multi-agent design because such systems have increasingly demonstrated the ability to perform complex tasks in a human-like manner. LoCal primarily consists of a decomposing agent, multiple reasoning agents, and two evaluating agents. Specifically, the decomposing agent first leverages the in-context learning ability of LLMs to break down complex claims into simpler sub-tasks, including fact verification tasks and question answering tasks. Two types of reasoning agents are then employed to retrieve external knowledge: one addresses the fact verification tasks, which require comparative analysis, and the other handles the question answering tasks, which require extracting information from evidence. We then combine the sub-tasks and their corresponding responses to generate a solution for evaluation. To enhance logical and causal consistency, two evaluating agents respectively examine whether the generated solution is logically equivalent to the original claim and whether the solution still holds when challenged by the counterfactual label. The evaluating agents assign confidence degrees to the solutions based on these evaluations and iteratively correct the logical and causal errors in the reasoning process. We evaluate LoCal on two challenging datasets, and the results show that LoCal significantly outperforms all baseline models across different settings of evidence availability. Moreover, LoCal offers better interpretability by providing a structured solution along with detailed evaluation processes. We believe LoCal will provide valuable insights for future misinformation detection.
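The abstract describes the agent pipeline in prose; the following is a minimal, illustrative Python sketch of that flow (decompose the claim, reason over evidence, evaluate logical and causal consistency, iterate). All agent functions are hypothetical stubs standing in for LLM calls and retrieval; the names, signatures, and thresholds are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a LoCal-style pipeline (hypothetical names; not the authors' code).
# Each "agent" below is a stub standing in for an LLM call plus evidence retrieval.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SubTask:
    kind: str   # "verification" (comparative analysis) or "qa" (information extraction)
    text: str


def decompose(claim: str) -> List[SubTask]:
    """Decomposing agent: split a complex claim into simpler sub-tasks (stubbed)."""
    return [SubTask("verification", claim),
            SubTask("qa", f"What does the evidence say about: {claim}?")]


def reason(task: SubTask, evidence: List[str]) -> str:
    """Reasoning agents: answer one sub-task from retrieved evidence (stubbed)."""
    if task.kind == "verification":
        return "supported"
    return evidence[0] if evidence else "unknown"


def logical_check(claim: str, solution: List[Tuple[SubTask, str]]) -> float:
    """Evaluating agent 1: confidence that the combined sub-tasks are logically
    equivalent to the original claim (stub confidence)."""
    return 0.9


def causal_check(solution: List[Tuple[SubTask, str]], counterfactual_label: str) -> float:
    """Evaluating agent 2: confidence that the solution still holds when challenged
    by the counterfactual label (stub confidence)."""
    return 0.8


def fact_check(claim: str, evidence: List[str],
               threshold: float = 0.7, max_rounds: int = 3) -> str:
    """Build a solution and iteratively revise it until both evaluators are confident."""
    for _ in range(max_rounds):
        solution = [(task, reason(task, evidence)) for task in decompose(claim)]
        confidence = min(logical_check(claim, solution),
                         causal_check(solution, "refuted"))
        if confidence >= threshold:
            return "SUPPORTED" if all(ans != "refuted" for _, ans in solution) else "REFUTED"
        # In the full framework, low confidence would trigger correction of the
        # logical or causal errors before the next round.
    return "NOT ENOUGH INFO"


if __name__ == "__main__":
    print(fact_check("The Eiffel Tower is in Paris.",
                     ["The Eiffel Tower is located in Paris, France."]))
```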

Original language: English
Title of host publication: WWW 2025 - Proceedings of the ACM Web Conference
Publisher: Association for Computing Machinery, Inc
Pages: 1614-1625
Number of pages: 12
ISBN (Electronic): 9798400712746
DOIs
Publication status: Published - 28 Apr 2025
Event: 34th ACM Web Conference, WWW 2025 - Sydney, Australia
Duration: 28 Apr 2025 - 2 May 2025

Publication series

Name: WWW 2025 - Proceedings of the ACM Web Conference

Conference

Conference: 34th ACM Web Conference, WWW 2025
Country/Territory: Australia
City: Sydney
Period: 28/04/25 - 2/05/25

Keywords

  • Confidence Evaluation
  • Fact-Checking
  • Interpretability
  • LLM-Based Agents
  • Logical and Causal Consistency
