FinLLMs: A Framework for Financial Reasoning Dataset Generation with Large Language Models

Ziqiang Yuan, Kaiyuan Wang, Shoutai Zhu, Ye Yuan, Jingya Zhou, Yanlin Zhu, Wenqi Wei*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Large language models (LLMs) usually rely on extensive training datasets. In the financial domain, creating numerical reasoning datasets that mix tables and long text often involves substantial manual annotation expense. To address the limited data resources and reduce the annotation cost, we introduce FinLLMs, a method for generating financial question-answering (QA) data based on common financial formulas using LLMs. First, we compile a list of common financial formulas and construct a graph based on the variables these formulas employ. We then augment the formula set by combining formulas that share variables into new elements. Specifically, we start from manually annotated formulas and merge those with shared variables by traversing the constructed graph. Finally, utilizing LLMs, we generate financial QA data that encompasses both tabular information and long textual content, building on the collected formula set. Our experiments demonstrate that the synthetic data generated by FinLLMs effectively enhances the performance of various numerical reasoning models in the financial domain, including both pre-trained language models (PLMs) and fine-tuned LLMs, surpassing the performance obtained with two established benchmark financial QA datasets.
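The formula-graph construction and merging step described in the abstract can be illustrated with a minimal sketch. The formula names, variable names, and the merge rule below are assumptions chosen for illustration; they are not taken from the FinLLMs paper itself.

```python
# Hypothetical sketch: build a variable-sharing graph over financial formulas
# and merge formulas that share a variable into new composite formulas.
# All formula and variable names here are illustrative, not from the paper.

from itertools import combinations
from collections import defaultdict

# Each formula is represented by its output variable and its set of input variables.
formulas = {
    "gross_profit": {"revenue", "cost_of_goods_sold"},
    "gross_margin": {"gross_profit", "revenue"},
    "net_income": {"gross_profit", "operating_expenses", "taxes"},
}

# Undirected graph whose nodes are formulas; two formulas are connected
# if they share at least one variable (inputs or the output variable).
graph = defaultdict(set)
for (name_a, vars_a), (name_b, vars_b) in combinations(formulas.items(), 2):
    if (vars_a | {name_a}) & (vars_b | {name_b}):
        graph[name_a].add(name_b)
        graph[name_b].add(name_a)

# Traverse the graph and emit merged formulas: the union of both input sets,
# minus any intermediate variable that one of the two formulas produces.
merged = {}
for name_a, neighbours in graph.items():
    for name_b in neighbours:
        combined = (formulas[name_a] | formulas[name_b]) - {name_a, name_b}
        merged[f"{name_a}+{name_b}"] = combined

for new_name, variables in sorted(merged.items()):
    print(new_name, "<-", sorted(variables))
```

In this toy example, merging gross_margin with gross_profit substitutes the intermediate variable and yields a composite formula over revenue and cost_of_goods_sold, which is the kind of augmented element the generated QA data would then be built on.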

Original language: English
Journal: IEEE Transactions on Big Data
DOIs
Publication status: Accepted/In press - 2024

Keywords

  • Data Generation
  • Large Language Models
  • Question Answering

