How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering

Jinxin Liu, Shulin Cao, Jiaxin Shi, Tingjian Zhang, Lunyiu Nie, Linmei Hu, Lei Hou*, Juanzi Li

*Corresponding author of this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Knowledge Base Question Answering (KBQA) aims to answer natural language questions based on facts in knowledge bases. A typical approach to KBQA is semantic parsing, which translates a question into an executable logical form in a formal language. Recent works leverage the capabilities of large language models (LLMs) for logical form generation to improve performance. However, although LLMs have been shown to solve some KBQA problems, there has been little discussion of how LLMs' proficiency differs across the formal languages used in semantic parsing. In this work, we evaluate LLMs' ability to understand and generate differently structured logical forms by examining the inter-conversion of natural and formal language through in-context learning. Extensive experiments with models of different sizes show that state-of-the-art LLMs can understand formal languages as well as humans, but generating correct logical forms given a few examples remains a challenge. Most importantly, our results indicate that LLMs are considerably sensitive to the structure of the formal language: in general, a formal language with a lower formalization level, i.e., one more similar to natural language, is friendlier to LLMs. Code and data can be found at https://github.com/Matthewlliu/structure_probe.
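The abstract's probing setup centers on converting between natural language and a formal language via in-context learning. The sketch below is not the authors' released code (see their repository for that); it is a minimal, hypothetical illustration of such a probe, where the `complete` callable, the demonstration pairs, and the choice of SPARQL as the target formal language are all assumptions for illustration.

```python
# Minimal sketch of an in-context-learning probe for NL -> formal-language
# generation, as described in the abstract. Hypothetical placeholders:
# `complete` (any LLM completion function) and the demonstration pairs.
from typing import Callable, List, Tuple


def build_few_shot_prompt(demos: List[Tuple[str, str]], question: str) -> str:
    """Assemble a few-shot prompt: (question, logical form) demonstrations
    followed by the target question left open for the model to complete."""
    parts = ["Translate each question into a SPARQL query.\n"]
    for q, lf in demos:
        parts.append(f"Question: {q}\nSPARQL: {lf}\n")
    parts.append(f"Question: {question}\nSPARQL:")
    return "\n".join(parts)


def parse_with_llm(complete: Callable[[str], str],
                   demos: List[Tuple[str, str]],
                   question: str) -> str:
    """Run one generation probe: the returned string would then be executed
    against the knowledge base and checked for correctness."""
    return complete(build_few_shot_prompt(demos, question)).strip()


# Illustrative (made-up) Freebase-style demonstration:
demos = [
    ("Who directed Titanic?",
     "SELECT ?x WHERE { ?x ns:film.director.film ns:m.0dr_4 . }"),
]
# `complete` would wrap whichever chat/completion API serves the model
# under evaluation.
```

The reverse direction (formal language to natural language) can reuse the same scaffold with the demonstration roles swapped, which is how an understanding probe, as opposed to a generation probe, would be framed.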

Original language: English
Title of host publication: 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 - Proceedings of the Conference
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Publisher: Association for Computational Linguistics (ACL)
Pages: 792-815
Number of pages: 24
ISBN (electronic): 9798891760998
Publication status: Published - 2024
Event: Findings of the 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024 - Hybrid, Bangkok, Thailand
Duration: 11 Aug 2024 → 16 Aug 2024

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (print): 0736-587X

Conference

Conference: Findings of the 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024
Country/Territory: Thailand
City: Hybrid, Bangkok
Period: 11/08/24 → 16/08/24
