LLMProto: A Hardware-Efficient Finetuning Model for Few-Shot Relation Extraction with Large Language Model

Research output: Contribution to journal › Conference article › peer-review

Abstract

Recent studies have demonstrated that supervised fine-tuning of Large Language Models (LLMs) can significantly enhance performance across various Information Extraction (IE) tasks. However, Relation Extraction (RE), a critical IE task, faces substantial cost barriers when LLMs are fine-tuned in few-shot learning contexts. To address this challenge, we introduce LLMProto, a hardware-efficient fine-tuning model for few-shot RE with a large language model, which aims to fine-tune LLMs at low hardware cost and thereby improve performance on few-shot RE tasks. LLMProto tackles few-shot RE by integrating an LLM Base Layer and a Prototypical Network Layer: the LLM Base Layer effectively reduces task complexity, and the Prototypical Network Layer captures underlying structural patterns in the data. Experimental results on benchmark datasets demonstrate LLMProto's superior performance on few-shot RE tasks, significantly outperforming existing baseline methods.
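The abstract describes classifying relations with a Prototypical Network Layer on top of LLM-produced representations. A minimal sketch of the standard prototypical-network idea follows; the 2-D toy embeddings, relation names, and function names here are illustrative assumptions, not details taken from the paper, where the embeddings would instead come from the fine-tuned LLM Base Layer.

```python
# Sketch of prototypical-network classification for few-shot RE.
# Assumption: each support example is already an embedding vector
# (in LLMProto these would be produced by the LLM Base Layer).

def prototype(support_embs):
    """A class prototype is the mean of its support embeddings."""
    dim = len(support_embs[0])
    return [sum(e[i] for e in support_embs) / len(support_embs)
            for i in range(dim)]

def sq_dist(a, b):
    """Squared Euclidean distance between two embeddings."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query_emb, support_set):
    """Assign the query to the relation whose prototype is nearest."""
    protos = {rel: prototype(embs) for rel, embs in support_set.items()}
    return min(protos, key=lambda rel: sq_dist(query_emb, protos[rel]))

# Toy 2-way 2-shot episode: two relation classes, two supports each.
support = {
    "born_in":    [[0.9, 0.1], [1.1, 0.0]],
    "located_in": [[0.0, 1.0], [0.1, 0.9]],
}
print(classify([0.8, 0.2], support))  # -> born_in
```

In an N-way K-shot episode, each of the N relations contributes K support embeddings to its prototype, and queries are labeled by nearest-prototype distance, which is what lets the method generalize from very few examples.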

Keywords

  • few-shot relation extraction
  • hardware efficient
  • large language model
