Abstract
Recent studies have demonstrated that supervised fine-tuning of Large Language Models (LLMs) can significantly enhance performance across various Information Extraction (IE) tasks. However, Relation Extraction (RE), a critical IE task, faces substantial hardware-cost barriers when fine-tuning LLMs in few-shot settings. To address this challenge, we introduce LLMProto, a hardware-efficient fine-tuning model for few-shot RE with LLMs, which fine-tunes LLMs at low hardware cost to improve few-shot RE performance. LLMProto tackles few-shot RE by integrating an LLM Base Layer with a Prototypical Network Layer: the LLM Base Layer effectively reduces task complexity, while the Prototypical Network Layer captures underlying structural patterns in the data. Experimental results demonstrate LLMProto's superior performance on few-shot RE tasks, significantly outperforming existing baseline methods.
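The abstract does not give implementation details, but the Prototypical Network Layer presumably follows the standard prototypical-network formulation (Snell et al., 2017): each relation class is represented by the mean of its support embeddings, and a query is assigned to the nearest prototype. The sketch below is a minimal, hypothetical illustration of that classification step over LLM-derived embeddings; the tensor shapes, embedding dimension, and function names are assumptions, not the paper's actual code.

```python
import torch


def prototypes(support_emb: torch.Tensor) -> torch.Tensor:
    """Mean-pool support embeddings into one prototype per relation class.

    support_emb: [n_classes, k_shot, embed_dim] embeddings from the LLM Base Layer
    returns:     [n_classes, embed_dim]
    """
    return support_emb.mean(dim=1)


def classify(query_emb: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Assign each query to the nearest prototype by Euclidean distance.

    query_emb: [n_queries, embed_dim]
    protos:    [n_classes, embed_dim]
    returns:   [n_queries] predicted relation-class indices
    """
    dists = torch.cdist(query_emb, protos)  # [n_queries, n_classes]
    return dists.argmin(dim=1)


# Toy 5-way 1-shot episode: random tensors stand in for LLM Base Layer
# outputs (purely illustrative; embed_dim = 768 is an assumption).
support = torch.randn(5, 1, 768)
queries = torch.randn(10, 768)
preds = classify(queries, prototypes(support))
print(preds.shape)  # torch.Size([10])
```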
| Original language | English |
|---|---|
| Journal | Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing |
| DOIs | |
| Publication status | Published - 2025 |
| Event | 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2025 - Hyderabad, India (6 Apr 2025 → 11 Apr 2025) |
Keywords
- few-shot relation extraction
- hardware efficient
- large language model