Abstract
Pre-trained language models have demonstrated outstanding performance across a wide range of natural language understanding tasks. Exploiting their sophisticated understanding capabilities for humor detection remains a pivotal challenge in humor recognition. Current methodologies rely primarily on full-parameter fine-tuning of pre-trained language models and often neglect the implicit emotional expressions and incongruity theories inherent in humorous texts, which limits the effectiveness of humor recognition models. To address these challenges, we propose PromptHR (Prompt-based Humor Recognition), a multi-task learning framework that integrates deep commonsense prompt learning with incongruity theory. Specifically, our approach injects commonsense knowledge into pre-trained language models through prompt learning, and then identifies semantic incongruity by analyzing the disparities and interactions between the set-up and the punchline of a humorous text. Experimental results show that our model achieves state-of-the-art performance on the PQA and HAHA tasks, with relative error-rate reductions of 12.97% and 17.01%, respectively, highlighting the effectiveness of our approach in pushing the boundaries of humor recognition.
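The abstract describes modeling incongruity through the disparities and interactions between a joke's set-up and punchline. As a hypothetical illustration only (the paper's actual architecture is not specified on this page), one common way to expose such signals to a classifier is to combine the two sentence representations with element-wise difference and product features:

```python
import numpy as np

def incongruity_features(setup_vec: np.ndarray, punchline_vec: np.ndarray) -> np.ndarray:
    """Combine set-up and punchline vectors with disparity (element-wise
    absolute difference) and interaction (element-wise product) features.
    This is a generic sentence-pair featurization, not PromptHR's exact design."""
    disparity = np.abs(setup_vec - punchline_vec)  # where the two parts diverge
    interaction = setup_vec * punchline_vec        # where the two parts align
    return np.concatenate([setup_vec, punchline_vec, disparity, interaction])

# Toy 4-dimensional vectors; a real system would use encoder outputs
# (e.g. pooled pre-trained language model embeddings).
setup = np.array([0.2, 0.9, -0.1, 0.4])
punchline = np.array([0.8, -0.3, 0.5, 0.4])
features = incongruity_features(setup, punchline)
print(features.shape)  # (16,)
```

A downstream humor classifier would then score these concatenated features; large disparity values indicate the semantic shift between set-up and punchline that incongruity theory associates with humor.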
| Original language | English |
|---|---|
| Journal | Proceedings of the International Joint Conference on Neural Networks |
| Publication status | Published - 2025 |
| Event | 2025 International Joint Conference on Neural Networks, IJCNN 2025 - Rome, Italy, 30 Jun 2025 → 5 Jul 2025 |
Keywords
- Humor Recognition
- Natural Language Processing
- Prompt Learning
- Semantic Incongruity