
DC-LoRA: Domain correlation low-rank adaptation for domain incremental learning

Research output: Contribution to journal › Article › peer-review

Abstract

Continual learning, characterized by the sequential acquisition of multiple tasks, has emerged as a prominent challenge in deep learning. During continual learning, deep neural networks suffer from a phenomenon known as catastrophic forgetting, in which a network loses knowledge acquired from previous tasks when training on new ones. Recently, parameter-efficient fine-tuning (PEFT) methods have gained prominence in tackling catastrophic forgetting. However, in domain incremental learning, a characteristic setting of continual learning, there exists an additional, overlooked inductive bias that warrants attention beyond existing approaches. In this paper, we propose a novel PEFT method called Domain Correlation Low-Rank Adaptation (DC-LoRA) for domain incremental learning. Our approach introduces a domain-correlated loss that encourages the weights of the LoRA modules for adjacent tasks to become more similar, thereby leveraging the correlation between different task domains. Furthermore, we consolidate the classifiers of the different task domains to improve prediction performance by capitalizing on the knowledge acquired from diverse tasks. To validate the effectiveness of our method, we conduct comparative experiments and ablation studies on publicly available domain incremental learning benchmarks. The experimental results demonstrate that our method outperforms state-of-the-art approaches.
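The abstract does not give the form of the domain-correlated loss, but one plausible reading is a penalty on the distance between the LoRA weight matrices of adjacent task domains. The sketch below is a minimal illustration under that assumption; the function name `domain_correlation_loss`, the squared-Frobenius-norm penalty, and the weight `lam` are all hypothetical, not taken from the paper.

```python
import numpy as np

def domain_correlation_loss(lora_weights, lam=0.1):
    """Hypothetical domain-correlated regularizer (illustrative only).

    lora_weights: list of LoRA weight matrices, one per task domain,
    ordered by task index. Penalizes the squared Frobenius distance
    between the matrices of each pair of adjacent domains, encouraging
    adjacent tasks' LoRA modules to stay similar.
    """
    loss = 0.0
    for prev, curr in zip(lora_weights, lora_weights[1:]):
        loss += np.sum((curr - prev) ** 2)  # squared Frobenius distance
    return lam * loss

# Identical adjacent weights incur no penalty; divergent ones are penalized.
same = [np.ones((4, 2)), np.ones((4, 2))]
diff = [np.zeros((2, 2)), np.ones((2, 2))]
print(domain_correlation_loss(same))  # 0.0
print(domain_correlation_loss(diff))  # 0.1 * 4 = 0.4
```

In training, a term like this would be added to the task loss so that gradient descent trades off fitting the current domain against drifting from the previous domain's adaptation.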

Original language: English
Article number: 100270
Journal: High-Confidence Computing
Volume: 5
Issue number: 4
DOIs
Publication status: Published - Dec 2025
Externally published: Yes

Keywords

  • Continual learning
  • Domain correlation
  • Domain incremental learning
  • Parameter-efficient fine-tuning

