TY - JOUR
T1 - LVFUS: Vertical Federated Unlearning for Intelligent Network Security via Adaptive Optimizer Switching
AU - Tang, Xiangyun
AU - Hong, Xinxin
AU - Gan, Minggang
AU - Lin, Yijing
AU - Zhang, Tao
AU - Duan, Junxian
AU - Zhu, Liehuang
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2026
Y1 - 2026
N2 - In next-generation intelligent networks, security analytics increasingly span feature-partitioned data silos and cross-organizational boundaries, elevating compliance with the “right to be forgotten” to a first-order design requirement while maintaining noncentralized data governance. Vertical Federated Unlearning (VFU) has emerged as a promising means of realizing the “right to be forgotten” in Vertical Federated Learning (VFL), allowing participants to erase their data from global models without compromising model performance. However, most VFU schemes are either tailored to shallow models or support only limited unlearning levels. The few schemes that are applicable to neural network architectures and support all three levels of unlearning requests either suffer from suboptimal post-unlearning accuracy or incur significant storage overhead. In this paper, we propose LVFUS, a lightweight VFU framework that supports arbitrary model architectures and handles all three levels of unlearning requests with minimal resource overhead and high post-unlearning accuracy. Extensive experiments show that LVFUS outperforms the state of the art, accelerating recovery time by 1.08x–4.68x and improving model accuracy by 0.64%–15.00% while keeping storage overhead constant.
AB - In next-generation intelligent networks, security analytics increasingly span feature-partitioned data silos and cross-organizational boundaries, elevating compliance with the “right to be forgotten” to a first-order design requirement while maintaining noncentralized data governance. Vertical Federated Unlearning (VFU) has emerged as a promising means of realizing the “right to be forgotten” in Vertical Federated Learning (VFL), allowing participants to erase their data from global models without compromising model performance. However, most VFU schemes are either tailored to shallow models or support only limited unlearning levels. The few schemes that are applicable to neural network architectures and support all three levels of unlearning requests either suffer from suboptimal post-unlearning accuracy or incur significant storage overhead. In this paper, we propose LVFUS, a lightweight VFU framework that supports arbitrary model architectures and handles all three levels of unlearning requests with minimal resource overhead and high post-unlearning accuracy. Extensive experiments show that LVFUS outperforms the state of the art, accelerating recovery time by 1.08x–4.68x and improving model accuracy by 0.64%–15.00% while keeping storage overhead constant.
KW - Data privacy
KW - federated learning
KW - lightweight structure
UR - https://www.scopus.com/pages/publications/105023169016
U2 - 10.1109/TNSE.2025.3637602
DO - 10.1109/TNSE.2025.3637602
M3 - Article
AN - SCOPUS:105023169016
SN - 2327-4697
VL - 13
SP - 4138
EP - 4154
JO - IEEE Transactions on Network Science and Engineering
JF - IEEE Transactions on Network Science and Engineering
ER -