TY - JOUR
T1 - Casper
T2 - A Causality-Inspired Defense With Confounder Against Label Inference Attacks in Vertical Split Federated Learning
AU - Shen, Meng
AU - Meng, Jin
AU - Peng, Bohan
AU - Tang, Xiangyun
AU - Wang, Wei
AU - Niyato, Dusit
AU - Zhu, Liehuang
N1 - Publisher Copyright:
© 2005-2012 IEEE.
PY - 2026
Y1 - 2026
N2 - Vertical Split Federated Learning (VSFL) allows participants to collaboratively train a better model over different features vertically partitioned within the same sample space, where the model is split at the cut layer into a bottom model and a top model, trained by the passive and active participants, respectively. However, during this process, the labels owned by the active participant can still be inferred or stolen by curious or malicious passive participants. In this paper, we propose Casper, a causality-inspired defense mechanism with a confounder against label inference attacks in VSFL. Casper first analyzes, from a causal perspective, the feasibility of optimizing the VSFL training process at the intervention level. It then introduces a confounder, consisting of cut-layer output reconstruction and label obfuscation, to disrupt the direct causality between cut-layer outputs and labels. Additionally, we integrate selective discrepancy training to further preserve model utility by strategically balancing training between active and passive participants. Extensive experiments on four datasets across different tasks demonstrate that Casper effectively preserves label privacy while maintaining model performance, significantly outperforming current advanced defense methods in VSFL.
AB - Vertical Split Federated Learning (VSFL) allows participants to collaboratively train a better model over different features vertically partitioned within the same sample space, where the model is split at the cut layer into a bottom model and a top model, trained by the passive and active participants, respectively. However, during this process, the labels owned by the active participant can still be inferred or stolen by curious or malicious passive participants. In this paper, we propose Casper, a causality-inspired defense mechanism with a confounder against label inference attacks in VSFL. Casper first analyzes, from a causal perspective, the feasibility of optimizing the VSFL training process at the intervention level. It then introduces a confounder, consisting of cut-layer output reconstruction and label obfuscation, to disrupt the direct causality between cut-layer outputs and labels. Additionally, we integrate selective discrepancy training to further preserve model utility by strategically balancing training between active and passive participants. Extensive experiments on four datasets across different tasks demonstrate that Casper effectively preserves label privacy while maintaining model performance, significantly outperforming current advanced defense methods in VSFL.
KW - causality
KW - defense
KW - label inference attack
KW - vertical split federated learning
UR - https://www.scopus.com/pages/publications/105027983842
U2 - 10.1109/TIFS.2026.3652013
DO - 10.1109/TIFS.2026.3652013
M3 - Article
AN - SCOPUS:105027983842
SN - 1556-6013
VL - 21
SP - 1050
EP - 1064
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -