TY - JOUR
T1 - PraVFed
T2 - Practical Heterogeneous Vertical Federated Learning via Representation Learning
AU - Wang, Shuo
AU - Gai, Keke
AU - Yu, Jing
AU - Zhang, Zijian
AU - Zhu, Liehuang
N1 - Publisher Copyright:
1556-6021 © 2025 IEEE. All rights reserved, including rights for text and data mining, and training of artificial intelligence and similar technologies. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
PY - 2025
Y1 - 2025
N2 - Vertical federated learning (VFL) provides a privacy-preserving method for machine learning, enabling collaborative training across multiple institutions with vertically distributed data. Existing VFL methods assume that passive parties hold local models of the same structure and communicate with the active party during each training batch. However, due to the heterogeneity of participating institutions, VFL that supports heterogeneous models with efficient communication is indispensable in real-life scenarios. To address this challenge, we propose a new VFL method called Practical Heterogeneous Vertical Federated Learning via Representation Learning (PraVFed) to support the training of parties with heterogeneous local models and reduce communication costs. Specifically, PraVFed employs weighted aggregation of local embedding values from the passive parties to mitigate the influence of heterogeneous local model information on the global model. Furthermore, to safeguard the passive party’s local sample features, we utilize blinding factors to protect its local embedding values. To reduce communication costs, the passive party performs multiple rounds of local pre-model training while preserving label privacy. We conducted a comprehensive theoretical analysis and extensive experimentation to demonstrate that PraVFed reduces communication overhead under heterogeneous models and outperforms other approaches. For example, when the target accuracy is set at 60% on the CINIC10 dataset, the communication cost of PraVFed is reduced by 70.57% compared to the baseline method.
AB - Vertical federated learning (VFL) provides a privacy-preserving method for machine learning, enabling collaborative training across multiple institutions with vertically distributed data. Existing VFL methods assume that passive parties hold local models of the same structure and communicate with the active party during each training batch. However, due to the heterogeneity of participating institutions, VFL that supports heterogeneous models with efficient communication is indispensable in real-life scenarios. To address this challenge, we propose a new VFL method called Practical Heterogeneous Vertical Federated Learning via Representation Learning (PraVFed) to support the training of parties with heterogeneous local models and reduce communication costs. Specifically, PraVFed employs weighted aggregation of local embedding values from the passive parties to mitigate the influence of heterogeneous local model information on the global model. Furthermore, to safeguard the passive party’s local sample features, we utilize blinding factors to protect its local embedding values. To reduce communication costs, the passive party performs multiple rounds of local pre-model training while preserving label privacy. We conducted a comprehensive theoretical analysis and extensive experimentation to demonstrate that PraVFed reduces communication overhead under heterogeneous models and outperforms other approaches. For example, when the target accuracy is set at 60% on the CINIC10 dataset, the communication cost of PraVFed is reduced by 70.57% compared to the baseline method.
KW - heterogeneous model architecture
KW - representation learning
KW - vertical federated learning
KW - weight aggregation
UR - http://www.scopus.com/inward/record.url?scp=86000782392&partnerID=8YFLogxK
U2 - 10.1109/TIFS.2025.3530700
DO - 10.1109/TIFS.2025.3530700
M3 - Article
AN - SCOPUS:86000782392
SN - 1556-6013
VL - 20
SP - 2693
EP - 2705
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -