TY - JOUR
T1 - Adaptive vertical federated learning via feature map transferring in mobile edge computing
AU - Li, Yuanzhang
AU - Sha, Tianchi
AU - Baker, Thar
AU - Yu, Xiao
AU - Shi, Zhiwei
AU - Hu, Sikang
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature 2022.
PY - 2024/4
Y1 - 2024/4
N2 - To bring more intelligence to edge systems, Federated Learning (FL) provides a privacy-preserving mechanism for training a globally shared model using the massive amount of user-generated data on devices. FL enables multiple clients to collaboratively train a machine learning model while keeping the raw training data local. When the dataset is horizontally partitioned, existing FL algorithms can aggregate CNN models received from decentralized clients, but they cannot be applied when the dataset is vertically partitioned. This manuscript addresses the task of image classification in the vertical FL setting, in which each participant individually holds incomplete image pieces of all samples. To this end, the paper proposes AdptVFedConv to tackle this issue and achieve CNN model training without revealing raw data. Unlike conventional FL algorithms, which share model parameters in every communication iteration, AdptVFedConv shares only hidden feature representations. Each client fine-tunes a local feature extractor and transmits the extracted feature representations to the backend machine. A classifier model is trained at the server side with the concatenated feature representations as input and the ground-truth labels as output. Furthermore, we put forward a model transfer method and replication padding tricks to improve the final performance. Extensive experiments demonstrate that the accuracy of AdptVFedConv is close to that of the centralized model.
AB - To bring more intelligence to edge systems, Federated Learning (FL) provides a privacy-preserving mechanism for training a globally shared model using the massive amount of user-generated data on devices. FL enables multiple clients to collaboratively train a machine learning model while keeping the raw training data local. When the dataset is horizontally partitioned, existing FL algorithms can aggregate CNN models received from decentralized clients, but they cannot be applied when the dataset is vertically partitioned. This manuscript addresses the task of image classification in the vertical FL setting, in which each participant individually holds incomplete image pieces of all samples. To this end, the paper proposes AdptVFedConv to tackle this issue and achieve CNN model training without revealing raw data. Unlike conventional FL algorithms, which share model parameters in every communication iteration, AdptVFedConv shares only hidden feature representations. Each client fine-tunes a local feature extractor and transmits the extracted feature representations to the backend machine. A classifier model is trained at the server side with the concatenated feature representations as input and the ground-truth labels as output. Furthermore, we put forward a model transfer method and replication padding tricks to improve the final performance. Extensive experiments demonstrate that the accuracy of AdptVFedConv is close to that of the centralized model.
KW - Convolutional neural network
KW - Federated learning
KW - Machine learning
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85138293234&partnerID=8YFLogxK
U2 - 10.1007/s00607-022-01117-x
DO - 10.1007/s00607-022-01117-x
M3 - Article
AN - SCOPUS:85138293234
SN - 0010-485X
VL - 106
SP - 1081
EP - 1097
JO - Computing (Vienna/New York)
JF - Computing (Vienna/New York)
IS - 4
ER -