TY - JOUR
T1 - Secure Multi-party Learning
T2 - Fundamentals, Frameworks, State of the Art, Trends, and Challenges
AU - Li, Yuhang
AU - Wang, Yajie
AU - Fan, Qing
AU - Pan, Zijie
AU - Wu, Yan
AU - Zhang, Zijian
AU - Zhu, Liehuang
AU - Zhou, Wanlei
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2025
Y1 - 2025
N2 - The proliferation of networked data across various interconnected systems has intensified concerns about data leakage, particularly when computing information from multiple sources. Ensuring privacy while training high-performance machine learning (ML) models within these complex networks remains a significant challenge. Secure Multi-party Learning (SML), a fundamental area within Privacy-preserving Machine Learning (PPML), addresses this issue by utilizing secure computation techniques to protect data during both the training and prediction phases. Motivated by the need to demonstrate research progress and offer insights into future directions, we conduct an in-depth investigation of secure multi-party learning protocols and the frameworks that use them, covering work up to 2024. This paper systematically compares typical SML frameworks along multiple dimensions, including technical approaches, threat models, and application scenarios. Based on the techniques they utilize, the frameworks are categorized into four types. In addition, the paper provides a detailed analysis of each framework type from the perspectives of main functional scenarios, computational complexity, and other factors, discussing the advantages, disadvantages, and development trends of each. Finally, the paper contrasts SML with other PPML techniques, highlighting their differences and strengths, and summarizes the current challenges facing SML. The paper also outlines future research directions for improving efficiency and ensuring security.
KW - Data Privacy
KW - Machine Learning
KW - Multi-Party Computation
KW - Privacy-preserving Machine Learning
UR - http://www.scopus.com/inward/record.url?scp=105004284042&partnerID=8YFLogxK
U2 - 10.1109/TNSE.2025.3566140
DO - 10.1109/TNSE.2025.3566140
M3 - Article
AN - SCOPUS:105004284042
SN - 2327-4697
JO - IEEE Transactions on Network Science and Engineering
JF - IEEE Transactions on Network Science and Engineering
ER -