Privacy-preserving Training Algorithm for Naive Bayes Classifiers

Rui Wang, Xiangyun Tang, Meng Shen*, Liehuang Zhu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The growing popularity of machine learning (ML), which benefits from high-quality training datasets collected from multiple organizations, raises natural questions about the privacy guarantees that can be provided in such settings. Our work tackles this problem in the context of multi-party secure ML, wherein multiple organizations provide their sensitive datasets to a data user and jointly train a Naive Bayes (NB) model with that data user. We propose PPNB, a privacy-preserving scheme for training NB models based on Homomorphic Cryptosystem (HC) and Differential Privacy (DP). PPNB achieves a balanced trade-off between efficiency and accuracy in multi-party secure ML, enabling flexible switching among different trade-offs via parameter tuning. Extensive experimental results validate the effectiveness of PPNB.
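The abstract does not spell out PPNB's protocol, but a standard DP building block for Naive Bayes training is perturbing the model's sufficient statistics (e.g., per-class counts) with Laplace noise before normalization. The sketch below is an illustration of that generic idea only, not the paper's actual scheme; the function name and parameters are hypothetical.

```python
import numpy as np

def dp_class_priors(class_counts, epsilon, seed=0):
    """Illustrative (not PPNB): add Laplace noise with scale 1/epsilon to
    per-class counts (sensitivity 1 per record), clip to stay positive,
    then normalize into differentially private class priors."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(class_counts, dtype=float)
    noisy = counts + rng.laplace(0.0, 1.0 / epsilon, size=counts.shape)
    noisy = np.maximum(noisy, 1e-9)  # keep probabilities well-defined
    return noisy / noisy.sum()

# Smaller epsilon -> more noise -> stronger privacy, lower accuracy;
# this is the efficiency/accuracy knob analogous to parameter tuning.
priors = dp_class_priors([120, 80], epsilon=1.0)
```

In a multi-party setting such as the one the paper targets, each organization's counts would additionally be protected in transit, e.g., aggregated under an additively homomorphic cryptosystem before any noise is applied.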

Original language: English
Title of host publication: ICC 2022 - IEEE International Conference on Communications
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5639-5644
Number of pages: 6
ISBN (Electronic): 9781538683477
DOIs
Publication status: Published - 2022
Event: 2022 IEEE International Conference on Communications, ICC 2022 - Seoul, Korea, Republic of
Duration: 16 May 2022 - 20 May 2022

Publication series

Name: IEEE International Conference on Communications
Volume: 2022-May
ISSN (Print): 1550-3607

Conference

Conference: 2022 IEEE International Conference on Communications, ICC 2022
Country/Territory: Korea, Republic of
City: Seoul
Period: 16/05/22 - 20/05/22

Keywords

  • Naive Bayes
  • Privacy Preservation
  • Security
