GradInvDiff: Stealing Medical Privacy in Federated Learning via Diffusion-Based Gradient Inversion

  • Zhiyuan Wang
  • Daisong Gan
  • Wenzhuo Fang
  • Yuliang Zhu
  • Kun Liu*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated learning (FL) has become a crucial technique for medical imaging analysis, enabling multiple institutions to collaboratively train machine learning models while preserving patient privacy. However, recent research has uncovered a vulnerability in the gradients shared during FL, which can be exploited through gradient inversion attacks (GIA) to reconstruct private medical images. While existing methods show promise on generic image tasks, their application to high-resolution medical images remains underexplored and largely ineffective due to the complexity of the data. This paper introduces GradInvDiff, a novel GIA tailored to medical FL scenarios. Unlike traditional methods that rely solely on gradient guidance, our approach combines diffusion models with gradient-matching optimization to iteratively refine the inference process. By replacing the standard random noise in the diffusion process with a direction derived from the difference between the optimized and original means, we inject a gradient-based condition into the noise to enhance reconstruction quality. This method enables high-quality, pixel-level reconstruction of private medical images, even in the presence of large batch sizes or gradient noise. Our experiments demonstrate that GradInvDiff outperforms existing state-of-the-art gradient inversion methods, achieving better accuracy and visual quality when attacking medical FL models. We hope this paper raises public awareness of privacy-leakage risks in medical FL.
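The leak the paper exploits can be seen even without diffusion models: for a fully connected layer with a bias term, the gradients a client shares reveal the layer's input in closed form, since the weight gradient is the outer product of the bias gradient and the input. The sketch below is a toy NumPy illustration of that basic leakage, not the authors' GradInvDiff pipeline; the one-layer model, MSE loss, and all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy FL client: a single fully connected layer with bias and MSE loss.
# (Illustrative only -- real medical FL uses deep networks and images.)
d, c = 6, 3
W = rng.normal(size=(c, d))
b = rng.normal(size=c)
x_secret = rng.normal(size=d)        # private input the server never sees
y = rng.normal(size=c)               # its target

# The client computes one local step and shares its gradients.
out = W @ x_secret + b
delta = 2.0 * (out - y)              # dLoss/d(out) for MSE
grad_W = np.outer(delta, x_secret)   # dLoss/dW = delta * x^T
grad_b = delta                       # dLoss/db = delta

# Server-side attacker: since grad_W[i] = grad_b[i] * x, any row with a
# nonzero bias gradient reveals the input exactly.
i = int(np.argmax(np.abs(grad_b)))
x_rec = grad_W[i] / grad_b[i]

print(np.allclose(x_rec, x_secret))  # exact analytic reconstruction
```

For deep networks no such closed form exists; per the abstract, GradInvDiff instead iteratively optimizes a candidate image to match the shared gradients while steering a diffusion model's noise with the optimized-vs-original mean difference.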

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention, MICCAI 2025 - 28th International Conference, 2025, Proceedings
Editors: James C. Gee, Jaesung Hong, Carole H. Sudre, Polina Golland, Jinah Park, Daniel C. Alexander, Juan Eugenio Iglesias, Archana Venkataraman, Jong Hyo Kim
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 262-272
Number of pages: 11
ISBN (Print): 9783032051844
DOIs
Publication status: Published - 2026
Externally published: Yes
Event: 28th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2025 - Daejeon, Korea, Republic of
Duration: 23 Sept 2025 - 27 Sept 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 15973 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 28th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2025
Country/Territory: Korea, Republic of
City: Daejeon
Period: 23/09/25 - 27/09/25

Keywords

  • Diffusion Models
  • Federated Learning
  • Gradient Inversion Attack
