PriDM

Shuchao Pang, Yihang Rao, Zhigang Lu, Haichen Wang, Yongbin Zhou, Minhui Xue

Research output: Contribution to journal › Article › peer-review

Abstract

Deep models excel at analyzing image data. However, recent studies of black-box model inversion (MI) attacks against image models have shown that private training images concealed via specific masks can be recovered using publicly available images from the same domain as the training data. This study introduces PriDM, a novel diffusion-model-based MI attack that illustrates the increased vulnerability of image models. PriDM leverages range-null space decomposition to extract essential range-space information and incorporates it into the diffusion model's sampling process. This enables the recovery of private information from arbitrarily masked images using only public images aligned with the same machine-learning task as the target model. To demonstrate PriDM's effectiveness, we conducted experiments under various adversary background-knowledge settings, covering different public dataset domains and image masks. The results show that PriDM produces recovered images of significantly higher quality, approximately twice that of existing methods. Moreover, in scenarios involving complex backgrounds, PriDM outperforms the state of the art by approximately 70%. Under specific background knowledge, such as compressed and blurred images, our method achieves an almost 100% success rate. Additionally, PriDM performs well with real-world background knowledge, including photos of individuals wearing face masks and randomly masked face images, scenarios not considered by existing works.
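For context, the range-null space decomposition the abstract refers to can be sketched as follows. For a known linear degradation operator A (e.g., a pixel mask) with pseudo-inverse A†, any image x splits into a range-space part A†Ax, which is fully determined by the observation y = Ax, and a null-space part (I − A†A)x, which a diffusion model is free to synthesize. The Python sketch below is illustrative only, assuming a binary pixel mask (so A† = A) and placeholder names (consistent_x0, predict_x0, alphas_bar) that are not from the paper; it is not the authors' implementation.

import numpy as np

def consistent_x0(x0_pred: np.ndarray, y: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Combine range-space information from the observation with the
    model's null-space estimate.

    For a binary pixel mask A (so the pseudo-inverse A_dag equals A),
    any image decomposes as x = A_dag A x + (I - A_dag A) x.  The first
    term is pinned to the masked observation y = A x (the visible
    pixels); the second term (the concealed pixels) is taken from the
    diffusion model's clean-image prediction x0_pred.
    """
    return mask * y + (1.0 - mask) * x0_pred

def sample(predict_x0, y, mask, alphas_bar, rng):
    """Hypothetical reverse-diffusion loop showing where the projection
    sits: at every step, the clean-image estimate is corrected for
    range-space consistency before being re-noised to the previous
    timestep (a deliberate simplification of a full DDPM/DDIM update)."""
    x_t = rng.standard_normal(y.shape)
    for t in range(len(alphas_bar) - 1, 0, -1):
        x0 = predict_x0(x_t, t)          # model's guess at the clean image
        x0 = consistent_x0(x0, y, mask)  # enforce range-space consistency
        ab_prev = alphas_bar[t - 1]
        noise = rng.standard_normal(y.shape)
        x_t = np.sqrt(ab_prev) * x0 + np.sqrt(1.0 - ab_prev) * noise
    return consistent_x0(predict_x0(x_t, 0), y, mask)

In this sketch the observed pixels are never hallucinated; the model only fills the null space, which is the mechanism that lets a generic diffusion prior recover content behind arbitrary masks.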
Original language: English
Journal: IEEE Transactions on Dependable and Secure Computing
Publication status: E-pub ahead of print (In Press) - 2025

Bibliographical note

Publisher Copyright:
© 2004-2012 IEEE.

Keywords

  • deep neural networks
  • diffusion models
  • model inversion attacks
  • privacy

