Defense against Membership Inference Attack Applying Domain Adaptation with Addictive Noise

Huang, Hongwei (2021) Defense against Membership Inference Attack Applying Domain Adaptation with Addictive Noise. Journal of Computer and Communications, 09 (05). pp. 92-108. ISSN 2327-5219


Abstract

Deep learning can train models from a dataset to solve tasks. Although deep learning has attracted much interest owing to its excellent performance, security issues are gradually being exposed. Deep learning models may be vulnerable to the membership inference attack, in which an attacker determines whether a given sample was part of the training set. In this paper, we propose a new defense mechanism against membership inference: NoiseDA. In our proposal, a model is not trained directly on the sensitive dataset; instead, domain adaptation is leveraged to alleviate the threat of membership inference. In addition, a module called Feature Crafter has been designed to reduce the number of required training datasets from two to one: it creates features for domain-adaptation training using additive noise mechanisms. Our experiments show that, with noise properly added by Feature Crafter, our proposal can reduce the success rate of membership inference with a controllable utility loss.
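The core idea behind the Feature Crafter can be illustrated with a minimal sketch: perturb feature vectors with additive noise so that the noisy copies act as a second "domain" for domain-adaptation training, leaving the sensitive features themselves out of direct training. The abstract does not specify the exact noise distribution or scale, so the function name `craft_features`, the Gaussian noise choice, and the `noise_scale` parameter below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def craft_features(features, noise_scale=0.1, seed=None):
    """Create a noisy view of feature vectors via additive Gaussian noise.

    Illustrative stand-in for the paper's Feature Crafter: the perturbed
    copies can serve as the auxiliary domain in domain-adaptation
    training, so only one dataset is needed instead of two.
    noise_scale trades privacy (larger noise) against utility.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=0.0, scale=noise_scale, size=features.shape)
    return features + noise

# Example: derive a crafted (noisy) domain from placeholder sensitive features.
private_features = np.ones((4, 8))  # hypothetical feature matrix
crafted = craft_features(private_features, noise_scale=0.05, seed=0)
```

In practice the noise scale would be tuned empirically, since the abstract reports that utility loss is controllable through how the noise is added.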

Item Type: Article
Subjects: STM Academic > Computer Science
Depositing User: Unnamed user with email support@stmacademic.com
Date Deposited: 16 May 2023 08:04
Last Modified: 06 Feb 2024 04:30
URI: http://article.researchpromo.com/id/eprint/813
