Abstract: The unprecedented success of deep learning is largely dependent on the availability of massive amounts of training data. In many cases, these data are crowd-sourced and may contain sensitive and confidential information, and therefore pose privacy concerns. As a result, privacy-preserving deep learning has been attracting increasing attention. One promising approach to privacy-preserving deep learning is to employ differential privacy during model training, which aims to prevent the trained model from leaking sensitive information about the training data. While such models are considered immune to privacy attacks, with the advent of recent, sophisticated attack models it is not clear how well they trade off utility for privacy. In this paper, we systematically study the impact of a sophisticated machine-learning-based privacy attack, the membership inference attack, on a state-of-the-art differentially private deep model. More specifically, given a differentially private deep model with its associated utility, we investigate how much an adversary can infer about the model's training data. Our experimental results show that differentially private deep models may keep their promise of privacy protection against strong adversaries only at the cost of poor model utility, while exhibiting moderate vulnerability to the membership inference attack when they offer acceptable utility. We evaluate our experiments on the CIFAR-10 and MNIST datasets and the corresponding classification tasks.
Keywords: differential privacy; membership inference attack; deep learning; privacy-preserving deep learning; differentially private deep learning
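To make the studied setting concrete, the following minimal sketch (not the authors' code) illustrates the two ingredients the abstract names: training a toy model with DP-SGD-style per-example gradient clipping and Gaussian noise addition (in the spirit of Abadi et al., 2016), then mounting a simple loss-threshold membership inference attack (in the spirit of Yeom et al.). All hyperparameters (clip_norm, noise_mult, lr, lot_size) and the synthetic data are illustrative assumptions, not the paper's experimental settings.

```python
# Illustrative sketch only: DP-SGD training of a toy logistic-regression
# model, followed by a loss-threshold membership inference attack.
# Hyperparameters and data below are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data; first half trains the model
# (members), second half is held out (non-members) to evaluate the attack.
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)
Xm, ym = X[: n // 2], y[: n // 2]   # members (training set)
Xo, yo = X[n // 2 :], y[n // 2 :]   # non-members (held out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- DP-SGD training: clip each per-example gradient, add Gaussian noise ---
w = np.zeros(d)
lot_size, clip_norm, noise_mult, lr, steps = 100, 1.0, 1.1, 0.1, 500
for _ in range(steps):
    idx = rng.choice(len(ym), size=lot_size, replace=False)  # sample a lot
    grad_sum = np.zeros(d)
    for i in idx:
        g = (sigmoid(Xm[i] @ w) - ym[i]) * Xm[i]             # per-example grad
        g *= min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # clip to C
        grad_sum += g
    grad_sum += rng.normal(scale=noise_mult * clip_norm, size=d)  # add noise
    w -= lr * grad_sum / lot_size

# --- Loss-threshold membership inference attack ----------------------------
def per_example_loss(Xs, ys):
    p = np.clip(sigmoid(Xs @ w), 1e-12, 1 - 1e-12)
    return -(ys * np.log(p) + (1 - ys) * np.log(1 - p))

loss_mem, loss_non = per_example_loss(Xm, ym), per_example_loss(Xo, yo)
tau = np.median(np.concatenate([loss_mem, loss_non]))  # crude threshold
# Predict "member" when the loss falls below the threshold; a balanced
# attack accuracy near 0.5 means the attack does no better than guessing.
attack_acc = 0.5 * (np.mean(loss_mem < tau) + np.mean(loss_non >= tau))
print(f"membership inference attack accuracy: {attack_acc:.3f}")
```

In this toy form, raising noise_mult strengthens the privacy guarantee but degrades the model's fit, while lowering it improves utility and tends to widen the member/non-member loss gap the attack exploits, mirroring the privacy-utility trade-off the paper investigates at scale on CIFAR-10 and MNIST.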