

Towards Securing Machine Learning Models Against Membership Inference Attacks
Authors:Sana Ben Hamida  Hichem Mrabet  Sana Belguith  Adeeb Alhomoud  Abderrazak Jemai
Affiliation:1. Key Laboratory of Artificial Intelligence Application Technology State Ethnic Affairs Commission, Qinghai Minzu University, Xining, 810007, China; 2. Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300072, China; 3. Japan Advanced Institute of Science and Technology, Ishikawa, Japan
Abstract:From fraud detection to speech recognition, including price prediction, Machine Learning (ML) applications are manifold and can significantly improve different areas. Nevertheless, machine learning models are vulnerable and are exposed to different security and privacy attacks. Hence, these issues should be addressed while using ML models to preserve the security and privacy of the data used. There is a need to secure ML models, especially in the training phase, to preserve the privacy of the training datasets and to minimise information leakage. In this paper, we present an overview of ML threats and vulnerabilities, and we highlight current progress in research works proposing defence techniques against ML security and privacy attacks. The relevant background for the different attacks occurring in both the training and testing/inference phases is introduced before presenting a detailed overview of Membership Inference Attacks (MIA) and the related countermeasures. We then introduce a countermeasure against MIA on Convolutional Neural Networks (CNN) based on dropout and L2 regularization. Through experimental analysis, we demonstrate that this defence technique can mitigate the risk of MIA while maintaining an acceptable model accuracy. Indeed, by training a CNN model on two datasets, CIFAR-10 and CIFAR-100, we empirically verify the ability of our defence strategy to decrease the impact of MIA on our model, and we compare the results of five different classifiers. Moreover, we present a solution to achieve a trade-off between the performance of the model and the mitigation of MIA.
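The defence the abstract describes combines two standard regularizers: dropout layers inside the CNN and an L2 penalty on the weights. A minimal sketch of that combination in PyTorch is shown below; it is not the authors' implementation, and the architecture, dropout rate, and weight-decay value are illustrative assumptions, with L2 regularization applied via the optimizer's `weight_decay` parameter.

```python
# Sketch (not the paper's code): a small CNN for CIFAR-sized inputs with the
# two regularizers used as an MIA defence -- dropout and L2 regularization.
import torch
import torch.nn as nn

class RegularizedCNN(nn.Module):
    def __init__(self, num_classes: int = 10, dropout_p: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32 x 16 x 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64 x 8 x 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=dropout_p),            # dropout: first defence knob
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Dropout(p=dropout_p),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = RegularizedCNN(num_classes=10)   # 10 classes for CIFAR-10
# L2 regularization: weight_decay adds a lambda * ||w||^2 penalty to the loss.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=5e-4)

# Forward pass on a CIFAR-10-shaped batch (4 RGB images of 32 x 32).
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)
```

Both regularizers reduce overfitting, which is the property MIA exploits: the smaller the gap between the model's confidence on training members and non-members, the less signal an attacker's membership classifier can use.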
Keywords:Machine learning; security and privacy; defence techniques; membership inference attacks; dropout; L2 regularization
View the original abstract in Computers, Materials & Continua
Download the full text from Computers, Materials & Continua