DISTINÏCT: Data poISoning atTacks deTectIon usiNg optÏmized jaCcard disTance
Authors: Maria Sameen, Seong Oun Hwang
Affiliation: 1. Department of IT Convergence Engineering, Gachon University, Seongnam-si, 13120, Korea; 2. Department of Computer Engineering, Gachon University, Seongnam-si, 13120, Korea
Abstract: Machine Learning (ML) systems often involve a re-training process to make better predictions and classifications. This re-training process creates a loophole and poses a security threat to ML systems. Adversaries leverage this loophole to design data poisoning attacks, in which an adversary manipulates the training dataset to degrade the ML system's performance. Data poisoning attacks are challenging to detect, and even more difficult to respond to, particularly in the Internet of Things (IoT) environment. To address this problem, we propose DISTINÏCT, the first proactive data poisoning attack detection framework based on distance measures. Among the distance measures evaluated, we found that Jaccard Distance (JD) is well suited to DISTINÏCT, and we further improved JD into an Optimized JD (OJD) with lower time and space complexity. Our security analysis shows that DISTINÏCT is secure against data poisoning attacks when key features of adversarial attacks are considered. We conclude that the proposed OJD-based DISTINÏCT is effective and efficient against data poisoning attacks in IoT applications with large volumes of streaming data, where timely detection is critical.
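The abstract does not give implementation details, but the Jaccard Distance it builds on has a standard set-based definition, JD(A, B) = 1 − |A ∩ B| / |A ∪ B|. The sketch below illustrates that definition only; the function name, the feature sets, and the idea of comparing an incoming batch against a trusted reference are illustrative assumptions, not the paper's OJD algorithm.

```python
def jaccard_distance(a, b):
    """Jaccard distance between two sets: 1 - |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0  # two empty sets are treated as identical
    return 1.0 - len(a & b) / len(a | b)

# Hypothetical use: a large distance between an incoming training batch
# and a trusted reference set could flag potential poisoning.
trusted = {"f1", "f2", "f3", "f4"}
incoming = {"f1", "f2", "x8", "x9"}
print(jaccard_distance(trusted, incoming))  # intersection 2, union 6 → 1 - 2/6
```

The paper's OJD variant reduces the time and space cost of this computation; the plain formulation above is only the baseline it optimizes.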
Keywords: Data poisoning attacks; detection framework; Jaccard distance (JD); optimized Jaccard distance (OJD); security analysis