Title: Machine learning in adversarial environments

Authors: Pavel Laskov, Richard Lippmann

Affiliation: (1) Department of Electrical and Electronic Engineering, University of Cagliari, Piazza d’Armi, 09123 Cagliari, Italy
Abstract: Whenever machine learning is used to prevent illegal or unsanctioned activity and there is an economic incentive, adversaries will attempt to circumvent the protection provided. Constraints on how adversaries can manipulate training and test data for classifiers used to detect suspicious behavior make problems in this area tractable and interesting. This special issue highlights papers that span many disciplines, including email spam detection, computer intrusion detection, and detection of web pages deliberately designed to manipulate the priorities of pages returned by modern search engines. The four papers in this special issue provide a standard taxonomy of the types of attacks that can be expected in an adversarial framework, demonstrate how to design classifiers that are robust to deleted or corrupted features, demonstrate the ability of modern polymorphic engines to rewrite malware so it evades detection by current intrusion detection and antivirus systems, and provide approaches to detect web pages designed to manipulate web page scores returned by search engines. We hope that these papers and this special issue encourage the multidisciplinary cooperation required to address many interesting problems in this relatively new area, including predicting the future of the arms races created by adversarial learning, developing effective long-term defensive strategies, and creating algorithms that can process the massive amounts of training and test data available for internet-scale problems.
| |
Indexed in SpringerLink and other databases.