IP traceback is an enabling technology for controlling Internet crime. In this paper we present a novel and practical IP traceback system called Flexible Deterministic Packet Marking (FDPM), which provides a defense system with the ability to find the real sources of attacking packets that traverse the network. While a number of other traceback schemes exist, FDPM provides innovative features for tracing the source of IP packets and achieves better tracing capability than other schemes. In particular, FDPM adopts a flexible mark length strategy to make it compatible with different network environments; it also adaptively changes its marking rate according to the load of the participating router through a flexible flow-based marking scheme. Evaluations in both simulation and a real system implementation demonstrate that FDPM requires a moderately small number of packets to complete the traceback process, adds little additional load to routers, and can trace a large number of sources in a single traceback process with a low false positive rate. The built-in overload prevention mechanism enables the system to achieve a satisfactory traceback result even when the router is heavily loaded. It has been used not only to trace DDoS attack packets but also to enhance the filtering of attack traffic.
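The adaptive, flow-based marking described above can be sketched as follows. This is a minimal illustration only, assuming a load-dependent marking probability; the thresholds, the linear interpolation, and the function and field names are hypothetical, not the authors' actual FDPM parameters or encoding.

```python
import random

# Hypothetical sketch of load-adaptive packet marking in the spirit of
# FDPM. Thresholds and field names are illustrative assumptions.

HIGH_LOAD = 0.8   # above this, mark sparingly to protect the router
LOW_LOAD = 0.3    # below this, mark every packet deterministically

def marking_rate(router_load: float) -> float:
    """Return the fraction of packets to mark, given load in [0, 1]."""
    if router_load >= HIGH_LOAD:
        return 0.1          # overload prevention regime
    if router_load <= LOW_LOAD:
        return 1.0          # lightly loaded: deterministic marking
    # linear interpolation between the two regimes
    span = HIGH_LOAD - LOW_LOAD
    return 1.0 - 0.9 * (router_load - LOW_LOAD) / span

def maybe_mark(packet: dict, router_ip: str, load: float) -> dict:
    """Probabilistically record the ingress router's address in the
    packet (real FDPM splits a variable-length mark across reusable
    IP header fields; a single key is used here for brevity)."""
    if random.random() < marking_rate(load):
        packet["mark"] = router_ip
    return packet
```

The key design point the abstract highlights is that the marking rate is a function of router load, so traceback capability degrades gracefully instead of overloading a busy router.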
Concerns about visual privacy have been increasingly raised along with the dramatic growth in image and video capture and sharing. Meanwhile, with recent breakthroughs in deep learning technologies, visual data can now be easily gathered and processed to infer sensitive information. Therefore, visual privacy in the context of deep learning is now an important and challenging topic. However, there has been no systematic study on this topic to date. In this survey, we discuss algorithms for visual privacy attacks and the corresponding defense mechanisms in deep learning. We analyze the privacy issues in both visual data and visual deep learning systems. We show that deep learning can be used as a powerful privacy attack tool as well as a privacy preservation technique with great potential. We also point out possible directions and suggestions for future work. By thoroughly investigating the relationship between visual privacy and deep learning, this article offers insights into incorporating privacy requirements in the deep learning era.
Online social networks provide an unprecedented opportunity for researchers to analyze various social phenomena. These network data are normally represented as graphs, which contain a great deal of sensitive individual information. Publishing these graph data would violate users' privacy. Differential privacy is one of the most influential privacy models that provides a rigorous privacy guarantee for data release. However, existing works on graph data publishing cannot provide accurate results when releasing a large number of queries. In this paper, we propose a graph update method that transforms the query release problem into an iterative process in which a large set of queries is used as the update criterion. Compared with existing works, the proposed method enhances the accuracy of query results. Extensive experiments show that the proposed solution outperforms two state-of-the-art methods, the Laplace method and the correlated method, in terms of Mean Absolute Value. This means our method retains more utility of the queries while preserving privacy.