Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Transactional data collection and sharing currently face the challenge of how to prevent information leakage and protect data from privacy breaches while maintaining high-quality data utility. Data anonymization methods such as perturbation, generalization, and suppression have been proposed for privacy protection. However, many of these methods incur excessive information loss and cannot satisfy multipurpose utility requirements. In this paper, we propose a multidimensional generalization method to provide multipurpose optimization when anonymizing transactional data, in order to offer better data utility for different applications. Our methodology uses bipartite graphs, generalizing attributes, grouping items, and perturbing outliers. Experiments on real-life datasets show that our solution considerably improves data utility compared to existing algorithms.
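To make the generalization step concrete, here is a minimal sketch of attribute generalization (the hierarchy and field names are illustrative, not the paper's bipartite-graph method): exact ages are coarsened to decade ranges so that individual records become less identifiable while remaining useful for coarse analysis.

```python
# Illustrative generalization hierarchy: age -> decade range.
def generalize_age(age):
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

def generalize(records):
    """Replace the exact 'age' in each record with its decade range."""
    return [{**r, "age": generalize_age(r["age"])} for r in records]

data = [{"age": 34, "item": "milk"}, {"age": 37, "item": "bread"}]
published = generalize(data)
```

After generalization both records fall into the same "30-39" group, so neither can be singled out by age alone.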

2.
Social networks collect enormous amounts of user personal and behavioral data, which could threaten users' privacy if published or shared directly. Privacy-preserving graph publishing (PPGP) can make user data available while protecting private information. For this purpose, anonymization methods such as perturbation and generalization are commonly used in PPGP. However, traditional anonymization methods struggle to balance high-level privacy and utility, are ineffective at defending against both various link and hybrid inference attacks, and are vulnerable to graph neural network (GNN)-based attacks. To solve these problems, we present a novel privacy-disentangled approach that separates private from non-private information for a better privacy-utility trade-off. Moreover, we propose a unified graph deep learning framework for PPGP, denoted privacy-disentangled variational information bottleneck (PDVIB). Using low-dimensional perturbations, the model generates an anonymized graph to defend against various inference attacks, including GNN-based attacks. In particular, the model fits various privacy settings by employing adjustable perturbations at the node level. On three real-world datasets, PDVIB is demonstrated to generate robust anonymized graphs that defend against various privacy inference attacks while maintaining the utility of non-private information.

3.
In multiparty collaborative data mining, participants contribute their own data sets and hope to collaboratively mine a comprehensive model based on the pooled data. How to efficiently mine a quality model without breaching each party's privacy is the major challenge. In this paper, we propose an approach based on geometric data perturbation and a service-oriented data mining framework. The key problem in applying geometric data perturbation to multiparty collaborative mining is securely unifying the multiple geometric perturbations preferred by the different parties. We have developed three protocols for perturbation unification. Our approach has three unique features compared to existing approaches: 1) with geometric data perturbation, the protocols work for many popular data mining algorithms, whereas most other approaches are designed for one particular mining algorithm; 2) both major factors, data utility and privacy guarantee, are well preserved, compared to other perturbation-based approaches; and 3) two of the three proposed protocols scale well with the number of participants, whereas many existing cryptographic approaches consider only two or a few more participants. We also study the different features of the three protocols and show their respective advantages in experiments.
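The reason geometric perturbation works for so many mining algorithms is that it applies an isometry: pairwise distances survive, so distance-based miners behave identically on the perturbed data. A minimal single-party sketch (one fixed 2-D rotation; the paper's secure multiparty unification of per-party transforms is not shown):

```python
import math

def rotate2d(points, theta):
    """Geometric perturbation by rotation: an isometry, so pairwise
    distances -- and hence distance-based mining results -- are preserved."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

original = [(0.0, 0.0), (3.0, 4.0)]
perturbed = rotate2d(original, theta=1.234)
```

The individual coordinates change (hiding the raw values), yet the distance between the two points is unchanged.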

4.
The development of information technology has brought convenience to people's lives, but also the risk of personal privacy disclosure; data anonymization is an effective way to prevent privacy leakage. However, existing anonymization methods mainly consider severing the association between quasi-identifier attributes and sensitive attributes, without considering the functional dependencies that exist among quasi-identifier attributes, or between quasi-identifier attributes and sensitive attributes. To address this problem in privacy-preserving data publishing, this paper studies how to effectively protect users' private information when functional dependencies exist in the data. First, for datasets containing functional dependencies, an (l,α)-diversity privacy model is proposed. Second, to better protect user privacy and increase data utility, an anonymization algorithm is proposed that combines perturbation with generalization/suppression in a hybrid approach. Finally, experiments verify the effectiveness and efficiency of the algorithm, and the results are analyzed theoretically.

5.
Various organizations collect data about individuals for various reasons, such as service improvement. In order to mine the collected data for useful information, publishing it to data analysts, research institutes, or simply the general public has become a common practice among those organizations. The quality of published data significantly affects the accuracy of the data analysis and thus affects decision making at the corporate level. In this study, we explore the research area of privacy-preserving data publishing, i.e., publishing high-quality data without compromising the privacy of the individuals whose data are being published. Syntactic privacy models, such as k-anonymity, impose syntactic privacy requirements and make certain assumptions about an adversary's background knowledge. To address this shortcoming, we adopt differential privacy, a rigorous privacy model that is independent of any adversary's knowledge and insensitive to the underlying data. The published data should preserve individuals' privacy, yet remain useful for analysis. To maintain data utility, we propose DiffMulti, a workload-aware and differentially private algorithm that employs multidimensional generalization. We devise an efficient implementation of the proposed algorithm and use a real-life data set for experimental analysis. We evaluate the performance of our method in terms of data utility, efficiency, and scalability. When compared to closely related existing methods, DiffMulti significantly improved data utility, in some cases by orders of magnitude.
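The differential-privacy building block such algorithms rest on is the Laplace mechanism; a minimal sketch (DiffMulti's workload-aware multidimensional generalization is not reproduced here, and the query values are illustrative):

```python
import math
import random

rng = random.Random(42)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Adding Laplace(sensitivity/epsilon) noise makes one numeric query
    epsilon-differentially private."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of a Laplace variate.
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

# e.g. a counting query (sensitivity 1) answered with epsilon = 0.5
noisy_count = laplace_mechanism(100, sensitivity=1, epsilon=0.5)
```

Smaller epsilon means a larger noise scale: stronger privacy, noisier answers.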

6.
K-anonymisation is an approach to protecting individuals from being identified from data. Good k-anonymisations should retain data utility and preserve privacy, but few methods have considered these two conflicting requirements together. In this paper, we extend our previous work on a clustering-based method for balancing data utility and privacy protection, and propose a set of heuristics to improve its effectiveness. We introduce new clustering criteria that treat utility and privacy on equal terms and propose sampling-based techniques to optimally set up its parameters. Extensive experiments show that the extended method achieves good accuracy in query answering and is able to prevent linking attacks effectively.
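The property such clustering methods enforce is easy to check even though achieving it well is hard; a minimal sketch of the k-anonymity test itself (the example table and quasi-identifiers are illustrative, and the paper's clustering heuristics are not reproduced):

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True iff every combination of quasi-identifier values is shared by
    at least k records, so no record is linkable within its group."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

table = [
    {"zip": "130**", "age": "30-39", "disease": "flu"},
    {"zip": "130**", "age": "30-39", "disease": "cold"},
    {"zip": "148**", "age": "20-29", "disease": "flu"},
]
ok = is_k_anonymous(table, ["zip", "age"], k=2)   # the lone 148** record violates k=2
```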

7.
There has been relatively little work on privacy-preserving techniques for distance-based mining. The most widely used are additive perturbation methods and orthogonal-transform-based methods. These methods concentrate on privacy protection in the average case and provide no worst-case privacy guarantee. However, the lack of a privacy guarantee makes these techniques difficult to use in practice and leaves possible privacy breaches under certain attacks. This paper proposes a novel privacy protection method for distance-based mining algorithms that gives worst-case privacy guarantees and protects the data against correlation-based and transform-based attacks. The method has three novel aspects. First, it uses a framework that provides a theoretical bound on privacy breach in the worst case, with easy-to-check conditions by which one can determine whether a method provides a worst-case guarantee. A quick examination shows that special types of noise such as Laplace noise provide a worst-case guarantee, while most existing methods, such as adding normal or uniform noise, as well as the random projection method, do not. Second, the proposed method combines the favorable features of additive perturbation and orthogonal transform methods. It uses principal component analysis to decorrelate the data and thus guards against attacks based on data correlations. It then adds Laplace noise to guard against attacks that can recover the PCA transform. Third, the proposed method improves the accuracy of one of the popular distance-based classification algorithms, K-nearest neighbor classification, by taking into account the degree of distance distortion introduced by sanitization. Extensive experiments demonstrate the effectiveness of the proposed method.
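A minimal sketch of the paper's two-step idea, with illustrative data and noise scale (the worst-case bound analysis and KNN correction are not reproduced): decorrelate with PCA so correlation-based attacks lose their lever, then add Laplace noise in the component space.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_laplace_perturb(X, noise_scale):
    """Decorrelate rows of X via PCA, add Laplace noise there,
    then map the result back to the original attribute space."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, V = np.linalg.eigh(np.cov(Xc, rowvar=False))  # PCA directions
    Z = Xc @ V                                       # decorrelated coordinates
    Z = Z + rng.laplace(scale=noise_scale, size=Z.shape)
    return Z @ V.T + mu                              # back to original space

X = rng.normal(size=(200, 3))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=200)       # strongly correlated columns
X_priv = pca_laplace_perturb(X, noise_scale=0.2)
```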

8.
Random-data perturbation techniques and privacy-preserving data mining
Privacy is becoming an increasingly important issue in many data-mining applications. This has triggered the development of many privacy-preserving data-mining techniques. A large fraction of them use randomized data-distortion techniques to mask the data for preserving the privacy of sensitive data. This methodology attempts to hide the sensitive data by randomly modifying the data values often using additive noise. This paper questions the utility of the random-value distortion technique in privacy preservation. The paper first notes that random matrices have predictable structures in the spectral domain and then it develops a random matrix-based spectral-filtering technique to retrieve original data from the dataset distorted by adding random values. The proposed method works by comparing the spectrum generated from the observed data with that of random matrices. This paper presents the theoretical foundation and extensive experimental results to demonstrate that, in many cases, random-data distortion preserves very little data privacy. The analytical framework presented in this paper also points out several possible avenues for the development of new privacy-preserving data-mining techniques. Examples include algorithms that explicitly guard against privacy breaches through linear transformations, exploiting multiplicative and colored noise for preserving privacy in data mining applications.
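The spectral-filtering attack can be sketched in a few lines (parameters, ranks, and noise level below are illustrative, and this is a simplified variant of the paper's technique): covariance eigenvalues of the noisy data that exceed the random-matrix (Marchenko-Pastur) noise edge mark signal directions, and projecting onto them strips much of the additive "privacy" noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Low-rank "true" data, published with additive Gaussian perturbation.
n, d, rank, sigma = 500, 20, 2, 0.5
X = rng.normal(size=(n, rank)) @ rng.normal(size=(rank, d))   # true data
Y = X + rng.normal(scale=sigma, size=(n, d))                  # perturbed release

# Spectral filtering: keep only eigen-directions above the pure-noise edge.
vals, vecs = np.linalg.eigh(np.cov(Y, rowvar=False))
edge = sigma**2 * (1 + np.sqrt(d / n))**2     # Marchenko-Pastur upper edge
signal = vecs[:, vals > edge]
X_hat = (Y - Y.mean(axis=0)) @ signal @ signal.T + Y.mean(axis=0)

err_noisy = np.linalg.norm(Y - X)       # error an attacker starts with
err_filtered = np.linalg.norm(X_hat - X)  # error after filtering
```

Because the filtered estimate is much closer to the true data than the published release, the additive noise bought far less privacy than its magnitude suggests, which is the paper's point.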

9.
Data collection is a necessary step in the data mining process. Due to privacy concerns, collecting data from different parties becomes difficult: privacy concerns may prevent the parties from directly sharing the data and certain types of information about the data. How multiple parties can collaboratively conduct data mining without breaching data privacy presents a challenge. The objective of this paper is to provide solutions for privacy-preserving collaborative data mining problems. In particular, we illustrate how to conduct privacy-preserving naive Bayesian classification, one of the data mining tasks. To measure the privacy level of privacy-preserving schemes, we propose a definition of privacy and show that our solutions preserve data privacy.

10.
In real-time monitoring systems, participants' privacy can easily be exposed when adversaries obtain the time series of sensing measurements accurately. To address these privacy issues, a number of privacy-preserving schemes have been designed for various monitoring applications. However, these schemes either lack consideration of temporal privacy, offer little resistance to filtering attacks, or cause time delay with low utility. In this paper, we introduce a lightweight temporal-perturbation-based scheme in which sensor readings are buffered and disordered to obfuscate the temporal information of the original measurement stream with differential privacy. In addition, we design the operations on the system server side to exploit the data utility in measurements from a large number of sensors. We evaluate the performance of the proposed scheme through both rigorous theoretical analysis and extensive simulation experiments in comparison with related existing schemes. Evaluation results show that the proposed scheme preserves both temporal privacy and measurement privacy with filter resistance, and achieves better performance in terms of computational overhead, data utility of real-time aggregation, and individual accumulation.
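A minimal sketch of the buffer-and-disorder idea (the paper additionally calibrates the perturbation to satisfy differential privacy, which is not shown; buffer size and stream values are illustrative): readings are released in shuffled batches, so an adversary filtering the stream cannot pin down the exact time of any individual measurement.

```python
import random

def temporal_perturb(stream, buffer_size, rng):
    """Buffer readings and release each full buffer in shuffled order,
    obfuscating the temporal order within every batch."""
    out, buf = [], []
    for reading in stream:
        buf.append(reading)
        if len(buf) == buffer_size:
            rng.shuffle(buf)
            out.extend(buf)
            buf = []
    rng.shuffle(buf)          # flush any remainder
    out.extend(buf)
    return out

released = temporal_perturb(list(range(10)), buffer_size=4, rng=random.Random(7))
```

Every reading is eventually released exactly once (measurement utility is kept); only its position inside a batch, i.e. its timing, is hidden.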

11.
With the prevalence of cloud computing, data owners are motivated to outsource their databases to the cloud server. However, to preserve data privacy, sensitive private data have to be encrypted before outsourcing, which makes data utilization a very challenging task. Existing works either focus on keyword search and single-dimensional range queries, or suffer from inadequate security guarantees and inefficiency. In this paper, we consider the problem of multidimensional private range queries over encrypted cloud data. To solve the problem, we systematically establish a set of privacy requirements for multidimensional private range queries, and propose a multidimensional private range query (MPRQ) framework based on private block retrieval (PBR), in which data owners keep the query private from the cloud server. To achieve both efficiency and privacy goals, we present an efficient and fully privacy-preserving range query (PPRQ) protocol using batch codes and a multiplication-avoiding technique. To the best of our knowledge, PPRQ is the first to protect the query, the access pattern, and single-dimensional privacy simultaneously while achieving efficient range queries. Moreover, PPRQ is secure in the cryptographic sense against semi-honest adversaries. Experiments on real-world datasets show that the computation and communication overhead of PPRQ is modest.

12.
Privacy is an important issue in data publishing. Many organizations distribute non-aggregate personal data for research, and they must take steps to ensure that an adversary cannot predict sensitive information pertaining to individuals with high confidence. This problem is further complicated by the fact that, in addition to the published data, the adversary may also have access to other resources (e.g., public records and social networks relating individuals), which we call adversarial knowledge. A robust privacy framework should allow publishing organizations to analyze data privacy by means of not only data dimensions (data that a publishing organization has), but also adversarial-knowledge dimensions (information not in the data). In this paper, we first describe a general framework for reasoning about privacy in the presence of adversarial knowledge. Within this framework, we propose a novel multidimensional approach to quantifying adversarial knowledge. This approach allows the publishing organization to investigate privacy threats and enforce privacy requirements in the presence of various types and amounts of adversarial knowledge. Our main technical contributions include a multidimensional privacy criterion that is more intuitive and flexible than previous approaches to modeling background knowledge. In addition, we identify an important congregation property of the adversarial-knowledge dimensions. Based on this property, we provide algorithms for measuring disclosure and sanitizing data that improve computational efficiency by several orders of magnitude over the best known techniques.

13.
To preserve client privacy in the data mining process, a variety of techniques based on random perturbation of individual data records have been proposed recently. In this paper, we present FRAPP, a generalized matrix-theoretic framework of random perturbation, which facilitates a systematic approach to the design of perturbation mechanisms for privacy-preserving mining. Specifically, FRAPP is used to demonstrate that (a) the prior techniques differ only in their choices for the perturbation matrix elements, and (b) a symmetric positive-definite perturbation matrix with minimal condition number can be identified, substantially enhancing the accuracy even under strict privacy requirements. We also propose a novel perturbation mechanism wherein the matrix elements are themselves characterized as random variables, and demonstrate that this feature provides significant improvements in privacy at only a marginal reduction in accuracy. The quantitative utility of FRAPP, which is a general-purpose random-perturbation-based privacy-preserving mining technique, is evaluated specifically with regard to association and classification rule mining on a variety of real datasets. Our experimental results indicate that, for a given privacy requirement, either substantially lower modeling errors are incurred as compared to the prior techniques, or the errors are comparable to those of direct mining on the true database. A partial and preliminary version of this paper appeared in the Proc. of the 21st IEEE Intl. Conf. on Data Engineering (ICDE), Tokyo, Japan, 2005, pp. 193-204.
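One simple member of the perturbation-matrix family FRAPP generalizes is uniform randomized response; a minimal sketch (the paper derives optimized matrices, e.g. with minimal condition number, which this sketch does not attempt, and the categories and retention probability are illustrative):

```python
import random

rng = random.Random(3)

def perturb_category(value, categories, p_keep):
    """Report the true category with probability p_keep; otherwise report
    a uniformly random *different* category. This corresponds to a
    perturbation matrix with p_keep on the diagonal and equal off-diagonals."""
    if rng.random() < p_keep:
        return value
    return rng.choice([c for c in categories if c != value])

cats = ["A", "B", "C", "D"]
reports = [perturb_category("A", cats, p_keep=0.8) for _ in range(20000)]
```

Because the perturbation matrix is known, aggregate frequencies over the true data can still be estimated from the reports by inverting it, which is what keeps mining accuracy acceptable.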

14.
Privacy Preserving Data Mining (PPDM) can prevent private data from disclosure during data mining. However, current PPDM methods damage the values of the original data, so that knowledge mined from the modified data cannot be verified against the original data. In this paper, we combine concepts and techniques of reversible data hiding to propose a reversible privacy-preserving data mining scheme that solves this irrecoverability problem of PPDM. In the proposed privacy difference expansion (PDE) method, the original data is perturbed and embedded with a fragile watermark to accomplish privacy preservation and data integrity of the mined data, and also to recover the original data. Experimental tests evaluate classification accuracy, probabilistic information loss, and privacy disclosure risk to assess the efficiency of PDE for privacy preservation and knowledge verification.
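The reversible-embedding primitive behind schemes of this kind is difference expansion; a minimal sketch (the paper's perturbation and fragile-watermark layers are not reproduced): one bit is hidden in an integer pair, and both the bit and the exact original pair can be recovered, which is what makes the scheme reversible.

```python
def de_embed(x, y, bit):
    """Hide one bit in the pair (x, y) by expanding their difference."""
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + bit                      # expand the difference, append the bit
    return l + (h2 + 1) // 2, l - (h2 // 2)

def de_extract(x2, y2):
    """Recover the hidden bit and the exact original pair."""
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 & 1, h2 >> 1              # peel the bit, shrink the difference
    return (l + (h + 1) // 2, l - (h // 2)), bit
```

The round trip is lossless: extraction returns the original integers exactly, so mined knowledge can later be verified against the true data.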

15.
Due to growing concerns about the privacy of personal information, organizations that use their customers' records in data mining activities are forced to take actions to protect the privacy of the individuals. A frequently used disclosure protection method is data perturbation. When used for data mining, it is desirable that perturbation preserves statistical relationships between attributes, while providing adequate protection for individual confidential data. To achieve this goal, we propose a kd-tree based perturbation method, which recursively partitions a data set into smaller subsets such that data records within each subset are more homogeneous after each partition. The confidential data in each final subset are then perturbed using the subset average. An experimental study is conducted to show the effectiveness of the proposed method.
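A minimal sketch of the kd-tree-style recursion (group ordering rather than record ordering is kept, a simplification of the paper's method; the data and group size are illustrative): recursively split on the widest-spread attribute until groups are small, then publish each record as its group's average.

```python
def microaggregate(points, max_group):
    """kd-tree style perturbation: partition until groups have at most
    max_group points, then replace each point by its group centroid."""
    if len(points) <= max_group:
        n = len(points)
        centroid = tuple(sum(col) / n for col in zip(*points))
        return [centroid] * n
    # Split on the attribute with the largest spread (kd-tree heuristic).
    spread = lambda i: max(p[i] for p in points) - min(p[i] for p in points)
    d = max(range(len(points[0])), key=spread)
    pts = sorted(points, key=lambda p: p[d])
    mid = len(pts) // 2
    return microaggregate(pts[:mid], max_group) + microaggregate(pts[mid:], max_group)

data = [(1.0, 10.0), (2.0, 11.0), (9.0, 1.0), (10.0, 2.0)]
published = microaggregate(data, max_group=2)
```

Individual values are hidden inside their group average, yet column sums (and hence means) are preserved exactly, keeping statistical relationships usable.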

16.
Many recent applications depend on time series of data containing personal information. For example, the smart grid collects and distributes time series of energy-consumption data from households. Our concern is information hiding in such data according to individual privacy constraints, considering several constraints at a time. The existing information-hiding approaches we are aware of make limiting assumptions regarding the nature of such constraints. Our approach in turn lets the individuals concerned specify information that must be hidden arbitrarily, and it also lets the data receivers specify characteristics of the data needed to perform a certain task. We use these constraints to formulate an optimization problem that generates perturbed time series that fulfill the constraints of the data receivers and do not contain more sensitive information than allowed. Next, we propose a complexity-reduction approach that speeds up solving this optimization problem for time series by orders of magnitude. Three case studies on real-world data confirm that our approach is applicable to a wide range of application domains, and that it provides more protection against well-known privacy attacks such as re-identification, reconstruction and disaggregation. In addition, we provide a Java implementation of our approach and supplementary material on our web page.

17.
Due to recent advances in data collection and processing, data publishing has been adopted by some organizations for scientific and commercial purposes. Published data should be anonymized so that they stay useful while the privacy of data respondents is preserved. Microaggregation is a popular mechanism for data anonymization, but it naturally operates on numerical datasets. Real-world data, however, are usually mixed, i.e., contain both numeric and categorical attributes. In this paper, we propose a novel transformation-based method for microaggregation of mixed data, called TBM. The method uses multidimensional scaling to generate a numeric equivalent of the mixed dataset. The partitioning step of microaggregation is performed on the equivalent dataset, but the aggregation step on the original data. TBM can microaggregate large mixed datasets in a short time with low information loss. Experimental results show that the proposed method attains a better trade-off between data utility and privacy in a shorter time than traditional methods.

18.
Privacy-preserving technology is an important safeguard against privacy disclosure in cloud computing environments, and measuring the risk of such disclosure reflects the protection strength of a privacy-preserving technique, helping to build better privacy-preserving schemes; privacy metrics are therefore of great significance for privacy protection. This paper surveys existing privacy metrics for cloud data. First, it gives an overview of privacy-preserving techniques and privacy metrics, presents a method for quantifying an adversary's background knowledge, and proposes performance evaluation indicators and a comprehensive evaluation framework for cloud-data privacy-preserving techniques. It then proposes an abstract model of cloud-data privacy measurement and elaborates, from the perspectives of working principle and concrete implementation, four classes of privacy metrics: those based on anonymity, on information entropy, on set-pair analysis theory, and on differential privacy. Next, it analyzes and summarizes the advantages, disadvantages, and scope of application of these four classes with respect to their metric indicators and measurement effect. Finally, it points out development trends and open problems in cloud-data privacy measurement from the perspectives of the measurement process, its effect, and its methods.
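Of the surveyed metric classes, the information-entropy one is the easiest to sketch (the example sensitive values are illustrative): the attacker's uncertainty about a sensitive value inside an anonymized group, measured in bits, where higher entropy means stronger protection.

```python
import math
from collections import Counter

def sensitive_entropy(values):
    """Shannon entropy (bits) of the attacker's posterior over the sensitive
    attribute within one anonymized group."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

uniform = sensitive_entropy(["flu", "cold", "hiv", "ulcer"])   # maximal uncertainty
skewed = sensitive_entropy(["flu", "flu", "flu", "hiv"])       # attacker guesses "flu"
```

A group whose sensitive values are all identical has entropy 0: the metric flags it as offering no protection at all.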

19.
Clustering-oriented data hiding usually uses data perturbation to prevent the disclosure of sensitive information. To address the low privacy-protection strength of existing clustering-oriented perturbation methods, this paper proposes a data perturbation method based on plane reflection: all attributes of the objects to be published are paired to form points in the plane, a line is chosen at random, and each point is replaced by its mirror image across that line; the transformed data are then published. Experimental results show that the method achieves good privacy protection and clustering utility, and adapts well to high-dimensional data.
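A minimal sketch of the reflection step (the line here is fixed for reproducibility; the method chooses it at random): each attribute-pair point is mapped to its mirror image across a line ax + by + c = 0. Reflections are isometries, so pairwise distances, and hence clustering structure, survive, which is what gives the published data its clustering utility.

```python
import math

def reflect(point, a, b, c):
    """Mirror (x, y) across the line a*x + b*y + c = 0."""
    x, y = point
    t = (a * x + b * y + c) / (a * a + b * b)
    return x - 2 * a * t, y - 2 * b * t

p, q = (2.0, 5.0), (7.0, 1.0)
p2, q2 = reflect(p, 1, -1, 0), reflect(q, 1, -1, 0)   # the line y = x
```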

20.
In this paper, we study the problem of protecting privacy in the publication of set-valued data. Consider a collection of supermarket transactions that contains detailed information about items bought together by individuals. Even after removing all personal characteristics of the buyer, which can serve as links to his identity, the publication of such data is still subject to privacy attacks from adversaries who have partial knowledge about the set. Unlike most previous works, we do not distinguish data as sensitive and non-sensitive, but we consider them both as potential quasi-identifiers and potential sensitive data, depending on the knowledge of the adversary. We define a new version of the k-anonymity guarantee, k^m-anonymity, to limit the effects of the data dimensionality, and we propose efficient algorithms to transform the database. Our anonymization model relies on generalization instead of suppression, which is the most common practice in related works on such data. We develop an algorithm that finds the optimal solution, however, at a high cost that makes it inapplicable for large, realistic problems. Then, we propose a greedy heuristic, which performs generalizations in an Apriori, level-wise fashion. The heuristic scales much better and in most of the cases finds a solution close to the optimal. Finally, we investigate the application of techniques that partition the database and perform anonymization locally, aiming at the reduction of the memory consumption and further scalability. A thorough experimental evaluation with real datasets shows that a vertical partitioning approach achieves excellent results in practice.
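The k^m-anonymity guarantee itself can be sketched as a brute-force check (the paper's Apriori-style generalization algorithms are not reproduced; the example database is illustrative): every itemset of size at most m must appear in at least k transactions, or in none, so an adversary who knows up to m of a person's items cannot isolate fewer than k candidate buyers.

```python
from collections import Counter
from itertools import combinations

def is_km_anonymous(transactions, k, m):
    """True iff every itemset of size <= m occurs in >= k transactions or none."""
    for size in range(1, m + 1):
        counts = Counter()
        for t in transactions:
            counts.update(combinations(sorted(set(t)), size))
        if any(0 < c < k for c in counts.values()):
            return False
    return True

db = [{"beer", "chips"}, {"beer", "chips"}, {"beer", "wine"}]
```

Here an adversary who knows someone bought wine isolates a single transaction, so the database fails even 2^1-anonymity; generalizing "wine" and "beer" to "drinks" would repair it.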
