Similar Literature

20 similar documents found (search time: 26 ms)
1.
Abstract

Congress and the courts have had to address an inevitable result of new technology: the erosion of personal privacy. While our economy depends on the free flow of information, the individual's right to privacy, personal dignity, and anonymity must be preserved. Courts have long recognized that the law must protect private matters and personal data against governmental and business intrusion.

2.
Anonymization is a practical approach to protecting privacy in data. The main objective of privacy-preserving data publishing is to protect private information while keeping the data useful for intended applications, such as building classification models. In this paper, we argue that data generalization in anonymization should be determined by the classification capability of the data rather than by the privacy requirement. We use mutual information to measure the classification capability of a generalization, and propose two k-anonymity algorithms that produce anonymized tables for building accurate classification models. The algorithms generalize attributes to maximize classification capability, and then suppress values to meet a privacy requirement k (IACk) or distributional constraints (IACc). Experimental results show that IACk supports more accurate classification models and is faster than a benchmark utility-aware data anonymization algorithm.
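To make the two-step idea concrete, here is a minimal sketch of mutual-information-guided generalization followed by k-anonymity suppression. The toy data, two-level hierarchies, and function names are illustrative assumptions, not the paper's actual IACk implementation.

```python
# Sketch: pick the generalization that preserves the most class information,
# then suppress rare values to meet the k-anonymity requirement.
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X; Y) in bits for two paired categorical sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

def best_generalization(values, labels, hierarchies):
    """Pick the generalization map that keeps the most class information."""
    return max(hierarchies,
               key=lambda h: mutual_information([h[v] for v in values], labels))

def suppress_for_k(values, k):
    """Replace generalized values occurring fewer than k times with '*'."""
    counts = Counter(values)
    return [v if counts[v] >= k else '*' for v in values]

ages = [23, 25, 37, 41, 44, 62, 64, 66]
labels = ['low', 'low', 'mid', 'mid', 'mid', 'high', 'high', 'high']
# Two candidate hierarchies: decade buckets vs. coarse life stages.
decades = {a: f"{a // 10 * 10}s" for a in ages}
stages = {a: ('young' if a < 35 else 'middle' if a < 55 else 'senior') for a in ages}
h = best_generalization(ages, labels, [decades, stages])
print(suppress_for_k([h[a] for a in ages], k=2))
```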

3.
Location privacy: going beyond K-anonymity, cloaking and anonymizers
With many location-based services, it is implicitly assumed that the location server receives users' actual locations in order to respond to their spatial queries. Consequently, information customized to their locations, such as the nearest points of interest, can be provided. However, there is a major privacy concern over sharing such sensitive information with potentially malicious servers, jeopardizing users' private information. The anonymity- and cloaking-based approaches proposed to address this problem cannot provide stringent privacy guarantees without incurring costly computation and communication overhead. Furthermore, they require a trusted intermediate anonymizer to protect user locations during query processing. This paper proposes a fundamental approach based on private information retrieval (PIR) to process range and K-nearest-neighbor queries, the prevalent queries in many location-based services, with stronger privacy guarantees than the cloaking and anonymity approaches. We performed extensive experiments on both real-world and synthetic datasets to confirm the effectiveness of our approaches.
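As background on the retrieval primitive, here is the classic two-server XOR-based PIR over a table of fixed-width POI records: the client learns record i while neither server alone learns anything about i. This is only an illustration of PIR in general; the paper's protocol for range and KNN queries is substantially more involved.

```python
# Minimal two-server XOR-based PIR over a POI table.
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def client_queries(n, i):
    """Random subset S and S xor {i}: each selector alone is uniformly random."""
    s1 = [secrets.randbelow(2) for _ in range(n)]
    s2 = s1.copy()
    s2[i] ^= 1
    return s1, s2

def server_answer(db, selector):
    """XOR of all selected records; the server cannot tell which index differs."""
    acc = bytes(len(db[0]))
    for rec, bit in zip(db, selector):
        if bit:
            acc = xor_bytes(acc, rec)
    return acc

db = [b'POI:cafe', b'POI:bank', b'POI:fuel', b'POI:park']  # fixed-width records
q1, q2 = client_queries(len(db), i=2)
print(xor_bytes(server_answer(db, q1), server_answer(db, q2)))  # b'POI:fuel'
```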

4.
Various miniaturized computing devices that store our identity information are emerging rapidly and are likely to become ubiquitous in the future. They allow private information to be exposed and accessed easily via wireless networks. When identity and context information is gathered by pervasive computing devices, personal privacy may be sacrificed to a greater extent than ever before. The people whose information is targeted may differ in privacy-protection skills, awareness, and privacy preferences. In this research, we studied the following issues and their relations: (a) identity information that people consider important to keep private; (b) actions that people claim to take to protect their identities and privacy; (c) privacy concerns; (d) how people expose their identity information in pervasive computing environments; and (e) how our RationalExposure model can help minimize unnecessary identity exposure. We conducted the research in three stages: a comprehensive survey and two in-lab experiments. We built a simulated pervasive computing shopping system, called InfoSource, consisting of two applications and our RationalExposure model. Our data show that identity exposure decisions depended on participants' attitudes about maintaining privacy, but not on the concerns or security actions they claimed to have taken. Our RationalExposure model did help participants reduce unnecessary disclosures.

5.
Edge computing combined with artificial intelligence (AI) has enabled the timely processing and analysis of streaming data produced by intelligent IoT applications. However, it introduces privacy risks due to the data exchanges between local devices and untrusted edge servers. The powerful analytical capability of AI further exacerbates these risks because it can infer private information even from insensitive data. In this paper, we propose a privacy-preserving IoT streaming data analytical framework based on edge computing, called PrivStream, to prevent an untrusted edge server from making sensitive inferences from IoT streaming data. It uses a carefully designed deep learning model to filter sensitive information and combines this with differential privacy to protect against the untrusted edge server. Noise is also injected during the training phase to make PrivStream robust to differential privacy noise. Taking into account the dynamic and real-time characteristics of streaming data, we realize PrivStream with two types of models that process data segments of fixed and variable length, respectively, and implement it on a distributed streaming platform to achieve real-time streaming data transmission. We theoretically prove that PrivStream satisfies ε-differential privacy and experimentally demonstrate that it outperforms the state of the art with acceptable computation and storage overheads.
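A minimal sketch of the differential-privacy layer in such a pipeline: Laplace noise calibrated to sensitivity/ε is added to the filtered feature vector before it leaves the device. The clipping-based stand-in for the deep filtering model and all parameters are assumptions for illustration, not the paper's trained model.

```python
# Laplace mechanism applied to locally filtered features before upload.
import numpy as np

def laplace_mechanism(features, sensitivity, epsilon, rng=None):
    """Add Laplace(sensitivity/epsilon) noise, satisfying epsilon-DP
    for outputs with the given L1 sensitivity."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return features + rng.laplace(0.0, scale, size=features.shape)

def upload(segment, epsilon=1.0):
    # Stand-in for the deep filtering model: clip to bound sensitivity.
    filtered = np.clip(np.asarray(segment, dtype=float), 0.0, 1.0)
    return laplace_mechanism(filtered, sensitivity=1.0, epsilon=epsilon)

print(upload([0.2, 0.9, 0.4]))  # noisy features sent to the edge server
```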

6.
Mobile devices with global positioning capabilities allow users to retrieve points of interest (POI) in their proximity. To protect user privacy, it is important not to disclose exact user coordinates to untrusted entities that provide location-based services. Currently, there are two main approaches to protecting user location privacy: (i) hiding locations inside cloaking regions (CRs) and (ii) encrypting location data using private information retrieval (PIR) protocols. Previous work focused on finding good trade-offs between privacy and the performance of user-protection techniques, but disregarded the important issue of protecting the POI dataset D. For instance, location cloaking requires large CRs, leading to excessive disclosure of POIs (O(|D|) in the worst case). PIR, on the other hand, reduces this bound to \(O(\sqrt{|D|})\), but at the expense of high processing and communication overhead. We propose hybrid, two-step approaches to private location-based queries which protect both the users and the database. In the first step, user locations are generalized to coarse-grained CRs which provide strong privacy. Next, a PIR protocol is applied with respect to the obtained query CR. To protect against excessive disclosure of POI locations, we devise two cryptographic protocols that privately evaluate whether a point is enclosed inside a rectangular region or a convex polygon. We also introduce algorithms to efficiently support PIR on dynamic POI subsets. We provide solutions for both approximate and exact NN queries. In the approximate case, our method discloses O(1) POIs, orders of magnitude fewer than CR- or PIR-based techniques. In the exact case, we obtain optimal disclosure of a single POI, though with slightly higher computational overhead. Experimental results show that the hybrid approaches are scalable in practice and outperform the pure-PIR approach in computational and communication overhead.
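A sketch of the two-step hybrid flow: (1) generalize the exact location to a coarse grid cell (the cloaking region), then (2) run PIR only against the POIs indexed under that cell. The grid size, the MockPIRClient, and the pir_fetch stub are assumptions for illustration; the paper additionally layers cryptographic point-in-region protocols on top of this flow.

```python
# Step 1: coarse cloaking. Step 2: PIR over the per-cell POI buckets.
def cloak(lat, lon, cell_deg=0.05):
    """Snap a coordinate to its enclosing grid cell (the CR sent upstream)."""
    return (int(lat / cell_deg), int(lon / cell_deg))

def pir_fetch(cell_index, pir_client):
    """Placeholder: privately retrieve the POI bucket for one cell,
    e.g. with a PIR scheme like the XOR construction sketched earlier."""
    return pir_client.retrieve(cell_index)

class MockPIRClient:
    def __init__(self, buckets):
        self.buckets = buckets
    def retrieve(self, idx):           # a real client would hide idx
        return self.buckets.get(idx, [])

buckets = {(681, -2446): ['cafe', 'bank'], (682, -2446): ['park']}
cr = cloak(34.09, -122.31)             # server sees only the coarse cell
print(cr, pir_fetch(cr, MockPIRClient(buckets)))
```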

7.
The rapid growth of contemporary social network sites (SNSs) has coincided with increasing concern over personal privacy. College students and adolescents routinely provide personal information on profiles that can be viewed by large numbers of unknown people and potentially used in harmful ways. SNSs like Facebook and MySpace allow users to control the privacy level of their profile, thus limiting access to this information. In this paper, we take the preference for privacy itself as our unit of analysis and analyze the factors that predict whether a student has a private versus a public profile. Drawing upon a new social network dataset based on Facebook, we argue that privacy behavior is an upshot of both social influences and personal incentives. Students are more likely to have a private profile if their friends and roommates have them; women are more likely to have private profiles than men; and having a private profile is associated with a higher level of online activity. Finally, students with private rather than public profiles are characterized by a unique set of cultural preferences, of which the "taste for privacy" may be only a small but integral part.

8.

This research focuses on the development and validation of an instrument to measure the privacy concerns of individuals who use the Internet, together with two antecedents: perceived vulnerability and perceived ability to control information. The results of exploratory factor analysis support the validity of the measures developed. In addition, regression analysis of a model including the three constructs provides strong support for the relationship between perceived vulnerability and privacy concerns, but only moderate support for the relationship between perceived ability to control information and privacy concerns. The latter, unexpected result suggests that the relationship between the hypothesized antecedents and privacy concerns may be more complex than the hypothesized model captures, given the strong theoretical justification for the role of information control in the extant literature on information privacy.

9.

In software-defined networking (SDN), controllers are sinks of information, such as network topology, collected from switches. Organizations often want to protect their internal network topology and keep their network policies private. We borrow techniques from secure multi-party computation (SMC) to preserve the privacy of SDN controllers' policies regarding router status. On the other hand, the number of controllers is one of the most important concerns for the scalability of SMC in SDNs. To address this issue, we formulate an optimization problem that minimizes the number of SDN controllers while accounting for their reliability in SMC operations. We use the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) to determine the optimal number of controllers, and simulate SMC for typical SDNs with this number of controllers. Simulation results show that applying SMC to preserve the privacy of organizational policies adds only a small delay, which is fully justified by the privacy gained.
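A minimal sketch of the kind of SMC building block involved: additive secret sharing over a prime field, letting controllers jointly sum private per-router status values without any single party seeing another's input. This illustrates the primitive only; the paper's actual protocol and the NSGA-II controller-placement step are not reproduced here, and the field size is an assumed choice.

```python
# Additive secret sharing: n parties jointly compute a sum, privately.
import secrets

P = 2**61 - 1  # prime modulus (assumed field size)

def share(value, n_parties):
    """Split value into n additive shares that sum to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def private_sum(inputs, n_parties=3):
    # Each controller shares its private value with all parties...
    all_shares = [share(v, n_parties) for v in inputs]
    # ...each party locally sums the shares it received...
    partials = [sum(col) % P for col in zip(*all_shares)]
    # ...and the partial sums combine into the public result.
    return sum(partials) % P

print(private_sum([12, 7, 30]))  # 49, with no single party seeing 12, 7, or 30
```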

10.

Privacy calculus theory, which frames disclosure as a trade-off between benefits and risks, is believed to explain people's willingness to disclose private information online. However, the privacy paradox, the inconsistency between stated preferences and actual behavior, does not fit this risk-benefit analysis. The privacy paradox matters because it reflects an illusion of personal control over privacy choices. The anomaly is perhaps attributable to cognitive heuristics and biases in privacy decision-making. We argue that the stability or instability of privacy choices better explains the mechanisms underlying this paradoxical relationship. A rebalanced trade-off, which embeds "bridging" and "bonding" social support in the privacy calculus, is derived to extend the risk-benefit paradigm and explain these mechanisms. In this study we address the underlying mechanisms of privacy choices in terms of self-disclosure and user resistance. To test the hypothesized mechanisms of the research model, we developed an instrument by adapting previously validated scales and collected a sample of 311 experienced Facebook users via an online questionnaire. The empirical results show that perceived benefits based on informational rather than emotional support motivate willingness to self-disclose, whereas privacy risks, rather than privacy concerns, inhibit it. The risk-benefit paradigms, rather than imbalanced trade-offs, help explain the instability of privacy choices, whereas classical privacy calculus assumes stability. Implications for the theory and practice of privacy choices are discussed.

11.
Location-based services (LBS) form a distributed, multi-party system that has created an opportunity for the rapid growth of mobile commerce applications; however, because these services are granted access to private information, they also pose substantial privacy risks to their users. To address this, we study models that can effectively protect user privacy, and propose an architecture and a protocol in which a location middleware matches region-of-interest information supplied by the LBS provider against user location information supplied by the mobile operator. The results show that the protocol makes privacy-friendly services possible while remaining efficient.
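A minimal sketch of the middleware matching step: the middleware receives areas of interest from the LBS provider and positions from the mobile operator, and forwards only match events, so neither side sees the other's raw data. The circular-area model and all names are illustrative assumptions, not the paper's protocol.

```python
# Location middleware: match areas of interest against user positions.
import math

def inside(area, position):
    """Is the user's position within a circular area of interest?"""
    (cx, cy), r = area
    x, y = position
    return math.hypot(x - cx, y - cy) <= r

def middleware_match(areas_from_lbs, positions_from_operator):
    """Emit (user, area_id) events only; raw coordinates never cross over."""
    return [(user, area_id)
            for area_id, area in areas_from_lbs.items()
            for user, pos in positions_from_operator.items()
            if inside(area, pos)]

areas = {'mall_promo': ((10.0, 20.0), 2.0)}
positions = {'alice': (11.0, 20.5), 'bob': (50.0, 50.0)}
print(middleware_match(areas, positions))  # [('alice', 'mall_promo')]
```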

12.
Abstract

While obvious security threats like fast-spreading worms tend to garner news headlines, other, stealthier security risks threaten businesses every day. A growing number of spyware and adware programs can facilitate the disclosure of business information, putting privacy, confidentiality, integrity, and system availability at risk. Corporations usually accumulate a vault of information that could cause serious problems if it were shared with the wrong contacts or, even worse, stolen. Spyware's evolution from simple cookies to a range of sophisticated user-tracking systems has left many businesses without control over their proprietary data and operations.

13.
A Survey on Local Differential Privacy
Ye Qingqing, Meng Xiaofeng, Zhu Minjie, Huo Zheng. 《软件学报》 (Journal of Software), 2018, 29(7): 1981-2005
With the continuing development of information technology in the big-data era, personal privacy is drawing ever more attention. How to publish and analyze data while guaranteeing that the sensitive personal information it contains is not disclosed is a major challenge. Centralized differential privacy rests on the assumption of a trusted third-party data collector, an assumption that does not necessarily hold in practice. Local differential privacy, proposed to address this, is a new privacy model with strong guarantees: it resists attackers with arbitrary background knowledge, prevents privacy attacks by untrusted third parties, and thus protects sensitive information more comprehensively. This paper introduces the principles and properties of local differential privacy, surveys and summarizes current research, and focuses on its main research topics: frequency estimation and mean estimation under local differential privacy, and the design of perturbation mechanisms that satisfy it. Building on an in-depth comparative analysis of existing techniques, the paper identifies future research challenges for local differential privacy.
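A minimal sketch of local differentially private frequency estimation via k-ary randomized response, one of the classic perturbation mechanisms such surveys cover. Each user reports the true value with probability p and a uniformly random other value otherwise; the collector inverts the perturbation to obtain unbiased frequency estimates. The domain and parameters are illustrative.

```python
# k-ary randomized response: local perturbation plus unbiased aggregation.
import math
import random
from collections import Counter

def perturb(value, domain, epsilon):
    """Client side: keep the true value with probability p, else randomize."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in domain if v != value])

def estimate_counts(reports, domain, epsilon, n):
    """Collector side: invert the perturbation to get unbiased counts."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)
    obs = Counter(reports)
    return {v: (obs[v] - n * q) / (p - q) for v in domain}

domain = ['A', 'B', 'C']
truth = ['A'] * 700 + ['B'] * 200 + ['C'] * 100
reports = [perturb(v, domain, 2.0) for v in truth]
print(estimate_counts(reports, domain, 2.0, len(truth)))  # roughly 700/200/100
```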

14.
Being able to release and exploit open data gathered in information systems is crucial for researchers, enterprises, and society at large. Yet these data must be anonymized before release to protect the privacy of the subjects to whom the records relate. Differential privacy is an anonymization privacy model that offers more robust guarantees than previous models such as k-anonymity and its extensions. However, it is often overlooked that the utility of differentially private outputs is quite limited, either because of the amount of noise that must be added or because utility is preserved only for a restricted type and/or a limited number of queries. In contrast, k-anonymity-like data releases make no assumptions about how the protected data will be used and thus do not restrict the number or type of analyses that can be performed. Recently, some authors have proposed mechanisms for general-purpose differentially private data releases. This paper extends such work with a specific focus on preserving the utility of the protected data. Our proposal builds on microaggregation-based anonymization, which is more flexible and utility-preserving than the alternative anonymization methods used in the literature, in order to reduce the amount of noise needed to satisfy differential privacy. In this way, we improve the utility of differentially private data releases. Moreover, the noise reduction we achieve does not depend on the size of the dataset, but only on the number of attributes to be protected, which is more desirable behavior for large datasets. The utility benefits of our proposal are empirically evaluated and compared with related work for several datasets and metrics.
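A sketch of the underlying idea: group records into clusters of at least k, replace them with cluster centroids, and then add Laplace noise whose scale reflects the reduced influence of any single record. The naive sorting-based clustering and the sensitivity bound are illustrative stand-ins; the paper uses a proper microaggregation algorithm and a rigorous sensitivity analysis.

```python
# Microaggregation (groups of >= k, centroid replacement) before the
# Laplace mechanism, to cut the noise needed for differential privacy.
import numpy as np

def microaggregate(values, k):
    """Sort, split into groups of k, replace each group by its centroid.
    (Remainder handling for len % k != 0 is omitted in this sketch.)"""
    order = np.argsort(values)
    out = np.empty_like(values, dtype=float)
    for start in range(0, len(values), k):
        idx = order[start:start + k]
        out[idx] = values[idx].mean()
    return out

def dp_release(values, k, epsilon, value_range):
    agg = microaggregate(np.asarray(values, dtype=float), k)
    # Centroids of >= k elements reduce one record's influence on the
    # output by roughly a factor of k (assumed bound for this sketch).
    sensitivity = value_range / k
    rng = np.random.default_rng()
    return agg + rng.laplace(0.0, sensitivity / epsilon, size=agg.shape)

print(dp_release([21, 23, 25, 40, 42, 44, 60, 62, 64],
                 k=3, epsilon=1.0, value_range=64 - 21))
```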

15.
Due to the advantages of pay-on-demand, expand-on-demand, and high availability, cloud databases (CloudDB) have been widely used in information systems. However, since a CloudDB is distributed on an untrusted cloud side, how to effectively protect the massive amount of private information in the CloudDB is an important problem. Although traditional security strategies (such as identity authentication and access control) can prevent illegal users from accessing unauthorized data, they cannot prevent internal users on the cloud side from accessing and exposing personal private information. In this paper, we propose a client-based approach to protecting personal privacy in a CloudDB. In this approach, private data is encrypted with a traditional encryption algorithm before being stored on the cloud side, ensuring its security. To execute various query operations over the encrypted data efficiently, the encrypted data is also augmented with an additional feature index, so that as much of each query operation as possible can be processed on the cloud side without decrypting the data. To this end, we explore how the feature index of private data is constructed, and how a query operation over private data is transformed into a new query operation over the index data so that it can be executed correctly on the cloud side. The effectiveness of the approach is demonstrated by theoretical analysis and experimental evaluation. The results show that the approach performs well in terms of security, usability, and efficiency, and is thus effective in protecting personal privacy in a CloudDB.
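A minimal sketch of the encrypt-plus-feature-index pattern: each value is encrypted before upload but stored alongside a coarse bucket index, so the untrusted server can pre-filter range queries on the index alone, and the client decrypts and refines only the candidates. The Fernet cipher and bucket width are illustrative choices, not the paper's construction.

```python
# Bucketized feature index over an encrypted column.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

def bucket(value, width=10):
    return value // width               # coarse index leaks only the bucket

def store(rows):
    """Upload (bucket_index, ciphertext) pairs to the cloud."""
    return [(bucket(v), cipher.encrypt(str(v).encode())) for v in rows]

def range_query(table, lo, hi):
    # Server side: filter by index only, without decrypting anything.
    candidates = [ct for b, ct in table if bucket(lo) <= b <= bucket(hi)]
    # Client side: decrypt candidates and apply the exact predicate.
    return sorted(v for v in (int(cipher.decrypt(ct)) for ct in candidates)
                  if lo <= v <= hi)

table = store([3, 17, 25, 31, 58, 64])
print(range_query(table, 20, 60))       # [25, 31, 58]
```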

16.
Research on outsourced database services enforcing both data privacy and user privacy
Protecting both data privacy and user privacy in outsourced databases is a new challenge facing modern outsourced database services. Existing techniques consider either data privacy protection or user privacy protection alone and thus cannot satisfy the security requirements of outsourced databases. To address this shortcoming, this paper proposes an outsourced database service model that enforces data privacy and user privacy simultaneously. Using attribute decomposition and partial attribute encryption, it achieves a minimal encrypted attribute decomposition of the outsourced data through an approximation algorithm combined with automatic quasi-identifier detection; it also applies cryptography to an auxiliary random server protocol to protect user privacy during database access. Theoretical analysis and experimental results show that the model provides effective data privacy protection and query processing, with favorable computational complexity for user privacy protection.
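A sketch of the attribute-decomposition idea: split columns into fragments so that no fragment contains a complete quasi-identifier set, falling back to encryption for attributes that cannot be separated safely. The greedy two-fragment split and the toy quasi-identifier set are illustrative assumptions, not the paper's approximation algorithm.

```python
# Greedy attribute decomposition with encryption as a fallback.
def decompose(attributes, quasi_identifier_sets):
    """Assign attributes to two fragments; an attribute joins a fragment
    only if it does not complete any quasi-identifier set there."""
    fragments = [set(), set()]
    encrypted = set()
    for attr in attributes:
        placed = False
        for frag in fragments:
            candidate = frag | {attr}
            if not any(q <= candidate for q in quasi_identifier_sets):
                frag.add(attr)
                placed = True
                break
        if not placed:
            encrypted.add(attr)    # fall back to encrypting the attribute
    return fragments, encrypted

attrs = ['zip', 'birthdate', 'gender', 'diagnosis']
qids = [{'zip', 'birthdate', 'gender'}]
print(decompose(attrs, qids))
```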

17.
Private predictions on hidden Markov models
Hidden Markov models (HMMs) are widely used in practice to make predictions, and are increasingly popular as components of prediction systems in finance, marketing, bioinformatics, speech recognition, signal processing, and so on. However, traditional HMMs do not allow users and model owners to generate predictions without disclosing their private information to each other. To address the growing need for privacy, this work identifies and studies the private prediction problem, illustrated by the following scenario: Bob has a private HMM, while Alice has a private input, and she wants to use Bob's model to make a prediction based on her input. However, Alice does not want to disclose her input to Bob, while Bob wants to prevent Alice from deriving information about his model. How can Alice and Bob perform HMM-based predictions without violating their privacy? We propose privacy-preserving protocols that produce predictions on HMMs without greatly exposing Bob's and Alice's private information, and we analyze our schemes in terms of accuracy, privacy, and performance. Since these are conflicting goals, some degradation in accuracy or performance is expected as the price of privacy. However, our schemes make it possible for Bob and Alice to produce the same predictions efficiently while preserving their privacy.
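For reference, the quantity the two parties jointly compute is the standard HMM forward pass, shown below in the clear. In the paper's setting, the same recursion would be evaluated under privacy-preserving protocols so that Alice's observations and Bob's parameters stay hidden from each other; the toy model parameters here are illustrative.

```python
# Plaintext HMM forward algorithm: P(observations | model).
import numpy as np

def forward_probability(pi, A, B, obs):
    """Forward recursion. pi: initial state distribution,
    A: state transition matrix, B: emission matrix."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.5],
              [0.1, 0.9]])
print(forward_probability(pi, A, B, obs=[0, 1, 1]))
```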

18.
Nowadays, personal information is collected, stored, and managed through web applications and services. Companies are interested in keeping such information private because of regulatory requirements and customers' privacy concerns. Furthermore, a company's reputation can depend on privacy protection: the more a company protects the privacy of its customers, the more credibility it gains. This paper proposes an integrated approach that relies on models and design tools to help in the analysis, design, and development of web applications and services with privacy concerns. Using the approach, these applications can be developed consistently with their privacy policies, enforcing those policies and protecting personal information from different sources of privacy violation. The approach comprises a conceptual model, a reference architecture, and a Unified Modeling Language (UML) profile, i.e., an extension of UML for expressing privacy protection. The idea is to systematize privacy concepts in the scope of web applications and services, organizing the privacy domain knowledge and providing the features and functionalities that must be addressed to protect users' privacy in the design and development of web applications. Validation was performed by analyzing the approach's ability to model privacy policies from real web applications and by applying it to a simple example of an online bookstore. The results show that privacy protection can be implemented in a model-based approach, bringing value to stakeholders and contributing to an improved process for designing web applications in the privacy domain.

19.
This paper proposes a privacy-protection mechanism for social networks based on information obfuscation. Its principle is to obfuscate, rather than encrypt, the private information across the whole social network, so that the information to be protected diffuses through the network in a ring structure. The mechanism works in a decentralized way, with users cooperating to protect one another's private information. Using Renren (人人网) as the platform, the core of the mechanism was implemented with Firefox's extension-development facilities, demonstrating its feasibility and usability. The mechanism safeguards the interests of all parties involved: the users whose privacy is to be protected, advertisers, authorized users, and third-party applications.

20.
The information age has brought with it the promise of unprecedented economic growth based on the efficiencies made possible by new technology. This same efficiency has left society with less and less time to adapt to technological progress. Perhaps the greatest cost of this progress is the threat to privacy we all face from the unconstrained exchange of our personal information. In response to this threat, the World Wide Web Consortium has introduced the Platform for Privacy Preferences (P3P) to allow sites to express policies in machine-readable form and to expose these policies to site visitors [Cranor et al., 8]. However, today P3P does not protect the privacy of individuals, nor does its implementation empower communities or groups to negotiate and establish standards of behavior. It is only through such negotiation and feedback that new social contracts can evolve. We propose a privacy architecture, the Social Contract Core (SCC), designed to use technology to facilitate this feedback and so speed the establishment of the new social contracts needed to protect private data. The goal of SCC is to empower communities, speed the socialization of new technology, and encourage rapid access to, and exchange of, information. Addressing these issues is essential, we feel, to both liberty and economic prosperity in the information age [Kaufman et al., 17].
