Similar Literature (20 results)
1.
In recent years, auctions have become a very popular price-discovery mechanism on the Internet. The common auction formats are typically centralized in nature, while the peer-to-peer paradigm demands gearing auctions up for decentralized infrastructures. In this context, this paper proposes a distributed mechanism for ascending second-price auctions that relies on standard cryptographic algorithms. The protocol preserves the privacy of the winning bidder's true valuation. It makes use of a large number of auctioneers divided into several groups: a bidder creates an encrypted chain of monotonically increasing bidding steps, where each bidding step can be decrypted by a different auctioneer group. This considerably reduces the attack and manipulation possibilities of malicious auctioneers. In addition, this secure approach does not require bidders to be online unless they are submitting their bid chain to the auctioneers.
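As a rough illustration of the bid-chain idea (not the paper's actual protocol), the following Python sketch encrypts each bidding step under a key held by a different auctioneer group, using the `cryptography` package's Fernet primitive. The use of symmetric group keys, the step size, and all names are simplifying assumptions; a real protocol would encrypt to the groups' public keys.

```python
# Minimal sketch of an encrypted chain of increasing bidding steps.
# Assumption: each auctioneer group holds one symmetric Fernet key;
# the actual protocol would use the groups' public keys instead.
from cryptography.fernet import Fernet

NUM_GROUPS = 5
group_keys = [Fernet.generate_key() for _ in range(NUM_GROUPS)]  # held by the groups

def build_bid_chain(valuation: int, step: int = 10) -> list[bytes]:
    """Bidder side: encrypt one link of the chain for each auctioneer group.

    Steps above the bidder's true valuation are replaced by a dummy token, so
    group i only learns whether the bidder is still in at price (i+1)*step.
    """
    chain = []
    for i, key in enumerate(group_keys):
        price = (i + 1) * step
        token = b"BID" if price <= valuation else b"OUT"
        chain.append(Fernet(key).encrypt(token))
    return chain

def open_step(chain: list[bytes], group_index: int) -> bool:
    """Auctioneer side: a group can decrypt only its own link of the chain."""
    plaintext = Fernet(group_keys[group_index]).decrypt(chain[group_index])
    return plaintext == b"BID"

chain = build_bid_chain(valuation=35)
print([open_step(chain, g) for g in range(NUM_GROUPS)])  # [True, True, True, False, False]
```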

2.
A private information retrieval scheme enables a user to privately recover an item from a publicly accessible database. In this paper we present a private information retrieval scheme for k replicated databases. The scheme is information-theoretically secure against coalitions of databases of size t ≤ k-1, and it improves the communication complexity of the scheme described in [Ishai and Kushilevitz (1999) Proc 31st Annu ACM Symp Theory Comput, pp 79–88] for such coalitions.
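The k-database construction itself is more involved, but the classic two-server XOR trick conveys the flavour of information-theoretic PIR. The sketch below is that textbook baseline, not the scheme of this paper; the bit database and function names are illustrative.

```python
# Classic 2-server information-theoretic PIR over a bit database
# (textbook baseline only, not the k-database scheme of the paper).
import secrets

def pir_query(n: int, index: int) -> tuple[set[int], set[int]]:
    """Client: send a uniformly random subset S1 to server 1 and
    S1 symmetric-difference {index} to server 2. Each set alone is
    uniformly distributed, so neither server learns the queried index."""
    s1 = {i for i in range(n) if secrets.randbelow(2)}
    s2 = s1 ^ {index}
    return s1, s2

def pir_answer(db: list[int], subset: set[int]) -> int:
    """Server: XOR of the requested positions."""
    ans = 0
    for i in subset:
        ans ^= db[i]
    return ans

db = [1, 0, 1, 1, 0, 1, 0, 0]
s1, s2 = pir_query(len(db), index=3)
print(pir_answer(db, s1) ^ pir_answer(db, s2))  # equals db[3] == 1
```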

3.
Random-data perturbation techniques and privacy-preserving data mining
Privacy is becoming an increasingly important issue in many data-mining applications, which has triggered the development of many privacy-preserving data-mining techniques. A large fraction of them use randomized data-distortion techniques to mask the data and thereby preserve the privacy of sensitive values; this methodology attempts to hide the sensitive data by randomly modifying the data values, often using additive noise. This paper questions the utility of such random-value distortion for privacy preservation. It first notes that random matrices have predictable structure in the spectral domain, and then develops a random-matrix-based spectral-filtering technique to retrieve original data from a dataset distorted by adding random values. The proposed method works by comparing the spectrum generated from the observed data with that of random matrices. The paper presents the theoretical foundation and extensive experimental results to demonstrate that, in many cases, random-data distortion preserves very little privacy. The analytical framework also points out several possible avenues for the development of new privacy-preserving data-mining techniques, for example algorithms that explicitly guard against privacy breaches through linear transformations, or that exploit multiplicative and colored noise for preserving privacy in data-mining applications.
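To make the spectral-filtering idea concrete, here is a minimal NumPy sketch in the spirit of the technique described above: eigenvalues of the perturbed data's covariance that fall within the range predicted by random-matrix theory for pure noise are discarded, and the data are projected onto the remaining "signal" eigenvectors. Treating the noise variance as known, the specific bound, and the synthetic low-rank data are all simplifying assumptions.

```python
# Sketch of spectral filtering of additively perturbed data (illustrative;
# the known noise variance and the eigenvalue bound are simplifying assumptions).
import numpy as np

rng = np.random.default_rng(0)
n, m = 5000, 10                      # records, attributes
true = rng.standard_normal((n, 2)) @ rng.standard_normal((2, m))  # low-rank "real" data
sigma = 1.0
perturbed = true + sigma * rng.standard_normal((n, m))            # additive noise

# Eigen-decomposition of the sample covariance of the perturbed data.
cov = np.cov(perturbed, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Random-matrix bound: eigenvalues of a pure-noise covariance stay below
# sigma^2 * (1 + sqrt(m/n))^2, so larger eigenvalues are attributed to signal.
lam_max = sigma**2 * (1 + np.sqrt(m / n))**2
signal = eigvecs[:, eigvals > lam_max]

# Project the perturbed data onto the estimated signal subspace.
estimate = perturbed @ signal @ signal.T
print("distance to true data:",
      np.linalg.norm(true - perturbed), "->", np.linalg.norm(true - estimate))
```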

4.
In this paper we explore how partial-order reduction can make the task of verifying security protocols more efficient; these reduction techniques have been implemented in our tool Brutus. Partial-order reductions have proved very useful in the domain of model checking reactive systems, but they are not directly applicable in our context because of additional complications caused by tracking the knowledge of the various agents. We present partial-order reductions in the context of verifying security protocols and prove their correctness. Experimental results demonstrating the effectiveness of this reduction technique are also presented.

5.
We survey the paradigms, approaches and techniques used to conceptualize, define and provide solutions to natural cryptographic problems. We start by presenting some of the central tools (e.g., computational difficulty, pseudorandomness, and zero-knowledge proofs), and next turn to the treatment of encryption and signature schemes. We conclude with an extensive treatment of secure cryptographic protocols both when executed in a stand-alone manner and when many sessions of various protocols are concurrently executed and controlled by an adversary. The survey is intended for researchers in distributed computing, and assumes no prior familiarity with cryptography.

6.
A new approach to describing communication protocols is introduced. In the style of a formal language, the protocol is considered as the set of all legal sequences of symbols that can be exchanged by the communicating processes. Although context-free grammars cannot adequately describe such sequences, it is shown that attribute grammars may be used. Examples are given which show that common protocol features such as interleaving, windowing and flow control can be described by attribute grammars. It is shown how deadlock-proneness of a protocol can be formalised as a property of its attribute grammar specification, and the undecidability of deadlock-proneness for arbitrary grammars is proved. An algorithm is given for determining whether a protocol is deadlock-prone in the decidable case. A method of automatically implementing protocols from their specifications is described. The implementation takes the form of a pair of communicating attributed pushdown automata. These are based on LR(0) parsers, with attribute evaluation being performed in parallel with the parse; attribute values are used to help direct the parse. Consideration is also given to the handling of errors.
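A full attributed pushdown automaton is beyond the scope of an abstract, but the following small sketch (our own illustration, not the paper's formalism) shows the key point: a sequence of SEND/ACK symbols is legal only if a counter attribute, the flow-control window, is respected, which a plain context-free grammar cannot track. The window size and symbol names are illustrative assumptions.

```python
# Illustration of why an attribute is needed: legality of a SEND/ACK sequence
# depends on a counter (the flow-control window), which a context-free grammar
# alone cannot express. Names and the window size are illustrative.
WINDOW = 3

def legal(sequence: list[str], window: int = WINDOW) -> bool:
    """Evaluate the 'outstanding messages' attribute while scanning the sequence."""
    outstanding = 0                      # synthesized attribute of the prefix read so far
    for symbol in sequence:
        if symbol == "SEND":
            outstanding += 1
            if outstanding > window:     # attribute condition: window exceeded
                return False
        elif symbol == "ACK":
            if outstanding == 0:         # ACK with nothing outstanding
                return False
            outstanding -= 1
        else:
            return False                 # symbol not in the protocol alphabet
    return outstanding == 0              # all messages acknowledged at the end

print(legal(["SEND", "SEND", "ACK", "SEND", "ACK", "ACK"]))   # True
print(legal(["SEND", "SEND", "SEND", "SEND"]))                # False: window exceeded
```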

7.
The advent of Blockchain and smart contracts is empowering many technologies and systems to automate commerce and to facilitate the exchange, tracking and provision of goods, data and services in a reliable and auditable way. Crowdsensing systems are one type of system that has received a lot of attention in the past few years; in crowdsensing systems, consumer devices such as mobile phones and Internet of Things devices are used to deploy wide-scale sensor networks. We identify some of the major security and privacy issues associated with the development of crowdsensing systems based on smart contracts and Blockchain, and we explore possible solutions that can address the major security concerns with these systems.

8.
Stepwise refinement is a crucial conceptual tool for system development, encouraging program construction via a number of separate correctness-preserving stages which ideally can be understood in isolation. A crucial conceptual component of security is an adversary's ignorance of concealed information. We suggest a novel method of combining these two ideas. Our suggestion is based on a mathematical definition of “ignorance-preserving” refinement that extends classical refinement by limiting an adversary's access to concealed information: moving from specification to implementation should never increase that access. The novelty is the way we achieve this in the context of sequential programs. Specifically, we give an operational model (and a detailed justification for it), a basic sequential programming language and its operational semantics in that model, a “logic of ignorance” interpreted over the same model, then a program-logical semantics bringing those together, and finally we use the logic to establish, via refinement, the correctness of a real (though small) protocol: Rivest's Oblivious Transfer. A previous report treated Chaum's Dining Cryptographers similarly. In passing, we solve the Refinement Paradox for sequential programs.
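Since the protocol verified in the paper is Rivest's oblivious transfer, a plain-Python sketch of the commonly cited trusted-initializer variant may help fix ideas. This shows only the message flow on single bits, not the paper's refinement-based derivation; the trusted initializer, single-bit messages, and function names are simplifications.

```python
# Sketch of Rivest's trusted-initializer oblivious transfer on single bits
# (message flow only; the refinement-based treatment of the paper is not shown).
import secrets

# Trusted initializer: hands Alice two random pad bits and Bob one of them.
r = [secrets.randbelow(2), secrets.randbelow(2)]
d = secrets.randbelow(2)          # Bob is told d and r[d]
r_d = r[d]

def bob_request(c: int) -> int:
    """Bob wants m_c; his message e reveals nothing about c because d is uniform."""
    return c ^ d

def alice_reply(m0: int, m1: int, e: int) -> tuple[int, int]:
    """Alice masks each message with the pad selected by e."""
    return m0 ^ r[e], m1 ^ r[1 ^ e]

def bob_recover(y: tuple[int, int], c: int) -> int:
    """Bob unmasks only the message he asked for, using r_d."""
    return y[c] ^ r_d

m0, m1, c = 1, 0, 1
e = bob_request(c)
y = alice_reply(m0, m1, e)
print(bob_recover(y, c) == (m0, m1)[c])   # True: Bob learns m_c and nothing else
```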

9.
The research done by the Tenet Group in multimedia networking has reached a point where it may be useful to reflect on the significance of its results for the current debate on how integrated-services internetworks should be designed. Such reflections constitute the main subject of this paper. The principles of the work and the conclusions reached so far by the Tenet researchers are discussed in the light of the conflict between the two major technologies being proposed to build future information infrastructures: namely, the Internet and the ATM technologies. The Tenet approach suggests one feasible way for resolving the conflict to the advantage of all the users of those infrastructures. This paper discusses various fundamental aspects of integrated-services network design: the choice of the service model, the type of charging policy to be adopted, and the selection of a suitable architecture.

10.
We present the design of ObjectGlobe, a distributed and open query processor for Internet data sources. Today, data is published on the Internet via Web servers which have, if any, only very localized query-processing capabilities. The goal of the ObjectGlobe project is to establish an open marketplace in which data and query-processing capabilities can be distributed and used by any kind of Internet application. Furthermore, ObjectGlobe integrates cycle providers (i.e., machines) which carry out query-processing operators. The overall aim is to make it possible to execute a query with – in principle – unrelated query operators, cycle providers, and data sources. Such an infrastructure can serve as enabling technology for scalable e-commerce applications, e.g., B2B and B2C marketplaces, that need to integrate data and data-processing operations of a large number of participants. One of the main challenges in the design of such an open system is to ensure privacy and security. We discuss the ObjectGlobe security requirements, show how basic components such as the optimizer and runtime system need to be extended, and present the results of performance experiments that assess the additional cost of secure distributed query processing. Another challenge is quality-of-service management, so that users can constrain the costs and running times of their queries.

11.
For several years we have been in charge of a course on specification and validation of concurrent and reactive systems. At the end of this course, the students must carry out a project dealing with a model railway. They have to specify the railway, validate their model, and finally translate it into a program controlling the model railway with up to five trains. In this paper, after presenting the project, we describe how the railway is specified and checked, step by step, by the students. We also explain how the analysis results lead to a policy for the switch control. Finally, we include some remarks about the implementation.

12.
Geographic data are useful for a large set of applications, such as urban planning and environmental control. These data are, however, very expensive to acquire and maintain, and their use is often restricted by a lack of dissemination mechanisms. Digital libraries are a good approach to increasing data availability and therefore reducing costs, since they provide efficient storage and access to large volumes of data. One major drawback of this approach is that it requires facilities that allow a large and heterogeneous community of users to search and interact with these geographic libraries. We present a solution to this problem, based on a framework that allows the design and construction of customizable user interfaces for applications based on Geographic Digital Libraries (GDL). This framework relies on two main concepts: a geographic user-interface architecture and a geographic digital library model.

13.
One key component in providing effective image data management support is an expressive query language/interface. In this paper, we describe the EXQUISI system that we have developed. A main contribution of EXQUISI is its ability to allow a user to express subtle differences that may exist between images to be retrieved and other images that are similar. In particular, it allows the user to incorporate ambiguities and imprecisions in specifying his/her query. Another important aspect of EXQUISI is the provision of a reformulation language by which the user can ask “like this in what” queries, by specifying which parts of a returned image the user wants to include and exclude.

14.
A multimedia application involves information in the form of video, images, audio, text and graphics that needs to be stored, retrieved and manipulated in large databases. In this paper, we propose an object-oriented database schema that supports multimedia documents and their temporal, spatial and logical structures. We present a document example and show how the schema can address all the structures described. We also present a multimedia query specification language that can be used to describe a multimedia content portion to be retrieved from the database. The language provides means by which the user can specify the information on the media as well as the temporal and spatial relationships among these media.

15.
The OSAM*.KBMS is a knowledge-base management system, or the so-called next-generation database management system, for non-traditional data/knowledge-intensive applications. In order to define, query, and manipulate a knowledge base, as well as to write codes to implement any application system, we have developed an object-oriented knowledge-base programming language called K to serve as the high-level interface of OSAM*.KBMS. This paper presents the design of K, its implementation, and its supporting KBMS developed at the Database Systems Research and Development Center of the University of Florida.

16.
17.
Within cooperative learning great emphasis is placed on the benefits of “two heads being greater than one”. However, further examination of this adage reveals that the value of learning groups can often be overstated and taken for granted for different types of problems. When groups are required to solve ill-defined and complex problems under real world constraints, different socio-cognitive factors (e.g., metacognition, collective induction, and perceptual experience) are expected to determine the extent to which cooperative learning is successful. Another facet of cooperative learning, the extent to which groups enhance the use of knowledge from one situation to another, is frequently ignored in determining the value of cooperative learning. This paper examines the role and functions of cooperative learning groups in contrast to individual learning conditions, for both an acquisition and transfer task. Results for acquisition show groups perform better overall than individuals by solving more elements of the Jasper problem as measured by their overall score in problem space analysis. For transfer, individuals do better overall than groups in the overall amount of problem elements transferred from Jasper. This paradox is explained by closer examination of the data analysis. Groups spend more time engaged with each other in metacognitive activities (during acquisition) whereas individuals spend more time using the computer to explore details of the perceptually based Jasper macrocontext. Hence, results show that individuals increase their perceptual learning during acquisition whereas groups enhance their metacognitive strategies. These investments show different pay-offs for the transfer problem. Individuals transfer more overall problem elements (as they explored the context more) but problem solvers who had the benefit of metacognition in a learning group did better at solving the most complex elements of the transfer problem. Results also show that collective induction groups (ones that freely share) – in comparison to groups composed of dominant members – enhance certain kinds of transfer problem solving (e.g., generating subgoals). The results are portrayed as the active interplay of socio-cognitive elements that impact the outcomes (and therein success) of cooperative learning.

18.
The use of hand gestures provides an attractive means of interacting naturally with a computer-generated display. Using one or more video cameras, the hand movements can potentially be interpreted as meaningful gestures. One key problem in building such an interface without a restricted setup is the ability to localize and track the human arm robustly in video sequences. This paper proposes a multiple-cue localization scheme combined with a tracking framework to reliably track the dynamics of the human arm in unconstrained environments. The localization scheme integrates the multiple cues of motion, shape, and color for locating a set of key image features. Using constraint fusion, these features are tracked by a modified extended Kalman filter that exploits the articulated structure of the human arm. Moreover, an interaction scheme between tracking and localization is used for improving the estimation process while reducing the computational requirements. The performance of the localization/tracking framework is validated with the help of extensive experiments and simulations. These experiments include tracking with calibrated stereo camera and uncalibrated broadcast video.
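The paper's modified extended Kalman filter for the articulated arm cannot be reproduced from the abstract, but the basic predict/update cycle that underlies any such tracker can be sketched. The 1-D constant-velocity model, all matrices, and the measurement sequence below are illustrative assumptions, not the paper's arm model.

```python
# Basic predict/update cycle of a 1-D constant-velocity Kalman filter
# (underlying machinery only; matrices are illustrative assumptions).
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: position, velocity
H = np.array([[1.0, 0.0]])                # we only measure position
Q = 0.01 * np.eye(2)                      # process noise covariance
R = np.array([[0.25]])                    # measurement noise covariance

x = np.zeros((2, 1))                      # state estimate
P = np.eye(2)                             # estimate covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    y = np.array([[z]]) - H @ x           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [1.0, 2.1, 2.9, 4.2, 5.0]:       # noisy position measurements
    x, P = kalman_step(x, P, z)
print("estimated position/velocity:", x.ravel())
```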

19.
One of the challenges in the design of a distributed multimedia system is devising suitable specification models for various schemas in different levels of the system. Another important research issue is the integration and synchronization of heterogeneous multimedia objects. In this paper, we present our models for multimedia schemas and transformation algorithms. They transform high-level multimedia objects into schemas that can be used to support the presentation and communication of the multimedia objects. A key module in the system is the Object Exchange Manager (OEM). In this paper, we present the design and implementation of the OEM module, and discuss in detail the interaction between the OEM and other modules in a distributed multimedia system.

20.
The Internet of Things (IoT) provides anywhere, anything, anytime connections, in which user privacy is vulnerable and authentication methods that favor policy over attributes are essential. Thus, a signature scheme that considers user privacy and implements an attribute policy is required. Emerging attribute-based signature (ABS) schemes allow a requester of a resource to generate a signature with attributes satisfying the policy without leaking more information. However, few existing approaches simultaneously achieve an expressive policy and security under the standard Diffie–Hellman assumption. Here we describe ePASS, a novel ABS scheme that uses an attribute tree and expresses any policy consisting of AND, OR, and threshold gates, with security based on the computational Diffie–Hellman problem. Users cannot forge signatures with attributes they do not possess, and a signature provides assurance that only a user whose attributes satisfy the policy can have endorsed the message, resulting in unforgeability. At the same time, legitimate signers remain anonymous and are indistinguishable among all users whose attributes satisfy the policy, which provides attribute privacy for the signer. Compared to existing schemes, our approach delivers better performance by reducing the computational cost and signature size.
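The pairing-based signature itself cannot be sketched from the abstract, but the kind of attribute-tree policy ePASS evaluates can be. In the sketch below each internal node is a threshold gate (AND is n-of-n, OR is 1-of-n); the tree shape, attribute names, and satisfaction check are illustrative assumptions, not the scheme's cryptographic construction.

```python
# Sketch of an attribute-tree policy with AND/OR/threshold gates (policy
# satisfaction only, not the pairing-based signature of ePASS itself).
from dataclasses import dataclass, field

@dataclass
class Gate:
    threshold: int                                   # AND = len(children), OR = 1, otherwise k-of-n
    children: list = field(default_factory=list)     # Gates or attribute strings

def satisfied(node, attributes: set[str]) -> bool:
    if isinstance(node, str):                        # leaf: a single attribute
        return node in attributes
    met = sum(satisfied(child, attributes) for child in node.children)
    return met >= node.threshold

# Policy: (doctor AND cardiology) OR at least 2 of {nurse, senior, on-call}
policy = Gate(1, [
    Gate(2, ["doctor", "cardiology"]),
    Gate(2, ["nurse", "senior", "on-call"]),
])

print(satisfied(policy, {"doctor", "cardiology"}))   # True
print(satisfied(policy, {"nurse", "on-call"}))       # True
print(satisfied(policy, {"doctor", "nurse"}))        # False
```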
