Similar Documents: 20 results (retrieved in 31 ms)
1.
"Perfect zero-knowledge arguments" is a cryptographic primitive which allows one polynomial-time player to convince another polynomial-time player of the validity of an NP statement, without revealing any additional information (in the information-theoretic sense). Here the security achieved is on-line: in order to cheat and validate a false theorem, the prover must break a cryptographic assumption on-line during the conversation, while the verifier can never obtain any information, unconditionally. Despite their practical and theoretical importance, it was previously known how to implement zero-knowledge arguments only on the basis of specific algebraic assumptions. In this paper we show a general construction which can be based on any one-way permutation. The result is obtained by constructing an information-theoretically secure bit-commitment protocol. The protocol is efficient (both parties are polynomial time) and can be based on any one-way permutation. Received 5 September 1993 and revised 16 September 1997.

2.
In this paper we show that any two-party functionality can be securely computed in a constant number of rounds, where security is obtained against (polynomial-time) malicious adversaries that may arbitrarily deviate from the protocol specification. This is in contrast to Yao's constant-round protocol that ensures security only in the face of semi-honest adversaries, and to its malicious-adversary version that requires a polynomial number of rounds. In order to obtain our result, we present a constant-round protocol for secure coin-tossing of polynomially many coins (in parallel). We then show how this protocol can be used in conjunction with other existing constructions in order to obtain a constant-round protocol for securely computing any two-party functionality. On the subject of coin-tossing, we also present a constant-round almost perfect coin-tossing protocol, where by "almost perfect" we mean that the resulting coins are guaranteed to be statistically close to uniform (and not just pseudorandom).
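The parallel coin-tossing above builds on the classic commit-then-reveal coin flip. A minimal single-coin sketch, with a hash-based commitment standing in for the protocol's actual commitment scheme (a heuristic stand-in, not the paper's construction):

```python
# One commit-then-reveal coin flip, the basic step behind parallel
# coin-tossing. SHA-256 stands in for a commitment scheme here.
import hashlib, secrets

# Party A picks a bit and commits to it.
a = secrets.randbits(1)
r = secrets.token_bytes(16)                  # commitment randomness
com = hashlib.sha256(bytes([a]) + r).hexdigest()

# Party B, seeing only the commitment, replies with its own random bit.
b = secrets.randbits(1)

# A opens the commitment; B verifies the opening before accepting.
assert hashlib.sha256(bytes([a]) + r).hexdigest() == com
coin = a ^ b     # unbiased as long as either party chose its bit at random
```

The point of the commitment is that B's bit cannot depend on A's, and A cannot change its bit after seeing B's.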

3.
Maintaining Authenticated Communication in the Presence of Break-Ins   (total citations: 5; self-citations: 0; other citations: 0)
We study the problem of maintaining authenticated communication over untrusted communication channels, in a scenario where the communicating parties may be occasionally and repeatedly broken into for transient periods of time. Once a party is broken into, its cryptographic keys are exposed and perhaps modified. Yet, when aided by other parties it should be able to regain its ability to communicate in an authenticated way. We present a mathematical model for this highly adversarial setting, exhibiting salient properties and parameters, and then describe a practically appealing protocol for solving this problem. A key element in our solution is devising a proactive distributed signature (PDS) scheme in our model. The PDS schemes known in the literature are designed for a model where authenticated communication is available. We therefore show how these schemes can be modified to work in our model, where no such primitives are available a priori. In the process of devising these schemes, we also present a new definition of PDS schemes (and of distributed signature schemes in general). This definition may be of independent interest. Received 22 April 1998 and revised 2 April 1999.

4.
On the Importance of Eliminating Errors in Cryptographic Computations   (total citations: 2; self-citations: 0; other citations: 2)
We present a model for attacking various cryptographic schemes by taking advantage of random hardware faults. The model consists of a black box containing some cryptographic secret. The box interacts with the outside world by following a cryptographic protocol. The model supposes that from time to time the box is affected by a random hardware fault causing it to output incorrect values. For example, the hardware fault flips an internal register bit at some point during the computation. We show that for many digital signature and identification schemes these incorrect outputs completely expose the secrets stored in the box. We present the following results: (1) the secret signing key used in an implementation of RSA based on the Chinese Remainder Theorem (CRT) is completely exposed by a single erroneous RSA signature; (2) for non-CRT implementations of RSA the secret key is exposed given a large number (e.g., 1000) of erroneous signatures; (3) the secret key used in Fiat–Shamir identification is exposed after a small number (e.g., 10) of faulty executions of the protocol; and (4) the secret key used in Schnorr's identification protocol is exposed after a much larger number (e.g., 10,000) of faulty executions. Our estimates for the number of necessary faults are based on standard security parameters such as a 1024-bit modulus and a 2^-40 identification error probability. Our results demonstrate the importance of preventing errors in cryptographic computations. We conclude the paper with various methods for preventing these attacks. Received July 1997 and revised August 2000. Online publication 27 November 2000.
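Result (1), the gcd attack on RSA-CRT, is easy to reproduce. A toy sketch with small primes (a real modulus is at least 1024 bits; all variable names are ours):

```python
# A single faulty RSA-CRT signature exposes the private key via a gcd.
# Toy primes for illustration only; requires Python 3.8+ for pow(e, -1, phi).
from math import gcd

p, q = 1000003, 1000033
N = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent

m = 123456789 % N            # message representative

def crt(a_p, a_q):
    # Garner-style recombination of the two half signatures.
    q_inv = pow(q, -1, p)
    h = (q_inv * (a_p - a_q)) % p
    return a_q + q * h

# Correct CRT signature: combine s_p = m^d mod p and s_q = m^d mod q.
s_p = pow(m, d % (p - 1), p)
s_q = pow(m, d % (q - 1), q)
s = crt(s_p, s_q)
assert pow(s, e, N) == m     # sanity check: the honest signature verifies

# Fault: flip a bit in the mod-q half of the computation.
s_q_faulty = (s_q ^ 1) % q
s_faulty = crt(s_p, s_q_faulty)

# s_faulty^e equals m mod p but not mod q, so the gcd isolates p.
recovered = gcd((pow(s_faulty, e, N) - m) % N, N)
assert recovered == p
```

Because only the mod-q half is wrong, s_faulty^e - m is divisible by p but not by q, which is exactly why one faulty signature suffices.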

5.
The goal of secure multiparty computation is to transform a given protocol involving a trusted party into a protocol without need for the trusted party, by simulating the party among the players. Indeed, by the same means, one can simulate an arbitrary player in any given protocol. We formally define what it means to simulate a player by a multiparty protocol among a set of (new) players, and we derive the resilience of the new protocol as a function of the resiliences of the original protocol and the protocol used for the simulation. In contrast to all previous protocols that specify the tolerable adversaries by the number of corruptible players (a threshold), we consider general adversaries characterized by an adversary structure, a set of subsets of the player set, where the adversary may corrupt the players of one set in the structure. Recursively applying the simulation technique to standard threshold multiparty protocols results in protocols secure against general adversaries. The classical results in unconditional multiparty computation among a set of n players state that, in the passive model, any adversary that corrupts less than n/2 players can be tolerated, and in the active model, any adversary that corrupts less than n/3 players can be tolerated. Strictly generalizing these results we prove that, in the passive model, every function (more generally, every cooperation specified by involving a trusted party) can be computed securely with respect to a given adversary structure if and only if no two sets in the adversary structure cover the full set of players, and, in the active model, if and only if no three sets cover the full set of players. The complexities of the protocols are polynomial in the number of maximal adverse player sets in the adversary structure. Received 31 December 1997 and revised 26 February 1999.
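The feasibility conditions above are purely combinatorial and can be checked mechanically. A small sketch (function names are illustrative, not from the paper):

```python
# Passive security requires that no two sets in the adversary structure
# cover the full player set (the Q^2 condition); active security requires
# the same for any three sets (Q^3).
from itertools import combinations, combinations_with_replacement

def covers(players, structure, k):
    """True if some k (not necessarily distinct) adversary sets
    jointly cover the whole player set."""
    return any(set().union(*sel) >= players
               for sel in combinations_with_replacement(structure, k))

def passive_feasible(players, structure):
    return not covers(players, structure, 2)   # Q^2

def active_feasible(players, structure):
    return not covers(players, structure, 3)   # Q^3

players = {1, 2, 3, 4, 5, 6}
# Threshold structure: the adversary may corrupt any t of the 6 players.
threshold = lambda t: [set(c) for c in combinations(players, t)]

assert passive_feasible(players, threshold(2))       # t = 2 < n/2 = 3
assert not passive_feasible(players, threshold(3))   # two 3-sets cover all 6
assert active_feasible(players, threshold(1))        # t = 1 < n/3 = 2
assert not active_feasible(players, threshold(2))    # three 2-sets cover all 6
```

On threshold structures the check reproduces the classical n/2 and n/3 bounds, which is exactly the "strict generalization" claimed in the abstract.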

6.
Bit commitment using pseudorandomness   (total citations: 4; self-citations: 0; other citations: 4)
We show how a pseudorandom generator can provide a bit-commitment protocol. We also analyze the number of bits communicated when parties commit to many bits simultaneously, and show that the assumption of the existence of pseudorandom generators suffices to assure amortized O(1) bits of communication per bit commitment. Part of this work was done while the author was at the University of California at Berkeley. This research was supported by NSF Grant CCR 88-13632.
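The protocol can be sketched in a few lines, with SHA-256 in counter mode standing in for the pseudorandom generator G (a heuristic stand-in, not a proven PRG):

```python
# Sketch of a PRG-based bit commitment in the style described above:
# the receiver sends a random 3n-bit string R; to commit to b, the
# sender picks an n-bit seed s and sends G(s) (b = 0) or G(s) XOR R (b = 1).
import hashlib, secrets

N_BITS = 128                  # security parameter n (seed length)
OUT_BITS = 3 * N_BITS         # G stretches n bits to 3n bits

def G(seed: bytes) -> int:
    out = b""
    ctr = 0
    while len(out) * 8 < OUT_BITS:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return int.from_bytes(out, "big") >> (len(out) * 8 - OUT_BITS)

# Step 1: receiver sends a random 3n-bit string R.
R = secrets.randbits(OUT_BITS)

# Step 2: sender commits to bit b using seed s.
def commit(b: int, s: bytes) -> int:
    return G(s) if b == 0 else G(s) ^ R

# Step 3 (reveal): sender discloses (b, s); receiver recomputes.
def verify(c: int, b: int, s: bytes) -> bool:
    return commit(b, s) == c

s = secrets.token_bytes(N_BITS // 8)
c = commit(1, s)
assert verify(c, 1, s)
assert not verify(c, 0, s)    # opening as the other bit fails
```

The 3n-bit stretch is what makes equivocation unlikely: a cheating sender would need seeds s0, s1 with G(s0) XOR G(s1) = R, and only a tiny fraction of the 2^{3n} strings R admit such a pair.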

7.
This paper considers the channel assignment problem in a multi-channel MANET environment. We propose a scheme called GRID, by which a mobile host can easily determine which channel to use based on its current location. In fact, following the GSM style, our GRID spends no communication cost to allocate channels to mobile hosts since channel assignment is purely determined by hosts' physical locations. We show that this can improve the channel reuse ratio. We then propose a multi-channel MAC protocol, which integrates GRID. Our protocol is characterized by the following features: (i) it follows an 'on-demand' style to access the medium and thus a mobile host will occupy a channel only when necessary, (ii) the number of channels required is independent of the network topology, and (iii) no form of clock synchronization is required. On the other hand, most existing protocols assign channels to a host statically even if it has no intention to transmit [IEEE/ACM Trans. Networks 1995; 3(4):441–449; 1993; 1(6):668–677; IEEE J. Selected Areas Commun. 1999; 17(8):1345–1352], require a number of channels which is a function of the maximum connectivity [IEEE/ACM Trans. Networks 1995; 3(4):441–449; 1993; 1(6):668–677; Proceedings of IEEE MILCOM'97, November 1997; IEEE J. Selected Areas Commun. 1999; 17(8):1345–1352], or necessitate a clock synchronization among all hosts in the MANET [IEEE J. Selected Areas Commun. 1999; 17(8):1345–1352; Proceedings of IEEE INFOCOM'99, October 1999]. Through simulations, we demonstrate the advantages of our protocol. Copyright © 2005 John Wiley & Sons, Ltd.
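The abstract does not give GRID's exact cell-to-channel rule, so the following only illustrates the idea that the channel can be a pure function of position, with no allocation messages. The cell size, channel count, and 3x3 reuse pattern are assumptions for the sketch:

```python
# Location-based channel assignment in the spirit of GRID: a host maps
# its (x, y) position to a grid cell and derives the channel from the
# cell coordinates alone (parameters below are illustrative).
CELL = 100.0    # grid cell edge length in metres (assumed)

def channel_for(x: float, y: float) -> int:
    col = int(x // CELL)
    row = int(y // CELL)
    # 3x3 reuse cluster: adjacent cells always get distinct channels,
    # and the pattern repeats every three cells in each direction.
    return (row % 3) * 3 + (col % 3)

assert channel_for(10, 10) != channel_for(110, 10)   # adjacent cells differ
assert channel_for(10, 10) == channel_for(310, 10)   # reuse 3 cells away
```

Because every host evaluates the same function on its own GPS position, no coordination traffic is needed, which is the "no communication cost" property claimed above.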

8.
Mesh-based multicast routing protocols for mobile ad hoc networks (MANETs) build multiple paths from senders to receivers to deliver packets even in the presence of links breaking. This redundancy results in high reliability/robustness but may significantly increase packet overhead. This paper proposes a mesh-based multicast protocol, called centered protocol for unified multicasting through announcements (CPUMA), that achieves comparable reliability to existing mesh-based multicast protocols, however, with significantly less data overhead. In CPUMA, a distributed core-selection and maintenance algorithm is used to find the source-centric center of a shared mesh. We leverage data packets to center the core of each multicast group shared mesh instead of using GPS or any pre-assignment of cores to groups (the case of existing protocols). The proposed centering scheme allows reducing data packet overhead and creating forwarding paths toward the nearest mesh member instead of the core to reduce latency. We show, via simulations, that CPUMA outperforms existing multicast protocols in terms of data packet overhead and latency while maintaining a constant or better packet delivery ratio, at the cost of a small increase in control overhead in a few scenarios. Copyright © 2010 John Wiley & Sons, Ltd.

9.
Security and Composition of Multiparty Cryptographic Protocols   (total citations: 17; self-citations: 0; other citations: 17)
We present general definitions of security for multiparty cryptographic protocols, with focus on the task of evaluating a probabilistic function of the parties' inputs. We show that, with respect to these definitions, security is preserved under a natural composition operation. The definitions follow the general paradigm of known definitions; yet some substantial modifications and simplifications are introduced. The composition operation is the natural "subroutine substitution" operation, formalized by Micali and Rogaway. We consider several standard settings for multiparty protocols, including the cases of eavesdropping, Byzantine, nonadaptive and adaptive adversaries, as well as the information-theoretic and the computational models. In particular, in the computational model we provide the first definition of security of protocols that is shown to be preserved under composition. Received 4 June 1998 and revised 19 August 1999.

10.
Problems of secure communication and computation have been studied extensively in network models. Goldreich et al., Franklin and Yung, and Franklin and Wright have initiated the study of secure communication and secure computation in multirecipient (multicast) models. A "multicast channel" (such as Ethernet) enables one processor to send the same message, simultaneously and privately, to a fixed subset of processors. In their recent paper, Franklin and Wright have shown that if there are n multicast lines between a sender and a receiver and there are at most t malicious (Byzantine-style) processors, then the condition n > t is necessary and sufficient for achieving efficient probabilistically reliable and probabilistically private communication. They also showed that if n > ⌈3t/2⌉, then there is an efficient protocol to achieve probabilistically reliable and perfectly private communication. They left open the question whether there exists an efficient protocol to achieve probabilistically reliable and perfectly private communication when ⌈3t/2⌉ ≥ n > t. In this paper, by using a different authentication scheme, we answer this question affirmatively and study related problems. Received June 1999 and revised September 2000. Online publication 9 March 2001.

11.
This paper proposes a new medium access control (MAC) protocol for future wireless multimedia personal communication systems, denoted the hybrid and adaptive multiple access control (HAMAC) protocol. The HAMAC protocol integrates fixed-assignment TDMA, reservation-based protocols, and contention-based protocols into a single wireless network so as to simultaneously and efficiently support various classes of traffic such as constant-bit-rate (CBR), variable-bit-rate (VBR), and available-bit-rate (ABR) traffic. In particular, the HAMAC protocol uses a novel preservation-slot technique to overcome the packet contention overhead in packet reservation multiple access (PRMA)-like protocols, while keeping most isochronous service features of TDMA protocols to serve voice and CBR traffic streams. A preservation slot is a very short slot used to represent a CBR connection while the traffic on that connection is in a silent period with no meaningful data to transmit. Because the preservation slot is so short, it takes only a minimal portion of the bandwidth pre-allocated to the CBR connection, and the remaining bandwidth can be freed for other connections to use. When the CBR source becomes active again, the preservation slot is replaced by normal data slots without any reservation operation, extra delay, or significant bandwidth loss. Consequently, the guaranteed service and simplified signaling features of TDMA protocols, together with the adaptive bandwidth allocation features of PRMA-like protocols, are both realized in the HAMAC protocol. We have analyzed the performance of the HAMAC protocol using extensive simulations. The results show that the HAMAC protocol can achieve very low loss rates for various multimedia traffic with stringent quality-of-service (QoS) requirements and outperforms state-of-the-art PRMA-like protocols. As a result, the HAMAC protocol appears to be a good candidate for future-generation multimedia personal communication systems.

12.

Every year thousands of urban and industrial fires occur, destroying infrastructure and buildings and costing lives. One reason is the delayed transmission of information to the fire station and the nearby hospitals for ambulance service, since that transmission depends on an observer at the fire location and on the cellular network. This paper proposes an automated routing protocol for the urban vehicular ad-hoc network that sends information from the fire location to the nearest fire stations and hospitals with optimum service time. The transmission involves the Road Side Unit (RSU) at the junction and the vehicles present on the transmission path. The route used to transmit faulty-vehicle information from the RSU to the required faulty vehicle is selected by a parameter called the path value, computed from attributes such as expected end-to-end (E2E) delay, the shortest distance to the destination, the density of vehicles between the junctions, and attenuation. From the current junction, the next junction is chosen as the one with the minimum path value. The proposed routing protocol is evaluated on performance parameters such as E2E delay, total service time (TST), number of network fragments or network gaps, number of hops, and attenuation along the propagation path. The routing algorithm is implemented in OMNeT++ and SUMO, and the results are compared with three existing VANET protocols (GSR, A-STAR, and ARP) in terms of E2E delay, number of hops, number of vehicular gaps, and TST.
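The abstract names the attributes that enter the path value but not the formula, so the following is only a hypothetical weighted-sum sketch; the weights, units, and the inverse-density term are our assumptions:

```python
# Hypothetical "path value" for next-junction selection: lower delay,
# distance, and attenuation are better, and higher vehicle density is
# better (fewer network gaps), so density enters inversely. The RSU
# picks the candidate junction with the minimum path value.
def path_value(e2e_delay_ms, distance_m, density, attenuation_db,
               w=(0.4, 0.3, 0.2, 0.1)):      # weights are assumptions
    features = (e2e_delay_ms,
                distance_m,
                1.0 / max(density, 1),       # sparse roads are penalised
                attenuation_db)
    return sum(wi * f for wi, f in zip(w, features))

candidates = {"J1": path_value(20, 500, 12, 6),
              "J2": path_value(35, 420, 3, 9)}
best = min(candidates, key=candidates.get)    # next junction to forward to
```

In a real deployment the features would need normalisation to comparable scales before weighting; the sketch omits that for brevity.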

13.
In the setting of secure computation, a set of parties wish to securely compute some function of their inputs, in the presence of an adversary. The adversary in question may be static (meaning that it controls a predetermined subset of the parties) or adaptive (meaning that it can choose to corrupt parties during the protocol execution and based on what it sees). In this paper, we study two fundamental questions relating to the basic zero-knowledge and oblivious transfer protocol problems:
•  Adaptive zero-knowledge proofs: We ask whether it is possible to construct adaptive zero-knowledge proofs (with unconditional soundness) for all of NP. Beaver (STOC 1996) showed that known zero-knowledge proofs are not adaptively secure, and in addition showed how to construct zero-knowledge arguments (with computational soundness).

14.
Goldreich and Lindell (CRYPTO '01) recently presented the first protocol for password-authenticated key exchange in the standard model (with no common reference string or set-up assumptions other than the shared password). However, their protocol uses several heavy tools and has a complicated analysis. We present a simplification of the Goldreich–Lindell (GL) protocol and its analysis for the special case when the dictionary consists of short strings chosen uniformly at random, i.e., the password is in the spirit of an ATM PIN. The security bound achieved by our protocol, i.e., the bound on the probability with which the adversary can "break" the scheme, is somewhat worse than that of the GL protocol. We also present an alternative, more natural definition of security than the "augmented definition" of Goldreich and Lindell, and prove that the two definitions are equivalent. An extended abstract of this paper appeared in the First Theory of Cryptography Conference (TCC '04) [22]. Minh-Huyen Nguyen: supported by NSF grant CCR-0205423 and ONR grant N00014-04-1-0478. Salil Vadhan: supported by NSF grant CCR-0205423, a Sloan Research Fellowship, and ONR grant N00014-04-1-0478. Part of this work was done while at the Radcliffe Institute for Advanced Study.

15.
Brassard and Crépeau [BCr] introduced a simple technique for producing zero-knowledge proof systems based on blobs. As is to be expected, their implementation rests on an unproven cryptographic assumption, specifically, that it is easy to generate numbers that are difficult to factor. In this paper we present an implementation of blobs based on a different cryptographic assumption, specifically, that it is easy to generate primes p over which it is difficult to compute discrete logarithms. If, in addition, we can produce a generator for Z_p^*, this implementation has the advantage that it leads to proof systems which are perfect zero-knowledge, rather than only almost perfect zero-knowledge. The relationship between factoring and finding discrete logarithms is not well understood, although Bach [Bac1] is an important contribution. Given our current state of number-theoretic knowledge, there is no reason to prefer the cryptographic assumption required by one of these implementations over that required by the other. In any event, we introduce the notion of a product blob, whose favorable properties depend only on at least one of these assumptions holding. The first author was supported in part by NSA Grant MDA 904-84-H-00171. The second author was supported in part by NSF Grant DCR-8602562.
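A discrete-log-based blob can be illustrated with a Pedersen-style commitment over Z_p^*, which is perfectly hiding; this is a sketch of the flavor of construction, not necessarily the paper's exact blob:

```python
# Pedersen-style commitment: commit(b, r) = g^r * h^b mod p. As long as
# the committer does not know log_g(h), opening to the other bit requires
# computing a discrete log; hiding is perfect because g^r is uniform.
# Toy 31-bit prime for illustration only.
import secrets

p = 2**31 - 1            # Mersenne prime (toy size)
g = 7                    # 7 is a primitive root mod 2^31 - 1
h = pow(g, 1234567, p)   # in practice log_g(h) must be unknown to the committer

def commit(b: int, r: int) -> int:
    return (pow(g, r, p) * pow(h, b, p)) % p

def verify(c: int, b: int, r: int) -> bool:
    return commit(b, r) == c

r = secrets.randbelow(p - 1)
c = commit(1, r)
assert verify(c, 1, r)
```

Here the trusted setup of h (with unknown discrete log) is an assumption of the sketch; the perfect hiding property is what yields perfect rather than almost perfect zero-knowledge.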

16.
A New Bluetooth Scatternet Formation Protocol   (total citations: 6; self-citations: 0; other citations: 6)
A Bluetooth ad hoc network can be formed by interconnecting piconets into scatternets. The constraints and properties of Bluetooth scatternets present special challenges in forming an ad hoc network efficiently. In this paper, we present and analyze a new randomized distributed protocol for Bluetooth scatternet formation. We prove that our protocol achieves O(log n) time complexity and O(n) message complexity. The scatternets formed by our protocol have the following properties: (1) any device is a member of at most two piconets, and (2) the number of piconets is close to optimal. These properties can help prevent overloading of any single device and lead to low interference between piconets. We validate the theoretical results by simulations, which also show that the scatternets formed have O(log n) diameter. As an essential part of the scatternet formation protocol, we study the problem of device discovery: establishing multiple connections simultaneously with many Bluetooth devices. We investigate the collision rate and time requirement of the inquiry and page processes. Our simulation results indicate that the total number of packets sent is O(n) and that the maximum number of packets sent by any single device is O(log n).

17.
Reliability is an important research topic in the study of distributed systems. Under many circumstances, the healthy processors in a distributed system need to reach a common agreement before performing certain special tasks, even if faults exist. To achieve fault tolerance in distributed systems, one must deal with the Byzantine Agreement (BA) problem. Most BA protocols require all healthy processors to reach agreement in the same round; this kind of agreement is called Immediate Byzantine Agreement (IBA). Another kind of agreement, Eventual Byzantine Agreement (EBA), allows its participants to reach a common agreement in different rounds when f_act < f_p (f_act is the number of actual arbitrary faulty processors; f_p is the number of tolerable arbitrary faulty processors). However, the traditional EBA problem has been solved only in well-defined networks, while Mobile Ad hoc NETworks (MANETs) are increasing in popularity. Therefore, the EBA problem is revisited here under a dual failure mode (processors and transmission media) in the MANET. The proposed protocol, the Early Dual Agreement Protocol (EDAP), can reach agreement while tolerating the maximum number of faulty processors and transmission media in a MANET using the minimum number of message exchanges. Furthermore, our protocol can manage and organize the network efficiently even as processors move around the network.

18.
In this paper, we present a Petri net model for performance evaluation of the IEEE 802.11 distributed coordination function (DCF), a popular medium access control layer protocol in mobile ad hoc networks. The goal of this evaluation is to examine the protocol in the presence of misbehaving nodes that selfishly try to grab the common channel in a neighborhood. The presented model consists of two separate models based on stochastic reward nets (SRNs), a variation of stochastic Petri nets. The first model, called the one node operation model, represents all DCF operations in a single node, such as collision avoidance, the request-to-send/clear-to-send (RTS/CTS) handshake, and the backoff mechanism. The second SRN model, the all node operation model, captures the nodes' competition to occupy the channel in a neighborhood. The models can be adjusted to a dynamic network with any number of nodes, dimension scale, and node speed. For evaluation purposes, four distinct attack types are implemented by modifying the associated transitions in the SRN models. The proposed SRN model is quantified by deriving two performance metrics, throughput and delay. Both metrics are also compared to the values obtained from NS-2 for different numbers of nodes and three packet generation rates. Three additional metrics measuring channel usage are also quantified for different attack strategies using the presented SRN model alone.

19.
Definitions and properties of zero-knowledge proof systems   (total citations: 4; self-citations: 0; other citations: 4)
In this paper we investigate some properties of zero-knowledge proofs, a notion introduced by Goldwasser, Micali, and Rackoff. We introduce and classify two definitions of zero-knowledge: auxiliary-input zero-knowledge and blackbox-simulation zero-knowledge. We explain why auxiliary-input zero-knowledge is a definition more suitable for cryptographic applications than the original [GMR1] definition. In particular, we show that any protocol solely composed of subprotocols which are auxiliary-input zero-knowledge is itself auxiliary-input zero-knowledge. We show that blackbox-simulation zero-knowledge implies auxiliary-input zero-knowledge (which in turn implies the [GMR1] definition). We argue that all known zero-knowledge proofs are in fact blackbox-simulation zero-knowledge (i.e., zero-knowledge was proved using blackbox simulation of the verifier). As a result, all known zero-knowledge proof systems are shown to be auxiliary-input zero-knowledge and can be used for cryptographic applications such as those in [GMW2]. We demonstrate the triviality of certain classes of zero-knowledge proof systems, in the sense that only languages in BPP have zero-knowledge proofs of these classes. In particular, we show that any language having a Las Vegas zero-knowledge proof system necessarily belongs to RP. We show that randomness of both the verifier and the prover, and nontriviality of the interaction, are essential properties of (nontrivial) auxiliary-input zero-knowledge proofs. This research was partially supported by the Fund for Basic Research Administered by the Israeli Academy of Sciences and Humanities. Preliminary versions of this work have appeared in [O1] and [O2].

20.
Robust and Efficient Sharing of RSA Functions   (total citations: 3; self-citations: 0; other citations: 3)
We present two efficient protocols which implement robust threshold RSA signature schemes, where the power to sign is shared by N players such that any subset of T+1 or more signers can collaborate to produce a valid RSA signature on any given message, but no subset of T or fewer corrupted players can forge a signature. Our protocols are robust in the sense that the correct signature is computed even if up to T players behave in an arbitrarily malicious way during the signature protocol. This, in particular, includes the cases of players who refuse to participate or who introduce erroneous values into the computation. Our robust protocols achieve optimal resiliency as they can tolerate up to (N-1)/2 faults, and their efficiency is comparable with the efficiency of the underlying threshold RSA signature scheme. Our protocols require RSA moduli which are the product of two safe primes, and that the underlying (centralized) RSA signature scheme is unforgeable. Our techniques also apply to the secure sharing of the RSA decryption function. We show that adding robustness to the existing threshold RSA schemes reduces to solving the problem of how to verify an RSA signature without a public verification key. Received 21 March 1997 and revised 28 September 1999.
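The paper's scheme is a robust T-out-of-N threshold scheme; the sketch below shows only the simpler underlying idea of sharing the signing exponent, in the N-out-of-N additive case, with toy primes for illustration:

```python
# Sharing an RSA signing function additively: d = d_1 + ... + d_n (mod phi),
# each player outputs a partial signature m^{d_i} mod N, and the partials
# multiply into a full signature. No robustness or T-out-of-N threshold
# here; this only illustrates why partial signatures combine correctly.
import secrets

p, q = 1000003, 1000033            # toy primes (real moduli are >= 1024 bits)
N, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # the shared signing key

PLAYERS = 3
# Deal additive shares of d modulo phi.
shares = [secrets.randbelow(phi) for _ in range(PLAYERS - 1)]
shares.append((d - sum(shares)) % phi)

m = 42                             # message representative, gcd(m, N) = 1
partials = [pow(m, d_i, N) for d_i in shares]

s = 1
for part in partials:              # combiner multiplies the partials mod N
    s = s * part % N
assert pow(s, e, N) == m           # the combined signature verifies
```

Going from this to the paper's setting requires polynomial (Shamir-style) sharing for the T-out-of-N threshold and, crucially, a way to verify each partial signature, which is exactly the "verify an RSA signature without a public verification key" problem the abstract highlights.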


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号