Similar Articles
20 similar articles found.
1.
Focusing on a large-scale wireless sensor network with multiple base stations (BS), a key management protocol is designed in this paper. For securely relaying data between a node and a base station or between two nodes, the protocol adopts an end-to-end data security method. It further employs a distributed key revocation scheme to efficiently remove compromised nodes, forming our key management protocol called the multi-BS key management protocol (MKMP). Through performance evaluation, we show that MKMP outperforms LEDS (Ren et al., IEEE Trans Mobile Comput 7(5):585–598, 2008) in terms of resilience against node capture attacks. With an analysis of key storage overheads, we demonstrate that MKMP also performs better than mKeying (Wang et al., A key management protocol for wireless sensor networks with multiple base stations, in: Proceedings of ICC'08, pp 1625–1629, 2008) in terms of this overhead.
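As a rough illustration of how a distributed key revocation scheme can remove compromised nodes, here is a toy threshold-voting tally in Python. The threshold rule, class, and method names are hypothetical, purely for illustration; MKMP's actual revocation mechanism is not specified in the abstract.

```python
class RevocationTally:
    """Toy distributed revocation sketch: a node is revoked once enough
    distinct, non-revoked neighbors accuse it of being compromised."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.votes = {}          # suspect -> set of distinct accusers
        self.revoked = set()

    def accuse(self, accuser, suspect):
        if accuser in self.revoked:
            return False         # revoked nodes lose their voting rights
        # a set ensures duplicate accusations from one node count once
        self.votes.setdefault(suspect, set()).add(accuser)
        if len(self.votes[suspect]) >= self.threshold:
            self.revoked.add(suspect)
        return suspect in self.revoked

tally = RevocationTally(threshold=3)
tally.accuse("n1", "n9")
tally.accuse("n2", "n9")
assert "n9" not in tally.revoked   # two accusers: below threshold
tally.accuse("n3", "n9")
assert "n9" in tally.revoked       # third distinct accuser triggers revocation
```

Using a set of accusers (rather than a counter) is what prevents a single malicious node from revoking an honest neighbor by voting repeatedly.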

2.
3.
In this letter, we propose an improved quasi-orthogonal space-time block code (QOSTBC) with full rate, full diversity, linear decoding, and a better peak-to-average power ratio (PAPR). Constellation rotation is applied to the input symbol vector to ensure full diversity, and the information symbol vector is then interleaved in coordinates and pre-grouped using a Givens rotation matrix. The performance is evaluated by numerical experiments. The PAPR of our proposed QOSTBC is lower than that of CI-QOSTBC (Khan and Rajan, IEEE Trans Inf Theory 52(5):2062–2091, 2006). Meanwhile, the bit-error-rate versus signal-to-noise-ratio performance of our proposed QOSTBC is better than those of OSTBC (Tarokh et al. in IEEE Trans Inf Theory 45(5):1456–1467, 1999), QOSTBC (Jafarkhani in IEEE Trans Wirel Commun 49(1):1–4, 2001), and G-QOSTBC (Park et al. in IEEE Commun Lett 12(12):868–870, 2008), slightly better than that of CI-QOSTBC, and the same as those of the recently proposed minimum decoding complexity QOSTBCs (MDC-QOSTBC) in Yuen et al. (IEEE Trans Wirel Commun 4(5):2089–2098, 2005) and Wang and Xia (IEEE Trans Inf Theory 55(3):1104–1130, 2009). Compared with MDC-QOSTBC, the proposed QOSTBC has a simpler code construction and lower decoding complexity.
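PAPR itself is straightforward to compute from baseband samples. A minimal sketch (the constant-envelope tone below is only a sanity check, not a QOSTBC codeword):

```python
import math

def papr_db(samples):
    """Peak-to-average power ratio of a baseband sample sequence, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    peak = max(powers)
    avg = sum(powers) / len(powers)
    return 10 * math.log10(peak / avg)

# A constant-envelope signal has 0 dB PAPR by definition.
tone = [complex(math.cos(2 * math.pi * k / 16), math.sin(2 * math.pi * k / 16))
        for k in range(16)]
print(papr_db(tone))   # ~0.0 dB
```

A single impulse among zero-padding, by contrast, concentrates all energy in one sample, so `papr_db([1, 0, 0, 0])` gives 10·log10(4) ≈ 6.02 dB.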

4.
Image distortion analysis is a fundamental issue in many image processing problems, including compression, restoration, recognition, classification, and retrieval. Traditional image distortion evaluation approaches tend to be heuristic and are often limited to specific application environments. In this work, we investigate the problem of image distortion measurement based on the theory of Kolmogorov complexity, which has rarely been studied in the context of image processing. This work is motivated by the normalized information distance (NID) measure, which has been shown to be a valid and universal distance metric applicable to similarity measurement of any two objects (Li et al. in IEEE Trans Inf Theory 50:3250–3264, 2004). Like Kolmogorov complexity, NID is non-computable. A useful practical solution is to approximate it using the normalized compression distance (NCD) (Li et al. in IEEE Trans Inf Theory 50:3250–3264, 2004), which has led to impressive results in many applications such as the construction of phylogeny trees from DNA sequences (Li et al. in IEEE Trans Inf Theory 50:3250–3264, 2004). In our earlier work, we showed that direct use of NCD on image processing problems is difficult and proposed a normalized conditional compression distance (NCCD) measure (Nikvand and Wang, 2010), which has significantly wider applicability than existing image similarity/distortion measures. To assess the distortion between two images, we first transform them into the wavelet transform domain. Assuming stationarity and good decorrelation of wavelet coefficients beyond local regions and across wavelet subbands, the Kolmogorov complexity may be approximated using Shannon entropy (Cover et al. in Elements of information theory. Wiley-Interscience, New York, 1991).
Inspired by Sheikh and Bovik (IEEE Trans Image Process 15(2):430–444, 2006), we adopt a Gaussian scale mixture model for clusters of neighboring wavelet coefficients and a Gaussian channel model for the noise distortions in the human visual system. Combining these assumptions with the NID framework, we derive a novel normalized perceptual information distance measure, where maximum likelihood estimation and least-squares regression are employed for parameter fitting. We validate the proposed distortion measure using three large-scale, publicly available, and subject-rated image databases, which include a wide range of practical image distortion types and levels. Our results demonstrate the good prediction power of the proposed method for perceptual image distortions.
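The NCD approximation of NID can be sketched with any off-the-shelf compressor. Here zlib stands in for the idealized compressor, so the scores are illustrative only:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable approximation of the
    (non-computable) normalized information distance, using a real compressor.
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

s1 = b"the quick brown fox jumps over the lazy dog" * 20
s2 = b"completely different content with other statistics!" * 20
print(ncd(s1, s1), ncd(s1, s2))   # self-distance is near 0; unrelated data near 1
```

Because zlib is far from an ideal compressor, NCD values can slightly exceed the theoretical [0, 1] range; what matters in practice is the ordering of distances, not their absolute values.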

5.
In this paper, we discuss applications of max-min fairness (MMF) in survivable networks. We focus on two specific applications intended to deal with failure situations and provide several computational results for each of them. The first application, called simple robust routing, generalizes multipath routing in order to achieve acceptable levels of traffic demand satisfaction under single link failures while avoiding classical rerouting procedures. Such a method can be seen as a special case of dedicated resource recovery schemes. The second application is concerned with two shared resource restoration strategies and the corresponding problems of computing the MMF minimum traffic demand satisfaction ratio vectors associated with the set of single link failures. We consider the local rerouting and end-to-end rerouting without stub-release strategies. Computational results for realistic network instances provide a comparison of different routing and rerouting strategies in terms of traffic satisfaction rate. The question of estimating the bandwidth overhead that the "simple robust routing scheme" may require in comparison with classical restoration schemes is also studied, and answers based on computational results are provided. This work is a continuation of our earlier works on MMF (Nace et al., IEEE Trans Netw 14:1272–1281, 2006; Nace et al., Comput Oper Res 35:557–573, 2008).
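A generic progressive-filling sketch of max-min fair rate allocation may help fix ideas (this is the textbook MMF algorithm, not the paper's restoration-specific computation): all rates rise at the same pace until some link saturates, at which point the flows crossing it are frozen, and the process repeats on the residual capacity.

```python
def max_min_fair(capacities, routes):
    """Progressive filling. capacities: {link: capacity};
    routes: {flow: [links it traverses]}. Returns {flow: MMF rate}."""
    rate = {f: 0.0 for f in routes}
    frozen = set()
    residual = dict(capacities)
    while len(frozen) < len(routes):
        # count unfrozen flows crossing each link
        active = {l: sum(1 for f, r in routes.items()
                         if f not in frozen and l in r)
                  for l in capacities}
        # largest equal increment before the tightest link fills up
        inc = min(residual[l] / n for l, n in active.items() if n > 0)
        tight = [l for l, n in active.items()
                 if n > 0 and abs(residual[l] - inc * n) < 1e-12]
        for f in routes:
            if f not in frozen:
                rate[f] += inc
                for l in routes[f]:
                    residual[l] -= inc
        for l in tight:                 # freeze every flow on a saturated link
            for f, r in routes.items():
                if l in r:
                    frozen.add(f)
    return rate

# three flows over two links: bottleneck A is shared by f1 and f2
rates = max_min_fair({"A": 1.0, "B": 2.0},
                     {"f1": ["A"], "f2": ["A", "B"], "f3": ["B"]})
print(rates)   # f1=0.5, f2=0.5, f3=1.5
```

Flow f3 ends up with 1.5 because, once link A saturates at the fair share 0.5, the remaining capacity of link B belongs entirely to it; no flow can be raised without lowering an already-smaller one, which is the MMF property.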

6.
We present nontrivial utilization methods of a pair of symbolic algebraic algorithms (Yamada et al. in IEEE Trans Signal Process 46:1639–1664, 1998; Yamada and Bose in IEEE Trans Circuits Syst I Fundam Theory Appl 49:298–304, 2002), which were developed originally for multidimensional phase unwrapping and zero-distribution problems. Given the minor subspace of the covariance matrix of the data measured at a uniform linear array of sensors, the proposed methods provide estimates of the directions-of-arrival (DOA) distribution of multiple narrowband signals, i.e., the number of DOAs in an arbitrarily specified range, without using any numerical search over each direction of arrival. The proposed methods can serve as powerful mathematical tools for extracting global information in high-resolution DOA estimation problems.

7.
We present a positive obfuscation result for a traditional cryptographic functionality. This positive result stands in contrast to well-known impossibility results (Barak et al. in Advances in Cryptology - CRYPTO '01, 2002) for general obfuscation and recent impossibility and implausibility results (Goldwasser and Kalai in 46th IEEE Symposium on Foundations of Computer Science (FOCS), pp. 553–562, 2005) for obfuscation of many cryptographic functionalities. Whereas other positive obfuscation results in the standard model apply to very simple point functions (Canetti in Advances in Cryptology - CRYPTO '97, 1997; Wee in 37th ACM Symposium on Theory of Computing (STOC), pp. 523–532, 2005), our obfuscation result applies to the significantly more complex and widely used re-encryption functionality. This functionality takes a ciphertext for a message m encrypted under Alice's public key and transforms it into a ciphertext for the same message m under Bob's public key. To overcome impossibility results and to make our results meaningful for cryptographic functionalities, our scheme satisfies a definition of obfuscation which incorporates more security-aware provisions.
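As a purely functional illustration of re-encryption (not the authors' construction, which satisfies a formal obfuscation definition), here is a toy BBS98-style ElGamal sketch. The parameters are deliberately tiny and offer no real security; they only show how a proxy holding a re-encryption key can turn Alice's ciphertext into Bob's without seeing the plaintext.

```python
# Toy proxy re-encryption over a small safe-prime group (insecure parameters).
p = 467    # safe prime: p = 2q + 1
q = 233    # prime order of the subgroup
g = 4      # generator of the order-q subgroup (2^2 mod p)

def keygen(sk):
    return pow(g, sk, p)

def encrypt(pk, m, r):
    # ElGamal-style ciphertext under pk = g^a: (m * g^r, g^(a*r))
    return (m * pow(g, r, p) % p, pow(pk, r, p))

def rekey(sk_a, sk_b):
    # re-encryption key b/a mod q, given to the proxy
    return sk_b * pow(sk_a, q - 2, q) % q

def reencrypt(ct, rk):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))        # g^(a*r) -> g^(b*r)

def decrypt(ct, sk):
    c1, c2 = ct
    gr = pow(c2, pow(sk, q - 2, q), p)  # (g^(sk*r))^(1/sk) = g^r
    return c1 * pow(gr, p - 2, p) % p   # m = c1 / g^r

a, b, m, r = 17, 101, 42, 55
ct_a = encrypt(keygen(a), m, r)         # ciphertext for Alice
ct_b = reencrypt(ct_a, rekey(a, b))     # proxy transforms it for Bob
print(decrypt(ct_b, b))                 # 42
```

Note the proxy learns only b/a mod q, never the plaintext; in this toy variant, however, Alice and Bob's keys jointly reveal each other, which is one of the weaknesses the pairing-based literature addresses.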

8.
Recently, some works have shown that it is possible to obtain quite good bit error rate performance over an additive white Gaussian noise channel with chaotic systems. In this research field, this paper proposes new insights into the chaos-coded modulation (CCM) schemes originally proposed by Kozic et al. (2003; IEEE Trans Circuits Syst Regul Pap 53:2048–2059, 2006). A detailed study of the distance spectrum of such schemes is proposed, and an approximation of its distribution by means of Gaussian or Rayleigh mixtures is given. Furthermore, using these approximate distributions, a complete study of the performance of these CCM schemes when they are concatenated with a space-time block code is proposed. Accurate bounds are obtained even in the case of time-selective channels.

9.
Radio frequency identification (RFID) is a popular automatic identification technology that uses radio frequencies. Many security and privacy problems may arise in the use of RFID due to its radio transmission nature. In 2012, Cho et al. (Comput Math Appl, 2012. doi:10.1016/j.camwa.2012.02.025) proposed a new hash-based RFID mutual authentication protocol to solve these problems. However, this protocol was demonstrated to be vulnerable to DoS attacks. This paper further shows that Cho et al.'s protocol is vulnerable to traffic analysis and tag/reader impersonation attacks. An improved protocol is also proposed which can prevent the said attacks.
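A generic nonce-based mutual authentication sketch (not Cho et al.'s protocol, nor the paper's improved one) shows the hash-based challenge-response pattern such protocols build on: each side proves knowledge of the shared secret over fresh nonces, so replayed transcripts fail.

```python
import hashlib
import os

def h(*parts: bytes) -> bytes:
    """Hash of concatenated, delimited message parts."""
    return hashlib.sha256(b"|".join(parts)).digest()

# Shared secret provisioned on both the tag and the back-end server.
secret = b"tag-key-0001"

# Reader -> tag: fresh challenge
r_reader = os.urandom(16)

# Tag -> reader: its own nonce plus a proof of knowing the secret
r_tag = os.urandom(16)
tag_resp = h(secret, r_reader, r_tag)

# Server verifies the tag, then proves itself back (mutual authentication);
# the reversed nonce order makes the two proofs distinct.
assert tag_resp == h(secret, r_reader, r_tag)
server_resp = h(secret, r_tag, r_reader)

# Tag verifies the server's reply before updating any internal state.
assert server_resp == h(secret, r_tag, r_reader)
print("mutual authentication ok")
```

The traffic-analysis attack mentioned in the abstract targets a subtler property: if any value a tag emits is linkable across sessions, an eavesdropper can track it even without breaking the hash, which is why fresh nonces must cover every transmitted field.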

10.
Wireless ad-hoc networks are infrastructureless networks comprising wireless mobile nodes that are able to communicate with each other beyond the direct wireless transmission range. Due to frequent network topology changes on the one hand and the limited underlying bandwidth on the other, routing becomes a challenging task. In this paper we present a novel routing algorithm for mobile ad hoc networks. It entails both reactive and proactive components. More precisely, the algorithm is based on the general behavior of ants, but differs from the classic ant methods inspired by the Ant-Colony-Optimization algorithm [1]. During the reactive phase, we do not use a broadcasting technique that exponentially increases the routing overhead; instead, we introduce a new reactive route discovery technique that considerably reduces the communication overhead. In the simulation results, we show that our protocol can outperform both the Ad hoc On-demand Distance Vector (AODV) protocol [2], one of the most important current state-of-the-art algorithms, and the AntHocNet protocol [5], one of the most important ant-based routing algorithms, in terms of end-to-end delay, packet delivery ratio, and communication overhead.

11.
An adaptive power control algorithm is proposed to maximize the sum rate of multiple interfering links under minimum rate constraints. The proposed algorithm efficiently minimizes the outage probability of the overall network by using an iterative algorithm with complexity of \(O\left( {N^{2}\ln N} \right)\), where \(N\) is the number of interfering links. Monte Carlo simulations show that the proposed algorithm guarantees the optimum outage probability and achieves a system sum rate above 80 % of that obtained with geometric programming (Chiang et al. in IEEE Trans Wirel Commun 6(7):2640–2651, 2007).
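The paper's O(N² ln N) algorithm is not spelled out in the abstract; for intuition about iterative power control among interfering links, the classic distributed Foschini-Miljanic update, which drives each link toward a target SINR, can be sketched as follows (a different algorithm, shown only to convey the flavor):

```python
def sinr(p, G, noise, i):
    """SINR of link i: own received power over noise plus interference."""
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / (noise + interference)

def foschini_miljanic(G, noise, target, p_max=10.0, iters=200):
    """Each link independently scales its power by target/SINR;
    converges to the minimal-power solution when the target is feasible."""
    p = [1e-3] * len(G)
    for _ in range(iters):
        p = [min(p[i] * target / sinr(p, G, noise, i), p_max)
             for i in range(len(p))]
    return p

# two weakly coupled links; a target SINR of 3 is feasible here
G = [[1.0, 0.05], [0.08, 1.0]]
p = foschini_miljanic(G, noise=0.01, target=3.0)
print(p)   # both links meet the SINR target with small powers
```

Each link only needs its own measured SINR, which is why the update is fully distributed; infeasible targets show up as powers climbing to the cap instead of converging.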

12.
This article presents three versions of a novel MAC protocol for IEEE 802.11 ad-hoc networks called Busy Signal-based Mechanism turned On (BusySiMOn) (this is an extended version of our conference paper [15]). The key idea of the proposed solution is an intelligent two-step reservation procedure combined with the advantages of EDCA service differentiation. The former alleviates the hidden node problem, while the latter ensures compatibility with the IEEE 802.11 standard. Simulation results obtained for saturated and non-saturated network conditions emphasize the advantages of the new protocol over the currently used four-way handshake mechanism in terms of fairness, throughput, and average frame delay.

13.
In 2009, Lee et al. (Ann Telecommun 64:735–744, 2009) proposed a new authenticated group key agreement protocol for imbalanced wireless networks. Their protocol, based on bilinear pairings, was proven secure under the computational Diffie–Hellman assumption. It remedies the security weakness of Tseng's non-authenticated protocol, which cannot ensure the validity of the transmitted messages. In this paper, the authors show that Lee et al.'s authenticated protocol is also insecure: an adversary can impersonate any mobile user to cheat the powerful node. Furthermore, the authors propose an improvement of Lee et al.'s protocol and prove its security in Manulis et al.'s model. The new protocol provides mutual authentication and resists ephemeral key compromise attacks by binding the user's static private key and ephemeral key.

14.
The basic bandgap reference voltage generator (BGR) is thoroughly analyzed, and its relations are reconstructed considering the dependence of the bandgap energy, Eg, on absolute temperature. Previous works all consider Eg a constant, independent of temperature variations. However, Eg varies by around 25 meV when the temperature is increased from 2 to 92 °C. In this paper the dependence of Eg on absolute temperature, based on the HSPICE MOSFET models (HSPICE MOSFET Models Manual, Version X-2005.09, 2005), is approximated by a third-order polynomial using the Lagrange interpolation method within the temperature range of 2–92 °C. Careful analysis of the simplified polynomial reveals that the TC of VBE must be corrected to −1.72 mV/K at 27 °C, formerly reported as about −1.5 mV/K in Razavi (Design of analog CMOS integrated circuits, 2001) and Colombo et al. (Impact of noise on trim circuits for bandgap voltage references, 2007), −2 mV/K in Gray et al. (Analysis and design of analog integrated circuits, 2001), Leung and Mok (A sub-1-V 15-ppm/°C CMOS bandgap voltage reference without requiring low threshold voltage device, 2002), and Banba et al. (A CMOS bandgap reference circuit with sub-1-V operation, 1999), and −2.2 mV/K in Johns and Martin (Analog integrated circuit design, 1997) and Tham and Nagaraj (A low supply voltage high PSRR voltage reference in CMOS process, 1995). Another important conclusion is that the typical weighting coefficient of the TC+ and TC− terms is modified to about 19.84 at 27 °C, from 16.76 when Eg is considered constant, and from the 17.2 found in widely read literature (Razavi, Design of analog CMOS integrated circuits, 2001). Neglecting the temperature dependence of Eg may introduce a relative error of about 20.5 % in the TC of VBE. Also, the resistance and transistor size ratios, which set the weighting coefficient of the TC+ term, may suffer up to 20.3 % error when the temperature dependence of Eg is ignored.
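Lagrange interpolation through four samples yields exactly the third-order polynomial the abstract describes. The Eg(T) values below are hypothetical placeholders with roughly the stated 25 meV swing, not the paper's fitted HSPICE data:

```python
def lagrange(points):
    """Return the Lagrange interpolating polynomial through (x, y) points,
    as a callable. Four points give a cubic (third-order) polynomial."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

# Hypothetical Eg samples in eV over 2-92 degC (illustrative values only).
samples = [(2.0, 1.1245), (32.0, 1.1180), (62.0, 1.1105), (92.0, 1.1020)]
eg = lagrange(samples)
print(eg(27.0))   # Eg at 27 degC from the fitted cubic
```

Once Eg(T) is a polynomial rather than a constant, the derivative of the VBE expression picks up extra temperature terms, which is the mechanism behind the corrected −1.72 mV/K slope reported in the paper.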

15.
In this paper a new control-period-based distributed adaptive guard channel reservation (CDAGCR) technique is proposed to meet the call-admission-level quality of service (QoS) in wireless cellular networks. It partitions real time into control periods. Handoffs observed during the current control period are used to reserve guard channels at the beginning of the next control period. Efficient mechanisms are devised to adaptively vary the length of the control period, which in turn regulates the number of guard channels used to meet the call-admission-level QoS. The BSC associated with the cell site can do this exclusively, without generating any signaling overhead for information exchange among cell sites, unlike the schemes described in [14]. Thus, the CDAGCR scheme is amenable to a fully distributed implementation. Extensive simulation studies have been carried out on an emulated test bed to investigate the performance of the CDAGCR scheme. It is found that the scheme keeps the handoff call drop probability below the targeted QoS, with comparable new call blocking, by adaptively varying the length of the control period. The simulation results appear promising.
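One hypothetical way to size the next period's guard-channel pool from the handoffs observed in the current period is to reserve roughly the expected number of handoff calls in service plus a safety margin. The rule and parameters below are illustrative only, not the CDAGCR mechanism:

```python
import math

def next_guard_channels(handoff_arrivals, period_s, mean_call_s,
                        total_ch, margin=1.0):
    """Guard channels for the next control period (hypothetical rule):
    expected handoff calls in service (Little's law) plus a
    square-root safety margin, capped by the cell's channel count."""
    rate = handoff_arrivals / period_s        # observed handoff arrival rate
    expected_busy = rate * mean_call_s        # mean handoffs in service
    g = math.ceil(expected_busy + margin * math.sqrt(expected_busy))
    return min(g, total_ch)

# 12 handoffs in a 60 s period, 90 s mean call holding time, 50 channels
print(next_guard_channels(12, 60.0, 90.0, 50))   # 23
```

Everything here uses only locally observed quantities, which mirrors the abstract's point that the BSC can adapt without any inter-cell signaling.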

16.
In this paper, we propose a green radio resource allocation (GRRA) scheme for LTE-Advanced downlink systems with coordinated multi-point (CoMP) transmission to support multimedia traffic. The GRRA scheme defines a green radio utility function composed of the required transmission power, the assigned modulation order, and the number of coordinated transmission nodes. By maximizing this utility function, the GRRA scheme can effectively save transmission power, enhance spectrum efficiency, and guarantee quality-of-service requirements. The simulation results show that when the traffic load intensity is greater than 0.7, the GRRA scheme saves transmission power by more than 33.9 % and 40.1 %, as compared with the conventional adaptive radio resource allocation (ARRA) scheme (Tsai et al. in IEEE Trans Wireless Commun 7(5):1734–1743, 2008) with CoMP and the utility-based radio resource allocation (URRA) scheme (Katoozian et al. in IEEE Trans Wireless Commun 8(1):66–71, 2009) with CoMP, respectively. In addition, it enhances the system throughput by approximately 5.5 % and improves Jain's fairness index for best-effort users by more than 155 % over these two schemes.

17.
The aim of this paper is to demonstrate the feasibility of authenticated throughput-efficient routing in an unreliable and dynamically changing synchronous network in which a majority of malicious insiders try to destroy and alter messages or disrupt communication in any way. More specifically, in this paper we seek to answer the following question: given a network in which the majority of nodes are controlled by a node-controlling adversary and whose topology is changing every round, is it possible to develop a protocol with polynomially bounded memory per processor (with respect to network size) that guarantees throughput-efficient and correct end-to-end communication? We answer the question affirmatively for extremely general corruption patterns: we only require that the topology of the network and the corruption pattern of the adversary leave at least one path each round connecting the sender and receiver through honest nodes (though this path may change at every round). Our construction works in the public-key setting and enjoys optimal transfer rate and bounded memory per processor (that is, polynomial in the network size and independent of the amount of traffic). We stress that our protocol assumes no knowledge of which nodes are corrupted nor which path is reliable at any round, and is also fully distributed, with nodes making decisions locally, so that they need not know the topology of the network at any time. The optimality that we prove for our protocol is very strong. Given any routing protocol, we evaluate its efficiency (rate of message delivery) in the "worst case," that is, with respect to the worst possible graph and against the worst possible (polynomially bounded) adversarial strategy (subject to the above-mentioned connectivity constraints). Using this metric, we show that there does not exist any protocol that can be asymptotically superior (in terms of throughput) to ours in this setting.
We remark that the aim of our paper is to demonstrate via explicit example the feasibility of throughput-efficient authenticated adversarial routing. However, we stress that our protocol is not intended to provide a practical solution; due to its complexity, no attempt has thus far been made to reduce constants and memory requirements. Our result is related to recent work of Barak et al. (Proc. of Advances in Cryptology - 27th EUROCRYPT 2008, LNCS, vol. 4965, pp. 341–360, 2008), who studied fault localization in networks assuming a private-key trusted-setup setting. Our work, in contrast, assumes a public-key PKI setup and aims at not only fault localization but also transmission optimality. Among other things, our work answers one of the open questions posed in the Barak et al. paper regarding fault localization on multiple paths. The use of a public-key setting to achieve strong error-correction results in networks was inspired by the work of Micali et al. (Proc. of 2nd Theory of Cryptography Conf., LNCS, vol. 3378, pp. 1–16, 2005), who showed that classical error correction against a polynomially bounded adversary can be achieved with surprisingly high precision. Our work is also related to an interactive coding theorem of Rajagopalan and Schulman (Proc. 26th ACM Symp. on Theory of Computing, pp. 790–799, 1994), who showed that in noisy-edge static-topology networks a constant overhead in communication can be achieved (provided none of the processors are malicious), thus establishing an optimal-rate routing theorem for static-topology networks. Finally, our work is closely related to, and builds upon, the problem of end-to-end communication in distributed networks, studied by Afek and Gafni (Proc. of the 7th ACM Symp. on Principles of Distributed Computing, pp. 131–148, 1988); Awerbuch et al. (Proc. of the 30th IEEE Symp. on Foundations of Computer Science, FOCS, 1989); Afek et al. (Proc. of the 11th ACM Symp. on Principles of Distributed Computing, pp. 35–46, 1992); and Afek et al. (J. Algorithms 22:158–186, 1997), though none of these papers consider or ensure correctness in the setting of a node-controlling adversary that may corrupt the majority of the network.

18.
This paper proposes three different dynamic cell coordination schemes using adaptive link adaptation and variable frequency reuse for OFDMA downlink cellular networks: greedy cell coordination for flat-fading channels, and dynamic maximum C/I cell coordination (DMCC) and dynamic proportional fairness cell coordination (DPFCC) for frequency-selective fading channels. The performance of the proposed dynamic cell coordination schemes is compared with that of no cell coordination and of static reuse coordination using conventional proportional fair (PF) scheduling, in terms of system throughput and fairness. Simulation results demonstrate that the proposed schemes allow the radio network controller (RNC) and base stations (BSs) to apply different reuse factors on each subchannel, taking into account the different interference conditions of individual users, so as to increase system throughput and guarantee the QoS requirement of each user in a multicell environment, where conventional OFDMA downlink performance may degrade due to persistent interference from other cells. In flat fading, the proposed dynamic schemes achieve, on average, a system throughput 1.2 times greater than with no cell coordination, 1.4 times greater than with static cell coordination, and 3 times greater than with the simplified subchannel allocation scheme (SSAS) (Kim et al. in Proceedings of IEEE VTC Spring'04, vol. 3, pp. 1821–1825, 2004). In frequency-selective fading, the proposed DMCC scheme achieves a throughput 2.6 times greater than a single reuse factor of one applied to all subcarriers, and DPFCC performs as well as a single reuse factor of one.

19.
Redundant binary numbers appear to be appropriate for high-speed arithmetic operations, but the delay and hardware cost associated with conversion from redundant binary (RB) to natural binary (NB) numbers remains a challenging problem. In the present investigation, a simple approach is adopted to achieve high speed with less hardware and lower power. A circuit-level approach is adopted to implement the equivalent bit conversion algorithm (EBCA) (Kim et al., IEEE Journal of Solid-State Circuits 36:1538–1544, 2001; 38:159–160, 2003) for RB-to-NB conversion. The circuit design exploits the predictable carry-out feature of the EBCA. This implementation yields a significant delay-power product and component-complexity advantage for a 64-bit RB-to-NB conversion using a novel carry-look-ahead equivalent bit converter.
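Behaviorally, any RB-to-NB converter computes N = P - M, where P collects the +1 digit positions and M the -1 positions; the paper's contribution is the fast carry-look-ahead circuit for this subtraction, which the behavioral sketch below does not model:

```python
def rb_to_nb(digits):
    """Convert a redundant-binary number (digits in {-1, 0, 1}, MSB first)
    to its integer value via N = P - M: P collects the +1 positions,
    M the -1 positions, each read as an ordinary binary number."""
    p = m = 0
    for d in digits:
        p = (p << 1) | (d == 1)
        m = (m << 1) | (d == -1)
    return p - m

# digits 1, 0, -1, 1 represent 8 + 0 - 2 + 1 = 7
print(rb_to_nb([1, 0, -1, 1]))   # 7
```

The redundancy means several digit strings encode the same value (e.g. `[1, -1]` and `[0, 1]` both give 1), which is exactly what lets RB adders avoid long carry chains and why the final RB-to-NB step is where the carry cost reappears.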

20.
Among the recently proposed single-rate multicast congestion control protocols is TCP-friendly multicast congestion control (TFMCC; Widmer and Handley 2001; Floyd et al. 2000; Widmer et al. IEEE Netw 15:28–37, 2001), an equation-based single-rate protocol that extends the mechanisms of the unicast TCP-friendly rate control (TFRC) protocol into the multicast domain. In TFMCC, each receiver estimates its throughput using an equation that estimates the steady-state throughput of a TCP source. The source then adjusts its sending rate according to the slowest receiver within the session (a.k.a. the current limiting receiver, CLR). TFMCC is a relatively simple, scalable, and TCP-friendly multicast congestion control protocol. However, TFMCC hinders its throughput performance by adopting an equation derived from the unicast TFRC protocol. Further, TFMCC is slow to react to congestion conditions that usually result in a change of the CLR. This paper is motivated by these two observations and proposes an improved version of TFMCC, which we refer to as hybrid TFMCC (H-TFMCC for short). First, each receiver estimates its throughput using an equation that models the steady-state throughput of a multicast source controlled according to the additive-increase multiplicative-decrease (AIMD) approach. The second modification consists of adopting a hybrid sender/receiver-based rate control strategy, where sending-rate adjustments can be made by the source or initiated by the current or a new CLR. The source monitors RTT variations on the CLR path in order to rapidly adjust the sending rate to network conditions. Simulation results show that these modifications yield remarkable performance improvements with respect to throughput, reaction time, and magnitude of oscillations. We also show that H-TFMCC remains TCP-friendly and achieves a higher fairness index than TFMCC.
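The steady-state TCP throughput equation that TFRC-style receivers use (Padhye et al.) can be sketched as follows; the t_RTO = 4·RTT simplification is a common convention and is an assumption here, not something stated in the abstract:

```python
import math

def tfrc_rate(s, rtt, p, t_rto=None):
    """TCP-friendly sending rate in bytes/s from the steady-state TCP
    throughput equation: packet size s (bytes), round-trip time rtt (s),
    steady-state loss event rate p."""
    if t_rto is None:
        t_rto = 4 * rtt   # common simplification for the retransmit timeout
    denom = (rtt * math.sqrt(2 * p / 3)
             + t_rto * 3 * math.sqrt(3 * p / 8) * p * (1 + 32 * p * p))
    return s / denom

# e.g. 1000-byte packets, 100 ms RTT, 1 % loss -> roughly 112 KB/s
rate = tfrc_rate(1000, 0.1, 0.01)
print(rate)
```

In TFMCC each receiver evaluates this with its own measured RTT and loss rate, and the source follows the minimum; the paper's point is that a multicast-AIMD model gives a better-matched equation than this unicast-derived one.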
