Similar Documents
20 similar documents found (search time: 703 ms)
1.
In this paper, we present a novel, computationally efficient motion estimation (ME) algorithm for High Efficiency Video Coding (HEVC). The proposed algorithm searches in a hexagonal pattern with a fixed number of search points at each grid, exploiting the correlation between contiguous pixels within a frame. To reduce computational complexity, the algorithm employs pixel truncation, an adaptive search range, and sub-sampling, and skips some of the asymmetric prediction-unit modes. Simulation results, obtained with the HM reference software (encoder_lowdelay_P_main and encoder_randomaccess_main profiles), show a 55.49% reduction in the number of search points with approximately the same PSNR and around a 1% increase in bit rate compared to the Test Zone Search (TZS) ME algorithm. With the proposed algorithm, the BD-PSNR loss for video sequences such as BasketballPass_416×240@50 and Johnny_1280×720@60 is 0.0804 dB and 0.0392 dB respectively, compared to the HM reference software with the encoder_lowdelay_P_main profile.
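The hexagon-pattern search mentioned above can be illustrated with a minimal sketch of generic hexagon-based block matching. This assumes a SAD cost and 8×8 blocks on synthetic frames; the paper's fixed search-point schedule, pixel truncation, and sub-sampling steps are not reproduced.

```python
# Generic hexagon-based block-matching motion estimation (sketch).
# Assumptions: SAD cost, 8x8 blocks -- not the paper's exact algorithm.
import numpy as np

LARGE_HEX = [(0, 0), (-2, 0), (2, 0), (-1, -2), (1, -2), (-1, 2), (1, 2)]
SMALL_HEX = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def sad(ref, cur, y, x, dy, dx, bs=8):
    """Sum of absolute differences between the current block at (y, x)
    and the reference block displaced by (dy, dx)."""
    h, w = ref.shape
    yy, xx = y + dy, x + dx
    if yy < 0 or xx < 0 or yy + bs > h or xx + bs > w:
        return np.inf                      # candidate falls outside frame
    return np.abs(cur[y:y+bs, x:x+bs].astype(int)
                  - ref[yy:yy+bs, xx:xx+bs].astype(int)).sum()

def hexagon_search(ref, cur, y, x, bs=8, max_iter=16):
    """Coarse large-hexagon stage followed by a small-hexagon refinement."""
    cy, cx = 0, 0                          # current best motion vector
    for _ in range(max_iter):              # large-hexagon stage
        cands = [(cy + dy, cx + dx) for dy, dx in LARGE_HEX]
        costs = [sad(ref, cur, y, x, dy, dx, bs) for dy, dx in cands]
        i = int(np.argmin(costs))
        if cands[i] == (cy, cx):           # centre is best: stop coarse stage
            break
        cy, cx = cands[i]
    cands = [(cy + dy, cx + dx) for dy, dx in SMALL_HEX]
    costs = [sad(ref, cur, y, x, dy, dx, bs) for dy, dx in cands]
    i = int(np.argmin(costs))
    return cands[i], costs[i]
```

When the current and reference frames are identical, the search stops immediately at the zero motion vector with zero cost, which is the expected degenerate case.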

2.
A novel framework for sparse and dense disparity estimation is presented, implemented on both CPU and GPU for parallel processing. The Census transform is applied in the first stage; the Hamming distance is then used as the similarity measure in the stereo matching stage, followed by a matching consistency check. Next, disparity refinement is performed on the sparse disparity map via weighted median filtering and color K-means segmentation, with clustered median filtering applied to obtain the dense disparity map. The results are compared with state-of-the-art frameworks and show the approach to be competitive and robust. The quality criteria are the structural similarity index measure and the percentage of bad pixels (B) for objective evaluation, plus subjective perception via the human visual system, which demonstrates better performance in preserving fine features in disparity maps. The comparisons include processing times and running environments to place each method in context.
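The first two stages described above (Census transform, then Hamming-distance matching cost) can be sketched as follows. A 3×3 census window is an assumption for illustration; the paper's window size and the later refinement stages (weighted median filtering, K-means segmentation) are not reproduced.

```python
# Census transform + Hamming-distance matching cost (generic sketch).
# The 3x3 window is an illustrative assumption.
import numpy as np

def census_transform(img):
    """Encode each interior pixel as an 8-bit code: one bit per 3x3
    neighbour, set when the neighbour is darker than the centre."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            bits = 0
            for dy, dx in offsets:
                bits = (bits << 1) | int(img[y + dy, x + dx] < img[y, x])
            out[y, x] = bits
    return out

def hamming(a, b):
    """Matching cost: number of differing bits between two census codes."""
    return bin(int(a) ^ int(b)).count("1")
```

In a full stereo pipeline, `hamming` would be evaluated between census codes of the left image pixel and each disparity-shifted candidate in the right image, taking the disparity with minimum cost.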

3.
One of the main challenges in the smart-phone world is that devices are battery-constrained, and the development of battery technologies has not kept pace with the required energy demand. In particular, there are still significant technological gaps in developing energy-aware solutions that prolong the battery life of devices without affecting the quality of distributed video/multimedia content. This paper therefore proposes DE-BAR, a process-based innovation that provides a seamless battery-saving mechanism based on backlight control and an adaptive region of interest in the streamed multimedia content. The work examines the nature of the video/multimedia content received on the device and adapts energy consumption dynamically at three levels: screen colour and intensity; backlight; and adaptive Region-of-Interest (RoI)-based variation in the multimedia content. Notably, the work provides a mechanism for real-time adaptation. The colour intensity, the number of RoIs for the video sequence, and the frame rate are determined by the spatial and temporal complexity of the video. Energy consumption is measured using an Arduino board, while video quality is analyzed through extensive subjective tests. The results indicate that more than 50% of device energy can be saved while retaining above-average perceptual video quality.

4.
This paper deals with the problem of optimally associating stations (STAs) with access points (APs) for multicast services in IEEE 802.11 WLANs. In a multicast session, all subscribed STAs receive the multicast data packets at the same data rate (R_min) from their respective serving APs. A higher value of R_min improves multicast throughput by completing the ongoing multicast session in less time. It also improves unicast throughput, since the cycle duration is shared by the unicast and multicast sessions. To provide multicast services to the STAs, we need to select a minimum-cardinality subset of APs, as the system message overhead depends on this cardinality. However, such a minimum-cardinality subset of APs may not be possible to activate simultaneously, due to the limited number of available orthogonal frequency channels. In this paper, we develop a combined greedy algorithm that selects a minimum-cardinality subset of APs for which a conflict-free frequency assignment exists, and finds an association between the STAs and the selected APs that maximizes R_min. Through simulation we show that the proposed algorithm selects significantly fewer APs for different R_min values than well-known multicast association metrics such as RSSI, minimum hop distance, normalized cost, and in-range STA count.

5.
To meet the requirement of high-quality transmission of videos captured by unmanned aerial vehicles (UAVs) over low-bandwidth links, a novel rate control (RC) scheme based on region of interest (ROI) is proposed. First, the ROI information is passed to an encoder based on the latest High Efficiency Video Coding (HEVC) standard to generate an ROI map. Then, using the ROI map, bit allocation methods are developed at the frame level and the largest coding unit (LCU) level to avoid the inaccurate bit allocation caused by camera movement. Finally, using a more robust R-λ model, the quantization parameter (QP) for each LCU is calculated. Experimental results show that, with appropriate pixel weights, the proposed RC method achieves a lower bit-rate error and higher reconstructed-video quality on the HEVC platform.

6.
Detecting and correcting faults is essential to ensuring the validity and reliability of reversible circuits, and test vectors play an important role in both tasks. An optimal set of test vectors provides greater capability for detecting several types of fault in a circuit. In this paper, we propose an algorithm for generating optimal test vectors and show that it does so with lower time complexity than existing methods: the proposed algorithm requires O(log₂ N) time, whereas the best known existing method requires O(N log₂ N) time, where N is the number of inputs. We also propose an algorithm for detecting faults using the generated test vectors, which detects more faults than existing methods. We prove that the proposed fault detection algorithm has the lowest time complexity compared with the best known existing methods: it requires O(d/N) time, whereas the best known existing methods require O(d·N) time, where N is the number of inputs and d is the number of gates in the reversible circuit. Finally, we propose an algorithm for correcting the detected faults and prove that the proposed methods require the lowest time complexity compared with the best known existing methods. In addition, experimental results on benchmark circuits demonstrate the efficiency of the proposed methods.

7.
A modification of Hannan's Follow the Perturbed Leader (FPL) algorithm for the case of unbounded gains and losses is considered. Estimates of the prediction error are obtained in terms of the volume v_t and the game fluctuation fluc(t) = Δv_t/v_t, where Δv_t = v_t − v_{t−1}. We prove the asymptotic consistency on average of this variant of the algorithm when fluc(t) = o(t). Applications of the algorithm to constructing game strategies are considered. Game strategies exploiting the difference between the "micro" and "macro" volatility of a discrete time series (prices of a financial instrument) are defined for obtaining arbitrage. These expert strategies are mixed on the basis of the variant of Hannan's algorithm developed in this work.
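The core FPL idea can be sketched as follows: at each round, play the expert whose cumulative gain plus a random perturbation is largest. The exponential perturbation, its scale `eps`, and the toy gain sequence are illustrative assumptions, not the paper's modified algorithm for unbounded gains.

```python
# Follow-the-Perturbed-Leader (generic sketch of Hannan's idea).
# The perturbation distribution and scale are illustrative assumptions.
import random

def fpl(gains, eps=1.0, seed=0):
    """gains: list of rounds, each a list of per-expert gains.
    Returns the total gain collected by the FPL strategy."""
    rng = random.Random(seed)
    n = len(gains[0])
    cum = [0.0] * n                        # cumulative gain per expert
    total = 0.0
    for round_gains in gains:
        # Perturb each expert's cumulative gain, then follow the leader.
        noise = [rng.expovariate(eps) for _ in range(n)]
        leader = max(range(n), key=lambda i: cum[i] + noise[i])
        total += round_gains[leader]
        for i in range(n):
            cum[i] += round_gains[i]
    return total
```

On a toy sequence where one expert always gains 1 and the other always 0, FPL collects nearly the best expert's total, losing only a few early rounds to the random perturbation.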

8.
A correlation tracker is computation-intensive (if the search space or the template is large), suffers from template drift, and may fail in the case of a fast-maneuvering target, rapid changes in its appearance, occlusion, or clutter in the scene. A Kalman filter can predict the target coordinates in the next frame if the measurement vector is supplied to it by a correlation tracker. A relatively small search space can thus be determined in which the probability of finding the target in the next frame is high; this makes the tracker faster and lets it reject clutter outside the search space. However, if the tracker provides a wrong measurement vector due to clutter or occlusion inside the search region, the efficacy of the filter deteriorates significantly. A mean-shift tracker is fast and has shown good tracking results in the literature, but it may fail when the histograms of the target and the candidate region in the scene are similar (even when their appearances differ). To make the overall visual tracking framework robust to these problems, we propose to combine the three approaches heuristically, so that they support each other for better tracking results. Furthermore, we present novel methods for (1) appearance-model updating, which adapts the template according to the rate of appearance change of the target; (2) an adaptive threshold for the similarity measure, which sets a variable threshold for each forthcoming image frame based on the current frame's peak similarity value; and (3) an adaptive kernel size for the fast mean-shift algorithm based on the varying size of the target. Comparison with nine state-of-the-art tracking algorithms on eleven publicly available standard datasets shows that the proposed algorithm outperforms the others in most cases.
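The Kalman-predicted search window described above can be sketched with a constant-velocity filter. The state model, noise covariances, and window size are illustrative assumptions, not the paper's values.

```python
# Constant-velocity Kalman filter that centres a small search window
# for a correlation tracker (generic sketch; q, r, and the window
# half-size are illustrative assumptions).
import numpy as np

class CVKalman:
    """State [x, y, vx, vy]; measurements are (x, y) target coordinates."""
    def __init__(self, q=1e-2, r=1.0):
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0             # large initial uncertainty
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * q                 # process noise
        self.R = np.eye(2) * r                 # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                      # predicted (x, y)

    def update(self, z):
        z = np.asarray(z, float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

def search_window(center, half=16):
    """Window around the prediction in which correlation is evaluated."""
    cx, cy = center
    return (cx - half, cy - half, cx + half, cy + half)
```

Feeding the filter measurements from a target moving (2, 1) pixels per frame, the one-step prediction converges to the true next position, so the correlation search can be confined to a small window around it.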

9.
An opportunistic network (OPPNET) is a wireless network without an infrastructure. In an OPPNET, communication occurs intermittently, when one node meets another; a connected path between the source and destination nodes therefore rarely exists. For this reason, nodes must not only forward messages but also store and carry them as relay nodes. Several routing algorithms that rely on relay nodes with appropriate behavior have been proposed for OPPNETs. Some, referred to as context-ignorant routing algorithms, rely on flooding; others, referred to as context-aware routing algorithms, utilize contextual information. We propose a routing algorithm that employs a novel similarity measure based on both position and social information, combining position similarity with social similarity via fuzzy inference to obtain enhanced performance. Through this method, the proposed algorithm adaptively selects more suitable relay nodes for forwarding and achieves a significant performance improvement, especially in memory-constrained environments. We analyze the proposed algorithm on the NS-2 network simulator with the home-cell community-based mobility model. Experimental results show that it outperforms typical routing algorithms in terms of network traffic and delivery delay.

10.
Let g be an element of prime order p in an abelian group, and let α ∈ Z_p. We show that if g, g^α, and g^(α^d) are given for a positive divisor d of p − 1, the secret key α can be computed deterministically in O(√(p/d) + √d) exponentiations using O(max{√(p/d), √d}) storage. If g^(α^i) (i = 0, 1, 2, …, 2d) is given for a positive divisor d of p + 1, α can be computed in O(√(p/d) + d) exponentiations using O(max{√(p/d), √d}) storage. We also propose space-efficient but probabilistic algorithms for the same problem, with the same computational complexities as the deterministic ones. As applications of the proposed algorithms, we show that the strong Diffie–Hellman problem and related problems with public g^α, …, g^(α^d) have computational complexity up to O(√d / log p) times lower than the generic complexity of the discrete logarithm problem when p − 1 (resp. p + 1) has a divisor d ≤ p^(1/2) (resp. d ≤ p^(1/3)). Under the same conditions on d, the algorithm can also recover the secret key in O(√(p/d) · log p) for Boldyreva's blind signature scheme and the textbook ElGamal scheme when d signature or decryption queries are allowed.
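The square-root terms in the bounds above come from meet-in-the-middle searches of the baby-step giant-step type. As a point of reference, here is the classical O(√n) baby-step giant-step algorithm in a toy multiplicative group; the parameters are illustrative and this is not the paper's algorithm itself.

```python
# Baby-step giant-step: the O(sqrt(n)) meet-in-the-middle building
# block behind bounds like O(sqrt(p/d) + sqrt(d)).  Toy parameters.
from math import isqrt

def bsgs(g, h, n, p):
    """Solve g^x = h (mod p) for x in [0, n), where g has order n mod p."""
    m = isqrt(n) + 1
    baby = {}                              # baby steps: g^j -> j
    e = 1
    for j in range(m):
        baby.setdefault(e, j)
        e = e * g % p
    inv_gm = pow(pow(g, m, p), -1, p)      # g^(-m) mod p
    gamma = h
    for i in range(m):                     # giant steps: h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * inv_gm % p
    return None                            # no solution in [0, n)
```

Writing the unknown as x = i·m + j, a collision between a giant step h·g^(−im) and a baby step g^j recovers x after at most about 2√n group operations.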

11.
This paper proposes an instantaneous-recovery route design scheme that uses multiple coding-aware link protection scenarios to achieve a higher link-cost reduction in the network. Two protection scenarios are used to integrate multiple sources and a common destination: (i) traffic splitting (TS) and (ii) two sources and a common destination (2SD). The proposed scheme consists of two phases. In the first phase, it determines routes for the 2SD and TS scenarios of all possible source-destination pairs so as to minimize the total link cost; network coding is applied to the common path within each scenario individually. In the second phase, network coding is applied to the common path between two scenarios (a scenario pair) to further enhance resource saving. This phase develops conditions, with proofs, for selecting the most appropriate combination of scenario pairs, namely TS–TS, 2SD–2SD, and 2SD–TS, for network coding. Using these conditions, a heuristic algorithm is introduced to select the most appropriate combination of scenario pairs and further enhance resource saving. Simulation results show that the proposed scheme outperforms the conventional 1 + 1 protection scheme, the TS scenario, and the 2SD scenario in terms of link-cost reduction.

12.
The newly developed Taylor-Interpolation-FFT (TI-FFT) algorithm dramatically increases the computational speed of millimeter-wave propagation from a planar (cylindrical) surface onto a "quasi-planar" ("quasi-cylindrical") surface. Two scenarios are considered in this article: the planar TI-FFT computes wave propagation from a plane onto a "quasi-planar" surface, and the cylindrical TI-FFT computes wave propagation from a cylindrical surface onto a "quasi-cylindrical" surface. Owing to the use of the FFT, the TI-FFT algorithm has a computational complexity of O(N² log₂ N²) for an N × N computational grid, instead of the O(N⁴) of the direct integration method. The TI-FFT algorithm requires only a low sampling rate, in accordance with the Nyquist sampling theorem. The algorithm achieves accuracy down to −80 dB and works particularly well for narrow-band fields and "quasi-planar" ("quasi-cylindrical") surfaces.

13.
We study the problem of routing in three-dimensional ad hoc networks. We are interested in routing algorithms that guarantee delivery and are k-local, i.e., each intermediate node v's routing decision only depends on knowledge of the labels of the source and destination nodes, of the subgraph induced by nodes within distance k of v, and of the neighbour of v from which the message was received. We model a three-dimensional ad hoc network by a unit ball graph, where nodes are points in three-dimensional space, and for each node v, there is an edge between v and every node u contained in the unit-radius ball centred at v. The question of whether there is a simple local routing algorithm that guarantees delivery in unit ball graphs has been open for some time. In this paper, we answer this question in the negative: we show that for any fixed k, there can be no k-local routing algorithm that guarantees delivery on all unit ball graphs. This result is in contrast with the two-dimensional case, where 1-local routing algorithms that guarantee delivery are known. Specifically, we show that guaranteed delivery is possible if the nodes of the unit ball graph are contained in a slab of thickness 1/√2. However, there is no k-local routing algorithm that guarantees delivery for the class of unit ball graphs contained in thicker slabs, i.e., slabs of thickness 1/√2 + ε for some ε > 0. The algorithm for routing in thin slabs derives from a transformation of unit ball graphs contained in thin slabs into quasi unit disc graphs, which yields a 2-local routing algorithm. We also show several results that further elaborate on the relationship between these two classes of graphs.

14.
In spite of spectrum sensing, aggregate interference from cognitive radios (CRs) remains a deterrent to implementing spectrum-sharing strategies. We provide a systematic approach to evaluating the aggregate interference (I_aggr) experienced at a victim primary receiver (PR). In our approach, we model the received-power-versus-distance relations between a primary transmitter (PT), the PR, and the CRs. CRs can spatially reuse a channel, so two adjacent CRs are separated by the co-channel reuse distance R. Our analytical framework differs from existing ones in that we formulate I_aggr in terms of R and sensing inaccuracy. An energy detector is assumed for spectrum sensing, and I_aggr is expressed explicitly as a function of the number of energy samples collected (N) and the threshold SNR level used for comparison (SNR_ε), which allows us to assess their impacts on I_aggr. A numerical example is constructed for spectrum sharing between DTV broadcast and an IEEE 802.22 Wireless Regional Area Network. Our analysis demonstrates the extent to which I_aggr can be restricted by increasing R or sensing accuracy (either by increasing N or decreasing SNR_ε), and the amount of increase required. For the example scenario, the critical {N, SNR_ε, R} values that fulfill a given regulatory requirement are derived.

15.
A technique is proposed for determining analytical grid eigensolutions for a 3D impedance flow Rτ grid. On the basis of this technique, grid modes of an empty rectangular waveguide are found, and an algorithm is developed for the analysis of H-plane and E-plane devices. The algorithm provides a unified procedure for the calculation of 3D, H-plane, and E-plane devices.

16.
The structural, optical, electrical and electro-optical properties of a double-junction GaAsP light-emitting diode (LED) structure grown on a GaP (100) substrate by molecular beam epitaxy were investigated. The p-n junction layers of GaAs1−xPx and GaAs1−yPy, which form the double-junction LED structure, were grown with two different P/As ratios. High-resolution x-ray diffraction (HRXRD), photoluminescence (PL), and current–voltage (I–V) measurements were used to investigate the structural, optical, and electrical properties of the sample. The alloy composition values (x, y) and some crystal-structure parameters were determined from the HRXRD measurements. The phosphorus compositions of the first and second junctions were found to be 63.120% and 82.040%, respectively. From the PL emission peak positions at room temperature, the band-gap energies (Eg) of the first and second junctions were found to be 1.867 eV and 2.098 eV, respectively. In addition, the alloy compositions were calculated from the PL measurements using Vegard's law. The turn-on voltage (Von) and series resistance (Rs) of the device, obtained from the I–V measurements, were 4.548 V and 119 Ω, respectively. The LED device was observed to emit in the red (664.020 nm) and yellow (591.325 nm) color regions.

17.
Overhead resource elements (REs) in Long Term Evolution (LTE) networks are used for control, signaling, and synchronization tasks at both the physical level and the Media Access Control sub-level. Accurately computing all the overhead REs is necessary for an efficient system design, but it is difficult because LTE is a complex standard with a large number of implementation flexibilities and system configurations; the number of such REs depends on both the system configuration and the services demanded. To explore the influence of overhead on LTE downlink performance, we first parametrize each system configuration, including parameters for enhancement techniques such as Adaptive Modulation and Coding and multi-antenna transmission, and the resource allocation mechanisms (which depend on users' services). Second, using this parametrization, we model all overheads for the synchronization, control, and signaling operations in the LTE Physical Downlink Shared/Control Channels. This allows the useful REs to be computed dynamically (by subtracting the overhead REs from the total), both per Transmission Time Interval (TTI) and per frame, and hence the corresponding bit rates. Our data-rate-based performance model can accurately compute (1) the real, exact system data rate or "throughput" (instead of approximations) and (2) the maximum number of simultaneous multi-service users per TTI that the system can support (called here "capacity"). To understand the impact of each overhead mechanism, we carried out a variety of simulations, including different service provision scenarios such as multi-user with multi-application. The simulation results confirm our starting hypothesis that the influence of overhead on LTE performance should not be neglected.
The parametrized and dynamic model quantifies the extent to which throughput and capacity are modified by overhead under a combination of system configurations and services, and can provide these performance metrics as inputs to specialized planning, dimensioning, and optimization tools.

18.
The crystalline and electronic structures and the energy, kinetic, and magnetic characteristics of the n-HfNiSn semiconductor heavily doped with the Y acceptor impurity are studied over the ranges T = 80–400 K, N_A^Y ≈ 1.9 × 10^20–5.7 × 10^21 cm^−3 (x = 0.01–0.30), and H ≤ 10 kG. The mechanism of structural-defect generation that changes the band gap and the degree of semiconductor compensation is identified: it consists in the simultaneous reduction and elimination of donor-type structural defects through the displacement of ~1% of Ni atoms from the Hf (4a) site, and the generation of acceptor-type structural defects through the substitution of Y atoms for Hf atoms at the 4a site. The results of electronic-structure calculations for Hf1−xYxNiSn agree with the experimental data. The discussion is carried out within the Shklovskii–Efros model of a heavily doped and compensated semiconductor.

19.
This paper considers wireless networks in which communication links are unstable and link interference makes the design of high-performance scheduling algorithms challenging. The time-varying wireless links are modeled by Markov stochastic processes. The problem of designing an optimal link scheduling algorithm that maximizes the expected reliability of the network is first formulated as a Markov Decision Process, whose optimal solution can be obtained by the finite backward-induction algorithm. However, the time complexity of that solution is very high. We therefore develop an approximate link scheduling algorithm with an approximation ratio of 2(N − 1)(r_M Δ − r_m δ), where N is the number of decision epochs, r_M is the maximum link reliability, r_m is the minimum link reliability, Δ is the number of links in the largest maximal independent set, and δ is the number of links in the smallest maximal independent set. Simulations are conducted in different scenarios under different network topologies.
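The finite backward-induction step mentioned above can be sketched on a generic finite-horizon MDP. The states, transition probabilities, and rewards below are toy assumptions for illustration, not the paper's link-scheduling model.

```python
# Finite-horizon backward induction for a small tabular MDP (sketch).
# S states, A actions, P[s][a][s'] transition probabilities,
# R[s][a] immediate rewards, N decision epochs -- all toy assumptions.
def backward_induction(S, A, P, R, N):
    """Returns (V, policy): V[s] is the optimal N-epoch value from state
    s, and policy[t][s] is the optimal action at epoch t in state s."""
    V = [0.0] * S                  # terminal values
    policy = []
    for _ in range(N):             # sweep backwards over the N epochs
        newV, pol = [], []
        for s in range(S):
            vals = [R[s][a] + sum(P[s][a][s2] * V[s2] for s2 in range(S))
                    for a in range(A)]
            best = max(range(A), key=lambda a: vals[a])
            newV.append(vals[best])
            pol.append(best)
        V = newV
        policy.insert(0, pol)      # earlier epochs go to the front
    return V, policy
```

The O(N·S²·A) cost of this exact sweep is what motivates the paper's faster approximate algorithm when the state space (sets of link states) is exponentially large.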

20.
Homomorphic encryption schemes are useful in designing conceptually simple protocols that operate on encrypted inputs. On the other hand, non-malleable encryption schemes are vital for designing protocols with robust security against malicious parties in a composable setting. In this paper, we address the problem of constructing public-key encryption schemes that meaningfully combine these two opposing demands. The intuitive tradeoff we desire is that anyone should be able to change encryptions of unknown messages m_1, …, m_k into a (fresh) encryption of T(m_1, …, m_k) for a specific set of allowed functions T, but the scheme should otherwise be "non-malleable": no adversary should be able to construct a ciphertext whose value is related to that of other ciphertexts in any other way. For the case where the allowed functions T are all unary, we formulate precise definitions that capture our intuitive requirements and show relationships among these new definitions and more standard ones (IND-CCA, gCCA, and RCCA). We further justify the new definitions by showing their equivalence to a natural formulation of security in the Universally Composable security framework. Next, we describe a new family of encryption schemes that satisfy our definitions for a wide variety of allowed transformations T, and prove their security under the Decisional Diffie-Hellman (DDH) assumption in two groups of related sizes. Finally, we demonstrate how encryption schemes satisfying our definitions can be used to implement conceptually simple protocols for non-trivial computation on encrypted data that are secure against malicious adversaries in the UC framework, without resorting to general-purpose multi-party computation or zero-knowledge proofs. For the case where the allowed functions T are binary, we show that a natural generalization of our definitions is unattainable if some T is a group operation.
On the positive side, we show that if one of our security requirements is relaxed in a natural way, we can obtain a scheme that is homomorphic with respect to (binary) group operations and otherwise non-malleable.
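The kind of homomorphism at issue is illustrated by textbook ElGamal, which the abstract mentions in passing: the component-wise product of two ciphertexts is a valid encryption of the product of the plaintexts. The tiny modulus and fixed base below are toy assumptions for illustration only and provide no security.

```python
# Textbook ElGamal and its multiplicative homomorphism (toy sketch).
# P and G are illustrative toy parameters -- NOT secure.
import random

P = 2579            # small prime, for illustration only
G = 2               # assumed group element used as the base

def keygen(rng):
    x = rng.randrange(1, P - 1)                  # secret key
    return x, pow(G, x, P)                       # (sk, pk = g^x)

def encrypt(pk, m, rng):
    r = rng.randrange(1, P - 1)                  # fresh randomness
    return pow(G, r, P), m * pow(pk, r, P) % P   # (g^r, m * pk^r)

def decrypt(sk, ct):
    c1, c2 = ct
    return c2 * pow(pow(c1, sk, P), -1, P) % P   # m = c2 / c1^sk

def ct_mul(a, b):
    """Homomorphic multiply: component-wise ciphertext product
    encrypts the product of the two plaintexts."""
    return a[0] * b[0] % P, a[1] * b[1] % P
```

This malleability is exactly what the paper's definitions constrain: the allowed transformations T remain possible, while any other relation between ciphertexts is ruled out.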


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)