Similar Articles
1.
This paper proposes a method of error detection based on macroblock (MB) types for video transmission. For decoded inter MBs, the absolute values of the received residues are accumulated. At the same time, the intra textural complexity of the current MB is estimated from that of the motion-compensated reference block. We compare the inter residue with the intra textural complexity: if the inter residue exceeds the intra textural complexity by a predefined threshold, the MB is considered erroneous and its errors are concealed. For decoded intra MBs, the connective smoothness of the current MB with its neighboring MBs is tested to find erroneous MBs. Simulation results show that the new method can remove seriously corrupted MBs efficiently. Combined with error concealment, the new method improves the recovered quality at the decoder by about 0.5-1 dB.
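As a rough illustration of the residue-versus-texture test described in this abstract, the sketch below (Python, with hypothetical function and threshold names; the paper does not specify whether the threshold is additive or multiplicative, so a multiplicative factor is assumed) flags an inter MB as erroneous when its accumulated residue dominates the estimated textural complexity of the motion-compensated reference block.

```python
# Illustrative sketch only; `threshold` and the complexity estimate are assumptions,
# not the paper's exact definitions.
import numpy as np

def is_erroneous_inter_mb(residue_mb, reference_mb, threshold=1.5):
    """Return True if the inter-coded macroblock looks erroneous: its
    accumulated absolute residue exceeds the intra textural complexity
    estimated from the motion-compensated reference block."""
    inter_residue = np.abs(residue_mb).sum()
    # Textural complexity estimated here as total absolute deviation from the block mean.
    intra_complexity = np.abs(reference_mb - reference_mb.mean()).sum()
    return inter_residue > threshold * intra_complexity

# Example: noisy residues over a nearly flat reference block are flagged.
rng = np.random.default_rng(0)
residue = rng.integers(-80, 81, size=(16, 16))
reference = 128 + rng.integers(-2, 3, size=(16, 16))
print(is_erroneous_inter_mb(residue, reference))
```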

2.
Mining frequent itemsets has emerged as a fundamental problem in data mining and plays an essential role in many important data mining tasks. In this paper, we propose a novel vertical data representation called N-list, which originates from an FP-tree-like coding prefix tree, called PPC-tree, that stores crucial information about frequent itemsets. Based on the N-list data structure, we develop an efficient mining algorithm, PrePost, for mining all frequent itemsets. The efficiency of PrePost is achieved for three reasons. First, N-lists are compact, since transactions with common prefixes share the same nodes of the PPC-tree. Second, the counting of itemset supports is transformed into the intersection of N-lists, and the complexity of intersecting two N-lists can be reduced to O(m + n) by an efficient strategy, where m and n are the cardinalities of the two N-lists, respectively. Third, PrePost can in some cases find frequent itemsets directly, without generating candidate itemsets, by making use of the single-path property of N-lists. We have experimentally evaluated PrePost against four state-of-the-art algorithms for mining frequent itemsets on a variety of real and synthetic datasets. The experimental results show that PrePost is the fastest in most cases. Even though the algorithm consumes more memory when the datasets are sparse, it is still the fastest one.
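A minimal sketch of the O(m + n) N-list intersection mentioned above, assuming each N-list is a list of (pre-order, post-order, count) triples sorted by pre-order and that a PPC-tree node is an ancestor of another exactly when its pre-order is smaller and its post-order is larger; details such as how counts are aggregated follow this assumption rather than the paper's exact pseudocode.

```python
def intersect_nlists(nl_a, nl_b):
    """Merge-style intersection of two N-lists in O(m + n).
    Each entry is (pre_order, post_order, count), sorted by pre_order.
    The result keeps the ancestor entries of nl_a, accumulating the counts
    of the nl_b entries that fall inside their subtrees."""
    result, order = {}, []
    i = j = 0
    while i < len(nl_a) and j < len(nl_b):
        pre_a, post_a, _ = nl_a[i]
        pre_b, post_b, cnt_b = nl_b[j]
        if pre_a < pre_b:
            if post_a > post_b:
                # nl_a[i] is an ancestor of nl_b[j] in the PPC-tree
                key = (pre_a, post_a)
                if key not in result:
                    result[key] = 0
                    order.append(key)
                result[key] += cnt_b
                j += 1                 # each nl_b node has at most one ancestor here
            else:
                i += 1                 # nl_a[i]'s subtree ends before nl_b[j]
        else:
            j += 1                     # no remaining nl_a entry can be an ancestor
    return [(pre, post, result[(pre, post)]) for pre, post in order]

# Toy example: item A at PPC-tree nodes (1,8,3) and (9,12,2); item B at (2,3,2) and (10,11,1).
print(intersect_nlists([(1, 8, 3), (9, 12, 2)], [(2, 3, 2), (10, 11, 1)]))
# -> [(1, 8, 2), (9, 12, 1)]
```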

3.
This paper proposes a low-complexity spatial-domain error concealment (EC) algorithm for recovering consecutive block errors in still images or intra-coded (I) frames of video sequences. The proposed algorithm works in the following steps. First, the Sobel operator is applied to the top and bottom adjacent pixels to detect the most probable edge direction of the current block area. After that, one-dimensional (1-D) matching is used on the available block boundaries; the displacement between each candidate edge direction and the most probable edge direction is taken into account as an important factor to improve the stability of 1-D boundary matching. Then the corrupted pixels are recovered by linear weighting interpolation along the estimated edge direction. Finally, the interpolated values are merged to obtain the final recovered picture. Simulation results demonstrate that the proposed algorithm obtains good subjective quality and higher PSNR than methods in the literature for most images.
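The final interpolation step could look roughly like the sketch below, a simplified illustration that assumes a known horizontal edge displacement per row (`dx`, a made-up parameter) and omits the 1-D boundary matching and merging stages described in the abstract.

```python
import numpy as np

def interpolate_lost_block(top_row, bottom_row, height, dx=0.0):
    """Recover a lost block by linear weighting interpolation between the top
    and bottom boundary rows along an estimated edge direction.
    dx = 0 reduces to plain vertical interpolation."""
    width = len(top_row)
    block = np.empty((height, width))
    for y in range(height):
        w = (y + 1) / (height + 1)        # weight grows toward the bottom boundary
        for x in range(width):
            # sample positions on the two boundaries, shifted along the edge direction
            xt = int(np.clip(round(x - dx * (y + 1)), 0, width - 1))
            xb = int(np.clip(round(x + dx * (height - y)), 0, width - 1))
            block[y, x] = (1 - w) * top_row[xt] + w * bottom_row[xb]
    return block

# Example: conceal an 8-row block between two known boundary rows.
top = np.linspace(50, 200, 16)
bottom = np.linspace(60, 210, 16)
print(interpolate_lost_block(top, bottom, height=8, dx=0.5).round(1))
```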

4.
This paper introduces a novel high-performance algorithm and VLSI architectures for bit plane coding (BPC) in word-level sequential and parallel modes. The proposed BPC algorithm adopts coding pass prediction together with parallel and pipeline techniques to reduce the number of memory accesses and to increase the system's capability for concurrent processing, so that all the coefficient bits of a code block can be coded in a single scan. A new parallel bit plane architecture (PA) is proposed to achieve word-level sequential coding. Moreover, an efficient high-speed architecture (HA) is presented to achieve multi-word parallel coding. Compared to the state of the art, the proposed PA reduces the hardware cost more efficiently while retaining a throughput of one coefficient coded per clock cycle. The proposed HA can code the 4 coefficients of a stripe column in one intra-clock cycle, so that coding an N×N code block can be completed in approximately N²/4 intra-clock cycles. Theoretical analysis and experimental results demonstrate that the proposed designs achieve a high throughput rate with good speedup relative to cost, making them good alternatives for low-power applications.

5.
An algorithm is proposed that combines global and local information of fingerprint images to detect singular points. It is mathematically proven that the normal lines of the gradient of the double orientation field (GDOF) pass through singular points, so these normal lines use relatively global information to detect candidate singular points. The fingerprint image is divided into blocks and the normal lines of the GDOF are drawn; the number of normal lines passing through each block is accumulated, and the block with the maximum count corresponds to a candidate singular point. This can be seen as a kind of Hough transform (HT). As candidate singular points detected from global information may deviate slightly from their real positions, local information is needed to refine them. The Poincaré index is chosen to exploit local information and refine the candidate singular points. This makes our algorithm more robust to noise than methods that use only local information. Moreover, the pairs of detected singular points are used to classify fingerprints. Experimental results show that our algorithm performs well and is fast enough for real-time application on the NIST-4 database.
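For the local refinement step, the Poincaré index around a pixel of a block-wise orientation field can be computed as in the following sketch; the representation of the orientation field (angles modulo pi) is an assumption of this illustration, not necessarily the paper's.

```python
import numpy as np

def poincare_index(orientation, i, j):
    """Poincaré index of the orientation field (angles in radians, defined
    modulo pi) computed around pixel (i, j) over its 8-neighbourhood.
    Values near +1/2 indicate a core point, near -1/2 a delta, near 0 neither."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    total = 0.0
    for (di1, dj1), (di2, dj2) in zip(ring[:-1], ring[1:]):
        d = orientation[i + di2, j + dj2] - orientation[i + di1, j + dj1]
        # wrap each difference into (-pi/2, pi/2] because orientations are mod pi
        while d > np.pi / 2:
            d -= np.pi
        while d <= -np.pi / 2:
            d += np.pi
        total += d
    return total / (2 * np.pi)
```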

6.
Multi-user orthogonal frequency division multiplexing (OFDM) is one of the most important techniques for next-generation wireless communication systems. The sum capacity and the proportional fairness index are two important metrics in OFDM adaptive resource allocation. In existing research, each of the two metrics has individually been driven to an approximately optimal solution; however, the two goals conflict with each other, so they cannot be optimized simultaneously. This paper presents a resource allocation algorithm based on subcarrier exchange. By relaxing the fairness index, the proposed algorithm can dynamically adjust the fairness requirements. Furthermore, by adopting a criterion that minimizes the loss of sum capacity, the proposed algorithm balances the tradeoff between fairness and sum capacity. In contrast to existing algorithms, our proposal improves sum capacity by approximately 30%, at the expense of an approximately 10% reduction in proportional fairness. Moreover, if a malicious user exists, each user in the system can obtain higher ergodic capacity than with the conventional algorithms. To decrease the complexity, a simplified low-complexity alternative is presented, which outperforms the conventional algorithm as the number of users increases. Four numerical simulations are carried out to verify the efficiency and effectiveness, and the computational complexity of the proposed algorithm is discussed in detail.

7.
Recently, several important block ciphers have been considered broken by brute-force-like cryptanalysis, with a time complexity faster than exhaustive key search: the attack goes over the entire key space but performs less than a full encryption for each possible key. Motivated by this observation, we describe a meet-in-the-middle attack that can always be successfully mounted against any practical block cipher with success probability one. The data complexity of this attack is the smallest possible according to the unicity distance. The time complexity can be written as 2^(k(1-ε)), where ε > 0 for all practical block ciphers. Previously, the commonly accepted security bound was the length k of the given master key. Our result points out that this k-bit security is always overestimated and can never be reached because of the inevitable loss of key bits. No amount of clever design can prevent it, but increasing the number of rounds can reduce this key loss as much as possible. We give more insight into the problem of the upper bound on the effective key bits of block ciphers and show a more accurate bound. A suggestion about the relationship between the key size and the block size is given: when the number of rounds is fixed, it is better to take a key size equal to the block size. Also, the effective key bits of many well-known block ciphers are calculated and analyzed, which confirms that their security margins are lower than previously thought. The results in this article motivate us to reconsider the real complexity that a valid attack should be compared to.

8.
Reduction of attributes is one of the important topics in research on rough set theory. Wong S K M and Ziarko W proved that finding the minimal attribute reduction of a decision table is an NP-hard problem. Algorithm A (the improved version of Jelonek's algorithm) chooses the optimal candidate attribute by using the approximation quality of a single attribute. It improves the efficiency of attribute reduction, but it still has the main drawback that the single attribute with the maximum approximation quality is not necessarily the optimal candidate attribute. Therefore, in this paper we introduce the concept of compatible decision rules and propose an attribute reduction algorithm based on rules (ARABR). Algorithm ARABR provides a new method that measures the relevance between an extending attribute and the set of present attributes; this method ensures that the optimal attribute is extended and obviously reduces the search space. Theoretical analysis shows that algorithm ARABR has lower computational complexity than Jelonek's algorithm and effectively overcomes the main drawback of algorithm A.

9.
The Audio Video coding Standard (AVS) is established by the AVS Working Group of China. The main goal of AVS Part 7 is to provide high compression performance with relatively low complexity for mobility applications. There are three main low-complexity tools: the deblocking filter, context-based adaptive 2D-VLC, and direct intra prediction. These tools are presented and analyzed in turn. Finally, we compare the performance and decoding speed of AVS Part 7 and the H.264 baseline profile. The analysis and results indicate that AVS Part 7 achieves similar performance at lower cost.

10.
Tradeoffs between time complexity and solution optimality are important when selecting algorithms for an NP-hard problem in different applications. The distinction between the theoretical upper bound and the actual solution optimality on realistic instances of an NP-hard problem is also a factor in selecting algorithms in practice. We consider the problem of partitioning a sequence of n distinct numbers into the minimum number of monotone (increasing or decreasing) subsequences. This problem is NP-hard, and the number of monotone subsequences can reach ⌈√(2n + 1/4) − 1/2⌉ in the worst case. We introduce a new algorithm, a modified version of the Yehuda-Fogel algorithm, that computes a solution of no more than ⌈√(2n + 1/4) − 1/2⌉ monotone subsequences in O(n^1.5) time. We then perform a comparative experimental study of three algorithms: a known approximation algorithm with approximation ratio 1.71 and time complexity O(n^3), a known greedy algorithm with time complexity O(n^1.5 log n), and our new modified Yehuda-Fogel algorithm. Our results show that the solutions computed by the greedy algorithm and the modified Yehuda-Fogel algorithm are close to those computed by the approximation algorithm, even though the theoretical worst-case error bounds of these two algorithms are not proved to be within a constant factor of the optimal solution. Our study indicates that, for practical use, the greedy algorithm and the modified Yehuda-Fogel algorithm can be good choices if running time is a major concern.
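To make the problem concrete, here is a first-fit greedy baseline that partitions a sequence of distinct numbers into monotone subsequences. It is only an illustrative sketch of the problem being solved, not the approximation algorithm, the O(n^1.5 log n) greedy, or the modified Yehuda-Fogel algorithm compared in the study.

```python
def greedy_monotone_partition(seq):
    """First-fit greedy partition of a sequence of distinct numbers into
    monotone subsequences. A one-element subsequence is still undirected;
    its direction is fixed by the second element appended to it."""
    subs = []                                  # each entry: [values, direction]
    for x in seq:
        for entry in subs:
            values, direction = entry
            if direction == 'inc' and values[-1] < x:
                values.append(x)
                break
            if direction == 'dec' and values[-1] > x:
                values.append(x)
                break
            if direction is None:
                values.append(x)
                entry[1] = 'inc' if values[0] < x else 'dec'
                break
        else:
            subs.append([[x], None])           # start a new, undirected subsequence
    return [values for values, _ in subs]

print(greedy_monotone_partition([3, 1, 4, 5, 9, 2, 6, 8]))
# -> [[3, 1], [4, 5, 9], [2, 6, 8]]
```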

11.
Reduct and attribute order
Based on the principle of the discernibility matrix, a reduction algorithm with attribute order has been developed, and its solution has been proved to be complete for the reduct and unique for a given attribute order. Called the reduct problem, this algorithm can be regarded as a mapping R = Reduct(S) from the attribute order space Θ to the reduct space R for an information system (U, C ∪ D), where U is the universe and C and D are the sets of condition and decision attributes, respectively. This paper focuses on the reverse of the reduct problem, S = Order(R): for a given reduct R of an information system, determine the solutions of S = Order(R) in the space Θ. First, we prove that there is at least one attribute order S such that S = Order(R). Then, some decision rules are proposed that can be used directly to decide whether a pair of attribute orders has the same reduct. The main method is based on the fact that one attribute order can be transformed into another by moving attributes a limited number of times, so the decision for a pair of attribute orders can be reduced to decisions for a sequence of neighboring pairs of attribute orders. Therefore, the basic theorem on neighboring pairs of attribute orders is proved first, and the decision theorem on attribute orders is then proved accordingly by the second attribute.

12.
This paper presents a simple electrocardiogram (ECG) processing algorithm for portable healthcare devices. The algorithm consists of the Haar wavelet transform (HWT), modulus maxima pair detection (MMPD) and peak position modification (PPM). To lessen the computational complexity, a novel multiplier-free structure is introduced to implement the HWT. In the MMPD, the HWT coefficients at scale 2^4 are processed to find candidate peak positions of the ECG. The PPM is designed to correct the time shift introduced by digital processing and to accurately determine the locations of the peaks. Some new methods are proposed to improve the anti-jamming performance of the MMPD and PPM. Evaluated on the MIT-BIH arrhythmia database, the sensitivity (Se) of QRS detection is 99.53% and the positive predictivity (Pr) is 99.70%. The QT database is chosen to fully validate the algorithm for complete delineation of the ECG waveform. The mean μ and standard deviation σ between the test results and the annotations are calculated; most of the σ values satisfy the CSE limits, which indicates that the results are stable and reliable. A detailed and rigorous computational complexity analysis is presented. The number of arithmetic operations over N input samples is chosen as the criterion of complexity: without any multiplication operations, the number of additions is only about 16.33N. The algorithm thus achieves high detection accuracy with low computational complexity.
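A minimal sketch of a multiplier-free Haar decomposition of the kind referred to above: one level uses only additions and subtractions, and cascading four levels yields scale-2^4 coefficients of the sort used for QRS candidate detection. Normalization and the MMPD and PPM stages are not shown, and the paper's exact structure may differ.

```python
def haar_level(signal):
    """One level of an unnormalized Haar wavelet transform.
    Only additions and subtractions are used; the scaling factor can be
    deferred or realised as a shift in hardware."""
    pairs = range(0, len(signal) - 1, 2)
    approx = [signal[i] + signal[i + 1] for i in pairs]
    detail = [signal[i] - signal[i + 1] for i in pairs]
    return approx, detail

def haar_detail_at_scale(signal, levels=4):
    """Cascade `levels` Haar stages and return the detail coefficients of the
    last stage (scale 2**levels)."""
    approx, detail = signal, []
    for _ in range(levels):
        approx, detail = haar_level(approx)
    return detail

# Example: detail coefficients at scale 2^4 of a toy 32-sample segment.
print(haar_detail_at_scale(list(range(32)), levels=4))
```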

13.
The variable precision rough set (VPRS) model extends the basic rough set (RS) theory with finite universes and finite evaluative measures. VPRS is concerned with the equivalence and containment relationships between two sets. In inconsistent information systems, the inclusion degree and the β upper (lower) approximation of an inconsistent equivalence class with respect to the decision equivalence classes may be affected by the variable precision β. Analysis of an example of an inconsistent decision table shows that there is a critical point in the region of admissible β values; within the new β range bounded by this critical point, the inconsistent decision table can be converted into a consistent decision table reliably. The method for searching this critical value and its algorithmic implementation are introduced, and examples of the transformation of inconsistent equivalence classes are exhibited. The results illustrate that this algorithm is rational and precise.

14.
NAND flash memory has gained popularity as a storage device for consumer electronics due to its higher performance and lower power consumption. In most of these devices, an FTL (Flash Translation Layer) is adopted to emulate a block device interface in support of conventional disk-based file systems, which makes flash management much easier. Among various FTLs, the FAST (Fully-Associative Sector Translation) FTL has shown superior performance, becoming one of the state-of-the-art approaches. However, FAST FTL performs poorly when dealing with the huge numbers of small random writes generated by upper-layer applications such as database transaction processing workloads. There are two important reasons: the absence of an efficient selection scheme for reclaiming random log blocks, which leads to a large overhead of full merges, and the sequential log block scheme, which no longer suits random writes because of the large cost of partial merges. To overcome these two defects in the presence of random writes, two techniques are proposed. The first reduces full merge costs by adopting a novel random log block selection algorithm that uses the block associativity and the amount of relevant valid pages of random log blocks as the key selection criteria. The second replaces the sequential log block with a random log block to eliminate the overhead of partial merges. Experimental results show that our optimizations outperform FAST FTL significantly in three aspects: erase counts, page migration amount, and response time, with maximum improvements of up to 66.8%, 98.2%, and 51.0%, respectively.
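The first technique can be sketched as a victim-selection rule over the random log blocks. The tuple ordering below (prefer low block associativity, then a large amount of relevant valid pages) is an assumption consistent with the criteria named in the abstract, not the paper's exact formula, and the class fields are illustrative names.

```python
from dataclasses import dataclass

@dataclass
class RandomLogBlock:
    block_id: int
    associativity: int   # number of data blocks whose pages this log block holds
    valid_pages: int     # valid pages relevant to the merge

def select_victim_log_block(log_blocks):
    """Pick the random log block to reclaim: low associativity keeps the
    full-merge cost small, and, for equal associativity, more relevant
    valid pages means the reclaimed space is better utilized."""
    return min(log_blocks, key=lambda b: (b.associativity, -b.valid_pages))

blocks = [RandomLogBlock(0, 4, 30), RandomLogBlock(1, 2, 10), RandomLogBlock(2, 2, 25)]
print(select_victim_log_block(blocks).block_id)   # -> 2
```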

15.
Studies on algorithms for self-stabilizing communication protocols
In this paper, algorithms for self-stabilizing communication protocols are studied. First, some concepts and a formal method for describing the proposed algorithms are presented; then an improved algorithm for achieving global states is given. The study shows that the improved algorithm can be applied to obtain global states when the cooperation of the different processes in the protocol is lost, and such a global state can serve as the recovery point used by the subsequent recovery procedure. Thus, the improved algorithm can be used to self-stabilize a communication protocol. Meanwhile, a recovery algorithm for self-stabilizing communication protocols is presented: after a failure is detected, all processes can eventually learn of the error. The recovery algorithm uses the contextual information exchanged during the progress of the protocol and recorded in stable memory. Proofs of correctness and analyses of complexity are given for these algorithms, and their availability and efficiency are verified on example protocols. Finally, some conclusions and remarks are given.

16.
A non-unitary non-coherent space-time code capable of achieving full algebraic diversity is proposed based on full-diversity space-time block coding. The error performance is optimized by transforming the non-unitary space-time code into a unitary space-time code. By exploiting the structure of the proposed code, a grouped generalized likelihood ratio test decoding algorithm is presented to overcome the high complexity of the optimal algorithm. Simulation results show that the proposed code offers high spectral efficiency compared with the unitary space-time code despite a slight loss in SNR, and that the proposed grouped decoding algorithm provides a good tradeoff between performance and complexity.

17.
In order to recognize and track all heads exactly in top-view images, a novel approach to 3D feature extraction of heads based on target region matching is presented. The main idea starts from the disparity of the head region, which is usually extracted from a global dense disparity image obtained by block matching. Different from block matching, the correspondence search in target region matching is not performed in the regions around every pixel of the image but only in candidate head regions extracted in advance by monocular image processing. As the number of candidate head regions is far smaller than the resolution of the image, the computational complexity and time consumption can be largely reduced. After the disparities of the candidate head regions are obtained, the 3D features of each head, including the height feature and the perspective feature, can be extracted to largely improve the accuracy of head recognition.

18.
Snake robots are mostly designed around single-mode locomotion. However, a single-mode gait is unlikely to work effectively when the robot faces an unstructured working environment with varying degrees of terrain complexity. As a solution, mixed-mode locomotion is proposed in this paper by synchronizing two types of gaits, the serpentine and wriggler gaits, used for non-constricted and narrow-space environments respectively, but for straight-line locomotion only. A gait transition algorithm is developed to efficiently change from one gait to the other. This study includes a kinematics analysis followed by a dynamics analysis, considering the related structural constraints of both gaits. The approach utilizes the speed of the serpentine gait for open-area locomotion and exploits the narrow-space access capability of the wriggler gait. Hence, it increases motion flexibility, since the robot is able to change its mode of locomotion according to the working environment.

19.
In this paper, a non-cooperative distributed MPC algorithm based on reduced-order models is proposed to stabilize large-scale systems. The large-scale system consists of a group of interconnected subsystems, each of which can be partitioned into two parts: a measurable part, whose states can be directly measured by sensors, and an unmeasurable part. In the online computation phase, only the measurable dynamics of the corresponding subsystem and neighbour-to-neighbour communication are needed for the local controller design. Satisfaction of the state constraints and practical stability are guaranteed while the complexity of the optimization problem is reduced. Numerical examples are given to show the effectiveness of the algorithm.

20.
MAGA is an effective algorithm for global numerical optimization problems; however, drawbacks still exist in its neighborhood selection. Based on the social cooperation mechanism of agents, an effective neighborhood construction mode is proposed. This mode introduces an acquaintance net that describes the relations among agents and uses it to construct the local environment (neighborhood) of each agent, which makes the new mode more reasonable than that of MAGA. The Multi-Agent Social Evolutionary Algorithm (MASEA) based on this construction mode is introduced and evaluated on standard test functions. In the first experiments, two-dimensional, 30-dimensional and 20-1000-dimensional functions are tested to prove the effectiveness of the algorithm. The experimental results show that MASEA can find optimal or close-to-optimal solutions at a low computational cost, and its solution quality is quite stable. In addition, the comparative results indicate that MASEA performs much better than CMA-ES and MAGA in both solution quality and computational complexity. Even when the dimension reaches 10,000, the performance of MASEA is still good.
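The acquaintance-net idea can be illustrated roughly as follows: each agent's local environment is the set of agents it is acquainted with, rather than a fixed lattice neighborhood as in MAGA. The random construction rule and the `degree` parameter below are assumptions of this sketch, not the paper's definition.

```python
import random

def build_acquaintance_net(num_agents, degree=4, seed=0):
    """Build a symmetric acquaintance relation: each agent ends up linked to
    at least `degree` other agents, and its neighborhood (local environment)
    is simply its set of acquaintances."""
    rng = random.Random(seed)
    acquaintances = {a: set() for a in range(num_agents)}
    for a in range(num_agents):
        while len(acquaintances[a]) < degree:
            b = rng.randrange(num_agents)
            if b != a:
                acquaintances[a].add(b)
                acquaintances[b].add(a)      # the relation is mutual
    return acquaintances

net = build_acquaintance_net(num_agents=10)
print(net[0])   # local environment of agent 0
```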
