Similar Articles (20 results found)
1.
This paper presents an effective and efficient method for speeding up ant colony optimization (ACO) in solving the codebook generation problem. The proposed method is inspired by the fact that many computations during the convergence process of ant-based algorithms are essentially redundant and can therefore be eliminated to boost convergence speed, especially for large and complex problems. To evaluate the performance of the proposed method, we compare it with several state-of-the-art metaheuristic algorithms. Our simulation results indicate that the proposed method significantly reduces the computation time of the ACO-based algorithms evaluated in this paper while providing results that match or outperform those of ACO alone.

2.
In AMR-WB, the fixed codebook search is the key module determining performance and complexity, accounting for about 40% of the total complexity. To reduce the computational load, an efficient codebook search algorithm based on codeword splitting and sub-codeword pulse replacement is proposed. The algorithm consists of four steps: the initial codeword is split into two or more sub-codewords; each sub-codeword is updated by replacing its least important pulse; the updated sub-codewords are merged into a candidate codeword; and the initial and candidate codewords are compared, with the better one selected as the final codeword. Experiments show that, compared with the conventional method, the encoding time is reduced by about 16%.
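A toy sketch of the four-step split-and-replace search described above. The objective `score`, the flat pulse model, and the candidate set are illustrative stand-ins, not AMR-WB's actual algebraic codebook structure:

```python
# Hedged sketch: `target` stands in for the correlation vector that AMR-WB
# would derive from the speech signal; pulses are (position, sign) pairs.

def score(pulses, target):
    # Toy quality measure: total correlation contributed by the pulses.
    return sum(target[p] * s for p, s in pulses)

def least_important_pulse(pulses, target):
    # Index of the pulse contributing least to the score.
    return min(range(len(pulses)), key=lambda i: target[pulses[i][0]] * pulses[i][1])

def split_and_replace_search(codeword, target, candidates):
    # Step 1: split the initial codeword into two sub-codewords.
    half = len(codeword) // 2
    subs = [codeword[:half], codeword[half:]]
    for sub in subs:
        # Step 2: update each sub-codeword by replacing its least
        # important pulse with the best candidate pulse, if better.
        i = least_important_pulse(sub, target)
        best = max(((p, 1 if target[p] >= 0 else -1) for p in candidates),
                   key=lambda ps: target[ps[0]] * ps[1])
        if target[best[0]] * best[1] > target[sub[i][0]] * sub[i][1]:
            sub[i] = best
    # Step 3: merge the updated sub-codewords into a candidate codeword.
    candidate = subs[0] + subs[1]
    # Step 4: keep the better of the initial and candidate codewords.
    return max([codeword, candidate], key=lambda cw: score(cw, target))

target = [0.9, -0.2, 0.5, 0.1, -0.8, 0.3]
initial = [(1, 1), (3, 1), (4, 1), (5, 1)]
best_cw = split_and_replace_search(initial, target, candidates=range(6))
```

Because step 4 falls back to the initial codeword, the search can never return a worse codeword than its starting point.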

3.
In this paper, we present a fast codebook generation algorithm called CGAUCD (Codebook Generation Algorithm Using Codeword Displacement), which makes use of the codeword displacement between successive partition processes. By implementing a fast search algorithm named MFAUPI (Modified Fast Algorithm Using Projection and Inequality) for VQ encoding in the partition step of CGAUCD, the codebook generation time can be reduced further still. Using MFAUPI, the computing time of CGAUCD can be reduced by a factor of 4.7–7.6. Compared to the Generalized Lloyd Algorithm (GLA), our proposed method reduces the codebook generation time by a factor of 35.9–121.2. Compared to the best codebook generation algorithm known to us, our approach further reduces the corresponding computing time by 26.0–32.8%. Note that our proposed algorithm generates the same codebook as the GLA. The advantage of our method is more remarkable when a larger codebook is generated.
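For reference, the baseline being accelerated is the GLA (Lloyd) iteration. The sketch below runs a plain GLA and additionally records how far each codeword moves between iterations — the displacement quantity CGAUCD exploits; the actual CGAUCD/MFAUPI pruning rules are not reproduced here:

```python
import math

def gla_with_displacement(vectors, codebook, iters=10, eps=1e-9):
    for _ in range(iters):
        # Partition step: assign each vector to its nearest codeword
        # (full search here; CGAUCD accelerates this step with MFAUPI).
        parts = [[] for _ in codebook]
        for v in vectors:
            j = min(range(len(codebook)),
                    key=lambda k: math.dist(v, codebook[k]))
            parts[j].append(v)
        # Centroid update; record how far each codeword moved, the
        # displacement a CGAUCD-style method feeds into the next partition.
        displacement = []
        for j, cell in enumerate(parts):
            if cell:
                new = [sum(xs) / len(cell) for xs in zip(*cell)]
                displacement.append(math.dist(codebook[j], new))
                codebook[j] = new
            else:
                displacement.append(0.0)
        if max(displacement) < eps:   # no codeword moved: converged
            break
    return codebook, displacement

cb, disp = gla_with_displacement(
    [(0.0, 0.0), (0.0, 2.0), (10.0, 10.0), (10.0, 12.0)],
    [[1.0, 1.0], [9.0, 9.0]])
```

On this toy data the two codewords converge to the cluster centroids and the final displacements are zero.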

4.
To reduce the complexity of the fixed codebook search in AMR-WB, a new search algorithm based on pulse replacement is proposed. By combining pulses, it not only reduces computational complexity but also preserves speech quality. Experimental results show that, compared with the depth-first tree search algorithm adopted in AMR-WB, the proposed fast codebook search algorithm reduces complexity by 53.6% without affecting speech coding quality.

5.
In this paper, a novel encoding algorithm for vector quantization is presented. Our method uses a set of transformed codewords and partial distortion rejection to determine the reproduction vector of an input vector. Experimental results show that our algorithm is superior to other methods in terms of computing time and the number of distance calculations. Compared with the best available method for reducing the number of distance computations, our approach reduces the number of distance calculations by 32.3–67.1%. Compared with the best available encoding algorithm for vector quantization, our method further reduces the computing time by 19.7–23.9%. The advantage of our method is greater when a larger codebook is used, and its performance is only weakly correlated with codebook size.
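Partial distortion rejection itself is easy to illustrate: abort a distance accumulation as soon as it can no longer beat the current best. This is a generic sketch of the rejection idea only; the paper additionally transforms the codewords:

```python
def nearest_codeword_pdr(x, codebook):
    # Nearest-neighbour search with partial distortion rejection: the
    # dimension-by-dimension accumulation aborts once it exceeds the
    # best distortion found so far.
    best_j, best_d = 0, float("inf")
    for j, c in enumerate(codebook):
        d = 0.0
        for xi, ci in zip(x, c):
            d += (xi - ci) ** 2
            if d >= best_d:       # reject: cannot beat the current best
                break
        else:                     # loop completed: full distortion known
            best_j, best_d = j, d
    return best_j, best_d
```

The for/else idiom records a new best only when the inner loop ran to completion, i.e. when the codeword was not rejected early.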

6.
Vector quantization (VQ), a lossy image compression technique, is widely used in many applications due to its simple architecture, fast decoding ability, and high compression rate. Traditionally, VQ applies the full search algorithm to find the codeword that best matches each image vector in the encoding procedure. However, matching in this manner consumes a great deal of computation time and places a heavy burden on the VQ method. Torres and Huguet therefore proposed a double test algorithm to improve matching efficiency. However, their scheme does not include an initialization strategy to choose an initially searched codeword for each image vector, so matching efficiency may be affected significantly. To overcome this drawback, we propose an improved double test scheme with a fine initialization as well as a suitable search order. Our experimental results indicate that the computation time of the double test algorithm can be significantly reduced by the proposed method. In addition, the proposed method is more flexible than existing schemes.
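One way to see why a good initial codeword matters is inequality-based elimination: the better the warm start, the more codewords can be rejected without a distance computation. Below is a generic triangle-inequality sketch, not Torres and Huguet's double test itself, assuming a precomputed codeword-to-codeword distance table `cc_dist`:

```python
import math

def encode_with_init(x, codebook, cc_dist, init_j):
    # Warm start: measure the distance to the initial codeword once.
    d_init = math.dist(x, codebook[init_j])
    best_j, best_d = init_j, d_init
    for j in range(len(codebook)):
        if j == init_j:
            continue
        # Triangle inequality: d(x, c_j) >= |d(x, c_init) - d(c_init, c_j)|,
        # so c_j cannot win if this lower bound already exceeds the best.
        if abs(d_init - cc_dist[init_j][j]) >= best_d:
            continue
        d = math.dist(x, codebook[j])
        if d < best_d:
            best_j, best_d = j, d
    return best_j

codebook = [[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]]
cc_dist = [[math.dist(a, b) for b in codebook] for a in codebook]
```

With a warm start close to the true match, most far-away codewords are rejected by the bound alone.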

7.
This paper presents a novel approach to the fast computation of Zernike moments from a digital image. Most existing fast methods for computing Zernike moments have focused on reducing the computational complexity of the 1-D Zernike radial polynomials by introducing their recurrence relations. Instead, our proposed method focuses on reducing the complexity of computing the 2-D Zernike basis functions. As Zernike basis functions have specific symmetry or anti-symmetry about the x-axis, the y-axis, the origin, and the straight line y=x, we can generate the Zernike basis functions by computing only one of their octants. As a result, the proposed method makes the computation about eight times faster than existing methods. The proposed method is applicable to the computation of an individual Zernike moment as well as a set of Zernike moments. In addition, when computing a series of Zernike moments, the proposed method can be used together with one of the existing fast methods for computing Zernike radial polynomials. This paper also presents an accurate form of Zernike moments for a discrete image function. In the experiments, results show the accuracy of this form for computing discrete Zernike moments and confirm that the proposed method for the fast computation of Zernike moments is much more efficient than existing fast methods in most cases.
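The eight-fold symmetry argument can be made concrete: once V_{nm}(ρ, θ) = R_{nm}(ρ)·e^{imθ} is evaluated at one first-octant point, its values at the seven mirrored points follow from conjugation and unit-phase factors. A sketch, assuming x ≥ y ≥ 0 and any user-supplied radial polynomial implementation:

```python
import cmath, math

def octant_basis(n, m, x, y, radial):
    # Evaluate V_{nm} once at (x, y) in the first octant (x >= y >= 0).
    rho, theta = math.hypot(x, y), math.atan2(y, x)
    v = radial(n, m, rho) * cmath.exp(1j * m * theta)
    c = v.conjugate()                      # value at angle -theta
    i_m, mi_m, neg_m = 1j ** m, (-1j) ** m, (-1) ** m
    # The seven mirrored points share rho; their angles are +/-theta shifted
    # by multiples of pi/2, so each value is a unit phase times v or conj(v).
    return {(x, y): v,           (y, x): i_m * c,
            (-y, x): i_m * v,    (-x, y): neg_m * c,
            (-x, -y): neg_m * v, (-y, -x): mi_m * c,
            (y, -x): mi_m * v,   (x, -y): c}

# R_{2,2}(rho) = rho^2, passed in as the radial implementation.
values = octant_basis(2, 2, 2.0, 1.0, lambda n, m, r: r ** m)
```

Each of the eight returned values agrees with a direct evaluation of R_{2,2}(ρ)·e^{2iθ} at the corresponding point.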

8.
This paper presents an evolution-based tabu search approach (ETSA) to design codebooks with smaller distortion values in vector quantization. In the ETSA, there is no need for users to determine the size of a tabu memory and to specifically define a set of tabu restrictions and a set of aspiration criteria. During iterations, only the best solution visited is memorized as a tabu point in the search space and the distance from each trial solution to the tabu point is an important factor in the fitness evaluation. In population competition, the new fitness function plays the roles of the tabu restrictions and the aspiration criteria. Based on the new fitness function and a parallel evolutionary mechanism, the ETSA can prevent premature convergence and eventually find a good solution. Seven grayscale images are used to test the performance of the ETSA. Experimental results show that the ETSA performs better than several existing algorithms in terms of the distortion and robustness measures.
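The fitness idea can be sketched minimally: penalize trial solutions that drift back toward the memorized tabu point. The reciprocal-distance penalty below is an illustrative assumption, not the paper's actual fitness formula (lower fitness is better here):

```python
import math

def penalized_fitness(distortion, solution, tabu_point, weight=1.0):
    # Distortion plus a penalty that grows as the trial solution
    # approaches the tabu point (the best solution visited so far),
    # discouraging premature re-convergence to it.
    d = math.dist(solution, tabu_point)
    return distortion + weight / (d + 1e-12)

near = penalized_fitness(1.0, [0.0, 0.0], [0.0, 0.1])
far = penalized_fitness(1.0, [5.0, 5.0], [0.0, 0.1])
```

Two trial solutions with equal distortion are thus ranked by their distance from the tabu point, which is how the penalty stands in for explicit tabu restrictions.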

9.
In existing adaptive neural control approaches, the constant optimal weights of a neural network (NN) can be identified, and then used to model uncertainties in nonlinear systems, only when the regressor satisfies the persistent excitation (PE) or interval excitation (IE) condition. This paper proposes a novel composite learning approach based on adaptive neural control. The focus of this approach is to make the NN approximate uncertainties in nonlinear systems quickly and accurately without identifying the constant optimal weights of the NN, so the regressor does not need to satisfy the PE or IE conditions. A regressor filtering scheme is adopted to generate a prediction error, and then the prediction error and the tracking error simultaneously drive the update of the NN weights. Within the framework of Lyapunov theory, the proposed composite learning approach ensures that the approximation error of the uncertainty and the tracking error of the system states converge exponentially to an arbitrarily small neighborhood of zero. Simulation results verify the effectiveness and advantages of the proposed approach in terms of fast approximation.

10.
Estimating the noise power spectral density (PSD) from the corrupted speech signal is an essential component of speech enhancement algorithms. In this paper, a novel noise PSD estimation algorithm based on the minimum mean-square error (MMSE) criterion is proposed. The noise PSD estimate is obtained by recursively smoothing the MMSE estimate of the current noise spectral power. For the noise spectral power estimation, a spectral weighting function is derived, which depends on the a priori signal-to-noise ratio (SNR). Since the speech spectral power is highly important for the a priori SNR estimate, this paper proposes an MMSE spectral power estimator incorporating speech presence uncertainty (SPU) to improve the a priori SNR estimate, and a bias correction factor is derived to compensate for the speech spectral power estimation bias. The estimated speech spectral power is then used in a "decision-directed" (DD) estimator of the a priori SNR to achieve fast noise tracking. Compared with three state-of-the-art approaches, i.e., minimum statistics (MS), an MMSE-based approach, and a speech presence probability (SPP)-based approach, experimental results show that the proposed algorithm exhibits superior noise tracking capability under various nonstationary noise environments and SNR conditions. When employed in a speech enhancement system, it yields improved speech enhancement performance in terms of segmental SNR improvement (SSNR+) and perceptual evaluation of speech quality (PESQ).
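The "decision-directed" a priori SNR estimator referred to above has a standard closed form, shown here for a single time-frequency bin; the paper's contribution lies in the speech spectral power estimate fed into it:

```python
def decision_directed_snr(y_pow, prev_speech_pow, noise_pow, alpha=0.98):
    # A posteriori SNR of the current frame.
    gamma = y_pow / noise_pow
    # DD estimate: weighted mix of the previous frame's estimated speech
    # power over the noise power and the current ML term max(gamma - 1, 0).
    return alpha * prev_speech_pow / noise_pow + (1 - alpha) * max(gamma - 1.0, 0.0)
```

A large `alpha` (0.98 is the classic choice) smooths the estimate heavily; a smaller `alpha` tracks changes faster at the cost of more musical noise.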

11.
Li, Rui; Pan, Zhibin; Wang, Yang 《Multimedia Tools and Applications》2018, 77(18): 23803–23823
Multimedia Tools and Applications - Vector quantization (VQ) is widely used in image processing applications; the primary focus of VQ is to determine a codebook to represent the original image...

12.
13.
In the present paper, an improved stoichiometric algorithm based on successive single-reaction equilibria is described, which avoids the solution of large systems. The algorithm calculates the equilibrium composition, automatically adding suitable new reactions to improve convergence. It has been tested on plasma composition calculations for pure nitrogen and air plasma mixtures. The proposed algorithm always converges and is significantly faster than other hierarchical algorithms.

14.
F.; Y.S.; H.; W.F. 《Pattern recognition》2008, 41(8): 2512–2524
This paper presents a hierarchical approach for fast and robust ellipse extraction from images. At the lowest level, the image is described as a set of edge pixels, from which line segments are extracted. Then, line segments that are potential candidates for elliptic arcs are linked to form arc segments according to connectivity and curvature conditions. Next, arc segments that belong to the same ellipse are grouped together. Finally, a robust statistical method, namely RANSAC, is applied to fit ellipses to groups of arc segments. Unlike Hough-transform-based algorithms, this method does not need a high-dimensional parameter space, and so it reduces the computation and storage requirements. Experiments on both synthetic and real images demonstrate that the proposed method has excellent performance in handling occlusion and overlapping ellipses.
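The RANSAC fitting step can be sketched with an algebraic conic fit: sample five points, solve for the conic, and count inliers by algebraic residual. This is a simplification — the paper would also verify that the fitted conic is an ellipse and operate on grouped arc segments, both omitted here:

```python
import numpy as np

def fit_conic(pts):
    # Algebraic conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + 1 = 0 through
    # the given points (least squares when more than five are supplied).
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(A, -np.ones(len(pts)), rcond=None)
    return coef

def ransac_conic(pts, n_iter=500, tol=1e-2, seed=0):
    # RANSAC loop: hypothesize from 5-point samples, score by counting
    # points with small algebraic residual, keep the best hypothesis.
    rng = np.random.default_rng(seed)
    best_coef, best_inliers = None, 0
    x, y = pts[:, 0], pts[:, 1]
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 5, replace=False)]
        coef = fit_conic(sample)
        resid = np.abs(coef[0] * x * x + coef[1] * x * y + coef[2] * y * y
                       + coef[3] * x + coef[4] * y + 1.0)
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best_coef, best_inliers = coef, inliers
    return best_coef, best_inliers

# Eight points on the unit circle (a special ellipse) plus two gross outliers.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
pts = np.vstack([np.column_stack([np.cos(angles), np.sin(angles)]),
                 [[3.0, 0.5], [2.0, 2.0]]])
coef, n_inliers = ransac_conic(pts)
```

Any 5-point sample drawn entirely from the circle recovers the circle's conic, so all eight circle points score as inliers while the two outliers are rejected.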

15.
Neural networks have shown good results for detecting a certain pattern in a given image. In this paper, faster neural networks for pattern detection are presented. Such processors are designed based on cross-correlation in the frequency domain between the input matrix and the input weights of the neural networks. This approach reduces the number of computation steps required by these faster neural networks for the searching process. The principle of the divide-and-conquer strategy is applied through matrix decomposition: each matrix is divided into smaller sub-matrices and each one is then tested separately using a single faster neural processor. Furthermore, faster pattern detection is obtained by using parallel processing techniques to test the resulting sub-matrices at the same time with the same number of faster neural networks. When matrix decomposition is combined with these faster neural networks, the speed-up ratio increases with the size of the input matrix. Moreover, the problem of local sub-matrix normalization in the frequency domain is solved, and the effect of matrix normalization on the speed-up ratio of pattern detection is discussed. Simulation results show that local sub-matrix normalization through weight normalization is faster than sub-matrix normalization in the spatial domain. The overall speed-up ratio of the detection process is increased further when the normalization of the weights is done off-line.
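The core speed-up — cross-correlation computed in the frequency domain — is a few lines with the FFT. This sketch performs circular correlation with a zero-padded kernel; a generic kernel stands in for the paper's neural-network weight matrix:

```python
import numpy as np

def cross_correlate_fft(image, kernel):
    # Zero-pad the kernel to the image size, then multiply the image
    # spectrum by the conjugate kernel spectrum: circular cross-correlation.
    padded = np.zeros_like(image, dtype=float)
    padded[:kernel.shape[0], :kernel.shape[1]] = kernel
    spec = np.fft.fft2(image) * np.conj(np.fft.fft2(padded))
    return np.real(np.fft.ifft2(spec))

img = np.arange(16.0).reshape(4, 4)
corr = cross_correlate_fft(img, np.ones((2, 2)))
```

Entry `corr[m, n]` equals the sliding-window correlation of the kernel with the image window at offset (m, n), wrapping circularly at the borders; the FFT route costs O(N² log N) instead of O(N²·K²) for direct sliding correlation.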

16.
A novel algorithm for fast computation of Zernike moments
J.; H.Z.; C.; L.M. 《Pattern recognition》2002, 35(12): 2905–2911
Zernike moments (ZMs) have been successfully used in pattern recognition and image analysis due to their good properties of orthogonality and rotation invariance. However, their computation by a direct method is too expensive, which limits the application of ZMs. In this paper, we present a novel algorithm for fast computation of Zernike moments. By using the recursive property of Zernike polynomials, the inter-relationship of the Zernike moments can be established. As a result, the Zernike moment of order n with repetition m, Z_{n,m}, can be expressed as a combination of Z_{n-2,m} and Z_{n-4,m}. Based on this relationship, the Zernike moment Z_{n,m}, for n > m, can be deduced from Z_{m,m}. To reduce the computational complexity, we adopt an algorithm known as a systolic array for computing these latter moments. Using such a strategy, the number of multiplications required in the moment calculation of Z_{m,m} can be decreased significantly. Comparison with known methods shows that our algorithm is as accurate as the existing methods, but more efficient.
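As a correctness reference for any such recurrence, the Zernike radial polynomial R_{n,m}(ρ) has a direct factorial definition; the point of the paper's recursion over orders is precisely to avoid evaluating these factorials at every pixel and order:

```python
from math import factorial

def radial_poly(n, m, rho):
    # Direct factorial form of the Zernike radial polynomial R_{n,m}(rho);
    # the polynomial is zero when n - |m| is odd.
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    return sum((-1) ** s * factorial(n - s)
               / (factorial(s) * factorial((n + m) // 2 - s)
                  * factorial((n - m) // 2 - s))
               * rho ** (n - 2 * s)
               for s in range((n - m) // 2 + 1))
```

Spot checks against the tabulated low-order polynomials: R_{2,0}(ρ) = 2ρ² − 1 and R_{4,2}(ρ) = 4ρ⁴ − 3ρ².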

17.
Mónica; Daniel 《Pattern recognition》2005, 38(12): 2400–2408
An important objective in image analysis is dimensionality reduction. The data-exploration technique most often used for this purpose is principal component analysis (PCA), which performs a singular value decomposition on a data matrix of vectorized images. When considering array data or a tensor instead of a matrix, the higher-order generalization of PCA for computing principal components offers multiple ways to decompose tensors orthogonally. As an alternative, we propose a new method based on the projection of the images as matrices and show that it leads to better image reconstruction than previous approaches.
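A minimal sketch of matrix-based (2DPCA-style) projection, which treats each image as a matrix rather than a vectorized column; the paper's exact projection method may differ in detail:

```python
import numpy as np

def matrix_pca(images, k):
    # Mean image and the column-direction scatter matrix.
    mean = np.mean(images, axis=0)
    G = sum((A - mean).T @ (A - mean) for A in images)
    # Top-k eigenvectors of the scatter matrix (eigh sorts ascending).
    vals, vecs = np.linalg.eigh(G)
    X = vecs[:, -k:]
    # Project each image matrix, then reconstruct from the projection.
    feats = [(A - mean) @ X for A in images]
    recons = [Y @ X.T + mean for Y in feats]
    return feats, recons

rng = np.random.default_rng(0)
imgs = [rng.standard_normal((4, 3)) for _ in range(3)]
feats, recons = matrix_pca(imgs, 3)   # k = full width: lossless
```

With k equal to the image width the projection basis is a full orthogonal matrix, so reconstruction is exact; smaller k trades reconstruction error for dimensionality reduction.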

18.
In image processing, to increase the speed and reduce the complexity of the traditional color balance algorithm, a new fast color balance algorithm is proposed: a small number of extreme pixel color values are filtered out, and the remaining non-extreme pixel color values are scaled up proportionally. Tests on a number of images show that, compared with traditional color balance methods, the algorithm achieves better results and better performance.
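The filter-extremes-then-rescale idea above matches the classic "simplest color balance" recipe, sketched here per channel; the 5% cut fraction is an illustrative choice, not a value taken from the abstract:

```python
def color_balance(channel, cut=0.05):
    # Find the cut-off values below/above which `cut` of the pixels fall.
    flat = sorted(channel)
    n = len(flat)
    lo = flat[int(cut * n)]
    hi = flat[min(int((1 - cut) * n), n - 1)]
    if hi == lo:
        return list(channel)
    # Clip the extremes, then stretch the remaining range to [0, 255].
    return [round(255 * (min(max(v, lo), hi) - lo) / (hi - lo))
            for v in channel]

balanced = color_balance(list(range(100)))
```

For an RGB image the function is applied to each channel independently; the sort dominates the cost, so the method stays near-linear in practice.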

19.
Multimedia Tools and Applications - Medical image is the visual representation of anatomy or physiology of internal structures of the body and it is useful for clinical analysis and medical...

20.
Wang, Lingfei; Pan, Zhibin; Zhu, Ruoxin 《Multimedia Tools and Applications》2017, 76(24): 26153–26176

Reversible data hiding (RDH) in the compression domain is an important research issue in the security of digital multimedia. Obtaining a high embedding rate and a low compression rate are the main goals of compression-domain RDH. This paper proposes a novel RDH scheme that improves on the joint neighboring coding (JNC) scheme. In the embedding process, the first index SC_1st in the current state codebook (SC) and the median edge detector (MED) prediction P_med are exploited. These two parameters replace the right-up and left-up neighboring SMVQ indices, which have lower correlation with the current index. As a result, a more concentrated distribution of the difference d is obtained. The difference d is computed between the current SMVQ index and its left and upper neighboring indices, P_med, and SC_1st after embedding secret bits. The experimental results show that our work achieves an average compression rate of 0.45/0.51/0.57 bpp and an average embedding efficiency of 0.28/0.36/0.43 after embedding 2/3/4 secret bits into each SMVQ index. The comparative results show that the proposed scheme outperforms previous works.
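The MED predictor mentioned above is the standard median edge detector from JPEG-LS, applied here to index values rather than pixel values:

```python
def med_predict(left, above, upper_left):
    # Median edge detector (MED): pick min/max of the left and above
    # neighbours when an edge is detected, otherwise a planar prediction.
    if upper_left >= max(left, above):
        return min(left, above)    # horizontal edge above / vertical edge left
    if upper_left <= min(left, above):
        return max(left, above)
    return left + above - upper_left  # smooth region: planar extrapolation
```

The predictor needs only the three causal neighbours, so it is available identically at embedding and extraction time, which is what makes it usable in a reversible scheme.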

