Similar Documents
20 similar documents found.
1.
This work examines the nearest neighbor encoding problem with an unstructured codebook of arbitrary size and vector dimension. We propose a new tree-structured nearest neighbor encoding method that significantly reduces the complexity of the full-search method without any performance degradation in terms of distortion. Our method consists of efficient algorithms for constructing a binary tree for the codebook and nearest neighbor encoding by using this tree. Numerical experiments are given to demonstrate the performance of the proposed method.
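The abstract does not spell out the tree construction, so the following is only a minimal sketch of the general idea: a kd-tree-style binary tree over the codebook, with a hyperplane-distance bound that prunes subtrees while keeping the search exact. The codebook size, dimension, and query here are hypothetical.

```python
import random

def build_tree(codewords, indices=None):
    """Recursively split the codebook at the median of the widest coordinate."""
    if indices is None:
        indices = list(range(len(codewords)))
    if len(indices) <= 2:
        return {"leaf": indices}
    dim = len(codewords[0])
    axis = max(range(dim), key=lambda a: max(codewords[i][a] for i in indices)
                                       - min(codewords[i][a] for i in indices))
    indices.sort(key=lambda i: codewords[i][axis])
    mid = len(indices) // 2
    return {"axis": axis,
            "split": codewords[indices[mid]][axis],
            "left": build_tree(codewords, indices[:mid]),
            "right": build_tree(codewords, indices[mid:])}

def dist2(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y))

def search(tree, codewords, x, best=(float("inf"), -1)):
    """Exact NN: descend to the nearer child first, visit the other child
    only if its half-space could still contain a closer codeword."""
    if "leaf" in tree:
        for i in tree["leaf"]:
            d = dist2(x, codewords[i])
            if d < best[0]:
                best = (d, i)
        return best
    near, far = ((tree["left"], tree["right"]) if x[tree["axis"]] < tree["split"]
                 else (tree["right"], tree["left"]))
    best = search(near, codewords, x, best)
    if (x[tree["axis"]] - tree["split"]) ** 2 < best[0]:
        best = search(far, codewords, x, best)
    return best

random.seed(0)
codebook = [[random.random() for _ in range(4)] for _ in range(64)]
tree = build_tree(codebook)
query = [random.random() for _ in range(4)]
d_tree, i_tree = search(tree, codebook, query)
d_full, i_full = min((dist2(query, c), i) for i, c in enumerate(codebook))
assert i_tree == i_full  # same codeword as the full search
```

Because a subtree is skipped only when its half-space provably cannot contain a closer codeword, the tree search returns exactly the full-search codeword, mirroring the paper's no-degradation claim.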

2.
Fast nearest neighbor search of entropy-constrained vector quantization
Entropy-constrained vector quantization (ECVQ) offers substantially improved image quality over vector quantization (VQ) at the cost of additional encoding complexity. We extend results in the literature for fast nearest neighbor search of VQ to ECVQ. We use a new, easily computed distance that successfully eliminates most codewords from consideration.
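The paper's specific distance is not given in the abstract; as a sketch of the same elimination idea, an ECVQ encoder minimizes distortion + λ·length and can abandon a codeword as soon as its running cost exceeds the current best. The codebook, lengths, and λ below are hypothetical.

```python
import random

LAM = 0.5  # Lagrange multiplier trading distortion against rate

def ecvq_encode(x, codebook, lengths, lam=LAM):
    """ECVQ picks the codeword minimizing distortion + lam * codeword length.
    Partial-cost elimination: the rate term is known up front, and the
    distortion sum is abandoned once the running cost exceeds the best."""
    best_cost, best_i = float("inf"), -1
    for i, c in enumerate(codebook):
        cost = lam * lengths[i]
        if cost >= best_cost:            # cannot win even with zero distortion
            continue
        for a, b in zip(x, c):
            cost += (a - b) ** 2
            if cost >= best_cost:        # early exit
                break
        else:
            best_cost, best_i = cost, i
    return best_i, best_cost

random.seed(1)
cb = [[random.random() for _ in range(8)] for _ in range(32)]
lengths = [random.uniform(2.0, 6.0) for _ in cb]  # hypothetical lengths in bits
x = [random.random() for _ in range(8)]
i_fast, c_fast = ecvq_encode(x, cb, lengths)
i_full = min(range(len(cb)),
             key=lambda i: LAM * lengths[i] + sum((a - b) ** 2 for a, b in zip(x, cb[i])))
assert i_fast == i_full  # elimination never changes the chosen codeword
```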

3.
This paper evaluates the performance of an image compression system based on wavelet-based subband decomposition and vector quantization. The images are decomposed using wavelet filters into a set of subbands with different resolutions corresponding to different frequency bands. The resulting subbands are vector quantized using the Linde-Buzo-Gray (1980) algorithm and various fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive neural network through an unsupervised learning process. The quality of the multiresolution codebooks designed by these algorithms is measured on the reconstructed images belonging to the training set used for multiresolution codebook design and the reconstructed images from a testing set.

4.
Information-theoretic criteria such as mutual information are often used as similarity measures for inter-modality image registration. For better performance, it is useful to consider vector-valued pixel features. However, this leads to the task of estimating entropy in medium- to high-dimensional spaces, for which the standard histogram entropy estimator is not usable. We have therefore previously proposed to use a nearest neighbor-based Kozachenko-Leonenko (KL) entropy estimator. Here we address the issue of determining a suitable all nearest neighbor (NN) search algorithm for this relatively specific task. We evaluate several well-known state-of-the-art algorithms based on k-d trees (FLANN), balanced box decomposition (BBD) trees (ANN), and locality sensitive hashing (LSH), using publicly available implementations. In addition, we present our own method, which is based on k-d trees with several enhancements and is tailored for this particular application. We conclude that all tree-based methods perform acceptably well, with our method being the fastest and most suitable for the all-NN search task needed by the KL estimator on image data, while the ANN and especially the FLANN methods are most often the fastest on other types of data. LSH is found to be the least suitable, and brute-force search is the slowest.
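The KL estimator itself is compact: for k = 1 it needs only each point's nearest-neighbor distance. A minimal sketch with a brute-force all-NN search (the very step the paper accelerates), tested on hypothetical 1-D uniform data whose true differential entropy is known:

```python
import math
import random

EULER_GAMMA = 0.5772156649015329

def digamma(n):
    """psi(n) for integer n >= 1, via psi(1) = -gamma and the recurrence."""
    return -EULER_GAMMA + sum(1.0 / k for k in range(1, n))

def kl_entropy(points):
    """Kozachenko-Leonenko estimate (k = 1):
    H ~ psi(N) - psi(1) + log(c_d) + (d/N) * sum_i log(eps_i),
    where eps_i is the distance from point i to its nearest neighbor and
    c_d is the volume of the d-dimensional unit ball."""
    n, d = len(points), len(points[0])
    c_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    log_eps = 0.0
    for i, p in enumerate(points):  # brute-force all-NN search, O(n^2)
        eps = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        log_eps += math.log(eps)
    return digamma(n) - digamma(1) + math.log(c_d) + (d / n) * log_eps

random.seed(2)
sample = [[random.uniform(0.0, 2.0)] for _ in range(1000)]  # U(0, 2) in 1-D
h = kl_entropy(sample)
# true differential entropy of U(0, 2) is ln 2 ~ 0.693
assert abs(h - math.log(2.0)) < 0.2
```

The O(n²) inner loop is exactly why the choice of all-NN search algorithm matters at image scale.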

5.
Two algorithms for fast approximate subspace tracking
New fast algorithms are presented for tracking singular values, singular vectors, and the dimension of a signal subspace through an overlapping sequence of data matrices. The basic algorithm is called fast approximate subspace tracking (FAST). The algorithm is derived for the special case in which the matrix is changed by deleting the oldest column, shifting the remaining columns to the left, and adding a new column on the right. A second algorithm (FAST2) is specified by modifying FAST to trade reduced accuracy for higher speed. The speed and accuracy are compared with the PL algorithm, the PAST and PASTd algorithms, and the FST algorithm. An extension to multicolumn updates for the FAST algorithm is also discussed.

6.
7.
Fractal image encoding is a computationally intensive method of compression due to its need to find the best match between image subblocks by repeatedly searching a large virtual codebook constructed from the image under compression. One of the most innovative and promising approaches to speed up the encoding is to convert the range-domain block matching problem to a nearest neighbor search problem. This paper presents an improved formulation of approximate nearest neighbor search based on orthogonal projection and pre-quantization of the fractal transform parameters. Furthermore, an optimal adaptive scheme is derived for the approximate search parameter to further enhance the performance of the new algorithm. Experimental results showed that the new technique improves both fidelity and compression ratio, while significantly reducing memory requirements and encoding time.
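The range-domain-to-NN conversion rests on a standard identity: the least-squares error of approximating a range block by s·D + o depends on the domain only through the correlation of the mean-removed blocks, so the best domain is the nearest neighbor of the normalized range vector among ± normalized domain vectors. A sketch with hypothetical 16-pixel blocks (the paper's pre-quantization and adaptive scheme are not reproduced):

```python
import math
import random

def normalize(block):
    """Orthogonal projection: remove the DC component, scale to unit norm."""
    m = sum(block) / len(block)
    v = [b - m for b in block]
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def collage_error(r, d):
    """Least-squares error of approximating range r by s*d + o (optimal s, o)."""
    n = len(r)
    mr, md = sum(r) / n, sum(d) / n
    cov = sum((a - mr) * (b - md) for a, b in zip(r, d))
    var_d = sum((b - md) ** 2 for b in d)
    s = cov / var_d if var_d else 0.0
    o = mr - s * md
    return sum((a - (s * b + o)) ** 2 for a, b in zip(r, d))

random.seed(3)
domains = [[random.random() for _ in range(16)] for _ in range(50)]
r = [random.random() for _ in range(16)]
rn = normalize(r)

def nn_key(d):
    # both signs, since the scaling s may be negative
    dn = normalize(d)
    return min(sum((a - b) ** 2 for a, b in zip(rn, dn)),
               sum((a + b) ** 2 for a, b in zip(rn, dn)))

best_nn = min(range(len(domains)), key=lambda i: nn_key(domains[i]))
best_direct = min(range(len(domains)), key=lambda i: collage_error(r, domains[i]))
assert best_nn == best_direct  # NN search finds the same best domain block
```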

8.
A local distance measure is shown to optimize the performance of the nearest neighbor two-class classifier for a finite number of samples. The difference between the finite sample error and the asymptotic error is used as the criterion of improvement. This new distance measure is compared to the well-known Euclidean distance. An algorithm for practical implementation is introduced. This algorithm is shown to be computationally competitive with the present nearest neighbor procedures and is illustrated experimentally. A closed form for the corresponding second-order moment of this criterion is found. Finally, the above results are extended to …

9.
A clustering algorithm based on the pairwise nearest-neighbor (PNN) algorithm developed by Equitz (1989), is introduced for the design of entropy-constrained residual vector quantizers. The algorithm designs residual vector quantization codebooks by merging the pair of stage clusters that minimizes the increase in overall distortion subject to a given decrease in entropy. Image coding experiments show that the clustering design algorithm typically results in more than a 200:1 reduction in design time relative to the standard iterative entropy-constrained residual vector quantization algorithm while introducing only small additional distortion. Multipath searching over the sequence of merges is also investigated and shown experimentally to slightly improve rate-distortion performance. The proposed algorithm can be used alone or can be followed by the iterative algorithm to improve the reproduction quality at the same bit rate.
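The plain PNN merge step (without the paper's entropy constraint) can be sketched as follows: for clusters with centroids ci, cj and sizes ni, nj, merging increases total squared distortion by ni·nj/(ni+nj)·||ci − cj||², and the cheapest pair is merged until the target codebook size is reached. The toy data below is hypothetical.

```python
import random

def pnn(points, k):
    """Pairwise nearest neighbor clustering down to k clusters.
    Merge cost of (ci, ni), (cj, nj) is ni*nj/(ni+nj) * ||ci - cj||^2."""
    clusters = [[p[:], 1] for p in points]  # [centroid, size]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                (ci, ni), (cj, nj) = clusters[i], clusters[j]
                cost = ni * nj / (ni + nj) * sum((a - b) ** 2 for a, b in zip(ci, cj))
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        (ci, ni), (cj, nj) = clusters[i], clusters[j]
        merged = [[(ni * a + nj * b) / (ni + nj) for a, b in zip(ci, cj)], ni + nj]
        clusters = [c for t, c in enumerate(clusters) if t not in (i, j)] + [merged]
    return [c for c, _ in clusters]

random.seed(4)
data = ([[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(20)]
        + [[random.gauss(5, 0.1), random.gauss(5, 0.1)] for _ in range(20)])
cents = pnn(data, 2)
# the two surviving centroids should land near (0, 0) and (5, 5)
assert sorted(round(c[0]) for c in cents) == [0, 5]
```

The entropy-constrained variant in the paper adds a rate term to this merge criterion; the greedy structure is the same.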

10.
The presence of noise and region undersampling in interference phase images introduces phase inconsistencies, or residues, which make phase unwrapping very complex. To solve this problem, a wide variety of phase unwrapping algorithms have been suggested…

11.
This paper proposes kernel nearest feature line and nearest feature plane classifiers, which can recognize high-dimensional face images directly. To address their heavy computational cost and possible failure, two remedies are proposed: the (kernel) nearest feature centroid and the (kernel) nearest neighbor feature. The former reduces the complexity of computing feature line and feature plane distances; the latter reduces the number of feature lines and planes; both avoid the failure problem. Combining the two yields the (kernel) nearest neighbor feature centroid classifier, which achieves a comparable recognition rate at minimal computational complexity. The proposed methods require no prior feature extraction from the face images, thereby avoiding the heavy computational cost of feature extraction when the number of samples is large. Experiments on the ORL face database verify the effectiveness of the proposed methods.
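The underlying nearest-feature-line distance is the residual of projecting a query onto the line through a pair of same-class prototypes. A minimal sketch with hypothetical 2-D prototypes (not face images, and without the kernel mapping):

```python
def feature_line_dist2(x, a, b):
    """Squared distance from x to the line through prototypes a and b:
    project x onto the line and measure the residual."""
    ab = [q - p for p, q in zip(a, b)]
    ax = [q - p for p, q in zip(a, x)]
    denom = sum(v * v for v in ab)
    t = sum(u * v for u, v in zip(ax, ab)) / denom if denom else 0.0
    proj = [p + t * v for p, v in zip(a, ab)]
    return sum((u - v) ** 2 for u, v in zip(x, proj))

def nfl_classify(x, protos_by_class):
    """Nearest feature line: the class whose prototype pair spans
    the closest line wins."""
    best = None
    for label, protos in protos_by_class.items():
        for i in range(len(protos)):
            for j in range(i + 1, len(protos)):
                d = feature_line_dist2(x, protos[i], protos[j])
                if best is None or d < best[0]:
                    best = (d, label)
    return best[1]

# toy example: two classes in 2-D
class_protos = {"A": [[0.0, 0.0], [1.0, 0.1], [2.0, -0.1]],
                "B": [[0.0, 3.0], [1.0, 3.1], [2.0, 2.9]]}
assert nfl_classify([3.0, 0.0], class_protos) == "A"
assert nfl_classify([3.0, 3.0], class_protos) == "B"
```

Note that the number of lines grows quadratically in the prototypes per class, which is exactly the cost the paper's nearest feature centroid and nearest neighbor feature remedies attack.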

12.
To reduce the codebook-training time caused by the approximate representation in image-feature vector quantization and by high-dimensional vectors, a projected enhanced residual quantization method is proposed. Building on our earlier work on enhanced residual quantization, principal component analysis (PCA) is combined with enhanced residual quantization so that both codebook training and feature quantization are carried out in a low-dimensional vector space, improving efficiency. For codebook training in the low-dimensional space, a joint optimization method is proposed that accounts for the overall error introduced by both projection and quantization, improving codebook accuracy. For this quantization method, a fast approximate Euclidean-distance computation between feature vectors is designed for approximate nearest neighbor exhaustive search. Results show that, at the same retrieval accuracy, projected enhanced residual quantization requires only about one third of the training time of enhanced residual quantization; compared with other methods of its kind, the proposed method offers better overall performance in codebook-training time, retrieval speed, and retrieval accuracy. This work provides a reference for effectively combining PCA with other quantization models.
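The residual-quantization backbone of such methods is a multi-stage encoder: quantize the vector, then quantize the residual with a second codebook. A plain two-stage sketch with hypothetical random codebooks (the paper's PCA projection, joint optimization, and fast-distance scheme are not reproduced):

```python
import random

def nearest(c_list, x):
    return min(range(len(c_list)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(c_list[i], x)))

def rq_encode(x, stage1, stage2):
    """Two-stage residual quantization: quantize x, then quantize the residual."""
    i = nearest(stage1, x)
    r = [a - b for a, b in zip(x, stage1[i])]
    j = nearest(stage2, r)
    return i, j

def rq_decode(code, stage1, stage2):
    i, j = code
    return [a + b for a, b in zip(stage1[i], stage2[j])]

random.seed(5)
dim = 4
stage1 = [[random.random() for _ in range(dim)] for _ in range(8)]
# include the all-zero codeword so the second stage can never hurt
stage2 = [[0.0] * dim] + [[random.uniform(-0.2, 0.2) for _ in range(dim)]
                          for _ in range(7)]
x = [random.random() for _ in range(dim)]
code = rq_encode(x, stage1, stage2)
xhat = rq_decode(code, stage1, stage2)
err1 = sum((a - b) ** 2 for a, b in zip(x, stage1[nearest(stage1, x)]))
err2 = sum((a - b) ** 2 for a, b in zip(x, xhat))
assert err2 <= err1  # the residual stage only reduces the error
```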

13.
Wavelet-based image coding using nonlinear interpolative vector quantization
We propose a reduced-complexity wavelet-based image coding technique. Here, 64-D vectors (for three stages of decomposition) are formed by combining appropriate coefficients from the wavelet subimages; 16-D feature vectors are then extracted from the 64-D vectors, on which vector quantization (VQ) is performed. At the decoder, 64-D vectors are reconstructed using a nonlinear interpolative technique. The proposed technique has a reduced complexity and the potential to provide superior coding performance when the codebook is generated using training vectors drawn from similar images.

14.
Fifty non-cancer patients and 100 cancer patients were divided into groups and diagnosed using a nearest-point interpolation mathematical model. Applying nearest-point interpolation to two indicators of the diagnosed subjects, adenosine triphosphatase (ATPase) activity and succinate dehydrogenase (SDH) activity, the diagnostic accuracy reached 98.2%; as the coverage of the training sample set increases, the accuracy tends toward 100%. Nearest-point interpolation is thus an efficient mathematical model for cancer diagnosis.

15.
The nonlinear principal component analysis (NLPCA) method is combined with vector quantization for the coding of images. The NLPCA is realized using the backpropagation neural network (NN), while vector quantization is performed using the learning vector quantizer (LVQ) NN. The effects of quantization in the quality of the reconstructed images are then compensated by using a novel codebook vector optimization procedure.

16.
A further modification to Cover and Hart's nearest neighbor decision rule, the reduced nearest neighbor rule, is introduced. Experimental results demonstrate its accuracy and efficiency.
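The reduced nearest neighbor idea can be sketched in a few lines: drop a stored sample whenever the remaining set still classifies every training sample correctly. (Gates' rule starts from a condensed subset; for brevity this sketch reduces the training set directly, on hypothetical two-class data.)

```python
import random

def nn_label(x, refs):
    """1-NN label of x over reference set refs = [(point, label), ...]."""
    return min(refs, key=lambda r: sum((a - b) ** 2 for a, b in zip(r[0], x)))[1]

def reduced_nn(train):
    """Reduced nearest neighbor rule: remove a stored sample whenever the
    remaining set still classifies every training sample correctly."""
    keep = list(train)
    changed = True
    while changed:
        changed = False
        for s in list(keep):
            trial = [t for t in keep if t is not s]
            if trial and all(nn_label(x, trial) == y for x, y in train):
                keep = trial
                changed = True
    return keep

random.seed(6)
train = ([([random.gauss(0, 0.5), random.gauss(0, 0.5)], 0) for _ in range(30)]
         + [([random.gauss(4, 0.5), random.gauss(4, 0.5)], 1) for _ in range(30)])
subset = reduced_nn(train)
assert len(subset) < len(train)                         # storage reduced
assert all(nn_label(x, subset) == y for x, y in train)  # still consistent
```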

17.
18.
The key operation in Elliptic Curve Cryptosystems (ECC) is point scalar multiplication. Making use of the Frobenius endomorphism, Müller and Smart proposed two efficient algorithms for point scalar multiplication over even and odd finite fields, respectively. This paper reduces the corresponding multiplier modulo τ^(k-1) + … + τ + 1 and improves the above algorithms. An implementation of our Algorithm 1 in Maple for a given elliptic curve shows that it is at least twice as fast as the binary method. By setting up a precomputation table, Algorithm 2, an improved version of Algorithm 1, is proposed. Since the time for the precomputation table can be considered free, Algorithm 2 is about (3/2)·log₂ q − 1 times faster than the binary method for an elliptic curve over F_q.
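The binary (double-and-add) method used as the baseline above can be sketched on a toy curve; the τ-adic Frobenius expansion itself is not reproduced here, and the small curve parameters below are hypothetical.

```python
# toy short Weierstrass curve y^2 = x^3 + A*x + B over F_p, affine coordinates
P_MOD, A, B = 97, 2, 3

def inv(x):
    return pow(x, P_MOD - 2, P_MOD)  # Fermat inverse, p prime

def add(P, Q):
    """Elliptic curve point addition; None is the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv((x2 - x1) % P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mul(k, P):
    """Binary method: one doubling per bit, one extra add per set bit."""
    R = None
    for bit in bin(k)[2:]:
        R = add(R, R)
        if bit == "1":
            R = add(R, P)
    return R

G = (3, 6)  # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
assert (G[1] ** 2 - (G[0] ** 3 + A * G[0] + B)) % P_MOD == 0
acc = None
for k in range(1, 20):
    acc = add(acc, G)
    assert scalar_mul(k, G) == acc  # matches repeated addition
```

The binary method needs about log₂ k doublings plus an add per set bit; Frobenius-based methods replace the expensive doublings with the nearly free field map (x, y) ↦ (x^q, y^q), which is the source of the speedup claimed above.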

19.
As deduplication operations accumulate, metadata such as the manifest files used to store the fingerprint index keeps growing, causing non-negligible storage overhead. Compressing the metadata produced during deduplication without affecting the deduplication ratio, and thereby shrinking the lookup index, is therefore an important factor in further improving deduplication efficiency and storage utilization. Observing the large amount of redundancy in the lookup metadata, this paper proposes Dedup2, a condensed-nearest-neighbor-based redundancy-elimination algorithm for deduplication metadata. The algorithm first partitions the lookup metadata into several classes using a clustering algorithm, then applies the condensed nearest neighbor algorithm to eliminate highly similar metadata entries and obtain a lookup subset, on which deduplication of data objects is performed using file similarity. Experimental results show that Dedup2 can shrink the lookup index by more than 50% while maintaining a nearly identical deduplication ratio.

20.
The combination of singular value decomposition (SVD) and vector quantization (VQ) is proposed as a compression technique to achieve low bit rate and high quality image coding. Given a codebook consisting of singular vectors, two algorithms, which find the best-fit candidates without involving the complicated SVD computation, are described. Simulation results show that the proposed methods are better than the discrete cosine transform (DCT) in terms of energy compaction, data rate, image quality, and decoding complexity.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号