20 similar documents found (search time: 171 ms)
1.
2.
3.
In passive millimeter-wave imaging, the limited antenna aperture restricts the achievable image resolution, so effective post-processing is required to enhance it. An improved POCS super-resolution algorithm is proposed that combines the strengths of Wiener-filter restoration and the projection onto convex sets (POCS) method: Wiener filtering restores the low-frequency components within the passband, while POCS serves as the main iteration to extrapolate the spectrum without corrupting those low-frequency components. Experimental results show that the algorithm enhances image resolution, improves convergence speed, and reduces computation, which favors real-time super-resolution for passive millimeter-wave imaging.
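A minimal sketch of the hybrid scheme the abstract describes: a frequency-domain Wiener step followed by POCS iterations that preserve the restored passband. All names, the noise-to-signal ratio, and the band mask are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def wiener_restore(img, psf_ft, nsr=0.01):
    # Wiener deconvolution in the frequency domain: restores the
    # low-frequency components inside the system passband.
    W = np.conj(psf_ft) / (np.abs(psf_ft) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * np.fft.fft2(img)))

def pocs_superresolve(img, psf_ft, band_mask, iters=20):
    # POCS main loop: the Wiener result fixes the trusted passband
    # spectrum; each iteration extrapolates out-of-band frequencies and
    # projects onto a spatial constraint (non-negativity here), leaving
    # the low-frequency components untouched.
    est = wiener_restore(img, psf_ft)
    low_ft = np.fft.fft2(est) * band_mask   # trusted low-frequency spectrum
    for _ in range(iters):
        F = np.fft.fft2(est)
        F = low_ft + F * (1 - band_mask)    # keep passband unchanged
        est = np.clip(np.real(np.fft.ifft2(F)), 0, None)
    return est
```

The non-negativity projection stands in for whatever amplitude constraint set a real system would use.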
4.
Scene generation based on image resolution enhancement algorithms (Total citations: 1; self: 1, others: 0)
To support high-resolution, photorealistic virtual environments and zoomed viewing during scene browsing, image-based high-resolution scene generation is studied. Image resolution enhancement reconstructs a higher-resolution scene image from already-sampled information, and comprises two techniques: single-frame enhancement and multi-frame (image sequence) enhancement. For the single-frame case, an entropy-variational resolution enhancement algorithm is proposed: building on Bayesian estimation and the maximum-entropy principle, pixel gradient information is incorporated to obtain an anisotropic, adaptive enhancement algorithm driven by image gradients. For multi-frame super-resolution restoration, bilateral filtering is introduced on top of the single-frame entropy-variational model, yielding a generalized entropy-variational super-resolution model and a doubly weighted anisotropic enhancement algorithm based on geometric distance and gradient information. Experiments show that the restored high-resolution images achieve higher peak signal-to-noise ratio and better visual quality, with a clear advantage over traditional resolution enhancement algorithms.
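The "double weighting" by geometric distance and local image content resembles a bilateral-filter kernel, which can be sketched as follows. This is a hypothetical helper (names and parameters are assumptions, not the paper's exact model) combining a spatial kernel with a range kernel:

```python
import numpy as np

def bilateral_weights(patch, center, sigma_s=1.0, sigma_r=0.1):
    # Double weighting: a geometric (distance-to-center) Gaussian times a
    # range (intensity-difference) Gaussian, normalized to sum to one.
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    cy, cx = h // 2, w // 2
    gs = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma_s ** 2))
    gr = np.exp(-((patch - center) ** 2) / (2 * sigma_r ** 2))
    wgt = gs * gr
    return wgt / wgt.sum()
```

In a super-resolution data term, such weights would downweight pixels that are far away or that differ sharply in intensity from the pixel being estimated.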
5.
6.
7.
8.
A survey of image super-resolution reconstruction algorithms (Total citations: 4; self: 1, others: 4)
The basic principles and mathematical models of super-resolution reconstruction are introduced, and existing image super-resolution algorithms are surveyed. Current algorithms are divided into two broad classes, reconstruction-constraint-based methods and learning-based methods, and the classical techniques of each class are described. Finally, directions for further research on super-resolution of low-quality images are indicated.
9.
10.
Distributed strain measurement along an entire fiber is realized from Rayleigh backscatter using optical frequency-domain reflectometry (OFDR). The strain demodulation algorithm is improved: a two-stage (quadratic) cross-correlation achieves a wavelength resolution of 5 pm and a spatial resolution of 2 cm. A distributed fiber strain sensing system based on OFDR was built, the strain coefficient was calibrated, and distributed strain measurement over a 5 m fiber was demonstrated, with a measurement range of 50-500 με and a strain resolution of 4.2 με. The improved algorithm overcomes the trade-off between wavelength resolution and spatial resolution inherent in the traditional approach, raising spatial resolution without sacrificing wavelength resolution, and offers a useful reference for further research on this strain sensing technique.
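The correlation-based demodulation idea can be sketched as a single coarse cross-correlation per fiber segment; the two-stage ("quadratic") correlation in the abstract refines this further. All names and the calibration coefficient below are illustrative assumptions:

```python
import numpy as np

def spectral_shift(ref, meas, dlam):
    # Cross-correlate the reference and measured Rayleigh spectra of one
    # fiber segment; the correlation peak position gives the wavelength
    # shift (in units of the spectral sample spacing dlam).
    corr = np.correlate(meas - meas.mean(), ref - ref.mean(), mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return lag * dlam

def strain_profile(ref_trace, meas_trace, seg, dlam, k_eps):
    # Distributed strain: slide a window along the fiber and convert each
    # segment's spectral shift to strain via the calibration coefficient.
    n = len(ref_trace) // seg
    return np.array([
        spectral_shift(ref_trace[i * seg:(i + 1) * seg],
                       meas_trace[i * seg:(i + 1) * seg], dlam) / k_eps
        for i in range(n)
    ])
```

The window length `seg` embodies the wavelength/spatial resolution trade-off the paper addresses: a shorter window localizes better but resolves the shift more coarsely.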
11.
Transcoding algorithms that eliminate distortion accumulation due to tandem transcodings between memoryless, finite-state, and predictive vector quantization and pulse code modulation (PCM) are presented. The algorithms can be implemented using table lookups for memoryless and finite-state vector quantization, whereas predictive vector quantization requires online calculations. Computer simulations indicate a 6 dB improvement in the case of 16 kb/s predictive vector quantizers, 48 kb/s PCM, and four tandems for speech.
12.
This paper evaluates the performance of an image compression system based on wavelet-based subband decomposition and vector quantization. The images are decomposed using wavelet filters into a set of subbands with different resolutions corresponding to different frequency bands. The resulting subbands are vector quantized using the Linde-Buzo-Gray (1980) algorithm and various fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive neural network through an unsupervised learning process. The quality of the multiresolution codebooks designed by these algorithms is measured on the reconstructed images belonging to the training set used for multiresolution codebook design and the reconstructed images from a testing set.
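The LBG step named here is the classic generalized Lloyd iteration, sketched below with hard nearest-codeword assignment; the FALVQ variants evaluated in the paper replace this hard assignment with fuzzy membership updates. This is a generic sketch, not the paper's code:

```python
import numpy as np

def lbg_codebook(data, k, iters=20, seed=0):
    # LBG / generalized Lloyd iteration: alternate nearest-codeword
    # partitioning of the training vectors with centroid updates.
    rng = np.random.default_rng(seed)
    cb = data[rng.choice(len(data), k, replace=False)].astype(float)
    for _ in range(iters):
        # squared Euclidean distance of every vector to every codeword
        d = ((data[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        lbl = d.argmin(1)
        for j in range(k):
            if (lbl == j).any():
                cb[j] = data[lbl == j].mean(0)  # centroid update
    return cb
```

In the subband setting each subband would get its own training set and codebook, giving the multiresolution codebooks the abstract refers to.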
13.
This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors, which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.
14.
Using vector quantization for image processing (Total citations: 1; self: 0, others: 1)
Cosman P.C., Oehler K.L., Riskin E.A., Gray R.M. Proceedings of the IEEE, 1993, 81(9): 1326-1341
A review is presented of vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, which is a popular image compression algorithm. Compression has traditionally been done with little regard for image processing operations that may precede or follow the compression step. Recent work has used vector quantization both to simplify image processing tasks, such as enhancement, classification, halftoning, and edge detection, and to reduce the computational complexity by performing the tasks simultaneously with the compression. The fundamental ideas of vector quantization are explained, and vector quantization algorithms that perform image processing are surveyed.
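The core mapping described here, each input vector to the index of its nearest codeword plus table-lookup reproduction, can be sketched in a few lines (illustrative only):

```python
import numpy as np

def vq_encode(blocks, codebook):
    # Map each input vector to the index of its nearest codeword
    # (squared Euclidean distance): the VQ compression step.
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    # Reproduction is just a table lookup into the codebook.
    return codebook[indices]
```

The survey's point is that image processing operations can act on the index stream (or be folded into the codebook) instead of on decoded pixels.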
15.
Trellis-coded vector quantization (Total citations: 5; self: 0, others: 5)
Fischer T.R., Marcellin M.W., Wang M. IEEE Transactions on Information Theory, 1991, 37(6): 1551-1566
Trellis-coded quantization is generalized to allow a vector reproduction alphabet. Three encoding structures are described, several encoder design rules are presented, and two design algorithms are developed. It is shown that for a stationary ergodic vector source, if the optimized trellis-coded vector quantization reproduction process is jointly stationary and ergodic with the source, then the quantization noise is zero-mean and of a variance equal to the difference between the source variance and the variance of the reproduction sequence. Several examples illustrate the encoder design procedure and performance.
16.
This paper presents a fast method for building and searching split codebooks for vector quantization. The proposed method is evaluated in near transparent quality vector quantization of Line Spectral Frequencies (LSF) at 24 bits per frame. The method is based on a family of fractals called Space-Filling Curves (SFC). The SF curves achieve a significant saving in the complexity of vector quantization by reducing the problem to quantization in one-dimensional space. The paper presents algorithms for the generation of the SFC mapping utilizing the self-replication feature of the curves, and a number of simulation experiments to demonstrate the effectiveness of the method. It is shown that the SFC can reduce the search complexity of split codebooks by a factor of 8-32 times with a slight degradation in the vector quantization performance.
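A Z-order (Morton) curve is used below as a stand-in for the SFC family, to illustrate how a space-filling curve reduces codebook search to one dimension. This is an assumed sketch, not the paper's curve construction or its self-replication generation algorithm:

```python
import bisect

def morton_key(v, bits=8):
    # Interleave the bits of the vector's components (Z-order curve),
    # mapping a multi-dimensional point to a scalar curve position;
    # nearby points tend to receive nearby keys.
    key = 0
    for b in range(bits):
        for i, x in enumerate(v):
            key |= ((x >> b) & 1) << (b * len(v) + i)
    return key

def sfc_search(query, sorted_keys, sorted_words):
    # 1-D search along the curve: binary-search the query's key among the
    # codeword keys, then check a few neighbors, since curve locality is
    # only approximate. Returns the index of the best candidate.
    j = bisect.bisect_left(sorted_keys, morton_key(query))
    cands = range(max(0, j - 2), min(len(sorted_words), j + 2))
    return min(cands, key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(sorted_words[i], query)))
```

Sorting the codebook once by curve key turns every subsequent search into a logarithmic-time lookup plus a handful of distance evaluations, which is the source of the complexity savings the abstract reports.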
17.
18.
This paper proposes two reliability-based iterative majority-logic decoding algorithms that reduce decoding complexity in two ways: (1) check nodes process syndrome information, saving the computation of extrinsic messages; (2) variable nodes use the syndrome information in a voting (counting) process on the total message. Combined with non-uniform quantization, received signals near the decision threshold are processed more finely. In addition, the direction and magnitude of the reliability offset are designed using the quantization parameters and the column-weight profile. Simulations show that the proposed algorithms work effectively with very few quantization bits (3-4 bits), achieving good decoding performance and fast convergence.
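The hard-decision core that reliability-based majority-logic decoders build on can be sketched as plain syndrome-driven bit flipping (Gallager-style). The paper's soft reliabilities, voting rules, and non-uniform quantization are not modeled here; this is a generic illustration:

```python
import numpy as np

def bit_flip_decode(H, y, iters=10):
    # Syndrome-driven majority-logic bit flipping: each iteration counts,
    # for every bit, how many unsatisfied checks it participates in, and
    # flips the bit with the most votes. Stops when the syndrome is zero.
    x = y.copy()
    for _ in range(iters):
        s = H @ x % 2              # syndrome (unsatisfied checks)
        if not s.any():
            break
        votes = H.T @ s            # unsatisfied-check count per bit
        x[np.argmax(votes)] ^= 1   # flip the most-suspect bit
    return x
```

Reliability-based variants weight these votes by quantized channel reliabilities instead of treating every unsatisfied check equally.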
19.
This article discusses bit allocation and adaptive search algorithms for mean-residual vector quantization (MRVQ) and multistage vector quantization (MSVQ). The adaptive search algorithm uses a buffer and a distortion threshold function to control the bit rate that is assigned to each input vector. It achieves a constant rate for the entire image but a variable bit rate for each vector in the image. For a given codebook and several bit rates, we compare the performance between the optimal bit allocation and adaptive search algorithms. The results show that the performance of the adaptive search algorithm is only 0.20-0.53 dB worse than that of the optimal bit allocation algorithm, but the complexity of the adaptive search algorithm is much less than that of the optimal bit allocation algorithm.
20.
IEEE Transactions on Information Theory, 1984, 30(6): 805-814
A vector quantizer maps a k-dimensional vector into one of a finite set of output vectors or "points". Although certain lattices have been shown to have desirable properties for vector quantization applications, there are as yet no algorithms available in the quantization literature for building quantizers based on these lattices. An algorithm for designing vector quantizers based on the root lattices A_n, D_n, and E_n and their duals is presented. Also, a coding scheme that has general applicability to all vector quantizers is presented. A four-dimensional uniform vector quantizer is used to encode Laplacian and gamma-distributed sources at entropy rates of one and two bits/sample and is demonstrated to achieve performance that compares favorably with the rate distortion bound and other scalar and vector quantizers. Finally, an application using uniform four- and eight-dimensional vector quantizers for encoding the discrete cosine transform coefficients of an image at 0.5 bit/pel is presented, which visibly illustrates the performance advantage of vector quantization over scalar quantization.
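For the D_n lattice mentioned here (integer vectors with an even coordinate sum), the standard Conway-Sloane style nearest-point rule is short enough to sketch. This is a generic illustration of lattice quantization, not the paper's full quantizer design or coding scheme:

```python
import numpy as np

def nearest_Dn(x):
    # Nearest point of D_n: round each coordinate to the nearest integer;
    # if the coordinate sum is odd, re-round the component with the
    # largest rounding error in the other direction, restoring parity.
    f = np.rint(np.asarray(x, dtype=float))
    if int(f.sum()) % 2 != 0:
        i = int(np.argmax(np.abs(x - f)))
        f[i] += np.sign(x[i] - f[i]) or 1.0  # step toward x (or +1 on a tie)
    return f
```

Quantizing to a scaled copy of the lattice trades off rate against distortion, and the even-sum structure is what gives D_n its packing advantage over the cubic (scalar) lattice.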