Similar Articles
A total of 20 similar articles were found.
1.
This article presents an improved version of an algorithm designed to perform image restoration via nonlinear interpolative vector quantization (NLIVQ). The improvement results from using lapped blocks during the decoding process. The algorithm is trained on original and diffraction-limited image pairs. The discrete cosine transform is again used in the codebook design process to control complexity. Simulation results are presented which demonstrate improvements over the nonlapped algorithm in both observed image quality and peak signal-to-noise ratio. In addition, the nonlinearity of the algorithm is shown to produce super-resolution in the restored images.
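As a concrete picture of the lapped decoding step, here is a minimal Python sketch assuming pre-trained paired codebooks `deg_codebook` (DCT-domain features of degraded blocks) and `res_codebook` (the matching restored blocks); these names are illustrative, and the paper's training and complexity-control details are not reproduced.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D orthonormal DCT used as the block feature, as in the codebook design."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def nlivq_restore(degraded, deg_codebook, res_codebook, block=8, step=4):
    """Lapped NLIVQ decoding sketch: each overlapping block is matched, via its DCT
    feature, to the nearest entry of a codebook trained on degraded blocks, and the
    paired 'restored' codevector is accumulated under a smooth blending window."""
    H, W = degraded.shape
    acc = np.zeros((H, W))
    wsum = np.zeros((H, W))
    win = np.outer(np.hanning(block), np.hanning(block)) + 1e-6   # lapped blending window
    for y in range(0, H - block + 1, step):
        for x in range(0, W - block + 1, step):
            feat = dct2(degraded[y:y+block, x:x+block].astype(float)).ravel()
            k = np.argmin(((deg_codebook - feat) ** 2).sum(axis=1))   # nearest-neighbour lookup
            acc[y:y+block, x:x+block] += win * res_codebook[k].reshape(block, block)
            wsum[y:y+block, x:x+block] += win
    return acc / np.maximum(wsum, 1e-12)
```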

2.
In this paper we propose an adaptive image restoration algorithm using block-based edge classification for reducing block artifacts in compressed images. In order to efficiently reduce block artifacts, the edge direction of each block is classified using a model-fitting criterion, and a constrained least-squares (CLS) filter with the corresponding direction is used for restoring the block. The proposed restoration filter is derived from the observation that the quantization operation in a series of coding processes is a nonlinear, many-to-one mapping operator. We then propose an approximated version of a constrained optimization technique as a restoration process for removing the nonlinear and space-varying degradation operator. For real-time implementation, the proposed restoration filter can be realized in the form of a truncated FIR filter, which is suitable for postprocessing reconstructed images in digital TV, video conferencing systems, etc.
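The directional, space-varying filter proposed in the paper is not reproduced here, but the classical constrained least-squares building block it starts from can be sketched in a few lines: frequency-domain CLS restoration with a Laplacian smoothness constraint, where `alpha` is the regularization weight.

```python
import numpy as np

def cls_restore(g, psf, alpha=0.01):
    """Frequency-domain CLS restoration: F = H* G / (|H|^2 + alpha |C|^2),
    with C the discrete Laplacian acting as the smoothness constraint."""
    hp = np.zeros_like(g, dtype=float)
    hp[:psf.shape[0], :psf.shape[1]] = psf
    hp = np.roll(hp, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))  # centre PSF at origin
    lap = np.zeros_like(g, dtype=float)               # circulant discrete Laplacian kernel
    lap[0, 0] = 4
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1
    Hf, Cf, Gf = np.fft.fft2(hp), np.fft.fft2(lap), np.fft.fft2(g.astype(float))
    Ff = np.conj(Hf) * Gf / (np.abs(Hf) ** 2 + alpha * np.abs(Cf) ** 2)
    return np.real(np.fft.ifft2(Ff))
```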

3.
In this paper, we propose a novel learning-based image restoration scheme for compressed images by suppressing compression artifacts and recovering high frequency (HF) components based upon the priors learnt from a training set of natural images. The JPEG compression process is simulated by a degradation model, represented by the signal attenuation and the Gaussian noise addition process. Based on the degradation model, the input image is locally filtered to remove Gaussian noise. Subsequently, the learning-based restoration algorithm reproduces the HF component to handle the attenuation process. Specifically, a Markov-chain based mapping strategy is employed to generate the HF primitives based on the learnt codebook. Finally, a quantization constraint algorithm regularizes the reconstructed image coefficients within a reasonable range, to prevent possible over-smoothing and thus ameliorate the image quality. Experimental results have demonstrated that the proposed scheme can reproduce higher quality images in terms of both objective and subjective quality.
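The quantization-constraint step is the easiest part to make concrete: project each 8x8 DCT block of the enhanced image back into the interval allowed by the decoded coefficients and the quantization table. A hedged sketch follows; the learning-based HF synthesis and Markov-chain mapping are not shown.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):
    return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(b):
    return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def quantization_constraint(restored, decoded, qtable, block=8):
    """Clip each 8x8 DCT coefficient of the restored image to the quantization interval
    implied by the decoded image and the quantization table, so that the enhancement
    never contradicts what the compressed bitstream actually allows."""
    out = restored.astype(float).copy()
    for y in range(0, out.shape[0] - block + 1, block):
        for x in range(0, out.shape[1] - block + 1, block):
            C = dct2(out[y:y+block, x:x+block])
            Cd = dct2(decoded[y:y+block, x:x+block].astype(float))
            out[y:y+block, x:x+block] = idct2(np.clip(C, Cd - qtable / 2.0, Cd + qtable / 2.0))
    return out
```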

4.
Research on a total variation blind restoration algorithm based on anisotropic regularization
To address the restoration of images degraded by atmospheric turbulence, a new total variation blind restoration algorithm based on anisotropic, nonlinear regularization is proposed. Exploiting properties of the image and of the turbulence point spread function, the algorithm applies spatially adaptive, anisotropy-based regularization and constructs a nonlinear, spatially anisotropic regularization functional, so that gradient smoothing adapts automatically while the target image is restored and the point spread function is estimated. The cost functional is minimized with an alternating minimization scheme, and the nonlinear equations are linearized by a fixed-point iteration strategy, allowing the point spread function and the image to be estimated quickly. A series of restoration experiments on a PC, using both numerically simulated and real degraded images, verifies the effectiveness and robustness of the algorithm.
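A hedged sketch of the alternating-minimization idea, with plain isotropic TV regularizers and simple gradient steps standing in for the paper's anisotropic, spatially adaptive regularization and fixed-point linearization; step sizes and weights are illustrative and would need tuning.

```python
import numpy as np
from scipy.signal import fftconvolve

def tv_grad(u, eps=1e-3):
    """Gradient of the smoothed total-variation functional sum sqrt(|grad u|^2 + eps^2)."""
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
    px, py = ux / mag, uy / mag
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def blind_tv_deconv(g, psf_size=15, iters=300, lam=2e-3, gam=1e-2, step=0.5):
    """Alternately update the image f and the PSF h to reduce
    0.5*||h*f - g||^2 + lam*TV(f) + gam*TV(h)."""
    g = g.astype(float)
    f = g.copy()
    h = np.ones((psf_size, psf_size)) / psf_size ** 2          # flat initial PSF guess
    k = psf_size // 2
    for _ in range(iters):
        r = fftconvolve(f, h, mode='same') - g                 # data residual h*f - g
        # image step: data-fidelity gradient plus TV prior, with positivity projection
        f = np.clip(f - step * (fftconvolve(r, h[::-1, ::-1], mode='same') + lam * tv_grad(f)), 0, None)
        # PSF step: correlate the residual with the image, crop to the PSF support
        corr = fftconvolve(r, f[::-1, ::-1], mode='same')
        cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
        grad_h = corr[cy - k:cy + k + 1, cx - k:cx + k + 1] + gam * tv_grad(h)
        h = np.clip(h - 0.01 * step * grad_h / (np.abs(grad_h).max() + 1e-12), 0, None)
        h /= h.sum() + 1e-12                                    # keep the PSF non-negative with unit sum
    return f, h
```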

5.
Atmospheric turbulence is an important form of motion in the atmosphere; its presence markedly strengthens the vertical and horizontal exchange of momentum, heat, water vapor, and pollutants, and this disturbance severely limits the ability of electro-optical imaging systems to resolve targets. Images degraded by turbulence nevertheless contain "lucky" regions, and with a suitable algorithm a high-resolution restored image can be obtained. To acquire turbulence-blurred images containing lucky regions, artificial turbulence was generated in the laboratory and image sequences disturbed by it were captured with short-exposure techniques. A rectangular overlapping-block method is applied to improve a partial differential equation (PDE) based sequence restoration algorithm, which is then used to process the acquired short-exposure sequences. The results show that the quality of the synthesized image produced by the algorithm is clearly improved and that the algorithm restores well the image degradation caused by atmospheric turbulence.
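The lucky-region idea with overlapping rectangular blocks can be sketched as follows; the gradient-energy sharpness measure and raised-cosine blending are illustrative stand-ins, and the PDE-based restoration stage of the paper is not reproduced.

```python
import numpy as np

def lucky_fusion(frames, block=32, step=16):
    """For every overlapping block position, pick the frame whose block has the largest
    gradient energy (a simple sharpness proxy for a 'lucky region') and blend the
    selected blocks with a raised-cosine window."""
    H, W = frames[0].shape
    acc = np.zeros((H, W))
    wsum = np.zeros((H, W))
    win = np.outer(np.hanning(block), np.hanning(block)) + 1e-6
    for y in range(0, H - block + 1, step):
        for x in range(0, W - block + 1, step):
            best, best_score = None, -1.0
            for frame in frames:
                b = frame[y:y+block, x:x+block].astype(float)
                gy, gx = np.gradient(b)
                score = np.mean(gx ** 2 + gy ** 2)       # sharper block -> larger gradient energy
                if score > best_score:
                    best, best_score = b, score
            acc[y:y+block, x:x+block] += win * best
            wsum[y:y+block, x:x+block] += win
    return acc / np.maximum(wsum, 1e-12)
```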

6.
A method for modeling noise in medical images
We have developed a method to study the statistical properties of the noise found in various medical images. The method is specifically designed for types of noise with uncorrelated fluctuations. Such signal fluctuations generally originate in the physical processes of imaging rather than in the tissue textures. Various types of noise (e.g., photon, electronics, and quantization) often contribute to degrade medical images; the overall noise is generally assumed to be additive with a zero-mean, constant-variance Gaussian distribution. However, statistical analysis suggests that the noise variance could be better modeled by a nonlinear function of the image intensity depending on external parameters related to the image acquisition protocol. We present a method to extract the relationship between an image intensity and the noise variance and to evaluate the corresponding parameters. The method was applied successfully to magnetic resonance images with different acquisition sequences and to several types of X-ray images.
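A minimal sketch of the measurement idea: estimate local means and variances in small windows, keep only homogeneous regions, and bin the variance against intensity. The binning and homogeneity test here are illustrative simplifications of the paper's procedure.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def noise_vs_intensity(img, win=7, nbins=32):
    """Estimate how the local noise variance depends on the local mean intensity.
    Textured regions are discarded with a simple gradient test so that tissue
    structure is not mistaken for noise."""
    x = img.astype(float)
    m = uniform_filter(x, win)
    v = np.maximum(uniform_filter(x ** 2, win) - m ** 2, 0.0)
    gy, gx = np.gradient(x)
    energy = gx ** 2 + gy ** 2
    flat = energy < np.percentile(energy, 50)            # keep the more homogeneous half
    edges = np.linspace(m[flat].min(), m[flat].max(), nbins + 1)
    idx = np.digitize(m[flat], edges) - 1
    means, variances = [], []
    for b in range(nbins):
        sel = idx == b
        if sel.sum() > 50:
            means.append(m[flat][sel].mean())
            variances.append(np.median(v[flat][sel]))    # robust per-bin variance estimate
    return np.array(means), np.array(variances)

# A low-order fit such as np.polyfit(means, variances, 2) then summarizes the variance
# as a function of intensity, which is the relationship the paper models.
```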

7.
RBFN restoration of nonlinearly degraded images
We investigate a technique for image restoration using nonlinear networks based on radial basis functions. The technique is also based on the concept of training or learning by examples. When trained properly, these networks are used as spatially invariant feedforward nonlinear filters that can perform restoration of images degraded by nonlinear degradation mechanisms. We examine a number of network structures including the Gaussian radial basis function network (RBFN) and some extensions of it, as well as a number of training algorithms including the stochastic gradient (SG) algorithm that we have proposed earlier. We also propose a modified structure based on the Gaussian-mixture model and a learning algorithm for the modified network. Experimental results indicate that the radial basis function network and its extensions can be very useful in restoring images degraded by nonlinear distortion and noise.
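A minimal Gaussian RBFN sketch: centers drawn from the training patches and output weights fitted by least squares, which is a simpler training recipe than the stochastic-gradient and Gaussian-mixture variants studied in the paper but uses the same network form. Here `patches` are flattened windows of the degraded image and `targets` the corresponding clean pixels (illustrative names).

```python
import numpy as np

def train_rbfn(patches, targets, n_centers=64, sigma=0.5, seed=0):
    """Gaussian RBF network: a random subset of training patches serves as the centers,
    and the linear output weights are fitted by ordinary least squares."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), n_centers, replace=False)]
    d2 = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    phi = np.exp(-d2 / (2 * sigma ** 2))                 # hidden-layer activations
    w, *_ = np.linalg.lstsq(phi, targets, rcond=None)
    return centers, w

def rbfn_predict(patches, centers, w, sigma=0.5):
    """Apply the trained network as a spatially invariant nonlinear filter."""
    d2 = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ w
```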

8.
This paper presents an approach for the effective combination of interpolation with binarization of gray level text images to reconstruct a high resolution binary image from a lower resolution gray level one. We study two nonlinear interpolative techniques for text image interpolation. These nonlinear interpolation methods map quantized low dimensional 2x2 image blocks to higher dimensional 4x4 (possibly binary) blocks using a table lookup operation. The first method performs interpolation of text images using context-based, nonlinear, interpolative, vector quantization (NLIVQ). This system has a simple training procedure and has performance (for gray-level high resolution images) that is comparable to our more sophisticated generalized interpolative VQ (GIVQ) approach, which is the second method. In it, we jointly optimize the quantizer and interpolator to find matched codebooks for the low and high resolution images. Then, to obtain the binary codebook that incorporates binarization with interpolation, we introduce a binary constrained optimization method using GIVQ. In order to incorporate the nearest neighbor constraint on the quantizer while minimizing the distortion in the interpolated image, a deterministic-annealing-based optimization technique is applied. With a few interpolation examples, we demonstrate the superior performance of this method over the NLIVQ method (especially for binary outputs) and other standard techniques, e.g., bilinear interpolation and pixel replication.
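The table-lookup interpolation common to both NLIVQ and GIVQ decoding reduces to a nearest-codevector search followed by a paired-block copy. A sketch assuming pre-trained paired codebooks `low_codebook`/`high_codebook` (hypothetical names) and even image dimensions:

```python
import numpy as np

def interpolate_by_lookup(low_img, low_codebook, high_codebook):
    """Map each non-overlapping 2x2 block of the low-resolution image to the 4x4 block
    paired with its nearest low-resolution codevector, i.e. a pure table lookup.
    low_codebook: (K, 4) flattened 2x2 codevectors; high_codebook: (K, 16) paired 4x4 blocks."""
    H, W = low_img.shape                       # assumed to be multiples of 2
    out = np.zeros((2 * H, 2 * W))
    for y in range(0, H, 2):
        for x in range(0, W, 2):
            b = low_img[y:y+2, x:x+2].reshape(-1).astype(float)
            k = np.argmin(((low_codebook - b) ** 2).sum(axis=1))   # nearest codevector index
            out[2*y:2*y+4, 2*x:2*x+4] = high_codebook[k].reshape(4, 4)
    return out
```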

9.
Dequantizing image orientation
We address the problem of computing a local orientation map in a digital image. We show that standard image gray level quantization causes a strong bias in the repartition of orientations, hindering any accurate geometric analysis of the image. We then propose a simple dequantization algorithm, which maintains all of the image information and transforms the quantization noise into a nearby Gaussian white noise (we actually prove that only Gaussian noise can maintain isotropy of orientations). Mathematical arguments are used to show that this results in the restoration of a high quality image isotropy. In contrast with other classical methods, it turns out that this property can be obtained without smoothing the image or increasing the signal-to-noise ratio (SNR). As an application, it is shown in the experimental section that, thanks to this dequantization of orientations, such geometric algorithms as the detection of nonlocal alignments can be performed efficiently. We also point out similar improvements of orientation quality when our dequantization method is applied to aliased images.
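A small numerical demonstration of the effect described (not the authors' dequantization algorithm): gray-level quantization concentrates gradient orientations at a few angles, and perturbing the quantized values with small Gaussian noise spreads the histogram back toward isotropy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_hist(img, bins=90):
    """Histogram of local gradient orientations (flat pixels are skipped)."""
    gy, gx = np.gradient(img.astype(float))
    mask = gx ** 2 + gy ** 2 > 1e-6
    return np.histogram(np.arctan2(gy, gx)[mask], bins=bins, range=(-np.pi, np.pi))[0]

rng = np.random.default_rng(0)
smooth = gaussian_filter(rng.normal(size=(256, 256)), 4) * 40 + 128   # smooth synthetic image
quantized = np.round(smooth)                                          # gray-level quantization
dequant = quantized + rng.normal(scale=0.4, size=quantized.shape)     # small Gaussian perturbation

h_q, h_d = orientation_hist(quantized), orientation_hist(dequant)
# h_q shows strong spikes at multiples of 45 degrees; h_d is much closer to uniform.
print(h_q.std() / h_q.mean(), h_d.std() / h_d.mean())
```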

10.
Restoration of blurred star field images by maximally sparse optimization
The problem of removing blur from, or sharpening, astronomical star field intensity images is discussed. An approach to image restoration that recovers image detail using a constrained optimization theoretic approach is introduced. Ideal star images may be modeled as a few point sources in a uniform background. It is argued that a direct measure of image sparseness is the appropriate optimization criterion for deconvolving the image blurring function. A sparseness criterion based on the l(p) norm is presented, and candidate algorithms for solving the ensuing nonlinear constrained optimization problem are presented and reviewed. Synthetic and actual star image reconstruction examples are presented to demonstrate the method's superior performance as compared with several image deconvolution methods.
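A hedged sketch of the sparseness idea in a penalized (rather than constrained) form: minimize a data-fit term plus an l(p) (p < 1) sparseness penalty by projected gradient descent. This is a much simpler solver than the constrained algorithms the paper reviews, and the problem is non-convex, so the result depends on initialization and step size.

```python
import numpy as np
from scipy.signal import fftconvolve

def lp_sparseness(f, p=0.5, eps=1e-6):
    """The maximally-sparse criterion: smaller sum |f_i|^p (0 < p <= 1) means a sparser image."""
    return np.sum((np.abs(f) + eps) ** p)

def sparse_deconv(g, psf, p=0.5, lam=1e-3, iters=400, step=0.2, eps=1e-6):
    """Minimize 0.5*||psf * f - g||^2 + lam * sum |f_i|^p with a positivity constraint."""
    f = np.clip(g.astype(float).copy(), 0, None)
    for _ in range(iters):
        r = fftconvolve(f, psf, mode='same') - g                   # data residual
        grad = (fftconvolve(r, psf[::-1, ::-1], mode='same')
                + lam * p * (np.abs(f) + eps) ** (p - 1) * np.sign(f))
        f = np.clip(f - step * grad, 0, None)                      # projected gradient step
    return f
```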

11.
The nonlinear principal component analysis (NLPCA) method is combined with vector quantization for the coding of images. The NLPCA is realized using the backpropagation neural network (NN), while vector quantization is performed using the learning vector quantizer (LVQ) NN. The effects of quantization in the quality of the reconstructed images are then compensated by using a novel codebook vector optimization procedure.

12.
To improve image compression ratio and quality, this paper proposes a method for constructing an adaptive quantization table that combines the contrast-sensitivity characteristics of human vision with the spectral features of the image in the transform domain. The table replaces the standard JPEG quantization table, and compression simulations following the JPEG coding algorithm were carried out on three different color images and compared with standard JPEG compression. The experimental results show that, at the same compression ratio, the SSIM and PSNR of the three decompressed color images increase on average by 1.67% and 4.96%, respectively, compared with JPEG. This indicates that the proposed adaptive quantization, which incorporates human visual characteristics, is an effective and practical quantization method.
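A sketch of the general construction: make the quantization step at each DCT frequency inversely proportional to a contrast sensitivity function. The Mannos-Sakrison-style CSF and the mapping from DCT index to cycles/degree used below are illustrative assumptions, not necessarily the weighting used in the paper.

```python
import numpy as np
from scipy.fftpack import dct, idct

def csf(f):
    """Mannos-Sakrison-style contrast sensitivity curve (illustrative choice)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def adaptive_qtable(q0=40.0, fs=32.0):
    """8x8 quantization table whose step at DCT frequency (u, v) grows as the eye's
    sensitivity at that spatial frequency falls."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
    f = np.sqrt(u ** 2 + v ** 2) * fs / 16.0          # rough cycles/degree per DCT bin (assumption)
    s = csf(np.maximum(f, 1e-3))
    q = q0 * s.max() / s                              # less sensitive -> coarser step
    q[0, 0] = q0                                      # keep the DC term fine
    return np.clip(np.round(q), 1, 255)

def quantize_blocks(img, qtable):
    """Quantize/dequantize 8x8 block DCT coefficients with the given table."""
    out = img.astype(float).copy()
    for i in range(0, img.shape[0] - 7, 8):
        for j in range(0, img.shape[1] - 7, 8):
            C = dct(dct(img[i:i+8, j:j+8].astype(float), axis=0, norm='ortho'), axis=1, norm='ortho')
            Cq = np.round(C / qtable) * qtable
            out[i:i+8, j:j+8] = idct(idct(Cq, axis=0, norm='ortho'), axis=1, norm='ortho')
    return out
```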

13.
Rate bounds on SSIM index of quantized images
In this paper, we derive bounds on the structural similarity (SSIM) index as a function of quantization rate for fixed-rate uniform quantization of image discrete cosine transform (DCT) coefficients under the high-rate assumption. The space domain SSIM index is first expressed in terms of the DCT coefficients of the space domain vectors. The transform domain SSIM index is then used to derive bounds on the average SSIM index as a function of quantization rate for uniform, Gaussian, and Laplacian sources. As an illustrative example, uniform quantization of the DCT coefficients of natural images is considered. We show that the SSIM index between the reference and quantized images falls within the bounds for a large set of natural images. Further, we show using a simple example that the proposed bounds could be very useful for rate allocation problems in practical image and video coding applications.
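The quantities involved are easy to reproduce empirically: quantize block-DCT coefficients uniformly and measure the mean SSIM against the reference. The sketch below uses a single step size as a crude stand-in for a fixed-rate quantizer (larger step, lower rate); it illustrates the SSIM-versus-rate behavior the bounds describe, not the derivation itself.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):
    return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(b):
    return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def mean_ssim(x, y, block=8, L=255.0):
    """Block-wise SSIM averaged over the image (simplified: no Gaussian weighting)."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    vals = []
    for i in range(0, x.shape[0] - block + 1, block):
        for j in range(0, x.shape[1] - block + 1, block):
            a, b = x[i:i+block, j:j+block], y[i:i+block, j:j+block]
            ma, mb = a.mean(), b.mean()
            cab = ((a - ma) * (b - mb)).mean()
            vals.append(((2 * ma * mb + C1) * (2 * cab + C2)) /
                        ((ma ** 2 + mb ** 2 + C1) * (a.var() + b.var() + C2)))
    return float(np.mean(vals))

def quantize_dct(img, step):
    """Uniform quantization of 8x8 block-DCT coefficients with a single step size."""
    out = img.astype(float).copy()
    for i in range(0, img.shape[0] - 7, 8):
        for j in range(0, img.shape[1] - 7, 8):
            C = dct2(img[i:i+8, j:j+8].astype(float))
            out[i:i+8, j:j+8] = idct2(np.round(C / step) * step)
    return out

# e.g. evaluate mean_ssim(img, quantize_dct(img, s)) over a range of step sizes s.
```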

14.
Watermarking in the Joint Photographic Experts Group (JPEG)2000 coding pipeline is investigated in this paper. A joint quantization and watermarking method based on trellis-coded quantization (TCQ) is proposed to reliably embed data during the quantization stage of the JPEG2000 part 2 codec. The central contribution of this work is the use of a single quantization module to jointly perform quantization and watermark embedding at the same time. The TCQ-based watermarking technique allows embedding the watermark in the detail sub-bands of one or more resolution levels except the first one. Watermark recovery is performed after image decompression. The performance of this joint scheme in terms of image quality and robustness against common image attacks was estimated on real images.

15.
Two enhanced subband coding schemes using a regularized image restoration technique are proposed: the first controls the global regularity of the decompressed image; the second extends the first approach at each decomposition level. The quantization scheme incorporates scalar quantization (SQ) and pyramidal lattice vector quantization (VQ) with both optimal bit and quantizer allocation. Experimental results show that both the block effect due to VQ and the quantization noise are significantly reduced.

16.
Image coding using vector quantization: a review
A review of vector quantization techniques used for encoding digital images is presented. First, the concept of vector quantization is introduced, then its application to digital images is explained. Spatial, predictive, transform, hybrid, binary, and subband vector quantizers are reviewed. The emphasis is on the usefulness of vector quantization when it is combined with conventional image coding techniques, or when it is used in different domains.
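For readers new to the topic, the basic spatial VQ scheme the review starts from fits in a few lines: train a codebook on image blocks with k-means, transmit one index per block, and reconstruct from the codebook. A minimal sketch, not tied to any particular scheme in the review:

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def vq_codec(img, block=4, n_codes=256):
    """Train a codebook on the image's own non-overlapping blocks, encode each block
    as a single codebook index, and reconstruct from the codebook."""
    H = img.shape[0] - img.shape[0] % block
    W = img.shape[1] - img.shape[1] % block
    blocks = (img[:H, :W].astype(float)
              .reshape(H // block, block, W // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block * block))
    codebook, _ = kmeans2(blocks, n_codes, minit='points')
    indices, _ = vq(blocks, codebook)                    # encoder output: one index per block
    recon = (codebook[indices]
             .reshape(H // block, W // block, block, block)
             .swapaxes(1, 2)
             .reshape(H, W))
    return indices, codebook, recon
```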

17.
Atmospheric turbulence, photon noise, and alignment errors of the optical tracking system severely reduce the resolution of space-object observation images. Based on the maximum-likelihood estimation principle, a multi-frame blind deconvolution algorithm is developed to improve the resolution of target images; a conjugate-gradient optimization method is used to estimate the original object function and the point spread function from the recorded images. Low-pass smoothing filtering progressively suppresses noise during the iterations. Restoration results on simulated data and real images show that the blind deconvolution algorithm effectively overcomes atmospheric turbulence, photon noise, and optical alignment errors, improves the resolution of the target images, and brings the restored resolution up to the optical diffraction limit.

18.
In millimeter-wave image restoration, the Lucy-Richardson (L-R) algorithm is a simple and effective nonlinear method, but when noise cannot be neglected it is difficult for the L-R algorithm to obtain good restoration results. Considering the small data volume and low resolution of millimeter-wave images, a restoration method based on an improved self-snake model combined with the L-R algorithm is proposed. The edge-stopping function of the self-snake model is constructed from the local variance, so that the improved self-snake model removes noise while better preserving edges and detail; the L-R algorithm is then applied for restoration. By denoising with the improved self-snake model, the method effectively reduces the influence of noise on the L-R algorithm. Experimental results show that the proposed algorithm improves the performance of the L-R algorithm in terms of signal-to-noise ratio and correlation, and can be used for restoring noisy images.
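A hedged sketch of the two-stage idea: denoise first, then apply standard Lucy-Richardson deconvolution. A median filter stands in for the improved self-snake diffusion model used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import median_filter

def richardson_lucy(g, psf, iters=30, eps=1e-12):
    """Standard Lucy-Richardson (L-R) deconvolution iteration."""
    f = np.full_like(g, g.mean(), dtype=float)
    psf_m = psf[::-1, ::-1]                              # mirrored PSF for the correlation step
    for _ in range(iters):
        est = fftconvolve(f, psf, mode='same')
        f *= fftconvolve(g / (est + eps), psf_m, mode='same')
    return f

def denoise_then_deconv(g, psf, iters=30):
    """Suppress noise before deconvolution, mirroring the paper's strategy; the median
    filter is a generic stand-in for the improved self-snake denoising."""
    return richardson_lucy(median_filter(g.astype(float), size=3), psf, iters)
```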

19.
Color quantization exploits the eye's tolerance of color differences by merging less important, similar colors in the original image into a single color, reducing the number of colors while keeping the perceived difference between the original and quantized images, i.e. the quantization error, as small as possible. Starting from an analysis of the strengths and weaknesses of the octree color quantization algorithm, the octree structure is optimized and evolved into a data structure based on a binary heap with array indexing; the least important colors are repeatedly degraded until the required number of colors is reached. Experiments show that, under the same conditions, the binary-heap approach yields a smaller quantization error than octree color quantization.
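A hedged sketch of the "repeatedly degrade the least important color" strategy using a min-heap keyed on pixel counts; the 5-bit pre-binning and nearest-color merge rule below are simplifications, not the paper's exact octree-to-heap data structure.

```python
import heapq
import numpy as np

def heap_color_quantize(img, n_colors=16):
    """Merge the least-populated color into its nearest surviving color until only
    n_colors remain. img is an (H, W, 3) uint8 array."""
    pixels = img.reshape(-1, 3)
    coarse = (pixels >> 3).astype(np.int32)              # pre-bin to 5 bits per channel
    keys = coarse[:, 0] * 1024 + coarse[:, 1] * 32 + coarse[:, 2]
    uniq, inverse, counts = np.unique(keys, return_inverse=True, return_counts=True)
    palette = np.stack([uniq // 1024, (uniq // 32) % 32, uniq % 32], axis=1).astype(float) * 8 + 4
    counts = counts.astype(float)
    alive = np.ones(len(uniq), bool)
    remap = np.arange(len(uniq))
    heap = [(counts[i], i) for i in range(len(uniq))]
    heapq.heapify(heap)
    n_alive = len(uniq)
    while n_alive > n_colors and heap:
        c, i = heapq.heappop(heap)
        if not alive[i] or c != counts[i]:               # skip stale heap entries
            continue
        d = np.sum((palette - palette[i]) ** 2, axis=1)  # nearest surviving color
        d[~alive] = np.inf
        d[i] = np.inf
        j = int(np.argmin(d))
        palette[j] = (counts[j] * palette[j] + counts[i] * palette[i]) / (counts[j] + counts[i])
        counts[j] += counts[i]
        alive[i] = False
        remap[i] = j
        heapq.heappush(heap, (counts[j], j))             # refresh the merged entry
        n_alive -= 1
    for i in range(len(remap)):                          # follow merge chains to survivors
        while not alive[remap[i]]:
            remap[i] = remap[remap[i]]
    return palette[remap[inverse]].reshape(img.shape).astype(np.uint8)
```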

20.
An image compression method based on the wavelet packet tree
Research on image compression has grown steadily over the past decade, and the most effective and representative method in this field is the discrete wavelet transform. Image compression consists of transform, quantization, and coding stages. This paper proposes a transform and quantization scheme in which the transform is implemented with wavelet packets, the best basis tree is rebuilt on the basis of Shannon entropy, and an adaptive threshold is used for quantization. Compared with compression based on the ordinary wavelet transform, the scheme provides a good compression implementation, and the final experimental results demonstrate its advantages.
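A simplified stand-in using an ordinary 2-D wavelet decomposition with an adaptive, energy-based threshold and uniform quantization (via PyWavelets); the paper's wavelet-packet best-basis search under the Shannon entropy cost is not reproduced here.

```python
import numpy as np
import pywt

def wavelet_compress(img, wavelet='db4', level=3, keep=0.05):
    """Decompose, keep only the largest `keep` fraction of coefficients (adaptive,
    image-dependent threshold), uniformly quantize the survivors, and reconstruct."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    t = np.quantile(np.abs(arr), 1.0 - keep)          # adaptive threshold from the coefficient statistics
    arr = pywt.threshold(arr, t, mode='hard')
    step = max(t / 8.0, 1e-6)
    arr_q = np.round(arr / step) * step               # uniform quantization of retained coefficients
    recon = pywt.waverec2(pywt.array_to_coeffs(arr_q, slices, output_format='wavedec2'), wavelet)
    return recon[:img.shape[0], :img.shape[1]]
```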
