Similar Documents
20 similar documents found.
1.
Modeling and quality assessment of halftoning by error diffusion (total citations: 11; self-citations: 0; citations by others: 11)
Digital halftoning quantizes a graylevel image to one bit per pixel. Halftoning by error diffusion reduces local quantization error by filtering the quantization error in a feedback loop. In this paper, we linearize error diffusion algorithms by modeling the quantizer as a linear gain plus additive noise. We confirm the accuracy of the linear model in three independent ways. Using the linear model, we quantify the two primary effects of error diffusion: edge sharpening and noise shaping. For each effect, we develop an objective measure of its impact on the subjective quality of the halftone. Edge sharpening is proportional to the linear gain, and we give a formula to estimate the gain from a given error filter. In quantifying the noise, we modify the input image to compensate for the sharpening distortion and apply a perceptually weighted signal-to-noise ratio to the residual of the halftone and modified input image. We compute the correlation between the residual and the original image to show when the residual can be considered signal independent. We also compute a tonality measure similar to total harmonic distortion. We use the proposed measures for edge sharpening, noise shaping, and tonality to evaluate the quality of error diffusion algorithms.
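
As a concrete illustration of the error-diffusion feedback loop analyzed above, here is a minimal Python/NumPy sketch of classic Floyd-Steinberg error diffusion on a grayscale image scaled to [0, 1]. The function name and the (7, 5, 3, 1)/16 error filter are illustrative choices, not the specific filters or the linear-gain analysis of the paper.

```python
import numpy as np

def floyd_steinberg_halftone(img):
    """One-bit halftone of a grayscale image in [0, 1] by error diffusion.

    The quantization error at each pixel is filtered (here with the classic
    Floyd-Steinberg weights) and fed forward to not-yet-processed neighbors.
    """
    f = np.asarray(img, dtype=np.float64).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0      # one-bit quantizer
            out[y, x] = new
            err = old - new                        # local quantization error
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out
```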

2.
This article provides an approach for representing an optimum vector quantizer by a scalar nonlinear gain-plus-additive noise model. The validity and accuracy of this analytic model are confirmed by comparing the calculated model quantization errors with actual simulation of the optimum Linde-Buzo-Gray (1980) vector quantizer. Using this model, we form an MSE measure of an M-band filter bank codec in terms of the equivalent scalar quantization model and find the optimum FIR filter coefficients for each channel in the M-band structure for a given bit rate, filter length, and input signal correlation model. Specific design examples are worked out for four-tap filters in the two-band paraunitary case. These theoretical results are confirmed by extensive Monte Carlo simulation.
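
A hedged sketch of the modeling idea: fitting a gain-plus-additive-noise description to a quantizer's input/output samples by least squares. A scalar uniform quantizer stands in here for the Linde-Buzo-Gray vector quantizer, and the names (fit_gain_plus_noise, step) are illustrative only.

```python
import numpy as np

def fit_gain_plus_noise(x, xq):
    """Fit the model xq ≈ a*x + n, with n treated as additive noise.

    a is the least-squares (linear) gain; the residual n is the part of the
    quantizer output that the linear term does not explain.
    """
    a = np.dot(x, xq) / np.dot(x, x)   # least-squares gain estimate
    n = xq - a * x                     # model the residual as additive noise
    return a, np.var(n)

# Example: an 8-bit-free stand-in quantizer applied to a Gaussian source.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
step = 0.8
xq = step * np.round(x / step)         # uniform quantizer as a stand-in
gain, noise_var = fit_gain_plus_noise(x, xq)
```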

3.
Adaptive threshold modulation for error diffusion halftoning (total citations: 5; self-citations: 0; citations by others: 5)
Grayscale digital image halftoning quantizes each pixel to one bit. In error diffusion halftoning, the quantization error at each pixel is filtered and fed back to the input in order to diffuse the quantization error among the neighboring grayscale pixels. Error diffusion introduces nonlinear distortion (directional artifacts), linear distortion (sharpening), and additive noise. Threshold modulation, which alters the quantizer input, has been previously used to reduce either directional artifacts or linear distortion. This paper presents an adaptive threshold modulation framework to improve halftone quality by optimizing error diffusion parameters in the least squares sense. The framework models the quantizer implicitly, so a wide variety of quantizers may be used. Based on the framework, we derive adaptive algorithms to optimize 1) edge enhancement halftoning and 2) green noise halftoning. In edge enhancement halftoning, we minimize linear distortion by controlling the sharpening control parameter. We may also break up directional artifacts by replacing the thresholding quantizer with a deterministic bit flipping (DBF) quantizer. For green noise halftoning, we optimize the hysteresis coefficients.
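
A minimal sketch of the threshold-modulation idea, assuming pixels in [0, 1]: adding a scaled copy of the original pixel to the quantizer input (equivalently, shifting the threshold) acts as the sharpening control. The adaptive least-squares tuning of the parameter and the DBF/green-noise variants of the paper are not reproduced; the names are illustrative.

```python
def sharpness_modulated_output(u, x, L):
    """One-bit output with input-proportional threshold modulation.

    u : quantizer input (original pixel plus diffused past errors)
    x : original input pixel at the same location
    L : sharpening control parameter; L = 0 reduces to plain thresholding
    """
    return 1.0 if (u + L * x) >= 0.5 else 0.0

# Plugged in place of the fixed threshold inside an error-diffusion loop;
# a fixed L is used here, whereas the paper adapts it in the least-squares sense.
pixel = sharpness_modulated_output(u=0.42, x=0.40, L=0.7)
```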

4.
Optimal hierarchical coding is sought, for progressive or scalable multidimensional signal transmission, by minimizing the variance of the error difference between the original image and its lower resolution renditions. The pyramidal coders that are optimal according to this criterion are determined for images quantized using optimal vector Lloyd-Max quantizers. A rigorous general statistical model of a vector Lloyd-Max quantizer is used, consisting of a linear time-invariant filter followed by additive noise uncorrelated with the input. Given arbitrary analysis filters, the optimal synthesis filters are found. The optimal analysis filters are subsequently determined, leading to formulas for globally optimal structures for pyramidal multidimensional signal decompositions. These structures produce replicas of the original image which, at lower resolutions, retain as much similarity to the original as possible. This is highly useful for the progressive coding of two- or three-dimensional (2-D or 3-D) images needed in applications such as fast browsing through image databases. Furthermore, the minimization of the variance of the error image leads to minimization of the variance of the quantization noise for this image and, hence, to its optimally efficient compression. Experimental results illustrate the implementation and performance of the optimal pyramids as applied to the coding of still 2-D images.
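
A 1-D sketch of the kind of pyramidal analysis/synthesis structure being optimized, under simplifying assumptions: a short illustrative lowpass kernel replaces the optimal filters derived in the paper, and the Lloyd-Max quantization of the subsignals is omitted.

```python
import numpy as np

def analyze(x, h):
    """One pyramid level: lowpass + decimate, plus a detail residual."""
    lp = np.convolve(x, h, mode="same")            # analysis lowpass
    coarse = lp[::2]                               # low-resolution rendition
    up = np.zeros_like(x)
    up[::2] = coarse
    pred = 2.0 * np.convolve(up, h, mode="same")   # synthesis interpolation
    detail = x - pred                              # error (detail) signal
    return coarse, detail

def synthesize(coarse, detail, h):
    """Invert analyze(): interpolate the coarse signal and add the detail."""
    up = np.zeros(len(detail))
    up[::2] = coarse
    return 2.0 * np.convolve(up, h, mode="same") + detail

h = np.array([0.25, 0.5, 0.25])                    # illustrative lowpass kernel
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.default_rng(1).standard_normal(256)
c, d = analyze(x, h)
x_rec = synthesize(c, d, h)                        # exact without quantization
assert np.allclose(x, x_rec)
```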

5.
Fuzzy algorithms for combined quantization and dithering (total citations: 6; self-citations: 0; citations by others: 6)
Color quantization reduces the number of colors in a color image, while the subsequent dithering operation attempts to create the illusion of more colors with this reduced palette. In quantization, the palette is designed to minimize the mean squared error (MSE). However, the dithering that follows enhances the color appearance at the expense of increasing the MSE. We introduce three joint quantization and dithering algorithms to overcome this contradiction. The basic idea is the same in two of the approaches: introducing the dithering error to the quantizer in the training phase. The fuzzy C-means (FCM) and the fuzzy learning vector quantization (FLVQ) algorithms are used to develop two combined mechanisms. In the third algorithm, we minimize an objective function including an inter-cluster separation (ICS) term to obtain a color palette which is more suitable for dithering. The goal is to enlarge the convex hull of the quantization colors to obtain the illusion of more colors after error diffusion. The color contrasts of images are also enhanced with the proposed algorithm. We test the results of these three new algorithms using quality metrics which model the perception of the human visual system and illustrate that substantial improvements are achieved after dithering.
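
A minimal sketch of the palette-design step with plain fuzzy C-means, assuming an (N, 3) NumPy array of RGB pixels in [0, 1]. The joint training with dithering error, the FLVQ variant, and the inter-cluster separation term described above are not included; the function name and defaults are illustrative.

```python
import numpy as np

def fcm_palette(pixels, k, m=2.0, iters=30, seed=0):
    """Design a k-color palette with fuzzy C-means (FCM).

    pixels : (N, 3) array of RGB values in [0, 1]
    m      : fuzzifier (> 1); m = 2 is the usual choice
    Returns the (k, 3) palette.
    """
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Squared distances from every pixel to every palette color
        d2 = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
        # Fuzzy memberships: u_ij proportional to d2_ij^(-1/(m-1)), normalized over colors
        w = d2 ** (-1.0 / (m - 1.0))
        u = w / w.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]   # weighted centroids
    return centers

rng = np.random.default_rng(1)
img_pixels = rng.random((5000, 3))     # stand-in for an image's RGB pixels
palette = fcm_palette(img_pixels, k=16)
```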

6.
Color error-diffusion halftoning (total citations: 1; self-citations: 0; citations by others: 1)
Grayscale halftoning converts a continuous-tone image (e.g., 8 bits per pixel) to a lower resolution (e.g., 1 bit per pixel) for printing or display. Grayscale halftoning by error diffusion uses feedback to shape the quantization noise into high frequencies, where the human visual system (HVS) is least sensitive. In color halftoning, the application of grayscale error-diffusion methods to the individual colorant planes fails to exploit the HVS response to color noise. Ideally, the quantization error should be diffused to frequencies and colors to which the HVS is least sensitive. Further, it is desirable for the color quantization to take place in a perceptual space so that the colorant vector selected as the output color is perceptually closest to the color vector being quantized. This article discusses the design principles of color error diffusion that differentiate it from grayscale error diffusion, focusing on color error-diffusion halftoning systems that use the red, green, and blue (RGB) space for convenience.

7.
Grayscale error diffusion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. Tone-dependent error diffusion (TDED) reduces these artifacts by controlling the diffusion of quantization errors based on the input graylevel. We present an extension of TDED to color. In color error diffusion, which color to render becomes a major concern, in addition to finding optimal dot patterns. We propose a visually meaningful scheme to train input-level (or tone-) dependent color error filters. Our design approach employs a Neugebauer printer model and a color human visual system model that takes into account spatial considerations in color reproduction. The resulting halftones overcome several traditional error-diffusion artifacts and achieve significantly greater accuracy in color rendition.

8.
We analyze and design the minimum mean-square error (MMSE) multiuser receiver for uniformly quantized synchronous code division multiple access (CDMA) signals in additive white Gaussian noise (AWGN) channels. The input-output relationship of the quantizer is represented by the gain-plus-additive-noise model. Based on this model, we derive the weight vector and the output signal-to-interference ratio (SIR) of the MMSE receiver. The effect of quantization on the MMSE receiver performance is characterized by a single parameter, the "equivalent noise variance," which is a function of the sum of the active users' signal-to-noise ratios (SNRs), the processing gain, and the number of quantization levels. The optimal quantizer stepsize which maximizes the MMSE receiver output SNR is also determined. Simulation results validate the accuracy of our analysis.
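
A hedged sketch of the unquantized linear MMSE multiuser detector for synchronous CDMA in AWGN. Per the abstract, the quantizer's gain-plus-additive-noise model enters as an extra "equivalent noise variance" term; that refinement is omitted here, and all names are illustrative.

```python
import numpy as np

def mmse_weights(S, amps, noise_var):
    """Linear MMSE multiuser detector for synchronous CDMA in AWGN.

    S         : (N, K) matrix of unit-norm spreading codes (one column per user)
    amps      : (K,) received amplitudes
    noise_var : variance of the white Gaussian noise per chip
    Returns the (N, K) matrix of MMSE weight vectors (one column per user).
    """
    a = np.asarray(amps, float)
    R = S @ np.diag(a ** 2) @ S.T + noise_var * np.eye(S.shape[0])  # received covariance
    return np.linalg.solve(R, S * a)                                # R^{-1} S diag(amps)

# Tiny example: 2 users, processing gain 8, sign decisions at the output.
rng = np.random.default_rng(3)
S = rng.choice([-1.0, 1.0], size=(8, 2)) / np.sqrt(8)
W = mmse_weights(S, amps=[1.0, 2.0], noise_var=0.1)
bits = rng.choice([-1.0, 1.0], size=2)
r = S @ (np.array([1.0, 2.0]) * bits) + np.sqrt(0.1) * rng.standard_normal(8)
decisions = np.sign(W.T @ r)
```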

9.
The paper is concerned with the analysis and modeling of the effects of quantization of subband signals in subband codecs. Using cyclostationary representations, the authors derive equations for the autocorrelation and power spectral density (PSD) of the reconstructed signal y(n) in terms of the analysis/synthesis filters, the PSD of the input, and the pdf-optimized quantizer model. Formulas for the mean-square error (MSE) and for compaction gain are obtained in terms of these parameters. The authors constrain the filter bank to be perfect reconstruction (PR) (but not necessarily paraunitary) in the absence of quantization and transmission errors. These formulas set the stage for filter optimization (maximization of compaction gain and minimization of MSE) subject to PR and bit constraints. Optimal filters are designed, optimal compensation is performed, and the theoretical results are confirmed with simulations. The floating-point quantizer, wherein only the mantissa is uniformly quantized, is also analyzed and compared with the fixed-point, pdf-optimized filter bank. For high bit rates, their performance is comparable.

10.
A new method is presented for the analysis of the effects of Lloyd-Max quantization in subband filterbanks and for the optimal design of such filterbanks. A rigorous statistical model of a vector Lloyd-Max quantizer is established first, consisting of a linear time-invariant filter followed by additive noise uncorrelated with the input. On the basis of this model, an expression for the variance of the error of a subband coder using Lloyd-Max quantizers is explicitly determined. Given analysis filters that statistically separate the subbands, it is shown that this variance is minimized if the synthesis filters are chosen to be those that would achieve perfect reconstruction in lossless coding. The global optimum of such a filterbank, minimizing the coder error variance, is further obtained by proper choice of its analysis filters. An alternative design method is also evaluated and optimized. In this method, the errors correlated with the signal are set to zero, leaving a random error residue uncorrelated with the signal. This design method is optimized by choosing the analysis filters so as to minimize the random error variance. The results are evaluated experimentally in the realistic setting of a logarithmically split subband image coding scheme.
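
For reference, a minimal sample-based Lloyd-Max design for a scalar quantizer, the building block whose vector statistical model the paper develops; the filterbank optimization itself is not reproduced, and the function name and initialization are illustrative.

```python
import numpy as np

def lloyd_max(samples, levels, iters=100):
    """Scalar Lloyd-Max quantizer designed from training samples.

    Alternates the two optimality conditions: decision thresholds at the
    midpoints of neighboring reconstruction levels, and reconstruction
    levels at the conditional means (centroids) of their cells.
    """
    samples = np.sort(np.asarray(samples, float))
    # Initialize reconstruction levels at evenly spaced sample quantiles
    recon = np.quantile(samples, (np.arange(levels) + 0.5) / levels)
    for _ in range(iters):
        thresh = 0.5 * (recon[:-1] + recon[1:])   # nearest-neighbor cell boundaries
        idx = np.searchsorted(thresh, samples)     # cell index per sample
        for j in range(levels):
            cell = samples[idx == j]
            if cell.size:                          # centroid condition
                recon[j] = cell.mean()
    return recon, thresh

# Design an 8-level quantizer for a Gaussian source.
rng = np.random.default_rng(0)
recon, thresh = lloyd_max(rng.standard_normal(50_000), levels=8)
```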

11.
A new interband vector quantization of a human-vision-based image representation is presented. The feature-specific vector quantizer (FVQ) is suited for data compression beyond second-order decorrelation. The scheme is derived from statistical investigations of natural images and the processing principles of biological vision systems. The initial stage of the coding algorithm is a hierarchical, orientation-selective, analytic bandpass decomposition, realized by even- and odd-symmetric filter pairs that are modeled after the simple cells of the visual cortex. The outputs of each even- and odd-symmetric filter pair are interpreted as the real and imaginary parts of an analytic bandpass signal, which is transformed into a local amplitude and a local phase component according to the operation of cortical complex cells. Feature-specific multidimensional vector quantization is realized by combining the amplitude/phase samples of all orientation filters of one resolution layer. The resulting vectors are suited for a classification of the local image features with respect to their intrinsic dimensionality, and enable the exploitation of higher-order statistical dependencies between the subbands. This final step is closely related to the operation of cortical hypercomplex or end-stopped cells. The codebook design is based on statistical as well as psychophysical and neurophysiological considerations, and avoids the common shortcomings of perceptually implausible mathematical error criteria. The resulting perceptual quality of compressed images is superior to that obtained with standard vector quantizers of comparable complexity.
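
A 1-D toy version of the first stage described above, under strong simplifications: an even/odd (cosine/sine, Gabor-like) filter pair whose responses are combined into local amplitude and phase. The 2-D orientation-selective hierarchy, the vector quantization, and the cortical-cell modeling are not reproduced; all names and parameters are illustrative.

```python
import numpy as np

def gabor_pair(size=21, freq=0.15, sigma=4.0):
    """Even/odd (cosine/sine) bandpass pair, loosely modeled on simple cells."""
    t = np.arange(size) - size // 2
    env = np.exp(-0.5 * (t / sigma) ** 2)
    return env * np.cos(2 * np.pi * freq * t), env * np.sin(2 * np.pi * freq * t)

def local_amplitude_phase(signal, even, odd):
    """Treat the even/odd responses as real/imaginary parts of an analytic signal."""
    re = np.convolve(signal, even, mode="same")
    im = np.convolve(signal, odd, mode="same")
    return np.hypot(re, im), np.arctan2(im, re)   # local amplitude and phase

x = np.cos(2 * np.pi * 0.15 * np.arange(256))
e, o = gabor_pair()
amp, phase = local_amplitude_phase(x, e, o)
```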

12.
Direct-feedback coding is a refinement of the well-known differential coding method. Two filters are used at the transmitter of a direct-feedback coder: one connected in series with the input and the other in the forward path of a feedback loop that contains the quantizer. The first filter preemphasizes the signal and determines the overload characteristic of the coder; the other filter shapes the quantization noise and sets the stability of the feedback. At the receiver, a filter reconstitutes the signal spectrum and deemphasizes the noise. For television, the preemphasis should be a short time-constant differentiator, the deemphasis a short time integrator, and the feedback filter a long time integrator. Conventional differential coders use a single filter in the feedback path both to provide preemphasis and to shape the feedback characteristic, so the design is a compromise. Compared with direct-feedback coding, they usually have less feedback gain and a larger time constant in the preemphasis and deemphasis; consequently, the contouring noise is more visible and the streaking caused by transmission errors is longer. Although only the application to television is considered, the methods have wider use. General formulae are given for the output noise and the optimum filter characteristics; they take into account signal spectra, frequency weighting for noise, sampling rate, quantization step size, and an overload parameter. Measurements on real coders operating on TV signals, and digital simulations, confirm the results.
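
A minimal sketch of the noise-shaping core: a quantizer inside a feedback loop whose filter shapes the quantization noise. The preemphasis/deemphasis filters and the television-specific choices discussed above are omitted, and the taps, step size, and names are illustrative.

```python
import numpy as np

def noise_feedback_code(x, f, step=0.1):
    """Quantize x with a noise-feedback (error-feedback) loop.

    f : feedback filter taps applied to past quantization errors; the
        reconstruction noise spectrum is shaped by 1 - F(z).
    Returns the quantized output sequence.
    """
    e_hist = np.zeros(len(f))                 # most recent error first
    y = np.empty_like(np.asarray(x, float))
    for n, xn in enumerate(x):
        u = xn + np.dot(f, e_hist)            # add filtered past errors
        y[n] = step * np.round(u / step)      # uniform quantizer
        e_hist = np.roll(e_hist, 1)
        e_hist[0] = u - y[n]                  # quantization error of this sample
    return y

# Example: first-order feedback (1 - 0.9/z) pushes the quantization noise
# away from low frequencies.
rng = np.random.default_rng(0)
x = np.cumsum(0.05 * rng.standard_normal(500))   # slowly varying test signal
y = noise_feedback_code(x, f=np.array([0.9]))
```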

13.
On lattice quantization noise (total citations: 3; self-citations: 0; citations by others: 3)
We present several results regarding the properties of a random vector, uniformly distributed over a lattice cell. This random vector is the quantization noise of a lattice quantizer at high resolution, or the noise of a dithered lattice quantizer at all distortion levels. We find that for the optimal lattice quantizers this noise is wide-sense stationary and white. Any desirable noise spectrum may be realized by an appropriate linear transformation ("shaping") of a lattice quantizer. As the dimension increases, the normalized second moment of the optimal lattice quantizer goes to 1/(2πe), and consequently the quantization noise approaches a white Gaussian process in the divergence sense. In entropy-coded dithered quantization, which can be modeled accurately as passing the source through an additive noise channel, this limit behavior implies that for large lattice dimension both the error and the bit rate approach the error and the information rate of an additive white Gaussian noise (AWGN) channel.
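
A small numerical illustration of dithered quantization on the cubic lattice (i.e., a scalar uniform quantizer per component), assuming subtractive dither: the end-to-end error is uniform over one cell and independent of the input, which matches the additive-noise-channel view mentioned above. Names and parameters are illustrative.

```python
import numpy as np

def dithered_quantize(x, step, rng):
    """Subtractively dithered uniform quantizer.

    With dither uniform over one cell and shared by encoder and decoder,
    the end-to-end error y - x is uniform on (-step/2, step/2] and
    statistically independent of the input.
    """
    d = rng.uniform(-step / 2, step / 2, size=np.shape(x))   # shared dither
    return step * np.round((x + d) / step) - d               # quantize, then subtract dither

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = dithered_quantize(x, step=0.5, rng=rng)
err = y - x
# err is (empirically) uniform on [-0.25, 0.25] and uncorrelated with x
print(err.min(), err.max(), np.corrcoef(x, err)[0, 1])
```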

14.
This paper proposes an adaptive predictive image coding method (APICM) based on the characteristics of human vision. The core of the method is to redistribute the noise by introducing a noise feedback filter, so as to suppress reconstruction noise in the frequency bands of visually sensitive image regions. At the same time, some noise is introduced into insensitive bands (such as image edge regions) so that the noise is smoothed over the entire reconstructed image. As a result, a coarser nonuniform quantizer (with fewer quantization levels) can be used. The method is not only low in design complexity but also outperforms the conventional DPCM method.

15.
The performance of a vector quantizer can be improved by using a variable-rate code. Three variable-rate vector quantization systems are applied to speech, image, and video sources and compared to standard vector quantization and noiseless variable-rate coding approaches. The systems range from a simple and flexible tree-based vector quantizer to a high-performance, but complex, jointly optimized vector quantizer and noiseless code. The systems provide significant performance improvements for subband speech coding, predictive image coding, and motion-compensated video, but provide only marginal improvements for vector quantization of linear predictive coefficients in speech and direct vector quantization of images. Criteria are suggested for determining when variable-rate vector quantization may provide significant performance improvement over standard approaches.
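
A back-of-the-envelope illustration of where variable-rate coding of VQ indices pays off: the empirical index entropy versus the fixed rate log2(K). This is only the rate-accounting step, not any of the three systems described above; the names and the synthetic index distribution are illustrative.

```python
import numpy as np

def index_entropy(indices, k):
    """Empirical first-order entropy (bits/vector) of VQ codeword indices.

    A variable-rate (entropy-coded) VQ can approach this rate, versus the
    fixed rate of log2(k) bits/vector for standard VQ.
    """
    counts = np.bincount(indices, minlength=k).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Example: strongly nonuniform index usage makes variable-rate coding worthwhile.
rng = np.random.default_rng(0)
idx = rng.choice(16, size=10_000, p=np.array([0.4] + [0.04] * 15))
print(index_entropy(idx, 16), "vs fixed rate", np.log2(16))
```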

16.
Subband coding is a popular and well-established technique used in visual communications, such as image and video transmission. In the absence of quantization and transmission errors, the analysis and synthesis filters in a subband coding scheme can be designed to obtain perfect reconstruction of the input signal, but this is no longer the optimal solution in the presence of quantization of the subband coefficients. We presuppose the use of a two-dimensional (2-D) separable subband scheme, and we address the problem of designing, for a given analysis filter bank and assuming uniform quantization of the subband coefficients, the set of row and column synthesis filters that minimize the mean squared reconstruction error at the output of the subband system. Since the corresponding optimization problem is inherently nonlinear, we propose a suboptimal solution that extends a one-dimensional (1-D) optimal filter design procedure, already presented in the literature, to a 2-D separable synthesis filter bank. The separable 2-D extension is not trivial, since the processing in one direction, e.g., the rows, alters the statistics of the signals for the design of the filters in the other direction, e.g., the columns. To further simplify the filter design, we propose to model the input image as a 2-D separable Markov process plus an additive white component. Several design examples using both synthetic signals and real-world images are presented, showing that the filters designed using the proposed technique can give a significant gain with respect to the perfect reconstruction solution, especially when the dither technique is used for quantization. The simulation results also show that the proposed image model can be conveniently used in the synthesis filter design procedure.

17.
Interpolative vector quantization has been devised to alleviate the visible block structure of coded images as well as the codebook sensitivity problems produced by a simple vector quantizer. In addition, the problem of selecting color components for color picture vector quantization is discussed. Computer simulations demonstrate the success of this coding technique for color image compression at approximately 0.3 b/pel. Some background information on vector quantization is provided.

18.
The application of DPCM to the coding of color television signals calls for the design of the quantization characteristics for the luminance and the two color difference components. In this paper we describe quantizer designs based on visibility thresholds of quantization noise measured as a function of prediction error for a number of test slides. We assume a quantizer for the luminance component designed previously by a similar procedure and conduct psychovisual tests for the U and V color components. The results show that, mainly for granular noise, there is some visual superposition of quantization noise between the luminance and the U chrominance signals, while little or no visual interaction is evident between the luminance and the V signal impairments. The quantizers for the U and V components are designed such that, with the previously designed luminance quantizer, the number of levels is minimized without exceeding the visibility thresholds. We conclude that a total of 6 bits per color sample are required to code the U and V components together at 4.4 MHz.

19.
We develop a methodology for the analysis of signal quantization effects in critically sampled dyadic subband tree structures using a nonlinear gain-plus-additive-noise model for the probability density function (PDF)-optimized quantizer. We constrain the two-band nonquantized and uncompensated structure at each level to be perfect reconstruction (PR). We develop an equivalent uniform filter bank followed by its polyphase structure described by primitive submatrices, and compute a rigorously correct mean squared error (MSE) in the frequency domain using cyclostationary concepts in terms of: (1) the allocated quantizer bits; (2) the filter coefficients; (3) an embedded compensation parameter vector. This MSE is then minimized over all three items above. Our optimization method is applied to the specific case of a four-channel dyadic tree with an average bit rate constraint. This tree is represented by an eight-channel polyphase equivalent whose interchannel signals are correlated. We show how to represent rigorously the correlation of random noise between channels due to the embedded quantizers. Paraunitary and biorthogonal structures with identical and nonidentical stages are designed, compared, and validated by computer simulation under the assumption of uncorrelated cross-band noise. The nonidentical-stage biorthogonal filter bank turned out to have the best performance in the MSE sense, but the most robust structure is the nonidentical-stage paraunitary filter bank.

20.
An adaptive predictive coder providing almost toll quality at 16 kb/s and minimal degradation when the bit rate is lowered to 9.6 kb/s is described. The coder can operate at intermediate bit rates and can also change bit rate on a packet-by-packet basis. Variable bit rate operation is achieved through the use of switched quantization, thus eliminating the need for buffering of the output. A noise shaping filter provides flexible control of the output noise spectrum. The filter, in conjunction with an enhanced way of adapting the quantizer step size that accommodates the quantization noise feedback, accounts for the toll quality. By quantizing the residue with more than one quantizer, the effective number of bits per sample can be controlled in a deterministic way regardless of the entropy of the residue. The lower limit of operation is at 9.6 kb/s. Performance of the coder under random bit errors is also presented. It has been found that the degradation becomes objectionable only at error rates of 10^-2 and higher.

