Similar Documents
20 similar documents found (search time: 796 ms)
1.
Adaptive threshold modulation for error diffusion halftoning   Cited by 5 (0 self-citations, 5 others)
Grayscale digital image halftoning quantizes each pixel to one bit. In error diffusion halftoning, the quantization error at each pixel is filtered and fed back to the input in order to diffuse the quantization error among the neighboring grayscale pixels. Error diffusion introduces nonlinear distortion (directional artifacts), linear distortion (sharpening), and additive noise. Threshold modulation, which alters the quantizer input, has been previously used to reduce either directional artifacts or linear distortion. This paper presents an adaptive threshold modulation framework to improve halftone quality by optimizing error diffusion parameters in the least squares sense. The framework models the quantizer implicitly, so a wide variety of quantizers may be used. Based on the framework, we derive adaptive algorithms to optimize 1) edge enhancement halftoning and 2) green noise halftoning. In edge enhancement halftoning, we minimize linear distortion by controlling the sharpening control parameter. We may also break up directional artifacts by replacing the thresholding quantizer with a deterministic bit flipping (DBF) quantizer. For green noise halftoning, we optimize the hysteresis coefficients.
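The feedback loop described above can be sketched as plain Floyd-Steinberg error diffusion with an input-dependent threshold. This is a minimal illustration only: the parameter name `L` for the sharpening control and the linear modulation rule are assumptions, not the paper's trained values.

```python
import numpy as np

def error_diffuse(image, L=0.0):
    """Floyd-Steinberg error diffusion on a grayscale image in [0, 1].

    L scales a threshold-modulation term proportional to the input pixel,
    which controls sharpening (L = 0 gives plain error diffusion).
    """
    img = image.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            u = img[y, x]                      # quantizer input with diffused error
            threshold = 0.5 - L * image[y, x]  # threshold modulation (assumed form)
            out[y, x] = 1.0 if u >= threshold else 0.0
            e = u - out[y, x]                  # quantization error
            # Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:
                img[y, x + 1] += e * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += e * 3 / 16
                img[y + 1, x] += e * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += e * 1 / 16
    return out
```

Because the error feedback preserves local averages, the halftone's mean gray level tracks that of the input.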

2.
Traditional error diffusion halftoning is a high quality method for producing binary images from digital grayscale images. Error diffusion shapes the quantization noise power into the high frequency regions where the human eye is the least sensitive. Error diffusion may be extended to color images by using error filters with matrix-valued coefficients to take into account the correlation among color planes. For vector color error diffusion, we propose three contributions. First, we analyze vector color error diffusion based on a new matrix gain model for the quantizer, which linearizes vector error diffusion. The model predicts the key characteristics of color error diffusion, especially image sharpening and noise shaping. The proposed model includes linear gain models for the quantizer by Ardalan and Paulos (1987) and by Kite et al. (1997) as special cases. Second, based on our model, we optimize the noise shaping behavior of color error diffusion by designing error filters that are optimum with respect to any given linear spatially-invariant model of the human visual system. Our approach allows the error filter to have matrix-valued coefficients and diffuse quantization error across color channels in an opponent color representation. Thus, the noise is shaped into frequency regions of reduced human color sensitivity. To obtain the optimal filter, we derive a matrix version of the Yule-Walker equations which we solve by using a gradient descent algorithm. Finally, we show that the vector error filter has a parallel implementation as a polyphase filterbank.
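A minimal sketch of the matrix-valued error filter idea: each Floyd-Steinberg weight is multiplied by a cross-channel mixing matrix, so error in one channel diffuses into the others. The mixing matrix below is an invented placeholder, not one of the paper's optimized filters.

```python
import numpy as np

# Floyd-Steinberg neighbor offsets and scalar weights
FS = {(0, 1): 7 / 16, (1, -1): 3 / 16, (1, 0): 5 / 16, (1, 1): 1 / 16}
# Assumed cross-channel mixing matrix (rows sum to 1, preserving the DC level)
MIX = 0.9 * np.eye(3) + (0.1 / 3) * np.ones((3, 3))

def vector_error_diffuse(image):
    """image: (h, w, 3) floats in [0, 1]; returns a binary RGB halftone."""
    img = image.astype(float).copy()
    h, w, _ = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            u = img[y, x]
            out[y, x] = (u >= 0.5).astype(float)    # per-channel threshold
            e = u - out[y, x]                       # vector quantization error
            for (dy, dx), wgt in FS.items():
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    img[yy, xx] += wgt * (MIX @ e)  # matrix-valued coefficient
    return out
```

With equal channel values the mixing matrix acts as the identity, so the halftone's per-channel mean tracks the input as in the grayscale case.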

3.
Grayscale error diffusion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. Tone-dependent error diffusion (TDED) reduces these artifacts by controlling the diffusion of quantization errors based on the input graylevel. We present an extension of TDED to color. In color error diffusion, choosing which color to render becomes a major concern in addition to finding optimal dot patterns. We propose a visually meaningful scheme to train input-level (or tone-) dependent color error filters. Our design approach employs a Neugebauer printer model and a color human visual system model that takes into account spatial considerations in color reproduction. The resulting halftones overcome several traditional error-diffusion artifacts and achieve significantly greater accuracy in color rendition.

4.
Halftones and other binary images are difficult to process without causing severe degradation. Degradation is greatly reduced if the halftone is inverse halftoned (converted to grayscale) before scaling, sharpening, rotating, or other processing. For error diffused halftones, we present (1) a fast inverse halftoning algorithm and (2) a new multiscale gradient estimator. The inverse halftoning algorithm is based on anisotropic diffusion. It uses the new multiscale gradient estimator to vary the tradeoff between spatial resolution and grayscale resolution at each pixel to obtain a sharp image with a low perceived noise level. Because the algorithm requires fewer than 300 arithmetic operations per pixel and processes 7x7 neighborhoods of halftone pixels, it is well suited for implementation in VLSI and embedded software. We compare the implementation cost, peak signal-to-noise ratio, and visual quality with other inverse halftoning algorithms.
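The core idea that lowpass filtering recovers grayscale from a halftone can be shown with a plain separable Gaussian blur. This is only a baseline sketch: the paper's anisotropic diffusion additionally adapts the smoothing to a multiscale gradient estimate to keep edges sharp, which this does not do.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel with 2*radius + 1 taps."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def inverse_halftone(halftone, sigma=1.5):
    """Estimate a grayscale image from a binary halftone by separable blur."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    # convolve rows, then columns ('same' keeps the image size)
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, halftone.astype(float))
    return np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="same"), 0, rows)
```

On a 50% checkerboard halftone, interior pixels of the estimate settle near the expected gray level of 0.5.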

5.
A multiscale error diffusion technique for digital halftoning   Cited by 4 (0 self-citations, 4 others)
A new digital halftoning technique based on multiscale error diffusion is examined. We use an image quadtree to represent the difference image between the input gray-level image and the output halftone image. An iterative algorithm is developed that searches for the brightest region of a given image via "maximum intensity guidance" for assigning dots and diffuses the quantization error noncausally at each iteration. To measure the quality of halftone images, we adopt a new criterion based on hierarchical intensity distribution. The proposed method provides very good results both visually and in terms of the hierarchical intensity quality measure.

6.
Inverse halftoning and kernel estimation for error diffusion   Cited by 8 (0 self-citations, 8 others)
Two different approaches in the inverse halftoning of error-diffused images are considered. The first approach uses linear filtering and statistical smoothing that reconstructs a gray-scale image from a given error-diffused image. The second approach can be viewed as a projection operation, where one assumes the error diffusion kernel is known, and finds a gray-scale image that will be halftoned into the same binary image. Two projection algorithms, viz., minimum mean square error (MMSE) projection and maximum a posteriori probability (MAP) projection, that differ on the way an inverse quantization step is performed, are developed. Among the filtering and the two projection algorithms, MAP projection provides the best performance for inverse halftoning. Using techniques from adaptive signal processing, we suggest a method for estimating the error diffusion kernel from the given halftone. This means that the projection algorithms can be applied in the inverse halftoning of any error-diffused image without requiring any a priori information on the error diffusion kernel. It is shown that the kernel estimation algorithm combined with MAP projection provides the same inverse halftoning performance as the case in which the error diffusion kernel is known.
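The kernel estimation step relies on adaptive signal processing. Here is a generic LMS system-identification sketch (recovering an unknown FIR filter from its input and output), which illustrates the adaptive-filtering machinery but is not the halftone-specific estimator of the paper.

```python
import numpy as np

def lms_identify(x, d, taps=4, mu=0.05, epochs=5):
    """Estimate FIR coefficients h so that (h * x)[n] approximates d[n].

    x: input signal, d: desired (filtered) signal, mu: LMS step size.
    """
    h = np.zeros(taps)
    for _ in range(epochs):
        for n in range(taps, len(x)):
            window = x[n - taps + 1:n + 1][::-1]  # most recent sample first
            err = d[n] - h @ window               # a priori error
            h += mu * err * window                # stochastic gradient step
    return h
```

With white input and a noiseless desired signal, the estimate converges to the true coefficients.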

7.
This paper is a tradeoff study of image processing algorithms that can be used to transform continuous tone and halftone pictorial image input into spatially encoded representations compatible with binary output processes. A large percentage of the electronic output marking processes utilize a binary mode of operation. The history and rationale for this are reviewed and thus the economic justification for the tradeoff is presented. A set of image quality and processing complexity metrics are then defined. Next, a set of algorithms including fixed and adaptive thresholding, orthographic pictorial fonts, electronic screening, ordered dither, and error diffusion are defined and evaluated relative to their ability to reproduce continuous tone input. Finally, these algorithms, along with random nucleated halftoning, the alias reducing image enhancement system (ARIES), and a new algorithm, selective halftone rescreening (SHARE), are defined and evaluated as to their ability to reproduce halftone pictorial input.

8.
Image quality assessment based on a degradation model   Cited by 19 (0 self-citations, 19 others)
We model a degraded image as an original image that has been subject to linear frequency distortion and additive noise injection. Since the psychovisual effects of frequency distortion and noise injection are independent, we decouple these two sources of degradation and measure their effect on the human visual system. We develop a distortion measure (DM) of the effect of frequency distortion, and a noise quality measure (NQM) of the effect of additive noise. The NQM, which is based on Peli's (1990) contrast pyramid, takes into account the following: 1) variation in contrast sensitivity with distance, image dimensions, and spatial frequency; 2) variation in the local luminance mean; 3) contrast interaction between spatial frequencies; 4) contrast masking effects. For additive noise, we demonstrate that the nonlinear NQM is a better measure of visual quality than peak signal-to-noise ratio (PSNR) and linear quality measures. We compute the DM in three steps. First, we find the frequency distortion in the degraded image. Second, we compute the deviation of this frequency distortion from an allpass response of unity gain (no distortion). Finally, we weight the deviation by a model of the frequency response of the human visual system and integrate over the visible frequencies. We demonstrate how to decouple distortion and additive noise degradation in a practical image restoration system.
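For reference, the PSNR baseline that the NQM is compared against can be computed as follows. This is the standard definition, not code from the paper.

```python
import numpy as np

def psnr(ref, deg, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a degraded image."""
    mse = np.mean((ref.astype(float) - deg.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)
```

For example, a uniform error of 0.5 on a unit-peak image gives an MSE of 0.25 and a PSNR of about 6.02 dB.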

9.
Hierarchical Error Diffusion   Cited by 1 (0 self-citations, 1 other)
This paper develops a distinctive class of color error diffusion algorithms, called hierarchical error diffusion (HED). It aims to achieve perceptually pleasing color halftones through neither conventional joint quantization nor interchannel error diffusion. Instead, it explicitly controls three critical factors sequentially to yield high-quality color halftones: dot-overlapping control, dot-positioning control, and dot-coloring control. A specific implementation of HED is presented with the objective of minimum brightness variation rendering (MBVR). First, an optimal color transform is derived for dot-overlapping control to achieve minimum brightness variation color density (MBVCD). Then, embedded monochrome error diffusion is employed in dot-positioning control. By sequentially thresholding the elements in the partial density sum vector, better dot-positioning is encouraged for more visible color dots. The "blue noise" characteristics of dot-positioning from the monochrome error diffusion are inherited by the color halftone. A simple density priority strategy is applied in dot-coloring control. The pixel color error is diffused channel-independently with a single error filter in halftone dot color space. A comparison with state-of-the-art color error diffusion algorithms demonstrates the excellent halftone quality of HED, without the typical artifacts of vector error diffusion. Evidence also shows that HED comes closer to achieving MBVR than the previously proposed minimum brightness variation quantization (MBVQ) color diffusion algorithm.

10.
Due to its high image quality and moderate computational complexity, error diffusion is a popular halftoning algorithm for use with inkjet printers. However, error diffusion is an inherently serial algorithm that requires buffering a full row of accumulated diffused error (ADE) samples. For the best performance when the algorithm is implemented in hardware, the ADE data should be stored on the chip on which the error diffusion algorithm is implemented. However, this may result in an unacceptable hardware cost. In this paper, we examine the use of quantization of the ADE to reduce the amount of data that must be stored. We consider both uniform and nonuniform quantizers. For the nonuniform quantizers, we build on the concept of tone-dependency in error diffusion, by proposing several novel feature-dependent quantizers that yield improved image quality at a given bit rate, compared to memoryless quantizers. The optimal design of these quantizers is coupled with the design of the tone-dependent parameters associated with error diffusion. This is done via a combination of the classical Lloyd-Max algorithm and the training framework for tone-dependent error diffusion. Our results show that 4-bit uniform quantization of the ADE yields the same halftone quality as error diffusion without quantization of the ADE. At rates that vary from 2 to 3 bits per pixel, depending on the selectivity of the feature on which the quantizer depends, the feature-dependent quantizers achieve essentially the same quality as 4-bit uniform quantization.
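A minimal sketch of uniform ADE quantization, assuming the error values are confined to [-1, 1] and reconstructed at cell midpoints; the paper's nonuniform feature-dependent quantizers are not reproduced here.

```python
import numpy as np

def uniform_quantize(e, bits=4, lo=-1.0, hi=1.0):
    """Uniformly quantize values in [lo, hi] to 2**bits levels (midpoint grid)."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((e - lo) / step), 0, levels - 1)  # cell index
    return lo + (idx + 0.5) * step                           # midpoint reconstruction
```

With 4 bits over [-1, 1] the step is 0.125, so the absolute quantization error never exceeds 0.0625.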

11.
An Image Authentication Algorithm Based on Halftoning Techniques   Cited by 3 (0 self-citations, 3 others)
Based on halftoning techniques, this paper presents an effective, feasible image authentication algorithm with self-recovery capability. First, the watermark image is quantized to a 4-bit halftone using error diffusion, and the halftone result is scrambled with a chaotic scrambling operator. A feature set C with minimum average error is then constructed, and the scrambled watermark image is hidden in the original image using an error diffusion data hiding algorithm. At the authentication end, the hidden watermark is extracted from the received image and inverse-scrambled; comparing the received image with the descrambled hidden information locates where the content has changed, and the extracted watermark information is used to repair the tampered image. Experimental results show that the algorithm precisely detects and localizes malicious content-altering operations such as deletion, substitution, and tampering, and can self-repair the tampered regions.

12.
We present an algorithm for image browsing systems that embeds the output of binary Floyd-Steinberg (1975) error diffusion, or a low bit-depth gray-scale or color error diffused image into higher bit-depth gray-scale or color error diffused images. The benefits of this algorithm are that a low bit-depth halftoned image can be directly obtained from a higher bit-depth halftone for printing or progressive transmission simply by masking one or more bits off of the higher bit-depth image. The embedding can be done in any bits of the output, although the most significant or the least significant bits are most convenient. Due to constraints on the palette introduced by embedding, the image quality for the higher bit-depth halftone may be reduced. To preserve the image quality, we present algorithms for color palette organization, or binary index assignment, to be used as a preprocessing step to the embedding algorithm.
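The masking step that recovers the embedded low bit-depth halftone can be illustrated as follows; the embedding itself, which constrains the palette, is not shown, and embedding in the most significant bits is assumed.

```python
import numpy as np

def extract_embedded(halftone, keep_bits=1, total_bits=8):
    """Keep the top `keep_bits` of each pixel, e.g. an embedded binary halftone."""
    shift = total_bits - keep_bits
    return (halftone >> shift) << shift   # mask off the lower bits
```

Masking to one bit collapses each 8-bit pixel to either 0 or 128, i.e. the binary halftone scaled into the 8-bit range.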

13.
Speckle is a form of multiplicative and locally correlated noise which degrades the signal-to-noise ratio (SNR) and contrast resolution of ultrasound images. This paper presents a new anisotropic level set method for despeckling low SNR, low contrast ultrasound images. The coefficient of variation, a speckle-robust edge detector, is embedded in the well known geodesic "snakes" model to smooth the image level sets, while preserving and sharpening edges of a speckled image. The method achieves much better speckle suppression and edge preservation compared to the traditional anisotropic diffusion based despeckling filters. In addition, the performance of the filter is less sensitive to the speckle scale of the image and edge contrast parameter, which makes it more suitable for the detection of low contrast features in an ultrasound image. We validate the method using both synthetic and real ultrasound images and quantify the performance improvement over other state-of-the-art algorithms in terms of speckle noise reduction and edge preservation indices.

14.
Design of linear equalizers optimized for the structural similarity index   Cited by 2 (0 self-citations, 2 others)
We propose an algorithm for designing linear equalizers that maximize the structural similarity (SSIM) index between the reference and restored signals. The SSIM index has enjoyed considerable application in the evaluation of image processing algorithms. Algorithms, however, have not yet been designed to explicitly optimize for this measure. The design of such an algorithm is nontrivial due to the nonconvex nature of the distortion measure. In this paper, we reformulate the nonconvex problem as a quasi-convex optimization problem, which admits a tractable solution. We compute the optimal solution in near closed form, with the complexity of the resulting algorithm comparable to that of the linear minimum mean squared error (MMSE) solution, independent of the number of filter taps. To demonstrate the usefulness of the proposed algorithm, it is applied to restore images that have been blurred and corrupted with additive white Gaussian noise. As a special case, we consider blur-free image denoising. In each case, its performance is compared to a locally adaptive linear MSE-optimal filter. We show that the images denoised and restored using the SSIM-optimal filter have a higher SSIM index and superior perceptual quality compared with those restored using the MSE-optimal adaptive linear filter. Through these results, we demonstrate that a) image processing algorithms, in particular denoising and restoration-type algorithms, can yield significant gains over existing (in particular, linear MMSE-based) algorithms when optimized for perceptual distortion measures, and b) these gains may be obtained without a significant increase in the computational complexity of the algorithm.
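For orientation, here is the SSIM index in its global single-window form with the standard stabilizing constants; the paper's contribution is to design equalizers that optimize this measure, not merely to compute it.

```python
import numpy as np

def ssim_global(x, y, peak=1.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM index between two images with the same range."""
    c1, c2 = (k1 * peak) ** 2, (k2 * peak) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()                  # luminance terms
    vx, vy = x.var(), y.var()                    # contrast terms
    cxy = ((x - mx) * (y - my)).mean()           # structure (cross-covariance)
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

An image compared with itself scores exactly 1; any distortion lowers the index.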

15.
Designing a robust halftone image watermarking scheme that resists desynchronization attacks is challenging. In this paper, we propose a feature-based digital watermarking method for halftone images with low computational complexity, good visual quality, and reasonable resistance to desynchronization attacks. Firstly, feature points are extracted from the host halftone image using a multi-scale Harris–Laplace detector, and local feature regions (LFRs) are constructed according to feature scale theory. Secondly, the discrete Fourier transform (DFT) is performed on the LFRs, and the embedding positions (DFT coefficients) are selected adaptively according to the magnitude spectrum information. Finally, the digital watermark is embedded into the LFRs by quantizing the magnitudes of the selected DFT coefficients. By binding the watermark to geometrically invariant halftone image features, watermark detection can be performed without synchronization error. Simulation results show that the proposed scheme is invisible and robust against common signal processing operations such as median filtering, sharpening, noise addition, and JPEG compression, as well as desynchronization attacks such as rotation, scaling, translation (RST), cropping, local random bending, and print-scan.

16.
A new class of dithering algorithms for black and white (B/W) images is presented. The basic idea behind the technique is to divide the image into small blocks and minimize the distortion between the original continuous-tone image and its low-pass-filtered halftone. This corresponds to a quadratic programming problem with linear constraints, which is solved via standard optimization techniques. Examples of B/W halftone images obtained by this technique are compared to halftones obtained via existing dithering algorithms.
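The block-wise minimization can be illustrated by brute force on a tiny 1-D block: pick the binary pattern whose lowpass-filtered version is closest in squared error to the lowpass-filtered original. Exhaustive search here stands in for the quadratic program; the kernel and block size are arbitrary choices.

```python
import itertools
import numpy as np

def best_block(block, kernel=np.array([0.25, 0.5, 0.25])):
    """block: 1-D gray values in [0, 1]; returns the distortion-minimizing binary pattern."""
    target = np.convolve(block, kernel, mode="same")  # lowpassed original
    best, best_err = None, np.inf
    for bits in itertools.product([0.0, 1.0], repeat=len(block)):
        cand = np.convolve(np.array(bits), kernel, mode="same")
        err = np.sum((cand - target) ** 2)            # lowpass-domain distortion
        if err < best_err:
            best, best_err = np.array(bits), err
    return best
```

For constant black or white blocks the matching constant binary pattern is, as expected, the exact minimizer.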

17.
Data hiding watermarking for halftone images   Cited by 11 (0 self-citations, 11 others)
In many printer and publishing applications, it is desirable to embed data in halftone images. We propose several novel data hiding methods for halftone images. For the situation in which only the halftone image is available, we propose data hiding smart pair toggling (DHSPT) to hide data by forced complementary toggling at pseudo-random locations within a halftone image. The complementary pixels are chosen to minimize the chance of forming visually undesirable clusters. Our experimental results suggest that DHSPT can hide a large amount of hidden data while maintaining good visual quality. For the situation in which the original multitone image is available and the halftoning method is error diffusion, we propose modified data hiding error diffusion (MDHED), which integrates the data hiding operation into the error diffusion process. In MDHED, the error due to the data hiding is diffused effectively to both past and future pixels. Our experimental results suggest that MDHED can give better visual quality than DHSPT. Both DHSPT and MDHED are computationally inexpensive.
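The complementary-toggling idea can be sketched as follows: force the halftone pixel at a pseudo-random location to the bit value, and if that changes the pixel, toggle a neighbor in the opposite direction to preserve the local average. The neighbor choice here is simply the first opposite-valued 4-neighbor, not the paper's cluster-minimizing "smart pair" rule.

```python
import numpy as np

def embed_bit(halftone, y, x, bit):
    """Embed one bit at (y, x) of a 0/1 halftone via complementary toggling."""
    ht = halftone.copy()
    if ht[y, x] == bit:
        return ht                          # already matches: nothing to do
    ht[y, x] = bit
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        yy, xx = y + dy, x + dx
        if 0 <= yy < ht.shape[0] and 0 <= xx < ht.shape[1] and ht[yy, xx] == bit:
            ht[yy, xx] = 1 - bit           # complementary toggle
            break
    return ht
```

Because the two toggles cancel, the total number of black dots, and hence the local tone, is unchanged.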

18.
We derive necessary conditions on the channel input and output probability density functions (PDFs) to achieve channel capacity for the additive white Gaussian noise (AWGN) channel with Tikhonov distributed phase error. We show that the Gaussian input does not achieve capacity, and we obtain a lower bound on the shaping gain. The shaping gain can be as large as 2.4 dB at rates as low as 0.5 b/symbol/Hz. This contrasts with the well-known shaping gain for the AWGN channel, which is small at low transmission rates and 1.53 dB at higher rates.

19.
The encoding and decoding schemes presented are aimed at enabling the transfer of data through a channel in which two types of interference are added to the transmitted signal and the sum is quantized. One of these interferences is known (or can be estimated), whereas the second is an additive white Gaussian noise (AWGN). Since the input of the quantizer is not accessible, the known interference cannot be removed from the received signal. We show that the error rate for an uncoded transmission through this channel is unacceptably large, even for low noise levels and linear quantization. It is also shown that the problem becomes even more severe when a nonlinear quantization is present. Therefore, coding is essential and a huge coding gain is achievable in this application. An upper bound on the error rate contributed by the component codes of a multilevel code has been developed for multistage decoding. Results of computer simulations of a practical case with optimal and suboptimal decoding algorithms, both developed in this paper, are presented.

20.
Because of its good image quality and moderate computational requirements, error diffusion has become a popular halftoning solution for desktop printers, especially inkjet printers. By making the weights and thresholds tone-dependent and using a predesigned halftone bitmap for tone-dependent threshold modulation, it is possible to achieve image quality very close to that obtained with far more computationally complex iterative methods. However, the ability to implement error diffusion in very low cost or large format products is hampered by the requirement to store the tone-dependent parameters and halftone bitmap, and also the need to store error information for an entire row of the image at any given point during the halftoning process. For the first problem, we replace the halftone bitmap by deterministic bit flipping, which has been previously applied to halftoning, and we linearly interpolate the tone-dependent weights and thresholds from a small set of knot points. We call this implementation a reduced lookup table. For the second problem, we introduce a new serial block-based approach to error diffusion. This approach depends on a novel intrablock scan path and the use of different parameter sets at different points along that path. We show that serial block-based error diffusion reduces off-chip memory access by a factor equal to the block height. With both these solutions, satisfactory image quality can only be obtained with new cost functions that we have developed for the training process. With these new cost functions and moderate block size, we can obtain image quality that is very close to that of the original tone-dependent error diffusion algorithm.
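The knot-point interpolation behind the reduced lookup table can be sketched with `np.interp`; the knot locations and threshold values below are invented for illustration, not the trained parameters of the paper.

```python
import numpy as np

# Graylevels at which parameters are actually stored (assumed knot points)
KNOTS = np.array([0, 64, 128, 192, 255])
# Assumed tone-dependent threshold values at those knots
KNOT_THRESH = np.array([0.4, 0.55, 0.5, 0.45, 0.6])

def tone_threshold(g):
    """Linearly interpolate the tone-dependent threshold for graylevel g in [0, 255]."""
    return np.interp(g, KNOTS, KNOT_THRESH)
```

Only the five knot values need storage; any intermediate graylevel's threshold is reconstructed on the fly, e.g. graylevel 32 falls halfway between the first two knots.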


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号