Similar Articles (20 results)
1.
Adaptive threshold modulation for error diffusion halftoning
Grayscale digital image halftoning quantizes each pixel to one bit. In error diffusion halftoning, the quantization error at each pixel is filtered and fed back to the input in order to diffuse the quantization error among the neighboring grayscale pixels. Error diffusion introduces nonlinear distortion (directional artifacts), linear distortion (sharpening), and additive noise. Threshold modulation, which alters the quantizer input, has been previously used to reduce either directional artifacts or linear distortion. This paper presents an adaptive threshold modulation framework to improve halftone quality by optimizing error diffusion parameters in the least squares sense. The framework models the quantizer implicitly, so a wide variety of quantizers may be used. Based on the framework, we derive adaptive algorithms to optimize 1) edge enhancement halftoning and 2) green noise halftoning. In edge enhancement halftoning, we minimize linear distortion by adjusting the sharpening control parameter. We may also break up directional artifacts by replacing the thresholding quantizer with a deterministic bit flipping (DBF) quantizer. For green noise halftoning, we optimize the hysteresis coefficients.
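As a concrete reference point, here is a minimal sketch of scalar error diffusion with a simple threshold-modulation hook. It uses the classic Floyd-Steinberg weights; the parameter `L` and the particular modulation rule are illustrative assumptions, not the paper's optimized parameters (`L = 0` gives plain error diffusion).

```python
def error_diffuse(image, width, height, L=0.0):
    """Floyd-Steinberg error diffusion on a flat list of floats in [0, 1].

    L is an illustrative sharpening-control knob: the quantizer threshold
    is shifted by L times the error-adjusted input, a simple form of
    threshold modulation. L = 0 reduces to plain error diffusion.
    """
    img = [float(v) for v in image]   # working copy accumulates diffused error
    out = [0] * (width * height)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            u = img[i]                        # quantizer input
            threshold = 0.5 - L * (u - 0.5)   # modulated threshold
            out[i] = 1 if u >= threshold else 0
            e = u - out[i]                    # quantization error
            # diffuse e to unprocessed neighbors with weights (7,3,5,1)/16;
            # error falling outside the image is simply dropped
            if x + 1 < width:
                img[i + 1] += e * 7 / 16
            if y + 1 < height:
                if x > 0:
                    img[i + width - 1] += e * 3 / 16
                img[i + width] += e * 5 / 16
                if x + 1 < width:
                    img[i + width + 1] += e * 1 / 16
    return out
```

Because the quantization error is fed back, the average tone is preserved: a constant 25% gray input produces roughly 25% black pixels.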

2.
Traditional error diffusion halftoning is a high quality method for producing binary images from digital grayscale images. Error diffusion shapes the quantization noise power into the high frequency regions where the human eye is the least sensitive. Error diffusion may be extended to color images by using error filters with matrix-valued coefficients to take into account the correlation among color planes. For vector color error diffusion, we propose three contributions. First, we analyze vector color error diffusion based on a new matrix gain model for the quantizer, which linearizes vector error diffusion. The model predicts the key characteristics of color error diffusion, especially image sharpening and noise shaping. The proposed model includes the linear gain models for the quantizer by Ardalan and Paulos (1987) and by Kite et al. (1997) as special cases. Second, based on our model, we optimize the noise shaping behavior of color error diffusion by designing error filters that are optimum with respect to any given linear spatially-invariant model of the human visual system. Our approach allows the error filter to have matrix-valued coefficients and diffuse quantization error across color channels in an opponent color representation. Thus, the noise is shaped into frequency regions of reduced human color sensitivity. To obtain the optimal filter, we derive a matrix version of the Yule-Walker equations, which we solve by using a gradient descent algorithm. Finally, we show that the vector error filter has a parallel implementation as a polyphase filterbank.
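A toy illustration of the matrix-valued idea, reduced to two channels and a single right-neighbor coefficient: the quantization error is a 2-vector, and one matrix coefficient `M` mixes error across channels before diffusing it. The matrix here is made up for illustration; the paper designs full matrix-valued filters optimized against an HVS model.

```python
def vector_error_diffuse(ch1, ch2, width, height,
                         M=((0.9, 0.1), (0.1, 0.9))):
    """Toy two-channel vector error diffusion.

    M is a hypothetical 2x2 matrix-valued error-filter coefficient
    (rows sum to 1 so total error mass is preserved); real designs use
    several such coefficients optimized for an opponent color space.
    """
    a = [float(v) for v in ch1]
    b = [float(v) for v in ch2]
    out1 = [0] * (width * height)
    out2 = [0] * (width * height)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            out1[i] = 1 if a[i] >= 0.5 else 0
            out2[i] = 1 if b[i] >= 0.5 else 0
            e1 = a[i] - out1[i]           # per-channel quantization error
            e2 = b[i] - out2[i]
            if x + 1 < width:             # matrix-valued filtering: e' = M e
                a[i + 1] += M[0][0] * e1 + M[0][1] * e2
                b[i + 1] += M[1][0] * e1 + M[1][1] * e2
    return out1, out2
```

Even with cross-channel mixing, each channel's average tone is preserved because each row of `M` sums to one.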

3.
Halftones and other binary images are difficult to process without causing severe degradation. Degradation is greatly reduced if the halftone is inverse halftoned (converted to grayscale) before scaling, sharpening, rotating, or other processing. For error diffused halftones, we present (1) a fast inverse halftoning algorithm and (2) a new multiscale gradient estimator. The inverse halftoning algorithm is based on anisotropic diffusion. It uses the new multiscale gradient estimator to vary the tradeoff between spatial resolution and grayscale resolution at each pixel to obtain a sharp image with a low perceived noise level. Because the algorithm requires fewer than 300 arithmetic operations per pixel and processes 7x7 neighborhoods of halftone pixels, it is well suited for implementation in VLSI and embedded software. We compare the implementation cost, peak signal to noise ratio, and visual quality with other inverse halftoning algorithms.
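A deliberately naive baseline makes the tradeoff concrete: inverse halftoning by fixed local averaging recovers gray levels but blurs edges. The paper's algorithm replaces this uniform smoothing with anisotropic diffusion steered by a multiscale gradient estimate, so only this smoothing half of the tradeoff is shown here.

```python
def inverse_halftone(halftone, width, height, radius=3):
    """Naive inverse halftoning: box-filter averaging of a binary image.

    A minimal sketch only; it smooths everywhere equally, whereas
    anisotropic methods reduce smoothing near detected edges.
    """
    gray = [0.0] * (width * height)
    for y in range(height):
        for x in range(width):
            acc = 0.0
            cnt = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    xx, yy = x + dx, y + dy
                    if 0 <= xx < width and 0 <= yy < height:
                        acc += halftone[yy * width + xx]
                        cnt += 1
            gray[y * width + x] = acc / cnt   # local dot density ~ gray level
    return gray
```

On a 50% checkerboard halftone, the recovered interior values sit near mid-gray, which is exactly the local dot density.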

4.
Error diffusion halftoning is a popular method of producing frequency modulated (FM) halftones for printing and display. FM halftoning fixes the dot size (e.g., to one pixel in conventional error diffusion) and varies the dot frequency according to the intensity of the original grayscale image. We generalize error diffusion to produce FM halftones with user-controlled dot size and shape by using block quantization and block filtering. As a key application, we show how block-error diffusion may be applied to embed information in hardcopy using dot shape modulation. We enable the encoding and subsequent decoding of information embedded in the hardcopy version of continuous-tone base images. The encoding-decoding process is modeled by robust data transmission through a noisy print-scan channel that is explicitly modeled. We refer to the encoded printed version as an image barcode due to its high information capacity that differentiates it from common hardcopy watermarks. The encoding/halftoning strategy is based on a modified version of block-error diffusion. Encoder stability, image quality versus information capacity tradeoffs, and decoding issues with and without explicit knowledge of the base image are discussed.

5.
Multitoning is the representation of digital pictures using a given set of available color intensities, which are also known as tones or quantization levels. It can be viewed as the generalization of halftoning, where only two such quantization levels are available. Its main application is for printing and, similar to halftoning, can be applied to both colored and grayscale images. In this paper, we present a method to produce multitones based on the multiscale error diffusion technique. Key characteristics of this technique are: 1) the use of an image quadtree; 2) the quantization order of the pixels being determined through "maximum intensity guidance" on the image quadtree; and 3) noncausal error diffusion. Special care has been given to the problem of banding, which is one of the inherent limitations in error diffusion when applied to multitoning. Banding is evident in areas of the image with values close to one of the available quantization levels; our approach is to apply a preprocessing step to alleviate part of the problem. Our results are evaluated both in terms of visual appearance and using a set of standard metrics, with the latter demonstrating the blue-noise characteristics and very low anisotropy of the proposed method.

6.
Grayscale error diffusion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. Tone-dependent error diffusion (TDED) reduces these artifacts by controlling the diffusion of quantization errors based on the input graylevel. We present an extension of TDED to color. In color-error diffusion, which color to render becomes a major concern in addition to finding optimal dot patterns. We propose a visually meaningful scheme to train input-level (or tone-) dependent color-error filters. Our design approach employs a Neugebauer printer model and a color human visual system model that takes into account spatial considerations in color reproduction. The resulting halftones overcome several traditional error-diffusion artifacts and achieve significantly greater accuracy in color rendition.

7.
Modeling and quality assessment of halftoning by error diffusion
Digital halftoning quantizes a graylevel image to one bit per pixel. Halftoning by error diffusion reduces local quantization error by filtering the quantization error in a feedback loop. In this paper, we linearize error diffusion algorithms by modeling the quantizer as a linear gain plus additive noise. We confirm the accuracy of the linear model in three independent ways. Using the linear model, we quantify the two primary effects of error diffusion: edge sharpening and noise shaping. For each effect, we develop an objective measure of its impact on the subjective quality of the halftone. Edge sharpening is proportional to the linear gain, and we give a formula to estimate the gain from a given error filter. In quantifying the noise, we modify the input image to compensate for the sharpening distortion and apply a perceptually weighted signal-to-noise ratio to the residual of the halftone and modified input image. We compute the correlation between the residual and the original image to show when the residual can be considered signal independent. We also compute a tonality measure similar to total harmonic distortion. We use the proposed measures for edge sharpening, noise shaping, and tonality to evaluate the quality of error diffusion algorithms.
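The linearized model yields a closed-form signal transfer function, STF(z) = Ks / (1 + (Ks - 1) H(z)), where H is the error filter and Ks the linear gain. The quick numeric check below evaluates it for the Floyd-Steinberg filter along the horizontal frequency axis; Ks = 2 is an assumed value in the range reported for this filter, not a figure taken from this abstract.

```python
import cmath

def fs_stf(Ks, w):
    """|STF| of linearized error diffusion at horizontal frequency w.

    Uses the Floyd-Steinberg error filter evaluated at vertical
    frequency zero: weights 7/16 (right), 3/16 (below-left),
    5/16 (below), 1/16 (below-right).
    """
    H = (7 / 16) * cmath.exp(-1j * w) + (3 / 16) * cmath.exp(1j * w) \
        + 5 / 16 + (1 / 16) * cmath.exp(-1j * w)
    # serial linear-gain model: STF(z) = Ks / (1 + (Ks - 1) H(z))
    return abs(Ks / (1 + (Ks - 1) * H))
```

At DC the gain is exactly 1 (tone is preserved), while at high frequencies the magnitude exceeds 1 for Ks > 1, which is the edge-sharpening effect the abstract quantifies.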

8.
Hierarchical Error Diffusion
This paper develops a distinctive class of color error diffusion algorithms, called hierarchical error diffusion (HED). It aims to achieve perceptually pleasing color halftones through neither conventional joint quantization nor interchannel error diffusion. Instead, it explicitly controls three critical factors sequentially to yield high-quality color halftones: dot-overlapping control, dot-positioning control, and dot-coloring control. A specific implementation of HED is presented with the objective of minimum brightness variation rendering (MBVR). First, an optimal color transform is derived for dot-overlapping control to achieve minimum brightness variation color density (MBVCD). Then, embedded monochrome error diffusion is employed in dot-positioning control. By sequentially thresholding the elements of the partial density sum vector, better dot-positioning is encouraged for more visible color dots. The "blue noise" characteristics of dot-positioning from the monochrome error diffusion are inherited by the color halftone. A simple density priority strategy is applied in dot-coloring control. The pixel color error is diffused channel-independently with a single error filter in halftone dot color space. A comparison with state-of-the-art color error diffusion algorithms demonstrates the excellent halftone quality of HED, without the typical artifacts of vector error diffusion. Evidence also shows that HED comes closer to achieving MBVR than the previously proposed minimum brightness variation quantization (MBVQ) color diffusion algorithm.

9.
Digital halftoning is the process of generating a pattern of pixels with a limited number of colors that, when seen by the human eye, is perceived as a continuous-tone image. Digital halftoning is used to display continuous-tone images in media in which the direct rendition of the tones is impossible. The most common example of such media is ink or toner on paper, and the most common rendering devices for such media are, of course, printers. Halftoning works because the eye acts as a spatial low-pass filter that blurs the rendered pixel pattern, so that it is perceived as a continuous-tone image. Although all halftoning methods rely, at least implicitly, on some understanding of the properties of human vision and the display device, the goal of model-based halftoning techniques is to exploit explicit models of the display device and the human visual system (HVS) to maximize the quality of the displayed images. Based on the type of computation involved, halftoning algorithms can be broadly classified into three categories: point algorithms (screening or dithering), neighborhood algorithms (error diffusion), and iterative algorithms [least squares and direct binary search (DBS)]. All of these algorithms can incorporate HVS and printer models. The best halftone reproductions, however, are obtained by iterative techniques that minimize the (squared) error between the output of the cascade of the printer and visual models in response to the halftone image and the output of the visual model in response to the original continuous-tone image.
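Of the three categories, the point algorithms are the simplest to state: each pixel is compared against a position-dependent threshold from a tiled screen. A minimal screening example with the standard 4x4 Bayer matrix:

```python
def bayer_dither(image, width, height):
    """Point-algorithm halftoning: threshold each pixel of a flat list
    of floats in [0, 1] against a tiled 4x4 Bayer matrix."""
    bayer4 = [ 0,  8,  2, 10,
              12,  4, 14,  6,
               3, 11,  1,  9,
              15,  7, 13,  5]
    out = []
    for y in range(height):
        for x in range(width):
            # map matrix entry 0..15 to a threshold strictly inside (0, 1)
            t = (bayer4[(y % 4) * 4 + (x % 4)] + 0.5) / 16
            out.append(1 if image[y * width + x] > t else 0)
    return out
```

Because the 16 thresholds are evenly spaced, a constant 50% gray turns on exactly half the pixels in every 4x4 tile.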

10.
We describe a procedure by which Joint Photographic Experts Group (JPEG) compression may be customized for gray-scale images that are to be compressed before they are scaled, halftoned, and printed. Our technique maintains 100% compatibility with the JPEG standard, and is applicable with all scaling and halftoning methods. The JPEG quantization table is designed using frequency-domain characteristics of the scaling and halftoning operations, as well as the frequency sensitivity of the human visual system. In addition, the Huffman tables are optimized for low-rate coding. Compression artifacts are significantly reduced because they are masked by the halftoning patterns, and pushed into frequency bands where the eye is less sensitive. We describe how the frequency-domain effects of scaling and halftoning may be measured, and how to account for those effects in an iterative design procedure for the JPEG quantization table. We also present experimental results suggesting that the customized JPEG encoder typically maintains "near visually lossless" image quality at rates below 0.5 b/pixel (with reference to the number of pixels in the original image) when it is used with bilinear interpolation and either error diffusion or ordered dithering. Based on these results, we believe that in terms of the achieved bit rate, the performance of our encoder is typically at least 20% better than that of a JPEG encoder using the suggested baseline tables.

11.
In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous tone original color image and the color halftone image. We exploit the differences in how human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement based color printer dot interaction model to prevent the artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in resulting halftones. We present color halftones that demonstrate the efficacy of our method.
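A grayscale, toggles-only sketch of the DBS search loop: greedily flip halftone pixels whenever doing so lowers a lowpass-filtered squared error. The 3x3 lowpass is a crude stand-in for an HVS filter; the paper's version adds swaps, a luminance/chrominance metric, and a printer dot interaction model.

```python
def dbs(image, width, height, passes=3):
    """Toy direct binary search: greedy toggles against a filtered
    squared-error cost (recomputed in full for clarity, not speed)."""
    psf = [1, 2, 1, 2, 4, 2, 1, 2, 1]   # 3x3 lowpass, sums to 16
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 0),
            (0, 1), (1, -1), (1, 0), (1, 1)]

    def perceived_error(ht):
        total = 0.0
        for y in range(height):
            for x in range(width):
                acc = 0.0
                for w, (dy, dx) in zip(psf, offs):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < height and 0 <= xx < width:
                        acc += w * (ht[yy * width + xx]
                                    - image[yy * width + xx])
                total += (acc / 16.0) ** 2
        return total

    ht = [0] * (width * height)
    best = perceived_error(ht)
    for _ in range(passes):
        changed = False
        for i in range(width * height):
            ht[i] ^= 1                      # trial toggle
            trial = perceived_error(ht)
            if trial < best:
                best = trial                # keep the toggle
                changed = True
            else:
                ht[i] ^= 1                  # revert
        if not changed:                     # converged: no toggle helps
            break
    return ht
```

Production DBS implementations update the cost incrementally per toggle rather than recomputing it over the whole image, which is what makes the search practical at image scale.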

12.
Joint halftoning and watermarking
A framework to jointly halftone and watermark a grayscale image is presented. The framework requires the definition of three components: a human visual system (HVS)-based error metric between the continuous-tone image and a halftone, a watermarking scheme with a corresponding watermark detection measure, and a search strategy to traverse the space of halftones. We employ the HVS-based error metric used in the direct binary search (DBS) halftoning algorithm, and we use a block-based spread spectrum watermarking scheme and the toggle and swap search strategy of DBS. The halftone is printed on a desktop printer and scanned using a flatbed scanner. The watermark is detected from the scanned image and a number of post-processed versions of the scanned image, including one restored in Adobe Photoshop. The results show that the watermark is extremely resilient to printing, scanning, and post-processing; for a given baseline image quality, joint optimization is better than watermarking and halftoning independently. For this particular algorithm, the original continuous-tone image is required to detect the watermark.

13.
This paper studies video halftoning that renders a digital video sequence onto display devices, which have limited intensity resolutions and color palettes, by trading the spatiotemporal resolution for enhanced intensity/color resolution. This trade is needed when a continuous tone video is not necessary or not practical for video display, transmission, and storage. In particular, the quantization error of a pixel is diffused to its spatiotemporal neighbors by separable one-dimensional temporal and two-dimensional spatial error diffusions. Motion-adaptive gain control is employed to enhance the temporal consistency of the visual patterns by minimizing the flickering artifacts. Experimental results of halftone and colortone videos are demonstrated and evaluated with various halftoning techniques.

14.
Due to its high image quality and moderate computational complexity, error diffusion is a popular halftoning algorithm for use with inkjet printers. However, error diffusion is an inherently serial algorithm that requires buffering a full row of accumulated diffused error (ADE) samples. For the best performance when the algorithm is implemented in hardware, the ADE data should be stored on the chip on which the error diffusion algorithm is implemented. However, this may result in an unacceptable hardware cost. In this paper, we examine the use of quantization of the ADE to reduce the amount of data that must be stored. We consider both uniform and nonuniform quantizers. For the nonuniform quantizers, we build on the concept of tone-dependency in error diffusion, by proposing several novel feature-dependent quantizers that yield improved image quality at a given bit rate, compared to memoryless quantizers. The optimal design of these quantizers is coupled with the design of the tone-dependent parameters associated with error diffusion. This is done via a combination of the classical Lloyd-Max algorithm and the training framework for tone-dependent error diffusion. Our results show that 4-bit uniform quantization of the ADE yields the same halftone quality as error diffusion without quantization of the ADE. At rates that vary from 2 to 3 bits per pixel, depending on the selectivity of the feature on which the quantizer depends, the feature-dependent quantizers achieve essentially the same quality as 4-bit uniform quantization.
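For reference, the classical Lloyd-Max iteration the quantizer design builds on, in one dimension: alternate nearest-code assignment with centroid updates (k-means on scalars).

```python
def lloyd_max(samples, levels, iters=50):
    """Classical Lloyd-Max design of a scalar quantizer codebook.

    Alternates two steps: assign each sample to its nearest code,
    then move each code to the centroid of its assigned samples.
    """
    lo, hi = min(samples), max(samples)
    # start from a uniform codebook over the sample range
    codes = [lo + (hi - lo) * (k + 0.5) / levels for k in range(levels)]
    for _ in range(iters):
        buckets = [[] for _ in range(levels)]
        for s in samples:
            j = min(range(levels), key=lambda k: abs(s - codes[k]))
            buckets[j].append(s)
        # empty buckets keep their previous code unchanged
        codes = [sum(b) / len(b) if b else codes[k]
                 for k, b in enumerate(buckets)]
    return sorted(codes)
```

On well-separated clusters the iteration converges to the cluster means, which is the minimum mean squared error codebook for that data.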

15.
Inverse halftoning and kernel estimation for error diffusion
Two different approaches in the inverse halftoning of error-diffused images are considered. The first approach uses linear filtering and statistical smoothing that reconstructs a gray-scale image from a given error-diffused image. The second approach can be viewed as a projection operation, where one assumes the error diffusion kernel is known, and finds a gray-scale image that will be halftoned into the same binary image. Two projection algorithms, viz., minimum mean square error (MMSE) projection and maximum a posteriori probability (MAP) projection, that differ in the way the inverse quantization step is performed, are developed. Among the filtering and the two projection algorithms, MAP projection provides the best performance for inverse halftoning. Using techniques from adaptive signal processing, we suggest a method for estimating the error diffusion kernel from the given halftone. This means that the projection algorithms can be applied in the inverse halftoning of any error-diffused image without requiring any a priori information on the error diffusion kernel. It is shown that the kernel estimation algorithm combined with MAP projection provides the same inverse halftoning performance as the case where the error diffusion kernel is known.

16.
The classic signal quantization problem was introduced by Lloyd. We formulate another, similar problem: The optimal mapping of digital fine grayscale images (such as 9-13 bits-per-pixel medical images) to a coarser scale (e.g., 8 bits per pixel on conventional computer monitors). While the former problem is defined basically in the real signal domain with smoothly distributed noise, the latter refers to an essentially digital domain. As we show in this paper, it is this difference that makes the classic quantization methods virtually inapplicable in typical cases of requantization of the already digitized images. We found experimentally that an algorithm based on dynamic programming provides significantly better results than Lloyd's method.

17.
In this paper, we introduce two novel techniques for digital color halftoning with green noise: stochastic dither patterns generated by homogeneously distributing minority pixel clusters. The first technique employs error diffusion with output-dependent feedback where, unlike in monochrome image halftoning, an interference term is added such that the overlapping of pixels of different colors can be regulated for increased color control. The second technique uses a green-noise mask, a dither array designed to create green-noise halftone patterns, which has been constructed to also regulate the overlapping of different colored pixels. As is the case with monochrome image halftoning, both techniques are tunable, allowing for large clusters in printers with high dot-gain characteristics and small clusters in printers with low dot-gain characteristics.

18.
Color quantization and processing by Fibonacci lattices   总被引:1,自引:0,他引:1  
Color quantization is sampling of three-dimensional (3-D) color spaces (such as RGB or Lab) which results in a discrete subset of colors known as a color codebook or palette. It is extensively used for display, transfer, and storage of natural images in Internet-based applications, computer graphics, and animation. We propose a sampling scheme which provides a uniform quantization of the Lab space. The idea is based on several results from number theory and phyllotaxy. The sampling algorithm is highly systematic and allows easy design of universal (image-independent) color codebooks for a given set of parameters. The codebook structure allows fast quantization and ordered dither of color images. The display quality of images quantized by the proposed color codebooks is comparable with that of image-dependent quantizers. Most importantly, the quantized images are more amenable to the type of processing used for grayscale ones. Methods for processing grayscale images cannot be simply extended to color images because they rely on the fact that each gray-level is described by a single number and the fact that a relation of full order can be easily established on the set of those numbers. Color spaces (such as RGB or Lab) are, on the other hand, 3-D. The proposed color quantization, i.e., color space sampling and numbering of sampled points, makes methods for processing grayscale images extendible to color images. We illustrate possible processing of color images by first introducing the basic average and difference operations and then implementing edge detection and compression of color quantized images.

19.
Using vector quantization for image processing   总被引:1,自引:0,他引:1  
A review is presented of vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, which is a popular image compression algorithm. Compression has traditionally been done with little regard for image processing operations that may precede or follow the compression step. Recent work has used vector quantization both to simplify image processing tasks, such as enhancement, classification, halftoning, and edge detection, and to reduce the computational complexity by performing the tasks simultaneously with the compression. The fundamental ideas of vector quantization are explained, and vector quantization algorithms that perform image processing are surveyed.
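The core VQ mapping takes only a few lines: each input vector is replaced by the index of its nearest codebook entry under squared Euclidean distance. Codebook design (e.g., by the generalized Lloyd algorithm) is a separate step, assumed done here.

```python
def vq_encode(vectors, codebook):
    """Vector quantization: map each input vector to the index of the
    nearest codebook entry (squared Euclidean distance)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda j: d2(v, codebook[j]))
            for v in vectors]
```

The returned index stream is the compressed representation; decoding is a table lookup back into the codebook.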

20.
Ji Xin, Ji Xiaoping. 《电视技术》 (Video Engineering), 2015, 39(23): 101-105
Content-based image retrieval has long been an active research topic in the image field, and this paper proposes a new image retrieval algorithm that fuses vector quantization with local binary patterns (LBP). First, the color image is converted to the HSI color space and vector-quantization coded; the occurrence frequency of each codeword is counted to form a color histogram, completing the color feature extraction. Next, the color image is converted to grayscale and the LBP operator is used to extract texture features. Finally, similarity is computed as a weighted average of the color-feature and texture-feature similarities; the weights are varied over repeated experiments to find those that maximize precision. Experimental results show that the algorithm effectively improves image retrieval performance.
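A minimal sketch of the 8-neighbor LBP texture descriptor: each interior pixel gets a byte whose bits compare it against its eight neighbors, and the 256-bin histogram of those codes serves as the texture feature. The bit ordering is a convention chosen here; rotation-invariant and uniform-pattern LBP variants also exist.

```python
def lbp_histogram(gray, width, height):
    """8-neighbor local binary pattern codes over the interior pixels
    of a flat grayscale image, returned as a 256-bin histogram."""
    # clockwise neighbor offsets; bit i corresponds to offs[i]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            c = gray[y * width + x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                # set bit when the neighbor is at least as bright
                if gray[(y + dy) * width + (x + dx)] >= c:
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

On a constant image every neighbor ties with the center, so every interior pixel codes to 255; real textures spread mass across many bins, and histogram distance then measures texture similarity.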
