Similar Documents
20 similar documents found (search time: 496 ms)
1.
Joint halftoning and watermarking (cited by 2: 0 self-citations, 2 others)
A framework for jointly halftoning and watermarking a grayscale image is presented. The framework requires three components: a human visual system (HVS)-based error metric between the continuous-tone image and a halftone, a watermarking scheme with a corresponding watermark detection measure, and a search strategy to traverse the space of halftones. We employ the HVS-based error metric used in the direct binary search (DBS) halftoning algorithm, a block-based spread-spectrum watermarking scheme, and the toggle-and-swap search strategy of DBS. The halftone is printed on a desktop printer and scanned using a flatbed scanner. The watermark is detected from the scanned image and from a number of post-processed versions of it, including one restored in Adobe Photoshop. The results show that the watermark is extremely resilient to printing, scanning, and post-processing; for a given baseline image quality, joint optimization is better than watermarking and halftoning independently. For this particular algorithm, the original continuous-tone image is required to detect the watermark.
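The toggle half of the toggle-and-swap search can be sketched as follows. This is a minimal illustration, not the paper's implementation: the HVS model is reduced to a small Gaussian low-pass filter, the watermark term is omitted, and the error is recomputed from scratch for each trial toggle rather than updated incrementally (the names `hvs_filter` and `dbs_toggle` are ours):

```python
import numpy as np

def hvs_filter(img, sigma=1.0):
    """Blur with a small Gaussian kernel, a crude stand-in for the HVS low-pass model."""
    size = 5
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    kernel = np.outer(k, k)
    kernel /= kernel.sum()
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return out

def dbs_toggle(cont, iters=3):
    """Greedy toggle-only DBS sketch: flip a pixel whenever the flip lowers
    the HVS-filtered squared error against the continuous-tone original."""
    half = (cont >= 0.5).astype(float)          # initial halftone: simple thresholding
    target = hvs_filter(cont)
    for _ in range(iters):
        changed = False
        for i in range(cont.shape[0]):
            for j in range(cont.shape[1]):
                err_now = np.sum((hvs_filter(half) - target) ** 2)
                half[i, j] = 1.0 - half[i, j]   # trial toggle
                err_new = np.sum((hvs_filter(half) - target) ** 2)
                if err_new >= err_now:
                    half[i, j] = 1.0 - half[i, j]  # revert: toggle did not help
                else:
                    changed = True
        if not changed:
            break
    return half
```

A joint scheme of the kind described would add a watermark-correlation term to the accept/reject test for each toggle.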

2.
The authors previously proposed a lookup table (LUT) based method for inverse halftoning of images. The LUT for inverse halftoning is obtained from a histogram gathered from a few sample halftone images and the corresponding original images. Many of the entries in the LUT are unused because the corresponding binary patterns hardly ever occur in commonly encountered halftones; these are called nonexistent patterns. In this paper, we propose a tree structure that reduces the storage requirements of an LUT by avoiding nonexistent patterns. We demonstrate the performance on error-diffused images and ordered-dither images. Then, we introduce LUT-based halftoning and tree-structured LUT (TLUT) halftoning. Even though the TLUT method is more complex than LUT halftoning, it produces better halftones and requires much less storage. We demonstrate how error-diffusion characteristics can be achieved with this method. Afterwards, our algorithm is trained on halftones obtained by direct binary search (DBS). The complexity of TLUT halftoning is higher than that of the error-diffusion algorithm but much lower than that of the DBS algorithm. The halftone quality of TLUT halftoning also increases as the TLUT grows. Thus, depending on the size of the tree structure, the TLUT algorithm achieves halftone image quality between that of error diffusion and DBS.
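The basic LUT idea (before the tree-structured refinement) can be sketched as follows. This is an illustrative reconstruction using 3x3 binary patterns, with a local-mean fallback standing in for the paper's handling of nonexistent patterns:

```python
import numpy as np

def pattern_index(block):
    """Pack a 3x3 binary block into a 9-bit LUT index."""
    return int(np.dot(block.flatten(), 1 << np.arange(9)))

def build_lut(halftone, original):
    """Accumulate, for every observed 3x3 halftone pattern, the mean
    original gray value — a histogram-derived LUT as the abstract describes."""
    sums = np.zeros(512)
    counts = np.zeros(512)
    H, W = halftone.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            idx = pattern_index(halftone[i-1:i+2, j-1:j+2])
            sums[idx] += original[i, j]
            counts[idx] += 1
    lut = np.full(512, -1.0)                 # -1 marks a nonexistent pattern
    seen = counts > 0
    lut[seen] = sums[seen] / counts[seen]
    return lut

def inverse_halftone(halftone, lut):
    """Look each 3x3 pattern up; fall back to the local mean for patterns
    the LUT never saw during training (the 'nonexistent' entries)."""
    out = halftone.astype(float).copy()
    H, W = halftone.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            block = halftone[i-1:i+2, j-1:j+2]
            idx = pattern_index(block)
            out[i, j] = lut[idx] if lut[idx] >= 0 else block.mean()
    return out
```

The tree structure in the paper stores only the patterns that actually occur, instead of all 512 (or far more, for larger windows) entries.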

3.
In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous-tone original color image and the color halftone image. We exploit the differences in how human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance-based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement-based color printer dot interaction model to prevent artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in the resulting halftones. We present color halftones that demonstrate the efficacy of our method.
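A luminance/chrominance squared-error metric of the kind described can be sketched as below. The opponent-color transform and the chrominance down-weighting are illustrative stand-ins, not the paper's calibrated model:

```python
import numpy as np

# Simple RGB -> opponent (luminance/chrominance) transform. The rows and the
# chrominance weighting below are illustrative choices, not calibrated values.
OPP = np.array([[0.299, 0.587, 0.114],     # luminance (Y)
                [0.5,  -0.419, -0.081],    # red-green chrominance
                [-0.169, -0.331, 0.5]])    # blue-yellow chrominance

def perceptual_sq_error(rgb_a, rgb_b, chroma_weight=0.25):
    """Total squared error in a luminance/chrominance space, down-weighting
    the chrominance channels to reflect the eye's lower chromatic acuity."""
    ya = rgb_a @ OPP.T
    yb = rgb_b @ OPP.T
    diff = ya - yb
    w = np.array([1.0, chroma_weight, chroma_weight])
    return float(np.sum(w * diff ** 2))
```

In a DBS loop, this scalar would replace the grayscale squared-error term when evaluating trial toggles and swaps.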

4.
Hybrid LMS-MMSE inverse halftoning technique (cited by 1: 0 self-citations, 1 other)
The objective of this work is to reconstruct high-quality gray-level images from bilevel halftone images. We develop optimal inverse halftoning methods for several commonly used halftone techniques: dispersed-dot ordered dither, clustered-dot ordered dither, and error diffusion. First, the least-mean-square (LMS) adaptive filtering algorithm is applied to train the inverse halftone filters. The resulting optimal mask shapes differ significantly across halftone techniques, and they are also quite different from the square shape frequently used in the literature. Next, we further reduce the computational complexity by using lookup tables designed by the minimum mean square error (MMSE) method; the optimal masks obtained from the LMS method are used as the default filter masks. Finally, we propose the hybrid LMS-MMSE inverse halftone algorithm. It normally uses the MMSE table lookup method for its speed; when an empty cell is referenced, the LMS method is used to reconstruct the gray-level value. Consequently, the hybrid method combines excellent reconstruction quality with fast speed. In the experiments, error diffusion yields the best reconstruction quality among the three halftone techniques.
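The LMS training step can be sketched as follows, on a single halftone/original training pair. The function names are ours, and the mask here is a fixed square, whereas the paper also optimizes the mask shape:

```python
import numpy as np

def train_lms_inverse_filter(halftone, original, size=5, mu=0.01, epochs=10):
    """LMS training sketch: adapt a size x size filter so that filtering the
    halftone approximates the original gray image (stochastic gradient descent
    on the instantaneous squared error)."""
    w = np.full((size, size), 1.0 / size**2)   # start from an averaging mask
    pad = size // 2
    H, W = halftone.shape
    for _ in range(epochs):
        for i in range(pad, H - pad):
            for j in range(pad, W - pad):
                window = halftone[i-pad:i+pad+1, j-pad:j+pad+1]
                y = np.sum(w * window)
                e = original[i, j] - y
                w += mu * e * window           # LMS weight update
    return w

def apply_filter(halftone, w):
    """Run the trained inverse-halftone filter over the image interior."""
    size = w.shape[0]
    pad = size // 2
    H, W = halftone.shape
    out = np.zeros((H, W))
    for i in range(pad, H - pad):
        for j in range(pad, W - pad):
            out[i, j] = np.sum(w * halftone[i-pad:i+pad+1, j-pad:j+pad+1])
    return out
```

The MMSE lookup table in the hybrid method caches, per binary mask pattern, the best gray value, falling back to this filter for empty cells.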

5.
The direct binary search (DBS) algorithm employs a search heuristic to minimize the mean-squared perceptually filtered error between the halftone and continuous-tone original images. Based on an efficient method for evaluating the effect on the mean squared error of trial changes to the halftone image, we show that DBS also minimizes in a pointwise sense the absolute error under the same visual model, but at twice the viewing distance associated with the mean-squared error metric. This dual interpretation sheds light on the convergence properties of the algorithm, and clearly explains the tone bias that has long been observed with halftoning algorithms of this type. It also demonstrates how tone bias and texture quality are linked via the scale parameter, the product of printer resolution and viewing distance. Finally, we show how the tone bias can be eliminated by tone-correcting the continuous-tone image prior to halftoning it.
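The closing tone-correction idea can be sketched independently of DBS: measure the device's tone response on constant patches, then invert it before halftoning. The `render` callable below is supplied by the caller; in the test it is a stub simulating dot gain, not a real printer model:

```python
import numpy as np

def measure_tone_response(render, levels=17, size=16):
    """Render constant patches at several gray levels and record the mean
    tone each one actually produces on the device."""
    gs = np.linspace(0.0, 1.0, levels)
    outs = np.array([render(np.full((size, size), g)).mean() for g in gs])
    return gs, outs

def tone_correct(img, gs, outs):
    """Invert the measured tone response (assumed monotone increasing), so
    that rendering the corrected image reproduces the requested tones."""
    return np.interp(img, outs, gs)
```

Note that `np.interp` requires the measured responses to be increasing; a real calibration would enforce monotonicity first.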

6.
Inkjet printer model-based halftoning (cited by 3: 0 self-citations, 3 others)
The quality of halftone prints produced by inkjet (IJ) printers can be limited by random dot-placement errors. While a large literature addresses model-based halftoning for electrophotographic printers, little work has been done on model-based halftoning for IJ printers. In this paper, we propose model-based approaches to both iterative least-squares halftoning and tone-dependent error diffusion (TDED). The particular approach to iterative least-squares halftoning that we use is direct binary search (DBS). For DBS, we use a stochastic model for the equivalent gray-scale image, based on measured dot statistics of printed IJ halftone patterns. For TDED, we train the tone-dependent weights and thresholds to mimic the spectrum of halftone textures generated by model-based DBS. We do this under a metric that enforces both the correct radially averaged spectral profile and angular symmetry at each radial frequency. Experimental results generated with simulated printers and a real printer show that both IJ model-based DBS and IJ model-based TDED very effectively suppress IJ printer-induced artifacts.

7.
Digital halftoning is the process of generating a pattern of pixels with a limited number of colors that, when seen by the human eye, is perceived as a continuous-tone image. Digital halftoning is used to display continuous-tone images in media in which the direct rendition of the tones is impossible. The most common example of such media is ink or toner on paper, and the most common rendering devices for such media are, of course, printers. Halftoning works because the eye acts as a spatial low-pass filter that blurs the rendered pixel pattern, so that it is perceived as a continuous-tone image. Although all halftoning methods rely, at least implicitly, on some understanding of the properties of human vision and the display device, the goal of model-based halftoning techniques is to exploit explicit models of the display device and the human visual system (HVS) to maximize the quality of the displayed images. Based on the type of computation involved, halftoning algorithms can be broadly classified into three categories: point algorithms (screening or dithering), neighborhood algorithms (error diffusion), and iterative algorithms [least squares and direct binary search (DBS)]. All of these algorithms can incorporate HVS and printer models. The best halftone reproductions, however, are obtained by iterative techniques that minimize the (squared) error between the output of the cascade of the printer and visual models in response to the halftone image and the output of the visual model in response to the original continuous-tone image.
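The "neighborhood" category can be illustrated with classic Floyd–Steinberg error diffusion (a standard algorithm, not specific to this article):

```python
import numpy as np

def floyd_steinberg(img):
    """Classic Floyd-Steinberg error diffusion, the canonical 'neighborhood'
    algorithm in the taxonomy above. img holds gray values in [0, 1]."""
    x = img.astype(float).copy()
    H, W = x.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            old = x[i, j]
            new = 1.0 if old >= 0.5 else 0.0
            out[i, j] = new
            err = old - new
            # distribute the quantization error to unprocessed neighbors
            if j + 1 < W:
                x[i, j + 1] += err * 7 / 16
            if i + 1 < H:
                if j > 0:
                    x[i + 1, j - 1] += err * 3 / 16
                x[i + 1, j] += err * 5 / 16
                if j + 1 < W:
                    x[i + 1, j + 1] += err * 1 / 16
    return out
```

Point algorithms threshold each pixel independently against a screen, while the iterative methods (least squares, DBS) revisit pixels to minimize a modeled perceptual error.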

8.
A halftone watermarking method with high quality, robustness, and capacity flexibility is presented in this paper. An objective halftone image quality evaluation method, based on a human visual system model obtained by a least-mean-square algorithm, is also introduced. In the encoder, kernels-alternated error diffusion (KAEDF) is applied; it maintains computational complexity at the same level as ordinary error diffusion. Compared with the ordered-dithering approach of Hel-Or, the proposed KAEDF yields better image quality through error diffusion. We also propose a weighted lookup table (WLUT) in the decoder, instead of the lookup table (LUT) proposed by Pei and Guo, to achieve a higher decoding rate. As the experimental results demonstrate, this technique guards against degradation due to tampering, cropping, rotation, and print-and-scan processes in error-diffused halftone images.

9.
Error diffusion is a procedure for generating high-quality bilevel images from continuous-tone images so that both the continuous and halftone images appear similar when observed from a distance. It is well known that certain objectionable patterning artifacts can occur in error-diffused images. Here, we consider a method for adjusting the error-diffusion filter concurrently with the error-diffusion process so that an error criterion is minimized. The minimization is performed using the least mean squares (LMS) algorithm from adaptive signal processing. Using both raster and serpentine scanning, we show that such an algorithm produces better halftone image quality than traditional error diffusion with a fixed filter. Based on the adaptive error-diffusion algorithm, we propose a method for constructing a halftone image that can be rendered at multiple resolutions. Specifically, the method generates a halftone from a continuous-tone image such that if the halftone is down-sampled, the resulting binary image is also a high-quality rendition of the continuous-tone image at the reduced resolution. Such a halftone image is suitable for progressive transmission, and for cases where rendition at several resolutions is required. Noninteger scaling factors are also considered.

10.
A strategy for efficiently coding stereo video sequences is investigated. To fully utilize the suppression and contrast sensitivity properties of the human visual system, a novel coding scheme with two special mechanisms, a spatiotemporal HVS model and a binary correlation disparity estimator, is proposed to efficiently reduce video signal redundancy and computational complexity while maintaining high subjective image quality. Compared with existing stereo video coding systems, the proposed coding scheme supports a lower transmission bit rate and has less computational complexity. The simulation results also show that the subjective image quality of the reconstructed full-color stereo sequences at 0.25-0.4 bits per pixel (bpp) is satisfactory.

11.
To further improve the efficiency of video compression, we introduce perceptual characteristics of the human visual system (HVS) into video coding and propose a novel rate-control algorithm for H.264/AVC based on a human visual saliency model. First, we modify Itti's saliency model. Second, the target bits of each frame are allocated through the correlation of the saliency region between the current and previous frames, and the complexity of each MB is modified through the saliency value and its...
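The allocation idea can be caricatured in a few lines. Real H.264/AVC rate control also accounts for buffer state and macroblock complexity, so treat this purely as a sketch of saliency-proportional allocation:

```python
import numpy as np

def allocate_frame_bits(total_bits, saliency):
    """Toy rate-control sketch: split a frame's bit budget across macroblocks
    in proportion to their saliency values."""
    s = np.asarray(saliency, dtype=float)
    return total_bits * s / s.sum()          # salient MBs receive more bits
```
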

12.
Designing a halftone image watermarking scheme that is robust against desynchronization attacks is challenging. In this paper, we propose a feature-based digital watermarking method for halftone images with low computational complexity, good visual quality, and reasonable resistance to desynchronization attacks. First, feature points are extracted from the host halftone image using a multi-scale Harris–Laplace detector, and local feature regions (LFRs) are constructed according to feature scale theory. Second, the discrete Fourier transform (DFT) is performed on the LFRs, and the embedding positions (DFT coefficients) are selected adaptively according to the magnitude spectrum. Finally, the digital watermark is embedded into the LFRs by quantizing the magnitudes of the selected DFT coefficients. By binding the watermark to geometrically invariant halftone image features, watermark detection can be performed without synchronization error. Simulation results show that the proposed scheme is invisible and robust against common signal processing (median filtering, sharpening, noise addition, JPEG compression, etc.) and desynchronization attacks (rotation, scaling, translation (RST), cropping, local random bending, print-scan, etc.).
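Magnitude quantization of DFT coefficients can be illustrated with a basic quantization-index-modulation scheme on one coefficient pair. The coefficient choice, step size, and single-bit payload are illustrative, not the paper's parameters:

```python
import numpy as np

def embed_bit(block, bit, u=3, v=2, step=50.0):
    """Quantization-index-modulation sketch: force the magnitude of one DFT
    coefficient (and its conjugate-symmetric partner) onto an even (bit 0) or
    odd (bit 1) multiple of step, keeping the phase unchanged."""
    F = np.fft.fft2(block.astype(float))
    N, M = block.shape
    mag, phase = np.abs(F[u, v]), np.angle(F[u, v])
    q = int(np.round(mag / step))
    if q % 2 != bit:
        q += 1
    F[u, v] = q * step * np.exp(1j * phase)
    # mirror the change so the spectrum stays conjugate-symmetric (real output)
    F[(N - u) % N, (M - v) % M] = np.conj(F[u, v])
    return np.real(np.fft.ifft2(F))

def detect_bit(block, u=3, v=2, step=50.0):
    """Recover the bit from the parity of the quantized magnitude."""
    mag = np.abs(np.fft.fft2(block)[u, v])
    return int(np.round(mag / step)) % 2
```

Because DFT magnitudes are invariant to spatial translation, this style of embedding tolerates small desynchronizations.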

13.
Due to its high image quality and moderate computational complexity, error diffusion is a popular halftoning algorithm for use with inkjet printers. However, error diffusion is an inherently serial algorithm that requires buffering a full row of accumulated diffused error (ADE) samples. For the best performance when the algorithm is implemented in hardware, the ADE data should be stored on the chip on which the error diffusion algorithm is implemented. However, this may result in an unacceptable hardware cost. In this paper, we examine the use of quantization of the ADE to reduce the amount of data that must be stored. We consider both uniform and nonuniform quantizers. For the nonuniform quantizers, we build on the concept of tone-dependency in error diffusion, by proposing several novel feature-dependent quantizers that yield improved image quality at a given bit rate, compared to memoryless quantizers. The optimal design of these quantizers is coupled with the design of the tone-dependent parameters associated with error diffusion. This is done via a combination of the classical Lloyd-Max algorithm and the training framework for tone-dependent error diffusion. Our results show that 4-bit uniform quantization of the ADE yields the same halftone quality as error diffusion without quantization of the ADE. At rates that vary from 2 to 3 bits per pixel, depending on the selectivity of the feature on which the quantizer depends, the feature-dependent quantizers achieve essentially the same quality as 4-bit uniform quantization.
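Quantizing the ADE buffer can be sketched as below: a Floyd–Steinberg-style raster pass in which the error row carried to the next line passes through a uniform quantizer. The quantizer range and the plain uniform design are our simplification, not the paper's tone/feature-dependent quantizers:

```python
import numpy as np

def quantize_ade(e, bits=4, lo=-1.0, hi=1.0):
    """Uniform quantizer for accumulated diffused error (ADE) samples."""
    levels = (1 << bits) - 1
    e = np.clip(e, lo, hi)
    return lo + np.round((e - lo) / (hi - lo) * levels) * (hi - lo) / levels

def error_diffusion_quantized_ade(img, bits=4):
    """Floyd-Steinberg-style diffusion in which the row of ADE carried to the
    next line is quantized, mimicking the storage-reduction idea above."""
    H, W = img.shape
    out = np.zeros((H, W))
    carry = np.zeros(W)                    # ADE buffer for the next row
    for i in range(H):
        row = img[i].astype(float) + quantize_ade(carry, bits)
        carry = np.zeros(W)
        right = 0.0                        # error flowing along the current row
        for j in range(W):
            old = row[j] + right
            new = 1.0 if old >= 0.5 else 0.0
            out[i, j] = new
            err = old - new
            right = err * 7 / 16
            if j > 0:
                carry[j - 1] += err * 3 / 16
            carry[j] += err * 5 / 16
            if j + 1 < W:
                carry[j + 1] += err * 1 / 16
    return out
```

With `bits=4`, each ADE sample needs 4 bits of on-chip storage instead of a full-precision word.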

14.
This paper is a tradeoff study of image processing algorithms that can be used to transform continuous tone and halftone pictorial image input into spatially encoded representations compatible with binary output processes. A large percentage of the electronic output marking processes utilize a binary mode of operation. The history and rationale for this are reviewed and thus the economic justification for the tradeoff is presented. A set of image quality and processing complexity metrics are then defined. Next, a set of algorithms including fixed and adaptive thresholding, orthographic pictorial fonts, electronic screening, ordered dither, and error diffusion are defined and evaluated relative to their ability to reproduce continuous tone input. Finally, these algorithms, along with random nucleated halftoning, the alias reducing image enhancement system (ARIES), and a new algorithm, selective halftone rescreening (SHARE), are defined and evaluated as to their ability to reproduce halftone pictorial input.

15.
Foveation scalable video coding with automatic fixation selection (cited by 3: 0 self-citations, 3 others)
Image and video coding is an optimization problem. A successful image and video coding algorithm delivers a good tradeoff between visual quality and other coding performance measures, such as compression, complexity, scalability, robustness, and security. In this paper, we follow two recent trends in image and video coding research. One is to incorporate human visual system (HVS) models to improve the current state-of-the-art of image and video coding algorithms by better exploiting the properties of the intended receiver. The other is to design rate scalable image and video codecs, which allow the extraction of coded visual information at continuously varying bit rates from a single compressed bitstream. Specifically, we propose a foveation scalable video coding (FSVC) algorithm which supplies good quality-compression performance as well as effective rate scalability. The key idea is to organize the encoded bitstream to provide the best decoded video at an arbitrary bit rate in terms of foveated visual quality measurement. A foveation-based HVS model plays an important role in the algorithm. The algorithm is adaptable to different applications, such as knowledge-based video coding and video communications over time-varying, multiuser and interactive networks.
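A foveation weight map of the kind such a codec relies on can be sketched as follows. The falloff form and constants are common CSF-inspired choices, illustrative rather than the paper's calibrated model:

```python
import numpy as np

def foveation_weights(shape, fixation, half_res_ecc=2.3, pixels_per_degree=32.0):
    """Foveation weight map: relative contrast sensitivity falls off with
    eccentricity from the fixation point. half_res_ecc is the eccentricity
    (in degrees) at which sensitivity drops to half; both constants here are
    illustrative assumptions."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    dist_px = np.hypot(yy - fixation[0], xx - fixation[1])
    ecc_deg = dist_px / pixels_per_degree   # eccentricity in degrees of visual angle
    return half_res_ecc / (half_res_ecc + ecc_deg)
```

Bitstream layers covering high-weight (near-fixation) regions would be transmitted first, giving the best foveated quality at any truncation point.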

16.
Detecting traffic anomalies has always been an important task in traffic management, and it is especially important in intelligent transportation systems. Traditional detection methods first distinguish the target objects (pedestrians and vehicles) and then judge whether the trajectories of the extracted vehicles are anomalous. With traffic volumes increasing daily, this approach adds to the computational burden. To address this complexity, this paper proposes a pixel-based background method: anomalous pixels in the video are first identified by combining a hidden Markov model (HMM) with a co-occurrence model, and the vehicles formed by these anomalous pixels are then identified using a quasi-3D vehicle model. Experimental results show that the algorithm is effective and well suited to intelligent transportation systems.

17.
Because of its good image quality and moderate computational requirements, error diffusion has become a popular halftoning solution for desktop printers, especially inkjet printers. By making the weights and thresholds tone-dependent and using a predesigned halftone bitmap for tone-dependent threshold modulation, it is possible to achieve image quality very close to that obtained with far more computationally complex iterative methods. However, the ability to implement error diffusion in very low cost or large format products is hampered by the requirement to store the tone-dependent parameters and halftone bitmap, and also the need to store error information for an entire row of the image at any given point during the halftoning process. For the first problem, we replace the halftone bitmap by deterministic bit flipping, which has been previously applied to halftoning, and we linearly interpolate the tone-dependent weights and thresholds from a small set of knot points. We call this implementation a reduced lookup table. For the second problem, we introduce a new serial block-based approach to error diffusion. This approach depends on a novel intrablock scan path and the use of different parameter sets at different points along that path. We show that serial block-based error diffusion reduces off-chip memory access by a factor equal to the block height. With both these solutions, satisfactory image quality can only be obtained with new cost functions that we have developed for the training process. With these new cost functions and moderate block size, we can obtain image quality that is very close to that of the original tone-dependent error diffusion algorithm.
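Interpolating tone-dependent parameters from a few knot points can be sketched as follows. The knot table below is made up for illustration; the paper trains its knot values:

```python
import numpy as np

# Hypothetical knot table: for a few gray levels, a set of four diffusion
# weights (right, below-left, below, below-right). Values are illustrative.
KNOT_TONES = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
KNOT_WEIGHTS = np.array([
    [7, 3, 5, 1],
    [6, 4, 4, 2],
    [7, 3, 5, 1],
    [6, 4, 4, 2],
    [7, 3, 5, 1],
], dtype=float) / 16.0

def weights_for_tone(g):
    """Linearly interpolate the tone-dependent weights from the knot points
    (the reduced-lookup-table idea), renormalizing so diffused error is
    conserved."""
    w = np.array([np.interp(g, KNOT_TONES, KNOT_WEIGHTS[:, k]) for k in range(4)])
    return w / w.sum()
```

Storing only the knots replaces a full 256-entry-per-parameter table with a handful of values plus one interpolation per pixel.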

18.
The development of objective image quality assessment (IQA) metrics aligned with human perception is of fundamental importance to numerous image-processing applications. Recently, human visual system (HVS)-based engineering algorithms have received widespread attention for their low computational complexity and good performance. In this paper, we propose a new IQA model by incorporating these available engineering principles. A local singular value decomposition (SVD) is first utilised as a structural projection tool to select local image distortion features, and then, both perceptual spatial pooling and neural networks (NN) are employed to combine feature vectors to predict a single perceptual quality score. Extensive experiments and cross-validations conducted with three publicly available IQA databases demonstrate the accuracy, consistency, robustness, and stability of the proposed approach compared to state-of-the-art IQA methods, such as Visual Information Fidelity (VIF), Visual Signal to Noise Ratio (VSNR), and Structural Similarity Index (SSIM).
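The local-SVD feature idea can be sketched as a block-wise singular-value comparison; the trained pooling and neural-network regression stages are omitted here and replaced by a plain average:

```python
import numpy as np

def svd_feature_distance(ref, dist, block=8):
    """Local-SVD sketch: compare the singular-value vectors of co-located
    blocks of the reference and distorted images. A larger average distance
    suggests stronger structural distortion."""
    H, W = ref.shape
    dists = []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            s_r = np.linalg.svd(ref[i:i+block, j:j+block], compute_uv=False)
            s_d = np.linalg.svd(dist[i:i+block, j:j+block], compute_uv=False)
            dists.append(np.linalg.norm(s_r - s_d))
    return float(np.mean(dists))
```

In the full model, the per-block distances would be pooled with perceptual spatial weights and mapped to a quality score by a trained regressor.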

19.
Digital color halftoning is the process of transforming continuous-tone color images into images with a limited number of colors. The importance of this process arises from the fact that many color imaging systems use output devices such as color printers and low-bit depth displays that are bilevel or multilevel with a few levels. The goal is to create the perception of a continuous-tone color image using the limited spatiochromatic discrimination capability of the human visual system. In decreasing order of how locally algorithms transform a given image into a halftone and, therefore, in increasing order of computational complexity and halftone quality, monochrome digital halftoning algorithms can be placed in one of three categories: 1) point processes (screening or dithering), 2) neighborhood algorithms (error diffusion), and 3) iterative methods. All three of these algorithm classes can be generalized to digital color halftoning with some modifications. For an in-depth discussion of monochrome halftoning algorithms, the reader is directed to the July 2003 issue of IEEE Signal Processing Magazine. In the remainder of this article, we only address those aspects of halftoning that specifically have to do with color. For a good overview of digital color halftoning, the reader is directed to Haines et al. (2003). In addition, Agar et al. (2003) contains a more in-depth treatment of some of the material found in this work.

20.
By simplifying the complexity of the human perceptual model used in video quality assessment, this paper proposes a new no-reference video quality assessment model. Spatial-domain and temporal-domain features are first extracted separately and then aggregated by multiple perceptually motivated weightings at granularities from fine to coarse (local blocks, frames, and video segments), yielding a feature-vector description of the entire video. The method uses a support vector regressor as the assessment model; after supervised training on a video sample library, it assesses the quality of unseen videos in a no-reference manner. Experimental results show that the algorithm not only outperforms Video BLIINDS, the best-known classical no-reference metric, but is also comparable to some reduced-reference metrics.
