20 similar documents found (search time: 15 ms)
1.
This paper presents a colorization method in the YCbCr color space, based on the maximum a posteriori estimation of a color image given a monochrome image, as in our previous method in the RGB color space. The method in YCbCr space is much simpler than that in RGB space and requires much less computation time, while both methods produce color images with comparable PSNR values. The proposed YCbCr colorization is applied to JPEG-compressed color images, aiming at better recovery of the downsampled chrominance planes. Experimental results show that colorization in YCbCr is usually effective for improving the quality of JPEG color images.
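The method operates on the chrominance planes of the YCbCr representation. A minimal sketch of the full-range BT.601 conversion used by JPEG/JFIF (an assumption for illustration; the paper does not state its exact matrix):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 conversion as used by JPEG (JFIF)."""
    m = np.array([[ 0.299,     0.587,     0.114],
                  [-0.168736, -0.331264,  0.5],
                  [ 0.5,      -0.418688, -0.081312]])
    ycbcr = rgb @ m.T
    ycbcr[..., 1:] += 128.0          # offset chroma to [0, 255]
    return ycbcr

def ycbcr_to_rgb(ycbcr):
    """Inverse of the transform above."""
    m_inv = np.array([[1.0,  0.0,       1.402],
                      [1.0, -0.344136, -0.714136],
                      [1.0,  1.772,     0.0]])
    x = ycbcr.astype(float).copy()
    x[..., 1:] -= 128.0
    return x @ m_inv.T
```

The round trip is lossless up to floating-point precision, which is why colorizing only Cb/Cr leaves the given luminance untouched.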
2.
This paper presents a colorization algorithm that adds color to monochrome images. The colorization problem is formulated as the maximum a posteriori (MAP) estimation of a color image given a monochrome image. A Markov random field (MRF) is used to model the color image and serves as the prior for the MAP estimation. The MAP estimation problem for the whole image is decomposed into local MAP estimation problems for each pixel. Using 0.6% of all pixels as references, the proposed method produced high-quality color images with PSNR values of 25.7-32.6 dB on eight test images.
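A toy illustration of decomposing the estimation into local per-pixel updates, assuming a simple Gaussian MRF prior; under that prior each local MAP solution is the mean of the neighbours, with reference pixels held fixed. The luminance-dependent weighting of the actual method is omitted here:

```python
import numpy as np

def colorize(ref_mask, ref_chroma, iters=200):
    """Fill one chrominance plane by repeated local MAP updates:
    each unknown pixel moves to the mean of its 4-neighbours
    (the conditional mode of a Gaussian MRF), while reference
    pixels keep their given chrominance."""
    c = np.where(ref_mask, ref_chroma, 0.0).astype(float)
    for _ in range(iters):
        pad = np.pad(c, 1, mode='edge')
        mean = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        c = np.where(ref_mask, ref_chroma, mean)
    return c
```

With enough iterations the plane converges to the discrete harmonic interpolation of the sparse references, which is the closed-form MAP solution for this simplified prior.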
3.
This paper proposes a novel single-image super-resolution algorithm based on linear Bayesian maximum a posteriori (MAP) estimation and sparse representation. Starting from several probability distribution priors on the representation vector, we develop a linear Bayesian MAP estimator to recover the most probable high-resolution (HR) image behind the low-resolution (LR) observation. The new algorithm involves three main steps: (1) obtaining an initial estimate of the HR image via bicubic interpolation, (2) performing sparse coding on the initial estimate to get the representation vector and its support, (3) using the MAP estimator to restore the desired representation vector and then reconstructing the HR output. Simulation results show that the proposed method achieves competitive performance both in subjective visual quality and in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) measures, compared with other state-of-the-art super-resolution methods.
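The core of step (3) can be illustrated with a generic linear Bayesian MAP estimator under zero-mean Gaussian priors. This is a simplification for illustration, not the paper's sparse-representation estimator:

```python
import numpy as np

def linear_map_estimate(A, y, noise_var, prior_var):
    """MAP estimate of x in y = A x + n, with Gaussian prior
    x ~ N(0, prior_var * I) and noise n ~ N(0, noise_var * I):
        x* = (A^T A + lam * I)^(-1) A^T y,
    where lam = noise_var / prior_var (ridge-regularized
    least squares)."""
    lam = noise_var / prior_var
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

As the noise variance shrinks relative to the prior variance, the estimate approaches the ordinary least-squares solution; a broader prior likewise weakens the regularization.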
4.
To reduce communication bandwidth or storage space, image compression is needed. However, the subjective quality of compressed images may be unacceptable, making quality improvement desirable. This paper extends and modifies classified vector quantization (CVQ) to improve the quality of compressed images. The process consists of two phases: encoding and decoding. The encoding phase uses a codebook that transforms a compressed image into a set of codeword indices; the decoding phase uses a different codebook that enhances the compressed image from that set of codeword indices. Using CVQ to improve a compressed image's quality differs from the existing algorithm, which cannot reconstruct the high-frequency components of compressed images. The experimental results show that image quality is improved dramatically: for images in the training set, the PSNR improvement is about 3 dB; for images outside the training set, it is about 0.57 dB, which is comparable to the existing method.
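A minimal, unclassified VQ encoder/decoder pair showing the codeword-index mechanics (the classified variant would first assign each block to an edge/shade class and select a per-class codebook):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each block (row vector) to the index of its nearest
    codeword in Euclidean distance."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct blocks by simple codebook lookup."""
    return codebook[indices]
```

Only the indices need to be transmitted; the enhancement in the paper comes from decoding with a second codebook trained on high-quality blocks rather than the encoder's own.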
5.
Local-operator TV (total variation) models in their various forms often suffer from edge blurring, texture blurring, staircase artifacts, and mosaic artifacts when used to denoise color images. We therefore generalize the traditional local Tikhonov, TV, MTV (multi-channel total variation), and CTV (color total variation) models to non-local-operator counterparts: the NL-CT (non-local color Tikhonov), NL-LTV (non-local layered total variation), NL-MTV (non-local multi-channel total variation), and NL-CTV (non-local color total variation) models, and we design corresponding fast Split Bregman algorithms by introducing auxiliary variables and Bregman iteration parameters. Experimental results show that the proposed non-local TV models resolve the problems of their local counterparts and preserve texture, edges, and smoothness well; NL-CTV gives the best results but has the lowest computational efficiency.
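The Split Bregman machinery can be sketched in one dimension for plain anisotropic TV denoising (local and scalar; the non-local color models above replace the difference operator with non-local weights and add channel coupling):

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding, the closed-form d-subproblem solution."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_denoise_1d(f, mu=10.0, lam=1.0, iters=100):
    """Split Bregman for min_u (mu/2)||u - f||^2 + ||D u||_1,
    with the splitting d = D u enforced via Bregman variable b."""
    n = len(f)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # forward differences
    A = mu * np.eye(n) + lam * D.T @ D
    u = f.astype(float).copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(iters):
        # quadratic u-subproblem (tridiagonal linear system)
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        d = shrink(D @ u + b, 1.0 / lam)       # shrinkage step
        b = b + D @ u - d                      # Bregman update
    return u
```

The alternation of a cheap linear solve with a pointwise shrinkage is what makes the scheme fast; the non-local versions keep the same structure with a different operator D.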
6.
7.
A region-scalable image retrieval method based on JPEG2000 is proposed. First, the image is compressed with the JPEG2000 baseline algorithm. The compressed codestream is then organized into zerotree structures, each corresponding to a rectangular grid region of the image, whose content can be progressively reconstructed from the DWT coefficients contained in that zerotree. Retrieval starts from a preview of the image's lowest-resolution subimage (the LL subband); once a target is found, it is locked, and the content of the grid cells containing it is refined progressively in both spatial resolution and signal-to-noise ratio. Because only part of the codestream is decoded to recover the local content, substantial computation is saved. Finally, experimental examples of region-scalable retrieval based on JPEG2000 are given.
8.
This paper investigates the application of variations of Stochastic Relaxation with Annealing (SRA), as proposed by Geman and Geman [1], to the Bayesian restoration of binary images corrupted by white noise. After a general review, we present some specific prior models and show examples of their application. A proper selection of the prior model appears critical to the success of the method: we obtained better results on artificial images that fitted the model closely than on real images for which there was no precise model.
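A deterministic stand-in for annealed stochastic relaxation is ICM (iterated conditional modes) under an Ising prior. The sketch below restores a {-1, +1} image and is only illustrative of the posterior energy being minimized; full SRA would sample labels at a decreasing temperature instead of taking the greedy mode:

```python
import numpy as np

def icm_restore(noisy, beta=1.0, eta=1.0, sweeps=5):
    """Greedy MAP restoration of a {-1,+1} image under an Ising
    prior.  Local energy at pixel (i, j):
        -eta * x_ij * y_ij  -  beta * sum_{neighbours} x_ij * x_nb,
    so the minimizing label is the sign of the bracketed field."""
    x = noisy.astype(float).copy()
    h, w = x.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nb = 0.0
                if i > 0:     nb += x[i - 1, j]
                if i < h - 1: nb += x[i + 1, j]
                if j > 0:     nb += x[i, j - 1]
                if j < w - 1: nb += x[i, j + 1]
                # data term pulls toward the observation, prior term
                # toward neighbourhood agreement
                x[i, j] = 1 if eta * noisy[i, j] + beta * nb >= 0 else -1
    return x
```

ICM converges to a local minimum only, which mirrors the paper's observation that success depends strongly on how well the prior fits the image.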
9.
A binarization method for color images of the human body. Total citations: 1 (self-citations: 0, citations by others: 1)
To binarize color images of the human body quickly and effectively, a new adaptive binarization method is proposed. Based on the characteristics of human skin color, a skin-color model is built; using this model, skin and non-skin pixels are segmented, and the result is converted directly into a binary image. Because skin-color values in such images shift under different illumination and environments, an adaptive Gamma correction is applied during skin segmentation to make the model robust to these variations. Experimental results show that the proposed binarization method is fast and effective.
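A sketch of the two ingredients, with illustrative Cb/Cr box thresholds (a common generic skin model, not the paper's fitted one) and an explicit gamma parameter standing in for the adaptive choice:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Brightness normalization on [0, 255] data; in the paper the
    exponent would be chosen adaptively from the image's mean
    luminance rather than passed in."""
    return 255.0 * (np.asarray(img, dtype=float) / 255.0) ** gamma

def skin_binarize(ycbcr, cb_range=(77, 127), cr_range=(133, 173)):
    """Classic Cb/Cr box skin model on a YCbCr image (H x W x 3);
    returns a 0/255 binary mask."""
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
    return mask.astype(np.uint8) * 255
```

Running the gamma step before segmentation compresses or expands the intensity range so that the fixed chroma box stays valid under changed lighting.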
10.
A quality assessment method based on spatial information is proposed for color inverse-halftoned images. First, the image is transformed into the perceptually uniform S-CIELab color space; a perceived color-difference image and a perceived gradient image are then defined to construct a quantitative index of inverse-halftoned image quality. Simulation results show that the index agrees with human visual perception and is largely consistent with subjective evaluation, providing a basis for designing content-adaptive color inverse-halftoning methods.
11.
K.O. Cheng, N.F. Law, et al. 《Pattern Recognition》2010, 43(10): 3314-3323
In this paper, fast feature extraction algorithms are presented for the joint retrieval of images compressed in the JPEG and JPEG2000 formats. To avoid full decoding, three fast algorithms that convert the block-based discrete cosine transform (BDCT) into a wavelet transform are developed, so that wavelet-based features can be extracted from JPEG images just as from JPEG2000 images. The first algorithm exploits the similarity between the BDCT and the wavelet packet transform. For the second and third algorithms, either the first algorithm or an existing algorithm known as multiresolution reordering is first applied to obtain the bandpass subbands at fine scales and the lowpass subband; a new filter bank structure is then applied to the subbands at the coarse scale to reduce the mismatch in low-frequency features. Compared with extraction based on full decoding, computational complexity is reduced by more than 72%. Retrieval experiments also show that the three proposed algorithms achieve higher precision and recall than multiresolution reordering, especially in the typical range of compression ratios.
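The multiresolution-reordering baseline mentioned above can be sketched as a pure coefficient regrouping: plane (u, v) collects coefficient (u, v) from every block, giving a wavelet-style subband layout without any filtering (the paper's filter-bank conversions then refine the coarse-scale subbands):

```python
import numpy as np

def bdct_to_subbands(coeffs, block=8):
    """Reorder block-DCT coefficients (H x W, H and W multiples of
    `block`) into subband planes.  The output region
    [u*bh:(u+1)*bh, v*bw:(v+1)*bw] holds coefficient (u, v) taken
    from each block, with bh = H/block, bw = W/block."""
    h, w = coeffs.shape
    bh, bw = h // block, w // block
    blocks = coeffs.reshape(bh, block, bw, block)
    return blocks.transpose(1, 0, 3, 2).reshape(block * bh, block * bw)
```

The DC plane at the top-left then plays the role of the lowpass subband from which coarse retrieval features are computed.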
12.
To overcome the inability of common spatial-domain color image filtering algorithms to remove noise while preserving image detail, an edge- and detail-preserving filtering algorithm for color images is proposed. The RGB image to be processed is decomposed into its R, G, and B component images; each component is denoised with a filter that balances noise removal against detail preservation, and the three filtered components are recombined into an RGB color image. Because the algorithm selects different filtering masks according to the membership class of the pixel being processed, it effectively removes noise from the color image while preserving its edges and details, remedying the shortcoming of common spatial-domain color filters.
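A minimal split/filter/recombine pipeline, with a single 3x3 median standing in for the paper's class-dependent masks:

```python
import numpy as np

def median3(channel):
    """3x3 median filter on one plane; edges handled by replicate
    padding."""
    pad = np.pad(channel, 1, mode='edge')
    stack = [pad[i:i + channel.shape[0], j:j + channel.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

def filter_rgb(img):
    """Split an H x W x 3 image into R, G, B planes, filter each
    independently, and recombine."""
    return np.stack([median3(img[..., k]) for k in range(3)], axis=-1)
```

In the actual algorithm, the per-pixel mask choice (rather than a fixed median) is what preserves edges while still suppressing impulse noise.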
13.
The CLADYN compressor relies on the topological properties of a color image to achieve the highest possible data-rate reduction with the minimum amount of visible degradation. In the case of digital color TV, the input data at 166 MB/s are the CCIR-specified luminance Y and chrominance B-Y, R-Y components. The compressor output is 25.26 MB/s, exclusive of sound channels and error-correcting overhead. High reconstructed picture quality is obtained (at a ratio of 6.56/1) without exploiting any temporal redundancy of the TV signal. In the case of multispectral images, excellent image reconstruction is obtained, with a signal-to-quantization-noise ratio ranging from 40 to 50 dB and a data-rate reduction factor higher than 4/1 for scenes comprising 3-5 spectral channels. This type of compressor is not very sensitive to channel misregistration, is robust to the propagation of transmission errors, and outputs fixed-length words.
14.
In this work, a new multisecret sharing scheme for secret color images among a set of users is proposed. The protocol allows each participant to share a secret color image with the rest of the participants in such a way that all of the secret color images can be recovered only if the whole set of participants pools their shadows. The proposed scheme is based on two-dimensional reversible cellular automata with memory. The security of the scheme is studied, and it is proved that the protocol is ideal and perfect and that it resists the most important statistical attacks.
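A toy XOR-based scheme conveying the "all shadows required" property: each user holds a random pad, and each secret is published masked by the XOR of every pad, so all pads together (and only all of them) reveal every secret. This one-time-pad masking is an illustrative stand-in for the reversible memory cellular automata of the actual protocol:

```python
import numpy as np

def share_secrets(secrets, n_users, seed=0):
    """Return (pads, ciphers): one uint8 pad per user, and one
    masked image c_k = s_k XOR (pad_1 XOR ... XOR pad_n) per
    secret.  All secrets share the same shape here for brevity."""
    rng = np.random.RandomState(seed)
    pads = [rng.randint(0, 256, secrets[0].shape).astype(np.uint8)
            for _ in range(n_users)]
    key = np.zeros_like(secrets[0])
    for p in pads:
        key ^= p
    ciphers = [s ^ key for s in secrets]
    return pads, ciphers

def recover_secrets(pads, ciphers):
    """Pooling every pad reconstructs the key and hence all secrets."""
    key = np.zeros_like(ciphers[0])
    for p in pads:
        key ^= p
    return [c ^ key for c in ciphers]
```

Missing any single pad leaves the key uniformly random, which is the "perfect" property the paper proves for its cellular-automata construction.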
15.
诸葛斌, 《计算机工程与应用》 (Computer Engineering and Applications), 2008, 44(2): 109-111
A new algorithm is proposed for high-quality, real-time 3D surface reconstruction from the color photographic data of digital-human datasets. An interactive segmentation platform is used to extract the 3D surface point set of a single organ from the color volume data; surface normals are then estimated from the gray-level gradient computed on the filtered binary volume. Finally, the organ's 3D surface is described by colored surface points, which are rendered through the graphics card's OpenGL interface. On a PC, the liver and lungs from the photographic dataset of the U.S. Visible Human were reconstructed at more than 25 frames/s while preserving image quality. The proposed algorithm achieves high-quality, real-time 3D surface reconstruction of high-resolution color volume data.
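The normal-estimation step can be sketched with central differences on the (already low-pass filtered) gray volume; the normal is the negated, normalized gradient so that it points away from the bright interior:

```python
import numpy as np

def surface_normals(vol, points):
    """Estimate unit normals at integer voxel coordinates `points`
    (each strictly inside the volume) as the negated, normalized
    central-difference gradient of a gray-valued 3-D array."""
    normals = []
    for x, y, z in points:
        g = np.array([vol[x + 1, y, z] - vol[x - 1, y, z],
                      vol[x, y + 1, z] - vol[x, y - 1, z],
                      vol[x, y, z + 1] - vol[x, y, z - 1]]) / 2.0
        n = -g / (np.linalg.norm(g) + 1e-12)   # avoid division by zero
        normals.append(n)
    return np.array(normals)
```

These per-point normals are what the renderer would hand to OpenGL for lighting the colored surface points.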
16.
17.
When performing block-matching motion estimation with an ML estimator, one matches blocks from the two images within a predefined search area. The estimated motion vector is the one that maximizes a likelihood function formulated according to the image formation model. Two new maximum likelihood motion estimation schemes for ultrasound images are presented. The new likelihood functions are based on the assumption that both images are contaminated by Rayleigh-distributed multiplicative noise. The new approach enables motion estimation in cases where a noiseless reference image is not available. Experimental results show improved motion estimation relative to other known ML estimation methods.
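A block-matching skeleton with a log-domain SSD cost as a crude surrogate for the multiplicative-noise ML criterion (the paper derives the exact Rayleigh likelihood instead; log-compression merely turns multiplicative perturbations into additive ones):

```python
import numpy as np

def block_match(ref_block, target, search=4):
    """Exhaustive block matching: slide `ref_block` over `target`
    within +/- `search` pixels of its nominal position (assumed at
    offset (search, search) in `target`) and return the displacement
    (dy, dx) minimizing the SSD of log-intensities."""
    bh, bw = ref_block.shape
    best, best_cost = (0, 0), np.inf
    logb = np.log(ref_block + 1.0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = search + dy, search + dx
            cand = target[y0:y0 + bh, x0:x0 + bw]
            cost = ((np.log(cand + 1.0) - logb) ** 2).sum()
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```

Swapping the cost function for the Rayleigh log-likelihood, as the paper does, keeps this search loop unchanged.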
18.
端金鸣, 《中国图象图形学报》 (Journal of Image and Graphics), 2013, 18(7)
When applied to color image denoising, local-operator TV models in their various forms often suffer from edge blurring, texture blurring, staircase artifacts, and mosaic artifacts. Using the concept of non-local operators, this paper generalizes the traditional Tikhonov, TV, MTV, and CTV models to the NL-CT, NL-LTV, NL-MTV, and NL-CTV models, and designs corresponding fast Split Bregman algorithms by introducing auxiliary variables and Bregman iteration parameters. Experiments show that the proposed non-local TV models preserve texture, edges, and smoothness well, with little difference in quality among them, but with large differences in computational efficiency between models.
19.
Objective: Existing copy-move forgery detection algorithms can only identify pairs of similar regions in an image; they cannot accurately localize which region was tampered. An automatic detection and localization method based on estimating the double-compression offset of JPEG (Joint Photographic Experts Group) images is proposed. Method: First, the scale-invariant feature transform (SIFT) is used to extract feature points and their descriptors, which are coarsely matched with a nearest-neighbour algorithm; the matches are then refined using the HSI (hue, saturation, intensity) colour features of the feature points to remove mismatches caused by inconsistent colour information. Next, the random sample consensus (RANSAC) algorithm estimates the affine transform between matched pairs and removes outliers, and a region-correlation map is built to determine the complete copy-paste regions. Finally, the source region and the tampered region are distinguished by the JPEG double-compression offsets estimated separately for each region. Results: Compared with classic SIFT- and SURF (speeded-up robust features)-based detectors, the proposed method achieves a high detection rate while effectively lowering the false-alarm rate. When the quality factor of the second JPEG compression is higher than that of the first, the detection rate for tampered regions exceeds 96%. Conclusion: The method effectively localizes copy-move tampered regions in JPEG images and is robust to geometric transforms of the copied region and to common post-processing operations.
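The initial matching step (before the HSI colour check and RANSAC) can be sketched as a nearest-neighbour search with Lowe's ratio test over the image's own descriptors, which is how copy-move detectors find duplicated regions within a single image:

```python
import numpy as np

def ratio_match(desc, ratio=0.6):
    """Match each descriptor (row of `desc`) to its nearest
    neighbour among the others, accepting the pair only when the
    nearest squared distance is well below the second nearest
    (Lowe's ratio test).  Returns a list of (i, j) index pairs."""
    d2 = ((desc[:, None, :] - desc[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)        # never match a point to itself
    matches = []
    for i in range(len(desc)):
        order = np.argsort(d2[i])
        n1, n2 = order[0], order[1]
        if d2[i, n1] < (ratio ** 2) * d2[i, n2]:
            matches.append((i, int(n1)))
    return matches
```

Copy-move pairs survive the test because a cloned keypoint has one near-identical twin and no other close competitor; unrelated keypoints are discarded.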
20.
To address the limited ability of typical algorithms to describe DCT-domain correlations, which leads to unsatisfactory overall steganalysis performance, a universal steganalysis algorithm for JPEG color images is proposed that organically combines the MCM (Markov chain model) with the HVS (human visual system). An 8-neighbourhood MCM is designed to describe the correlations among DCT coefficients comprehensively. Guided by the HVS, Markov transition matrices are extracted from the Y component in the YCbCr color space, and the main-diagonal neighbourhood similarity entropy of the Markov transition matrices is extracted from the corresponding R, G, and B channels. These statistics are combined ("bound") appropriately and optimized with PCA to build an efficient classification feature vector. Experimental results show that the algorithm achieves high reliability and detection accuracy against Jsteg, F5, Outguess, MB1, and MB2 embedding.
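One ingredient, a Markov transition matrix of clipped DCT-coefficient differences (shown here for the horizontal direction only; the algorithm aggregates an 8-neighbourhood of such directions), can be sketched as:

```python
import numpy as np

def markov_features(coeffs, T=4):
    """Horizontal Markov transition matrix of DCT-coefficient
    differences clipped to [-T, T]:
        M[m, n] ~ Pr(d[i, j+1] = n - T | d[i, j] = m - T),
    where d are the clipped horizontal differences.  Rows with no
    observations are left as zeros."""
    d = np.diff(coeffs, axis=1)
    d = np.clip(d, -T, T).astype(int)
    a = d[:, :-1].ravel() + T          # current state index
    b = d[:, 1:].ravel() + T           # next state index
    size = 2 * T + 1
    M = np.zeros((size, size))
    for x, y in zip(a, b):
        M[x, y] += 1
    rows = M.sum(axis=1, keepdims=True)
    return M / np.maximum(rows, 1.0)   # row-normalize to probabilities
```

Embedding perturbs these inter-coefficient dependencies, so the transition probabilities (and entropies derived from them) shift measurably between cover and stego images.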