Similar Articles
20 similar articles found (search time: 15 ms)
1.
Optimized compensation of color difference in color liquid crystal display devices
To reduce the color difference of color liquid crystal display (LCD) devices and improve picture quality, the mechanism that produces the color difference under the (128, 192, 192) gray-level pattern was analyzed. The influence of the transmittance of the RGB sub-pixels on the color difference under this pattern was simulated. By separately adjusting the thicknesses of the RGB color-filter films, the cell gaps under the three sub-pixels were adjusted relative to one another, the transmittance of the three sub-pixels was tuned, the RGB gamma curves were improved, and the color difference was thereby compensated. Experimental results show that increasing the cell gap of the G sub-pixel while decreasing that of the B sub-pixel clearly compensates the color difference. Moreover, with the RGB film thicknesses held constant, the color difference is proportional to the liquid-crystal layer thickness: each 1% reduction in layer thickness lowers the color difference by about 0.2. When the RGB cell gaps were adjusted from (3.47, 3.38, 3.40) μm to (3.50, 3.45, 3.36) μm and (3.50, 3.44, 3.28) μm, the mean color difference of the samples under the (128, 192, 192) pattern dropped from 13.8 to 12.8 and 12.0, respectively. Thus, separately adjusting the RGB color-filter thicknesses tunes the relative cell gaps and liquid-crystal amounts of the three sub-pixels and can compensate the color difference to a certain extent.

2.
Computer-generated (CG) images have gradually spread across the Internet, making them difficult to distinguish from natural images (NIs) captured by authentic imaging devices. Although some discriminators can deal with NIs in JPEG format, the classification between uncompressed NIs (possibly produced at any stage of the imaging procedure before compression) and CG images remains an open problem. This paper therefore aims to establish multiple discriminators that classify NIs against CG images. We first describe the main imaging procedure and its intrinsic property, which characterizes the discriminative features for classification. Then the residual noise, which represents this intrinsic characteristic, is extracted. Its statistical distribution allows us to establish multiple discriminators based on the generalized likelihood ratio test (GLRT) within the framework of hypothesis testing theory. Extensive experiments verify that the proposed discriminators outperform many prior arts. Furthermore, the robustness of the discriminators is validated against several post-processing attacks.
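A GLRT-style discriminator on residual noise can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the median-filter denoiser and the two Gaussian residual models (with parameters `mu0/sigma0` for NIs and `mu1/sigma1` for CG images, assumed estimated from training data) are stand-ins.

```python
import numpy as np
from scipy.ndimage import median_filter

def residual_noise(img):
    """Residual between an image and a denoised version of it
    (a stand-in for the paper's noise-extraction step)."""
    return img - median_filter(img, size=3)

def glrt_statistic(residual, mu0, sigma0, mu1, sigma1):
    """Log-likelihood ratio of the residual under two Gaussian models:
    H0 (natural image) with (mu0, sigma0) vs. H1 (CG) with (mu1, sigma1).
    A value above a chosen threshold favors the CG hypothesis."""
    r = residual.ravel()
    ll0 = -0.5 * np.sum(((r - mu0) / sigma0) ** 2) - r.size * np.log(sigma0)
    ll1 = -0.5 * np.sum(((r - mu1) / sigma1) ** 2) - r.size * np.log(sigma1)
    return ll1 - ll0
```

In practice the model parameters would be fitted per-class on a training set, and the decision threshold tuned for a target false-alarm rate.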

3.
Images are among the most widely exchanged information carriers on the Internet, which raises the problem of privacy leakage. Private images are vulnerable to interception and alteration by an attacker, violating the owner's privacy. When an image is tampered with maliciously, geometric transformations such as scaling are often applied to hide the traces of tampering, which introduces resampling traces. Over the last two decades, spectral analysis has been the most commonly used approach to resampling detection. However, because JPEG compression severely interferes with the statistical characteristics of resampled images and introduces blocking artifacts, most classical spectrum-based methods are not robust in the presence of JPEG compression. In this paper, we propose a method to estimate the upscaling factors of upscaled images in the presence of JPEG compression. A comprehensive analysis of the spectrum of scaled images is given. We find that both the locations of the spectral peaks in the spectrum of an upscaled pre-JPEG image and the differences between them are related to the upscaling factor. Hence, we adopt the difference histogram of spectral peaks to screen candidate upscaling factors and obtain the final estimate through an additional verification step based on the peak locations. Experimental results demonstrate the effectiveness of the proposed method.
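The peak-location step can be sketched in a few lines. This is a simplified stand-in for the paper's difference-histogram screening: it extracts dominant peaks from the spectrum of the averaged row-wise second difference, where periodic resampling correlations of an image upscaled by factor f produce peaks at frequencies related to 1/f.

```python
import numpy as np

def peak_frequencies(img, num_peaks=4):
    """Return the normalized frequencies of the strongest spectral peaks
    of the averaged row-wise second difference (DC region suppressed).
    Candidate upscaling factors would then be screened from the spacing
    between these peaks."""
    d2 = np.diff(img.astype(float), n=2, axis=1)
    spec = np.abs(np.fft.fft(d2.mean(axis=0)))
    half = spec[: len(spec) // 2].copy()
    half[:2] = 0  # suppress the DC region
    idx = np.sort(np.argsort(half)[-num_peaks:])
    return idx / len(spec)
```

A real detector would additionally verify each candidate factor against the predicted peak positions, as the abstract describes.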

4.
Image forensics is a form of image analysis that determines the condition of an image in the complete absence of any digital watermark or signature. It can be used to authenticate digital images and identify their sources. Meanwhile, exemplar-based inpainting provides a way to remove objects from an image and play visual tricks. In this paper, as a first attempt, a method based on a zero-connectivity feature and fuzzy membership is proposed to discriminate natural images from inpainted images. First, zero-connectivity labeling is applied to block pairs to yield a matching-degree feature for all blocks in the suspicious region; then the fuzzy memberships are computed and the tampered regions are identified by a cut set. Experimental results demonstrate the effectiveness of our method in detecting inpainted images.

5.
As society becomes increasingly information-driven, panoramic images have drawn significant interest from viewers and researchers because they provide a very wide field of view (FoV). Since panoramic images are usually obtained by capturing images with overlapping regions and then stitching them together, image stitching plays an important role in generating panoramic images. To effectively evaluate the quality of stitched images, a novel quality assessment method based on bi-directional matching is proposed. Specifically, dense correspondences between the test and benchmark stitched images are first established by bi-directional SIFT-flow matching. Then color-aware, geometric-aware, and structure-aware features are extracted and fused via support vector regression (SVR) to obtain the final quality score. Experiments on our newly constructed database and on the ISIQA database demonstrate that the proposed method achieves competitive performance compared with conventional blind quality metrics and with quality metrics specially designed for stitched images.

6.
In multimedia communication systems, digital images may contain various kinds of visual content, among which natural scene images (NSIs) and screen content images (SCIs) are two important and common types. Existing full-reference image quality assessment (IQA) metrics are designed for only one type of image and cannot precisely assess the visual quality of the other. It has remained unclear which differing characteristics of NSIs and SCIs cause this failure. Inspired by psychological studies, we show that it stems from the different structural scale levels of NSIs and SCIs. Given this observation, this paper introduces the gradient degradation of Gaussians (GDoG) to analyze an image's structural scale level and proposes a fast unified IQA index for both NSIs and SCIs by incorporating an adaptive weighting strategy over two scales. Experimental results on several databases verify the effectiveness and efficiency of the proposed unified IQA index for both types of images, and also demonstrate that the GDoG-based adaptive weighting strategy can improve existing models on cross-content-type images.
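One plausible reading of a gradient-degradation measure is the relative drop in gradient energy between two Gaussian blur scales: SCIs, whose structures live at fine scales, degrade faster than NSIs. The sketch below follows that reading; the paper's exact GDoG definition may differ, and the scale parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gdog_level(img, sigma_small=1.0, sigma_large=2.0):
    """Relative loss of mean gradient energy between two Gaussian
    scales; larger values indicate finer (SCI-like) structures.
    A hypothetical proxy for the paper's structural scale level."""
    def grad_energy(x):
        gy, gx = np.gradient(x)
        return np.mean(np.hypot(gx, gy))
    e1 = grad_energy(gaussian_filter(img.astype(float), sigma_small))
    e2 = grad_energy(gaussian_filter(img.astype(float), sigma_large))
    return (e1 - e2) / (e1 + 1e-12)
```

Such a scalar could then drive the adaptive weighting between the two analysis scales mentioned in the abstract.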

7.
The development of powerful and low-cost hardware devices, allied with great advances in content editing and authoring tools, has brought computer-generated (CG) images to a degree of unrivaled realism. Differentiating a photo-realistic computer-generated image from a real photograph (PG) can be difficult for the naked eye. Digital forensics techniques can play a significant role in this task, and important research has been conducted by the scientific community in this regard. Most approaches focus on single image features aimed at detecting differences between real and computer-generated images. However, given current technology, no universal image characterization technique completely solves this problem. In our work, we (1) present a complete study of several CG versus PG approaches; (2) create a large and heterogeneous dataset to be used as a training and validation database; (3) implement representative methods from the literature; and (4) devise automatic ways of combining the best approaches. We compared the implemented methods in the same validation environment, showing their pros and cons under a common benchmark protocol. We collected approximately 4850 photographs and 4850 CG images with large diversity of content and quality, and implemented a total of 13 methods. Results show that this set of methods can achieve up to 93% accuracy without any form of machine-learning fusion. The same methods, when combined through the implemented fusion schemes, reach an accuracy of 97%, representing a 57% reduction of the classification error over the best individual result.

8.
He Yuanxing, Mu Baili, Li Jian, Li Wei. 《激光技术》 (Laser Technology), 2014, 38(6): 747-752
To investigate the relationship between the quality of an aperture-truncated Gaussian beam and wavefront aberration, the Gaussian-beam quality value is adopted as the evaluation parameter for the truncated beam. The influence of the optical system's wavefront aberration on the quality of the truncated Gaussian beam is analyzed by numerical simulation, and a fitted relation between the beam-quality value and the wavefront aberration is given. The relationship between Gaussian-beam quality and Kolmogorov atmospheric turbulence strength is also discussed, and a fitting formula for the two is provided. The results show that the formula agrees well with numerical simulation over a wide range of turbulence strengths, which further verifies the correctness of the fitted relation between the beam-quality value and the wavefront aberration.
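The fitting step described above can be sketched with `numpy.polyfit`. The aberration–quality pairs below are invented for illustration only; they are not the paper's data, and the quadratic model is an assumption.

```python
import numpy as np

# Hypothetical simulation output: RMS wavefront aberration (in waves)
# versus the resulting beam-quality value for a truncated Gaussian beam.
aberration = np.array([0.0, 0.05, 0.10, 0.15, 0.20, 0.25])
quality = np.array([1.00, 1.03, 1.12, 1.27, 1.48, 1.75])

# Fit a low-order polynomial, mirroring the paper's step of deriving a
# fitted relation between beam quality and wavefront aberration.
coeffs = np.polyfit(aberration, quality, deg=2)
predict = np.poly1d(coeffs)
```

The same procedure applies to the turbulence-strength relation: simulate beam quality over a grid of turbulence strengths, then fit and validate the formula against held-out simulation points.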

9.
A new method for text detection and recognition in natural scene images is presented in this paper. In the detection process, color, texture, and OCR statistical features are combined in a coarse-to-fine framework to discriminate text from non-text patterns. The color feature is used to group text pixels into candidate text lines; the texture feature captures the "dense intensity variance" property of text patterns; and statistical features from OCR (Optical Character Reader) results are employed to further reduce false alarms empirically. After detection, a restoration process based on plane-to-plane homography is applied: when an affine transformation is detected on a located text region, the background plane of the text is rectified independently of the camera parameters. Experimental results on a large dataset demonstrate that the proposed method is effective and practical.

10.
During scanning and transmission, images can be corrupted by salt-and-pepper noise, which degrades the quality of subsequent graphic vectorization or text recognition. In this paper, we present a new algorithm for salt-and-pepper noise suppression in binary images. The algorithm consists of computing block prior probabilities from noise-free training images, estimating the noise level, and computing the maximum a posteriori (MAP) probability estimate of each image block. Our experiments show that the proposed method performs significantly better than state-of-the-art techniques.
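The per-block MAP step can be sketched as follows, under the simplifying assumption that salt-and-pepper noise flips each pixel independently with probability `p`; the candidate-block dictionary `priors` (keyed by the raw bytes of each block) stands in for the block priors learned from noise-free training images.

```python
import numpy as np

def map_denoise_block(obs, priors, p):
    """MAP estimate of a binary block: pick the candidate b maximizing
    log prior(b) + log P(obs | b), where the likelihood counts pixel
    flips under an i.i.d. flip probability p."""
    best, best_post = None, -np.inf
    n = obs.size
    for b_bytes, prior in priors.items():
        b = np.frombuffer(b_bytes, dtype=np.uint8).reshape(obs.shape)
        flips = np.count_nonzero(b != obs)
        loglik = flips * np.log(p) + (n - flips) * np.log(1 - p)
        post = np.log(prior) + loglik
        if post > best_post:
            best, best_post = b, post
    return best
```

In a full implementation, `p` would come from the noise-level estimation stage and the prior table from block statistics over the training set.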

11.
Due to the wide diffusion of the JPEG coding standard, the image forensic community has devoted significant attention over the years to the development of double JPEG (DJPEG) compression detectors. The ability to detect whether an image has been compressed twice provides paramount information toward assessing image authenticity. Given the momentum recently gained by convolutional neural networks (CNNs) in many computer vision tasks, in this paper we propose to use CNNs for aligned and non-aligned double JPEG compression detection. In particular, we explore the capability of CNNs to capture DJPEG artifacts directly from images. Results show that the proposed CNN-based detectors achieve good performance even on small images (i.e., 64 × 64), outperforming state-of-the-art solutions, especially in the non-aligned case. Good results are also achieved in the commonly recognized challenging case in which the first quality factor is larger than the second.

12.
The availability of powerful image-editing software and advances in digital cameras have given rise to large numbers of manipulated images bearing no visible traces of tampering, generating great demand for automatic forgery detection algorithms that determine authenticity. When altering an image, for example by copy–paste or splicing to conceal tampering, it is often necessary to resize the pasted portion. This resampling operation is highly likely to disturb the underlying statistics of the pasted portion, which can be used to detect the forgery. In this paper, an algorithm is presented that blindly detects a global rescaling operation and estimates the rescaling factor based on the autocovariance sequence of the zero-crossings of the second difference of the tampered image. Experimental results on the UCID and USC-SIPI databases show the validity of the algorithm under different interpolation schemes. The technique is robust and successfully detects rescaling in images subjected to attacks such as JPEG compression and arbitrary cropping. As expected, some degradation in detection accuracy is observed as the JPEG quality factor decreases.
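The core signal of this detector can be sketched directly from its description: take the second difference of the image, mark its zero-crossings, and compute the autocovariance of the resulting sequence, in which a resampled image leaves periodic peaks. This is a simplified reading of the abstract, not the paper's full pipeline.

```python
import numpy as np

def zero_crossing_autocov(img):
    """Normalized autocovariance of the zero-crossing indicator of the
    row-wise second difference. Periodic peaks in this sequence hint at
    a global rescaling; their spacing relates to the rescaling factor."""
    d2 = np.diff(img.astype(float), n=2, axis=1)
    zc = (np.signbit(d2[:, :-1]) != np.signbit(d2[:, 1:])).mean(axis=0)
    zc = zc - zc.mean()
    ac = np.correlate(zc, zc, mode="full")[len(zc) - 1:]
    return ac / ac[0]
```

Estimating the factor would then amount to locating the dominant periodicity in `ac`, e.g. via its strongest non-zero-lag peak.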

13.
14.
An improved randomized Hough transform (MRHT, modified randomized Hough transform) algorithm for detecting circles and ellipses is proposed, tailored to the characteristics of microscopic images of pleural-effusion cells. The algorithm has two steps: first, the geometric properties of the ellipse are used to locate candidate ellipse centers; then, within the restricted region, repeated random sampling of three points is used to compute the three remaining ellipse parameters besides the center coordinates. The study shows that the algorithm can detect multiple circles and ellipses simultaneously and can extract circular and elliptical cells fairly accurately from the complex background of pleural-effusion cell microscopy images. Experimental results show that the algorithm achieves high detection efficiency, high detection accuracy, and strong robustness.

15.
To detect man-made targets in high-resolution optical remote-sensing images, the traditional phase-grouping line-segment extraction algorithm and the k-means clustering algorithm are improved, and a detection method combining k-means clustering with geometric features is proposed. Based on the different geometric appearance of natural objects and man-made targets, the improved phase-grouping algorithm first rapidly extracts line segments from the image. Then, taking the midpoints of the extracted line segments as the processing objects, k-means clustering is applied to group the segments by density. Finally, man-made targets are identified according to the number of line segments in each cluster and the geometric primitives they form. Experimental results show that the algorithm achieves a detection accuracy above 90% for various man-made targets in remote-sensing images, such as buildings, vehicles, ships, and runways, with high detection speed: for a 512 pixel × 512 pixel image, the whole detection process takes less than 100 ms.
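The clustering stage can be sketched with plain Lloyd iterations over segment midpoints; clusters containing many segments then become candidate man-made targets. This is a generic k-means sketch, not the paper's improved variant.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's k-means over line-segment midpoints (N x 2 array).
    Returns a cluster label per point and the cluster centers."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each midpoint to its nearest center
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned midpoints
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```

The final decision step would count segments per cluster and test the geometric primitives they form, as the abstract describes.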

16.
Anomaly detection in hyperspectral images using RX and its variants
To improve the stability of the kernel RX algorithm in hyperspectral anomaly detection, the kernel matrix is regularized, yielding a regularized kernel RX algorithm (rkRX). The normalized regularized RX algorithm and the regularized kernel RX algorithm are further fused into a merged RX algorithm (mRX), which considers the detection results in both the original linear space and the high-dimensional feature space, making anomaly detection more stable. In experiments on simulated and real hyperspectral images, the two proposed algorithms are compared with the original RX, regularized RX (rRX), and kernel RX (kRX) algorithms. A dual-window technique, kernel principal component analysis (KPCA) for feature extraction, and higher-order-statistics-based feature selection are used as preprocessing to reduce data dimensionality, and the five algorithms are also compared on the unreduced data. Finally, ROC curves are used to evaluate detection performance; the results show that the two proposed algorithms improve detection and exhibit a degree of robustness.
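The linear-space analogue of the regularization idea is easy to sketch: the RX score is the Mahalanobis distance of each pixel spectrum to the background mean, and a ridge term `lam * I` stabilizes the covariance inverse, just as the paper regularizes the kernel matrix. This is the classic RX formulation, not the paper's kernel variant.

```python
import numpy as np

def regularized_rx(pixels, lam=1e-3):
    """Regularized RX anomaly scores for an (N, bands) array of pixel
    spectra: (x - mu)^T (C + lam*I)^{-1} (x - mu) per pixel."""
    mu = pixels.mean(axis=0)
    x = pixels - mu
    cov = x.T @ x / len(pixels) + lam * np.eye(pixels.shape[1])
    cov_inv = np.linalg.inv(cov)
    return np.einsum("ij,jk,ik->i", x, cov_inv, x)
```

High scores flag spectra that deviate strongly from the background model; a threshold (or ROC sweep, as in the paper's evaluation) turns the scores into detections.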

17.
Shadow detection is significant for scene understanding. As a common scenario, soft shadows have more ambiguous boundaries than hard shadows. However, they are rarely present in the available benchmarks, since annotating them is time-consuming and requires expert help. This paper discusses how to transfer shadow detection capability from available shadow data to soft shadow data and proposes a novel shadow detection framework (MUSD) based on multi-scale feature fusion and unsupervised domain adaptation. First, we set the existing labeled shadow dataset (i.e., SBU) as the source domain and collect an unlabeled soft shadow dataset (SSD) as the target domain, formulating an unsupervised domain adaptation problem. Next, we design an efficient shadow detection network based on a double attention module and multi-scale feature fusion. Then, we use a global–local feature alignment strategy to align the task-related feature distributions between the source and target domains. This yields a robust model and achieves domain adaptation effectively. Extensive experimental results show that our method detects soft shadows more accurately than existing state-of-the-art methods.

18.
To address the complex backgrounds, diverse target-region forms, monotonous hues, and spatial concentration of natural-scene label images, a label detection method based on stroke-width features and a pruning algorithm is proposed. According to the characteristics of the images, thresholding and heuristic rules are first used to preliminarily screen character candidate regions. Then a purpose-built stroke-width feature extraction algorithm yields a text-similarity score for each candidate region, and a penalty-function-based pruning algorithm filters out background regions, producing the further-segmented label detection regions. After morphological processing and fine segmentation by region area, the final target detection image is generated. Results from multiple comparative experiments show that the algorithm achieves good detection performance and excellent generality.
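A crude stroke-width feature for a binary candidate region can be approximated with the distance transform: interior distances peak at roughly half the stroke width, and text strokes tend to have low width variance. This is a rough stand-in for the paper's stroke-width feature extraction, not its actual algorithm.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def stroke_width_stats(binary_region):
    """Approximate stroke widths of a binary region via the Euclidean
    distance transform (distance of each foreground pixel to the nearest
    background pixel, doubled). Returns (mean width, width std); a low
    std relative to the mean suggests text-like strokes."""
    dist = distance_transform_edt(binary_region)
    widths = 2.0 * dist[dist > 0]
    return widths.mean(), widths.std()
```

A text-similarity score could then combine these statistics with the heuristics from the screening stage before the pruning step.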

19.
Joint fingerprinting and decryption (JFD) is useful for securing media transmission and distribution in a multicasting environment. Common drawbacks of existing JFD methods are that the transmitted data may leak the content and that a subscriber cannot determine whether a received image has been modified, so tampering attacks can be mounted successfully. Here we focus on the security and privacy of image multicasting and introduce a new framework called JFDA (joint privacy-preserving fingerprinting, decryption, and authentication). JFDA has several main characteristics: (1) it performs fingerprinting in the encryption domain to preserve privacy and prevent encrypted data from being tampered with, without additional hash codes/digests; (2) it prevents tampering attacks on the decrypted data, ensuring the fidelity of the fingerprinted data; (3) it makes any user subscribing to a visual medium an examiner who can authenticate the same visual medium over the Internet. The effectiveness of the proposed method is confirmed by experimental results.

20.

Deep-learning-based ship detection in SAR images demands images in large quantity and of high quality, while collecting large volumes of ship SAR images and producing the corresponding labels consumes considerable manpower, materials, and funds. Building on the existing SAR ship detection dataset (SSDD), and addressing the insufficient use of the dataset by current detection algorithms, this paper proposes a SAR image ship detection method based on generative adversarial networks (GAN) and online hard example mining (OHEM). A spatial transformer network is applied to the feature maps to generate feature maps of ship samples at different scales and rotation angles, improving the detector's adaptability to ships of different sizes and orientations. OHEM is used to mine and fully exploit hard examples during back-propagation, removing the constraint on the positive/negative sample ratio in the detection algorithm and improving sample utilization. Experiments on the SSDD dataset show that the two improvements raise detection performance by 1.3% and 1.0% respectively, and by 2.1% when combined. Both methods are independent of the specific detection algorithm, add steps only during training, and add no computation at test time, so they are highly general and practical.
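The OHEM selection step is framework-agnostic and can be sketched in a few lines: compute per-sample losses in the forward pass, then back-propagate only through the hardest fraction. The `keep_ratio` parameter is illustrative, not a value from the paper.

```python
import numpy as np

def ohem_select(losses, keep_ratio=0.25):
    """Online hard example mining: return a boolean mask keeping only
    the top `keep_ratio` fraction of samples by loss. Gradients would
    then be computed only over the masked (hard) samples."""
    k = max(1, int(len(losses) * keep_ratio))
    hard_idx = np.argsort(losses)[-k:]
    mask = np.zeros(len(losses), dtype=bool)
    mask[hard_idx] = True
    return mask
```

Because selection happens only during training, inference cost is unchanged, which matches the abstract's point that the methods add no computation at test time.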

