Similar Literature
20 similar documents found.
1.
Compression algorithms are widely used in medical imaging systems for efficient image storage, transmission, and display. For lossy compression algorithms to be accepted in the clinical environment, two factors are important: the assessment of 'visually lossless' compression thresholds, and the development of assessment methods that require less data and time than observer-performance studies. In this study a set of quantitative measurements related to medical image quality parameters is proposed for compression assessment. Measurements were carried out using region-of-interest (ROI) operations on computer-generated test images with characteristics similar to radiographic images. As a case study, the assessment of the lossy Joint Photographic Experts Group (JPEG) algorithm, available in a telematics application for healthcare, is presented. A compression ratio of 15 was found to be the visually lossless threshold for the lossy JPEG algorithm, in agreement with previous observer-performance studies. Up to this ratio, low-contrast discrimination is not affected, the image noise level is decreased, high-contrast line-pair amplitude is decreased by less than 3%, and input/output gray-level differences are minor (less than 1%). This type of assessment provides information about the type of loss and offers cost and time benefits, along with the advantage that test images can be adapted to the requirements of a given imaging modality and clinical study.
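
Measurements of this kind reduce to simple ROI statistics. The sketch below is an illustration only, not the paper's exact measurement protocol; the metric names and the epsilon guards are assumptions:

```python
import numpy as np

def roi_metrics(img_in, img_out, roi):
    """ROI statistics in the spirit of the study: noise level (standard
    deviation), mean gray-level shift, and loss of peak-to-peak amplitude
    of a high-contrast pattern inside the region of interest."""
    r0, r1, c0, c1 = roi
    a = img_in[r0:r1, c0:c1].astype(np.float64)
    b = img_out[r0:r1, c0:c1].astype(np.float64)
    return {
        "noise_in": a.std(),
        "noise_out": b.std(),
        "mean_shift_pct": 100.0 * abs(b.mean() - a.mean()) / max(a.mean(), 1e-9),
        "amplitude_loss_pct": 100.0 * (np.ptp(a) - np.ptp(b)) / max(np.ptp(a), 1e-9),
    }
```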

2.
High-dynamic-range still-image encoding in JPEG 2000
The raw size of a high-dynamic-range (HDR) image causes problems in storage and transmission, with many bytes wasted on data redundancy and perceptually unimportant information. To address this, researchers have proposed preliminary encodings such as RGBE/XYZE, OpenEXR, and LogLuv. HDR images can have a dynamic range of more than four orders of magnitude, while conventional 8-bit images retain only two; this distinction makes most existing image compressors difficult to apply. JPEG 2000 supports up to 16-bit integer data, so it can already compress most HDR images. In this article, we propose a JPEG 2000-based lossy compression scheme for HDR images of all dynamic ranges and show how to fit HDR encoding into a JPEG 2000 encoder. To minimize error in the logarithm domain, we map the logarithm of each pixel value to an integer and send the result to a JPEG 2000 encoder. Our approach is essentially a wavelet-based HDR still-image encoding method.
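
The core mapping step is simple to state. Below is a minimal numpy sketch of the idea as described — uniform quantization of log-domain pixel values into 16-bit integers for a JPEG 2000 encoder — where the epsilon floor and the side-information tuple are my assumptions:

```python
import numpy as np

def log_map_to_uint16(hdr, eps=1e-6):
    """Quantize HDR values uniformly in the log domain into 16-bit integers,
    so that downstream coding error is spread evenly in log space.
    The (lo, hi) range must be kept as side information for decoding."""
    logv = np.log2(np.maximum(hdr, eps))
    lo, hi = float(logv.min()), float(logv.max())
    mapped = np.round((logv - lo) / (hi - lo) * 65535.0).astype(np.uint16)
    return mapped, (lo, hi)   # 'mapped' goes to any 16-bit JPEG 2000 encoder

def inverse_log_map(mapped, lo, hi):
    return np.exp2(mapped.astype(np.float64) / 65535.0 * (hi - lo) + lo)
```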

3.
With the development of technology and the rise of the Internet in recent years, image data have come to be used widely on the network. Image compression not only saves the storage space that image data occupy but also speeds up transmission, so compression techniques are now widely applied in medicine, mobile phones, data transmission, multimedia, and the Internet. This work focuses on lossless image compression: the original image is first converted to a 256-color GIF, and an index matrix is then built whose elements correspond to the RGB information of the primary colors. An index-matrix sorting method, together with a codebook, is combined with different compression algorithms (LZW, CALIC, JPEG2000, JPEG-LS) to compare compression performance.
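
For reference, the 256-color conversion and index matrix described here are one line each with Pillow; a hedged sketch, with placeholder file names:

```python
import numpy as np
from PIL import Image

img = Image.open("input.png").convert("RGB")         # placeholder file name
gif = img.quantize(colors=256)                       # 256-colour, GIF-style palette image
index_matrix = np.array(gif)                         # per-pixel palette indices
palette = np.array(gif.getpalette()).reshape(-1, 3)  # index -> RGB codebook
gif.save("output.gif")
```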

4.
The statistical correlations that lossy compression and resampling jointly introduce between image pixels make resampling in lossy-compressed images hard to detect. To address this, a resampling detection algorithm originally suited to lossless images is proposed: it analyzes the frequency-domain features of the image by exploiting the periodicity of the interpolation signal and detects resampling by estimating the interpolation coefficients. Experimental results show that the algorithm is robust and widely applicable, and achieves high accuracy in detecting resampling in JPEG lossy-compressed images.
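
A minimal sketch of the underlying signal analysis: interpolation makes each pixel an almost-linear combination of its neighbors, so the residual of a simple linear predictor shows periodic peaks in its spectrum. This is a generic illustration of the technique, not the paper's specific algorithm or its interpolation-coefficient estimator:

```python
import numpy as np

def resampling_spectrum(gray):
    """Spectrum of a linear-prediction residual along rows. Interpolated
    (resampled) images exhibit periodic correlations between pixels,
    which appear as sharp peaks in this spectrum."""
    g = gray.astype(np.float64)
    resid = g[:, 1:-1] - 0.5 * (g[:, :-2] + g[:, 2:])   # e[n] = g[n] - (g[n-1]+g[n+1])/2
    spec = np.abs(np.fft.rfft(resid, axis=1)).mean(axis=0)
    return spec / spec.max()
```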

5.
An adaptive image hiding method
A digital image hiding algorithm based on the human visual system (HVS) is proposed, together with concrete embedding and extraction procedures. Simulation results show that the method increases embedding capacity, achieves good information-hiding performance, and offers a degree of robustness to JPEG compression.

6.
Digital multimedia forensics is an emerging field with important applications in law enforcement and the protection of public safety and national security. In digital imaging, JPEG is the most popular lossy compression standard and JPEG images are ubiquitous. Today's digital techniques make it easy to tamper with JPEG images without leaving any visible clues. Furthermore, since most image tampering involves JPEG double compression, accurate analysis of JPEG double compression is essential in image forensics. In this paper, to improve the detection of JPEG double compression, we transplant the neighboring joint density features originally designed for JPEG steganalysis and merge them with marginal density features in the DCT domain as the input to learning classifiers. Experimental results indicate that the proposed method improves detection performance. We also study the relationship among compression factor, image complexity, and detection accuracy, which has not been comprehensively analyzed before. The results show that a complete evaluation of the detection performance of different algorithms should include image complexity as well as the double-compression quality factor. Beyond JPEG double compression, identifying the image capture source is another interesting topic in image forensics. Mobile handsets are widely used for spontaneous photo capture because they are typically carried by their users at all times. In the imaging device market, smartphone adoption is currently exploding, and megapixel smartphones pose a threat to traditional digital cameras. While smartphone images are widely disseminated, image manipulation is also easily performed with various photo-editing tools. Accordingly, the authentication of smartphone images and the identification of post-capture manipulation are of significant interest in digital forensics. Following the success of our previous work on JPEG double-compression detection, we conducted a study to identify smartphone source and post-capture manipulation by utilizing marginal density and neighboring joint density features together. Experimental results show that our method is highly promising for identifying both smartphone sources and manipulations. Finally, our study indicates that applying unsupervised clustering and supervised classification together improves the identification of smartphone sources and manipulations, and thus provides a means to address the complexity of intentional post-capture manipulation of smartphone images.
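
As a rough illustration of the feature family involved, the sketch below computes blockwise DCT coefficients and a clipped joint histogram of horizontally adjacent coefficients; the clipping threshold T and the restriction to horizontal neighbors are simplifying assumptions, not the authors' exact feature definition:

```python
import numpy as np
from scipy.fftpack import dct

def block_dct(gray, B=8):
    h, w = (d - d % B for d in gray.shape)
    blocks = gray[:h, :w].astype(np.float64).reshape(h // B, B, w // B, B).swapaxes(1, 2)
    return dct(dct(blocks, axis=-1, norm="ortho"), axis=-2, norm="ortho")

def neighboring_joint_density(coeffs, T=4):
    """Joint histogram of clipped |coefficients| at horizontally adjacent
    DCT positions -- a simplified stand-in for the joint density features."""
    a = np.minimum(np.abs(coeffs), T).astype(int)
    left, right = a[..., :, :-1].ravel(), a[..., :, 1:].ravel()
    H = np.zeros((T + 1, T + 1))
    np.add.at(H, (left, right), 1)
    return H / H.sum()
```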

7.
A control-logic scheme for video image compression based on DSP/FPGA co-design is proposed: an FPGA module performs image acquisition, while a DSP module performs compression coding. To address the trade-off between search accuracy and computational complexity in block-matching algorithms, a Block Match Quantum-behaved Particle Swarm Optimization algorithm (BMQPSO) is introduced. In the real-time compression pipeline, the macroblocks of each frame of the original image sequence are first searched with particles, and the compression coding is then optimized according to convergence requirements. Experimental results show that the algorithm compresses better than classical search algorithms.

8.
To suit image storage and transmission, images must be compressed to reduce their data volume, and many compression techniques and standards have emerged as a result. Building on the JPEG compression standard, this work partitions the image into regions according to its data structure, thereby achieving lossless compression under the JPEG compression mode. Experimental results show that, at quality equal to standard JPEG, the compressed images produced by this algorithm are smaller than those produced by JPEG compression.

9.
Authentication of image data is a challenging task. Unlike data authentication systems that detect a single bit change in the data, image authentication systems must remain tolerant to changes resulting from acceptable image processing or compression algorithms while detecting malicious tampering with the image. Tolerance to the changes introduced by lossy compression is particularly important because in the majority of cases images are stored and transmitted in compressed form, so verification should succeed if the compression is within the allowable range. In this paper we consider an image authentication system that generates an authentication tag that can be appended to an image to allow a verifier to check the image's authenticity. We propose a secure, flexible, and efficient image authentication algorithm that is tolerant to image degradation due to JPEG lossy compression within designed levels. (JPEG is the most widely used image compression system and is the de facto industry standard.) By secure we mean that the cost of the best known attack against the system is high; by flexible we mean that the level of protection can be adjusted, so that higher security can be obtained at the price of a longer authentication tag; and by efficient we mean that the computation can be performed largely as part of JPEG compression, allowing the generation of the authentication tag to be efficiently integrated into the compression system. The authentication tag consists of a number of feature codes that can be computed in parallel, so computing the tag is effectively equivalent to computing a single feature code. We prove the soundness of the algorithm and show the security of the system. Finally, we give the results of our experiments.
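
A toy version of the tag idea (emphatically not the paper's construction — the feature code, quantization step, and MAC choice here are all my assumptions): quantize low-frequency block-DCT coefficients coarsely enough to survive the designed compression level, then MAC the quantized values:

```python
import hashlib
import hmac
import numpy as np
from scipy.fftpack import dct

def auth_tag(gray, key, q=16.0, B=8):
    """Toy tag: coarsely quantize the low-frequency block-DCT coefficients
    (step q sets the designed tolerance to compression), then MAC them.
    The verifier recomputes the tag from the received image and compares."""
    h, w = (d - d % B for d in gray.shape)
    blocks = gray[:h, :w].astype(np.float64).reshape(h // B, B, w // B, B).swapaxes(1, 2)
    C = dct(dct(blocks, axis=-1, norm="ortho"), axis=-2, norm="ortho")
    feat = np.round(C[..., :2, :2] / q).astype(np.int16)   # low-frequency feature code
    return hmac.new(key, feat.tobytes(), hashlib.sha256).digest()
```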

10.
Research progress in quality evaluation techniques for hyperspectral image compression
As is well known, the sheer volume of hyperspectral image data severely limits its applications. In many situations, especially satellite data links where bandwidth and on-board storage are limited, real-time transmission is impossible, so lossy compression must be used to reduce the data volume. Because the information loss introduced by lossy compression affects different downstream applications of hyperspectral data differently, quality evaluation techniques for compressed images have received wide attention. To give readers an overview of this field, hyperspectral image compression methods are first briefly introduced; the main existing evaluation techniques — objective distortion metrics, statistical evaluation of application algorithms, and similarity-sensitivity criterion extraction — are then surveyed and their advantages and disadvantages compared. On this basis, a quality evaluation framework based on optimal performance is proposed, and future research directions are discussed.

11.
王文延  何妮  吴沛宏 《计算机仿真》2010,27(5):217-219,277
This work studies the optimization of image compression and, noting that general-purpose lossless compressors can barely compress JPEG files, proposes a lossless method that combines a shuffling algorithm with a general-purpose lossless compressor. Before compression, the JPEG file is regularized to create redundancy, and a general-purpose lossless algorithm then removes this internal redundancy. Simulation experiments verify that the algorithm can losslessly shrink JPEG image files by a further 1%–3%. The results show that the algorithm removes additional internal file redundancy and reduces file size; it is simple, has low time complexity, and is easy to implement, making it an effective fast image compression algorithm. The algorithm is patented.
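
The motivating claim — that general-purpose lossless compressors gain almost nothing on JPEG files — is easy to check; the file name below is a placeholder:

```python
import zlib

data = open("photo.jpg", "rb").read()          # any existing JPEG (placeholder name)
packed = zlib.compress(data, level=9)
print(f"DEFLATE size ratio on JPEG bytes: {len(packed) / len(data):.3f}")  # typically ~1.0
```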

12.
Detection of typical tampering operations in forged images
Geometric transformation, JPEG (Joint Photographic Experts Group) compression, and blurring are commonly used in image tampering, and their characteristic traces serve as evidence for forgery detection. First, a block metric factor is defined that captures both resampling and JPEG compression characteristics; the image under test is divided into overlapping blocks, the block metric factor is computed for each, and inconsistencies in its value are used to detect and localize tampered regions. Experimental results show that, compared with existing single-purpose methods, this approach detects tampering under more combinations of operations, localizes tampered regions effectively, and is robust to lossy JPEG compression. Second, a method for detecting blur traces is proposed: the image under test is blurred again with a chosen blur kernel, the pixel-wise difference between the images before and after re-blurring is computed, and the blurred (tampered) regions are localized by classifying the values of the difference image. Experiments show that this method blindly detects different types of blur, resists JPEG compression well, and, compared with existing block-based methods, greatly reduces computational complexity while detecting even fine blur traces.
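
The re-blur test in the second method is straightforward to sketch (the Gaussian kernel choice is an assumption; the paper classifies the difference values to localize regions, which is omitted here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reblur_difference(gray, sigma=1.5):
    """Blur the image once more and measure per-pixel change: regions that
    were already blurred change little under a second blur, while sharp
    regions change a lot, so low values flag candidate blurred regions."""
    g = gray.astype(np.float64)
    return np.abs(g - gaussian_filter(g, sigma))
```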

13.
With the rapid development of smart devices and social networks, digital images transmitted over networks have become an important new carrier for covert communication, and image steganography adapted to network channels promises to be a reliable, covert way to deliver information in open network environments. However, images transmitted through social networks such as Facebook, Twitter, WeChat, and Weibo typically undergo compression, scaling, filtering, and other processing, which poses new challenges for traditional information hiding in balancing robustness against detectability. After years of effort, researchers have proposed new robust steganographic techniques that resist various image-processing attacks and statistical detection. This paper surveys existing robust image steganography in light of the requirements of covert communication over lossy network channels. It first briefly introduces the research background and summarizes the basic concepts, techniques, and trends of image information hiding from the perspectives of both watermarking and steganography. It then organizes robust steganography research into cover image selection, robust cover construction, embedding cost measurement, embedding channel selection, source/channel coding, and applied security strategies, explaining the principles of representative methods in each category. Representative methods are then compared experimentally, and recommended robust steganography methods are given for different application scenarios. Finally, open problems for future research in robust image steganography are identified.

14.
The main problems that existing watermarking-based semi-fragile image authentication algorithms must overcome are watermark security, the conflict between robustness to JPEG compression and tamper-detection capability, and computational complexity. This paper proposes an image authentication algorithm that combines Zernike moments and watermarking. The semi-fragile nature of the Zernike moment magnitudes of the low-frequency wavelet subband is exploited to distinguish malicious attacks from incidental ones. Combined with an HVS (human visual system)-based watermark, the algorithm can determine whether an image has suffered a forged-authentication attack, improving watermark security. Using a lifting-based integer wavelet transform effectively reduces computational complexity. Experimental results show that the algorithm is robust to JPEG lossy compression at fairly low quality factors, is sensitive to malicious modifications such as cropping and replacement, and can accurately localize tampered regions.
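
A sketch of the feature extraction this describes, using pywt for the wavelet step and mahotas for the Zernike moments; the wavelet family, radius, and degree are placeholders, and the paper's lifting-based integer transform is replaced by a plain DWT here:

```python
import pywt
import mahotas

def semifragile_feature(gray, radius=64, degree=8):
    """Zernike moment magnitudes of the wavelet LL subband: stable under
    JPEG compression, disturbed by content tampering (per the paper)."""
    LL, _ = pywt.dwt2(gray.astype(float), "haar")
    return mahotas.features.zernike_moments(LL, radius, degree=degree)
```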

15.
This paper presents a new lossy image compression technique that uses singular value decomposition (SVD) and wavelet difference reduction (WDR). The two techniques are combined so that SVD compression boosts the performance of WDR compression: SVD compression offers very high image quality but low compression ratios, whereas WDR compression offers high compression. In the proposed technique, an input image is first compressed using SVD and then compressed again using WDR; the WDR stage is used to reach the required overall compression ratio. The proposed technique was tested on several test images and the results compared with those of WDR and JPEG2000. The quantitative and visual results show the superiority of the proposed technique over the aforementioned methods. At a compression ratio of 80:1 on Goldhill, the proposed technique achieves a PSNR of 33.37 dB, which is 5.68 dB and 5.65 dB higher than the JPEG2000 and WDR techniques respectively.
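
The SVD stage is standard; below is a minimal sketch of the rank-k approximation that would feed the WDR coder (WDR itself has no common library implementation and is omitted):

```python
import numpy as np

def svd_stage(gray, k):
    """Rank-k SVD approximation: high quality, modest compression. Storing
    U[:, :k], s[:k], Vt[:k] costs k*(m+n+1) values instead of m*n."""
    U, s, Vt = np.linalg.svd(gray.astype(np.float64), full_matrices=False)
    approx = (U[:, :k] * s[:k]) @ Vt[:k]
    return np.clip(approx, 0, 255).astype(np.uint8)
```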

16.
In the JPEG standard, the discrete cosine transform (DCT) at the heart of the lossy compression algorithm is used in many image compression settings; in practice it achieves high compression ratios while keeping the compressed image visually almost identical to the original, and so it has been widely adopted. To improve image quality, this paper proposes an improved image compression algorithm based on the two-dimensional DCT that controls the size of the compressed coefficient array by setting the quantization coefficients, and applies a fast DCT algorithm in the compression stage. Simulation results show that the algorithm further improves the peak signal-to-noise ratio (PSNR) and the subjective visual quality of the image.
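
A hedged sketch of the quantization control described: a per-block 2-D DCT whose quantization table is scaled by a single coefficient alpha, trading reconstruction quality against the number of surviving nonzero coefficients. The frequency-weighted toy table is my assumption, not the paper's:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def quantize_block(block, alpha=2.0):
    """Quantize an 8x8 DCT block; alpha scales the quantization steps and
    thereby controls how many coefficients survive (the 'array size')."""
    i, j = np.indices((8, 8))
    Q = alpha * (1.0 + i + j)            # toy frequency-weighted table (assumption)
    return np.round(dct2(block.astype(np.float64)) / Q), Q

def dequantize_block(q, Q):
    return idct2(q * Q)
```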

17.
Effect of Severe Image Compression on Iris Recognition Performance
We investigate three schemes for severe compression of iris images in order to assess their impact on the recognition performance of the algorithms deployed today for identifying people by this biometric feature. Currently, standard iris images are 600 times larger than the IrisCode templates computed from them for database storage and search, but it is administratively desired that iris data be stored, transmitted, and embedded in media in the form of images rather than as templates computed with proprietary algorithms. To reconcile that goal with its implications for bandwidth and storage, we present schemes that combine region-of-interest isolation with JPEG and JPEG2000 compression at severe levels, and we test them using a publicly available database of iris images. We show that it is possible to compress iris images to as little as 2000 bytes with minimal impact on recognition performance. Only some 2% to 3% of the bits in the IrisCode templates are changed by such severe image compression, and we calculate the entropy per code bit introduced by each compression scheme. Error tradeoff curves document very good recognition performance despite this reduction in data size by a net factor of 150, approaching a convergence of image data size and template size.
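
For readers who want to reproduce the setup approximately: Pillow's JPEG 2000 encoder (when built with OpenJPEG) accepts a target compression ratio, so a fixed byte budget can be approached as below. The ROI box is a placeholder; real systems localize the iris first:

```python
from PIL import Image   # requires Pillow built with OpenJPEG support

def compress_iris(path, out_path, target_bytes=2000):
    roi = Image.open(path).convert("L").crop((100, 100, 420, 420))  # placeholder ROI box
    rate = (roi.width * roi.height) / target_bytes   # 8-bit grayscale: 1 byte per pixel
    roi.save(out_path, "JPEG2000", quality_mode="rates", quality_layers=[rate])
```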

18.
The aim of this paper is to analyze the image formats used in FPGA implementations of edge detection methods. All cameras used in FPGA applications provide Raw RGB output video; some also provide YUV, YCbCr, RGB565/555, or compressed JPEG formats. If the FPGA circuit has a limited number of configurable logic blocks (CLBs), the JPEG format seems a good way to increase the size of the image that can be processed. On the other hand, using an image with lossy compression can affect the overall result of image processing to a greater or lesser extent. The first goal of this paper is to show whether lossy image compression can affect the quality of edge detection; the results presented here show that it can impair edge detection efficiency by up to six percent. Many researchers have proposed FPGA implementations of edge detection methods, and usually their first step is RGB-to-grayscale conversion, because the edge detectors operate on grayscale images. The second goal of this paper is to show that the performance of an FPGA implementation can be improved if the YUV, YCbCr, or Raw RGB camera output formats are used instead of RGB.
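
The saving behind the second goal is easy to see in a sketch: with RGB input, grayscale requires a weighted sum per pixel (multipliers on the FPGA), whereas with YUV/YCbCr input the luma plane is already the grayscale image:

```python
import numpy as np

def gray_from_rgb(rgb):
    """RGB path: a weighted sum per pixel (three multipliers on the FPGA)."""
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]).astype(np.uint8)

def gray_from_yuv(yuv):
    """YUV/YCbCr path: the luma plane already is the grayscale image."""
    return yuv[..., 0]
```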

19.
王立  伍宗富 《计算机仿真》2006,23(6):147-149
JPEG2000 is the latest still-image compression standard. It adopts advanced compression techniques that make it outperform JPEG in compression performance, progressive transmission, and flexibility, and it provides excellent rate-control capabilities. Through rate control, the codestream can be truncated at any point and the truncated stream can still be decoded, which makes progressive transmission by quality or resolution straightforward. The rate-control algorithm directly affects compression quality, coding efficiency, and memory usage. This paper describes two rate-control algorithms in detail: post-compression rate-distortion optimization (PCRD), as recommended by JPEG2000, and priority scanning rate allocation (PSRA). The two algorithms are compared in algorithmic complexity, memory consumption, and compression quality, and their effectiveness is finally evaluated with experimental data.
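
For intuition about PCRD, here is a greedy Lagrangian sketch: each code block offers incremental coding passes with a byte cost and a distortion reduction, and passes are taken in order of steepest distortion-per-byte slope until the budget is spent. Real PCRD additionally restricts candidates to convex-hull slopes; the data layout here is my assumption:

```python
import heapq

def greedy_rate_allocation(blocks, budget):
    """blocks: per code block, a list of sequential passes (extra_bytes, dist_drop).
    Returns a truncation point per block and the bytes actually spent."""
    chosen = [0] * len(blocks)
    heap = []
    for i, passes in enumerate(blocks):
        if passes:
            b, d = passes[0]
            heapq.heappush(heap, (-d / b, i))    # max-heap on distortion-per-byte slope
    spent = 0
    while heap:
        _, i = heapq.heappop(heap)
        b, d = blocks[i][chosen[i]]
        if spent + b > budget:
            continue                             # pass too big; try shallower slopes
        spent += b
        chosen[i] += 1
        if chosen[i] < len(blocks[i]):
            nb, nd = blocks[i][chosen[i]]
            heapq.heappush(heap, (-nd / nb, i))
    return chosen, spent

# toy usage: two code blocks, passes as (extra_bytes, distortion_drop)
print(greedy_rate_allocation([[(100, 50.0), (80, 10.0)], [(60, 40.0), (120, 8.0)]], 250))
```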

20.
A semi-fragile image watermarking algorithm robust to JPEG lossy compression is proposed. Exploiting the sensitivity of chaotic maps to initial values, together with the invariance of certain DCT coefficients during JPEG compression, the algorithm combines pre-quantized low-frequency DCT coefficients with a watermark key to form the initial value of a chaotic system, and generates the watermark through repeated chaotic iteration. The watermark is then embedded by adjusting the magnitudes of mid-frequency DCT coefficients according to the watermark. By mapping each image block's index to the iteration count of the chaotic system, attacks that target block-independent watermarking schemes, such as vector quantization, are avoided. Experimental results show that the algorithm is robust to JPEG lossy compression while precisely detecting and localizing malicious tampering of image content.
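
A sketch of the watermark-bit generation as described — a logistic map seeded from a quantized low-frequency DCT value plus a key, iterated a block-index-dependent number of times. The seed-mixing formula and parameters are my assumptions:

```python
def chaotic_watermark_bit(dct_low, key, block_index, mu=3.99):
    """Seed a logistic map from a quantized low-frequency DCT value and a
    secret key; the block index sets the iteration count, so identical
    blocks at different positions give different bits -- which is what
    defeats vector-quantization style attacks on block-wise watermarks."""
    x = ((dct_low * 0.618 + key) % 1.0) or 0.5    # toy seed mixing (assumption)
    for _ in range(100 + block_index):
        x = mu * x * (1.0 - x)
    return 1 if x > 0.5 else 0
```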
