Similar Documents
20 similar documents retrieved.
1.
Real-time databases handle a large number of measurement points and a huge volume of data; the data change slowly and contain considerable redundancy, while the database itself has strict real-time requirements, so efficient compression algorithms are needed for real-time data. Compression algorithms in real-time databases fall into two classes, lossy and lossless; this paper studies lossy compression. By analyzing and comparing existing lossy compression algorithms, a new algorithm is summarized and proposed. Based on prediction and dynamic correction, it performs fast and efficient lossy compression of real-time data. Tests and comparisons show that the algorithm improves the compression ratio while still meeting the system's requirements on reconstruction accuracy.
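As a rough illustration of "prediction plus dynamic correction", the sketch below shows a generic prediction-deadband compressor in Python; it is not the paper's exact algorithm, and the tolerance, the linear predictor, and the record format are assumptions made here for illustration.

import bisect

def compress(samples, tol=0.5):
    """samples: list of (timestamp, value) with strictly increasing timestamps.
    Keep a sample only when it deviates from a linear extrapolation of the
    last two retained points by more than tol (the dynamic-correction step)."""
    if len(samples) <= 2:
        return list(samples)
    kept = list(samples[:2])
    for t, v in samples[2:]:
        (t0, v0), (t1, v1) = kept[-2], kept[-1]
        predicted = v1 + (v1 - v0) * (t - t1) / (t1 - t0)
        if abs(v - predicted) > tol:
            kept.append((t, v))
    return kept

def reconstruct(kept, t):
    """Approximate the value at time t by interpolating the retained points."""
    ts = [p[0] for p in kept]
    i = min(max(bisect.bisect_left(ts, t), 1), len(kept) - 1)
    (t0, v0), (t1, v1) = kept[i - 1], kept[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)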

2.
In this article, the researcher introduces a hybrid chain code for shape encoding, as well as lossless and lossy bi-level image compression. The lossless and lossy mechanisms primarily depend on agent movements in a virtual world and are inspired by several agent-based models, including the Paths model, the Bacteria Food Hunt model, the Kermack–McKendrick model, and the Ant Colony model. These models influence the present technique in three main ways: the movements of agents in a virtual world, the directions of movements, and the paths along which agents walk. The agent movements are designed, tracked, and analyzed to take advantage of the arithmetic coding algorithm used to compress the series of movements encountered by the agents in the system. For the lossless mechanism, seven movements are designed to capture all the possible directions of an agent and to yield greater space savings after encoding with the arithmetic coding method. The lossy mechanism incorporates the seven movements of the lossless algorithm along with extra modes, which allow certain agent movements to provide further reduction. Additionally, two extra movements that lead to more substitutions are employed in both mechanisms. The empirical results show that the present approach to bi-level image compression is robust and that its compression ratios are much higher than those obtained by other methods, including JBIG1 and JBIG2, the international standards in bi-level image compression. A series of paired-samples t-tests also reveals that the differences between the current algorithm's results and those of all the other approaches are statistically significant.

3.
The state-of-the-art H.264/AVC was originally designed for lossy video coding. More recently, H.264/AVC FRExt added lossless coding by removing transformation and quantization. In this paper, we propose an efficient intra lossless coding method based on pixel-wise prediction. The proposed algorithm introduces an additional intra prediction mode that employs the LOCO-I predictor of JPEG-LS. The proposed lossless coding algorithm achieves approximately 22.0, 2.6, and 10.7% bit savings in terms of compression ratio compared with H.264/AVC FRExt, lossless intra 4:4:4, and Takamura's lossless coding methods, respectively.
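For context, the LOCO-I predictor referenced here is the JPEG-LS median edge detector, which predicts each pixel from its left, upper, and upper-left neighbors; a minimal sketch (variable names are mine):

def loco_i_predict(a, b, c):
    """LOCO-I / JPEG-LS median edge detector.
    a = left neighbor, b = upper neighbor, c = upper-left neighbor."""
    if c >= max(a, b):
        return min(a, b)   # vertical or horizontal edge suspected
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth area: planar prediction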

4.
To address the loss of real-time performance when large amounts of industrial floating-point data are transmitted over GPRS networks, a lossless compression method called IFPC is proposed, combining the double-precision floating-point compression algorithm for scientific computing (FPC) with range coding to compress, transmit, and decompress industrial floating-point data. Experiments first compare the FPC algorithm with general-purpose lossless algorithms on the floating-point portion of the data; the results show that FPC achieves a better compression ratio and shorter compression and decompression times. Further experiments on compressing and decompressing the whole data domain with the combined IFPC algorithm show that, compared with general-purpose lossless algorithms, the proposed method improves the compression ratio by at least 7.6%, reduces compression time by at least 49.1%, and cuts overall transmission time by 21.3%, improving transmission timeliness.
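A much-simplified illustration of the FPC idea (predict, XOR, then code the leading zero bytes) is sketched below; the real FPC uses FCM/DFCM hash-table predictors and IFPC adds range coding on top, neither of which is shown, and the output format here is a placeholder assumption.

import struct

def fpc_like_compress(values):
    """values: iterable of Python floats.
    Returns (leading_zero_byte_count, residual_bytes) pairs using a trivial
    last-value predictor; a good prediction makes the XOR residual short."""
    out, prev = [], 0
    for v in values:
        bits = struct.unpack('<Q', struct.pack('<d', v))[0]
        xor = bits ^ prev          # small when the prediction is close
        prev = bits
        nbytes = (xor.bit_length() + 7) // 8
        out.append((8 - nbytes, xor.to_bytes(nbytes, 'little')))
    return out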

5.
6.
A novel compression algorithm suited to medical images, AR-EWC (embedded wavelet coding of arbitrary ROI), is proposed. The algorithm guarantees a smooth transition between the boundary of the region of interest and the background, supports distortion-free coding of ROIs of arbitrary shape, produces an embedded bitstream whose quality improves progressively from lossy to lossless, and allows independent random access to the ROI. Exploiting the characteristics of medical images, regions containing important clinical diagnostic information are compressed losslessly, while the background is compressed lossily at a high ratio; this satisfies the high quality demanded of medical images while greatly improving the overall compression ratio relative to traditional lossless schemes. Experiments on a clinical head MR image dataset show that the algorithm, while keeping the ROI lossless, achieves compression ratios comparable to classical lossy compression algorithms.

7.
Raster-based topographic maps are commonly used in geoinformation systems to overlay geographic entities on top of digital terrain models. Using compressed texture formats for encoding topographic maps allows reducing latency times while visualizing large geographic datasets. Topographic maps encompass high-frequency content with large uniform regions, making current compressed texture formats inappropriate for encoding them. In this paper we present a method for locally-adaptive compression of topographic maps. Key elements include a Hilbert scan to maximize spatial coherence, efficient encoding of homogeneous image regions through arbitrarily-sized texel runs, a cumulative run-length encoding supporting fast random access, and a compression algorithm supporting lossless and lossy compression. Our scheme can be easily implemented on current programmable graphics hardware, allowing real-time GPU decompression and rendering of bilinear-filtered topographic maps.
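The cumulative run-length idea, storing prefix sums of run lengths so any texel can be located by binary search at decode time, can be sketched as follows; the Hilbert scan, the GPU packing, and the lossy mode are omitted, and the data layout is an assumption.

import bisect

def encode_runs(texels):
    """Run-length encode a scan-ordered texel sequence, keeping cumulative end positions."""
    values, ends, pos = [], [], 0
    for t in texels:
        if values and t == values[-1]:
            pos += 1
        else:
            if values:
                ends.append(pos)
            values.append(t)
            pos += 1
    ends.append(pos)
    return values, ends       # ends[i] = index one past the last texel of run i

def texel_at(values, ends, i):
    """Random access: O(log n) lookup of the texel at linear index i."""
    return values[bisect.bisect_right(ends, i)]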

8.
Most current encryption-then-compression (ETC) methods can only achieve a limited set of fixed compression rates and cannot deliver the arbitrary rates needed in practice. To address this, a lossy compression algorithm for encrypted color images with arbitrary compression rates is proposed; it compresses the encrypted image by organically combining uniform downsampling with random downsampling so that any desired rate can be reached. After receiving the compressed sequence of the encrypted image, the receiver decompresses and decrypts it to obtain the decrypted image; lossy reconstruction of the original image from this decrypted image is then formulated as an optimization problem constrained by the downsampling scheme, and a convolutional-neural-network reconstruction model for the lossy ETC system, ETRN (ETC-oriented reconstruction network), is designed to solve it. ETRN consists of a shallow feature extraction (SFE) layer, a residual-in-residual (RIR) module, a residual content supplementation (RCS) module, and a down-sampling constraint (DC) module. Simulation results show that the proposed algorithm achieves excellent encryption, compression, and reconstruction performance, demonstrating its feasibility and effectiveness.

9.
To improve the security of today's widely used smart cards, this paper proposes embedding the cardholder's portrait in the card and, to cope with the compression of the color portrait required by the card's limited capacity, presents a hierarchical compression algorithm based on portrait features. The algorithm unifies lossless and lossy compression within a wavelet-transform framework: the face region is first detected in the image and compressed losslessly or with low-ratio lossy compression, while the remaining regions are compressed lossily at a high ratio. Experiments show that the algorithm achieves good results in both compression ratio and reconstructed image quality.

10.
The popularity of multimedia applications has driven the development of lossless and lossy compression techniques. This paper presents a novel lossy compression scheme for low depth-of-field (DOF) images in which the quality factor is altered depending on whether the object of interest (OOI) or the background is being compressed. The proposed method segments the OOI and then applies the lossy scheme. Experimental results show that the method compresses the given image well (higher compression ratio) while maintaining acceptable quality (high PSNR).

11.
The correlated statistics that lossy compression and resampling introduce among image pixels make resampling difficult to detect in lossy-compressed images. To solve this problem, a resampling detection algorithm designed for lossless images is proposed: the periodicity of the interpolated signal is used to analyze the image's frequency-domain features, and resampling is detected by estimating the interpolation coefficients. Experimental results show that the algorithm is robust, widely applicable, and achieves a high detection rate even for resampling in JPEG lossy-compressed images.
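As a rough illustration of periodicity-based detection (a generic sketch, not the paper's estimator): a fixed linear-prediction residual is computed and its 2-D spectrum inspected for the periodic peaks that interpolation leaves behind; the interpolation-coefficient estimation and the thresholding used in the paper are omitted.

import numpy as np

def resampling_spectrum(img):
    """img: 2-D grayscale array. Returns the normalized magnitude spectrum of a
    fixed 4-neighbor prediction residual; pronounced off-center peaks suggest
    the periodic correlations introduced by resampling/interpolation."""
    img = img.astype(np.float64)
    pred = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    residual = img - pred
    spec = np.abs(np.fft.fftshift(np.fft.fft2(residual)))
    return spec / spec.max()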

12.
Data hiding methods can be divided into two types: (1) those in which the extracted important data are lossy, and (2) those in which they are lossless. The proposed method belongs to the second type. In this paper, a module-based substitution method with a lossless secret-data compression function is used to conceal the smoother areas of the secret image by modifying fewer pixels in the generated stego-image. Compared with previous data hiding methods that extract lossless data, the stego-image generated by the proposed method is always of better quality unless the hidden image is highly random.

13.
Task-Oriented Medical Image Compression
Modern medical imaging techniques produce large numbers of digital medical images, yet storing and transmitting these images poses serious problems. Traditionally, lossless coding has been used to improve storage and transmission efficiency, but reaching higher compression ratios requires lossy compression; since lossy compression introduces distortion, it must be used with care. A medical image typically consists of two kinds of regions. One contains important diagnostic information, and because the cost of misrepresenting it is very high, a compression method with high reconstruction quality is essential there; the information in the other regions is less important, and the goal for them is the highest possible compression ratio. To guarantee the reconstruction quality of the region of interest while still achieving a high compression ratio, a task-oriented medical image compression algorithm is proposed. It unifies lossless and lossy compression within a wavelet-transform framework, compressing the region of interest losslessly and the remaining parts lossily. Experiments show that the method performs well in both compression ratio and reconstructed image quality.

14.
Against the background of ever-growing pressure on the storage and transmission of massive scientific data, we study a compression method for scientific computing data discretized on structured grids, based on optimal interpolation prediction with a 9-point stencil in two dimensions and a 27-point stencil in three dimensions. Numerical experiments show that the method substantially outperforms existing compression algorithms and provides a good solution to the compressed storage of scientific computing data.
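A toy two-dimensional analogue of interpolation-based prediction on a structured grid is sketched below; it uses a simple 4-point average rather than the paper's optimal 9-point stencil, and the entropy coding of the residuals is not shown.

import numpy as np

def predict_residuals(grid):
    """grid: 2-D numpy array of field values on a structured mesh.
    Predict interior points from their 4 axis neighbors and return residuals,
    which are typically small and highly compressible for smooth fields."""
    pred = np.copy(grid)
    pred[1:-1, 1:-1] = (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                        grid[1:-1, :-2] + grid[1:-1, 2:]) / 4.0
    return grid - pred        # zero on the boundary, small residuals inside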

15.
A Near-Lossless Compression Method for Multispectral Remote Sensing Images
Near-lossless compression is a compromise between lossless and lossy compression. Near-lossless compression of multispectral remote sensing images is usually realized with a K-L transform to remove inter-band (spectral) redundancy and a discrete cosine transform (DCT) to remove spatial redundancy. This paper analyzes the characteristics of the spatial and spectral redundancy of multispectral remote sensing images and proposes removing both kinds of redundancy with a K-L transform combined with a prediction-tree method. The approach removes spectral redundancy more effectively and yields good experimental results.
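Spectral decorrelation with the K-L transform amounts to projecting the bands onto the eigenvectors of their covariance matrix; a minimal sketch follows (the prediction-tree stage for spatial redundancy and the near-lossless quantization are not shown, and the function names are mine).

import numpy as np

def kl_transform_bands(cube):
    """cube: (bands, rows, cols) multispectral image.
    Returns decorrelated components plus the basis and mean needed for inversion."""
    bands, rows, cols = cube.shape
    X = cube.reshape(bands, -1).astype(np.float64)
    mean = X.mean(axis=1, keepdims=True)
    cov = np.cov(X - mean)                         # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    basis = eigvecs[:, order]                      # K-L basis, strongest component first
    components = basis.T @ (X - mean)              # spectrally decorrelated planes
    return components.reshape(bands, rows, cols), basis, mean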

16.
An optimized lossless compression algorithm for ARGB data and its FPGA implementation are presented. To avoid decompressing and recompressing an entire file, the image is compressed and decompressed block by block using Deflate-based methods, which greatly improves memory access efficiency. Compressing small blocks with Deflate exploits its LZ77 and Huffman coding stages to optimize the algorithm. The algorithm is implemented as an FPGA circuit with VIVADO HLS, applied to multiple images in practice to confirm its effectiveness, and analyzed for power consumption and timing.
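The block-wise use of Deflate can be sketched in a few lines with zlib; the block size and container layout below are assumptions, and the FPGA datapath built with VIVADO HLS is of course not modeled here.

import zlib

def compress_blocks(argb_bytes, block_size=16 * 16 * 4):
    """Split raw ARGB data into fixed-size blocks and Deflate each block
    independently, so a single block can be read back without touching the rest."""
    blocks = [argb_bytes[i:i + block_size] for i in range(0, len(argb_bytes), block_size)]
    return [zlib.compress(b, 9) for b in blocks]

def read_block(compressed_blocks, index):
    """Random access: decompress only the requested block."""
    return zlib.decompress(compressed_blocks[index])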

17.

Demand for richer social media applications has grown tremendously, requiring digital systems with larger storage capacity and more processing power. However, larger multimedia content reduces overall processing performance, because storing and retrieving large files lengthens execution time; reducing multimedia content size is therefore extremely important, and this can be achieved by image and video compression. Image and video compression is either lossy or lossless: in lossless compression the decompressed image is an exact copy of the original, whereas in lossy compression the two differ. Lossless compression is needed when every pixel matters, as in automated image-processing applications, while lossy compression suits applications based on human visual perception, where overall image quality matters more than any single pixel. Many video compression algorithms have been proposed, but the balance between compression rate and video quality still needs further investigation, and the algorithm developed in this research focuses on that balance. The proposed algorithm applies a distinct compression stage to each type of information: eliminating redundant and semi-redundant frames, manipulating consecutive XORed frames, and reducing the discrete cosine transform coefficients according to the desired accuracy and compression ratio. A neural network is then used to further reduce frame size. The proposed method is lossy, but it can approach near-lossless behavior in terms of image quality and compression ratio with comparable execution time.
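Two of the stages described, XOR-ing consecutive frames and truncating DCT coefficients to the desired accuracy, might look roughly like the sketch below; the frame format, the keep ratio, and the neural-network stage are assumptions for illustration, not details taken from the paper.

import numpy as np
from scipy.fft import dctn, idctn

def xor_frames(prev, curr):
    """Consecutive-frame redundancy: identical pixels XOR to zero,
    leaving sparse, highly compressible difference frames (uint8 inputs)."""
    return np.bitwise_xor(prev, curr)

def truncate_dct(frame, keep_ratio=0.1):
    """Keep only the largest-magnitude DCT coefficients (lossy step);
    keep_ratio trades accuracy against compression."""
    coeffs = dctn(frame.astype(np.float64), norm='ortho')
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep_ratio)
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return idctn(coeffs, norm='ortho')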


18.
A Medical Image Compression Method Based on ROI Region Growing
To overcome the shortcomings of traditional ROI-based compression, which cannot improve the compression ratio of images containing long, narrow, irregular ROIs, a medical image compression method based on ROI region growing is proposed. Region growing is used to segment and extract the ROI of a medical image, after which the ROI and the region of non-interest (RONI) are compressed with a combination of lossless and lossy coding. This resolves the conflict between high compression ratio and high quality in medical images reasonably well and shortens compression time. The method also offers a convenient interactive interface, is easy to operate, and adapts well to compression tasks in other settings.

19.
In the past decade, the number of mobile devices has increased significantly, and these devices now offer substantial computational capability, so it is possible to envision a near future in which client applications are deployed on them. There are, however, constraints that hinder this deployment, especially the limited communication bandwidth and storage space available. This paper describes the Efficient XML Data Exchange Manager (EXEM), which combines context-dependent lossy and lossless compression mechanisms to support lightweight exchange of objects in XML format between server and client applications. The lossy compression mechanism reduces the size of XML messages by using known information about the application. The lossless compression mechanism decouples data and metadata (compression dictionary) content. We illustrate the use of EXEM with a prototype implementation of the lossless compression mechanism that shows the optimization of the available resources on the server and the mobile client. These experimental results demonstrate the efficiency of the EXEM approach for XML data exchange in the context of mobile application development.

20.
Objective: Traditional steganography struggles to protect the integrity of secret messages over real social-network channels, where images are usually transmitted through lossy compression and covert communication therefore fails. To keep stego images robust after passing through a compression channel, designing secure and robust covert communication is of practical value. Based on minimizing the loss of image information, this paper proposes robust JPEG steganography that combines a lossless cover with a robustness cost. Method: First, it is shown that constructing a lossless cover effectively balances steganographic security and robustness; the spatial-domain pixel blocks of a JPEG image before and after the compression channel are differenced to construct the lossless cover and determine the robust embedding domain. Second, by applying ±1 operations to the discrete cosine transform (DCT) coefficients and measuring the loss of spatial information before and after compressed transmission, a robustness cost is designed to gauge how well each DCT coefficient resists compression; it is also verified that under low-quality-factor compression channels this cost better distinguishes the robustness of DCT coefficients. Finally, syndrome-trellis coding (STC) is used to embed the secret message using the lossless cover and the robustness cost. Result: Comparative experiments on the BossBase1.01 image database show that, relative to traditional JPEG steganography, using the constructed lossless cover as the embedding domain lowers the average message extraction error rate by 24.97% and raises the rate of correctly extracted images by 21.35%; on this basis, the robustness cost further lowers the average extraction error rate by 1.05% and raises the correct-extraction rate by 16.12%, confirming that the method markedly improves resistance to compression. Compared with three representative existing methods, J-UNIWARD (JPEG universal wavelet relative distortion), JCRISBE (JPEG compression resistant solution with BCH code), and AutoEncoder (autoencoder and adaptive BCH encoding), the proposed method lowers the average extraction error rate by 95.78%, 93.17%, and 87.38%, respectively, and its correct-extraction rate is 86.69, 30.74, and 4.13 times theirs. Visual quality approaches that of traditional steganography while good resistance to steganalysis is maintained. Conclusion: With the proposed steganography robust to low-quality-factor JPEG compression, the intermediate image remains strongly resistant to compression and detection after passing through the compression channel while keeping high image quality.
