Similar Literature
 19 similar documents found (search time: 109 ms)
1.
Design of an LED Synchronous Display System Based on IWT Image Compression Technology   Cited by: 1 (self-citations: 1, other citations: 0)
In a full-color LED large-screen synchronous display system there is a conflict between real-time image display and the available communication bandwidth. The paper applies an integer wavelet transform (IWT) algorithm to compress images before transmission, which greatly reduces the communication load. Because the IWT involves only integer addition, subtraction, and shift operations, it avoids the complex floating-point arithmetic required by most current image compression and decompression algorithms, so images can be decompressed quickly inside the display-screen controller. Experiments show that the algorithm achieves good compression performance, that the VLSI hardware design occupies only a small number of logic elements, and that image reconstruction is fast enough to meet the real-time display requirements of a synchronous screen.
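
As a rough illustration of why an integer wavelet transform can be decoded with only integer additions, subtractions, and shifts, below is a minimal one-level 1-D CDF 5/3 lifting sketch in Python/NumPy. It is not the paper's implementation; the function names, the periodic boundary handling, and the even-length assumption are illustrative choices.

    import numpy as np

    def cdf53_forward(x):
        # One-level integer 5/3 lifting transform (sketch).
        # Only integer add/subtract/shift operations are used, which is why
        # such a transform maps well onto a display-controller FPGA/ASIC.
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2].copy(), x[1::2].copy()   # assumes even-length input
        odd -= (even + np.roll(even, -1)) >> 1        # predict step: detail band
        even += (odd + np.roll(odd, 1) + 2) >> 2      # update step: approximation band
        return even, odd

    def cdf53_inverse(even, odd):
        # Exact integer reconstruction (the transform is lossless).
        even = even - ((odd + np.roll(odd, 1) + 2) >> 2)
        odd = odd + ((even + np.roll(even, -1)) >> 1)
        out = np.empty(even.size + odd.size, dtype=np.int64)
        out[0::2], out[1::2] = even, odd
        return out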

2.
毛峡  湛杰 《电视技术》2011,35(7):52-55
The CCSDS image compression algorithm is the image compression standard formulated by the Consultative Committee for Space Data Systems for deep-space exploration imagery. The algorithm offers a controllable bit rate and relatively low complexity, and its structure is well suited to hardware implementation, supporting high-speed real-time processing of space data. The paper presents the design and implementation of the bit-plane coding algorithm on programmable-logic hardware; by analyzing the relationships within and between the modules that encode the DC and AC coefficients, a pipeline is designed so that the system processing speed...
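
As a generic illustration of bit-plane coding (not the CCSDS-specified scan order or the paper's pipeline), the sketch below splits integer transform coefficients into sign and magnitude and emits the magnitude bit-planes from most to least significant, which is the part of the process that pipelines naturally in hardware.

    import numpy as np

    def bit_planes(coeffs, num_planes=8):
        # Split signed integer coefficients into sign and magnitude,
        # then collect magnitude bit-planes from MSB down to LSB.
        c = np.asarray(coeffs, dtype=np.int64)
        sign = (c < 0).astype(np.uint8)
        mag = np.abs(c)
        planes = []
        for b in range(num_planes - 1, -1, -1):   # most significant plane first
            planes.append(((mag >> b) & 1).astype(np.uint8))
        return sign, planes

    # Each plane is a binary image; an entropy coder (e.g. the CCSDS
    # AC/DC coefficient coder) would then encode the planes in this order.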

3.
In a cruise-missile flight-control data-link transmission system, an efficient image compression coding method reduces the load of image data transmission and of image storage on the missile-borne terminal. A wavelet-transform-based image compression coding scheme with high compression efficiency that is easy to implement in hardware is adopted, and the encoding and decoding algorithms are analyzed and their performance simulated. Simulation shows that when the flight-control data-link compression ratio exceeds 15:1, the recovered images are subjectively good to the human eye and the PSNR reaches about 35 dB.
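
The abstract quotes a PSNR of about 35 dB at compression ratios above 15:1; for reference, this is the standard PSNR definition for 8-bit images (a generic formula, not code from the paper).

    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        # Peak signal-to-noise ratio in dB for 8-bit imagery.
        err = np.asarray(original, np.float64) - np.asarray(reconstructed, np.float64)
        mse = np.mean(err ** 2)
        if mse == 0:
            return float('inf')
        return 10.0 * np.log10(peak ** 2 / mse)

    # 35 dB corresponds to an RMS error of roughly 255 / 10**(35/20) ≈ 4.5 grey levels.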

4.
A design scheme for an infrared image compression/decompression system based on the JPEG2000 algorithm is presented. The system uses the dedicated hardware compression chip ADV202 together with the ADV7179, the ADV7189, and a NiosII soft-core processor embedded in an FPGA, avoiding the design complexity and long development cycle of a direct FPGA+DSP approach. The NiosII soft-core processor controls the ADV202 and ADV7189 over the IIC bus to perform A/D conversion and image compression, and controls the ADV202 and ADV7179 to perform image decompression and D/A conversion. Field decimation and field compensation are used to meet the required compression and transmission rates, and the transmission rate can be changed by adjusting the decimation parameters. Experiments show that the design meets all the required technical specifications.

5.
Based on the JPEG2000 compression standard, this paper uses two ADV202 chips operating in parallel to compress wide-format images; the system can be used for real-time transmission of wide-format imagery in aerospace applications. The hardware circuit and software design of the system are given, and the initialization of the wide-format image compression system, built around multiple ADV202 chips with an FPGA and a DSP, and the method for running multiple ADV202s in parallel are described.

6.
Research on Real-Time Wavelet Processing of Missile-Borne Images   Cited by: 1 (self-citations: 0, other citations: 1)
Motivated by the requirements of battlefield image-reconnaissance projectiles, wavelet theory is applied to the real-time compression of missile-borne images. The characteristics and mechanism of missile-borne image compression are analyzed, a wavelet intra-frame compression algorithm model and a missile-borne real-time compression hardware circuit are given, and laboratory hardware simulation results show that the model can be used for wireless transmission of battlefield missile-borne images.

7.
A design scheme for a high-resolution image acquisition card is described. The card consists of an image acquisition subsystem, an image compression subsystem, and a data upload subsystem; the paper mainly presents its hardware implementation.

8.
A Video Image Transmission System Based on an Embedded System   Cited by: 9 (self-citations: 0, other citations: 9)
With the rapid development of information technology, the demand for image transmission is becoming ever more pressing. Addressing the characteristics of real-time image transmission, the paper proposes an embedded wireless image transmission system built from the wavelet-transform-based image compression chip ADV612, an embedded Linux system, and Bluetooth, and gives the hardware implementation scheme.

9.
This paper describes the implementation of a high-resolution image acquisition card. The card consists of an image acquisition subsystem, an image compression subsystem, and a data upload subsystem: the acquisition subsystem receives the raw image data and performs preprocessing; the compression subsystem receives the preprocessed data and compresses it; and the upload subsystem handles data exchange with the PC. The paper mainly presents the hardware implementation.

10.
This paper briefly reviews the importance of image compression and the common lossless image compression algorithms, analyzes the advantages of the Fast, Efficient Lossless Image Compression System (FELICS), then analyzes the coding steps of the algorithm and a hardware implementation scheme in detail, and finally reports the FPGA performance figures obtained with the scheme. Compared with other compression algorithms, the scheme greatly reduces the memory and compression time required by a lossless image compression system.
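
As a heavily simplified sketch of the FELICS idea (prediction from the two nearest causal neighbours and a three-way in-range/above/below decision), the snippet below shows only the context decision; the adjusted-binary and Golomb-Rice coding steps and the parameter selection of the actual algorithm are omitted.

    def felics_context(pixel, n1, n2):
        # n1, n2: the two nearest causal neighbours of the current pixel.
        # FELICS codes one bit for "in range [L, H]" and, if outside the range,
        # one more bit for above/below plus the distance from the range.
        low, high = (n1, n2) if n1 <= n2 else (n2, n1)
        if low <= pixel <= high:
            return ('IN_RANGE', pixel - low)      # coded with an adjusted binary code
        elif pixel < low:
            return ('BELOW', low - pixel - 1)     # coded with a Golomb-Rice code
        else:
            return ('ABOVE', pixel - high - 1)    # coded with a Golomb-Rice code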

11.
张驰  高杰 《无线电工程》2006,36(4):29-31
The key technology for long-range ground-to-air high-speed image transmission is a high-quality image compression algorithm. A JPEG digital image compression coding technique is presented, and different rate-control techniques are applied to the coding so that high-quality image signals can be transmitted at a relatively low bit rate. A two-point-injection phase-locked 2CPFSK modulation scheme is adopted, which satisfies both the wideband modulation requirement and the low-noise requirement of the oscillator and improves the communication quality.

12.
The compression and decompression of continuous-tone images is important in document management and transmission systems. This paper considers an alternative image representation scheme, based on Gaussian derivatives, to the standard discrete cosine transformation (DCT), within a Joint Photographic Experts Group (JPEG) framework. Depending on the computer arithmetic hardware used, the approach developed might yield a compression/decompression technique twice as fast as the DCT and of (essentially) equal quality.

13.
A Hyperspectral Image Compression Method Based on Inter-Frame Decorrelation   Cited by: 6 (self-citations: 1, other citations: 6)
Addressing the characteristics of hyperspectral images and the practical needs of hardware implementation, a forward-prediction inter-frame decorrelation hyperspectral image compression algorithm based on the wavelet transform is proposed. Image registration and inter-frame decorrelation remove the redundancy between hyperspectral image frames; the residual images are compressed with a fast bit-plane algorithm based on the wavelet transform combined with adaptive arithmetic coding, and the output bit stream is controlled according to a rate-distortion criterion, achieving high-fidelity compression of hyperspectral images. Experiments confirm the effectiveness of the scheme: the fast bit-plane plus adaptive arithmetic coding compression is faster than SPIHT and easier to implement in hardware.
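
A minimal sketch of the forward-prediction inter-frame (inter-band) decorrelation idea: each band is predicted from the previous, already registered, band and only the residual goes to the wavelet/bit-plane coder. The simple least-squares gain/offset predictor below is an assumption for illustration, not the matching procedure of the paper.

    import numpy as np

    def forward_predict_residual(prev_band, curr_band):
        # Fit current band = gain * previous band + offset, then return the
        # residual image that would actually be compressed.
        p = prev_band.astype(np.float64).ravel()
        c = curr_band.astype(np.float64).ravel()
        A = np.stack([p, np.ones_like(p)], axis=1)
        (gain, offset), *_ = np.linalg.lstsq(A, c, rcond=None)
        prediction = gain * prev_band + offset
        residual = curr_band - prediction
        return residual, (gain, offset)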

14.
Prioritized DCT for compression and progressive transmission of images   Cited by: 2 (self-citations: 0, other citations: 2)
The approach is based on the block discrete cosine transform (DCT). The novelty of this approach is that the transform coefficients of all image blocks are coded and transmitted in absolute magnitude order. The resulting ordered-by-magnitude transmission is accomplished without sacrificing coding efficiency by using partition priority coding. Coding and transmission are adaptive to the characteristics of each individual image and are therefore very efficient. Another advantage of this approach is its high progression effectiveness. Since the largest transform coefficients, which capture the most important characteristics of images, are coded and transmitted first, this method is well suited for progressive image transmission. Further compression of the image data is achieved by multiple distribution entropy coding, a technique based on arithmetic coding. Experiments show that the approach compares favorably with previously reported DCT and subband image codecs.
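
A minimal sketch of the ordered-by-magnitude idea: compute the block DCT of the whole image, then emit coefficient coordinates in decreasing absolute magnitude so the perceptually dominant coefficients arrive first. Partition priority coding and the entropy coder are omitted; the use of scipy's dctn and the 8x8 block size are assumptions for illustration.

    import numpy as np
    from scipy.fft import dctn

    def magnitude_ordered_coefficients(image, block=8):
        # Block DCT of an image whose sides are multiples of `block`.
        h, w = image.shape
        coeffs = np.zeros((h, w), dtype=np.float64)
        for i in range(0, h, block):
            for j in range(0, w, block):
                tile = image[i:i+block, j:j+block].astype(np.float64)
                coeffs[i:i+block, j:j+block] = dctn(tile, norm='ortho')
        # Transmit (row, col, value) triples largest-magnitude first.
        order = np.argsort(np.abs(coeffs), axis=None)[::-1]
        rows, cols = np.unravel_index(order, coeffs.shape)
        return list(zip(rows, cols, coeffs[rows, cols]))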

15.
贾铸 《电视技术》1999,(11):8-11
Starting from a brief introduction to the statistical properties of image sources, the application of arithmetic coding to the compression of moving and still images is described, using the arithmetic coders of the H.263 and JPEG standards as examples, and its development prospects are discussed.

16.
In this paper, we propose a new approach for block-based lossless image compression by defining a new semiparametric finite mixture model-based adaptive arithmetic coding. Conventional adaptive arithmetic encoders start encoding a sequence of symbols with a uniform distribution, and they update the frequency of each symbol by incrementing its count after it has been encoded. When encoding an image row by row or block by block, conventional adaptive arithmetic encoders provide the same compression results. In addition, images are normally non-stationary signals, which means that different areas in an image have different probability distributions, so conventional adaptive arithmetic encoders which provide probabilities for the whole image are not very efficient. In the proposed compression scheme, an image is divided into non-overlapping blocks of pixels, which are separately encoded with an appropriate statistical model. Hence, instead of starting to encode each block with a uniform distribution, we propose to start with a probability distribution which is modeled by a semiparametric mixture obtained from the distributions of its neighboring blocks. The semiparametric model parameters are estimated through maximum likelihood using the expectation–maximization algorithm in order to maximize the arithmetic coding efficiency. The results of comparative experiments show that we provide significant improvements over conventional adaptive arithmetic encoders and the state-of-the-art lossless image compression standards.
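
A minimal sketch of the central idea: start a block's adaptive model from its neighbours' statistics instead of a uniform distribution. The EM-fitted semiparametric mixture of the paper is replaced here, as a stand-in, by a simple average of neighbouring-block histograms plus a uniform floor; the weighting and the arithmetic coder itself are assumptions and omissions, respectively.

    import numpy as np

    def initial_frequencies(neighbor_blocks, num_symbols=256, uniform_weight=0.1):
        # neighbor_blocks: list of already-encoded neighbouring pixel blocks.
        counts = np.zeros(num_symbols, dtype=np.float64)
        for blk in neighbor_blocks:
            hist = np.bincount(blk.ravel(), minlength=num_symbols)[:num_symbols]
            counts += hist / max(hist.sum(), 1)
        mixture = counts / max(len(neighbor_blocks), 1)
        mixture = (1 - uniform_weight) * mixture + uniform_weight / num_symbols
        return mixture / mixture.sum()   # initial model handed to the arithmetic coder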

17.
An algorithm for compression of bilevel images   Cited by: 2 (self-citations: 0, other citations: 2)
This paper presents the block arithmetic coding for image compression (BACIC) algorithm: a new method for lossless bilevel image compression which can replace JBIG, the current standard for bilevel image compression. BACIC uses the block arithmetic coder (BAC), a simple, efficient, easy-to-implement, variable-to-fixed arithmetic coder, to encode images. BACIC models its probability estimates adaptively based on a 12-bit context of previous pixel values; the 12-bit context serves as an index into a probability table whose entries are used to compute p(1) (the probability of a bit equaling one), the probability measure BAC needs to compute a codeword. In contrast, the Joint Bilevel Image Experts Group (JBIG) uses a patented arithmetic coder, the IBM QM-coder, to compress image data and a predetermined probability table to estimate its probability measures. JBIG, though, has not yet been commercially implemented; instead, JBIG's predecessor, the Group 3 fax (G3), continues to be used. BACIC achieves compression ratios comparable to JBIG's and is introduced as an alternative to the JBIG and G3 algorithms. BACIC's overall compression ratio is 19.0 for the eight CCITT test images (compared to JBIG's 19.6 and G3's 7.7), 16.0 for 20 additional business-type documents (compared to JBIG's 16.0 and G3's 6.74), and 3.07 for halftone images (compared to JBIG's 2.75 and G3's 0.50).
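
A minimal sketch of the context-modelling step described above: 12 previously coded pixels form a 12-bit index into a table of (zeros, ones) counts, from which p(1) is computed for the coder and then updated. The particular 12-pixel template below is an assumption for illustration, not necessarily the template BACIC uses.

    import numpy as np

    def context_index(img, r, c):
        # Build a 12-bit context from causal neighbours of bilevel pixel (r, c).
        template = [(-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2), (-1, 3),
                    (-2, -1), (-2, 0), (-2, 1), (0, -1), (0, -2), (0, -3)]
        ctx = 0
        for dr, dc in template:
            rr, cc = r + dr, c + dc
            inside = 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
            ctx = (ctx << 1) | (int(img[rr, cc]) if inside else 0)
        return ctx                                    # value in 0 .. 4095

    def p_one(table, ctx):
        # table: (4096, 2) array of (count of 0s, count of 1s), Laplace-smoothed.
        zeros, ones = table[ctx]
        return (ones + 1) / (zeros + ones + 2)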

18.
The large size of hyperspectral images, a consequence of the abundant information they store, is a significant obstacle to their use in practice. The use of deep learning for such data processing is visible in recent applications. In this work, we propose a lossy hyperspectral image compression algorithm based on the concept of autoencoders. It uses a combination of convolution layers and max-pooling layers to reduce the dimensions of the input image and generate a compressed image. The original image, with some loss of information, is reconstructed using transposed convolution layers that reverse the procedure used by the encoder. The compressed image is entropy coded using an adaptive arithmetic coder for transmission or storage. The method provides an improvement of 28% in PSNR with a 21-fold increase in compression ratio. The effect of compression on classification has also been evaluated using a state-of-the-art classification algorithm; a negligible difference in classification accuracy was observed, which demonstrates the effectiveness of the proposed algorithm.
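
A minimal PyTorch sketch of the encoder/decoder structure described (convolution plus max-pooling to shrink the spatial dimensions, transposed convolution to undo it). The layer sizes, the number of spectral bands, and the use of PyTorch are assumptions for illustration; the adaptive arithmetic coding of the latent representation is not shown.

    import torch
    import torch.nn as nn

    class HyperspectralAE(nn.Module):
        def __init__(self, bands=128):
            super().__init__()
            # Encoder: two conv + max-pool stages give a 4x spatial reduction.
            self.encoder = nn.Sequential(
                nn.Conv2d(bands, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Decoder: transposed convolutions reverse the spatial reduction.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 64, kernel_size=2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(64, bands, kernel_size=2, stride=2),
            )

        def forward(self, x):
            latent = self.encoder(x)   # compressed representation (to be entropy coded)
            return self.decoder(latent), latent

    # Example: x = torch.randn(1, 128, 64, 64); recon, code = HyperspectralAE()(x)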

19.
A Scalable Architecture for MPEG-4 Wavelet Quantization   Cited by: 3 (self-citations: 0, other citations: 3)
Wavelet-based image compression has been adopted in MPEG-4 for visual texture coding. All wavelet quantization schemes in MPEG-4 (Single Quantization (SQ), Multiple Quantization (MQ), and Bi-level Quantization) use Embedded Zero Tree (EZT) coding followed by an adaptive arithmetic coder for the compression and quantization of a wavelet image. This paper presents the OZONE chip, a dedicated hardware coprocessor for EZT and arithmetic coding. Realized in a 0.5 μm CMOS technology and operating at 32 MHz, the EZT coder is capable of processing up to 25.6 Mega pixel-bitplanes per second. This is equivalent to the lossless compression of 31.6 8-bit grayscale CIF images (352 × 288) per second. The adaptive arithmetic coder processes up to 10 Mbit per second. The combination of the performance of the EZT coder and the arithmetic coder allows the OZONE to perform visually lossless compression of more than 30 CIF images per second. Due to its novel and scalable architecture, parallel operation of multiple OZONEs is supported. The OZONE functionality is demonstrated on a PC-based compression system.
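
The 31.6 CIF-images-per-second figure follows directly from the quoted throughput; a quick check of the arithmetic (CIF resolution and 8 bit-planes per pixel are taken from the abstract):

    pixels_per_cif = 352 * 288                        # 101,376 pixels per CIF frame
    pixel_bitplanes_per_image = pixels_per_cif * 8    # 811,008 for an 8-bit grayscale image
    ezt_throughput = 25.6e6                           # pixel-bitplanes per second
    print(ezt_throughput / pixel_bitplanes_per_image) # ≈ 31.6 images per second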

