Similar Documents
1.
Virtually all teleconferencing systems incorporating video use compression algorithms to reduce the video data rate to manageable proportions. For a given bit rate, more efficient compression brings higher quality, although there can also be disadvantages, such as increased delay. Conferencing implies interconnectability, so the development of appropriate standards has played a vital part in enabling the videoconferencing market in particular to become established. This paper describes the main techniques used in video compression and how these have been embodied in successive generations of standards appropriate to various application areas. Other, non-standardised techniques and forthcoming standards are also discussed.

2.
Nonlinear pulse compression has been used to achieve transmission beyond the linear dispersion limit for 20 Gbit/s optical time-division-multiplexed data. Error-free system operation has been achieved over an operating wavelength range of 10 nm above the wavelength of zero dispersion, in the anomalous dispersion regime.

3.
Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG-7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.

4.
丁菲  党建成 《信息技术》2012,(5):141-144,148
On-board image compression is being adopted by more and more satellite missions with very large data volumes, and DWT-based image compression has become the principal on-board method thanks to its higher compression ratio, distinctive bitstream structure, and good error resilience. To address error propagation in DWT-compressed images over high-bit-error-rate satellite-to-ground links, a strategy is proposed that truncates the compressed bitstream and applies stronger channel-coding protection to the retained portion. Analysis and simulation show that the reconstruction quality of this strategy is greatly improved over the existing transmission mode and can even surpass that of direct downlink of the raw data.

5.
Reliability, scalability and clinical viability are of utmost importance in the design of wireless Brain Machine Interface systems (BMIs). This paper reports on the design and implementation of a neuroprocessor for conditioning raw extracellular neural signals recorded through microelectrode arrays chronically implanted in the brain of awake behaving rats. The neuroprocessor design exploits a sparse representation of the neural signals to combat the limited wireless telemetry bandwidth. We demonstrate a multimodal processing capability (monitoring, compression, and spike sorting) inherent in the neuroprocessor to support a wide range of scenarios in real experimental conditions. A wireless transmission link with rate-dependent compression strategy is shown to preserve information fidelity in the neural data. The optimal design for compression and sorting performance was evaluated for multiple sampling frequencies, wavelet basis choice and power consumption. At 32 channels, the neuroprocessor has been fully implemented on a 5 mm × 5 mm nano-FPGA, and the prototyping resulted in 5.19 mW power consumption, bringing its performance within the power-size constraints for clinical use.

6.
In this article, a run-length-encoding-based test data compression technique is presented. The scheme performs Huffman coding on different parts of the test data file separately. It has been observed that up to a 6% improvement in compression ratio and a 29% improvement in test application time can be achieved while sacrificing only about 6.5% of the decoder area. We have compared our results with other contemporary works reported in the literature. It has been observed that, in most cases, our scheme produces a better compression ratio and that the area requirements are much lower.
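To make the block-wise Huffman idea above concrete, here is a minimal sketch that builds a separate Huffman code for each part of a test-data file and totals the coded lengths. The byte-level alphabet and the 4096-symbol block size are illustrative assumptions; the paper's actual partitioning of the test set and its run-length front end are not reproduced.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code_lengths(chunk: bytes) -> dict:
    """Return the Huffman code length (in bits) of every symbol in `chunk`."""
    freq = Counter(chunk)
    if len(freq) == 1:                           # degenerate single-symbol block
        return {s: 1 for s in freq}
    tiebreak = count()                           # keeps the heap from comparing dicts
    heap = [(f, next(tiebreak), {s: 0}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**a, **b}.items()}  # push both subtrees one level down
        heapq.heappush(heap, (fa + fb, next(tiebreak), merged))
    return heap[0][2]

def blockwise_huffman_bits(data: bytes, block_size: int = 4096) -> int:
    """Total coded size in bits when each block is given its own Huffman table."""
    total_bits = 0
    for i in range(0, len(data), block_size):
        chunk = data[i:i + block_size]
        lengths = huffman_code_lengths(chunk)
        freq = Counter(chunk)
        total_bits += sum(freq[s] * lengths[s] for s in freq)
    return total_bits
```

Coding each part with its own table lets the code adapt to local statistics, which is where a compression-ratio gain over a single global table would come from, at the price of extra decoder area.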

7.
The compression of elevation data is studied. The performance of JPEG-LS, the new international ISO/ITU standard for lossless and near-lossless (controlled-lossy) still-image compression, is investigated both for data from the USGS digital elevation model (DEM) database and for navy-provided digital terrain model (DTM) data. Using JPEG-LS has the advantage of working with a standard algorithm. Moreover, in contrast with algorithms like the popular JPEG-lossy standard, this algorithm permits completely lossless compression of the data as well as a controlled lossy mode where a sharp upper bound on the elevation error is selected by the user. All this is achieved at very low computational complexity. In addition to these algorithmic advantages, the authors show that JPEG-LS achieves significantly better compression results than those obtained with other (nonstandard) algorithms previously investigated for the compression of elevation data. The results reported here suggest that JPEG-LS can immediately be adopted for the compression of elevation data in a number of applications.
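The "sharp upper bound on the elevation error" comes from the near-lossless residual quantisation used in JPEG-LS / LOCO-I. Below is a minimal sketch of just that mechanism (MED prediction plus uniform quantisation of the prediction error); it omits context modelling, run mode and Golomb coding, assumes integer elevation samples, and uses simplified border handling.

```python
import numpy as np

def med_predict(a, b, c):
    """Median edge detector predictor used in LOCO-I / JPEG-LS."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def near_lossless_codec(img: np.ndarray, delta: int = 0):
    """Encode/decode a DEM tile so that every sample error is at most `delta`."""
    h, w = img.shape
    recon = np.zeros((h, w), dtype=np.int64)
    residuals = np.zeros((h, w), dtype=np.int64)
    step = 2 * delta + 1
    for y in range(h):
        for x in range(w):
            a = recon[y, x - 1] if x > 0 else 0                # left neighbour
            b = recon[y - 1, x] if y > 0 else 0                # neighbour above
            c = recon[y - 1, x - 1] if x > 0 and y > 0 else 0  # diagonal neighbour
            pred = med_predict(a, b, c)
            err = int(img[y, x]) - pred
            q = int(np.sign(err)) * ((abs(err) + delta) // step)  # quantised residual
            residuals[y, x] = q
            recon[y, x] = pred + q * step       # decoder reconstructs identically
    assert np.max(np.abs(recon - img.astype(np.int64))) <= delta
    return residuals, recon
```

With delta = 0 the quantiser step is 1 and the scheme is exactly lossless; any delta > 0 trades a guaranteed per-sample error bound for a smaller residual alphabet.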

8.
We present a parallel algorithm, architecture, and implementation for efficient Lempel-Ziv (LZ)-based data compression. The parallel algorithm exhibits a scalable, parameterized, and regular structure and is well suited for VLSI array implementation. Based on our parallel algorithm and systematic design methodologies, two semisystolic array architectures have been developed which are low power and area efficient. The first architecture trades off the compression speed for the area and has a low run-time overhead for multichannel compression. The second architecture achieves a high compression rate (one data symbol per clock) at the expense of the area due to a large clock load and global wiring. Compared to a recent state-of-the-art parallel architecture, our first array structure requires significantly less chip area (≃330 k versus ≃36 k transistors) and more than an order of magnitude less power (≈1.0 W versus ≈70 mW) while still providing the compression speed required for most data communication applications. Hence, data compression can be adopted in portable data communication as well as wireless local area networks. The second architecture has at least three times less area and power while providing the same constant compression rate. To demonstrate the correctness of our design, a prototype module for the first architecture has been implemented using 1.2 μm complementary metal-oxide-semiconductor (CMOS) technology. The compression module contains 32 simple and identical processors, has an average compression rate of 12.5 million bytes/s, and consumes 18.34 mW without the dictionary (≈70 mW with a 4.1k SRAM for the dictionary) while operating at a 100 MHz clock rate (simulated).

9.
Compression filters with bandwidths up to 1000 MHz have application in high-resolution radar systems and rapid-scan receiver systems. A technique is presented for realizing a microwave linear delay (quadratic phase) versus frequency compression filter with sufficient delay accuracy to make compression ratios of up to 1000 to 1 feasible. The dispersive element in the compression filter is a silver tape with its broad side placed perpendicularly between the ground planes (instead of parallel, as in conventional stripline). The tape is folded back and forth upon itself in such a way that substantial coupling takes place between adjacent turns of the tape. A computer program has been written to determine the dimensions of the tape to achieve a linear delay versus frequency characteristic. A folded tape compression filter was constructed with a differential delay of 1.2 μs over a bandwidth of 600 MHz centered at 1350 MHz, giving a compression factor of 720 to 1. This filter was constructed in four identical sections, each of which had a differential delay of 0.3 μs over the same bandwidth as the complete filter. The entire filter (four sections) occupies a volume about 16 by 4 by 5 inches. Measurement data are presented which illustrate that the desired accurate delay characteristic was realized to within the ±1 ns measurement uncertainty.

10.
Many transform-based compression techniques, such as Fourier, Walsh, Karhunen-Loeve (KL), wavelet, and discrete cosine transform (DCT), have been investigated and devised for electrocardiogram (ECG) signal compression. However, the recently introduced Burrows-Wheeler Transformation has not been completely investigated. In this paper, we investigate the lossless compression of ECG signals. We show that when compressing ECG signals, utilization of linear prediction, Burrows-Wheeler Transformation, and inversion ranks yield better compression gain in terms of weighted average bit per sample than recently proposed ECG-specific coders. Not only does our proposed technique yield better compression than ECG-specific compressors, it also has a major advantage: with a small modification, the proposed technique may be used as a universal coder.
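A rough sketch of the pipeline named above — linear prediction, the Burrows-Wheeler transform, and a recency-ranking stage — is given below. First differences stand in for the paper's linear predictor and move-to-front ranks stand in for its inversion ranks; the naive rotation sort is only practical for short blocks, and the one-byte residual mapping is an illustrative assumption.

```python
import numpy as np

def bwt(block: bytes):
    """Burrows-Wheeler transform via sorted rotations (fine for short blocks only)."""
    n = len(block)
    order = sorted(range(n), key=lambda i: block[i:] + block[:i])
    last = bytes(block[(i - 1) % n] for i in order)
    return last, order.index(0)               # transformed block + primary index

def move_to_front(seq: bytes):
    """Recency ranks; used here as a stand-in for the paper's inversion ranks."""
    table = list(range(256))
    ranks = []
    for b in seq:
        r = table.index(b)
        ranks.append(r)
        table.insert(0, table.pop(r))
    return ranks

def ecg_block_transform(samples: np.ndarray):
    """First-difference prediction -> zigzag byte mapping -> BWT -> recency ranks."""
    x = samples.astype(np.int64)
    residual = np.diff(x, prepend=x[:1])                      # crude linear-prediction stand-in
    zig = np.where(residual >= 0, 2 * residual, -2 * residual - 1)
    assert zig.max() < 256, "demo assumes residuals fit in one byte"
    last, primary = bwt(zig.astype(np.uint8).tobytes())
    return move_to_front(last), primary                       # rank stream to be entropy coded
```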

11.
The Lempel-Ziv algorithm and message complexity
Data compression has been suggested as a nonparametric way of discriminating between message sources. Compressions obtained from a Lempel-Ziv algorithm for relatively short messages, such as those encountered in practice, are examined. The intuitive notion of message complexity has less connection with compression than one might expect from known asymptotic results about infinite messages. Nevertheless, discrimination by compression remains an interesting possibility.
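The discrimination-by-compression idea can be sketched in a few lines. zlib's DEFLATE (an LZ77 derivative) is used here only as a convenient stand-in for the Lempel-Ziv variant studied in the paper, and the conditional-compression score is a common heuristic rather than the paper's exact procedure.

```python
import zlib

def compressed_len(data: bytes) -> int:
    """Length of the DEFLATE-compressed representation of `data`."""
    return len(zlib.compress(data, 9))

def classify_by_compression(message: bytes, corpora: dict) -> str:
    """Assign `message` to the source whose reference corpus 'explains' it best,
    scoring each source by C(corpus + message) - C(corpus)."""
    scores = {name: compressed_len(ref + message) - compressed_len(ref)
              for name, ref in corpora.items()}
    return min(scores, key=scores.get)

# Hypothetical usage with two reference corpora and one short message to attribute:
# sources = {"english": english_bytes, "dna": dna_bytes}
# print(classify_by_compression(b"the quick brown fox", sources))
```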

12.
We examine how exponential-Golomb codes and subexponential codes can be used for the compression of scan test data in core-based system-on-a-chip (SOC) designs. These codes are well known in the data compression domain, but their application to SOC testing has not been explored before. We show that these codes often provide slightly higher compression than alternative methods that have been proposed recently.
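Since exponential-Golomb codes are simple enough to show in a few lines, here is a minimal order-k encoder applied to a hypothetical list of run lengths; the run-length extraction from the scan data and the choice of k are illustrative assumptions, not details from the paper.

```python
def exp_golomb(n: int, k: int = 0) -> str:
    """Order-k exponential-Golomb codeword for a non-negative integer n."""
    value = n + (1 << k)                  # shift so the binary body carries the group
    body = format(value, "b")             # implicit leading 1 marks the group boundary
    prefix = "0" * (len(body) - k - 1)    # unary-style group prefix
    return prefix + body

# Hypothetical run lengths extracted from a scan-test data stream.
runs = [0, 3, 1, 7, 0, 0, 12]
bitstream = "".join(exp_golomb(r, k=2) for r in runs)
print(bitstream, len(bitstream), "bits")
```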

13.
王瑞芳 《应用激光》2001,21(5):331-332
To investigate the influence of the in-cylinder compression temperature field on the diesel spray process, double-exposure laser holographic interferometry was applied to the coupled process of in-cylinder fuel spray and compression temperature field in a diesel engine. Holographic interference fringes coupling the temperature and concentration fields were obtained, directly visualising the shapes of the spray plume and the temperature field, and a new method for studying in-cylinder processes in diesel engines is thereby proposed.

14.
Performance comparison and analysis of HEVC still-image compression and JPEG 2000
林子明  梁利平 《电视技术》2015,39(13):20-23
JPEG 2000, built on the discrete wavelet transform (DWT), represents the state of the art in still-image compression. HEVC (High Efficiency Video Coding) defines a still-image compression profile, the Main Still Profile, whose intra coding mode is implemented with a number of new algorithms. Extensive comparative experiments show that HEVC-based still-image compression achieves higher compression efficiency than JPEG 2000 and may in future replace JPEG 2000 as the new still-image compression standard.

15.
Current gene-expression microarrays carry enormous amounts of information. Compression is necessary for efficient distribution and storage. This paper examines JPEG2000 compression of cDNA microarray images and addresses the accuracy of classification and feature selection based on decompressed images. Among other options, we choose JPEG2000 because it is the latest international standard for image compression and offers lossy-to-lossless compression while achieving high lossless compression ratios on microarray images. The performance of JPEG2000 has been tested on three real data sets at different compression ratios, ranging from lossless to 45:1. The effects of JPEG2000 compression/decompression on differential expression detection and phenotype classification have been examined. There is less than a 4% change in differential detection at compression rates as high as 20:1, with detection accuracy suffering less than 2% for moderate to high intensity genes, and there is no significant effect on classification at rates as high as 35:1. The supplementary material is available at .

16.
This paper reports the effect of applying delta encoding and Huffman coding together to speech signals of American English and Hindi from the International Phonetic Alphabet database. The speech signals are first delta encoded and then compressed by Huffman coding. It is observed that Huffman coding achieves a higher compression ratio on the delta-encoded speech signals than when Huffman coding alone is applied to the original speech signals.
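A small sketch of the effect described above: first differences concentrate speech samples around zero, and the first-order entropy — which Huffman coding approaches to within one bit per symbol — drops accordingly. The entropy proxy and the 16-bit integer sample assumption are illustrative, not details from the paper.

```python
import numpy as np

def first_order_entropy(x: np.ndarray) -> float:
    """Bits per sample of an ideal memoryless coder; a close proxy for Huffman coding."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def delta_coding_gain(speech: np.ndarray) -> tuple:
    """Coding cost of the raw 16-bit samples vs. their first differences."""
    x = speech.astype(np.int32)
    delta = np.diff(x, prepend=x[:1])           # delta encoding of the waveform
    return first_order_entropy(x), first_order_entropy(delta)
```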

17.
Because the compression ratio of singular value decomposition (SVD)-based image compression is dynamically adjustable, a new method is proposed that adaptively finds the optimal compression ratio during image compression. The concept of an energy difference Q is introduced, and the relationship between Q and the number of retained singular values q is exploited: by choosing a suitable value of Q, the optimal q is found dynamically, thereby achieving adaptive image compression. Compression experiments on several grayscale images show that the method is feasible and has promising application prospects.
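A compact sketch of adaptive rank selection for SVD-based image compression. The paper's exact definition of the energy difference Q is not reproduced here; a cumulative-energy threshold is used as a stand-in for choosing the number of retained singular values q.

```python
import numpy as np

def svd_compress(img: np.ndarray, energy_threshold: float = 0.999):
    """Keep the smallest rank q whose retained singular-value energy reaches the
    threshold, then rebuild the image from the rank-q factors."""
    U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    q = int(np.searchsorted(energy, energy_threshold)) + 1    # adaptive choice of q
    approx = (U[:, :q] * s[:q]) @ Vt[:q]                      # rank-q reconstruction
    stored = q * (img.shape[0] + img.shape[1] + 1)            # numbers kept for rank q
    return approx, q, img.size / stored                       # image, rank, compression ratio
```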

18.
To avoid loss of detail when infrared video is compressed for storage, an implementation scheme for lossless compression of infrared video is proposed. The scheme uses an FPGA as the core controller and the ADV212 as the compression chip; the chip is analysed and a suitable operating mode is selected for lossless compression. By switching the configuration online, the stored video can also be decompressed and played back. The scheme has been used in a self-developed handheld infrared observation instrument with good results.

19.
Image coding techniques based on image inpainting
蒋伟  杨俊杰 《通信技术》2011,44(1):43-44,47
Digital image inpainting is an important branch of image restoration; its goal is to automatically recover lost information from the information still present in an image, and it can be used to restore missing content in old photographs, remove text from video, and conceal video errors. Image compression coding combined with digital inpainting is one of the research hotspots in object-based, second-generation image coding. This paper first introduces the concept of image inpainting, then describes several common inpainting techniques by category; on this basis, the basic processing flow of inpainting-based compression coding is obtained, and finally the compression process of inpainting-based coding is illustrated using a concrete algorithm as an example.

20.
A real-time compression algorithm for radar video data based on the lifting wavelet transform
张增辉  胡卫东  郁文贤 《电子学报》2005,33(10):1910-1913
Radar video data compression involves large data volumes and requires both high fidelity and real-time processing. Existing high-fidelity compression algorithms are relatively complex and cannot meet real-time requirements, while existing hardware implementations, although real-time, offer poor fidelity and make it difficult to adjust the compression ratio. Analysis of measured radar video echo data shows that the data are strongly correlated but with a short correlation length. On this basis, this paper proposes a compression algorithm based on the lifting-scheme 5/3 wavelet transform and simple Golomb-Rice coding; the algorithm has low complexity and high fidelity, and its computational cost is analysed in detail. Experiments on measured data show that a software implementation of the algorithm already meets the real-time compression requirements of radar video data. The algorithm has been successfully applied in a radar sea-state recording system.
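A minimal sketch of the two building blocks the abstract names: one level of the reversible 5/3 lifting wavelet and the bit length of a Golomb-Rice codeword. The border extension, the one-dimensional treatment of a single echo line, and the choice of the Rice parameter k are simplifications for illustration, not details taken from the paper.

```python
import numpy as np

def lifting_53(x: np.ndarray):
    """One level of the reversible integer 5/3 lifting wavelet transform."""
    x = x.astype(np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict step: d[i] = odd[i] - floor((even[i] + even[i+1]) / 2)
    even_right = np.append(even[1:], even[-1])                 # simple border extension
    d = odd - (even[:odd.size] + even_right[:odd.size]) // 2
    # Update step: s[i] = even[i] + floor((d[i-1] + d[i] + 2) / 4)
    d_left = np.insert(d, 0, d[0])
    d_right = np.append(d, d[-1])
    s = even + (d_left[:even.size] + d_right[:even.size] + 2) // 4
    return s, d                                                # low-pass and high-pass bands

def rice_code_len(v: int, k: int) -> int:
    """Bit length of the Golomb-Rice code for a signed wavelet coefficient."""
    u = 2 * v if v >= 0 else -2 * v - 1                        # zigzag map to unsigned
    return (u >> k) + 1 + k                                    # unary quotient + k remainder bits
```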
