20 similar documents found (search time: 0 ms)
1.
Low bit-rate efficient compression for seismic data (total citations: 3; self-citations: 0; by others: 3)
Averbuch A.Z., Meyer R., Stromberg J.-O., Coifman R., Vassiliou A. 《IEEE Transactions on Image Processing》2001,10(12):1801-1814
Some marine seismic data sets exceed 10 Tbytes, and seismic surveys are planned with volumes of around 120 Tbytes. The need to compress these very large seismic data files is imperative. Nevertheless, seismic data are quite different from the typical images used in image processing and multimedia applications. Among the major differences: the dynamic range of the data can exceed 100 dB in theory; the data are very often extensively oscillatory in nature; the x and y directions carry different physical meanings; and a significant amount of coherent noise is often present. Up to now, algorithms used for seismic data compression have typically been based on some form of wavelet or local cosine transform, combined with a uniform or quasi-uniform quantization scheme and, finally, Huffman coding. This family of compression algorithms achieves results acceptable to geophysicists only at low to moderate compression ratios. For higher compression ratios or higher decibel quality, significant compression artifacts are introduced in the reconstructed images, even with high-dimensional transforms. The objective of this paper is to achieve a higher compression ratio than the wavelet/uniform-quantization/Huffman-coding family of schemes, with a comparable level of residual noise; the goal is above 40 dB in the decompressed seismic data sets. Several established compression algorithms are reviewed, and some new ones are introduced. All of these techniques are applied to a representative collection of seismic data sets, and their results are documented in this paper. One of the conclusions is that the adaptive multiscale local cosine transform with different window sizes performs well on all the seismic data sets and outperforms the other methods from the SNR point of view. The described methods cover a wide range of data sets, and each data set has its own best-performing method within this collection. Experiments were performed on four different seismic data sets. Special emphasis was given to achieving faster processing speed, another critical issue examined in the paper. Some of these algorithms are also suitable for multimedia-type compression.
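As a reference point for the quantization stage in the pipeline just described, the sketch below shows uniform quantization of transform coefficients and the SNR-in-dB measure used as the quality target. This is a minimal illustration, not the authors' implementation; the Laplacian test data and the step size are assumptions.

```python
import numpy as np

def uniform_quantize(coeffs, step):
    """Midtread uniform quantizer: map coefficients to integer indices."""
    return np.round(coeffs / step).astype(int)

def dequantize(indices, step):
    return indices * step

def snr_db(original, reconstructed):
    """Signal-to-noise ratio of the reconstruction, in decibels."""
    noise = original - reconstructed
    return 10 * np.log10(np.sum(original**2) / np.sum(noise**2))

# Illustrative use: quantize synthetic "transform coefficients" and check
# whether the reconstruction clears the 40 dB target mentioned above.
rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=1.0, size=10_000)  # heavy-tailed, like transform data
step = 0.02
rec = dequantize(uniform_quantize(coeffs, step), step)
print(f"SNR: {snr_db(coeffs, rec):.1f} dB")
```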
2.
Two techniques are presented for optimising the performance of an adaptive peak-picking algorithm described in an earlier publication. These involve transmitting the last signal peak falling within a tolerance range about the previously transmitted peak, or sending a 'same-again' signal for each peak falling within the tolerance range.
3.
Two new asynchronous data-compression techniques are described. These involve picking out peaks of the signal in one case, and peaks together with certain other important points of slope variation in the second case. Signal reconstruction is effected by joining these points by straight lines.
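A minimal sketch of this idea, keeping points where the slope changes sign and reconstructing by straight-line interpolation. The function names and test signal are illustrative, not the authors' algorithm.

```python
import numpy as np

def pick_peaks(t, x):
    """Keep endpoints plus local extrema (points where the slope changes sign)."""
    keep = [0]
    for i in range(1, len(x) - 1):
        if (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0:  # slope sign change
            keep.append(i)
    keep.append(len(x) - 1)
    return t[keep], x[keep]

def reconstruct(t, t_kept, x_kept):
    """Join the transmitted points by straight lines."""
    return np.interp(t, t_kept, x_kept)

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 13 * t)
t_k, x_k = pick_peaks(t, x)
x_hat = reconstruct(t, t_k, x_k)
print(f"kept {len(x_k)}/{len(x)} samples, max error {np.max(np.abs(x - x_hat)):.3f}")
```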
4.
Arithmetic coding for data compression (total citations: 17; self-citations: 0; by others: 17)
Howard P.G., Vitter J.S. 《Proceedings of the IEEE》1994,82(6):857-865
Arithmetic coding provides an effective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an efficient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible effect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.
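The interval-narrowing mechanism at the heart of arithmetic coding fits in a few lines. The floating-point toy encoder below is for intuition only; a practical coder uses the reduced-precision integer arithmetic (and table lookup) the paper describes, and the fixed probability model here is an assumption.

```python
def arithmetic_encode(symbols, probs):
    """Toy arithmetic encoder: narrow [low, high) once per symbol.

    probs maps each symbol to its probability; cumulative intervals are
    derived from it. Floating point limits this to short messages.
    """
    # Build the cumulative interval [c_lo, c_hi) for each symbol.
    cum, lo = {}, 0.0
    for s, p in probs.items():
        cum[s] = (lo, lo + p)
        lo += p
    low, high = 0.0, 1.0
    for s in symbols:
        span = high - low
        c_lo, c_hi = cum[s]
        low, high = low + span * c_lo, low + span * c_hi
    return (low + high) / 2  # any number in [low, high) identifies the message

code = arithmetic_encode("abba", {"a": 0.6, "b": 0.4})
print(code)
```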
5.
Gavish A., Lempel A. 《IEEE Transactions on Information Theory》1996,42(5):1375-1380
We investigate uniquely decodable match-length functions (MLFs) in conjunction with Lempel-Ziv (1977) type data compression. An MLF of a data string is a function that associates a nonnegative integer with each position of the string. The MLF is used to parse the input string into phrases. The codeword for each phrase consists of a pointer to the beginning of a maximal match consistent with the MLF value at that point. We propose several sliding-window variants of LZ compression employing different MLF strategies. We show that the proposed methods are asymptotically optimal for stationary ergodic sources and that their convergence compares favorably with the LZ1 variant of Wyner and Ziv (see Proc. IEEE, vol.82, no.6, p.872, 1994).
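As a concrete example of an MLF, the classical choice associates with each position the length of the longest match beginning in the sliding window. A minimal sketch of parsing with that MLF, illustrative rather than one of the variants proposed in the paper:

```python
def parse_with_longest_match(data: bytes, window: int = 4096):
    """Parse data into phrases using the longest-match MLF (LZ77-style).

    Each phrase is (offset, length) for a match inside the sliding window,
    or (None, literal_byte) when no match exists at the current position.
    """
    phrases, i = [], 0
    while i < len(data):
        start = max(0, i - window)
        best_len, best_off = 0, 0
        for j in range(start, i):
            l = 0
            while i + l < len(data) and data[j + l] == data[i + l]:
                l += 1            # matches may overlap the current position
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= 1:
            phrases.append((best_off, best_len))
            i += best_len
        else:
            phrases.append((None, data[i]))
            i += 1
    return phrases

print(parse_with_longest_match(b"abcabcabcd"))
```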
6.
Visual data compression for multimedia applications (total citations: 2; self-citations: 0; by others: 2)
Ebrahimi T., Kunt M. 《Proceedings of the IEEE》1998,86(6):1109-1125
The compression of visual information in the framework of multimedia applications is discussed. To this end, major approaches to compress still as well as moving pictures are reviewed. The most important objective in any compression algorithm is that of compression efficiency. High-compression coding of still pictures can be split into three categories: waveform, second-generation, and fractal coding techniques. Each coding approach introduces a different artifact at the target bit rates. The primary objective of most ongoing research in this field is to mask these artifacts as much as possible to the human visual system. Video-compression techniques have to deal with data enriched by one more component, namely, the temporal coordinate. Either compression techniques developed for still images can be generalized for three-dimensional signals (space and time) or a hybrid approach can be defined based on motion compensation. The video compression techniques can then be classified into the following four classes: waveform, object-based, model-based, and fractal coding techniques. This paper provides the reader with a tutorial on major visual data-compression techniques and a list of references for further information on the details of each method.
7.
Seismic-while-drilling services efficiently support drilling decisions. They use the vibrations produced by the drill bit during perforation as a downhole seismic source. The seismic signal is recorded by sensors on the surface and processed to obtain or update an image of the subsurface around the borehole. To improve the characterization of the source, some sensors have also been experimentally installed downhole, on the drill pipes in close proximity to the bit: data logged downhole have given better quality information. Currently, the main drawback of downhole equipment is the absence of a high-bit-rate telemetry system to enable real-time activities. This problem may be addressed either by an offline solution, with memory capacity limited to a few hundred megabytes, or by an online solution with telemetry at a very low bit rate (a few bits per second). However, following the offline approach with standard acquisition parameters, the internal storage memory would fill up in just a few hours at high acquisition rates. With the online solution, on the other hand, only a small portion of the acquired signals (or only alarm information about potentially dangerous events) can be transmitted in real time to the surface using conventional mud-pulse telemetry. We present a lossy data compression algorithm based on a new representation of downhole data in the angle domain, which is suitable for downhole implementation and may be successfully applied to both online and offline solutions. Numerical tests based on real field data achieve compression ratios up to 112:1 without major loss of information. This allows a significant increase in downhole acquisition time and in the real-time information that can be transmitted through mud-pulse telemetry.
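The paper's angle-domain representation is specific to its downhole tooling, but the general idea of resampling a signal recorded uniformly in time onto a uniform grid in rotation angle can be sketched as below. Every name and parameter here is an assumption for illustration, not the authors' method.

```python
import numpy as np

def resample_to_angle_domain(t, signal, angle, n_angle_samples):
    """Resample a time-sampled signal to be uniform in rotation angle.

    t: sample times; signal: values at t; angle: cumulative tool rotation
    (monotonically increasing) at the same times. Grid size and
    interpolation choice are illustrative assumptions.
    """
    uniform_angle = np.linspace(angle[0], angle[-1], n_angle_samples)
    return uniform_angle, np.interp(uniform_angle, angle, signal)

# Toy use: the drill speeds up over time, so equal time steps are unequal
# in angle; resampling makes rotation-locked components periodic again.
t = np.linspace(0, 10, 1000)
angle = 2 * np.pi * (0.5 * t + 0.05 * t**2)        # accelerating rotation
signal = np.sin(4 * angle) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
ua, s_angle = resample_to_angle_domain(t, signal, angle, 1024)
```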
8.
Signal and noise estimation from seismic reflection data using spectral coherence methods (total citations: 1; self-citations: 0; by others: 1)
《Proceedings of the IEEE》1984,72(10):1340-1356
Spectral coherence analysis is unrivaled as a quantitative tool over a range of practical problems in seismic interpretation, data processing, quality assessment for data acquisition, and research. Its great virtue is its ability to supply the detailed error information necessary for a thorough interpretation of results. Ordinary coherence analysis is employed in line intersection analysis and the design of filters to cross-equalize differently acquired seismic sections in a given area; both ordinary and partial coherence methods are indispensable in matching synthetic seismograms and seismic data; and multiple coherences are used to estimate the coherent signal and incoherent noise content of seismic sections and gathers. The precise meaning of the signal and noise estimates output by coherence analysis has to be related to the particular technique employed and the type of data input to it. The principles and procedures for analyzing seismic data with these methods are reviewed and illustrated with practical examples.
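For readers unfamiliar with the quantity being estimated: the ordinary (magnitude-squared) coherence of two traces x and y is gamma^2(f) = |Sxy(f)|^2 / (Sxx(f) Syy(f)), which approaches 1 where the traces are linearly related and falls toward 0 where incoherent noise dominates. A minimal sketch using SciPy's standard estimator, illustrative rather than the paper's procedure:

```python
import numpy as np
from scipy.signal import coherence

fs = 500.0                       # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
common = np.sin(2 * np.pi * 30 * t)             # shared "signal" component
x = common + 0.5 * rng.standard_normal(t.size)  # trace 1: signal + noise
y = common + 0.5 * rng.standard_normal(t.size)  # trace 2: same signal, new noise
f, Cxy = coherence(x, y, fs=fs, nperseg=512)    # Welch-based estimate
print(f"coherence near 30 Hz: {Cxy[np.argmin(np.abs(f - 30))]:.2f}")
```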
9.
Data compression techniques are classified into four categories which describe the effect a compression method has on the form of the signal transmitted. Each category is defined and examples of techniques in each category are given. Compression methods which have received previous investigation, such as the geometric aperture methods, as well as techniques which have not received much attention, such as the Fourier filter, optimum discrete filter, and variable-sampling-rate compression, are described. Results of computer simulations with real data are presented for each method in terms of rms and peak errors versus compression ratio. It is shown that, in general, the geometric aperture techniques give results comparable to or better than the more "exotic" methods and are more economical to implement at the present state of the art. In addition, the aperture compression methods provide bounded peak error, which is not readily obtainable with other methods. A general system design is given for a stored-logic data compression system with adaptive buffer control to prevent loss of data and to provide efficient transmission of multiplexed channels with compressed data. An adaptive buffer design is described which is shown to be practical, based on computer simulations with five different types of representative data.
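As one concrete member of the geometric aperture family, the sketch below implements a zero-order (floating-aperture) compressor, whose peak reconstruction error is bounded by the aperture width as noted above. The parameter values and test signal are assumptions for illustration.

```python
import numpy as np

def zero_order_aperture(x, aperture):
    """Transmit a sample only when it leaves the tolerance band around the
    last transmitted value; peak error is bounded by `aperture`."""
    kept_idx, kept_val = [0], [x[0]]
    for i in range(1, len(x)):
        if abs(x[i] - kept_val[-1]) > aperture:
            kept_idx.append(i)
            kept_val.append(x[i])
    return np.array(kept_idx), np.array(kept_val)

def reconstruct_zoh(n, kept_idx, kept_val):
    """Zero-order-hold reconstruction between transmitted samples."""
    out = np.empty(n)
    for (i0, v), i1 in zip(zip(kept_idx, kept_val), list(kept_idx[1:]) + [n]):
        out[i0:i1] = v
    return out

x = np.cumsum(np.random.default_rng(3).standard_normal(2000)) * 0.1
idx, val = zero_order_aperture(x, aperture=0.5)
x_hat = reconstruct_zoh(len(x), idx, val)
print(f"compression ratio {len(x)/len(idx):.1f}:1, "
      f"peak error {np.max(np.abs(x - x_hat)):.2f}")   # stays below 0.5
```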
10.
《IEEE Transactions on Information Theory》1982,28(3):443-457
The optimal data compression problem is posed in terms of an alphabet constraint rather than an entropy constraint. Solving the optimal alphabet-constrained data compression problem yields explicit source encoder/decoder designs, which is in sharp contrast to other approaches. The alphabet-constrained approach is shown to have the additional advantages that (1) classical waveform encoding schemes, such as pulse code modulation (PCM), differential pulse code modulation (DPCM), and delta modulation (DM), as well as rate-distortion-theory-motivated tree/trellis coders, fit within this theory; (2) the concept of preposterior analysis in data compression is introduced, yielding a rich new class of coders; and (3) it provides a conceptual framework for the design of joint source/channel coders for noisy channel applications. Examples are presented of single-path differential encoding, delayed (or tree) encoding, preposterior analysis, and source coding over noisy channels.
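A minimal sketch of the DPCM loop mentioned above, with the predictor driven by reconstructed samples so that encoder and decoder stay synchronized. The first-order predictor and step size are illustrative choices, not the paper's coder designs.

```python
import numpy as np

def dpcm_encode(x, step):
    """DPCM with a first-order predictor (the previous reconstructed sample)."""
    indices, pred = [], 0.0
    for sample in x:
        e = sample - pred                    # prediction error
        q = int(round(e / step))             # quantize the error
        indices.append(q)
        pred = pred + q * step               # mirror the decoder's state
    return indices

def dpcm_decode(indices, step):
    out, pred = [], 0.0
    for q in indices:
        pred = pred + q * step
        out.append(pred)
    return np.array(out)

x = np.sin(np.linspace(0, 4 * np.pi, 200))
codes = dpcm_encode(x, step=0.05)
x_hat = dpcm_decode(codes, step=0.05)
print(f"max error {np.max(np.abs(x - x_hat)):.3f}")   # bounded by step / 2
```

Because the encoder predicts from reconstructed rather than original samples, quantization errors do not accumulate at the decoder.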
11.
In this article we provide an overview of rate-distortion (R-D) based optimization techniques and their practical application to image and video coding. We begin with a short discussion of classical rate-distortion theory and then show how, in many practical coding scenarios such as standards-compliant coding environments, resource allocation can be put in an R-D framework. We then introduce two popular techniques for resource allocation, namely Lagrangian optimization and dynamic programming. After a discussion of these techniques as well as some of their extensions, we conclude with a quick review of the literature in these areas, citing a number of applications related to image and video compression and transmission.
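A minimal sketch of the Lagrangian technique: each block independently selects the operating point minimizing D + lambda*R, and lambda is swept to trade rate against distortion. The operating points below are invented for illustration.

```python
def lagrangian_allocate(blocks, lam):
    """For each block, pick the (rate, distortion) point minimizing D + lam*R.

    blocks: list of lists of (rate, distortion) operating points,
    e.g. one list of quantizer choices per block.
    """
    choices = [min(pts, key=lambda rd: rd[1] + lam * rd[0]) for pts in blocks]
    total_rate = sum(r for r, _ in choices)
    total_dist = sum(d for _, d in choices)
    return choices, total_rate, total_dist

# Hypothetical operating points for two blocks: (bits, distortion).
blocks = [
    [(1, 9.0), (2, 4.0), (4, 1.0), (8, 0.1)],
    [(1, 2.0), (2, 1.0), (4, 0.3), (8, 0.05)],
]
for lam in (0.1, 1.0, 10.0):    # sweep lambda to meet a rate budget
    _, R, D = lagrangian_allocate(blocks, lam)
    print(f"lambda={lam:>4}: R={R} bits, D={D:.2f}")
```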
12.
Outlines an eigen-structure algorithm for passive seismic array data. The approach combines broadband multicomponent and narrowband multi-sensor eigen-structure routines. Synthetic data is used to demonstrate the algorithm's ability to resolve multiple signals in both bearing and wavenumber. Analysis of three-component seismic array data is also shown.
13.
A universal algorithm for sequential data compression (total citations: 12; self-citations: 0; by others: 12)
《IEEE Transactions on Information Theory》1977,23(3):337-343
A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.
14.
Nijim Y.W., Stearns S.D., Mikhael W.B. 《IEEE Transactions on Geoscience and Remote Sensing》1996,34(1):52-56
For some classes of signals, particularly those dominated by low-frequency components such as seismic data, first and higher-order differences between adjacent signal samples are generally smaller than the samples themselves. In this paper, an evaluation of the differencing approach for losslessly compressing several classes of seismic signals is given. Three different approaches employing derivatives are developed and applied. The performance of the presented techniques and of the adaptive linear predictor is evaluated and compared for the lossless compression of different seismic signal classes. The proposed differentiator approach yields residual energy comparable to that obtained with the linear predictor technique. The two main advantages of the differentiation method are: (1) the coefficients are fixed integers which do not have to be encoded; and (2) greatly reduced computational complexity relative to existing algorithms. These advantages are particularly attractive for real-time processing and have been confirmed experimentally by compressing different seismic signals. Sample results are also given, including the compression ratio, i.e., the ratio of the number of bits per sample without compression to that with compression using arithmetically encoded residues.
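A minimal sketch of the differencing idea, with fixed integer coefficients and exact inversion by cumulative sums, illustrating the general technique rather than the authors' encoder:

```python
import numpy as np

def forward_differences(x, order):
    """Apply `order` rounds of first differencing to an integer signal."""
    d = np.asarray(x, dtype=np.int64)
    heads = []                       # initial values needed to invert exactly
    for _ in range(order):
        heads.append(int(d[0]))
        d = np.diff(d)
    return heads, d

def invert_differences(heads, d):
    """Exactly reconstruct the original samples (lossless)."""
    for h in reversed(heads):
        d = np.concatenate(([h], h + np.cumsum(d)))
    return d

x = (1000 * np.sin(np.linspace(0, 6, 500))).astype(np.int64)  # low-frequency data
heads, resid = forward_differences(x, order=2)
assert np.array_equal(invert_differences(heads, resid), x)     # perfectly lossless
print(f"residue std {resid.std():.1f} vs signal std {x.std():.1f}")
```

The shrunken residues are what would then be entropy coded (e.g. arithmetically, as in the paper's reported results).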
15.
Reconstruction algorithms: Transform methods (total citations: 6; self-citations: 0; by others: 6)
Transform methods for image reconstruction from projections are based on analytic inversion formulas. In this tutorial paper, the inversion formula for the case of two-dimensional (2-D) reconstruction from line integrals is manipulated into a number of different forms, each of which may be discretized to obtain different algorithms for reconstruction from sampled data. For the convolution-backprojection algorithm and the direct Fourier algorithm the emphasis is placed on understanding the relationship between the discrete operations specified by the algorithm and the functional operations expressed by the inversion formula. The performance of the Fourier algorithm may be improved, with negligible extra computation, by interleaving two polar sampling grids in Fourier space. The convolution-backprojection formulas are adapted for the fan-beam geometry, and other reconstruction methods are summarized, including the rho-filtered layergram method, and methods involving expansions in angular harmonics. A standard mathematical process leads to a known formula for iterative reconstruction from projections at a finite number of angles. A new iterative reconstruction algorithm is obtained from this formula by introducing one-dimensional (1-D) and 2-D interpolating functions, applied to sampled projections and images, respectively. These interpolating functions are derived by the same Fourier approach which aids in the development and understanding of the more conventional transform methods.
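A minimal sketch of the convolution-backprojection discretization for parallel-beam data: ramp-filter each projection in Fourier space, then smear it back across the image. Grid handling, nearest-neighbour interpolation, and normalization are simplified assumptions, not the paper's formulations.

```python
import numpy as np

def filtered_backprojection(sinogram, angles_deg):
    """Ramp-filter each projection in Fourier space, then backproject.

    sinogram: array of shape (num_angles, num_detectors), parallel-beam.
    Returns a (num_detectors x num_detectors) reconstruction.
    """
    n_ang, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))             # |omega| ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    xs = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Backproject along t = x cos(theta) + y sin(theta), nearest neighbour.
        t = np.clip(np.round(X * np.cos(theta) + Y * np.sin(theta)
                             + n_det / 2).astype(int), 0, n_det - 1)
        img += proj[t]
    return img * np.pi / n_ang

# Smoke test: backproject a one-point sinogram (point at the center).
angles = np.linspace(0, 180, 90, endpoint=False)
sino = np.zeros((90, 64)); sino[:, 32] = 1.0
img = filtered_backprojection(sino, angles)
print(img.shape, float(img.max()))
```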
16.
This correspondence presents entropy analyses for dithered and undithered quantized sources. Two methods are discussed that reduce the increase in entropy caused by the dither. The first method supplies the dither to the lossless encoding-decoding scheme. It is argued that this increases the complexity of the encoding-decoding scheme. A method to reduce this complexity is proposed. The second method is the usage of a dead-zone quantizer. A procedure for determining the optimal dead-zone width in the mean-square sense is given.
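A minimal sketch of a dead-zone quantizer, i.e. a uniform quantizer with a widened zero bin. The dead-zone width below is a free choice for illustration, not the optimal mean-square value the correspondence derives.

```python
import numpy as np

def deadzone_quantize(x, step, dead_zone):
    """Quantize with a widened zero bin: values inside +/- dead_zone/2 map to
    index 0; outside it, uniform bins of width `step`."""
    x = np.asarray(x, dtype=float)
    mag = np.maximum(np.abs(x) - dead_zone / 2, 0.0)
    return (np.sign(x) * np.ceil(mag / step)).astype(int)

def deadzone_dequantize(idx, step, dead_zone):
    """Reconstruct at the middle of each nonzero bin."""
    idx = np.asarray(idx)
    mag = np.where(idx == 0, 0.0, dead_zone / 2 + (np.abs(idx) - 0.5) * step)
    return np.sign(idx) * mag

x = np.random.default_rng(4).laplace(size=10_000)
q = deadzone_quantize(x, step=0.5, dead_zone=0.8)
x_hat = deadzone_dequantize(q, step=0.5, dead_zone=0.8)
print(f"zero fraction {np.mean(q == 0):.2f}, mse {np.mean((x - x_hat)**2):.4f}")
```

Widening the zero bin increases the fraction of zero indices, which lowers the entropy of the quantizer output at a modest distortion cost.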
17.
EEG data compression techniques (total citations: 4; self-citations: 0; by others: 4)
Electroencephalograph (EEG) and Holter EEG data compression techniques which allow perfect reconstruction of the recorded waveform from the compressed one are presented and discussed. Data compression permits significant reductions in the space required to store signals and in transmission time. The Huffman coding technique in conjunction with derivative computation reaches high compression ratios (on average 49% on Holter and 58% on EEG signals) with low computational complexity. Exploiting this result, a simple and fast encoder/decoder scheme capable of real-time performance on a PC was implemented. This simple technique is compared with other predictive transformations, vector quantization, the discrete cosine transform (DCT), and repetition-count compression methods. Finally, it is shown that adopting a collapsed Huffman tree for the encoding/decoding operations allows one to choose the maximum codeword length without significantly affecting the compression ratio. Therefore, low-cost commercial microcontrollers and storage devices can be effectively used to store long Holter EEGs in a compressed format.
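A minimal sketch of the derivative-plus-Huffman pipeline described above: take first differences, then build a Huffman code over the residue alphabet. This shows the generic technique with illustrative test data, not the paper's tuned encoder or its collapsed-tree variant.

```python
import heapq
from collections import Counter
import numpy as np

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from observed frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                         # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (count, tiebreak id, partial code table).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, next_id, merged))
        next_id += 1
    return heap[0][2]

x = (100 * np.sin(np.linspace(0, 20, 5000))).astype(int)   # stand-in "EEG"
resid = np.diff(x)                                          # derivative step
code = huffman_code(resid.tolist())
bits = sum(len(code[s]) for s in resid.tolist())
print(f"{bits / len(resid):.2f} bits/sample vs raw {x.itemsize * 8}")
```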
18.
Migration of seismic data (total citations: 2; self-citations: 0; by others: 2)
《Proceedings of the IEEE》1984,72(10):1302-1315
Reflection seismology seeks to determine the structure of the earth from seismic records obtained at the surface. The processing of these data by digital computers is aimed at rendering them more comprehensible geologically. Seismic migration is one of these processes. Its purpose is to "migrate" the recorded events to their correct spatial positions by backward projection or depropagation based on wave theoretical considerations. During the last 15 years several methods have appeared on the scene. The purpose of this paper is to provide an overview of the major advances in this field. Migration methods examined here fall in three major categories: 1) integral solutions, 2) depth extrapolation methods, and 3) time extrapolation methods. Within these categories, the pertinent equations and numerical techniques are discussed in some detail. The topic of migration before stacking is treated separately with an outline of two different approaches to this important problem.
19.
《Electronics & Communication Engineering Journal》1995,7(1):5-10
Conventional digital audio, as used for instance on compact discs, requires a large bandwidth for transmission and enormous amounts of storage space. Developments in high-speed digital signal processing chips have made it practical to reduce these requirements by employing sophisticated data compression techniques which reduce redundant and irrelevant information in the source signal. The paper reviews the types of digital audio data compression coders that are now available, their applications and the principles involved.
20.
董锐 《信息安全与通信保密》(Information Security and Communications Privacy) 2005,(4):88
Confidential data exists within a restricted scope and must not be accessed without authorization; such data bears directly on the core secrets of governments, enterprises, and research institutions, and can even threaten national security. Classified by data characteristics, confidential data typically includes: classified documents, electronic drawings, financial records, patented technology, and operating statistics. Its main forms include: office-type electronic documents, commonly in formats such as DOC, XLS, MDB, WPS, and PDF; and graphics files, which are usually saved in special formats and generally must be opened with specific applications. Confidential data is mainly found in government departments handling classified matters, research institutions, manufacturing enterprises, and finance departments. The network environments in which such confidential data resides typically have the following characteristics: …