Similar Documents
20 similar documents found (search time: 15 ms).
1.
Experiments with wavelets for compression of SAR data (total citations: 3; self-citations: 0; by others: 3)
Wavelet transform coding is shown to be an effective method for compression of both detected and complex synthetic aperture radar (SAR) imagery. Three different orthogonal wavelet transform filters are examined for use in SAR data compression; the performances of the filters are correlated with mathematical properties such as regularity and number of vanishing moments. Excellent-quality reconstructions are obtained at data rates as low as 0.25 bpp for detected data and as low as 0.5 bits per element (or 2 bpp) for complex data.
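A minimal sketch of the transform-threshold-reconstruct pipeline behind such a coder, assuming the PyWavelets library is available; the 'db4' filter, decomposition depth, and keep-fraction quantizer are illustrative stand-ins for the paper's rate-controlled coder:

```python
import numpy as np
import pywt

def wavelet_compress(img, wavelet="db4", level=3, keep=0.05):
    """Keep only the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)       # flatten the subband tree
    cutoff = np.quantile(np.abs(arr), 1.0 - keep)    # magnitude threshold
    arr[np.abs(arr) < cutoff] = 0.0                  # discard small coefficients
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

img = np.random.rand(256, 256)                       # stand-in for detected SAR data
rec = wavelet_compress(img)[:256, :256]
print("reconstruction MSE:", np.mean((img - rec) ** 2))
```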

2.
The authors propose a new wavelet image coding technique for synthetic aperture radar (SAR) data compression called progressive space-frequency quantization (PSFQ). PSFQ performs spatial quantization via rate-distortion-optimized zerotree pruning of wavelet coefficients, which are coded using a progressive subband coding technique. The authors compare the zerotree-based methods EZW, SPIHT, SFQ, and PSFQ with the classical wavelet-based method (CWM), which uses uniform scalar quantization of subbands followed by recency-rank coding. The zerotree-based methods outperform the CWM in the rate-distortion sense, and the embedded coding techniques yield better SNR results than the methods using scalar quantization. However, the probability density function (PDF) of the amplitude of SAR data reconstructed after CWM compression corresponds more closely to the PDF of the original data than does the PDF of data reconstructed with the zerotree-based methods. The amplitude PDF of the data reconstructed with the PSFQ compression algorithm corresponds more closely to the original PDF than the amplitude PDF of the data obtained using the multilook method.

3.
A comparison of several algorithms for SAR raw data compression (total citations: 16; self-citations: 0; by others: 16)
Proposes new algorithms for synthetic aperture radar (SAR) raw data compression and compares the resulting image quality with the quality achieved by commonly used methods. The compression is carried out in the time and frequency domains, with statistical, crisp, and fuzzy methods. The algorithms in the time domain lead to high resolution and a good signal-to-noise ratio, but they do not optimize the performance of the compression according to the frequency envelope of the signal power in the range and azimuth directions. The hardware requirements for the compression methods in the frequency domain are significant, but higher performance is obtained. Even at a data rate of 3 bits/sample, satisfactory phase accuracy is achieved, which is an essential parameter for polarimetric and interferometric applications. A preliminary analysis of the suitability of the proposed algorithms for different SAR applications shows that the compression ratio should be selected adaptively according to the specific application.
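As a concrete illustration of time-domain raw-data compression, here is a minimal block-adaptive quantization (BAQ) sketch; the block size, bit depth, and ±3-sigma loading factor are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def baq(raw, block=256, bits=3):
    """Quantize raw samples block by block, scaling by each block's sigma."""
    levels = 2 ** bits
    out = np.empty_like(raw, dtype=np.float64)
    for start in range(0, len(raw), block):
        seg = raw[start:start + block]
        sigma = seg.std() + 1e-12                    # per-block gain estimate
        code = np.clip(np.round(seg / sigma * levels / 6 + levels / 2),
                       0, levels - 1)                # integer codes over ~±3 sigma
        out[start:start + block] = (code - levels / 2) * 6 / levels * sigma
    return out

raw = np.random.randn(4096)                          # stand-in for raw SAR echoes
rec = baq(raw)
print("SQNR [dB]:", 10 * np.log10(raw.var() / np.mean((raw - rec) ** 2)))
```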

4.
《信息技术》 (Information Technology), 2015, (12): 138-142
The polar format algorithm (PFA) is the classical imaging algorithm for traditional spotlight SAR, but when applied to squinted spotlight imaging it suffers from the enormous computational cost of two-dimensional frequency-domain interpolation, from interpolation accuracy limited by the length of the interpolation kernel, and from degraded image quality caused by the final image-rotation step. To address these problems, this paper proposes a fast PFA based on scaling transformations, which avoids the image rotation after squinted imaging and offers high computational efficiency. The fast PFA requires only FFTs and complex multiplications; compared with the direct-interpolation PFA, its computational load is reduced to 30%-50% of the original. Simulation experiments verify the effectiveness of the proposed method.
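The claim that only FFTs and complex multiplications are required can be made concrete with a Bluestein-style chirp-z transform, which evaluates a scaled DFT (the kind of resampling a scaling-based PFA relies on) as three FFTs plus pointwise complex products. This is a generic sketch of the technique, not the paper's algorithm; the zoom factor is arbitrary:

```python
import numpy as np

def scaled_dft(x, zoom=1.25):
    """DFT of x sampled on a frequency grid scaled by `zoom` (Bluestein)."""
    n = len(x)
    k = np.arange(n)
    w = np.exp(-1j * np.pi * zoom * k ** 2 / n)        # quadratic chirp
    m = 1 << int(np.ceil(np.log2(2 * n - 1)))          # FFT size for linear conv
    a = np.zeros(m, complex)
    a[:n] = x * w                                      # pre-multiply by chirp
    b = np.zeros(m, complex)
    b[:n] = np.conj(w)
    b[-(n - 1):] = np.conj(w[1:][::-1])                # chirp kernel, wrapped
    y = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))[:n] # convolution via FFT
    return y * w                                       # post-multiply by chirp

x = np.random.randn(128) + 1j * np.random.randn(128)
print(np.allclose(scaled_dft(x, zoom=1.0), np.fft.fft(x)))  # True: reduces to DFT
```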

5.
SAR imaging research based on the intrinsic relationship between stripmap SAR and spotlight SAR (total citations: 1; self-citations: 0; by others: 1)
Starting from the azimuth spectrum structure of stripmap synthetic aperture radar (stripmap SAR) and spotlight synthetic aperture radar (spotlight SAR), the differences and connections between the two SAR modes are discussed. Exploiting their intrinsic relationship, a method is proposed that, under squint conditions, divides stripmap SAR data into blocks and applies spotlight-mode processing to each block; the selection of the size of the spotlight imaging region is also analyzed. For an imaging region of the same size, spotlight-mode processing of stripmap SAR data reduces the computational load. A spatial-frequency interpolation imaging algorithm is adopted to unify the stripmap and spotlight imaging algorithms. Finally, imaging is performed on measured field data, and the results confirm the correctness of the theoretical analysis.

6.
A general polarimetric model for orbital and Earth synthetic aperture radar (SAR) systems that explicitly includes key radar architecture elements and does not depend on the reciprocity assumption is developed. The model covers systems whose receiving configuration is independent of the transmitted polarization (one configuration), as well as systems with two distinct receiving configurations depending on the commanded transmitted polarization (H or V). Parameters that are independent of the target illumination angle and those that depend on it are considered separately, allowing calibration approaches that are valid for targets at different illumination angles. The calibration methods presented make use of the model's linearity to provide tests of the radar model's accuracy and of SAR data quality. X-band polarimetric SAR data are used to validate the theory and illustrate the calibration approach. The extension of the model and calibration method to other radar systems is discussed.

7.
This paper investigates a novel approach based on the deramping technique for squinted sliding-spotlight Synthetic Aperture Radar (SAR) imaging to resolve the azimuth spectrum aliasing problem. First, the properties of the azimuth spectrum and the impact of the squint angle on azimuth spectrum aliasing are analyzed. Based on this analysis, a filtering operation is added to the azimuth preprocessing step of the traditional Two-Step Focusing Approach (TSPA) to resolve the azimuth folding problem and remove the influence of the squint angle on azimuth spectrum aliasing. A modified Range Migration Algorithm (RMA) is then applied to obtain a precisely focused image. Furthermore, the folding problem that the traditional TSPA introduces in the focused SAR image is examined, and an azimuth post-processing step is proposed to unfold the aliased image. Simulation results demonstrate that the proposed approach solves the spectrum aliasing problem and processes squinted sliding-spotlight data efficiently.
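A minimal sketch of the deramping operation the approach builds on: multiplying the azimuth signal by a conjugate chirp removes the Doppler-rate sweep so that a short FFT covers the spectrum without aliasing. The PRF and Doppler rate below are illustrative, not taken from the paper:

```python
import numpy as np

prf, ka, n = 2000.0, 1500.0, 4096          # PRF [Hz], Doppler rate [Hz/s], pulses
t = (np.arange(n) - n / 2) / prf           # slow-time axis
signal = np.exp(1j * np.pi * ka * t ** 2)  # stand-in for a spotlight azimuth chirp
deramped = signal * np.exp(-1j * np.pi * ka * t ** 2)   # conjugate-chirp multiply
spectrum = np.fft.fftshift(np.fft.fft(deramped))
# after deramping, the energy collapses to a narrow band around zero Doppler
print("peak offset [bins]:", np.argmax(np.abs(spectrum)) - n // 2)   # 0
```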

8.
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 can also compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not prescribe any specific method for allocating bits among the separate slices. This paper proposes two new bit allocation algorithms for this task. The first procedure is rate-distortion optimal (for mean squared error) and is conceptually similar to the post-compression rate-distortion optimization used for coding codeblocks within JPEG 2000; its disadvantage is high computational complexity. The second algorithm, here called the mixed-model (MM) approach, mathematically models each slice's rate-distortion curve using two distinct regions to obtain more accurate modeling at low bit rates. Both algorithms are applied to a 3-D meteorological data set. Test results show that the MM approach gives distortion results nearly identical to those of the optimal approach while significantly reducing computational complexity.
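A minimal sketch of the allocation problem being solved, assuming each slice follows the standard high-rate model D_i(R_i) = sigma_i^2 * 2^(-2 R_i); a greedy marginal-return loop approximates the equal-slope Lagrangian optimum. The variances and step size are made up for illustration, and this is not the paper's MM algorithm:

```python
import numpy as np

def allocate(variances, total_bits, step=0.01):
    """Spend `total_bits` in `step` increments where distortion drops most."""
    rates = np.zeros(len(variances))
    for _ in range(int(total_bits / step)):
        drop = variances * 2.0 ** (-2 * rates) * (1 - 2.0 ** (-2 * step))
        rates[np.argmax(drop)] += step      # greedy: best marginal return
    return rates

var = np.array([4.0, 1.0, 0.25])            # per-slice variances (illustrative)
print(np.round(allocate(var, total_bits=6.0), 2))  # more bits to busier slices
```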

9.
While serial concatenated codes were designed to provide good overall performance with reasonable system complexity, they may arise naturally in certain cases, such as the interface between two networks. In this work we consider the problem of constrained rate allocation between nonsystematic block codes in a serial concatenated coding system with either ideal or no interleaving between the codes. Given constraints on system parameters, such as a limit on the overall rate, analytic guidelines for the selection of good inner code rates are found by using an upper bound on the average system block error rate.

10.
Spatial compression of Seasat SAR imagery (total citations: 2; self-citations: 0; by others: 2)
The results of a study of the techniques for spatial compression of synthetic-aperture-radar (SAR) imagery are summarized. Emphasis is on image-data volume reduction for archive and online storage applications while preserving the image resolution and radiometric fidelity. A quantitative analysis of various techniques, including vector quantization (VQ) and adaptive discrete cosine transform (ADCT), is presented. Various factors such as compression ratio, algorithm complexity, and image quality are considered in determining the optimal algorithm. The compression system requirements are established for electronic access of an online archive system based on the results of a survey of the science community. The various algorithms are presented and their results evaluated considering the effects of speckle noise and the wide dynamic range inherent in SAR imagery.
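Of the techniques surveyed, vector quantization is the simplest to sketch: image blocks are mapped to the nearest entry of a codebook trained with plain k-means. Block size, codebook size, and the random stand-in data are illustrative assumptions:

```python
import numpy as np

def train_codebook(vectors, k=64, iters=20, seed=1):
    """Plain k-means codebook training (generalized Lloyd iteration)."""
    rng = np.random.default_rng(seed)
    book = vectors[rng.choice(len(vectors), k, replace=False)].copy()
    for _ in range(iters):
        idx = ((vectors[:, None] - book[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):                  # move each codeword to its cell mean
            if np.any(idx == j):
                book[j] = vectors[idx == j].mean(0)
    return book

img = np.random.rand(128, 128)              # stand-in for a SAR amplitude image
blocks = img.reshape(32, 4, 32, 4).swapaxes(1, 2).reshape(-1, 16)  # 4x4 blocks
book = train_codebook(blocks)
idx = ((blocks[:, None] - book[None]) ** 2).sum(-1).argmin(1)
print("0.375 bpp (6 bits / 16 pixels), MSE:", np.mean((blocks - book[idx]) ** 2))
```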

11.
We investigate uniquely decodable match-length functions (MLFs) in conjunction with Lempel-Ziv (1977) type data compression. An MLF of a data string is a function that associates a nonnegative integer with each position of the string. The MLF is used to parse the input string into phrases. The codeword for each phrase consists of a pointer to the beginning of a maximal match consistent with the MLF value at that point. We propose several sliding-window variants of LZ compression employing different MLF strategies. We show that the proposed methods are asymptotically optimal for stationary ergodic sources and that their convergence compares favorably with that of the LZ1 variant of Wyner and Ziv (see Proc. IEEE, vol. 82, no. 6, p. 872, 1994).
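For orientation, here is a minimal sliding-window LZ77 parser using the plainest match-length function, the longest match in the window; the paper's uniquely decodable MLF variants amount to replacing the `longest_match` rule. Window size and the literal threshold are illustrative:

```python
def longest_match(data, pos, window):
    """Return (offset, length) of the longest match starting before `pos`."""
    best_len, best_off = 0, 0
    for cand in range(max(0, pos - window), pos):
        length = 0
        while (pos + length < len(data)
               and data[cand + length] == data[pos + length]):
            length += 1
        if length > best_len:
            best_len, best_off = length, pos - cand
    return best_off, best_len

def lz77_parse(data, window=4096, min_len=2):
    phrases, pos = [], 0
    while pos < len(data):
        off, length = longest_match(data, pos, window)
        if length < min_len:                     # too short: emit a literal
            phrases.append((0, 0, data[pos]))
            pos += 1
        else:                                    # emit an (offset, length) pointer
            phrases.append((off, length, None))
            pos += length
    return phrases

print(lz77_parse(b"abracadabra abracadabra"))
```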

12.
JPEG compression history estimation for color images (total citations: 5; self-citations: 0; by others: 5)
We routinely encounter digital color images that were previously compressed using the Joint Photographic Experts Group (JPEG) standard. En route to the image's current representation, the previous JPEG compression's various settings, termed its JPEG compression history (CH), are often discarded after the JPEG decompression step. Given a JPEG-decompressed color image, this paper aims to estimate its lost JPEG CH. We observe that the previous JPEG compression's quantization step introduces a lattice structure in the discrete cosine transform (DCT) domain. This paper proposes two approaches that exploit this structure to solve the JPEG compression history estimation (CHEst) problem. First, we design a statistical dictionary-based CHEst algorithm that tests the various CHs in a dictionary and selects the maximum a posteriori estimate. Second, for cases where the DCT coefficients closely conform to a 3-D parallelepiped lattice, we design a blind lattice-based CHEst algorithm. The blind algorithm exploits the fact that the JPEG CH is encoded in the nearly orthogonal bases of the 3-D lattice and employs novel lattice algorithms and recent results on nearly orthogonal lattice bases to estimate the CH. Both algorithms provide robust JPEG CHEst performance in practice. Simulations demonstrate that JPEG CHEst can be useful in JPEG recompression: the estimated CH allows us to recompress a JPEG-decompressed image with minimal distortion (large signal-to-noise ratio) while simultaneously achieving a small file size.
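A toy version of the lattice observation: after JPEG quantization, every DCT coefficient lies near a multiple of its (unknown) step q, so q can be recovered from the decompressed pixels. The estimator below scores candidate steps with a comb statistic on a single frequency; the paper's dictionary-based and lattice-basis methods handle the full 3-D color case. The frequency indices and candidate range are assumptions:

```python
import numpy as np
from scipy.fftpack import dctn, idctn

def estimate_step(image, u=1, v=2, candidates=range(2, 41)):
    """Estimate the quantization step of DCT frequency (u, v) from pixels."""
    h, w = (s - s % 8 for s in image.shape)
    blocks = image[:h, :w].reshape(h // 8, 8, w // 8, 8)
    coeff = dctn(blocks, axes=(1, 3), norm="ortho")[:, u, :, v].ravel()
    # comb score: 1.0 when all coefficients sit exactly on multiples of q
    score = [abs(np.mean(np.exp(2j * np.pi * coeff / q))) for q in candidates]
    best = max(score)
    return max(q for q, s in zip(candidates, score) if s >= 0.9 * best)

# pretend-JPEG roundtrip: quantize the DCT of a random image with step 12
img = np.random.default_rng(0).normal(size=(256, 256)) * 50
c = dctn(img.reshape(32, 8, 32, 8), axes=(1, 3), norm="ortho")
c = 12 * np.round(c / 12)
img_q = idctn(c, axes=(1, 3), norm="ortho").reshape(256, 256)
print(estimate_step(img_q))                       # -> 12
```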

13.
Gnavi, S.; Grangetto, A.; Magli, E.; Olmo, G. Electronics Letters, 2002, 38(20): 1171-1172
A novel rate allocation algorithm for video transmission over lossy networks subject to bursty packet losses is presented. A Gilbert-Elliott model is used at the encoder to drive the selection of coding parameters. Experimental results using the H.26L test model show a significant performance improvement with respect to the assumption of independent packet losses.
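A minimal sketch of the Gilbert-Elliott channel that drives the encoder: a two-state Markov chain whose "bad" state drops packets with high probability, producing loss bursts. The transition and loss probabilities are illustrative:

```python
import numpy as np

def gilbert_elliott(n, p_gb=0.02, p_bg=0.30, loss_good=0.01, loss_bad=0.7, seed=0):
    """Simulate n packet slots; True marks a lost packet."""
    rng = np.random.default_rng(seed)
    losses, bad = np.zeros(n, bool), False
    for i in range(n):
        bad = rng.random() < (1 - p_bg if bad else p_gb)   # state transition
        losses[i] = rng.random() < (loss_bad if bad else loss_good)
    return losses

trace = gilbert_elliott(10000)
print("loss rate:", trace.mean())   # losses cluster in bad-state bursts
```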

14.
For the original paper, see ibid., vol. 36, no. 5, pt. 1, p. 1531-9 (1998). The quality phase-gradient autofocus (QPGA) technique was proposed to speed up the estimation convergence of phase-gradient autofocus by selectively enlarging the pool of quality scatterers instead of simply selecting the "brightest" pixels within the image. It is now found that the QPGA, with its inherent scatterer-"growing" concept and target-filtering procedure, is also able to focus scenes containing both stationary and moving targets.
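A minimal sketch of the phase-gradient autofocus kernel that QPGA accelerates: prominent scatterers are circularly shifted to the image center, and the pulse-to-pulse phase gradient is estimated from conjugate products summed over range bins. The quality-scatterer selection and "growing" steps of the paper are omitted, and the FFT convention is an assumption:

```python
import numpy as np

def pga_iteration(image):
    """One PGA pass over a complex image (range bins x azimuth samples)."""
    n = image.shape[1]
    centered = np.empty_like(image)
    for r, row in enumerate(image):          # shift brightest pixel to center
        centered[r] = np.roll(row, n // 2 - np.argmax(np.abs(row)))
    g = np.fft.ifft(centered, axis=1)        # back to azimuth phase history
    dphi = np.angle(np.sum(g[:, 1:] * np.conj(g[:, :-1]), axis=0))
    phi = np.concatenate(([0.0], np.cumsum(dphi)))   # integrate the gradient
    return np.fft.fft(g * np.exp(-1j * phi), axis=1) # remove the phase error

img = np.random.randn(64, 256) + 1j * np.random.randn(64, 256)
print(pga_iteration(img).shape)              # (64, 256)
```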

15.
The hybrid stripmap/spotlight mode of a synthetic aperture radar (SAR) system can generate microwave images with an azimuth resolution better than that of the stripmap mode and a ground coverage better than that of the spotlight mode. In this paper, time- and frequency-domain procedures for simulating the raw signal in the hybrid stripmap/spotlight mode are presented and compared. We show that a two-dimensional Fourier-domain approach, although highly desirable for its efficiency, is not viable. Accordingly, we propose a one-dimensional (1-D) range Fourier-domain approach, followed by 1-D azimuth time-domain integration. This method is much more efficient than the time-domain one, so that extended scenes can be considered, and it involves approximations that are usually acceptable in practical cases. The effectiveness of the simulation scheme is assessed with numerical examples.

16.
Visual data compression for multimedia applications (total citations: 2; self-citations: 0; by others: 2)
The compression of visual information in the framework of multimedia applications is discussed. To this end, the major approaches to compressing still and moving pictures are reviewed. The most important objective of any compression algorithm is compression efficiency. High-compression coding of still pictures can be split into three categories: waveform, second-generation, and fractal coding techniques. Each coding approach introduces a different artifact at the target bit rates, and the primary objective of most ongoing research in this field is to hide these artifacts from the human visual system as much as possible. Video compression techniques must deal with data enriched by one more component, the temporal coordinate. Either compression techniques developed for still images are generalized to three-dimensional signals (space and time), or a hybrid approach is defined based on motion compensation. Video compression techniques can then be classified into four classes: waveform, object-based, model-based, and fractal coding techniques. This paper provides the reader with a tutorial on the major visual data compression techniques and a list of references for further information on the details of each method.

17.
Seismic-while-drilling services efficiently support drilling decisions. They use the vibrations produced by the drill bit during perforation as a downhole seismic source. The seismic signal is recorded by sensors on the surface and processed to obtain or update an image of the subsurface around the borehole. To improve the characterization of the source, some sensors have also been experimentally installed downhole, on the drill pipes in close proximity to the bit: data logged downhole have provided better quality information. Currently, the main drawback of downhole equipment is the absence of a high-bit-rate telemetry system to enable real-time activities. This problem may be addressed either by an offline solution, with memory capacity limited to a few hundred megabytes, or by an online solution with telemetry at a very low bit rate (a few bits per second). However, with the offline approach and standard acquisition parameters, the internal storage memory would fill up in just a few hours at high acquisition rates; with the online solution, only a small portion of the acquired signals (or only alarm information about potentially dangerous events) can be transmitted in real time to the surface using conventional mud-pulse telemetry. We present a lossy data compression algorithm based on a new angle-domain representation of downhole data, which is suitable for downhole implementation and may be successfully applied to both online and offline solutions. Numerical tests based on real field data achieve compression ratios up to 112:1 without major loss of information. This allows a significant increase in downhole acquisition time and in the real-time information that can be transmitted through mud-pulse telemetry.

18.
Transform methods for seismic data compression (total citations: 7; self-citations: 0; by others: 7)
The authors consider the development and evaluation of transform coding algorithms for the storage of seismic signals. Transform coding algorithms are developed using the discrete Fourier transform (DFT), the discrete cosine transform (DCT), the Walsh-Hadamard transform (WHT), and the Karhunen-Loeve transform (KLT). These are evaluated and compared with a linear predictive coding algorithm at data rates ranging from 150 to 550 bit/s. The results reveal that sinusoidal transforms are well suited to robust, low-rate seismic signal representation. In particular, it is shown that a DCT coding scheme faithfully reproduces the seismic waveform at approximately one-third of the original rate.
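A minimal sketch of the DCT branch of such a coder: frame the trace, keep only the lowest-order coefficients in each frame, and reconstruct. Frame length and retained-coefficient count are illustrative:

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct_code(trace, frame=128, keep=16):
    """Zero all but the first `keep` DCT coefficients of each frame."""
    n = len(trace) - len(trace) % frame
    c = dct(trace[:n].reshape(-1, frame), norm="ortho", axis=1)
    c[:, keep:] = 0.0                        # discard high-order coefficients
    return idct(c, norm="ortho", axis=1).ravel()

t = np.linspace(0, 4, 4096)
trace = np.sin(2 * np.pi * 8 * t) * np.exp(-t)   # stand-in for a seismic record
rec = dct_code(trace)
err = trace[:len(rec)] - rec
print("SNR [dB]:", 10 * np.log10(trace[:len(rec)].var() / err.var()))
```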

19.
The authors compare the subband compression capabilities of eight filter sets (consisting of linear-phase quadrature mirror filters (QMFs), perfect-reconstruction filters, and nonlinear-phase wavelets) at different bit rates, using a filter-based bit allocation procedure. Using DPCM and PCM in HDTV subband coding, it is found that QMFs have an edge over the rest.

20.
A three-component scattering model for polarimetric SAR data (total citations: 26; self-citations: 0; by others: 26)
An approach has been developed that fits a combination of three simple scattering mechanisms to polarimetric SAR observations. The mechanisms are canopy scatter from a cloud of randomly oriented dipoles, even- or double-bounce scatter from a pair of orthogonal surfaces with different dielectric constants, and Bragg scatter from a moderately rough surface. This composite scattering model is used to describe the polarimetric backscatter from naturally occurring scatterers. The model is shown to describe the behavior of polarimetric backscatter from tropical rain forests quite well when applied to data from the NASA/Jet Propulsion Laboratory (JPL) airborne polarimetric synthetic aperture radar (AIRSAR) system. The model fit allows clear discrimination between flooded and nonflooded forest and between forested and deforested areas, for example. The model is also shown to be usable as a predictive tool to estimate the effects of forest inundation and disturbance on the fully polarimetric radar signature. An advantage of this model-fit approach is that the scattering contributions from the three basic mechanisms can be estimated for clusters of pixels in polarimetric SAR images. Furthermore, it is shown that the contributions of the three scattering mechanisms to the HH, HV, and VV backscatter can be calculated from the model fit. Finally, this model-fit approach is justified as a simplification of more complicated scattering models, which require many inputs to solve the forward scattering problem.
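A simplified, real-valued sketch of the three-component fit described above: the volume power follows from <|HV|^2>, and the remainder is split between surface and double-bounce scattering. Only the surface-dominant branch (double-bounce parameter alpha fixed at -1) is shown, degenerate cases are ignored, and the covariance inputs are illustrative:

```python
import numpy as np

def three_component(hh2, vv2, hv2, hhvv_re):
    """Split total power into surface (ps), double-bounce (pd), volume (pv)."""
    fv = 3.0 * hv2                    # volume model gives <|HV|^2> = fv / 3
    a = hh2 - fv                      # <|HH|^2> with volume part removed
    b = vv2 - fv                      # <|VV|^2> with volume part removed
    c = hhvv_re - fv / 3.0            # Re<HH VV*> with volume part removed
    fs = (b + c) ** 2 / (a + b + 2 * c)   # surface-dominant branch: alpha = -1
    fd = b - fs
    beta = (b + c - fs) / fs
    ps, pd, pv = fs * (1 + beta ** 2), 2 * fd, 8 * fv / 3.0
    return ps, pd, pv

print(three_component(hh2=0.5, vv2=0.4, hv2=0.05, hhvv_re=0.25))
```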
