1.
2.
Experiments with wavelets for compression of SAR data
Werness S.A. Wei S.C. Carpinella R. 《IEEE Transactions on Geoscience and Remote Sensing》1994,32(1):197-201
Wavelet transform coding is shown to be an effective method for compression of both detected and complex synthetic aperture radar (SAR) imagery. Three different orthogonal wavelet transform filters are examined for use in SAR data compression; the performances of the filters are correlated with mathematical properties such as regularity and number of vanishing moments. Excellent quality reconstructions are obtained at data rates as low as 0.25 bpp for detected data and as low as 0.5 bits per element (or 2 bpp) for complex data.
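The wavelet coding pipeline the abstract describes can be sketched with the simplest orthogonal filter, the Haar wavelet, standing in for the Daubechies-style filters the paper examines; thresholding the detail subbands is a crude stand-in for actual bit allocation, and all names below are illustrative:

```python
import numpy as np

def haar2d(x):
    """One level of the orthonormal 2-D Haar transform (rows, then columns)."""
    lo = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)
    hi = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)
    ll = (lo[:, 0::2] + lo[:, 1::2]) / np.sqrt(2)
    lh = (lo[:, 0::2] - lo[:, 1::2]) / np.sqrt(2)
    hl = (hi[:, 0::2] + hi[:, 1::2]) / np.sqrt(2)
    hh = (hi[:, 0::2] - hi[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d (the filters are orthonormal)."""
    lo = np.empty((ll.shape[0], 2 * ll.shape[1]))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    hi[:, 0::2], hi[:, 1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((2 * lo.shape[0], lo.shape[1]))
    x[0::2, :], x[1::2, :] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return x

def haar_compress(x, keep=0.1):
    """Keep only the largest-magnitude fraction of detail coefficients."""
    ll, lh, hl, hh = haar2d(x)
    mags = np.abs(np.concatenate([lh.ravel(), hl.ravel(), hh.ravel()]))
    t = np.quantile(mags, 1 - keep)
    thr = lambda c: np.where(np.abs(c) >= t, c, 0.0)
    return ihaar2d(ll, thr(lh), thr(hl), thr(hh))
```

Detected (amplitude) imagery can be fed in directly; complex data would be handled as separate real and imaginary (or magnitude and phase) channels.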
3.
Gleich D. Planinsic P. Gergic B. Cucej Z. 《IEEE Transactions on Geoscience and Remote Sensing》2002,40(1):3-10
The authors propose a new wavelet image coding technique for synthetic aperture radar (SAR) data compression, called progressive space-frequency quantization (PSFQ). PSFQ performs spatial quantization via rate-distortion-optimized zerotree pruning of wavelet coefficients, which are then coded with a progressive subband coding technique. The zerotree-based methods EZW, SPIHT, SFQ, and PSFQ are compared with the classical wavelet-based method (CWM), which uses uniform scalar quantization of subbands followed by recency-rank coding. The zerotree-based methods outperform the CWM in the rate-distortion sense, and the embedded coding techniques yield better SNR than the methods using scalar quantization. However, the probability density function (PDF) of amplitude SAR data reconstructed after CWM compression matches the PDF of the original data more closely than the PDF of data reconstructed with the zerotree-based methods. The amplitude PDF of data reconstructed with the PSFQ algorithm matches the original PDF more closely than the amplitude PDF of data obtained with the multilook method.
4.
Qi Xuan Zhu Minhui Peng Hailiang 《电子科学学刊(英文版)》1996,13(2):110-115
This paper presents a simple but effective algorithm to speed up the codebook search in a vector quantization scheme for SAR raw data when a minimum square error (MSE) criterion is used. A considerable reduction in the number of operations is achieved.
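The paper's exact algorithm is not reproduced in the abstract, but a standard way to reduce the operation count of an MSE codebook search is partial-distortion search: abandon a codeword as soon as its partially accumulated squared error exceeds the best distortion found so far. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def pds_search(codebook, v):
    """Returns the full-search result at a fraction of the multiply-adds."""
    best_i, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for cj, vj in zip(c, v):
            d += (cj - vj) ** 2
            if d >= best_d:      # partial sum already too large: abandon
                break
        else:                    # completed without exceeding the best
            best_i, best_d = i, d
    return best_i, best_d
```

The early exit is exact: a codeword abandoned mid-sum can never beat the current best, so the result matches an exhaustive search.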
5.
Zhu Minhui Peng Hailiang Wu Yirong Qi Xuan 《电子科学学刊(英文版)》1996,13(2):97-101
Multistage vector quantization (MSVQ) achieves very low encoding and storage complexity compared with unstructured vector quantization. However, conventional MSVQ is suboptimal with respect to the overall performance measure. This paper proposes a new technique for designing a decoder codebook that differs from the encoder codebook in order to optimize overall performance. The improvement is achieved with no effect on encoding complexity, in either storage or time, at the cost of a modest increase in the decoder's storage complexity.
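One way to obtain a decoder codebook that differs from the encoder's, sketched here for a two-stage MSVQ under plain MSE (this illustrates the idea; the paper's actual design procedure may differ): keep the encoder codebooks fixed and replace each additive reconstruction c1[i] + c2[j] with the centroid of the training vectors that encode to that index pair.

```python
import numpy as np

def msvq_encode(x, cb1, cb2):
    """Sequential two-stage search: nearest first-stage codeword, then
    nearest second-stage codeword for the residual."""
    i = int(np.argmin(((cb1 - x) ** 2).sum(axis=1)))
    j = int(np.argmin(((cb2 - (x - cb1[i])) ** 2).sum(axis=1)))
    return i, j

def optimized_decoder(train, cb1, cb2):
    """Decoder codebook as per-cell centroids: never worse (in training MSE)
    than the additive decoder, with no change to the encoder."""
    sums = {}
    for x in train:
        ij = msvq_encode(x, cb1, cb2)
        s, n = sums.get(ij, (np.zeros(len(x)), 0))
        sums[ij] = (s + x, n + 1)
    return {ij: s / n for ij, (s, n) in sums.items()}
```

Because the centroid minimizes squared error within each encoding cell, the optimized decoder cannot do worse than the additive one on the training set, while the encoder's complexity is untouched.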
6.
A comparison of several algorithms for SAR raw data compression
Benz U. Strodl K. Moreira A. 《IEEE Transactions on Geoscience and Remote Sensing》1995,33(5):1266-1276
Proposes new algorithms for synthetic aperture radar (SAR) raw data compression and compares the resulting image quality with the quality achieved by commonly used methods. The compression is carried out in the time and frequency domains, using statistical, crisp, and fuzzy methods. The algorithms in the time domain yield high resolution and a good signal-to-noise ratio, but they do not optimize the compression performance according to the frequency envelope of the signal power in the range and azimuth directions. The hardware requirements for the compression methods in the frequency domain are significant, but higher performance is obtained. Even at a data rate of 3 bits/sample, satisfactory phase accuracy is achieved, which is an essential parameter for polarimetric and interferometric applications. A preliminary analysis of the suitability of the proposed algorithms for different SAR applications shows that the compression ratio should be selected adaptively according to the specific application.
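A representative time-domain method of the kind such comparisons include is block adaptive quantization (BAQ), in which each block of raw samples is scaled by its own standard deviation before uniform quantization; the sketch below is a generic BAQ, not the paper's specific algorithms:

```python
import numpy as np

def baq(raw, block=128, bits=3):
    """Per-block scaling + uniform mid-rise quantizer; the per-block
    standard deviation is the only side information to transmit."""
    levels = 2 ** bits
    recon = np.empty(len(raw))
    for s in range(0, len(raw), block):
        b = raw[s:s + block]
        sigma = b.std()
        scale = 3.0 * (sigma if sigma > 0 else 1.0)   # load at +/- 3 sigma
        idx = np.clip(((b / scale + 1) / 2 * levels).astype(int), 0, levels - 1)
        recon[s:s + block] = ((idx + 0.5) / levels * 2 - 1) * scale
    return recon
```

At 3 bits/sample the quantizer has 8 levels per block; the adaptive per-block scale is what lets such a coarse quantizer track the varying signal power of raw SAR echoes.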
7.
SAR image compression is very important in reducing the costs of data storage and transmission in relatively slow channels. The authors propose a compression scheme driven by texture analysis, homogeneity mapping and speckle noise reduction within the wavelet framework. The image compressibility and interpretability are improved by incorporating speckle reduction into the compression scheme. The authors begin with the classical set partitioning in hierarchical trees (SPIHT) wavelet compression scheme, and modify it to control the amount of speckle reduction, applying different encoding schemes to homogeneous and nonhomogeneous areas of the scene. The results compare favorably with the conventional SPIHT wavelet and the JPEG compression methods.
8.
Spotlight mode is a special operating mode of synthetic aperture radar (SAR) that can produce ultra-high-resolution images difficult to obtain in the ordinary stripmap mode. Starting from the top-level parameter design of a high-resolution spaceborne spotlight SAR system, and combining a detailed system design with simulation analysis, this paper analyzes the system design factors in spotlight mode, including large range migration, pulse repetition frequency (PRF) selection, and antenna pattern design, providing an effective reference for the top-level parameter design of spaceborne spotlight SAR.
9.
10.
SAR imaging research based on the intrinsic relationship between stripmap SAR and spotlight SAR
Based on the azimuth spectrum structures of stripmap synthetic aperture radar (stripmap SAR) and spotlight synthetic aperture radar (spotlight SAR), the differences and connections between the two SAR modes are discussed. Exploiting their intrinsic relationship, a method is proposed that, under squint conditions, partitions stripmap SAR data into blocks and processes them in spotlight fashion, and the choice of the spotlight imaging-region size parameter is analyzed. For an imaging region of the same size, spotlight processing of stripmap SAR data reduces the amount of computation. A spatial-frequency interpolation imaging algorithm unifies the stripmap and spotlight SAR imaging algorithms. Finally, imaging with field-measured data confirms the correctness of the theoretical analysis.
11.
12.
13.
Touzi R. Livingstone C.E. Lafontaine J.R.C. Lukowski T.I. 《IEEE Transactions on Geoscience and Remote Sensing》1993,31(6):1132-1145
A general polarimetric model for orbital and Earth synthetic aperture radar (SAR) systems that explicitly includes key radar architecture elements and is not dependent on the reciprocity assumption is developed. The model includes systems whose receiving configuration is independent of the transmitted polarization (one configuration), as well as systems with two distinct receiving configurations, depending on the commanded transmitted polarization (H or V). Parameters that are independent of target illumination angle and those with illumination angle dependence are considered separately, allowing calibration approaches which are valid for targets at different illumination angles. The calibration methods presented make use of the model linearity to provide tests for the radar model accuracy and for SAR data quality. X-band polarimetric SAR data are used to validate the theory and illustrate the calibration approach. The extension of the model and calibration method to other radar systems is discussed.
14.
This paper investigates a novel approach based on the deramping technique for squinted sliding spotlight Synthetic Aperture Radar (SAR) imaging to resolve the azimuth spectrum aliasing problem. First, the properties of the azimuth spectrum and the impact of the squint angle on the azimuth spectrum aliasing problem are analyzed. Based on this analysis, a filtering operation is added to the azimuth preprocessing step of the traditional Two-Step Focusing Approach (TSPA) to resolve the azimuth folding problem and remove the influence of the squint angle on the azimuth spectrum aliasing problem. Then, a modified Range Migration Algorithm (RMA) is performed to obtain a precisely focused image. Furthermore, the focused-image folding problem of the traditional TSPA is examined, and an azimuth post-processing step is proposed to unfold the aliased SAR image. Simulation results show that the proposed approach solves the spectrum aliasing problem and processes squinted sliding spotlight data efficiently.
15.
Olga M Kosheleva Bryan E Usevitch Sergio D Cabrera Edward Vidal 《IEEE Transactions on Image Processing》2006,15(8):2106-2112
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate-distortion optimal (for mean squared error), and is conceptually similar to the postcompression rate-distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate-distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
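For the idealized exponential model D_i(R) = var_i * 2^(-2R) commonly assumed in rate-distortion analyses of this kind, the MSE-optimal split of a rate budget across slices has a closed form: every slice ends at the same distortion. The sketch below illustrates that principle only; it is not the paper's optimal or mixed-model (MM) algorithm, and it ignores the non-negativity constraint on rates:

```python
import numpy as np

def allocate_bits(variances, avg_rate):
    """Each slice gets the average rate plus half the log-ratio of its
    variance to the geometric mean of all variances."""
    v = np.asarray(variances, dtype=float)
    gm = np.exp(np.log(v).mean())
    return avg_rate + 0.5 * np.log2(v / gm)
```

High-variance slices receive more bits, low-variance slices fewer, and the resulting per-slice distortions are all equal, which is the hallmark of the optimal allocation under this model.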
16.
This paper first studies the phase errors in fine-resolution spotlight-mode SAR imaging and decomposes them into two kinds: one caused by translation and the other by rotation. Mathematical analysis and computer simulations show the above-mentioned motion kinds and their corresponding effects on spotlight-mode SAR imaging. Based on this analysis, a single PPP is introduced for spotlight-mode SAR imaging with the PFA, on the assumption that the relative rotation between the APC and the imaged scene is uniform. The selected point is used first to correct the quadratic and higher-order phase errors and then to adjust the linear errors. After this compensation, the space-invariant phase errors caused by translation are almost fully corrected. Finally, results are presented with simulated data.
17.
While serial concatenated codes were designed to provide good overall performance with reasonable system complexity, they may arise naturally in certain cases, such as the interface between two networks. In this work we consider the problem of constrained rate allocation between nonsystematic block codes in a serial concatenated coding system with either ideal or no interleaving between the codes. Given constraints on system parameters, such as a limit on the overall rate, analytic guidelines for the selection of good inner code rates are found by using an upper bound on the average system block error rate.
18.
JPEG compression history estimation for color images
Neelamani R. de Queiroz R. Zhigang Fan Dash S. Baraniuk R.G. 《IEEE Transactions on Image Processing》2006,15(6):1365-1378
We routinely encounter digital color images that were previously compressed using the Joint Photographic Experts Group (JPEG) standard. En route to the image's current representation, the previous JPEG compression's various settings, termed its JPEG compression history (CH), are often discarded after the JPEG decompression step. Given a JPEG-decompressed color image, this paper aims to estimate its lost JPEG CH. We observe that the previous JPEG compression's quantization step introduces a lattice structure in the discrete cosine transform (DCT) domain. This paper proposes two approaches that exploit this structure to solve the JPEG Compression History Estimation (CHEst) problem. First, we design a statistical dictionary-based CHEst algorithm that tests the various CHs in a dictionary and selects the maximum a posteriori estimate. Second, for cases where the DCT coefficients closely conform to a 3-D parallelepiped lattice, we design a blind lattice-based CHEst algorithm. The blind algorithm exploits the fact that the JPEG CH is encoded in the nearly orthogonal bases for the 3-D lattice and employs novel lattice algorithms and recent results on nearly orthogonal lattice bases to estimate the CH. Both algorithms provide robust JPEG CHEst performance in practice. Simulations demonstrate that JPEG CHEst can be useful in JPEG recompression; the estimated CH allows us to recompress a JPEG-decompressed image with minimal distortion (large signal-to-noise ratio) and simultaneously achieve a small file size.
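The lattice idea can be illustrated in one dimension: DCT coefficients that were once quantized with step q sit on (or near) multiples of q, so a periodicity score over candidate steps peaks at the true one. This toy estimator is far simpler than the paper's dictionary-based and lattice-based CHEst algorithms, and all names are illustrative:

```python
import numpy as np

def estimate_q(coeffs, q_max=32):
    """Score each candidate step by how well the coefficients align with
    its multiples; divisors of the true step score equally well, so the
    largest near-maximal candidate is returned."""
    c = np.asarray(coeffs, dtype=float)
    candidates = np.arange(2, q_max + 1)
    scores = np.array([np.cos(2 * np.pi * c / q).sum() for q in candidates])
    best = scores.max()
    for q, s in zip(candidates[::-1], scores[::-1]):
        if s >= 0.95 * best:
            return int(q)
    return int(candidates[int(np.argmax(scores))])
```

Running this per DCT frequency would recover one entry of the quantization table at a time; the paper's lattice approach estimates all color channels jointly.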
19.
Gavish A. Lempel A. 《IEEE Transactions on Information Theory》1996,42(5):1375-1380
We investigate uniquely decodable match-length functions (MLFs) in conjunction with Lempel-Ziv (1977) type data compression. An MLF of a data string is a function that associates a nonnegative integer with each position of the string. The MLF is used to parse the input string into phrases. The codeword for each phrase consists of a pointer to the beginning of a maximal match consistent with the MLF value at that point. We propose several sliding-window variants of LZ compression employing different MLF strategies. We show that the proposed methods are asymptotically optimal for stationary ergodic sources and that their convergence compares favorably with the LZ1 variant of Wyner and Ziv (see Proc. IEEE, vol.82, no.6, p.872, 1994).
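A minimal sliding-window LZ77 parse using the maximal-match rule (the simplest uniquely decodable MLF) can be sketched as follows; the pointer encoding and the paper's other MLF strategies are not modeled:

```python
def lz77_parse(data, window=4096):
    """Greedy parse into (offset, match_length, next_literal) triples;
    matches may overlap the lookahead, as in classic LZ77."""
    i, out = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            while i + l < len(data) - 1 and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_unparse(tokens):
    """Reverse the parse; overlapping copies work because output is
    appended one symbol at a time."""
    data = []
    for off, length, lit in tokens:
        for _ in range(length):
            data.append(data[-off])
        data.append(lit)
    return "".join(data)
```

Each phrase's length here is exactly the maximal-match value at that position, which is what makes the parse uniquely decodable from the triples alone.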
20.
Arithmetic coding for data compression
Howard P.G. Vitter J.S. 《Proceedings of the IEEE》1994,82(6):857-865
Arithmetic coding provides an effective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an efficient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible effect on the amount of compression achieved. We can speed up the implementation further by use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.
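The interval-narrowing mechanism at the heart of arithmetic coding can be demonstrated with exact rational arithmetic (Python's `fractions`), sidestepping the reduced-precision table-lookup implementation the paper actually describes; the fixed-probability model and all names are illustrative:

```python
from fractions import Fraction

def _cumulative(probs):
    """Cumulative probability at the start of each symbol's slot."""
    cum, c = {}, Fraction(0)
    for s, p in probs.items():
        cum[s] = c
        c += p
    return cum

def arith_encode(msg, probs):
    """Narrow [low, low + width) by each symbol's probability slot;
    any number in the final interval identifies the message."""
    cum = _cumulative(probs)
    low, width = Fraction(0), Fraction(1)
    for s in msg:
        low += cum[s] * width
        width *= probs[s]
    return low, width

def arith_decode(code, n, probs):
    """Undo the narrowing symbol by symbol."""
    cum = _cumulative(probs)
    out = []
    for _ in range(n):
        for s, p in probs.items():
            if cum[s] <= code < cum[s] + p:
                out.append(s)
                code = (code - cum[s]) / p
                break
    return "".join(out)
```

The final interval width equals the product of the symbol probabilities, so about -log2(width) bits suffice to name a point inside it; practical coders replace the exact rationals with scaled integers and renormalization.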