Similar Literature
20 similar documents found (search time: 15 ms)
1.
An Amplitude-Phase Compression Algorithm for SAR Raw Data   Total citations: 8 (self: 3, others: 8)
To meet the increasingly stringent phase-accuracy requirements of modern synthetic aperture radar (SAR), this paper exploits the statistically independent distributions of the amplitude and phase of SAR raw data and proposes an amplitude-phase compression algorithm, the AP (Amplitude & Phase) algorithm, which quantizes the amplitude and the phase separately. The principle of the algorithm and its implementation flowchart are presented, its performance is analyzed on the basis of experiments, and a comparison with the BAQ algorithm verifies that the AP algorithm preserves the phase information of the raw data well.
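The core idea above — coding amplitude and phase as two independent streams — can be sketched as follows. This is a simplified illustration, not the paper's exact AP algorithm: a real AP coder matches the amplitude codebook to the Rayleigh statistics of raw SAR data, whereas this sketch uses a uniform amplitude grid; the bit widths and `amp_max` clipping value are illustrative assumptions.

```python
import cmath
import math

def ap_quantize(samples, amp_bits=3, phase_bits=3, amp_max=4.0):
    """Quantize complex samples by coding amplitude and phase separately
    (simplified AP-style sketch: uniform grids, midpoint reconstruction)."""
    amp_levels = 2 ** amp_bits
    phase_levels = 2 ** phase_bits
    amp_step = amp_max / amp_levels
    phase_step = 2 * math.pi / phase_levels
    out = []
    for z in samples:
        a, p = abs(z), cmath.phase(z)                  # polar decomposition
        ai = min(int(a / amp_step), amp_levels - 1)    # clip to top level
        pi_ = int((p + math.pi) / phase_step) % phase_levels
        # reconstruct at the midpoints of the quantization cells
        a_hat = (ai + 0.5) * amp_step
        p_hat = -math.pi + (pi_ + 0.5) * phase_step
        out.append(cmath.rect(a_hat, p_hat))
    return out
```

With `phase_bits=3` the phase error is bounded by half a cell, pi/8, independently of the amplitude — which is the property that makes separate phase coding attractive for interferometry.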

2.
Experiments with wavelets for compression of SAR data   Total citations: 3 (self: 0, others: 3)
Wavelet transform coding is shown to be an effective method for compression of both detected and complex synthetic aperture radar (SAR) imagery. Three different orthogonal wavelet transform filters are examined for use in SAR data compression; the performance of each filter is correlated with mathematical properties such as regularity and number of vanishing moments. Excellent-quality reconstructions are obtained at data rates as low as 0.25 bpp for detected data and as low as 0.5 bits per element (or 2 bpp) for complex data.
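A minimal sketch of wavelet transform coding, using the Haar filter — the shortest orthogonal wavelet (the paper compares longer filters with more vanishing moments, which this sketch does not implement). Keeping only the largest-magnitude coefficients is a crude stand-in for the quantization and entropy-coding stages.

```python
def haar_step(x):
    """One level of the orthonormal Haar transform on an even-length list."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x.extend([s * (a + d), s * (a - d)])
    return x

def compress(x, keep):
    """Transform, zero all but the `keep` largest coefficients, invert."""
    approx, detail = haar_step(x)
    coeffs = approx + detail
    thresh = sorted(map(abs, coeffs), reverse=True)[keep - 1]
    coeffs = [c if abs(c) >= thresh else 0.0 for c in coeffs]
    n = len(approx)
    return haar_inverse(coeffs[:n], coeffs[n:])
```

Because the transform is orthogonal, keeping all coefficients reconstructs the signal exactly, and discarding small ones discards the least energy — the basis of the rate/quality trade-off the paper measures.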

3.
The authors propose a new wavelet image coding technique for synthetic aperture radar (SAR) data compression called progressive space-frequency quantization (PSFQ). PSFQ performs spatial quantization via rate-distortion-optimized zerotree pruning of wavelet coefficients that are coded using a progressive subband coding technique. They compare the performance of the zerotree-based methods EZW, SPIHT, SFQ, and PSFQ with the classical wavelet-based method (CWM), which uses uniform scalar quantization of subbands followed by recency rank coding. The zerotree-based methods outperform the CWM in the rate-distortion sense, and the embedded coding techniques achieve better SNR results than the methods using scalar quantization. However, the probability density function (PDF) of the amplitude of the reconstructed SAR data compressed with CWM corresponds more closely to the PDF of the original data than does that of the data compressed with the zerotree-based methods. The amplitude PDF of the data reconstructed by the PSFQ compression algorithm corresponds more closely to the original PDF than the amplitude PDF of the data obtained using the multilook method.

4.
This paper presents a simple but effective algorithm to speed up the codebook search in a vector quantization scheme for SAR raw data when a minimum squared error (MSE) criterion is used. A considerable reduction in the number of operations is achieved.

5.
Multistage vector quantization (MSVQ) achieves very low encoding and storage complexity in comparison with unstructured vector quantization. However, conventional MSVQ is suboptimal with respect to the overall performance measure. This paper proposes a new technique for designing a decoder codebook that differs from the encoder codebook in order to optimize overall performance. The improvement is achieved with no effect on encoding complexity, in either storage or time, at the cost of a modest increase in the decoder's storage.

6.
A comparison of several algorithms for SAR raw data compression   Total citations: 16 (self: 0, others: 16)
The authors propose new algorithms for synthetic aperture radar (SAR) raw data compression and compare the resulting image quality with that achieved by commonly used methods. The compression is carried out in the time and frequency domains, with statistical, crisp, and fuzzy methods. The algorithms in the time domain yield high resolution and a good signal-to-noise ratio, but they do not optimize compression performance according to the frequency envelope of the signal power in the range and azimuth directions. The hardware requirements of the frequency-domain compression methods are significant, but higher performance is obtained. Even at a data rate of 3 bits/sample, satisfactory phase accuracy is achieved, an essential parameter for polarimetric and interferometric applications. A preliminary analysis of the suitability of the proposed algorithms for different SAR applications shows that the compression ratio should be selected adaptively according to the specific application.
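The baseline that such comparisons are usually made against is block adaptive quantization (BAQ), a statistical time-domain method. A minimal sketch: normalize each block by its own RMS level, then apply a fixed quantizer. Real BAQ uses a Lloyd-Max quantizer matched to the Gaussian statistics of raw SAR samples; the uniform levels and the +/-2-sigma range here are simplifying assumptions.

```python
import math

def baq(block, bits=3):
    """Block-adaptive quantization: per-block RMS normalization followed
    by a uniform mid-rise quantizer covering roughly +/- 2 sigma."""
    sigma = math.sqrt(sum(v * v for v in block) / len(block)) or 1.0
    levels = 2 ** bits
    step = 4.0 / levels
    codes = []
    for v in block:
        q = int(math.floor(v / sigma / step))
        q = max(-levels // 2, min(levels // 2 - 1, q))   # clip outliers
        codes.append(q)
    return codes, sigma

def baq_decode(codes, sigma, bits=3):
    step = 4.0 / 2 ** bits
    return [(q + 0.5) * step * sigma for q in codes]
```

Only the block's sigma and the small integer codes need to be transmitted, which is how 8-bit raw samples are reduced to a few bits/sample.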

7.
SAR image compression is very important in reducing the costs of data storage and transmission in relatively slow channels. The authors propose a compression scheme driven by texture analysis, homogeneity mapping, and speckle noise reduction within the wavelet framework. The image compressibility and interpretability are improved by incorporating speckle reduction into the compression scheme. The authors begin with the classical set partitioning in hierarchical trees (SPIHT) wavelet compression scheme and modify it to control the amount of speckle reduction, applying different encoding schemes to homogeneous and nonhomogeneous areas of the scene. The results compare favorably with the conventional SPIHT wavelet and JPEG compression methods.

8.
Spotlight mode is a special operating mode of synthetic aperture radar (SAR) that achieves ultra-high-resolution images beyond the reach of conventional stripmap mode. Starting from the top-level parameter design of a high-resolution spaceborne spotlight SAR system, and drawing on a detailed system design and simulation analysis, this paper examines design factors such as the large range cell migration in spotlight mode, pulse repetition frequency (PRF) selection, and antenna pattern design, providing a useful reference for the top-level parameter design of spaceborne spotlight SAR.
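PRF selection, one of the design factors named above, is essentially a feasibility screen. The sketch below checks two standard constraints — the PRF must exceed the Doppler bandwidth, and the echo window must not collide with a transmit event (transmit eclipsing). It is a simplified stand-in for the paper's design procedure: a real design also checks nadir-return interference, and the guard time is an assumed value.

```python
def prf_ok(prf_hz, doppler_bw_hz, slant_range_m, pulse_width_s,
           c=3.0e8, guard_s=2e-6):
    """Screen one candidate PRF: azimuth sampling (PRF > Doppler bandwidth)
    and no transmit eclipsing of the echo window."""
    if prf_hz <= doppler_bw_hz:
        return False
    pri = 1.0 / prf_hz
    delay = 2.0 * slant_range_m / c          # round-trip time to the scene
    offset = delay % pri                     # echo position within a PRI
    return pulse_width_s + guard_s < offset < pri - guard_s
```

Sweeping such a check over a PRF range, for the spread of slant ranges in the swath, yields the classic "diamond diagram" of admissible PRFs.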

9.
Information Technology (《信息技术》), 2015(12): 138-142
The polar format algorithm (PFA) is the classical imaging algorithm for traditional spotlight SAR, but when applied to squinted spotlight imaging it suffers from the huge computational cost of two-dimensional frequency-domain interpolation, interpolation accuracy limited by the length of the interpolation kernel, and image-quality degradation caused by the final image rotation. To address these problems, this paper proposes a fast PFA based on scaling transforms, which avoids the image-rotation step after squint imaging and is computationally efficient. The fast PFA requires only FFTs and complex multiplications; compared with the direct-interpolation PFA, the computational load is reduced to 30%-50% of the original. Simulation experiments verify the effectiveness of the method.
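The claim that a scaled transform needs "only FFTs and complex multiplications" rests on the chirp-z/Bluestein identity k*n = (k^2 + n^2 - (k-n)^2) / 2, which turns a scaled DFT into two chirp multiplications around a chirp convolution. The sketch below verifies that identity numerically; it writes the convolution directly for clarity (a fast implementation would evaluate it with FFTs), and is an illustration of the principle rather than the paper's full algorithm.

```python
import cmath

def scaled_dft_bluestein(x, sigma):
    """Compute X[k] = sum_n x[n] * exp(-j*2*pi*sigma*k*n/N) via the
    Bluestein decomposition: pre-chirp, chirp convolution, post-chirp."""
    n_len = len(x)
    w = lambda m: cmath.exp(-1j * cmath.pi * sigma * m * m / n_len)
    a = [x[n] * w(n) for n in range(n_len)]              # pre-chirp
    out = []
    for k in range(n_len):
        # chirp convolution (direct form; FFT-based in a fast PFA)
        conv = sum(a[n] * w(k - n).conjugate() for n in range(n_len))
        out.append(w(k) * conv)                          # post-chirp
    return out
```

Because sigma is a free parameter, the same machinery evaluates the transform on an arbitrarily scaled output grid, which is what lets a scaling-based PFA absorb the resampling and rotation steps.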

10.
SAR Imaging Based on the Intrinsic Relationship Between Stripmap and Spotlight SAR   Total citations: 1 (self: 0, others: 1)
Starting from the azimuth spectral structures of stripmap synthetic aperture radar (stripmap SAR) and spotlight SAR, the differences and connections between the two modes are discussed. Exploiting their intrinsic relationship, a method is proposed that, under squint conditions, partitions stripmap SAR data into blocks and processes each block in spotlight fashion; the selection of the spotlight imaging-region size is also analyzed. For an imaging region of the same size, spotlight-style processing of stripmap data reduces the computational load. A spatial-frequency interpolation imaging algorithm unifies the stripmap and spotlight imaging algorithms. Finally, imaging with measured field data confirms the correctness of the theoretical analysis.

11.
A general geometric model of airborne bistatic spotlight synthetic aperture radar (SAR) with an arbitrary configuration is established, and time- and frequency-domain expressions of the radar echo are given. Using the echo from a target at the scene center as the reference signal for difference-frequency (dechirp) processing, the differential range is expanded in a Taylor series, and a first-order approximation under the far-field assumption yields formulas for the three-dimensional spatial resolution of bistatic spotlight SAR. Numerical simulations verify the resolution formulas, and for a typical bistatic configuration in which the transmitting radar flies level while the receiving radar climbs obliquely, the variation of the spatial resolution with the bistatic geometry and platform motion parameters is analyzed.

12.
A Subaperture Imaging Approach for Spaceborne Spotlight SAR   Total citations: 2 (self: 0, others: 2)
This paper analyzes the signal characteristics of spaceborne spotlight SAR. After comparing several common spotlight SAR imaging algorithms, an improved chirp scaling algorithm suitable for spaceborne spotlight SAR imaging is introduced. To address the excessively high pulse repetition frequency (PRF) and the time-varying Doppler center frequency of high-resolution spaceborne spotlight SAR, the necessity of a subaperture approach and its implementation are analyzed in depth, and the subaperture method is integrated into the chirp scaling algorithm. Finally, computer simulations verify that the algorithm is suitable for high-resolution spaceborne spotlight SAR imaging.
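The bookkeeping behind a subaperture scheme is simply a partition of the azimuth pulse sequence into overlapping pieces, each short enough that its Doppler bandwidth fits inside the PRF. A minimal sketch (lengths and overlap below are illustrative, not the paper's design values):

```python
def subapertures(n_pulses, sub_len, overlap):
    """Split an azimuth pulse sequence into overlapping subapertures.
    Returns (start, stop) index pairs covering all pulses; the last
    subaperture is shortened if the data run out."""
    assert 0 <= overlap < sub_len
    step = sub_len - overlap
    spans = []
    start = 0
    while start < n_pulses:
        spans.append((start, min(start + sub_len, n_pulses)))
        if start + sub_len >= n_pulses:
            break
        start += step
    return spans
```

Each span would then be focused (e.g. with chirp scaling) and the partial results recombined; the overlap gives the margin needed to stitch the pieces without azimuth gaps.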

13.
A general polarimetric model for orbital and Earth synthetic aperture radar (SAR) systems that explicitly includes key radar architecture elements and is not dependent on the reciprocity assumption is developed. The model includes systems whose receiving configuration is independent of the transmitted polarization (one configuration), as well as systems with two distinct receiving configurations, depending on the commanded transmitted polarization (H or V). Parameters that are independent of target illumination angle and those with illumination-angle dependence are considered separately, allowing calibration approaches that are valid for targets at different illumination angles. The calibration methods presented make use of the model's linearity to provide tests of the radar model accuracy and of SAR data quality. X-band polarimetric SAR data are used to validate the theory and illustrate the calibration approach. The extension of the model and calibration method to other radar systems is discussed.

14.
This paper investigates a novel approach based on the deramping technique for squinted sliding-spotlight synthetic aperture radar (SAR) imaging that resolves the azimuth spectrum aliasing problem. First, the properties of the azimuth spectrum and the impact of the squint angle on azimuth spectrum aliasing are analyzed. Based on this analysis, a filtering operation is added to the azimuth preprocessing step of the traditional two-step focusing approach (TSPA) to resolve the azimuth folding problem and remove the influence of the squint angle on azimuth spectrum aliasing. A modified range migration algorithm (RMA) is then applied to obtain a precisely focused image. Furthermore, the folding problem in SAR images focused by the traditional TSPA is examined, and an azimuth post-processing step is proposed to unfold the aliased SAR image. Simulation results show that the proposed approach solves the spectrum aliasing problem and processes squinted sliding-spotlight data efficiently.
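Deramping itself is a single complex multiplication: the received azimuth signal is multiplied by the conjugate of a reference chirp, after which a target's echo collapses to a constant-frequency tone whose frequency encodes its position — this is what shrinks the azimuth bandwidth and defeats spectrum aliasing. A bare-bones sketch (the chirp rate and sampling values in the test are illustrative assumptions):

```python
import cmath

def deramp(signal, chirp_rate, dt):
    """Multiply the signal by the conjugate of a reference chirp centred
    in the aperture; chirped echoes become constant-frequency tones."""
    n = len(signal)
    t0 = (n - 1) * dt / 2.0
    out = []
    for k, s in enumerate(signal):
        t = k * dt - t0
        ref = cmath.exp(1j * cmath.pi * chirp_rate * t * t)
        out.append(s * ref.conjugate())
    return out
```

The test below builds a chirp with an extra linear phase and confirms that after deramping the sample-to-sample phase increment is constant, i.e. the output is a pure tone.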

15.
Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate-distortion optimal (for mean squared error) and is conceptually similar to the post-compression rate-distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate-distortion curve using two distinct regions to obtain more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to those of the optimal approach, while significantly reducing computational complexity.
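The rate-distortion-optimal allocation described above can be sketched as a greedy marginal-gain loop: repeatedly give the next bit to the slice whose distortion drops the most, which for convex rate-distortion curves matches the Lagrangian optimum. The MM approach in the paper replaces the measured curves with a two-region analytic model; this sketch assumes the curves are given as tables.

```python
def allocate_bits(rd_curves, budget):
    """Greedy marginal-gain bit allocation across slices.
    rd_curves[s][b] = distortion of slice s when given b bits."""
    alloc = [0] * len(rd_curves)
    for _ in range(budget):
        # distortion drop each slice would gain from one more bit
        gains = [(rd[a] - rd[a + 1]) if a + 1 < len(rd) else -1.0
                 for rd, a in zip(rd_curves, alloc)]
        best = max(range(len(gains)), key=gains.__getitem__)
        if gains[best] <= 0:
            break
        alloc[best] += 1
    return alloc
```

In the example below the first slice has the steeper curve, so it receives two of the three bits in the budget.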

16.
This paper first studies the phase errors in fine-resolution spotlight-mode SAR imaging and decomposes them into two kinds: one caused by translation and the other by rotation. Mathematical analysis and computer simulations show the above motion kinds and their corresponding damage to spotlight-mode SAR imaging. Based on this analysis, a single PPP is introduced for spotlight-mode SAR imaging with the PFA, under the assumption that the relative rotation between the APC and the imaged scene is uniform. The selected single point is used first to correct the quadratic and higher-order phase errors and then to adjust the linear errors. After this compensation, the space-invariant phase errors caused by translation are essentially corrected. Finally, results are presented with the simulated data.
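The quadratic-error correction step can be illustrated on a single point's azimuth history: unwrap its phase, least-squares-fit a parabola, and subtract the quadratic term (the linear term, which only shifts the image, is kept). This is a bare-bones stand-in for the paper's point-based compensation, assuming a well-isolated point and phase increments small enough to unwrap.

```python
import cmath
import math

def unwrap(ph):
    """Standard 1-D phase unwrapping: accumulate wrapped differences."""
    out = [ph[0]]
    for p in ph[1:]:
        d = (p - out[-1] + math.pi) % (2 * math.pi) - math.pi
        out.append(out[-1] + d)
    return out

def remove_quadratic_error(history):
    """Fit phi(t) ~ a*t^2 + b*t + c to the unwrapped phase of a point
    target's history and remove the quadratic term."""
    n = len(history)
    t = [k - (n - 1) / 2.0 for k in range(n)]      # symmetric time axis
    phi = unwrap([cmath.phase(z) for z in history])
    s2 = sum(x * x for x in t)
    s4 = sum(x ** 4 for x in t)
    # normal equations simplify because sums of odd powers of t vanish
    a = (n * sum(x * x * p for x, p in zip(t, phi)) - s2 * sum(phi)) \
        / (n * s4 - s2 * s2)
    return [z * cmath.exp(-1j * a * x * x) for z, x in zip(history, t)]
```

After correction the residual phase is linear in t, so its second differences vanish — the test checks exactly that.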

17.
While serial concatenated codes were designed to provide good overall performance with reasonable system complexity, they may arise naturally in certain cases, such as the interface between two networks. In this work we consider the problem of constrained rate allocation between nonsystematic block codes in a serial concatenated coding system with either ideal or no interleaving between the codes. Given constraints on system parameters, such as a limit on the overall rate, analytic guidelines for the selection of good inner code rates are found by using an upper bound on the average system block error rate.

18.
JPEG compression history estimation for color images   Total citations: 5 (self: 0, others: 5)
We routinely encounter digital color images that were previously compressed using the Joint Photographic Experts Group (JPEG) standard. En route to the image's current representation, the previous JPEG compression's various settings, termed its JPEG compression history (CH), are often discarded after the JPEG decompression step. Given a JPEG-decompressed color image, this paper aims to estimate its lost JPEG CH. We observe that the previous JPEG compression's quantization step introduces a lattice structure in the discrete cosine transform (DCT) domain. This paper proposes two approaches that exploit this structure to solve the JPEG compression history estimation (CHEst) problem. First, we design a statistical dictionary-based CHEst algorithm that tests the various CHs in a dictionary and selects the maximum a posteriori estimate. Second, for cases where the DCT coefficients closely conform to a 3-D parallelepiped lattice, we design a blind lattice-based CHEst algorithm. The blind algorithm exploits the fact that the JPEG CH is encoded in the nearly orthogonal bases for the 3-D lattice and employs novel lattice algorithms and recent results on nearly orthogonal lattice bases to estimate the CH. Both algorithms provide robust JPEG CHEst performance in practice. Simulations demonstrate that JPEG CHEst can be useful in JPEG recompression; the estimated CH allows us to recompress a JPEG-decompressed image with minimal distortion (large signal-to-noise ratio) while simultaneously achieving a small file size.

19.
We investigate uniquely decodable match-length functions (MLFs) in conjunction with Lempel-Ziv (1977) type data compression. An MLF of a data string is a function that associates a nonnegative integer with each position of the string. The MLF is used to parse the input string into phrases. The codeword for each phrase consists of a pointer to the beginning of a maximal match consistent with the MLF value at that point. We propose several sliding-window variants of LZ compression employing different MLF strategies. We show that the proposed methods are asymptotically optimal for stationary ergodic sources and that their convergence compares favorably with the LZ1 variant of Wyner and Ziv (see Proc. IEEE, vol.82, no.6, p.872, 1994).
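For concreteness, here is the baseline these MLF variants generalize: a greedy sliding-window LZ77 parse, where the match-length function at each position is simply "length of the longest match starting in the window". Each phrase is emitted as an (offset, length, next-literal) token. The window size is an illustrative parameter.

```python
def lz77_parse(data, window=255):
    """Greedy sliding-window LZ77 parse into (offset, length, literal)."""
    i, out = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        start = max(0, i - window)
        for j in range(start, i):
            l = 0
            # matches may run past position i (overlapping copies)
            while i + l < len(data) - 1 and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        lit = data[i + best_len]
        out.append((best_off, best_len, lit))
        i += best_len + 1
    return out

def lz77_unparse(tokens):
    buf = []
    for off, length, lit in tokens:
        for _ in range(length):
            buf.append(buf[-off])      # copy one symbol at a time
        buf.append(lit)
    return "".join(buf)
```

The paper's point is that other uniquely decodable MLFs — not just the greedy maximal match — still yield asymptotically optimal codes for stationary ergodic sources.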

20.
Arithmetic coding for data compression   Total citations: 17 (self: 0, others: 17)
Arithmetic coding provides an effective mechanism for removing redundancy in the encoding of data. We show how arithmetic coding works and describe an efficient implementation that uses table lookup as a fast alternative to arithmetic operations. The reduced-precision arithmetic has a provably negligible effect on the amount of compression achieved. We can speed up the implementation further by the use of parallel processing. We discuss the role of probability models and how they provide probability information to the arithmetic coder. We conclude with perspectives on the comparative advantages and disadvantages of arithmetic coding.
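The mechanism itself fits in a few lines when exact rationals stand in for the fixed-precision registers: encoding narrows the interval [low, low+width) by each symbol's probability slice, and any number in the final interval identifies the message. Practical coders — including the table-lookup implementation the paper describes — use bounded-precision integers and emit bits incrementally; this textbook sketch trades that efficiency for exactness.

```python
from fractions import Fraction

def ac_encode(symbols, probs):
    """Exact-rational arithmetic encoder; returns (low, width)."""
    cum, c = {}, Fraction(0)
    for s, p in probs.items():      # cumulative probability of each symbol
        cum[s] = c
        c += p
    low, width = Fraction(0), Fraction(1)
    for s in symbols:
        low += cum[s] * width       # narrow the interval to s's slice
        width *= probs[s]
    return low, width

def ac_decode(value, n, probs):
    """Recover n symbols from any value inside the final interval."""
    out = []
    for _ in range(n):
        c = Fraction(0)
        for s, p in probs.items():
            if c <= value < c + p:
                out.append(s)
                value = (value - c) / p   # rescale back to [0, 1)
                break
            c += p
    return out
```

The final width equals the product of the symbol probabilities, so the code length, -log2(width), approaches the message's self-information — the redundancy-removal property the abstract refers to.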


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号