Similar Documents
20 similar documents found.
1.
In image-based relighting (IBL), a tremendous number of reference images is needed to produce a high-quality rendering, so data compression is a must for practical applications. In this paper, two global analysis methods, principal component analysis (PCA) and independent component analysis (ICA), are used to compress the huge IBL data set by exploiting its correlation properties. Both approaches approximate the raw data with a small number of global base images, and they follow a similar algorithmic structure: base-image extraction, raw-data representation, and further compression of the base images and the representing coefficients. The difference is that PCA removes only second-order correlation, whereas ICA reduces statistical dependence of nearly all orders, which should benefit compression. Simulations are given to evaluate their performance, and comparisons are made against JPEG2000 and MPEG. The evaluation results show that both approaches are superior to JPEG2000 and MPEG. Although ICA removes higher-order dependence than PCA, it is slightly inferior to PCA in terms of compression-ratio/reconstruction-error performance.
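The PCA stage described above can be sketched with a truncated SVD; the data sizes, the number of base images, and all variable names below are illustrative toy values, not the paper's setup:

```python
import numpy as np

# Stack N reference images (flattened to vectors) as rows of X.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 256))   # 64 images, 256 pixels each (toy sizes)

# Center the data and extract k global base images via SVD (equivalent to PCA).
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 8
bases = Vt[:k]                       # k global base images
coeffs = (X - mean) @ bases.T        # representing coefficients per image

# Reconstruct from the compact representation: mean + coefficients * bases.
X_hat = mean + coeffs @ bases

# Storing k bases plus k coefficients per image replaces the full image set.
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

In the paper's pipeline, both the base images and the coefficient matrix would then be compressed further.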

2.
In image-based rendering with adjustable illumination, the data set contains a large number of pre-captured images under different sampling lighting directions. Instead of compressing each pre-captured image individually, we propose a two-level compression method. First, a few spherical harmonic (SH) coefficients are used to represent the plenoptic property of each pixel. The classical discrete-summation method for extracting SH coefficients requires the sampling lighting directions to be uniformly distributed over the whole sphere and cannot handle irregularly distributed directions; a constrained least-squares algorithm is proposed to handle this case. Afterwards, embedded zero-tree wavelet coding is used to remove the spatial redundancy in the SH coefficients. Simulation results show that our approach is superior to JPEG, JPEG2000, MPEG2, and a 4D wavelet compression method. A way to let users interactively control the lighting condition of a scene is also discussed.
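A minimal least-squares fit of first-order SH coefficients for one pixel at irregularly distributed lighting directions can be sketched as below. This is an unconstrained sketch (the paper adds constraints), and the observations are synthetic so that the fit can be checked:

```python
import numpy as np

# First-order real spherical harmonics evaluated at unit directions d = (x, y, z).
# (Constant factors follow the standard real-SH normalization.)
def sh_basis(d):
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    c0 = 0.5 * np.sqrt(1.0 / np.pi)
    c1 = 0.5 * np.sqrt(3.0 / np.pi)
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

rng = np.random.default_rng(1)
# Irregularly distributed sampling lighting directions on the sphere.
dirs = rng.standard_normal((50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Observed pixel intensities under each lighting direction (toy data,
# generated from known SH coefficients so the fit is verifiable).
true_coeffs = np.array([0.8, 0.1, -0.3, 0.2])
A = sh_basis(dirs)
b = A @ true_coeffs

# Least-squares estimate of the per-pixel SH coefficients; this replaces the
# discrete-summation rule, which assumes uniformly distributed directions.
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
```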

3.
An approach for filling-in blocks of missing data in wireless image transmission is presented. When compression algorithms such as JPEG are used as part of the wireless transmission process, images are first tiled into blocks of 8 × 8 pixels. When such images are transmitted over fading channels, the effects of noise can destroy entire blocks of the image. Instead of using common retransmission query protocols, we aim to reconstruct the lost data using correlation between the lost block and its neighbors. If the lost block contained structure, it is reconstructed using an image inpainting algorithm, while texture synthesis is used for the textured blocks. The switch between the two schemes is done in a fully automatic fashion based on the surrounding available blocks. The performance of this method is tested for various images and combinations of lost blocks. The viability of this method for image compression, in association with lossy JPEG, is also discussed.

4.
The authors propose an improved version of JPEG coding for compressing remote sensing images obtained by optical sensors onboard microsatellites. The approach involves expanding cloud features to include their cloud-land transitions, thereby simplifying their coding and subsequent compression. The system is fully automatic and appropriate for onboard implementation. Its improvement in coding stems from the realization that a large number of bits are used for coding the blocks that contain the transition regions between bright clouds, if present in the image, and the dark background. A fully automatic cloud-segmentation algorithm is therefore used to identify the external boundaries of the clouds and then smooth the corresponding blocks prior to coding. Further gains are achieved by modifying the quantization table used for coding the coefficients of the discrete cosine transform. Compared to standard JPEG, at the same level of reconstruction quality, the new method improves the compression ratio by 13-161%, depending upon the context and the amount of cloud present in the specific image. The results are demonstrated with the help of several real images obtained by the University of Surrey, U.K., satellites.

5.
Multispectral satellites that measure the reflected energy from different regions of the Earth generate multispectral (MS) images continuously. A new MS image of the same region is acquired at intervals determined by the satellite revisit period, and images captured at different times over the same region are called multitemporal images. Traditional compression methods generally exploit spectral and spatial correlation within a single MS image, but there is also temporal correlation between multitemporal images. To this end, we propose a novel generative adversarial network (GAN) based prediction method called MultiTempGAN for compression of multitemporal MS images. The proposed method defines a lightweight GAN-based model that learns to transform the reference image into the target image. The generator parameters of MultiTempGAN are saved for reconstruction in the receiver system. Because MultiTempGAN has few parameters, it provides efficiency in multitemporal MS image compression. Experiments were carried out on three Sentinel-2 MS image pairs belonging to different geographical regions. We compared the proposed method with JPEG2000-based conventional compression methods and three deep learning methods in terms of signal-to-noise ratio, mean spectral angle, mean spectral correlation, and Laplacian mean square error metrics. Additionally, we evaluated the change detection performance and visual maps of the methods. Experimental results demonstrate that MultiTempGAN not only achieves the best metric values among the compared methods at high compression ratios but also performs convincingly in change detection applications.

6.
纪强  石文轩  田茂  常帅 《红外与激光工程》2016,45(2):228004-0228004(7)
Since remote sensing images captured by satellites offer ever higher spatial and spectral resolution, multispectral images must often be compressed in practice. To improve multispectral image compression quality, an image registration method combining phase correlation and affine transformation is proposed, which effectively increases the correlation between spectral bands. For compression, a scheme is proposed that combines the Karhunen-Loeve (KL) transform, to remove inter-band correlation, with embedded two-dimensional wavelet coding. Compared with independent JPEG2000 compression of each band, the proposed method improves the Peak Signal to Noise Ratio (PSNR) of the decompressed images by 2.1 dB on average. Experimental results show that, at the same compression ratio, the proposed method achieves better image quality than independent JPEG2000 compression of each band.

7.
Information analysis and data compression of multispectral images
蒋青松  王建宇 《红外技术》2004,26(1):44-47,51
First, conditional entropy is used to analyze the information redundancy of imaging-spectrometer multispectral images along the spatial and spectral dimensions. The results show that the images are strongly correlated in the spatial dimension, while along the spectral dimension the information is non-stationary and the correlation is slightly weaker. An improved JPEG algorithm, I-JPEG, is then proposed by modifying the standard JPEG quantization table; it preserves image edge information while increasing the compression ratio. Building on I-JPEG, an I-JPEG/DPCM algorithm is further proposed, which uses lossless DPCM, with its good locality, to remove the correlation along the spectral dimension and further improve compression performance.

8.
Given the limited wireless bandwidth and the particular characteristics of infrared image sequences, engineering practice requires lossless compression of infrared image sequences. After comparing the compression performance and hardware complexity of four algorithms (JPEG 2000, SPIHT, JPEG + DPCM, and JPEG-LS), JPEG-LS was selected as the core compression algorithm for its superior performance and ease of hardware implementation, with a DSP and an FPGA as the core of the hardware platform. To verify the feasibility of the system...

9.
High dynamic range (HDR) images require a higher number of bits per color channel than traditional images, which poses problems for storage and transmission. Color-space quantization has been extensively studied to obtain compact bit encodings for each pixel, but it still yields prohibitively large files. This paper explores the possibility of further compressing HDR images quantized in color space. The compression schemes presented in this paper extend existing lossless image compression standards to encode HDR images. They separate HDR images in their bit encoding formats into images in the grayscale or RGB domain, which can be directly compressed by existing lossless compression standards such as JPEG, JPEG 2000, and JPEG-LS. The efficacy of the compression schemes is illustrated by extensive results of encoding a series of synthetic and natural HDR images. Significant bit savings of up to 53% are observed compared with the original HDR formats and an HD Photo compressed version, which benefits the storage and transmission of HDR images.

10.
To address small-region shifted double JPEG (SD-JPEG) compression tampering in composite JPEG images, an SD-JPEG tampering detection algorithm based on the conditional co-occurrence probability matrix (CCPM) is proposed. To reduce the influence of image content and enhance the SD-JPEG compression effect, the magnitude matrix of the JPEG-quantized discrete cosine transform (DCT) coefficients is first differenced and thresholded in four directions: horizontal, vertical, main diagonal, and anti-diagonal. The four thresholded difference matrices are then modeled with the CCPM, whose elements are taken as feature data; their dimensionality is reduced by principal component analysis (PCA), and a support vector machine (SVM) decides whether an image block has undergone SD-JPEG compression. Experimental results verify the effectiveness of the proposed algorithm.
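A toy sketch of the feature-extraction idea: difference the DCT-magnitude matrix, threshold (clip) the differences, and build a co-occurrence probability matrix. Only one direction is shown, with a plain (not conditional) normalization; all sizes and thresholds are illustrative, not the paper's values:

```python
import numpy as np

def cooccurrence_features(mag, t=3):
    # Horizontal difference of the DCT-magnitude matrix, clipped to [-t, t],
    # then a (2t+1) x (2t+1) co-occurrence probability matrix of adjacent pairs.
    d = np.clip(np.diff(mag.astype(int), axis=1), -t, t)
    pairs = np.stack([d[:, :-1].ravel(), d[:, 1:].ravel()])
    m = np.zeros((2 * t + 1, 2 * t + 1))
    for a, b in pairs.T:
        m[a + t, b + t] += 1
    return m / m.sum()

rng = np.random.default_rng(4)
mag = rng.integers(0, 10, size=(8, 8))   # toy stand-in for a DCT-magnitude block
feat = cooccurrence_features(mag)
```

In the full scheme these features from all four directions would be reduced by PCA and fed to an SVM classifier.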

11.
Two algorithms are presented for compressing image documents, with a high compression ratio for both colour and monochromatic compound document images. The proposed algorithms apply a new segmentation method to separate the text from the image in a compound document in which the text overlaps the background. The segmentation method classifies document images into three planes: the text plane, the background (non-text) plane, and the text's colour plane, each of which is processed using different compression techniques. The text plane is compressed using the pattern matching technique JB2. Wavelet transform and zerotree coding are used to compress the background plane and the text's colour plane. Assigning bits to the different planes yields high-quality compound document images with both a high compression ratio and well presented text. The proposed algorithms greatly outperform two well known image compression methods, JPEG and DjVu, and enable the effective extraction of text from a complex background, achieving a high compression ratio for compound document images.

12.
In this paper, we propose an algorithm for evaluating the quality of JPEG compressed images, called the psychovisually based image quality evaluator (PIQE), which measures the severity of artifacts produced by JPEG compression. The PIQE evaluates image quality using two psychovisually based fidelity criteria: blockiness and similarity. Blockiness is an index that measures the patterned square artifact created as a by-product of the lossy DCT-based compression technique used by JPEG and MPEG. Similarity measures the perceivable detail remaining after compression. Blockiness and similarity are combined into a single PIQE index used to assess quality. The PIQE model is tuned using subjective assessment results from five subjects evaluating six sets of images. To demonstrate the robustness of the model, a set of validation experiments is conducted by repeating the subjective assessment procedure with four new subjects evaluating five new image sets. The PIQE model is most accurate when the JPEG quantization factor is in the range for which JPEG compression is most effective.
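A generic blockiness index in the spirit described above can be sketched as follows. This is not the PIQE formula (which the abstract does not specify): it simply compares the mean absolute horizontal difference across 8-pixel block boundaries with that inside blocks:

```python
import numpy as np

def blockiness(img, b=8):
    # Ratio of the mean absolute horizontal difference across b-pixel block
    # boundaries to the mean absolute difference inside blocks.
    # Roughly 1 for smooth images; larger values indicate block artifacts.
    d = np.abs(np.diff(img.astype(float), axis=1))
    cols = np.arange(d.shape[1])
    boundary = (cols % b) == b - 1      # differences that straddle a boundary
    return d[:, boundary].mean() / max(d[:, ~boundary].mean(), 1e-12)

smooth = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth horizontal ramp
blocky = smooth.copy()
blocky[:, 32:] += 40                   # artificial block edge at column 32
```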

13.
14.
At the present time, block-transform coding is probably the most popular approach to image compression, and the compressed images are decoded using only the transmitted transform data. We formulate image decoding as an image recovery problem. According to this approach, the decoded image is reconstructed using not only the transmitted data but also the prior knowledge that images before compression do not display between-block discontinuities. A spatially adaptive image recovery algorithm is proposed based on the theory of projections onto convex sets. Apart from the data constraint set, this algorithm uses a new constraint set that enforces between-block smoothness. The novelty of this set is that it captures both the local statistical properties of the image and human perceptual characteristics. A simplified spatially adaptive recovery algorithm is also proposed, and an analysis of its computational complexity is presented. Numerical experiments demonstrate that the proposed algorithms work better than both the JPEG deblocking recommendation and our previous projection-based image decoding approach.
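The projections-onto-convex-sets mechanism can be illustrated in one dimension with two stand-in convex sets: a box (data) constraint and a bounded-difference (smoothness) constraint. This is a simplified sketch, not the paper's constraint sets, and the smoothness step is an approximate projection:

```python
import numpy as np

def proj_box(x, lo, hi):
    # Projection onto the data constraint set, modeled as a per-sample box.
    return np.clip(x, lo, hi)

def proj_smooth(x, max_step):
    # Approximate projection onto the smoothness set: shrink adjacent
    # differences that exceed max_step toward max_step, pair by pair.
    y = x.copy()
    for i in range(1, len(y)):
        d = y[i] - y[i - 1]
        if abs(d) > max_step:
            mid = 0.5 * (y[i] + y[i - 1])
            y[i - 1] = mid - 0.5 * np.sign(d) * max_step
            y[i] = mid + 0.5 * np.sign(d) * max_step
    return y

x = np.array([0.0, 5.0, 0.2, 4.8, 0.1])   # "blocky" 1-D toy signal
lo, hi = x - 1.0, x + 1.0                 # data must stay near decoded values
for _ in range(50):                        # alternating projections
    x = proj_smooth(x, max_step=3.0)
    x = proj_box(x, lo, hi)
```

The iterate converges to a signal that honours the data bounds while its adjacent differences are pulled down to the smoothness limit.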

15.
Common image compression techniques suitable for general purposes may be less effective for specific applications such as video surveillance. Since a stationed surveillance camera always targets a fixed scene, its captured images exhibit high consistency in content and structure. In this paper, we propose a surveillance image compression technique via dictionary learning that fully exploits the constant characteristics of a target scene. The method transforms images over over-complete dictionaries learned directly from image samples rather than a fixed basis, and can thus approximate an image with fewer coefficients. A set of dictionaries trained off-line is applied for sparse representation. An adaptive image blocking method is developed so that the encoder can represent an image in a texture-aware way. Experimental results show that the proposed algorithm significantly outperforms JPEG and JPEG 2000 in terms of both reconstructed image quality and compression ratio.

16.
A new segmentation-based lossless compression method is proposed for colour images. The method exploits the correlation among the three colour planes by treating each pixel as a vector of three components and performing region growing and difference operations on these vectors. The method outperforms the JPEG standard by an average of 0.68 bit/pixel on a database of 12 images.
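Region growing on colour vectors, as described above, can be sketched as a flood fill that admits 4-neighbours whose colour vector lies within a Euclidean tolerance of the seed colour; the image, seed, and tolerance below are illustrative:

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol):
    # Grow a region from `seed`, adding 4-neighbours whose colour vector is
    # within `tol` (Euclidean distance) of the seed colour.
    h, w, _ = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    ref = img[seed].astype(float)
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if np.linalg.norm(img[nr, nc].astype(float) - ref) <= tol:
                    mask[nr, nc] = True
                    q.append((nr, nc))
    return mask

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2, :2] = (200, 10, 10)          # a uniform red patch on black
mask = grow_region(img, (0, 0), tol=5.0)
```

The lossless coder would then encode vector differences within each grown region.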

17.
Sometimes image processing units inherit images in raster bitmap format only, so that processing must be carried out without knowledge of past operations that may have compromised image quality (e.g., compression). For further processing, it is useful not only to know whether the image has been previously JPEG compressed, but also to learn what quantization table was used. This is the case, for example, if one wants to remove JPEG artifacts or perform JPEG re-compression. In this paper, a fast and efficient method is provided to determine whether an image has been previously JPEG compressed. After detecting a compression signature, we estimate the compression parameters. Specifically, we develop a method for the maximum likelihood estimation of JPEG quantization steps. The quantizer estimation method is very robust: only sporadically is an estimated quantizer step size off, and then only by one value.
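One simple way to estimate a quantization step from the dequantized DCT coefficients of a single frequency (an assumed illustration, not the paper's maximum-likelihood derivation) is to pick the largest candidate step whose multiples explain nearly all observed values:

```python
import numpy as np

def estimate_quant_step(coeffs, q_max=64):
    # Dequantized JPEG DCT coefficients of one frequency are integer multiples
    # of the quantization step q. Score each candidate step by how well the
    # observed coefficients snap to its multiples, and keep the largest step
    # that explains nearly all of them.
    coeffs = np.asarray(coeffs, dtype=float)
    nonzero = coeffs[coeffs != 0]
    if nonzero.size == 0:
        return None   # step is unobservable from all-zero coefficients
    best = 1
    for q in range(2, q_max + 1):
        residual = np.abs(nonzero - q * np.round(nonzero / q))
        if np.mean(residual < 0.5) > 0.9:   # tolerate a few corrupted values
            best = q
    return best

# Toy data: coefficients quantized with step 12, plus one corrupted value.
coeffs = 12 * np.array([1, -3, 5, 2, -1, 7, 4, -6, 8, 2])
coeffs = np.append(coeffs.astype(float), 5.0)
step = estimate_quant_step(coeffs)
```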

18.
陈伟  曾勇  陈都峰 《信息技术》2004,28(12):14-18
The lossy JPEG compression algorithm and several classical edge-detection algorithms are briefly analyzed. Experiments on original and lossy-JPEG-compressed MRI images compare the visual quality and the discrepancy rate of the extracted edges, showing that lossy JPEG compression smooths and removes both edges and noise in MRI images. Because lossy JPEG behaves as a low-pass filter while the classical edge-detection algorithms analyzed behave as high-pass filters, the difference between the edges extracted before and after JPEG compression decreases as the threshold increases.

19.
Lossless image compression with multiscale segmentation

20.
We present an implementable three dimensional terrain-adaptive transform based bandwidth compression technique for multispectral imagery. The algorithm exploits the inherent spectral and spatial correlations in the data. The compression technique is based on the Karhunen-Loeve transformation for spectral decorrelation, followed by the standard JPEG algorithm for coding the resulting spectrally decorrelated eigen images. The algorithm is conveniently parameterized to accommodate reconstructed image fidelities ranging from near-lossless at about a 5:1 compression ratio to visually lossy beginning at about 30:1. The novelty of this technique lies in its unique capability to adaptively vary the characteristics of the spectral correlation transformation as a function of the local terrain. The spectral and spatial modularity of the algorithm architecture allows JPEG to be replaced by an alternate spatial coding procedure. The significant practical advantage of the proposed approach is that it is based on the standard and highly developed JPEG compression technology.
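The spectral Karhunen-Loeve stage can be sketched as below; the multispectral data here are synthetic, and the spatial JPEG coding of the eigen images is omitted:

```python
import numpy as np

# Spectral KLT: treat each pixel's band vector as a sample, decorrelate the
# bands with the eigenvectors of the spectral covariance, then code the
# resulting "eigen images" spatially (the paper uses JPEG for that stage).
rng = np.random.default_rng(3)
bands, npix = 6, 1024
# Toy multispectral data with strong inter-band correlation.
base = rng.standard_normal(npix)
X = np.stack([base * (i + 1) + 0.05 * rng.standard_normal(npix)
              for i in range(bands)])

mean = X.mean(axis=1, keepdims=True)
cov = np.cov(X)                        # bands x bands spectral covariance
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
kl = eigvecs[:, order].T               # KL transform (rows = eigenvectors)

eigen_images = kl @ (X - mean)         # spectrally decorrelated eigen images

# Energy compaction: the first eigen image carries almost all the variance,
# so later eigen images can be coded coarsely or discarded.
energy = eigen_images.var(axis=1)
```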
