Similar Documents
20 similar documents retrieved.
1.
张秀  周巍  段哲民  魏恒璐 《红外与激光工程》2019,48(6):626002-0626002(8)
To further improve the quality of image super-resolution reconstruction, and to address the noise present in images reconstructed by the nonlocally centralized sparse representation algorithm, an improved super-resolution reconstruction algorithm based on a Fields of Experts prior model is proposed. First, the Fields of Experts model learns whole-image prior knowledge from a training set to build a global prior model; the learned prior is then used in the nonlocally centralized sparse representation model to solve for the optimal sparse representation coefficients; finally, the high-resolution image estimate is obtained. Because the Fields of Experts parameters are updated synchronously with the super-resolution iterations, a suitable choice of prior constraint effectively enhances the reconstruction quality without a significant increase in computational complexity. Experimental results show that, compared with the nonlocally centralized sparse representation algorithm, the proposed algorithm achieves better peak signal-to-noise ratio on both noise-free and noisy degraded images and further improves the denoising of noisy images.
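A minimal sketch of the kind of update the abstract describes: a data-fidelity gradient step combined with the gradient of a Fields of Experts penalty. The filter bank `filters`, the weights `alphas`, the degradation operators `downsample`/`upsample`, and all step sizes are placeholder assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def foe_prior_grad(x, filters, alphas):
    """Gradient of a Fields of Experts penalty sum_i alpha_i * log(1 + (J_i * x)^2)."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for J, a in zip(filters, alphas):
        r = convolve(x, J, mode="reflect")             # filter response J_i * x
        psi = 2.0 * r / (1.0 + r ** 2)                 # derivative of log(1 + r^2)
        g += a * convolve(psi, np.flip(J), mode="reflect")  # flipped kernel ~ transpose of J_i
    return g

def sr_step(x, y, downsample, upsample, filters, alphas, lam=0.01, step=0.1):
    """One gradient step on ||downsample(x) - y||^2 + lam * FoE(x);
    downsample/upsample stand in for the degradation operator and its adjoint."""
    grad_data = upsample(downsample(x) - y)            # data-fidelity gradient (up to a constant factor)
    return x - step * (grad_data + lam * foe_prior_grad(x, filters, alphas))
```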

2.
It has been well established that critically sampled boundary pre-/postfiltering operators can improve coding efficiency and mitigate blocking artifacts in traditional discrete cosine transform-based block coders at low bit rates. In these systems, both the prefilter and the postfilter are square matrices. This paper proposes undersampled boundary pre- and postfiltering modules, where the pre-/postfilters are rectangular matrices. Specifically, the prefilter is a "fat" matrix, while the postfilter is a "tall" one. In this way, the prefiltered image is smaller than the original input image, which leads to improved compression performance and reduced computational complexity at low bit rates. The design and VLSI-friendly implementation of the undersampled pre-/postfilters are derived, and their relations to lapped transforms and filter banks are presented. Two design examples demonstrate the validity of the theory. Furthermore, image coding results indicate that the proposed undersampled pre-/postfiltering systems yield excellent and stable performance in low bit-rate image coding.
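A toy numerical illustration of the dimensions involved, assuming nothing about the paper's actual filter design: a "fat" m x n prefilter reduces n boundary samples to m coded values, and a "tall" n x m postfilter expands them back.

```python
import numpy as np

# Illustrative dimensions only; the paper designs specific rectangular filters.
n, m = 8, 6                        # boundary support length vs. retained (coded) samples
rng = np.random.default_rng(0)
A = rng.standard_normal((n, m))    # any full-column-rank tall matrix
P = np.linalg.pinv(A)              # "fat" m x n prefilter (left inverse of A)
T = A                              # "tall" n x m postfilter

boundary = rng.standard_normal(n)  # samples straddling a block boundary
reduced = P @ boundary             # prefiltered: only m values are coded
restored = T @ reduced             # postfilter expands back to n samples

# T @ P is the projector onto A's column space, so boundary content lying in that
# subspace is restored exactly while the coded size drops from n to m.
print(reduced.shape, restored.shape)   # (6,) (8,)
```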

3.
A hybrid scheme for low bit-rate coding of stereo images   (Total citations: 1; self-citations: 0, citations by others: 1)
We propose a hybrid scheme that implements an object-driven, block-based algorithm to achieve low bit-rate compression of stereo image pairs. The algorithm combines the simplicity and adaptability of existing block-based stereo image compression techniques with an edge/contour-based object extraction technique to determine the appropriate compression strategy for different areas of the right image. Unlike existing object-based coding such as MPEG-4 developed in the video compression community, the proposed scheme does not require any additional shape coding. Instead, each arbitrary shape is reconstructed from the matching object inside the left frame, which has been encoded by the standard JPEG algorithm and is therefore available at the decoder for the corresponding shapes in the right frame. The shape reconstruction for right-frame objects incurs no distortion, owing to the correlation between the left and right frames of a stereo pair and the nature of the proposed hybrid scheme. Extensive experiments show that the proposed algorithm achieves improvements of up to 20% in compression ratio over the existing block-based technique, while the reconstructed image quality is maintained at a competitive level in terms of both PSNR values and visual inspection.

4.
An efficient approach for face compression is introduced. Restricting the family of images to frontal facial mug shots enables us to first geometrically deform a given face into a canonical form in which the same facial features are mapped to the same spatial locations. Next, we break the image into tiles and model each image tile in a compact manner. Modeling the tile content relies on clustering the same tile location across many training images. A tree of vector-quantization dictionaries is constructed per location, and lossy compression is achieved by allocating bits according to the significance of each tile. Repeating this modeling/coding scheme over several scales, the resulting multiscale algorithm is shown to compress facial images at very low bit rates while maintaining high visual quality, significantly outperforming JPEG-2000.
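A rough sketch of the per-tile clustering idea under stated assumptions: a flat k-means codebook per tile position stands in for the paper's tree-structured VQ dictionaries, significance-based bit allocation is not shown, and the tile size, codebook size, and function names are all illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def train_tile_codebooks(aligned_faces, tile=8, k=64):
    """Train one VQ codebook per tile position from geometrically aligned training faces
    (assumes many more training faces than k)."""
    H, W = aligned_faces[0].shape
    codebooks = {}
    for i in range(0, H, tile):
        for j in range(0, W, tile):
            patches = np.stack([f[i:i+tile, j:j+tile].ravel() for f in aligned_faces])
            codebooks[(i, j)] = kmeans2(patches.astype(float), k, minit="++")[0]
    return codebooks

def encode_face(face, codebooks, tile=8):
    """Encode each tile as the index of its nearest codeword; the bit cost per tile
    could then be varied according to tile significance, as the abstract describes."""
    indices = {}
    for (i, j), cb in codebooks.items():
        patch = face[i:i+tile, j:j+tile].ravel().astype(float)
        indices[(i, j)] = int(vq(patch[None, :], cb)[0][0])
    return indices
```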

5.
An algorithm for lost-signal restoration in block-based still image and video sequence coding is presented. Imperfect transmission of block-coded images results in lost blocks: the received image is flawed by missing square pixel regions that are readily perceived by human vision, even in real-time video sequences. Error concealment aims to mask the effect of missing blocks by using temporal or spatial interpolation to create a subjectively acceptable approximation to the true error-free image. This paper presents a spatial interpolation algorithm that conceals lost image blocks using only intra-frame information. It exploits spatially correlated edge information from a large local neighborhood of surrounding pixels to restore missing blocks. The algorithm is a Gerchberg (1974)-type spatial-domain/spectral-domain constraint-satisfying iterative process, and may be viewed as an alternating projections onto convex sets (POCS) method.
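A minimal sketch of a Gerchberg-style alternating projection, assuming a plain circular band-limit as the spectral constraint; the paper's edge-adaptive constraints are not reproduced, and the band radius and iteration count are arbitrary choices.

```python
import numpy as np
from scipy.fft import fft2, ifft2

def conceal_block(region, known_mask, band_radius=8, iters=50):
    """Alternating projections: re-impose the known surrounding pixels in the spatial
    domain, keep only low-frequency content in the spectral domain."""
    h, w = region.shape
    fy = np.fft.fftfreq(h)[:, None] * h
    fx = np.fft.fftfreq(w)[None, :] * w
    lowpass = (fy ** 2 + fx ** 2) <= band_radius ** 2     # circular band-limit constraint

    est = np.where(known_mask, region, region[known_mask].mean())  # initialize lost pixels
    for _ in range(iters):
        spec = fft2(est) * lowpass                         # spectral-domain projection
        est = np.real(ifft2(spec))
        est[known_mask] = region[known_mask]               # spatial-domain projection
    return est
```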

6.
Focusing on the problems that differential spatial modulation (DSM) cannot obtain transmit diversity and has high decoding complexity, a new differential spatial modulation scheme based on the orthogonal space-time block code, called OSTBC-DSM, was proposed. The scheme uses two matrices: a spatial modulation matrix and a symbol matrix. The former activates different transmit antennas through the positions of its nonzero elements, while the latter is constructed using orthogonal space-time block codes (OSTBC) as the basic code block. The proposed scheme obtains full transmit diversity and higher spectral efficiency than conventional DSM schemes, and OSTBC-DSM supports linear maximum-likelihood (ML) decoding. Simulation results show that, at various spectral efficiencies, the proposed OSTBC-DSM scheme achieves better bit error rate (BER) performance than the other schemes.
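A small sketch of the basic ingredient the abstract names: an Alamouti-type OSTBC symbol matrix placed on the rows of the activated transmit antennas. The bit-to-antenna-pair mapping, the antenna count, and the differential encoding across successive blocks are omitted or assumed for illustration.

```python
import numpy as np

def alamouti_block(s1, s2):
    """2x2 orthogonal space-time block code (Alamouti) used as the basic code block."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

def ostbc_sm_matrix(s1, s2, active_pair, n_tx=4):
    """Place the OSTBC block on the rows of the two activated antennas; the choice of
    `active_pair` carries the spatial-modulation bits (illustrative mapping only)."""
    X = np.zeros((n_tx, 2), dtype=complex)
    X[list(active_pair), :] = alamouti_block(s1, s2)
    return X

# Example: QPSK symbols transmitted on antennas (0, 2) of a 4-antenna array
X = ostbc_sm_matrix((1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2), (0, 2))
```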

7.
Sonar images usually suffer from speckle noise, which results in poor visual quality. To improve sonar imaging quality, removing or reducing this speckle is an important and difficult task. In this paper, the imaging principle and noise characteristics of side-scan sonar (SSS) are analyzed, and five typical probability distribution functions are used to fit the seabed reverberation. Based on the experimental comparison, the Gamma distribution is selected to model the reverberation-induced noise in SSS images. A fields of experts denoising algorithm based on the Gamma distribution (Gamma FoE) is then proposed for SSS image denoising. To perceive and measure the denoising effect, the Fast Noise Variance Estimation (FNVE) image noise estimate and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) image quality measure are used as evaluation indexes. The SSS image denoising experiments show that the Gamma FoE algorithm denoises SSS images more effectively than the other algorithms compared.
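A hedged sketch of the distribution-selection step: fit a few candidate speckle models to homogeneous reverberation samples and rank them by log-likelihood. The use of scipy.stats, the candidate list (three of the paper's five), and the synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def fit_reverberation(samples):
    """Fit candidate models to seabed-reverberation intensities and rank by log-likelihood."""
    candidates = {
        "gamma":    stats.gamma,
        "rayleigh": stats.rayleigh,
        "lognorm":  stats.lognorm,
    }
    scores = {}
    for name, dist in candidates.items():
        params = dist.fit(samples, floc=0)                  # fix location at 0 for intensities
        scores[name] = np.sum(dist.logpdf(samples, *params))
    return max(scores, key=scores.get), scores

# Example with synthetic Gamma-distributed reverberation samples
samples = stats.gamma.rvs(a=2.0, scale=0.5, size=10000, random_state=0)
best, scores = fit_reverberation(samples)
```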

8.
Because the available features are limited, the performance of conventional compressed-domain retrieval based on Euclidean distance is unsatisfactory. This paper introduces distance metric learning into compressed-domain image retrieval and proposes a discrete cosine transform (DCT) domain retrieval method for JPEG images based on distance metric learning. First, a more effective DCT-domain feature extraction method is proposed; then, distance metric learning is used to train a more effective metric matrix for retrieval. Image retrieval experiments on Corel5000 show that the new method effectively improves retrieval accuracy.
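A minimal sketch of what replacing Euclidean distance with a learned metric looks like at query time; the metric-learning algorithm that produces the positive semi-definite matrix M and the exact DCT-domain feature definition are not shown, and the function names are illustrative.

```python
import numpy as np

def mahalanobis_distance(x, y, M):
    """Learned metric d_M(x, y) = sqrt((x - y)^T M (x - y)), with M positive semi-definite."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

def rank_database(query_feat, db_feats, M):
    """Rank compressed-domain (DCT) feature vectors by the learned distance (ascending)."""
    dists = [mahalanobis_distance(query_feat, f, M) for f in db_feats]
    return np.argsort(dists)
```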

9.
Blurred palmprint recognition based on the discrete cosine transform and principal-line block energy   (Total citations: 3; self-citations: 3, citations by others: 3)
林森  苑玮琦  吴微  方婷 《光电子.激光》2012,(11):2200-2206
To address the image blur caused by defocus during contactless palmprint acquisition, a novel recognition method is proposed. The discrete cosine transform (DCT) is used to extract low-frequency coefficients in the frequency domain as stable features, and an improved local gray-level minimum method extracts the stable spatial-domain feature, namely the principal lines; the principal-line energy is then computed block by block to form a feature vector. The stable frequency- and spatial-domain features are fused, and recognition is performed using the Euclidean distance between vectors. Tests on the SUT-D blurred palmprint database show that, compared with the features before fusion and with other typical recognition methods, the proposed algorithm achieves a recognition rate of up to 96.0578%, demonstrating its effectiveness and superiority and providing a feasible approach to blurred palmprint recognition.

10.
Multidimensional Systems and Signal Processing - To overcome the limitations of the traditional fields of experts (FoE) model, which blurs image edges and texture during denoising...

11.
Signal detection for space-time block coded multicarrier code-division multiple access (MC-CDMA) systems is studied. Decorrelating filtering of the received data vector yields, for each user, reduced-dimension signals at odd and even time instants that are free of multiple-access interference (MAI). A blind channel estimate is obtained from the rank-1 approximation of the matrix formed by the whitened reduced-dimension data, and the complex scalar ambiguity is resolved using the finite-alphabet property of the users' transmitted symbols. With the channel responses of all users estimated, signal detection is performed with a least-squares algorithm. Simulation results verify the effectiveness of the proposed algorithm.
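A short sketch of the rank-1 approximation step only, assuming the whitened reduced-dimension data matrix D is already available; resolving the scalar ambiguity from the symbol alphabet and the least-squares detector are not shown.

```python
import numpy as np

def rank1_channel_estimate(D):
    """Best rank-1 approximation of the whitened reduced-dimension data matrix D via SVD;
    the dominant left singular vector serves as the channel estimate up to a complex
    scalar ambiguity, which the finite symbol alphabet can resolve."""
    U, s, Vh = np.linalg.svd(D, full_matrices=False)
    h_hat = U[:, 0] * np.sqrt(s[0])                # unnormalized channel direction
    rank1 = s[0] * np.outer(U[:, 0], Vh[0])        # the rank-1 approximation itself
    return h_hat, rank1
```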

12.
Reversible data hiding in encrypted images both protects the cover content from disclosure and conveys additional information. This paper proposes a high-capacity reversible data hiding algorithm for encrypted images based on block capacity labels (BCL). The scheme preprocesses the image before encryption: the image is first divided into two regions, a reference-pixel region and a predicted-pixel region; the predicted-pixel region is then partitioned into non-overlapping blocks, and the BCL of each block is determined by the proposed algorithm. The BCLs are embedded after the image is encrypted, producing the encrypted image. In the data-embedding stage, the secret data are embedded according to the BCLs and the data-hiding key. Experiments on the BOWS-2 dataset give an average embedding capacity of 3.8068 bpp; compared with existing methods, the proposed method achieves a higher secret-data embedding capacity and allows perfect reconstruction of the original image.

13.
At present, almost all digital images are stored and transferred in compressed form, and discrete cosine transform (DCT)-based compression remains one of the most important data compression techniques thanks to the efforts of JPEG. To save computation and memory, it is desirable to implement image processing operations such as feature extraction, image indexing, and pattern classification directly in the DCT domain. To this end, we present a generalized analysis of the spatial relationships between the DCT of any block and the DCTs of its sub-blocks. The results reveal that the DCT coefficients of any block can be obtained directly from the DCT coefficients of its sub-blocks and that the inter-block relationship is linear. This is useful for extracting global features in the compressed domain for general image processing tasks, such as those widely used in pyramid algorithms and image indexing. In addition, because the coefficient matrix of the linear combination is sparse, the computational complexity of the proposed algorithms is significantly lower than that of existing methods.
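A quick numerical check of the linear relationship the abstract states, shown in 1-D for brevity (the 2-D case is separable). The block sizes are arbitrary, and the conversion matrix A is built explicitly for illustration rather than taken from the paper's closed-form derivation.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import block_diag

N = 4
# Orthonormal DCT-II matrices, built by transforming the identity column-wise
D_N  = dct(np.eye(N),   type=2, norm="ortho", axis=0)   # N-point DCT matrix
D_2N = dct(np.eye(2*N), type=2, norm="ortho", axis=0)   # 2N-point DCT matrix

# Linear map from the two sub-block DCTs to the full-block DCT:
# X = D_2N [x1; x2] = D_2N blockdiag(D_N^-1, D_N^-1) [X1; X2], and D_N^-1 = D_N^T
A = D_2N @ block_diag(D_N.T, D_N.T)

x = np.random.default_rng(0).standard_normal(2 * N)
X_direct   = dct(x, norm="ortho")
X_from_sub = A @ np.concatenate([dct(x[:N], norm="ortho"), dct(x[N:], norm="ortho")])
assert np.allclose(X_direct, X_from_sub)   # full-block DCT recovered from sub-block DCTs
```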

14.
Text extraction is an important initial step in digitizing historical documents. In this paper, we present a text extraction method for historical Tibetan document images based on block projections. Text extraction is treated as a text-area detection and location problem. The image is divided into equal-sized blocks, and the blocks are filtered using connected-component category information and corner-point density. By analyzing the projections of the filtered blocks, the approximate text areas can be located and the text regions extracted. Experiments on a dataset of historical Tibetan documents demonstrate the effectiveness of the proposed method.
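A rough sketch of the block-projection idea under simplifying assumptions: a foreground-density test stands in for the paper's connected-component and corner-density filters, and the block size and thresholds are illustrative.

```python
import numpy as np

def text_blocks_by_projection(binary_img, block=64, min_fill=0.01, max_fill=0.5):
    """Divide a binarized page (foreground = 1) into equal blocks, keep blocks whose
    foreground density looks text-like, and locate text rows from the projection
    of the kept blocks."""
    H, W = binary_img.shape
    keep = np.zeros((H // block, W // block), dtype=bool)
    for bi in range(H // block):
        for bj in range(W // block):
            fill = binary_img[bi*block:(bi+1)*block, bj*block:(bj+1)*block].mean()
            keep[bi, bj] = min_fill < fill < max_fill
    row_profile = keep.sum(axis=1)              # kept blocks per block-row
    text_rows = np.flatnonzero(row_profile > 0) # approximate text-area rows
    return keep, text_rows
```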

15.
This paper proposes a new high-capacity reversible data hiding scheme in encrypted images. The content owner first divides the cover image into blocks; block permutation and a bitwise stream cipher are then applied to encrypt the image. Upon receiving the encrypted image, the data hider analyzes the image blocks and adaptively decides an optimal block-type labeling strategy. Based on the adaptive block encoding, the image is compressed to vacate spare room, and the secret data are encrypted and embedded into the spare space. According to the granted authority, the receiver can restore the cover image, extract the secret data, or do both. Experimental results show that the embedding capacity of the proposed scheme outperforms state-of-the-art schemes. In addition, the security level and robustness of the proposed scheme are also investigated.
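A minimal sketch of the encryption stage only (block permutation followed by a bitwise stream cipher). A seeded NumPy generator stands in for a proper key-derived cipher, so this is illustrative rather than cryptographically sound, and the block size is an assumption.

```python
import numpy as np

def encrypt_image(img, key, block=8):
    """Content-owner encryption: permute non-overlapping blocks, then XOR every pixel
    with a keystream (uint8 image; dimensions assumed divisible by `block`)."""
    H, W = img.shape
    rng = np.random.default_rng(key)
    blocks = [img[i:i+block, j:j+block]
              for i in range(0, H, block) for j in range(0, W, block)]
    perm = rng.permutation(len(blocks))                    # block permutation
    shuffled = np.zeros_like(img)
    cols = W // block
    for dst, src in enumerate(perm):
        i, j = (dst // cols) * block, (dst % cols) * block
        shuffled[i:i+block, j:j+block] = blocks[src]
    keystream = rng.integers(0, 256, size=img.shape, dtype=np.uint8)
    return shuffled ^ keystream                            # bitwise stream cipher
```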

16.
We investigate maximum-likelihood (ML) sequence estimation for space-time block coded systems without assuming channel knowledge. The quadratic form of the ML receiver in this case does not readily lend itself to efficient implementation. Under quasi-static channel conditions, however, the likelihood function reduces to a simple form similar to the classical correlation receiver in matrix notation. It also admits a recursive expression that can be implemented by a Viterbi-type algorithm with reasonable complexity. Although the receiver is suboptimum in the non-static case, its performance is close to optimum over a range of signal-to-noise ratios.

17.
When a simple loop antenna is used to measure low-frequency magnetic fields, systematic errors (of the order of 20%) can arise if a low-impedance (50 Ω) meter is used in place of a high-impedance type. It is pointed out that recommended measurement practices and procedures for radiated emissions (such as the SAE specification) do not account for this error. Theoretical results quantifying the systematic error over a frequency range of 30 Hz to 25 kHz are presented. Supporting experimental data on the loop impedance measurements, and on the difference in loop-antenna output voltage observed with low- and high-impedance meters, confirm the theoretical analysis.
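A worked example of the loading effect behind this error, using a simple voltage-divider model; the loop resistance, inductance, and frequency below are assumed values for illustration, not figures from the paper.

```python
import numpy as np

# The meter reads V = V_oc * Z_meter / (Z_meter + Z_loop), so a low-impedance meter
# under-reads whenever |Z_loop| is comparable to Z_meter.
f = 25e3                      # 25 kHz, upper end of the 30 Hz - 25 kHz range considered
R_loop, L_loop = 1.0, 250e-6  # assumed loop resistance and inductance
Z_loop = R_loop + 2j * np.pi * f * L_loop

for Z_meter in (50.0, 1e6):                       # low- vs. high-impedance meter
    ratio = abs(Z_meter / (Z_meter + Z_loop))
    print(f"Z_meter = {Z_meter:>9.0f} ohm: reads {ratio:.3f} of the open-circuit voltage")
# With these assumed values the 50-ohm meter under-reads by roughly 20%,
# the same order as the systematic error discussed, while the high-impedance
# meter reads essentially the open-circuit voltage.
```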

18.
Images play an irreplaceable role, compared with text and sound, in underwater data collection and transmission. However, underwater acoustic communication offers only limited bandwidth, which cannot accommodate large image data, so compressing images before transmission is an unavoidable step in underwater image communication. Natural-image compression methods are usually applied directly to the underwater scene, yet underwater images degrade differently from natural ones because of the optical transmission properties of water: low illumination underwater causes more serious blurring and color fading than in air. It is therefore a great challenge to decrease the bit rate of an underwater image while preserving the quality of the compressed image as much as possible. In this paper, the human visual system (HVS) is taken into account during both the compression and the evaluation stages of underwater image communication, and a new methodology for underwater image compression is presented. First, taking the HVS into account, a chrominance perception operator is proposed to neglect the imperceptible chrominance shifts that are widespread in underwater imaging, improving the compression rate. Second, the depth of field (DOF) of underwater images is usually shallow and most usable images contain targets, so an ROI extraction algorithm based on Boolean map detection is applied to reduce the bit rate of the compressed image. Furthermore, underwater images are grainy and of low contrast, which means that degradation in some regions of the image is not perceived; a just-noticeable-difference (JND) sensing algorithm based on the spatial- and frequency-domain masking properties of the HVS is therefore also incorporated. Combining these three aspects, hybrid wavelet and asymmetric coding are used together for underwater image compression, so that the image has better quality and less redundancy. Experiments show that the proposed method makes full use of the inherent characteristics of underwater images and exploits their visual redundancy without reducing the perceived quality of the reconstructed images.

19.
In this paper, a sampling-adaptive block compressed sensing with smoothed projected Landweber reconstruction based on edge detection (SA-BCS-SPL-ED) algorithm is presented. The algorithm takes full advantage of the characteristics of block compressed sensing by assigning each block a sampling rate according to its texture complexity. Block complexity is measured by the variance of the block's texture gradient: blocks with large variance receive high sampling rates, and blocks with small variance receive low sampling rates. Meanwhile, to avoid over-sampling and under-sampling, maximum and minimum sampling rates are set for each block. Through an iterative procedure, the actual sampling rate of the whole image approximately equals the target value. For the directional transforms, the discrete cosine transform (DCT), dual-tree discrete wavelet transform (DDWT), discrete wavelet transform (DWT), and contourlet transform (CT) are used in the experiments. Experimental results show that, compared with block compressed sensing with smoothed projected Landweber (BCS-SPL), the proposed algorithm performs much better at the same sampling rate on images with simple textures and even on those with complicated textures. Moreover, SA-BCS-SPL-ED-DDWT works well for most images, while SA-BCS-SPL-ED-CT tends to be better only for images with more complicated textures.
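A hedged sketch of the sampling-rate assignment step only: per-block texture-gradient variance is mapped to a sampling rate, clipped to [r_min, r_max], and rescaled toward the target mean rate. The measurement operator and SPL reconstruction are not shown, and the block size, rate bounds, and iteration count are placeholder assumptions.

```python
import numpy as np

def assign_block_rates(img, block=32, target=0.3, r_min=0.1, r_max=0.7):
    """Allocate per-block sampling rates proportional to texture-gradient variance,
    clip them to [r_min, r_max], and nudge the mean rate back toward the target
    (a few fixed-point passes stand in for the paper's iterative adjustment)."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gy, gx)
    H, W = img.shape
    var = np.array([[grad[i:i+block, j:j+block].var()
                     for j in range(0, W, block)]
                    for i in range(0, H, block)])
    rates = target * var / var.mean()            # complex blocks get higher rates
    for _ in range(10):
        rates = np.clip(rates, r_min, r_max)     # avoid over- and under-sampling
        rates *= target / rates.mean()           # keep the image-level rate near target
    return np.clip(rates, r_min, r_max)
```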

20.
The shape-adaptive discrete cosine transform (SA-DCT) can be computed on a support of arbitrary shape, yet it retains a computational complexity comparable to that of the usual separable block-DCT (B-DCT). Despite its near-optimal decorrelation and energy compaction properties, application of the SA-DCT has been rather limited, targeted nearly exclusively at video compression. In this paper, we present a novel approach to image filtering based on the SA-DCT. We use the SA-DCT in conjunction with the anisotropic local polynomial approximation-intersection of confidence intervals (LPA-ICI) technique, which defines the shape of the transform's support in a pointwise adaptive manner. The thresholded or attenuated SA-DCT coefficients are used to reconstruct a local estimate of the signal within the adaptive-shape support. Since supports corresponding to different points generally overlap, the local estimates are averaged together using adaptive weights that depend on the region's statistics. This approach can be used for various image-processing tasks; here we consider, in particular, image denoising and the deblocking and deringing of block-DCT compressed images. A special structural constraint in luminance-chrominance space is also proposed to enable accurate filtering of color images. Simulation experiments show state-of-the-art quality of the final estimate, both in terms of objective criteria and visual appearance. Thanks to the adaptive support, reconstructed edges are clean, and no unpleasant ringing artifacts are introduced by the fitted transform.
