Related Literature
20 related records found.
1.
To address the low sharpness, poor contrast, and blurred electrode contours of lithium battery X-ray images, an enhancement algorithm based on an improved multi-scale Retinex (MSR) is proposed. First, within the traditional multi-scale Retinex framework, bilateral filtering is used to estimate the illumination component, and globally adaptive dynamic-range compression based on the mean logarithmic luminance is applied. The improved MSR algorithm then extracts the reflectance component of the image; the Sobel operator is used to obtain the vertical gradient of the reflectance, which is fused with the reflectance to enhance image detail. The fused image is contrast-enhanced with the CLAHE algorithm, and finally bilateral filtering is applied for denoising to obtain the enhanced result. Experiments on a self-built dataset show that the proposed method significantly improves the sharpness and contrast of lithium battery X-ray images and clearly strengthens the cathode-line edge contours, outperforming the traditional multi-scale Retinex algorithm in highlighting edge detail and enhancing contrast.
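A minimal sketch of this pipeline using standard OpenCV/NumPy building blocks is shown below. It is not the authors' implementation: the global log-luminance dynamic-range compression step is omitted, and the filter sizes, Retinex scales, and CLAHE clip limit are illustrative assumptions.

```python
# Sketch of the enhancement pipeline described above (illustrative, not the
# authors' code): bilateral illumination estimate, log-domain MSR reflectance,
# Sobel-gradient detail fusion, CLAHE, and a final bilateral denoise.
import cv2
import numpy as np

def enhance_xray(gray, sigmas=(15, 80, 250)):
    img = gray.astype(np.float32) + 1.0          # avoid log(0)
    # Multi-scale reflectance: log(image) - log(illumination estimate).
    reflect = np.zeros_like(img)
    for s in sigmas:
        illum = cv2.bilateralFilter(img, d=9, sigmaColor=s, sigmaSpace=s)
        reflect += (np.log(img) - np.log(illum + 1.0)) / len(sigmas)
    reflect = cv2.normalize(reflect, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Fuse the vertical-direction Sobel gradient back into the reflectance.
    grad_y = cv2.Sobel(reflect, cv2.CV_32F, 0, 1, ksize=3)
    fused = cv2.addWeighted(reflect.astype(np.float32), 1.0, np.abs(grad_y), 0.5, 0)
    fused = np.clip(fused, 0, 255).astype(np.uint8)
    # Contrast enhancement with CLAHE, then bilateral denoising.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    out = clahe.apply(fused)
    return cv2.bilateralFilter(out, d=9, sigmaColor=50, sigmaSpace=50)

# Usage: enhanced = enhance_xray(cv2.imread("battery.png", cv2.IMREAD_GRAYSCALE))
```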

2.
This paper presents a new CNN-based architecture for real-time video coding applications. The proposed approach, by exploiting object-oriented CNN algorithms and MPEG encoding capabilities, enables a low bit-rate encoder/decoder to be designed. Simulation results using the Claire video sequence show the effectiveness of the proposed scheme. Copyright © 2005 John Wiley & Sons, Ltd.

3.
To overcome the block and blurring artifacts that arise in current remote sensing image fusion algorithms because the edge features of pixels within a region are ignored, a remote sensing image fusion algorithm based on an edge-constraint model is designed on top of the non-subsampled Shearlet transform (NSST). First, the multispectral (MS) image is decomposed with the IHS transform to extract its intensity component. The panchromatic (PAN) image and the intensity component are then decomposed with the NSST to obtain their high- and low-frequency coefficients. A fusion rule for the low-frequency coefficients is built from the spatial-frequency characteristics of the images, while an edge-constraint model constructed from the regional average gradient and the edge energy of pixels within each region is used to fuse the high-frequency coefficients. Finally, the fused low- and high-frequency coefficients are passed through the inverse NSST and inverse IHS transforms to obtain the fused image. Experimental results show that, compared with current remote sensing fusion methods, the proposed algorithm produces fused images with higher sharpness, better preserves the spectral characteristics, and eliminates block and blurring artifacts.
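As a rough illustration of the overall component-substitution structure, the sketch below uses a plain discrete wavelet transform (PyWavelets) in place of the non-subsampled Shearlet transform and simple average/max-absolute rules in place of the spatial-frequency and edge-constraint fusion rules, so it mirrors only the structure of the method, not its actual rules.

```python
# Simplified IHS-style pan-sharpening sketch (stand-in for the method above).
import cv2
import numpy as np
import pywt

def fuse_ms_pan(ms_bgr, pan_gray):
    ms = ms_bgr.astype(np.float32)
    pan = pan_gray.astype(np.float32)
    intensity = ms.mean(axis=2)                      # crude "I" of an IHS split
    # Decompose intensity and PAN; fuse low-pass by averaging, detail by max-abs.
    iL, iH = pywt.dwt2(intensity, "db2")
    pL, pH = pywt.dwt2(pan, "db2")
    fusedL = 0.5 * (iL + pL)
    fusedH = tuple(np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(iH, pH))
    new_i = pywt.idwt2((fusedL, fusedH), "db2")
    new_i = cv2.resize(new_i, (intensity.shape[1], intensity.shape[0]))
    # Inject the fused intensity back into each band (component substitution).
    gain = (new_i + 1e-6) / (intensity + 1e-6)
    return np.clip(ms * gain[..., None], 0, 255).astype(np.uint8)
```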

4.
Image compression aims to represent images well while using as few bits as possible. Several lossy image coders are designed without considering the nature of the image: important information (e.g., edges) can be discarded at the quantization stage, even though that information is needed for image understanding and recognition. Saving storage space while preserving important image information becomes imperative in areas such as medicine, mobile devices, and pattern recognition systems. This article addresses the design of an edge-preserving lossy image coder by means of wavelets and contourlets. The results obtained demonstrate the superior performance of the proposed coder against traditional edge-preserving coders. The proposed coder ensures that the edges of an image are preserved even at very low bit rates, and the decompressed images can be successfully used for subsequent pattern recognition tasks.
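A hedged sketch of the edge-preserving idea is given below, using wavelets only (the contourlet stage and the entropy coder are omitted); the edge detector, wavelet, and threshold are assumptions rather than the coder described in the paper.

```python
# Rough edge-preserving wavelet coding sketch: detail coefficients are zeroed
# aggressively except where an edge map says the underlying pixels are edges.
import cv2
import numpy as np
import pywt

def edge_preserving_compress(gray, levels=3, thresh=20.0):
    edges = cv2.Canny(gray, 80, 160)                       # binary edge map
    coeffs = pywt.wavedec2(gray.astype(np.float32), "db4", level=levels)
    new_coeffs = [coeffs[0]]                               # keep approximation
    for detail in coeffs[1:]:
        kept = []
        for band in detail:
            # Edge mask resized to this subband's resolution.
            mask = cv2.resize(edges, (band.shape[1], band.shape[0])) > 0
            q = np.where(np.abs(band) > thresh, band, 0.0)  # coarse quantization
            kept.append(np.where(mask, band, q))            # keep edge coefficients
        new_coeffs.append(tuple(kept))
    rec = pywt.waverec2(new_coeffs, "db4")
    rec = rec[:gray.shape[0], :gray.shape[1]]
    return np.clip(rec, 0, 255).astype(np.uint8)
```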

5.
The evolution of digital mobile communications, along with the increase in integrated-circuit complexity, has resulted in frequent use of error control coding to protect information against transmission errors. Soft decision decoding offers better error performance than hard decision decoding, but at the expense of decoding complexity. The maximum a posteriori (MAP) decoder is a decoding algorithm that processes soft information and aims at minimizing the bit error probability. In this paper, a matrix approach is presented which analytically describes MAP decoding of linear block codes in an original domain and a corresponding spectral domain. The trellis-based decoding approach belongs to the class of forward-only recursion algorithms. It is applicable to high-rate block codes with a moderate number of parity bits and allows a simple implementation in the spectral domain in terms of storage requirements and computational complexity. In particular, the required storage space can be significantly reduced compared to conventional BCJR-based decoding algorithms. Copyright © 2002 John Wiley & Sons, Ltd.
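To make the MAP decision rule concrete, the sketch below performs brute-force bitwise MAP decoding of a small (7,4) Hamming code over a BPSK/AWGN channel by enumerating all codewords; it illustrates only the decision rule, not the paper's forward-only trellis recursion or spectral-domain implementation.

```python
# Brute-force bitwise MAP decoding of a (7,4) Hamming code (illustrative only).
import numpy as np
from itertools import product

G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])          # systematic (7,4) Hamming code

codebook = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)])

def map_decode(received, noise_var):
    """Bitwise MAP estimate of the codeword from noisy BPSK samples."""
    symbols = 1.0 - 2.0 * codebook                     # bit 0 -> +1, bit 1 -> -1
    # Log-likelihood of each codeword under the AWGN model (equal priors).
    loglik = -np.sum((received - symbols) ** 2, axis=1) / (2.0 * noise_var)
    weights = np.exp(loglik - loglik.max())            # stabilized likelihoods
    # Posterior probability that each code bit equals 1, marginalized per bit.
    p1 = (weights[:, None] * codebook).sum(axis=0) / weights.sum()
    return (p1 > 0.5).astype(int)

# Example: transmit the all-zero codeword (+1 symbols) with mild noise.
rng = np.random.default_rng(0)
rx = np.ones(7) + rng.normal(scale=0.5, size=7)
print(map_decode(rx, noise_var=0.25))
```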

6.
A fingerprint image enhancement method based on the Curvelet transform
The Curvelet transform is anisotropic and multi-directional, making it an optimal basis for representing edges along piecewise-smooth curves. To ensure the robustness of fingerprint feature extraction, the original noisy fingerprint image must be preprocessed to sharpen the ridge lines, increase the contrast between ridges and valleys, and reduce spurious information. This paper proposes a fingerprint image enhancement method based on the Curvelet transform: linear gray-level stretching is applied to the low-frequency coefficients to enhance contrast, while the high-frequency coefficients are denoised by thresholding. Simulations show that the method outperforms the commonly used spatial-domain histogram equalization and wavelet-domain enhancement, yielding good visual quality.
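A hedged sketch of the same low-frequency-stretch / high-frequency-threshold idea is shown below, with a plain wavelet transform standing in for the Curvelet transform (no Curvelet implementation is assumed); the stretch gain and threshold are illustrative.

```python
# Wavelet-domain stand-in for the Curvelet enhancement described above.
import numpy as np
import pywt

def enhance_fingerprint(gray, wavelet="db4", level=2, thresh=10.0, gain=1.3):
    coeffs = pywt.wavedec2(gray.astype(np.float32), wavelet, level=level)
    cA = coeffs[0]
    # Linear contrast stretch of the low-frequency band around its mean.
    cA = gain * (cA - cA.mean()) + cA.mean()
    # Soft-threshold the detail (high-frequency) bands to suppress noise.
    details = [tuple(pywt.threshold(b, thresh, mode="soft") for b in d)
               for d in coeffs[1:]]
    rec = pywt.waverec2([cA] + details, wavelet)
    rec = rec[:gray.shape[0], :gray.shape[1]]
    return np.clip(rec, 0, 255).astype(np.uint8)
```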

7.
We propose a novel paradigm for cellular neural networks (CNNs) that enables the user to simultaneously calculate up to four subband images and to implement the integrated wavelet decomposition and a subsequent function in a single CNN. Two sets of experiments were designed to test the performance of the paradigm. In the first set, the effects of seven different wavelet filters and five feature extractors were studied in the extraction of critical features from the down-sampled low-low subband image using the paradigm; among them, the combination of the DB53 wavelet filter and the r = 2 halftoning processor was determined to be most appropriate for low-resolution applications. The second set of experiments demonstrated the capacity of the paradigm to extract features from multi-subband images. CNN edge detectors were embedded in a two-subband digital wavelet transformation processor to extract the horizontal and vertical line features from the LH and HL subband images, respectively, and a CNN logical OR operator then combines the results from the two subband line-edge detectors. The proposed edge detector delineates more subtle details than a typical CNN edge detector alone and is more robust in dealing with low-contrast images than traditional edge detectors. The results demonstrate the proposed paradigm to be a powerful and efficient scheme for image processing using CNNs. Copyright © 2008 John Wiley & Sons, Ltd.
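The sketch below re-creates the subband line-edge idea with conventional tools (PyWavelets and NumPy) rather than a cellular neural network: the two detail subbands are thresholded into binary line maps and combined with a logical OR. The wavelet, the threshold factor, and the horizontal/vertical labeling (which follows PyWavelets' naming) are assumptions.

```python
# Conventional re-creation of the subband edge-detection idea (not a CNN).
import cv2
import numpy as np
import pywt

def subband_edges(gray, wavelet="db2", k=2.0):
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(np.float32), wavelet)
    # Threshold each detail subband at k times its own standard deviation.
    horiz = np.abs(cH) > k * cH.std()
    vert = np.abs(cV) > k * cV.std()
    combined = np.logical_or(horiz, vert).astype(np.uint8) * 255
    # Upsample back to the input resolution for display.
    return cv2.resize(combined, (gray.shape[1], gray.shape[0]),
                      interpolation=cv2.INTER_NEAREST)
```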

8.
JPEG image compression coding and its MATLAB simulation
This paper first introduces the JPEG image compression algorithm based on the discrete cosine transform, then simulates it in MATLAB 6.5 on standard grayscale images, compressing the same Lena image at different ratios and plotting the rate-distortion curve. The experimental results show that, over a wide compression range and at different compression ratios and coding bit rates, the peak signal-to-noise ratio of the reconstructed image stays above 30 dB, which still meets visual requirements. Compressing an image to different degrees satisfies the practical need for different image qualities in different scenarios and under different rate-control constraints. Simulation in MATLAB is simple and introduces little error, greatly improving the efficiency and accuracy of image compression.
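A minimal Python/OpenCV counterpart of the DCT-based simulation described above is sketched below; it performs only block DCT, quantization with the standard JPEG luminance table, dequantization, and a PSNR check, with no zig-zag scan or entropy coding.

```python
# Block-DCT compression sketch in the spirit of the JPEG simulation above.
import cv2
import numpy as np

# Standard JPEG luminance quantization table.
Q = np.array([[16,11,10,16,24,40,51,61],[12,12,14,19,26,58,60,55],
              [14,13,16,24,40,57,69,56],[14,17,22,29,51,87,80,62],
              [18,22,37,56,68,109,103,77],[24,35,55,64,81,104,113,92],
              [49,64,78,87,103,121,120,101],[72,92,95,98,112,100,103,99]],
             dtype=np.float32)

def compress_decompress(gray, quality_scale=1.0):
    h, w = (gray.shape[0] // 8) * 8, (gray.shape[1] // 8) * 8
    img = gray[:h, :w].astype(np.float32) - 128.0
    out = np.zeros_like(img)
    q = Q * quality_scale                       # larger scale = coarser quantization
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = cv2.dct(img[y:y+8, x:x+8])
            block = np.round(block / q) * q     # quantize / dequantize
            out[y:y+8, x:x+8] = cv2.idct(block)
    rec = np.clip(out + 128.0, 0, 255).astype(np.uint8)
    mse = np.mean((gray[:h, :w].astype(np.float32) - rec) ** 2)
    psnr = 10.0 * np.log10(255.0 ** 2 / mse)
    return rec, psnr
```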

9.
Image processing techniques are applied to measuring the hydrophobicity of composite insulators. By jointly considering contrast enhancement of the insulator image and information measures of its gray level and gradient, the edges of the hydrophobicity image are extracted by combining the Hausdorff distance (HD) with morphological operations to obtain the contours of large water droplets (or water traces); an improved shape-factor method is then used to grade the insulator's hydrophobicity. Test results show that the method measures insulator hydrophobicity accurately and meets measurement requirements.
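A hedged sketch of the droplet-contour and shape-factor step is given below; the Hausdorff-distance matching and the hydrophobicity grading rules are not reproduced, and the kernel size and area threshold are assumptions.

```python
# Droplet contour extraction and shape-factor computation (illustrative).
import cv2
import numpy as np

def droplet_shape_factors(gray):
    enhanced = cv2.equalizeHist(gray)                       # contrast enhancement
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Morphological gradient highlights droplet boundaries.
    grad = cv2.morphologyEx(enhanced, cv2.MORPH_GRADIENT, kernel)
    _, mask = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    factors = []
    for c in contours:
        area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
        if perim > 0 and area > 50:                         # ignore tiny specks
            # Classic shape factor: 1.0 for a perfect circle, smaller otherwise.
            factors.append(4.0 * np.pi * area / perim ** 2)
    return factors
```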

10.
To address the problem that contour recognition of thin-walled parts on industrial production lines is strongly affected by illumination, color-constancy techniques are applied to contour recognition on the production line. Based on the principles of Retinex, the HSI color space, and edge detection algorithms, a contour-feature recognition algorithm for thin-walled parts under complex illumination is proposed for image restoration and contour recognition. The method first extracts the intensity of the image in the HSI color space, and then uses an improved Retin…

11.
Based on fuzzy classification and pixel directionality, this paper proposes a secondary determination of image edges, with parameter-controlled denoising and sharpening of the true edges. The algorithm has low computational cost, produces visibly thinner edges, reduces noise and spurious edges, and gives good detection results for images with fine edge detail.

12.
Example-based methods are widely adopted in image super-resolution (SR) to generate clear, high-resolution (HR) images from low-resolution (LR) images. This paper reports our study of example-based algorithms on LR facial images that exploit the relationship between LR and HR image patch pairs in a database using a Markov random field (MRF) model. We aimed to restore each part of a face with patches related to that part. We generated patches carrying information on their original positions from a set of normalized facial images whose facial feature points were approximately in the same position. The nearer a candidate patch's position was to the target position, the higher the compatibility of the facial parts. Our algorithm restored LR images by combining the proposed facial-part compatibility function with the conventional function to find the best set of estimated HR patches. The final SR results were obtained by stitching the inferred HR patches from an iterative process. An experiment on a set of facial images demonstrates that the combined compatibility function achieves the best quality in the resulting image, i.e. 30.39 dB in terms of peak signal-to-noise ratio (PSNR), compared to the previously achieved quality of 29.65 dB. © 2017 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.

13.
Medical CT images often have low contrast and visibility, which hinders both visual inspection and subsequent image processing. This paper proposes a contrast enhancement algorithm for medical CT images based on a multi-scale exposure fusion framework. The original image is first decomposed and reconstructed with a Laplacian pyramid to reduce the influence of noise while enhancing image detail. An exposure fusion algorithm then computes a weight-estimation matrix, exposure ratios, and a luminance mapping function for the reconstructed image, which are used to enhance it; this increases image contrast while also improving visibility. Experiments show that, compared with other traditional enhancement algorithms, the proposed method yields noticeably better enhancement and is particularly effective on medical CT images.
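The sketch below illustrates only the Laplacian-pyramid part of such a framework; the weight here is a simple well-exposedness measure rather than the paper's weight-estimation matrix and luminance mapping function, and the pyramid depth and gain are assumptions.

```python
# Laplacian-pyramid detail boosting with an exposure-style weight (illustrative).
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)                 # band-pass detail layer
        cur = down
    pyr.append(cur)                          # low-pass residual
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return cur

def enhance_ct(gray, gain=1.5):
    pyr = laplacian_pyramid(gray)
    # Simple "well-exposedness" weight: boost detail more in mid-gray regions.
    norm = gray.astype(np.float32) / 255.0
    weight = np.exp(-((norm - 0.5) ** 2) / (2 * 0.2 ** 2))
    for i in range(len(pyr) - 1):
        w = cv2.resize(weight, (pyr[i].shape[1], pyr[i].shape[0]))
        pyr[i] *= 1.0 + (gain - 1.0) * w     # amplify detail layers
    return np.clip(reconstruct(pyr), 0, 255).astype(np.uint8)
```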

14.
One of the main tools for early diagnosis of breast cancer is digital mammography. These images require large storage space and are difficult to transmit over communication links. In this paper we propose a context-based method for lossless compression of these images. Some modifications are made to customize the activity level classification model (ALCM) predictor to work best on mammograms; the behavior of the modified predictor changes for the different main regions of these images. The best qualities of two other predictors are also exploited, and the result of the fittest predictor is selected adaptively for the prediction of each pixel. Moreover, context modeling is used for a better categorization of the prediction errors. The proposed algorithm was tested on images from a well-known database and the results were compared with two standard lossless compression methods, JPEG2000 (lossless mode) and JPEG-LS. The proposed method was shown to produce better compression results than the standard algorithms. © 2013 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
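As a hedged illustration of pixel prediction for lossless coding, the sketch below uses the median edge detector (MED) predictor from JPEG-LS rather than the paper's modified ALCM predictor; it outputs the residual image that a context-based entropy coder would then compress.

```python
# MED (JPEG-LS) prediction residuals for lossless coding (illustrative).
import numpy as np

def med_residuals(gray):
    img = gray.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            a = img[y, x-1] if x > 0 else 0                 # left neighbor
            b = img[y-1, x] if y > 0 else 0                 # top neighbor
            c = img[y-1, x-1] if (x > 0 and y > 0) else 0   # top-left neighbor
            if c >= max(a, b):
                pred[y, x] = min(a, b)
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c
    return img - pred                                       # residuals to entropy-code
```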

15.
A new method for fast bilateral filtering with texture-preserving properties is presented. In general, a texture area is composed of several tiny regions with almost similar intensity, so bilateral filtering combines similar gray levels of the texture area into a smooth region. Adaptive boundary filtering solves this problem by using image segmentation to define a new weighting boundary at each pixel being filtered. With this new type of boundary, the output image is smoothed while both edges and texture details are preserved. The drawback of the algorithm is that it is time consuming. In this paper, histogram-based filtering is proposed to improve its speed. However, because texture details are blurred when a small number of bins is used, speed cannot be increased simply by reducing the number of bins. A deciles-based algorithm is therefore applied to overcome this limitation, as it increases speed while preserving texture details. The new algorithm, named fast adaptive boundary filtering, increases the speed by more than 40% compared with adaptive boundary filtering at a boundary threshold T = 40, and its output is similar to that of adaptive boundary filtering. Moreover, the speed of the new algorithm is compared with normal 256-bin histogram-based bilateral filtering; the experimental results show that the new algorithm is faster when processing with a small to moderate radius window. In addition, the new algorithm yields a better output image, as it preserves both edges and textures. © 2015 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
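For reference, the snippet below shows the edge-preserving behavior of OpenCV's stock bilateral filter next to a Gaussian blur; the adaptive-boundary segmentation and decile-histogram acceleration described in the paper are not reproduced, and the file names and parameters are placeholders.

```python
# Baseline comparison: Gaussian blur vs. edge-preserving bilateral filter.
import cv2

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
gaussian = cv2.GaussianBlur(gray, (9, 9), sigmaX=3)          # blurs edges too
bilateral = cv2.bilateralFilter(gray, d=9, sigmaColor=40, sigmaSpace=9)
cv2.imwrite("gaussian.png", gaussian)
cv2.imwrite("bilateral.png", bilateral)                      # edges preserved
```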

16.
Existing edge detection techniques struggle to simultaneously suppress the influence of image noise and of scratches on workpiece surfaces while preserving the sharpness and continuity of image edges. An improved edge detection technique based on a second-order differential operator and mathematical morphology is therefore proposed. First, morphological opening and closing operations are designed, based on mathematical morphology, to preprocess the image and remove surface scratches; the second-order Laplace operator is then applied to the preprocessed image for edge detection; finally, an improved algorithm combining Gaussian and bilateral filtering strengthens the denoising, and the complete algorithm is verified experimentally. Experimental results show that the improved algorithm removes workpiece surface scratches effectively and, compared with traditional differential operators, substantially improves edge sharpness and peak signal-to-noise ratio (PSNR), laying a foundation for more accurate workpiece recognition.
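A minimal sketch of this pipeline with OpenCV is shown below; the kernel sizes, filter parameters, and edge threshold are assumptions rather than the paper's settings.

```python
# Morphological open-close preprocessing, combined denoising, Laplacian edges.
import cv2
import numpy as np

def detect_edges(gray):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Opening then closing removes thin bright/dark scratch-like structures.
    pre = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    pre = cv2.morphologyEx(pre, cv2.MORPH_CLOSE, kernel)
    # Combined denoising: Gaussian smoothing followed by bilateral filtering.
    pre = cv2.GaussianBlur(pre, (5, 5), 1.0)
    pre = cv2.bilateralFilter(pre, d=7, sigmaColor=30, sigmaSpace=7)
    # Second-order (Laplacian) edge response, thresholded to a binary edge map.
    lap = cv2.Laplacian(pre, cv2.CV_32F, ksize=3)
    return (np.abs(lap) > 2.0 * np.abs(lap).std()).astype(np.uint8) * 255
```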

18.
The large volume of data produced by a synthetic aperture radar (SAR) system must be compressed for transmission. The set partitioning in hierarchical trees (SPIHT) algorithm performs well but is sensitive to channel noise, so error-resilience mechanisms are required. An error-resilient algorithm for transmitting SPIHT-compressed SAR amplitude images over noisy channels, called LLCH (LL coefficients hiding), is proposed. The algorithm partitions the SPIHT bitstream into packets aligned with the encoding/decoding passes, so that the decoder can resynchronize after a bit error; the low-frequency and next-lowest-frequency coefficients are hidden by embedding them at the end of the data packets of the high-frequency passes, thereby protecting the important coefficients; damaged or lost high-frequency coefficients are recovered by linear interpolation, exploiting the correlation between parent and child coefficients. Experimental results show that, at a small redundancy cost, the algorithm effectively improves the quality of the reconstructed image.

19.
Several typical edge detection algorithms are introduced and their characteristics compared. The application of edge detection to furnace-flame monitoring in power plant boilers is discussed: for an original flame image, two edge detection algorithms are used to extract the flame edges, and the resulting edge images are analyzed. Finally, future directions for edge detection algorithms are discussed.
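As a simple illustration of the kind of comparison described above, the snippet below applies two common edge detectors (Sobel with a magnitude threshold, and Canny) to a flame image; the file name and thresholds are placeholders.

```python
# Sobel vs. Canny edge detection on a flame image (illustrative comparison).
import cv2
import numpy as np

flame = cv2.imread("flame.png", cv2.IMREAD_GRAYSCALE)
# Sobel: gradient magnitude thresholded into a binary edge map.
gx = cv2.Sobel(flame, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(flame, cv2.CV_32F, 0, 1, ksize=3)
mag = cv2.magnitude(gx, gy)
sobel_edges = (mag > 0.5 * mag.max()).astype(np.uint8) * 255
# Canny: built-in hysteresis thresholding, usually cleaner contours.
canny_edges = cv2.Canny(flame, 60, 150)
cv2.imwrite("flame_sobel.png", sobel_edges)
cv2.imwrite("flame_canny.png", canny_edges)
```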

20.
This study addresses the problem of speech quality enhancement by adaptive and nonadaptive filtering algorithms. The well-known two-microphone forward blind source separation (TM-FBSS) structure has been studied extensively in the literature, and several two-microphone algorithms combined with TM-FBSS have recently been proposed. In this study, we make two contributions: first, a new two-microphone Gauss-Seidel pseudo affine projection (TM-GSPAP) algorithm is combined with TM-FBSS; second, we propose using the new TM-GSPAP algorithm for speech enhancement. Furthermore, we show the efficiency of the proposed TM-GSPAP algorithm in speech enhancement when highly noisy observations are available. To validate the good performance of our algorithm, we evaluated its adaptive-filtering properties in terms of computational complexity and convergence speed using the system mismatch criterion. A fair comparison with adaptive and nonadaptive noise reduction algorithms is also presented: the adaptive algorithms are the well-known two-microphone normalized least mean square algorithm and the recently published two-microphone pseudo affine projection algorithm, while the nonadaptive algorithms are the one-microphone spectral subtraction and the two-microphone Wiener filter algorithm. We evaluate the quality of the output speech signal of each algorithm with several objective and subjective criteria, namely the segmental signal-to-noise ratio, cepstral distance, perceptual evaluation of speech quality, and the mean opinion score. Finally, we validate the superior performance of the proposed algorithm with physically measured signals.
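As a hedged illustration of two-microphone adaptive noise cancellation, the sketch below implements the NLMS algorithm, which the study uses as a baseline; the proposed TM-GSPAP algorithm itself is not reproduced, and the signals, filter order, and step size are synthetic assumptions.

```python
# Minimal NLMS adaptive noise canceller (baseline, not the proposed TM-GSPAP).
import numpy as np

def nlms_cancel(primary, reference, order=32, mu=0.5, eps=1e-6):
    """primary: speech + noise; reference: correlated noise only."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]          # most recent samples first
        y = w @ x                                 # estimated noise component
        e = primary[n] - y                        # error = enhanced speech sample
        w += mu * e * x / (x @ x + eps)           # normalized LMS update
        out[n] = e
    return out

# Synthetic usage: noise filtered through an unknown path leaks into the speech.
rng = np.random.default_rng(1)
noise = rng.normal(size=16000)
speech = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
primary = speech + np.convolve(noise, [0.6, -0.3, 0.1], mode="same")
enhanced = nlms_cancel(primary, noise)
```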
