Similar Documents
20 similar documents found (search time: 250 ms)
1.
李晓阳 《电视技术》2014,38(3):22-24
To address the difficulty of distinguishing lesion-region features in visualized medical fusion images, the lesion region is first segmented with a gray-level threshold selection method, and pseudo-color enhancement is then applied to the fused image within the selected lesion region. The source images are visible-light and infrared images, fused in grayscale by a wavelet-transform fusion method. Because the human eye is insensitive to gray levels, lesions are hard to observe clearly in grayscale medical images, so a distinct color feature is added to the lesion region. Three segmentation methods were compared, and experiments show that gray-level threshold selection yields the best result; since manual threshold selection is tedious, an iterative optimal-threshold algorithm is combined with it to determine the threshold accurately. The method is simulated in MATLAB, and experiments confirm its effectiveness.
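A minimal NumPy sketch of the iterative optimal-threshold step described above (the standard isodata-style iteration; the `eps` tolerance and the red pseudo-color marking are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def iterative_optimal_threshold(img, eps=0.5):
    """Iterative (isodata-style) optimal threshold selection.

    Starts from the mean gray level, then repeatedly moves the
    threshold to the midpoint of the two class means until it
    converges to within `eps` gray levels.
    """
    t = img.mean()
    while True:
        fg = img[img > t]
        bg = img[img <= t]
        if fg.size == 0 or bg.size == 0:  # degenerate image: one class empty
            return t
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Segment the lesion region and mark it in red (pseudo-color).
img = np.random.randint(0, 256, (128, 128)).astype(np.float64)  # stand-in image
t = iterative_optimal_threshold(img)
mask = img > t
rgb = np.stack([img, img, img], axis=-1)
rgb[mask] = [255.0, 0.0, 0.0]  # highlight the segmented region
```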

2.
Photon information extraction by the watershed method for ultra-weak luminescence imaging (Cited: 2; self-citations: 2; other: 0)
Analysis of the gray-level characteristics of ultra-weak photon images captured with an electron-multiplying CCD (EMCCD) shows that a pixel's gray value is strongly correlated with the gray values of its eight neighbors. This implies that simply extracting photon-signal counts and positions with a threshold method introduces considerable distortion. Under ultra-weak luminescence conditions (e.g., biological ultra-weak emission), the number of photon signals in an image is far smaller than the number of pixels, and the signals appear as scattered spots. Accordingly, a morphological watershed method is introduced on top of thresholding, and isolated peak regions in the processed photon gray image are treated as photon-signal responses. The new algorithm overcomes the limitation of thresholding, which judges each pixel in isolation, by accounting for the collective features of local pixel groups. Results show that the new algorithm is more faithful and more stable. Its sensitivity and reliability were verified by ultra-weak bioluminescence imaging of mung bean sprouts.
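The watershed step itself is involved; as a simplified stand-in, the sketch below extracts isolated above-threshold peak regions with SciPy and reports each region's gray-level maximum as a photon position (the function name and threshold are hypothetical, and this is not the paper's watershed algorithm):

```python
import numpy as np
from scipy import ndimage

def photon_positions(img, thresh):
    """Treat each isolated above-threshold peak region as one photon."""
    mask = img > thresh
    # Label connected above-threshold regions (photon spots are isolated).
    labels, n = ndimage.label(mask)
    if n == 0:
        return []
    # Report each region's gray-level maximum as the photon position.
    return ndimage.maximum_position(img, labels=labels, index=list(range(1, n + 1)))
```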

3.
Existing algorithms cannot effectively threshold images corrupted by mixed noise; to address this, the paper proposes a 3D minimum-error thresholding method. The method fully exploits both the gray-level distribution of pixels and the gray-level correlation between pixels: it combines gray level, local mean, and local median to construct a 3D observation space, and then defines a 3D optimal-threshold criterion based on relative entropy. To speed up the computation, a fast recursive algorithm with time complexity O(L³) is given. Experiments show that, under various noise conditions and non-uniform illumination, and especially on mixed-noise images, the proposed algorithm segments better than existing methods.
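For intuition, here is the classical 1D minimum-error (Kittler-Illingworth) criterion that the paper's 3D method generalizes; this is the textbook 1D version, not the paper's 3D recursion:

```python
import numpy as np

def min_error_threshold(img):
    """1D minimum-error (Kittler-Illingworth) thresholding.

    Models each class as Gaussian and minimizes the criterion
    J(t) = 1 + 2*(w0*ln s0 + w1*ln s1) - 2*(w0*ln w0 + w1*ln w1).
    """
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    g = np.arange(256)
    best_t, best_j = 0, np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        m0 = (g[:t] * p[:t]).sum() / w0
        m1 = (g[t:] * p[t:]).sum() / w1
        v0 = ((g[:t] - m0) ** 2 * p[:t]).sum() / w0
        v1 = ((g[t:] - m1) ** 2 * p[t:]).sum() / w1
        if v0 <= 0 or v1 <= 0:
            continue
        j = (1 + 2 * (w0 * np.log(np.sqrt(v0)) + w1 * np.log(np.sqrt(v1)))
               - 2 * (w0 * np.log(w0) + w1 * np.log(w1)))
        if j < best_j:
            best_t, best_j = t, j
    return best_t
```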

4.
Explosion-image analysis based on an improved gray-level threshold segmentation method (Cited: 1; self-citations: 0; other: 1)
董玉才  史宏涛  杜健  王东兴 《信息技术》2010,34(8):19-21,83
The contour curves of explosion images carry rich physical and geometric information about the explosion and are an important basis for analyzing the dispersal process. Building on the gray-level threshold method and combining it with the geometric features of SFAE explosion images, an improved gray-level threshold method is proposed and simulated in MATLAB, and the segmentation results are evaluated with a connected-component criterion. Experiments show that the method extracts contours better than the plain gray-level threshold method and is well suited to preprocessing explosion images.

5.
Frame differencing tends to leave holes inside moving objects, while background subtraction fails to detect objects whose gray level is close to the background. To address both problems, a moving-object detection algorithm combining background subtraction and frame differencing is proposed. Two difference images are first computed from two consecutive frames by background subtraction; each is binarized with a threshold chosen by maximizing the ratio of between-class to within-class variance; the two binary images are then combined with a logical OR; finally, morphological filtering yields the accurate moving object. Experiments show the algorithm is simple, easy to implement, and runs in real time.
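A hedged sketch of the combination step: two binary masks are OR-ed and cleaned morphologically. The thresholds `t_bg` and `t_fd` stand in for the variance-ratio thresholds of the abstract (hypothetical parameter names, and the exact pairing of difference images may differ from the paper's):

```python
import numpy as np
from scipy import ndimage

def detect_motion(prev, curr, background, t_bg, t_fd):
    """Combine background subtraction and frame differencing."""
    # Cast to int to avoid uint8 wraparound in the subtractions.
    bg_mask = np.abs(curr.astype(int) - background.astype(int)) > t_bg  # background subtraction
    fd_mask = np.abs(curr.astype(int) - prev.astype(int)) > t_fd        # frame difference
    mask = bg_mask | fd_mask                                            # logical OR of the two
    # Morphological opening then closing to remove speckle and fill holes.
    mask = ndimage.binary_opening(mask)
    return ndimage.binary_closing(mask)
```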

6.
范九伦  任静 《电子学报》2011,39(10):2277-2281
Thresholding based on the image co-occurrence matrix is a basic approach to image segmentation. Working from the symmetric gray-gray co-occurrence matrix, this paper defines the mean gray levels of the object and background regions and derives a threshold segmentation method from them. Compared with common symmetric co-occurrence-matrix threshold methods, the proposed method adapts better to different images; simulations verify its effectiveness.
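Threshold criteria of this family are evaluated on the symmetric gray-gray co-occurrence matrix; a minimal NumPy construction using horizontal and vertical adjacency (one common convention, assumed here rather than taken from the paper):

```python
import numpy as np

def gray_cooccurrence(img, levels=256):
    """Symmetric gray-gray co-occurrence matrix.

    Counts horizontally and vertically adjacent gray-level pairs,
    then symmetrizes; threshold criteria are evaluated on the
    quadrants of this matrix.
    """
    m = np.zeros((levels, levels), dtype=np.int64)
    h = np.stack([img[:, :-1].ravel(), img[:, 1:].ravel()])  # horizontal neighbors
    v = np.stack([img[:-1, :].ravel(), img[1:, :].ravel()])  # vertical neighbors
    pairs = np.concatenate([h, v], axis=1)
    np.add.at(m, (pairs[0], pairs[1]), 1)
    return m + m.T  # symmetrize
```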

7.
Research on fast detection of bearing surface defects (Cited: 1; self-citations: 0; other: 1)
Bearing surface defects fall into two classes: defects darker than the target gray level and defects brighter than it. Single-threshold segmentation detects only the former, while existing multi-threshold segmentation detects only defects brighter than the target gray level. To overcome these limitations, a fast method for detecting bearing surface defects based on repeated Otsu thresholding is proposed. The algorithm has lower complexity than multi-threshold algorithms, and it exploits the characteristics of bearing images to narrow the threshold search range, meeting the practical needs of on-line inspection of bearing surface defects.
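The core building block is Otsu's method; a straightforward histogram implementation follows (the repeated-pass logic around it is paraphrased in the closing comment as an assumption about the paper's procedure):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    g = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (g[:t] * p[:t]).sum() / w0
        m1 = (g[t:] * p[t:]).sum() / w1
        var_b = w0 * w1 * (m0 - m1) ** 2
        if var_b > best_var:
            best_t, best_var = t, var_b
    return best_t

# Repeated application (a hypothetical reading of the paper's multiple
# Otsu passes): threshold once to isolate dark defects, then threshold
# the brighter remainder again to isolate bright defects.
```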

8.
张婕  王颖 《半导体光电》2016,37(1):126-130
In infrared images the boundary between target and background is blurred, so segmentation with a single entropy threshold gives unsatisfactory results. An infrared image enhancement method based on distance-dependent gray-level compensation is therefore proposed: distance is used as spatial information to compensate gray levels, mitigating the adverse effect of the blurred target-background boundary on segmentation. A maximum-entropy thresholding method constrained by cross entropy is then proposed; with the cross-entropy constraint guaranteeing between-class separation, within-class uniformity drives the segmentation, avoiding the limitations of a single entropy threshold. Experiments show the method segments accurately for both small targets in complex backgrounds and complex targets in large backgrounds.
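The baseline being constrained here is Kapur-style maximum-entropy thresholding; a minimal sketch of that baseline (the paper's cross-entropy constraint is not reproduced):

```python
import numpy as np

def max_entropy_threshold(img):
    """Kapur maximum-entropy thresholding: choose the threshold that
    maximizes the summed entropies of the two gray-level classes."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t
```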

9.
To measure the chip-spreading coverage of synchronous chip-seal construction quickly and quantitatively, a digital image processing method based on edge detection is proposed. Images are first enhanced with the Retinex algorithm to remove the influence of the construction environment on acquisition, which markedly improves image quality. A Laplacian-of-Gaussian operator combined with a weighted-coefficient method gathers gray-level statistics over the aggregate edge regions to derive the segmentation threshold, and the resulting binary image preserves the shape of the aggregate. Otsu's method (maximum between-class variance) is then used to repair mis-segmented aggregate edge regions. The coverage ratio, computed as the ratio of aggregate area to total image area in the binary image, agrees well with the measured ground truth.
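A rough sketch of the coverage computation under stated assumptions (illustrative `sigma` and threshold values, SciPy's Laplacian-of-Gaussian filter; the paper's weighted-coefficient statistics and edge repair are not reproduced):

```python
import numpy as np
from scipy import ndimage

def coverage_ratio(img, sigma=2.0, t=0.0):
    """Binarize via a Laplacian-of-Gaussian response, then report the
    fraction of aggregate pixels (aggregate area / total image area)."""
    log = ndimage.gaussian_laplace(img.astype(float), sigma=sigma)
    binary = log < t   # bright blobs respond negatively to LoG in their interior
    return binary.mean()
```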

10.
An infrared image enhancement algorithm based on Otsu's method and plateau histogram equalization (Cited: 1; self-citations: 0; other: 1)
陈峥  吉书鹏 《激光与红外》2010,40(4):438-441
An infrared image enhancement algorithm based on Otsu's method and plateau histogram equalization is proposed for the characteristics of infrared pod images. The image histogram is first analyzed, and an improved Otsu method splits the image into a sky-background region and a ground-target region. In the background region, an adaptive plateau threshold is derived from the background peak and the local peak of residual targets left in the background after segmentation; histogram equalization is then performed separately within the gray ranges of the background and target regions. The algorithm effectively suppresses over-enhancement of the background, widens the gray range of the target region, and enhances detail. Simulation results verify its effectiveness.
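Plateau histogram equalization in its basic form, with the plateau value passed in directly (the paper derives this value adaptively from the histogram peaks):

```python
import numpy as np

def plateau_equalize(img, plateau):
    """Plateau histogram equalization: clip histogram bins at the
    plateau value before building the equalization mapping, which
    limits over-enhancement of large uniform backgrounds."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    clipped = np.minimum(hist, plateau)
    cdf = np.cumsum(clipped).astype(float)
    cdf /= cdf[-1]
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img.astype(np.uint8)]
```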

11.
陈坦  赖建军  赵悦 《红外》2006,27(9):24-28
Grating-projection imaging is widely used for non-contact shape and deformation measurement. With the phase-shifting moiré method, contour lines of an object's surface can be obtained in real time. Errors are large, however, when measuring the 3D profile of a fast-moving object, because phase shifting requires capturing several phase-shifted deformed-grating images. By introducing a DMD chip, all of the phase-shifted deformed-grating images can be captured within a single CCD frame time, effectively reducing the error in 3D profiling of fast-moving objects.
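The phase recovery such systems rely on is standard four-step phase shifting; with fringe images captured at phase shifts of 0, π/2, π, and 3π/2, the wrapped phase follows directly:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Four-step phase shifting: for I_n = A + B*cos(phi + n*pi/2),
    the wrapped phase is phi = arctan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4.astype(float) - i2, i1.astype(float) - i3)
```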

12.
A camera self-calibration method based on concentric-circle and wedge gratings (Cited: 2; self-citations: 2; other: 2)
薛俊鹏  苏显渝  窦蕴甫 《中国激光》2012,39(3):308002-188
A new camera self-calibration method is proposed that uses a concentric-circle grating and a wedge grating as the calibration template, based on vanishing points in orthogonal directions. The method exploits the high phase-extraction accuracy of phase-shifted fringes to locate zero-phase feature points, avoiding the error that conventional marker-point extraction introduces into the calibration. The camera captures the target from at least six orientations; four-step phase shifting yields the wrapped phase of the gratings, from which the zero-phase intersections are solved, the vanishing points are computed, and the camera's intrinsic parameters are recovered. Guided by a simulation analysis of the factors affecting vanishing-point calibration accuracy, a concentric-circle grating with 7 periods and a wedge grating with 4 periods were fabricated. In real measurements, the camera was calibrated with both the grating target and a gray-scale concentric-circle target; comparison of the reprojection errors shows that the new method improves the accuracy and robustness of vanishing-point-based calibration.

13.
The use of stereoscopic SAR images offers an alternative to interferometric SAR for the generation of digital elevation models (DEMs). The stereo radargrammetric method is robust and can generate DEMs of sufficient accuracy to geocode SAR images. Previous work has shown that ground coordinates with an accuracy of four times the resolution cell can be obtained from ERS data without using any ground control points (GCPs), since the metre-level accuracy of the orbit and satellite position introduces insignificant errors into the intersection procedure. The orbit data for RADARSAT is not as accurate as that for ERS, and the perpendicular relationship between the resultant velocity vector and the resultant range vector is uncertain in terms of image geometry. Hence, it is necessary to refine the method to allow for possible errors. This paper introduces a weighted space intersection algorithm based on an analysis of the predicted errors. A radargrammetric error model for observation errors is also formulated to predict the accuracy of the algorithm. The revised method can be used without any GCPs, but this can lead to systematic errors due to the less accurate orbit data; the use of two GCPs has been found to provide a reasonable solution. The method is insensitive to the spatial distribution of GCPs, which is often critical in traditional methods. The error statistics of the results, generated from 32 independent check points distributed through the entire SAR image, approach the predicted errors and give a positional accuracy of 38 m in three dimensions.

14.
Coincident bit counting: a new criterion for image registration (Cited: 1; self-citations: 0; other: 1)
A similarity measure based on the number of coincident bits in multichannel images is presented. The similarity criterion incorporated in the image registration algorithm uses a coincident bit counting (CBC) method to obtain the number of matching bits between the frames of interest. The CBC method not only performs favorably compared with traditional techniques but also admits a simpler implementation on conventional computing machines. An image registration algorithm incorporating the CBC criterion is proposed to determine the translational motion among sequences of images. The errors caused by noise, by misregistration, and by a combination of the two are analyzed. Experimental studies using low-contrast coronary images from a digital angiographic sequence are discussed; the results compare favorably with those obtained using other nonparametric methods.
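The CBC idea reduces to popcounting an XOR; a minimal NumPy sketch (the registration search around it is summarized in the closing comment and is a reading of the criterion, not the paper's exact procedure):

```python
import numpy as np

def coincident_bits(a, b):
    """Count coincident (matching) bits between two 8-bit images.

    XOR leaves a 1 wherever bits differ, so the coincident-bit count
    is the total number of bits minus the popcount of a ^ b."""
    diff = np.bitwise_xor(a.astype(np.uint8), b.astype(np.uint8))
    differing = int(np.unpackbits(diff).sum())
    return a.size * 8 - differing

# Registration by translation: slide one frame over the other and keep
# the shift that maximizes the coincident-bit count.
```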

15.
Real-time rate control for wavelet image coding requires characterizing the rate needed to code quantized wavelet data. An ideal, robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate control is therefore fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented that uses two invocations of the coder to estimate the slope of each R-Q curve via probability modeling. The method is robust to the choice of probability model, quantization scheme, and wavelet coder. Because of this extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution as well. With respect to achieving a target rate, the proposed approach and its fast approximation yield average percentage errors of around 0.5% and 1.0% on images in the test set. By comparison, two-coding-pass rho-domain modeling yields errors of around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits per pixel (bpp), decreasing to about 0.5% at 1.0 bpp; both methods are more competitive on the larger images. The proposed method and its fast approximation are also similar in speed to the other state-of-the-art methods. In addition to being fast and accurate, the proposed method requires no training and maintains precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system.
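The R-Q relationship can be illustrated by sweeping a dead-zone quantizer step and measuring first-order entropy; this sketch shows only the underlying notion, not the paper's two-invocation probability-model estimator:

```python
import numpy as np

def rq_point(coeffs, step):
    """Empirical rate estimate at one quantization step: the first-order
    entropy (bits/coefficient) of dead-zone-quantized coefficients.
    Sweeping `step` traces an approximate R-Q curve."""
    q = np.sign(coeffs) * np.floor(np.abs(coeffs) / step)  # dead-zone quantizer
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```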

16.
We present a general wavelet-based denoising scheme for functional magnetic resonance imaging (fMRI) data and compare it to Gaussian smoothing, the traditional denoising method used in fMRI analysis. One-dimensional WaveLab thresholding routines were adapted to two-dimensional (2-D) images and applied to 2-D wavelet coefficients. To test the effect of these methods on the signal-to-noise ratio (SNR), we compared the SNR of 2-D fMRI images before and after denoising, using both Gaussian smoothing and wavelet-based methods. We simulated an fMRI series with a time signal in an active spot and tested the methods on noisy copies of it. The denoising methods were evaluated in two ways: by the average temporal SNR inside the original activated spot, and by the shape of the spot detected by thresholding the temporal SNR maps. Denoising methods that introduce much smoothness are better suited to low SNRs, but for images of reasonable quality they are not preferable, because they introduce heavy deformations. Wavelet-based denoising methods that introduce less smoothing preserve the sharpness of the images and retain the original shapes of active regions. We also performed statistical parametric mapping on the denoised simulated time series, as well as on a real fMRI data set, using false discovery rate control to correct for multiple comparisons. The results show that the methods producing smooth images introduce more false positives. The wavelet-based methods with less smoothing, although generating more false negatives, produce a smaller total number of errors than Gaussian smoothing or wavelet-based methods with a strong smoothing effect.
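Wavelet shrinkage of this kind rests on scalar thresholding rules applied to the wavelet coefficients; the standard soft-thresholding rule, as a minimal sketch:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft thresholding, the core of wavelet shrinkage denoising:
    shrink coefficients toward zero by t, zeroing the small ones."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```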

17.
In image communication over lossy packet networks (e.g., cell-phone communication), packet-loss errors lead to damaged images. Damaged images can be repaired with passive error concealment methods, which use neighboring coefficient or pixel values to estimate the missing ones; neighboring image data should therefore be spread over different packets. This paper presents a novel robust packetization method for the transmission of image content over lossy packet networks. We first define novel criteria for a good packetization. Based on these properties, we propose a cost function for packetization masks and then use stochastic optimization to calculate optimal masks. We test our packetization technique on both wavelet coding and DCT coding. Compared to other packetization techniques, we achieve the same or better mean quality of the reconstructed images but with less fluctuation in quality, which is important for the viewer experience. In this way, we significantly increase the worst-case quality, especially at high packet-loss rates, leading to visually more pleasing images under passive reconstruction.
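For contrast with the optimized masks, here is a baseline dispersed packetization mask using a diagonal modulo pattern (purely illustrative; the paper instead derives its masks by stochastic optimization of a cost function):

```python
import numpy as np

def interleaved_mask(h, w, n_packets):
    """Assign neighboring coefficients to different packets via a
    diagonal modulo pattern, so that losing one packet leaves each
    missing value surrounded by received neighbors."""
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return (rows + 2 * cols) % n_packets  # packet index per coefficient
```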

18.
We consider a joint voice-data packet reservation multiple-access (PRMA) system with transmission errors. Two analysis methods are presented. The first is a combined Markov and equilibrium point analysis: equilibrium point analysis is used to obtain the equilibrium number of backlogged data terminals, while Markov analysis is used to compute the stationary state probability distribution, assuming the number of backlogged data terminals equals the equilibrium value. The second is a Markov analysis using an approximate marginal distribution of backlogged data terminals. System performance measures, namely voice packet dropping probability, data packet delay, and throughput, are derived in the presence of transmission errors. Computational complexity and accuracy are compared for the two analysis methods. Simulation results are also presented for comparison with the analysis results, and good agreement is observed, especially when the packet header error rate is small.

19.
A compressed video bitstream is sensitive to errors that may severely degrade the reconstructed images even when the bit error rate is small. One approach to combat the impact of such errors is the use of error concealment at the decoder, without increasing the bit rate or changing the encoder. For spatial error concealment, we propose a method featuring edge continuity and texture preservation as well as low computation to reconstruct more visually acceptable images. For temporal error concealment, we propose a two-step algorithm based on block matching principles, adopting the assumption of smooth and uniform motion for some adjacent blocks. As simulation results show, the proposed spatial and temporal methods provide better reconstruction quality for damaged images than other methods.

20.
A new statistical method is proposed for reducing truncation artifacts when reconstructing a function from a finite number of its Fourier series coefficients. Following the Bayesian approach, it is possible to take into account both the errors induced by the truncation of the Fourier series and some specific characteristics of the function; a suitable Markov random field is used to model these characteristics. Furthermore, in applications such as Magnetic Resonance Imaging, where these coefficients are the measured data, the experimental random noise in the data can also be taken into account. Markov chain Monte Carlo methods are used for statistical inference. Parameter selection in the Bayesian model is also addressed, and a solution for selecting the parameters automatically is proposed. The method is applied successfully to both simulated and real magnetic resonance images.
