Similar Literature
20 similar articles found.
1.
王志明 《工程科学学报》2015,37(9):1218-1224
A two-step noise variance estimation algorithm based on image segmentation is proposed. In the first step, the noisy image is smoothed and then segmented with the statistical region merging algorithm; the variance of each region is computed, and regions are selected according to their statistics to estimate the noise variance of the image. In the second step, the initial variance estimate is used to tune the parameters of the smoothing filter, the segmentation, and the noise estimator, and a second round of smoothing, segmentation, and variance estimation yields a more accurate result. Experiments on a large set of images under different noise conditions show that the algorithm estimates the image noise variance quickly and accurately.
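A minimal Python sketch of the first estimation pass described in the abstract above. It assumes skimage's `felzenszwalb` segmentation as a stand-in for statistical region merging (which is not available in common libraries) and a low-quantile region-variance statistic in place of the paper's unspecified selection rule; the function name and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.segmentation import felzenszwalb

def estimate_noise_variance(img, sigma_smooth=1.0, min_size=50):
    """First pass of the two-step idea: smooth, segment, then estimate
    the noise variance from the per-region variances."""
    smoothed = gaussian_filter(img.astype(float), sigma_smooth)
    # stand-in for statistical region merging
    labels = felzenszwalb(smoothed, scale=100, sigma=0, min_size=min_size)
    variances = []
    for lab in np.unique(labels):
        mask = labels == lab
        if mask.sum() >= min_size:
            # variance of the ORIGINAL pixels inside a near-homogeneous region
            variances.append(img[mask].var())
    # homogeneous regions are dominated by noise, so a low quantile of the
    # per-region variances approximates the noise variance (assumed rule)
    return float(np.percentile(variances, 25))

# Second pass: re-run with the smoothing/segmentation parameters tuned by
# the initial estimate, e.g. sigma_smooth scaled with sqrt of the estimate.
```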

2.
This paper proposes a new image segmentation method based on fuzzy C-means clustering over image patches. Introducing the patch idea into clustering-based segmentation yields the IPFCM method: local image patches replace individual pixels in the clustering, which enlarges the separation between classes, and the membership update function is modified so that the membership distribution is unimodal. Experimental results show that the method is robust to noise and achieves high segmentation accuracy, with membership functions very close to the ideal ones; it also requires few control parameters, making it reliable and adaptable. In addition, each component of a cluster center is extended to a vector and an update formula for vector-valued cluster centers is derived, laying a theoretical foundation for incorporating multiple image features into FCM-based segmentation in the future.
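The membership and vector-center updates referred to above are those of standard FCM once each sample is a patch vector. A self-contained sketch of that baseline (not the paper's IPFCM, which additionally reshapes the membership distribution):

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=50, eps=1e-8, seed=0):
    """Fuzzy C-means on feature vectors X of shape (n_samples, n_features).
    In the patch-based variant each row is a flattened image patch rather
    than a single intensity; the updates are unchanged because the cluster
    centers are already vectors."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                     # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)       # vector centers
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + eps
        U = d ** (-2.0 / (m - 1))          # u_ik proportional to d_ik^(-2/(m-1))
        U /= U.sum(axis=0)
    return U, V
```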

3.
Ore image segmentation is a key component of machine-vision-based measurement of ore particle size distribution. To address the difficulty of recognizing and segmenting multi-type ore images with diverse colors, complex textures, and adhering edges in composite mines, an ore image segmentation method based on the combined FCM-WA algorithm is proposed. First, the ore image is morphologically optimized: bilateral filtering, histogram equalization, and morphological reconstruction enhance its geometric features, reduce the impact of noise on the segmentation, and improve contrast. Then fuzzy C-means (FCM) clustering is combined with the watershed algorithm (WA): FCM iterates to compute a suitable segmentation threshold, segments the ore image, and outputs a binary image; a distance-transform-based WA then refines the FCM result, splitting the adhering edges in the FCM output to obtain the best segmentation. The results show that: (1) the morphological optimization pipeline reduces noise and enhances edge information, improving contrast; (2) compared with the traditional Otsu method and a genetic algorithm, the proposed FCM-WA method is more robust and segments better, with pixel segmentation accuracy and particle size recognition accuracy both above 92% on multi-type ore images; (3) experiments verify that FCM-WA accurately segments multi-type ore images with diverse colors, complex textures, and adhering edges, and the results meet the requirements of particle size distribution measurement; (4) FCM-…
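The distance-transform watershed half of the FCM-WA recipe maps directly onto library calls. A sketch assuming the FCM stage has already produced a binary mask; the `min_distance` value is illustrative:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_grains(binary):
    """Cut apart touching ore particles in a binary mask (e.g. the FCM
    output) with a distance-transform watershed."""
    binary = binary.astype(bool)
    dist = ndi.distance_transform_edt(binary)
    # local maxima of the distance map give one marker per particle
    coords = peak_local_max(dist, min_distance=10, labels=binary)
    markers = np.zeros(dist.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # flood from the markers over the inverted distance map
    return watershed(-dist, markers, mask=binary)
```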

4.
Many current image segmentation methods can accommodate neither the randomness with which image information is generated nor the real-time requirements of applications. A fast region-based image segmentation algorithm built on a Gaussian statistical model is proposed. Unlike existing region segmentation algorithms, it does not denoise the image first but models the noise directly with a Gaussian statistical model; it also adopts a max-min approach and introduces the idea of an initial segmentation. Experimental results show that the new method makes the segmentation more stable and faster, better meeting the accuracy and real-time demands placed on images.

5.
An improved genetic algorithm is applied to region segmentation of foggy images in order to restore their clarity. The method first uses the genetic algorithm to find the threshold separating near and distant scenery and segments the image accordingly; a moving template then applies the appropriate clarity-restoring processing across the image, preventing artifacts at region boundaries; finally the resulting images are fused to further improve quality. Experiments confirm that the algorithm effectively mitigates the degradation of foggy images and improves their clarity.
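A toy version of the threshold search, assuming an Otsu-style between-class-variance fitness (the paper's actual fitness function and GA improvements are not specified here); the population size, averaging crossover, and mutation scale are all illustrative choices:

```python
import numpy as np

def fitness(img, t):
    """Between-class variance of the two classes produced by threshold t."""
    fg, bg = img[img > t], img[img <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    return (fg.size / img.size) * (bg.size / img.size) * (fg.mean() - bg.mean()) ** 2

def ga_threshold(img, pop=20, gens=40, seed=0):
    rng = np.random.default_rng(seed)
    thr = rng.uniform(img.min(), img.max(), pop)          # initial population
    for _ in range(gens):
        fit = np.array([fitness(img, t) for t in thr])
        parents = thr[np.argsort(fit)[-pop // 2:]]        # truncation selection
        kids = (rng.choice(parents, pop // 2) +
                rng.choice(parents, pop // 2)) / 2        # averaging crossover
        kids += rng.normal(0, 0.01 * np.ptp(img), pop // 2)  # Gaussian mutation
        thr = np.concatenate([parents, kids])
    return thr[np.argmax([fitness(img, t) for t in thr])]
```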

6.
Building on an analysis of the problems with existing manifold-ranking image retrieval algorithms, an image retrieval algorithm based on re-selection manifold ranking is proposed. The algorithm saves time while further improving retrieval precision, and experiments on a real image database verify its effectiveness.
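The ranking computation underlying manifold-ranking retrieval has a standard closed form, f = (I - alpha*S)^(-1) y. A small dense sketch of that baseline; the paper's re-selection step is not reproduced, and the RBF affinity and alpha value are conventional assumptions:

```python
import numpy as np

def manifold_ranking(X, query_idx, alpha=0.99, sigma=1.0):
    """Rank the rows of feature matrix X (n_samples, n_features) by
    relevance to the query item(s), best first."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))     # RBF affinity graph
    np.fill_diagonal(W, 0)                 # no self-loops
    Dinv = np.diag(1.0 / np.sqrt(W.sum(1)))
    S = Dinv @ W @ Dinv                    # symmetrically normalized affinity
    y = np.zeros(n)
    y[query_idx] = 1.0                     # query indicator vector
    f = np.linalg.solve(np.eye(n) - alpha * S, y)
    return np.argsort(-f)                  # ranking, most relevant first
```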

7.
The traditional Live Wire algorithm is easily misled by spurious contours and runs slowly. To address these problems, a PSO-based interactive Live Wire image segmentation algorithm is proposed. First, a new cost function is constructed that incorporates the gradient-magnitude variation between neighboring nodes, reducing the interference of spurious contours and improving segmentation accuracy. Second, to improve efficiency, particle swarm optimization is used to find the shortest path between any two image points and thereby locate the object boundary; the method is compared with the classical Live Wire algorithm based on Dijkstra dynamic-programming graph search. Experiments show that the proposed algorithm substantially improves both segmentation accuracy and execution efficiency over the traditional method.
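The core of classical Live Wire is a positive-cost shortest-path search between two user-selected pixels; the PSO variant replaces this search. The classical Dijkstra form is easy to sketch, assuming a per-pixel cost map (e.g. low cost on strong edges) is given:

```python
import heapq
import numpy as np

def live_wire_path(cost, start, end):
    """Dijkstra shortest path over a per-pixel positive cost map from
    pixel `start` to pixel `end`, both (row, col) tuples."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue                       # stale heap entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                    nd = d + cost[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [end], end                # walk back to the seed point
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```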

8.
Parameter estimation in statistical-model-based level set segmentation of SAR images is time-consuming. To address this, a supervised segmentation method for high-resolution SAR images is proposed. It models the target and the background of a high-resolution SAR image with Fisher and Gamma distributions respectively, derives the energy functional of the segmentation level set function within the level set framework, and obtains the curve-evolution partial differential equation by minimizing this functional, thereby segmenting the high-resolution SAR image. Experiments show that the method segments targets with strong scattering points more completely and runs faster than unsupervised statistical-model segmentation.

9.
《中国钼业》2012,(3):17-17
This invention relates to an interactive retrieval method for mammographic (molybdenum-target X-ray) images based on visual perception. The method first reads in a mammogram to be processed, marks suspected lesions with an image segmentation algorithm, and extracts the features of the suspected lesion region (including area, eccentricity, compactness, invariant moments, Gabor features, and fractal dimension); it then automatically searches the lesion database for a set of lesions whose features (including area, eccentricity,…

10.
Targeting the characteristics of steel strip surface defects, a detection method is proposed that uses image preprocessing to remove interference such as uneven illumination and a neural network to recognize defects. Detection proceeds in three steps. First, the acquired image is preprocessed: zero-meaning the image removes the influence of illumination on detection, and Wiener filtering and the Sobel operator are applied for denoising and sharpening respectively. Second, the image is segmented with the maximum between-class variance (Otsu) method and the segmented area is computed to decide whether a defect is present. Finally, image features are extracted and an artificial neural network is designed to classify the defect type. Experiments show that the method effectively suppresses background interference in the image and achieves fast detection of steel strip defects.
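The first two steps map onto standard library calls. A sketch of the detection front end; the area threshold and filter size are assumptions, and the neural-network classifier of the third step is omitted:

```python
import numpy as np
from scipy.signal import wiener
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def detect_defect(img, area_ratio=0.001):
    """Zero-mean -> Wiener denoise -> Sobel sharpen -> Otsu segment,
    then flag a defect when the segmented area is non-trivial."""
    x = img.astype(float)
    x -= x.mean()                          # zero-mean: removes global illumination offset
    x = wiener(x, (5, 5))                  # adaptive Wiener denoising
    edges = np.hypot(ndi.sobel(x, 0), ndi.sobel(x, 1))  # Sobel gradient magnitude
    mask = edges > threshold_otsu(edges)   # maximum between-class variance (Otsu)
    return mask.mean() > area_ratio, mask  # (defect present?, defect mask)
```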

11.
Microscopy is a basic tool for cell biologists. Recent progress in electronics and computer science has made powerful methodologies for the digital processing of microscopic images easily available. These methods have allowed an impressive increase in the power of conventional microscopy. Dramatic image enhancement may be achieved by combining filtering techniques, computer-based deblurring, and contrast enhancement. Quantitative treatment of digitized images allows absolute determination of the density of different components of the observed sample, including antigens, intracellular calcium, and pH. Morphometric studies are also greatly facilitated by image processing techniques. Fast phenomena may be captured by transferring small portions of microscopic images into computer memory, as well as through particular uses of confocal microscopy. Finally, improved display of experimental data through coded colors or other procedures may enhance the amount of information that can be conveyed by visual examination of microscopical images. The purpose of the present review is to describe the basic principles of image processing and to exemplify the power of this approach with a variety of illustrated applications to conventional, fluorescence, or electron microscopy as well as confocal microscopy.

12.
One prerequisite for standard clinical use of intravascular ultrasound imaging is rapid evaluation of the data. The main quantities to be extracted from the data are the size and the shape of the lumen. Until now, no accurate, robust, and reproducible method to obtain the lumen boundaries from intravascular ultrasound images has been described. In this study, 21 different (semi-)automated binary-segmentation methods for determining the lumen are compared with manual segmentation to find an alternative for the laborious and subjective procedure of manual editing. After a preprocessing step in which the catheter area is filled with lumen-like grey values, all approaches consist of two steps: (i) smoothing the images with different filtering methods and (ii) extracting the lumen by an object definition method. The combination of different filtering methods and object definition methods results in a total of 21 methods and 80 experiments. The results are compared with a reference image, obtained from manual editing, using four different quality parameters: two based on squared distances and two based on Mahalanobis distances. The evaluation has been carried out on 15 images, of which seven were obtained before balloon dilation and eight after balloon dilation. While for the post-dilation images no definite conclusions can be drawn, an automated contour model applied to images smoothed with a large kernel appears to be a good alternative to manual contouring. For pre-dilation images, a fully automated active contour model, initialized by thresholding and preceded by filtering with a small-scale median filter, is the best alternative for manual delineation. The results of this method are even better than manual segmentation, i.e. they are consistently closer to the reference image than the average distance of all individual manual segmentations.
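The best pre-dilation recipe above (small-scale median filter, threshold-initialized fully automated active contour) can be approximated with library pieces. A sketch using morphological Chan-Vese as a stand-in for the paper's contour model; the dark-lumen percentile threshold is an assumption:

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.segmentation import morphological_chan_vese

def segment_lumen(img, n_iter=50):
    """Median-filter, threshold for the initial level set, then refine
    with an automated active contour."""
    smooth = median_filter(img.astype(float), size=3)  # small-scale median filter
    init = smooth < np.percentile(smooth, 30)          # rough dark-lumen initialization
    return morphological_chan_vese(smooth, n_iter, init_level_set=init)
```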

13.
We review and discuss different classes of image segmentation methods. The usefulness of these methods is illustrated by a number of clinical cases. Segmentation is the process of assigning labels to pixels in 2D images or voxels in 3D images. Typically the effect is that the image is split up into segments, also called regions or areas. In medical imaging it is essential for quantification of outlined structures and for 3D visualization of relevant image data. Based on the level of implemented model knowledge we have classified these methods into (1) manual delineation, (2) low-level segmentation, and (3) model-based segmentation. Pure manual delineation of structures in a series of images is time-consuming and user-dependent and should therefore be restricted to quick experiments. Low-level segmentation analyzes the image locally at each pixel in the image and is practically limited to high-contrast images. Model-based segmentation uses knowledge of object structure such as global shape or semantic context. It typically requires an initialization, for example in the form of a rough approximation of the contour to be found. In practice it turns out that the use of high-level knowledge, e.g. anatomical knowledge, in the segmentation algorithm is quite complicated. Generally, the number of clinical applications decreases with the level and extent of prior knowledge needed by the segmentation algorithm. Most problems of segmentation inaccuracies can be overcome by human interaction. Promising segmentation methods for complex images are therefore user-guided and thus semi-automatic. They require manual intervention and guidance and consist of fast and accurate refinement techniques to assist the human operator.

14.
In recent years, there has been much interest in the clinical application of attenuation compensation to myocardial perfusion single photon emission computed tomography (SPECT) with the promise that accurate quantitative images can be obtained to improve clinical diagnoses. The different attenuation compensation methods that are available create confusion and some misconceptions. Also, attenuation-compensated images reveal other image-degrading effects including collimator-detector blurring and scatter that are not apparent in uncompensated images. This article presents basic concepts of the major factors that degrade the quality and quantitative accuracy of myocardial perfusion SPECT images, and includes a discussion of the various image reconstruction and compensation methods and misconceptions and pitfalls in implementation. The differences between the various compensation methods and their performance are demonstrated. Particular emphasis is directed to an approach that promises to provide quantitative myocardial perfusion SPECT images by accurately compensating for the 3-dimensional (3-D) attenuation, collimator-detector response, and scatter effects. With advances in the computer hardware and optimized implementation techniques, quantitatively accurate and high-quality myocardial perfusion SPECT images can be obtained in clinically acceptable processing time. Examples from simulation, phantom, and patient studies are used to demonstrate the various aspects of the investigation. We conclude that quantitative myocardial perfusion SPECT, which holds great promise to improve clinical diagnosis, is an achievable goal in the near future.

15.
The paper deals with the iterative three-dimensional (3D) smoothing of tomograms acquired by fast Magnetic Resonance (MR) imaging methods. The smoothing method explored, aimed primarily at improving 3D visualization quality, uses the physical concept of geometry-driven diffusion with a variable conductance function based on a specific measure of the 3D neighborhood homogeneity. A novel stopping criterion is proposed for iterative 3D diffusion processing. A study of the transition from 2D to 3D algorithms is carried out. The main structure of the program implementation of the smoothing algorithms developed is described. Three smoothing/filtering methods, aimed at the improvement of 3D visualization of MR tomograms of the brain, are quantitatively and visually compared using real 3D MR images. The results of computer simulations with 3D smoothing, segmentation, and visualization are presented and discussed.
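Geometry-driven diffusion with a variable conductance is the Perona-Malik family of schemes. A 2D sketch of the basic iteration; the paper's 3D version, its neighborhood-homogeneity conductance, and its novel stopping criterion are not reproduced, and kappa and dt are illustrative values:

```python
import numpy as np

def geometry_driven_diffusion(u, n_iter=20, kappa=15.0, dt=0.2):
    """Explicit Perona-Malik diffusion: the conductance g falls off with
    the local gradient, so smoothing happens inside regions but not
    across edges. dt <= 0.25 keeps the 2D scheme stable."""
    u = u.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # variable conductance
    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u            # differences to the 4 neighbours
        ds = np.roll(u, 1, 0) - u             # (periodic boundaries for brevity)
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```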

16.
The lattice imaging technique of high-resolution electron microscopy has been applied to study spinodal decomposition in a Cu-Ni-Cr alloy. The reliability of various methods for processing the image data is discussed. The important parameters describing a spinodal microstructure (viz., composition amplitude, wavelength of the modulations, and relative volume fractions of the two phases) can all be obtained from the lattice image, even though the maximum change of lattice parameters between the two phases is only ~1 pct. Interesting interfacial microstructural features have also been revealed from microoptical diffractograms of the images during later stages of aging.

17.
The investigation of neurohistological specimens by image analysis has become an important tool in morphological neuroscience. The problems which arise during the processing of these images are non-trivial, especially if a pattern recognition of cells in the imaged tissue is intended. One of the major problems faced concerns the segmentation of structures of interest, whether cells or other histologic structures. The segmentation problem is often the result of an inappropriate staining procedure. For serious image analysis to be performed, the material under investigation must be optimally prepared. Spatially complex patterns, e.g. fuzzy-like neighbouring neurons, are easy to recognize for humans. But the integrative and associative performance of current artificial neuronal network schemes is too low to achieve the same recognition quality as humans do. Therefore, a general analysis of staining characteristics was performed, especially with respect to those stains which are relevant to object segmentation. Although most image analytical investigations of tissues are based on stained samples, a study of this type has not been previously conducted. Of the stains and procedures evaluated, the gallocyanin chrome alum combination staining provided the best stain contrast. Furthermore, this staining method shows sufficient constancy within different parts of the human brain. Even the fine nuclear textures are differentiable and can be used for further pattern recognition procedures.

18.
NMR microscopy is currently being used as an investigational tool for the evaluation of micromorphometric parameters of trabecular bone as a possible means to assess its strength. Since, typically, the image voxel size is not significantly smaller than individual trabecular elements, partial volume blurring can be a major complication for accurate tissue classification. In this paper, a Bayesian segmentation technique is reported that achieves improved subvoxel tissue classification. Each voxel is subdivided either into eight subvoxels at twice the original resolution, or into up to four subvoxels along the transaxial direction, and the subvoxels are optimally classified. By modeling the partial volume blurring, the likelihood for the number of marrow subvoxels in each voxel can be computed on the basis of its measured signal. To resolve the ambiguity of the location of the marrow subvoxels, a Gibbs distribution is introduced to model the interaction between the subvoxels. Neighboring subvoxel pairs with the same tissue label are encouraged, and pairs with distinct labels are penalized. The segmentation is achieved by maximizing the a posteriori probability of the label image using the block ICM (iterative conditional mode) algorithm. The potential of the proposed technique is demonstrated in real and synthetic NMR microscopic images.
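The label update in ICM picks, per site, the label minimizing a data term plus a Potts/Gibbs smoothness term of exactly the encourage-same/penalize-distinct kind described above. A 2D two-class sketch (synchronous updates for brevity, whereas ICM proper sweeps sites sequentially; the Gaussian class means, noise level, and beta are assumptions):

```python
import numpy as np

def icm_two_class(obs, mu=(0.0, 1.0), sigma=0.3, beta=1.5, n_iter=10):
    """MAP-style labelling of a 2D image `obs`: same-label neighbour
    pairs are encouraged, distinct-label pairs penalized (Potts prior)."""
    labels = (obs > np.mean(mu)).astype(int)              # threshold initialization
    for _ in range(n_iter):
        energies = []
        for lab in (0, 1):
            data = (obs - mu[lab]) ** 2 / (2 * sigma ** 2)    # -log Gaussian likelihood
            disagree = sum((np.roll(labels, s, a) != lab).astype(float)
                           for a in (0, 1) for s in (-1, 1))  # 4-neighbour disagreements
            energies.append(data + beta * disagree)
        labels = np.argmin(np.stack(energies), axis=0)        # per-site best label
    return labels
```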

19.
白志程  李擎  陈鹏  郭立晴 《工程科学学报》2020,42(11):1433-1448
Text detection has extremely wide applications in autonomous driving and cross-modal image retrieval, and it is also an important front-end stage of optical-character-based text recognition. Text detection in complex scenes remains highly challenging. This paper surveys natural scene text detection, reviewing the main techniques and related research progress for the problem and analyzing the current state of research. It first gives an overview of the problem and analyzes the distinctive characteristics of text detection in natural scenes; it then introduces the classical scene text detection techniques based on connected-component analysis and on sliding detection windows; on this basis it surveys the deep-learning text detection techniques in common use in recent years; finally, it discusses possible future research directions for natural scene text detection.

20.
Focusing on two stages, ore prospecting and the belt conveying of crushed and screened ore, this paper systematically summarizes the main applications of deep learning in ore image processing, including ore classification, particle size analysis, and foreign object recognition, and reviews the common algorithms for these three tasks along with their strengths and weaknesses. Ore classification plays an important role in geological prospecting; particle size analysis provides a reference for controlling crushers and conveyor belts, and can also identify oversized ore on the feeding belt, preventing blockage accidents in the transfer buffer bin between the feeding belt and the receiving belt; foreign object recognition detects harmful items mixed into the ore on the belt.
