Similar Literature
20 similar records found (search time: 27 ms)
1.
To better exploit infrared thermal imaging for identifying and diagnosing faults in electrical equipment, a temperature-rise detection method based on infrared image features and seeded region growing is proposed. Neighbourhood averaging is used to suppress noise in the infrared image, and the red and green component maps are extracted from the RGB space of the infrared image; each component map is then segmented by seeded region growing. Regions are first partitioned around local temperature maxima, the morphological gradient within each region is computed to screen out faulty high-temperature points, and the region containing such a point is taken as the seed region, so that seed selection is automatic. The maximum gradient over the four directions of a pixel, together with the gray-level difference between the pixel and the seed point, serves as the growth criterion. The segmented red and green component maps are fused by intersection to extract the regions of excessive temperature rise. Experiments show that the method locates high-temperature-rise regions with clear contours, providing a basis for temperature-rise fault diagnosis of electrical equipment.
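The seeded region growing step described above can be sketched with a gray-difference criterion (a minimal numpy sketch, not the authors' implementation; the single seed, 4-connectivity and the `max_diff` parameter are illustrative assumptions):

```python
import numpy as np

def region_grow(img, seed, max_diff=10):
    """Grow a region from `seed` over 4-connected neighbours whose gray
    value differs from the seed pixel by at most `max_diff`."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if mask[r, c]:
            continue
        mask[r, c] = True
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] \
                    and abs(float(img[rr, cc]) - seed_val) <= max_diff:
                stack.append((rr, cc))
    return mask

# Toy "thermal" image: a hot 3x3 patch on a cool background.
img = np.full((7, 7), 20, dtype=np.uint8)
img[2:5, 2:5] = 200
hot = region_grow(img, (3, 3), max_diff=10)
```

In the paper's method the seed is chosen automatically from the morphological gradient around local temperature maxima, and the growth criterion also uses the maximum directional gradient; the sketch keeps only the gray-difference test.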

2.
Image analysis is an important tool for characterizing nano/micro network structures. To understand the connection, organization and proper alignment of network structures, knowledge of the segments that represent the materials inside the image is essential. Image segmentation is generally carried out using statistical methods. In this study, we developed a simple and reliable masking method that improves the performance of the indicator kriging method by using entropy. This method selectively chooses important pixels in an image (optical or electron microscopy image) depending on the degree of information they carry, in order to assist the thresholding step. Reasonable threshold values can be obtained by selectively choosing important pixels in a complex network image composed of extremely large numbers of thin and narrow objects. Thus, the overall image segmentation can be improved as the number of disconnected objects in the network is minimized. Moreover, we also propose a method for analyzing high-resolution images on a large scale, optimizing time-consuming steps such as covariance estimation by performing them on a low-resolution image obtained from the high-resolution image by an affine transformation; the segmentation itself is executed on the original high-resolution image. This entropy-based masking at low resolution significantly decreases the analysis time without sacrificing accuracy.
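The role entropy plays here, flagging pixels whose neighbourhood carries enough information to be worth keeping for thresholding, can be illustrated with a plain histogram entropy (an illustrative sketch only; the indicator-kriging integration from the paper is not reproduced):

```python
import numpy as np

def shannon_entropy(values, bins=16):
    """Shannon entropy (bits) of the gray-level histogram of `values`."""
    hist, _ = np.histogram(values, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = np.full(1024, 100)              # uniform patch: carries no information
textured = rng.integers(0, 256, 1024)  # high-variation patch: high entropy
```

A masking step would keep pixels like those in `textured` regions for threshold estimation and discard the uninformative `flat` ones.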

3.
A target segmentation method for synthetic aperture sonar images
Synthetic aperture sonar (SAS) images have a lower signal-to-noise ratio than ordinary optical images, which makes segmentation a key step in SAS image processing. This paper studies a Rayleigh mixture model for the intensity distribution of SAS image data and combines it with a Markov random field (MRF) model to segment underwater targets (highlight regions) in sonar images. The Rayleigh mixture parameters of the target and the background are estimated separately by the expectation-maximization algorithm, and with these parameters the MRF segmentation is solved by the graph cut method; iterating these two steps yields a stable target segmentation. Data analysis and target segmentation on real sonar images show that the Rayleigh mixture model describes SAS imagery well and improves the segmentation of targets.
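The expectation-maximization step for a Rayleigh mixture has closed-form updates, which a short sketch can make concrete (illustrative numpy code under the stated model; it is not the authors' implementation and omits the MRF/graph-cut stage):

```python
import numpy as np

def rayleigh_pdf(x, sigma):
    # Rayleigh density: f(x; sigma) = (x / sigma^2) * exp(-x^2 / (2 sigma^2))
    return (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2))

def em_rayleigh_mixture(x, sigmas, weights, n_iter=50):
    """EM for a Rayleigh mixture: E-step computes responsibilities,
    M-step uses the closed form sigma_k^2 = sum(g*x^2) / (2*sum(g))."""
    sigmas = np.asarray(sigmas, float)
    weights = np.asarray(weights, float)
    for _ in range(n_iter):
        dens = np.stack([w * rayleigh_pdf(x, s)
                         for w, s in zip(weights, sigmas)])
        gamma = dens / dens.sum(axis=0)          # responsibilities
        weights = gamma.mean(axis=1)
        sigmas = np.sqrt((gamma * x**2).sum(axis=1) / (2 * gamma.sum(axis=1)))
    return sigmas, weights

rng = np.random.default_rng(1)
x = np.concatenate([rng.rayleigh(1.0, 2000), rng.rayleigh(5.0, 2000)])
sig, w = em_rayleigh_mixture(x, sigmas=[0.5, 8.0], weights=[0.5, 0.5])
```

In the full method, mixture parameters estimated this way feed the data term of the Markov random field, which is then minimized with graph cuts.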

4.
New microscopy technologies are enabling image acquisition of terabyte‐sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21 000×21 000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user‐set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re‐adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. 
Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17 479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/ .
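The Dice accuracy index used to validate EGT is straightforward to compute on binary masks (a minimal sketch with illustrative toy data, not the NIST implementation):

```python
import numpy as np

def dice_index(a, b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((10, 10), bool)
pred[2:8, 2:8] = True    # 36 foreground pixels
truth = np.zeros((10, 10), bool)
truth[3:8, 2:8] = True   # 30 foreground pixels; overlap is 30
```

Here the overlap is 30 pixels, so the index is 2*30 / (36 + 30) ≈ 0.909; the 0.92 threshold reported above corresponds to slightly tighter agreement than this toy case.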

5.
Background and noise impair image quality by affecting resolution and obscuring image detail in the low-intensity range. Because background levels in unprocessed confocal images are frequently at about 30% of maximum intensity, colocalization analysis, a typical segmentation process, is limited to high-intensity signal and prone to noise-induced, false-positive events. This makes suppression or removal of background crucial for this kind of image analysis. This paper examines the effects of median filtering and deconvolution, two image-processing techniques that enhance the signal-to-noise ratio (SNR), on the results of colocalization analysis in confocal data sets of biological specimens. The data show that median filtering can improve the SNR by a factor of 2. The technique eliminates noise-induced colocalization events successfully. However, because filtering recovers voxel values from the local neighbourhood, both false-negative results ('dissipation' of signal intensity to below the threshold value) and false-positive results ('fusion' of noise with low-intensity signal into above-threshold intensities) can be generated. In addition, filtering involves the convolution of an image with a kernel, a procedure that inherently impairs resolution. Image restoration by deconvolution avoids both of these disadvantages. Such routines calculate a model of the object considering various parameters that impair image formation and are able to suppress background down to very low levels (<10% of maximum intensity, yielding an SNR improved by a factor of 3 compared to raw images). This makes additional objects in the low-intensity but high-frequency range available to analysis. In addition, removal of noise and distortions induced by the optical system results in improved resolution, which is of critical importance in cases involving objects of near-resolution size. The technique is, however, sensitive to overestimation of the background level.
In conclusion, colocalization analysis will be improved more by deconvolution than by filtering. This applies especially to specimens characterized by small object size and/or low intensities.
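The median filtering behaviour discussed above, a noise spike replaced by the neighbourhood median, can be shown in a few lines (numpy sketch; the 3x3 kernel and edge replication are illustrative choices):

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (edge-replicated): removes isolated noise spikes."""
    padded = np.pad(img, 1, mode='edge')
    # Stack the nine shifted views of the image, then take the per-pixel median.
    stacked = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                        for r in range(3) for c in range(3)])
    return np.median(stacked, axis=0)

img = np.full((5, 5), 10.0)
img[2, 2] = 255.0            # single hot noise voxel
clean = median3x3(img)
```

The same local recovery of voxel values is what produces the 'dissipation' and 'fusion' artefacts the abstract warns about when signal and noise intensities are close to the threshold.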

6.
Reducing the discrepancy between segmentation results and actual geographic objects is a difficult problem in segmenting high-resolution remote sensing imagery. To this end, a new object confidence (OC) index is constructed to measure how well an arbitrary region matches a geographic object, and a geographic-object-oriented multiscale segmentation algorithm is proposed. The algorithm has two main steps: first, the image is over-segmented to build an initial set of seed regions and the set of scale parameters is determined; then, the change of the OC index across scales is tracked to guide the multiscale region merging process, so that the merged regions progressively approximate the actual geographic objects. Experiments on multiple data sets show that the proposed algorithm markedly alleviates both over-segmentation and under-segmentation, accurately identifies the complete outlines of geographic objects such as buildings and roads, and clearly outperforms the commercial software eCognition and traditional multiscale segmentation algorithms in both qualitative analysis and quantitative accuracy evaluation.

7.
In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. Despite the existence of many algorithms for image data partitioning, there is still no universal 'best' method. Moreover, images of microscopic samples vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on the testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. Finally, the benefit of a segmentation-combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category, biological samples, is shown.

8.
Segmentation of objects from a noisy and complex image is still a challenging task. This article proposes a new method to detect and segment nuclei and determine whether they are malignant, comprising region-of-interest determination, noise removal, image enhancement, candidate detection via a centroid transform to evaluate the centroid of each object, and level-set (LS) segmentation of the nuclei. The proposed method consists of three main stages: preprocessing, seed detection and segmentation. The preprocessing stage prepares the image so that it meets the segmentation requirements. Seed detection finds the seed points to be used in the segmentation stage, which segments the nuclei using the LS method. In this work, 58 H&E breast cancer images from the UCSB Bio-Segmentation Benchmark dataset are evaluated. The proposed method shows high performance and accuracy in comparison to the techniques reported in the literature, and the experimental results agree well with the ground-truth images.

9.
Common methods for quantification of colocalization in fluorescence microscopy typically require cross-talk free images or images where cross-talk has been eliminated by image processing, as they are based on intensity thresholding. Quantification of colocalization includes not only calculating a global measure of the degree of colocalization within an image, but also a classification of each image pixel as showing colocalized signals or not. In this paper, we present a novel, automated method for quantification of colocalization and classification of image pixels. The method, referred to as SpecDec, is based on an algorithm for spectral decomposition of multispectral data borrowed from the field of remote sensing. Pixels are classified based on hue rather than intensity. The hue distribution is presented as a histogram created by a series of steps that compensate for the quantization noise always present in digital image data, and classification rules are thereafter based on the shape of the angle histogram. Detection of colocalized signals is thus only dependent on the hue, making it possible to classify also low-intensity objects, and decoupling image segmentation from detection of colocalization. Cross-talk will show up as shifts of the peaks of the histogram, and thus a shift of the classification rules, making the method essentially insensitive to cross-talk. The method can also be used to quantify and compensate for cross-talk, independent of the microscope hardware.
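The intensity-independent hue classification at the heart of this approach reduces, for two channels, to an angle computation (a minimal sketch; SpecDec's quantization-noise compensation and histogram-shape classification rules are not reproduced):

```python
import numpy as np

def hue_angles(red, green):
    """Per-pixel hue angle (degrees) of two-channel data, independent of
    intensity: 0 = pure red, 90 = pure green, ~45 = colocalized."""
    return np.degrees(np.arctan2(green.astype(float), red.astype(float)))

red = np.array([200, 10, 120, 5])
green = np.array([10, 200, 115, 4])
ang = hue_angles(red, green)
```

Note that the dim pixel (red 5, green 4) receives a mid-range angle much like the bright colocalized pixel (120, 115), which is why low-intensity objects remain classifiable when the decision is made on hue rather than intensity.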

10.
Micro (µ‐) axial tomography is a challenging technique in microscopy which improves quantitative imaging especially in cytogenetic applications by means of defined sample rotation under the microscope objective. The advantage of µ‐axial tomography is an effective improvement of the precision of distance measurements between point‐like objects. Under certain circumstances, the effective (3D) resolution can be improved by optimized acquisition depending on subsequent, multi‐perspective image recording of the same objects followed by reconstruction methods. This requires, however, a very precise alignment of the tilted views. We present a novel feature‐based image alignment method with a precision better than the full width at half maximum of the point spread function. The features are the positions (centres of gravity) of all fluorescent objects observed in the images (e.g. cell nuclei, fluorescent signals inside cell nuclei, fluorescent beads, etc.). Thus, real alignment precision depends on the localization precision of these objects. The method automatically determines the corresponding objects in subsequently tilted perspectives using a weighted bipartite graph. The optimum transformation function is computed in a least squares manner based on the coordinates of the centres of gravity of the matched objects. The theoretically feasible precision of the method was calculated using computer‐generated data and confirmed by tests on real image series obtained from data sets of 200 nm fluorescent nano‐particles. The advantages of the proposed algorithm are its speed and accuracy, which means that if enough objects are included, the real alignment precision is better than the axial localization precision of a single object. The alignment precision can be assessed directly from the algorithm's output. 
Thus, the method can be applied not only for image alignment and object matching in tilted view series in order to reconstruct (3D) images, but also to validate the experimental performance (e.g. mechanical precision of the tilting). In practice, the key application of the method is an improvement of the effective spatial (3D) resolution, because the well-known spatial anisotropy in light microscopy can be overcome. This allows more precise distance measurements between point-like objects.
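The least-squares transformation computed from matched centres of gravity is the classic orthogonal Procrustes/Kabsch fit; a 2D numpy sketch (illustrative, assuming the paper's bipartite-graph object matching has already paired the points):

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    matched point sets src -> dst (Kabsch/Procrustes via SVD)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0., 0.], [1., 0.], [0., 2.], [3., 1.]])  # object centroids
dst = src @ R_true.T + np.array([2.0, -1.0])              # tilted view
R, t = fit_rigid_2d(src, dst)
```

With noise-free correspondences the rotation and translation are recovered exactly; with real centroid localization error, the residual of this fit directly reflects the alignment precision the abstract discusses.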

11.
Image segmentation aims to determine structures of interest inside a digital picture in the biomedical sciences. State-of-the-art automatic methods, however, still fail to provide the segmentation quality achievable by humans who employ expert knowledge and use software to mark target structures on an image. Manual segmentation is time-consuming, tedious and suffers from interoperator variability, and thus does not serve the requirements of daily use well. Therefore, the approach presented here abandons the goal of full-fledged segmentation and settles for the localization of circular objects in photographs (10 training images and 20 testing images with several hundred nuclei each). A fully trainable softcore interaction point process model was fit to the most likely locations of nuclei of meningioma cells. The Broad Bioimage Benchmark Collection/SIMCEP data set of virtual cells served as controls. A 'colour deconvolution' algorithm was integrated to determine (based on anti-Ki67 immunohistochemistry) which real cells might have the potential to proliferate. In addition, a density parameter of the underlying Bayesian model was estimated. Immunohistochemistry results were 'simulated' for the virtual cells. The system yielded true-positive (TP) rates in the detection and classification of real nuclei and their virtual counterparts that outnumbered those obtained with the public-domain image-processing software ImageJ by 10%. The method introduced here can be trained to function not only in medicine and morphology-based systems biology but in other application domains as well. The algorithm lends itself to an automated approach that constitutes a valuable, easy-to-use tool that generates acceptable results quickly.

12.
刘肖  李宏  葛立敏 《机电一体化》2009,15(8):38-40,94
Color image segmentation is an important problem in color image processing. Traditional color image segmentation relies on gray-level segmentation algorithms, ignoring the spatial visual characteristics of color and the problem of noise contamination. This paper proposes an improved method based on wavelet denoising and seeded region growing. First, wavelet denoising is applied to enhance image edge features, suppress noise and raise the signal-to-noise ratio of the original image. Second, the RGB color image is converted to HSI space for edge detection; the image is dithered to reduce the number of colors, and sequential threshold segmentation is applied to the individual components. Finally, the segmentation result is refined by a new region-growing aggregation based on color similarity. Simulation results show that the algorithm agrees better with the characteristics of human vision.
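The RGB-to-HSI conversion used before edge detection follows the standard formulas; a per-pixel sketch with channels in [0, 1] (vectorization and the wavelet-denoising stage are omitted):

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Classic RGB -> HSI conversion for one pixel; H returned in degrees."""
    i = (r + g + b) / 3.0                      # intensity
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i   # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else np.degrees(np.arccos(num / den))
    if b > g:                                  # hue lies in the lower half-circle
        h = 360.0 - h
    return h, s, i

h, s, i = rgb_to_hsi(1.0, 0.0, 0.0)            # pure red
```

Working in HSI separates chromatic content (H, S) from brightness (I), which is what lets the edge detection and thresholding in this method respect perceived color rather than raw gray level.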

13.
Zebrafish is an invaluable vertebrate model in life science research and has been widely used in biological pathway analysis, molecular screening and disease modelling, among others. As a result, microscopic imaging has become an essential step in zebrafish phenotype analysis, and image segmentation thus plays an important role in zebrafish microscopy analysis. Owing to the nonuniform intensity distribution and weak boundaries in zebrafish microscope images, traditional segmentation methods may lead to unsatisfactory results. Here, a novel hybrid method that integrates region and boundary information into an active contour model is proposed to segment zebrafish embryos from the background; it performs better than traditional segmentation models. Meanwhile, how to utilize gradient information effectively in image segmentation is still an open problem. In this paper, we improve the aforementioned hybrid method in two respects. First, the mean grey value of the background is estimated by the expectation-maximization (EM) algorithm to constrain the evolution of the active curve. Second, an edge-stopping function sensitive to gradient information is designed to stop curve evolution when the active curve reaches the embryo boundary. Experimental results show that the proposed methods provide superior segmentation results compared to existing algorithms.

14.
For complex and highly variable liver images, a 3D liver image segmentation method based on a prior sparse dictionary and hole filling is proposed. Gabor features are extracted from abdominal CT images, and image patches of equal size are selected along the gold-standard liver boundary in both the Gabor image and the gray-level image to form two training sets, from which two query dictionaries and their sparse codes are learned. The gold-standard image is registered to the image to be segmented, and the registered liver boundary serves as the initial liver boundary; within the 10-neighbourhood of the initial boundary points, two groups of equally sized patches are selected as test samples, sparse codes and reconstruction errors are computed from the test samples and the query dictionaries, and the centre of the patch with the smallest reconstruction error is taken as a boundary point of the liver. Finally, a hole-filling scheme completes and smooths the liver boundary to produce the final segmentation. The method was validated on the liver data provided by the MICCAI (Medical Image Computing and Computer-Assisted Intervention) conference. The results show that the method is well suited to and robust for liver segmentation and achieves high accuracy: an average volume overlap error of (5.21±0.45)%, an average relative volume error of (0.72±0.12)% and an average symmetric surface distance of (0.93±0.14) mm.

15.
A new method for target detection and localization with shallow-subsurface ground-penetrating radar
A new template-matching method for automatic target detection and localization in ground-penetrating radar (GPR) data is proposed, combined with image segmentation to reduce the computational cost of template matching. The radar image is segmented into regions that may contain targets and background regions, and template matching is computed only in the candidate regions. Templates are generated according to the depth of the point being matched. If a metal target is detected, the template is phase-inverted to localize the metal target accurately. The method comprises image segmentation, template generation, the matching algorithm, metal-target recognition and template phase inversion. Results on measured data show that the method detects and localizes targets accurately and robustly.
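Template matching over the candidate regions amounts to maximizing a normalized cross-correlation score; a brute-force numpy sketch on illustrative synthetic data (the depth-dependent template generation and the phase inversion for metal targets are not modelled):

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between a patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def match_template(img, template):
    """Slide the template over the image; return the best top-left corner."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            score = ncc(img[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(2)
img = rng.normal(0, 0.1, (20, 20))                 # background clutter
template = np.array([[0., 1., 0.],
                     [1., 2., 1.],
                     [0., 1., 0.]])                # toy target signature
img[8:11, 5:8] += template                         # bury the target at (8, 5)
pos, score = match_template(img, template)
```

Restricting the double loop to segmented candidate regions, as the paper does, cuts the cost of exactly this search.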

16.
Based on prior knowledge of prostate magnetic resonance imaging (MRI) features and the region-specific predilection of prostate lesions, a two-step prostate MRI segmentation method based on distance-regularized level set evolution (DRLSE) is proposed for full segmentation of the inner and outer prostate contours. Building on a unified level-set energy functional, the first step segments the outer contour from T1-weighted (longitudinal relaxation time) prostate MR images; the second step, constrained by the outer contour, segments the inner contour from T2-weighted (transverse relaxation time) images, thereby achieving complete and effective segmentation of both contours. A human-computer interface for prostate segmentation was designed, and segmentation experiments were carried out on MR images of 10 prostate cases (30 images in total, covering normal, hyperplastic and cancerous prostates). The segmentation results were evaluated with the Dice similarity coefficient (DSC), reaching values above 90%. The experiments show that the proposed two-step DRLSE method effectively segments the inner and outer prostate contours, closely approximates the ideal manual segmentations of clinical experts, and provides a useful reference for the clinical diagnosis and treatment of prostate disease.

17.
王兵  瑚琦  卞亚林 《光学仪器》2023,45(2):46-54
Semantic image segmentation requires both fine detail and rich semantic information, yet successive downsampling during feature extraction loses the spatial details of objects in the image. To address this, a dual-branch semantic segmentation algorithm is proposed that obtains rich semantic information during feature extraction while reducing the loss of object detail. One branch uses a shallow network to preserve high-resolution detail, aiding the edge segmentation of objects; the other uses a deep, downsampling network to capture semantic information, aiding category recognition. Effectively fusing the two kinds of information yields accurate per-pixel predictions. Experiments on the Cityscapes and CamVid datasets show that, compared with existing semantic segmentation algorithms, the proposed algorithm achieves better segmentation with fewer parameters.

18.
Comparative evaluation of retrospective shading correction methods
Because of the inherent imperfections of the image formation process, microscopical images are often corrupted by spurious intensity variations. This phenomenon, known as shading or intensity inhomogeneity, may have an adverse effect on automatic image processing, such as segmentation and registration. Shading correction methods may be prospective or retrospective. The former require an acquisition protocol tuned to shading correction, whereas the latter can be applied to any image, because they use only the information already present in an image. Nine retrospective shading correction methods were implemented, evaluated and compared on three sets of differently structured synthetic shaded and shading-free images and on three sets of real microscopical images acquired by different acquisition set-ups. The performance of a method was expressed quantitatively by the coefficient of joint variations between two different object classes. The results show that all methods, except the entropy minimization method, work well for certain images but perform poorly for others. The entropy minimization method outperforms the other methods in terms of reduction of true intensity variations and preservation of the intensity characteristics of shading-free images. The strength of the entropy minimization method is especially apparent when applied to images containing large-scale objects.
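Retrospective correction of multiplicative shading reduces to dividing the observed image by an estimated shading field; a numpy sketch (the field is given here for illustration, whereas the surveyed methods, e.g. entropy minimization, must estimate it from the image itself):

```python
import numpy as np

def correct_shading(img, field):
    """Multiplicative retrospective correction: divide the observed image by
    the (estimated) shading field, normalized to preserve mean brightness."""
    field = field / field.mean()
    return img / field

# Synthetic example: a true scene modulated by a smooth left-to-right ramp.
true_img = np.tile([50.0, 50.0, 200.0, 200.0, 50.0], (5, 1))
field = np.tile(np.linspace(0.6, 1.4, 5), (5, 1))   # smooth intensity ramp
observed = true_img * field                          # shaded acquisition
recovered = correct_shading(observed, field)
```

With the correct field, the two object classes (50 vs. 200) regain constant intensities, which is exactly what the coefficient of joint variations used in this evaluation measures.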

19.
Clusters or clumps of cells or nuclei are frequently observed in two-dimensional images of thick tissue sections. Correct and accurate segmentation of overlapping cells and nuclei is important for many biological and biomedical applications. Many existing algorithms split clumps through binarization of the input images, so the intensity information of the original image is lost in the process. In this paper, we present an algorithm based on curvature information, the gray-scale distance transform and shortest-path splitting lines, which makes full use of concavity and image intensity information to find markers, each representing an individual object, and to detect accurate splitting lines between objects using shortest paths and junction adjustment. The proposed algorithm is tested on both synthetic and real nuclei images. Experimental results show that the proposed method performs better than the marker-controlled watershed method and the ellipse-fitting method.

20.
In this paper, we present an automatic segmentation method that detects virus particles of various shapes in transmission electron microscopy images. The method is based on a statistical analysis of local neighbourhoods of all the pixels in the image followed by an object width discrimination and finally, for elongated objects, a border refinement step. It requires only one input parameter, the approximate width of the virus particles searched for. The proposed method is evaluated on a large number of viruses. It successfully segments viruses regardless of shape, from polyhedral to highly pleomorphic.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号