Similar Literature
20 similar articles retrieved.
1.
Clusters or clumps of cells or nuclei are frequently observed in two-dimensional images of thick tissue sections. Correct and accurate segmentation of overlapping cells and nuclei is important for many biological and biomedical applications. Many existing algorithms split clumps through binarization of the input images; therefore, the intensity information of the original image is lost during this process. In this paper, we present an algorithm based on curvature information, the gray-scale distance transform, and shortest-path splitting lines, which makes full use of concavity and image intensity information to find markers, each representing an individual object, and to detect accurate splitting lines between objects using shortest paths and junction adjustment. The proposed algorithm is tested on both synthetic and real nuclei images. Experimental results show that the performance of the proposed method is better than that of the marker-controlled watershed and ellipse fitting methods.
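As a point of reference for the comparison above, the sketch below shows the marker-controlled watershed baseline (binarize, take distance-transform peaks as markers, flood on the inverted distance map) in Python with scikit-image. It is not the curvature/shortest-path method of the paper; the function name and the min_distance value are illustrative assumptions.

```python
# Minimal marker-controlled watershed baseline for splitting clumped nuclei.
# This is NOT the paper's curvature/shortest-path method, only the baseline
# it is compared against; parameter values are illustrative assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_clump_watershed(gray, min_distance=10):
    """Binarize, find distance-transform peaks as markers, run watershed."""
    binary = gray > filters.threshold_otsu(gray)            # foreground mask
    dist = ndi.distance_transform_edt(binary)                # distance to background
    peaks = peak_local_max(dist, min_distance=min_distance,
                           labels=measure.label(binary))
    markers = np.zeros_like(gray, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one marker per peak
    return watershed(-dist, markers, mask=binary)            # split touching objects

# Example on a synthetic image of two overlapping "nuclei":
img = np.zeros((80, 80))
rr, cc = np.ogrid[:80, :80]
img[(rr - 40) ** 2 + (cc - 30) ** 2 < 15 ** 2] = 1.0
img[(rr - 40) ** 2 + (cc - 50) ** 2 < 15 ** 2] = 1.0
print(measure.label(img > 0).max(), "object before splitting,",
      split_clump_watershed(img).max(), "after")
```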

2.
Background: High content screening (HCS) via automated fluorescence microscopy is a powerful technology for generating cellular images that are rich in phenotypic information. RNA interference is a revolutionary approach for silencing gene expression and has become an important method for studying genes through RNA interference-induced cellular phenotype analysis. The convergence of the two technologies has led to large-scale, image-based studies of cellular phenotypes under systematic perturbations of RNA interference. However, existing high content screening image analysis tools are inadequate for extracting content regarding cell morphology from the complex images, and thus limit the potential of genome-wide RNA interference high content screening to simple marker readouts. In particular, over-segmentation is one of the persistent problems of cell segmentation; this paper describes a new method to alleviate this problem. Methods: To address over-segmentation, we propose a novel feedback system with a hybrid model for automated cell segmentation of high content screening images. A hybrid learning model is developed based on three scoring models to capture specific characteristics of over-segmented cells. Dead nuclei are also removed through a statistical model. Results: Experimental validation showed that the proposed method had 93.7% sensitivity and 94.23% specificity. When applied to a set of images of F-actin-stained Drosophila cells, 91.3% of over-segmented cells were detected and only 2.8% were under-segmented. Conclusions: The proposed feedback system significantly reduces over-segmentation of cell bodies caused by over-segmented nuclei, dead nuclei, and dividing cells. This system can be used in automated analysis systems for high content screening images.

3.
There is no segmentation method that performs perfectly on every dataset in comparison to human segmentation. Evaluation procedures for segmentation algorithms therefore become critical for their selection. The problems associated with segmentation performance evaluation and visual verification of segmentation results are exaggerated when dealing with thousands of three-dimensional (3D) image volumes because of the amount of computation and manual input needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative to taking existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate 'ground truth' of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations and (3) minimizing the human labour needed to create surrogate 'truth' by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. The cells reside on 10 different types of biomaterial scaffolds and are stained for actin and nucleus, yielding 128 460 image frames (on average, 125 cells/scaffold × 10 scaffold types × 2 stains × 51 frames/cell). After constructing and evaluating six candidate 3D segmentation algorithms, the most accurate algorithm achieved an average precision of 0.82 and an accuracy of 0.84 as measured by the Dice similarity index, where values greater than 0.7 indicate good spatial overlap. The probability of segmentation success was 0.85 based on visual verification, and the computation time to process all z-stacks was 42.3 h. While the most accurate segmentation technique was 4.2 times slower than the second most accurate algorithm, it consumed on average 9.65 times less memory per z-stack segmentation.
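For readers unfamiliar with the accuracy measure quoted above, the Dice similarity index has a very small implementation; the sketch below computes it for two binary masks (array names are illustrative). Values above 0.7 are conventionally read as good spatial overlap, as the abstract notes.

```python
# Dice similarity index between a candidate segmentation mask and a surrogate
# "ground truth" mask: Dice = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice_index(seg: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap of two equally shaped binary masks (2D or 3D)."""
    seg = seg.astype(bool)
    truth = truth.astype(bool)
    denom = seg.sum() + truth.sum()
    if denom == 0:                      # both masks empty: define as perfect overlap
        return 1.0
    return 2.0 * np.logical_and(seg, truth).sum() / denom

# Toy example on a small 3D volume (z-stack):
a = np.zeros((4, 16, 16), dtype=bool); a[:, 4:12, 4:12] = True
b = np.zeros_like(a);                  b[:, 5:13, 5:13] = True
print(f"Dice = {dice_index(a, b):.3f}")   # ~0.766 for this overlap
```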

4.
Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker in the diagnosis of muscular dystrophy. One primary challenge in nuclei segmentation is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. A two-step refined watershed algorithm is then applied to segment the clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. Microsc. Res. Tech. 77:547–559, 2014. © 2014 Wiley Periodicals, Inc.
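A minimal sketch of the first stages described above — local Otsu thresholding followed by extraction of the morphological features (area, compactness, major axis length) that feed the classifier — is given below using scikit-image. The Bayesian network and the two-step watershed are not reproduced; the window radius and minimum object size are illustrative assumptions, and the input is assumed to be a uint8 or [0, 1] float image.

```python
# Sketch: local Otsu extraction of nuclei plus per-object shape features.
import numpy as np
from skimage import measure, morphology, util
from skimage.filters import rank

def extract_nuclei_features(gray, radius=25):
    img = util.img_as_ubyte(gray)                        # rank filters need integer images
    local_thr = rank.otsu(img, morphology.disk(radius))  # per-pixel Otsu in a local window
    mask = img > local_thr
    mask = morphology.remove_small_objects(mask, min_size=30)
    labels = measure.label(mask)
    feats = []
    for r in measure.regionprops(labels):
        compactness = 4 * np.pi * r.area / (r.perimeter ** 2 + 1e-9)
        feats.append((r.label, r.area, compactness, r.major_axis_length))
    return labels, feats                                  # label image + classifier inputs
```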

5.
Serial block face scanning electron microscopy (SBF-SEM) is a relatively new technique that allows the acquisition of serially sectioned, imaged and digitally aligned ultrastructural data. There is a wealth of information that can be obtained from the resulting image stacks, but this presents a new challenge for researchers – how to computationally analyse and make best use of the large datasets produced. One approach is to reconstruct structures and features of interest in 3D. However, the software programmes can appear overwhelming, time-consuming and unintuitive for those new to image analysis. There are a limited number of published articles that provide sufficient detail on how to do this type of reconstruction. The aim of this paper is therefore to provide a detailed step-by-step protocol, accompanied by tutorial videos, for several types of analysis programmes that can be used on raw SBF-SEM data, although more options are available than can be covered here. To showcase the programmes, datasets of skeletal muscle from foetal and adult guinea pigs are initially used, with the procedures subsequently applied to guinea pig cardiac tissue and locust brain. The tissue is processed using the heavy metal protocol developed specifically for SBF-SEM. Trimmed resin blocks are placed into a Zeiss Sigma SEM incorporating the Gatan 3View, and the resulting image stacks are analysed in three different programmes, Fiji, Amira and MIB, using a range of tools available for segmentation. The results of the image analysis comparison show that the analysis tools are often better suited to a particular type of structure. For example, larger structures, such as nuclei and cells, can be segmented using interpolation, which speeds up analysis; single-contrast structures, such as the nucleolus, can be segmented using the contrast-based thresholding tools. Knowing the nature of the tissue and its specific structures (complexity, contrast, presence of distinct membranes, size) will help to determine the best method for reconstruction and thus maximize informative output from valuable tissue.

6.
Segmentation of medical images is a complex problem owing to the large variety of their characteristics. In the automated analysis of breast cancers, two image classes may be distinguished according to whether one considers the quantification of DNA (grey-level images of isolated nuclei) or the detection of immunohistochemical staining (colour images of histological sections). The study of these image classes generally involves the use of largely different image processing techniques. We therefore propose a new algorithm, derived from the watershed transformation, that enables us to solve these two segmentation problems with the same general approach. We then present visual and quantitative results to validate our method.

7.
Variable-class image segmentation based on regular partitioning and RJMCMC
王玉, 李玉, 赵泉华. 《仪器仪表学报》 2015, 36(6): 1388–1396
In order to automatically determine the number of classes in remote sensing image segmentation, a variable-class image segmentation method combining regular partitioning with the reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed. First, the image domain is partitioned into a number of regular sub-blocks, and the pixels within each sub-block are assumed to follow the same independent multivariate Gaussian distribution. On this basis, a region-based image segmentation model is established according to Bayes' theorem. The RJMCMC algorithm is then used to simulate the segmentation model in order to automatically determine the number of image classes and achieve region segmentation. To further improve segmentation accuracy, a refinement operation is designed. The proposed method is applied to variable-class segmentation of both synthetic and colour remote sensing images; the experimental results show that it not only determines the number of image classes automatically but also achieves region segmentation, which verifies the feasibility and effectiveness of the proposed algorithm.

8.
Medical image segmentation demands higher segmentation accuracy, especially when the images are affected by noise. This paper proposes a novel technique to segment medical images efficiently using intuitionistic fuzzy divergence-based thresholding. A neighbourhood-based membership function is defined here. The intuitionistic fuzzy divergence-based image thresholding technique using the neighbourhood-based membership functions yields less degradation of segmentation performance in noisy environments. Its ability to handle noisy images has been validated. The algorithm is independent of any parameter selection. Moreover, it provides robustness to both additive and multiplicative noise. The proposed scheme has been applied to three types of medical image datasets in order to establish its novelty and generality. The performance of the proposed algorithm has been compared with other standard algorithms, viz. Otsu's method, fuzzy C-means clustering, and fuzzy divergence-based thresholding, with respect to (1) noise-free images and (2) ground truth images labelled by experts/clinicians. Experiments show that the proposed methodology is effective, more accurate and more efficient for segmenting noisy images.
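To illustrate the family of methods the abstract builds on, the sketch below implements plain fuzzy-divergence threshold selection: each candidate threshold assigns every pixel a fuzzy membership to its class based on distance from the class mean, and the threshold minimising the divergence from an ideally segmented image (membership 1 everywhere) is kept. This is not the proposed intuitionistic, neighbourhood-based variant; the membership and divergence forms used here are one common choice, and the code assumes a quantised (e.g. 8-bit) image.

```python
# Rough sketch of plain fuzzy-divergence threshold selection (not the
# intuitionistic, neighbourhood-based method of the paper).
import numpy as np

def fuzzy_divergence_threshold(gray):
    g = gray.astype(float)
    c = g.max() - g.min() + 1e-9                    # normalising constant
    best_t, best_d = None, np.inf
    for t in np.unique(g)[1:-1]:                    # candidate thresholds
        lo, hi = g[g <= t], g[g > t]
        mu = np.where(g <= t,
                      np.exp(-np.abs(g - lo.mean()) / c),
                      np.exp(-np.abs(g - hi.mean()) / c))
        # divergence of each pixel's membership from the ideal value 1
        d = np.sum(2 - (2 - mu) * np.exp(mu - 1) - mu * np.exp(1 - mu))
        if d < best_d:
            best_t, best_d = t, d
    return best_t
```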

9.
With the rapid advance of three-dimensional (3D) confocal imaging technology, more and more 3D cellular images will become available. Segmentation of intact cells is a critical task in automated image analysis and quantification of cellular microscopic images. One of the major complications in the automatic segmentation of cellular images arises from the fact that cells are often closely clustered. Several algorithms have been proposed for segmenting cell clusters, but most of them are 2D-based; in other words, these algorithms are designed to segment 2D cell clusters from a single image. Given the 2D segmentation methods developed, they can certainly be applied to each image slice within the 3D cellular volume to obtain segmented cell clusters. In such a case, however, the 3D depth information in the volumetric images is not really used. Often, 3D reconstruction is conducted after the slice-by-slice segmentation to build 3D cellular models from the segmented 2D cellular contours. Such a 2D-native process is not appropriate, as stacking individually segmented 2D cells or nuclei does not necessarily form correct and complete 3D cells or nuclei. This paper proposes a novel and efficient 3D cluster splitting algorithm based on concavity analysis and interslice spatial coherence. We take advantage of 3D boundary points, detected using higher-order statistics, as the input contour for the 3D cluster splitting algorithm. The idea is to separate touching or overlapping cells or nuclei in a 3D-native way. Experimental results show the efficiency of our algorithm for 3D microscopic cellular images.

10.
A region growing algorithm for the segmentation of human intestinal gland images is presented. The initial seeding regions are identified from the large vacant regions (lumen) inside the intestinal glands by fitting with a very large moving window. The seeding regions are then expanded by repetitive application of a morphological dilation operation with a much smaller round structuring element. False gland regions (non-gland regions initially misclassified as gland regions) are removed based on either their excessive age of active growth or the inadequate thickness of the dams formed by the strings of goblet cell nuclei sitting immediately outside the grown regions. The goblet cell nuclei are then identified and retained in the image. The gland contours are detected by fitting a large moving round window to the largely empty exterior of the goblet cell nucleus chains in the image. The assumptions, based on real intestinal gland images, are that the goblet cell nuclei form closed chains, sitting side by side with only small gaps between neighbouring nuclei, and that the lumens enclosed by these chains are mostly vacant, with only occasional stray nuclei. The method performs well for most normal and abnormal intestinal gland images, although it is less applicable to cancer cases. The experimental results show that the segmentations of real microscopic intestinal gland images are satisfactorily accurate based on visual evaluation.
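The core growth step — expanding seed regions by repeated dilation with a small round structuring element while the goblet-cell nuclei act as dams — can be sketched with SciPy as below. Seed detection, false-gland removal and contour extraction are not reproduced; the element radius and iteration cap are illustrative assumptions.

```python
# Generic seeded growth by iterative masked dilation; nuclei block the growth.
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import disk

def grow_gland_regions(seed_mask, nucleus_mask, radius=3, max_iter=200):
    """Grow seeds through non-nucleus pixels until growth stops."""
    allowed = ~nucleus_mask                     # nuclei act as dams against growth
    grown = seed_mask & allowed
    footprint = disk(radius)
    for _ in range(max_iter):
        nxt = ndi.binary_dilation(grown, structure=footprint, mask=allowed)
        if np.array_equal(nxt, grown):          # converged: no pixel added
            break
        grown = nxt
    return grown
```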

11.
In this paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. In spite of the existence of many algorithms for image data partitioning, there is as yet no universal 'best' method. Moreover, images of microscopic samples can vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on a testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. Finally, the benefit of a segmentation combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category – biological samples – is shown.

12.
Focused ion beam tomography has proven capable of imaging porous structures at the nanoscale. However, due to shine-through artefacts, common segmentation algorithms often lead to severe dislocation of individual structures in the z-direction. Recently, a number of approaches have been developed that take into account the specific nature of focused ion beam-scanning electron microscope images of porous media. In the present study, we analyse three of these approaches by comparing their performance on simulated focused ion beam-scanning electron microscope images. Performance is measured by determining the number of misclassified voxels as well as the fidelity of structural characteristics. Based on this analysis, we conclude that each algorithm has certain strengths and weaknesses, and we determine the scenarios for which each approach might be the best choice.

13.
Image-based, high throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei segmentation and cytoplasm segmentation. Nuclei are extracted and labelled to initialize the cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the images in order to extract the long and thin protrusions of spiky cells. Then, a constraint-factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using seeded watershed, and against the ground truth (manual labelling by experts on the RNAi screening data), our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. These positive results indicate that the proposed method can be applied in the automatic image analysis of multi-channel image screening data.

14.
In order to develop an objective grading system for nuclear atypia in breast cancer, an image analysis technique has been applied for the automated recognition of enlarged and hyperchromatic nuclei in cytology specimens. The image segmentation algorithm, based on the 'top hat' image transformation developed in mathematical morphology, is implemented on the LEYTAS automated microscope system. The performance of the segmentation algorithm has been evaluated for fifty malignant and eighty-five benign breast lesions by visual inspection of the displayed 'flagged' objects. The mean number of flagged objects per 1600 image fields for breast cancers was 887 (range 0–7920), of which 87% consisted of single, atypical nuclei. For benign lesions the mean number was 30 (range 0–307), of which 20% were single nuclei. By adaptation of the 'top hat' parameter values, a more extreme subpopulation of atypical nuclei could be discriminated. The large interspecimen variation in the breast cancer results was related to differences in DNA content distribution and mean nuclear area, determined independently with scanning cytophotometry, and to some extent to the histological type.
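The flagging idea can be illustrated with a morphological top-hat in scikit-image: a black top-hat with a footprint on the order of the expected nucleus size responds to dark (hyperchromatic) objects smaller than the footprint, and strong, large responses are flagged. This is only a sketch in the spirit of the transformation described; the radius, response threshold and minimum area are illustrative assumptions, not LEYTAS settings.

```python
# Morphological top-hat flagging of dark, enlarged nuclei on a bright background.
import numpy as np
from skimage import measure
from skimage.morphology import black_tophat, disk

def flag_atypical_nuclei(gray, radius=20, min_response=0.15, min_area=100):
    response = black_tophat(gray, disk(radius))     # residue of dark objects < footprint
    mask = response > min_response                  # keep strong (hyperchromatic) responses
    labels = measure.label(mask)
    flagged = [r for r in measure.regionprops(labels) if r.area >= min_area]
    return labels, flagged                          # enlarged, dark candidate nuclei
```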

15.
Segmentation of objects from a noisy and complex image is still a challenging task that needs to be addressed. This article proposes a new method to detect and segment nuclei and to determine whether or not they are malignant (determination of the region of interest, noise removal, image enhancement, candidate detection using the centroid transform to estimate the centroid of each object, and level set [LS] segmentation of the nuclei). The proposed method consists of three main stages: preprocessing, seed detection, and segmentation. The preprocessing stage prepares the image conditions to ensure that they meet the segmentation requirements. Seed detection finds the seed points to be used in the segmentation stage, which segments the nuclei using the LS method. In this research work, 58 H&E breast cancer images from the UCSB Bio-Segmentation Benchmark dataset are evaluated. The proposed method shows high performance and accuracy in comparison with the techniques reported in the literature. The experimental results are also consistent with the ground truth images.
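A hedged sketch of the seed-detection plus level-set idea is shown below: a seed is taken at the distance-transform maximum of a rough foreground mask, a small disk around it initialises the level set, and scikit-image's morphological Chan–Vese evolution stands in for the LS stage. The preprocessing and enhancement steps of the paper are not reproduced, and the seed radius and iteration count are illustrative assumptions.

```python
# Seed at the deepest interior point, then evolve a morphological Chan-Vese
# level set from a small disk around that seed (a stand-in for the LS stage).
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import morphological_chan_vese

def segment_nucleus_levelset(gray, seed_radius=5, n_iter=100):
    rough = gray > threshold_otsu(gray)                    # rough foreground mask
    dist = ndi.distance_transform_edt(rough)
    seed = np.unravel_index(np.argmax(dist), dist.shape)   # deepest interior point
    rr, cc = np.ogrid[:gray.shape[0], :gray.shape[1]]
    init = (rr - seed[0]) ** 2 + (cc - seed[1]) ** 2 <= seed_radius ** 2
    return morphological_chan_vese(gray, n_iter, init_level_set=init.astype(np.uint8))
```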

16.
Combining scanning electron microscopy with serial slicing by a focused ion beam yields spatial image data of material structures at the nanometer scale. However, the depth of field of the scanning electron microscopic images causes unwanted effects when highly porous structures are imaged. Proper spatial reconstruction of such porous structures from the stack of microscopic images is a difficult and, in general, still unsolved segmentation problem. Recently, machine learning methods have proven to yield solutions to a variety of image segmentation problems. However, their use is hindered by the need for large amounts of annotated data in the training phase. Here, we therefore replace annotated real image data by simulated image stacks of synthetic structures – realizations of stochastic germ–grain models and random packings. This strategy yields the annotations for free, but shifts the effort to choosing appropriate stochastic geometry models and generating sufficiently realistic scanning electron microscopic images.
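The 'annotations for free' strategy can be made concrete with a few lines of NumPy: a realization of a Boolean (germ–grain) model — random disks at uniformly placed germs — is a synthetic structure whose ground-truth labelling is known exactly by construction. Rendering realistic SEM contrast and shine-through on top of such a phantom is the harder step and is not shown; all sizes below are illustrative assumptions.

```python
# Realization of a 2D Boolean (germ-grain) model as a synthetic training phantom.
import numpy as np

def boolean_model_phantom(shape=(256, 256), n_grains=150,
                          r_min=4, r_max=12, seed=0):
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n_grains):                       # drop grains at uniform germs
        cy, cx = rng.uniform(0, shape[0]), rng.uniform(0, shape[1])
        r = rng.uniform(r_min, r_max)
        mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    return mask                                      # exact ground truth by construction

phantom = boolean_model_phantom()
print(f"solid volume fraction: {phantom.mean():.2f}")
```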

17.
Segmentation of intact cell nuclei from three-dimensional (3D) images of thick tissue sections is an important basic capability necessary for many biological research studies. However, segmentation is often difficult because of the tight clustering of nuclei in many specimen types. We present a 3D segmentation approach that combines the recognition capabilities of the human visual system with the efficiency of automatic image analysis algorithms. The approach first uses automatic algorithms to separate the 3D image into regions of fluorescence-stained nuclei and unstained background. This includes a novel step, based on the Hough transform and an automatic focusing algorithm, to estimate the size of nuclei. Then, using an interactive display, each nuclear region is shown to the analyst, who classifies it as either an individual nucleus, a cluster of multiple nuclei, a partial nucleus or debris. Next, automatic image analysis based on morphological reconstruction and the watershed algorithm divides clusters into smaller objects, which are reclassified by the analyst. Once no more clusters remain, the analyst indicates which partial nuclei should be joined to form complete nuclei. The approach was assessed by calculating the fraction of correctly segmented nuclei for a variety of tissue types: Caenorhabditis elegans embryos (839 correct out of a total of 848), normal human skin (343/362), benign human breast tissue (492/525), a human breast cancer cell line grown as a xenograft in mice (425/479) and invasive human breast carcinoma (260/335). Furthermore, owing to the analyst's involvement in the segmentation process, it is always known which nuclei in a population are correctly segmented and which are not, assuming that the analyst's visual judgement is correct.
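The Hough-based size-estimation step can be sketched in 2D with scikit-image: edges vote in a circular Hough accumulator over a range of candidate radii, and the radii of the strongest peaks give a typical nucleus size. The paper's 3D setting and the automatic focusing algorithm are not reproduced; the radius range and peak count below are illustrative assumptions.

```python
# Estimate a typical nucleus radius via the circular Hough transform.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def estimate_nucleus_radius(gray, radii=np.arange(8, 30)):
    edges = canny(gray, sigma=2.0)                   # edge map votes in the accumulator
    hspaces = hough_circle(edges, radii)
    accums, _, _, fit_radii = hough_circle_peaks(hspaces, radii,
                                                 total_num_peaks=20)
    return int(np.median(fit_radii)), accums         # typical radius + peak strengths
```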

18.
New microscopy technologies are enabling the acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21 000 × 21 000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and to several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2), and all fail the requirement for robust parameters that do not require re-adjustment over time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17 479 images. This method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/.
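The underlying gradient-thresholding idea is only a few lines; the sketch below thresholds the Sobel gradient magnitude, fills holes and removes small objects to obtain a foreground mask. It is not EGT itself — the paper's contribution is the empirical, parameter-free selection of that threshold — so the percentile and minimum object size used here are illustrative assumptions.

```python
# Simplified gradient-threshold foreground/background segmentation.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.morphology import remove_small_objects

def gradient_threshold_segment(gray, percentile=75, min_size=200):
    grad = sobel(gray)                                   # gradient magnitude
    mask = grad > np.percentile(grad, percentile)        # keep high-gradient pixels
    mask = ndi.binary_fill_holes(mask)                   # close cell/colony interiors
    return remove_small_objects(mask, min_size=min_size)
```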

19.
A semi-automated imaging system is described to quantitate estrogen and progesterone receptor immunoreactivity in human breast cancer. The system works with any conventional method of image acquisition using microscopic slides that have been processed for immunohistochemical analysis of the estrogen receptor and progesterone receptor. Estrogen receptor and progesterone receptor immunohistochemical staining produce colorimetric differences in nuclear staining that conventionally have been interpreted manually by pathologists and expressed as the percentage of positive tumoral nuclei. The estrogen receptor and progesterone receptor status of human breast cancer represent important prognostic and predictive markers that dictate therapeutic decisions, but their subjective interpretation results in interobserver, intraobserver and fatigue variability. Subjective measurements are traditionally limited to a determination of the percentage of tumoral nuclei that show positive immunoreactivity. To address these limitations, imaging algorithms utilizing both colorimetric (RGB) and intensity (gray-scale) determinations were used to analyze pixels of the acquired image. Image acquisition utilized either a scanner or a microscope with an attached digital or analogue camera capable of producing images with a resolution of 20 pixels/10 μm. Areas of each image were screened, and the area of interest richest in tumour cells was manually selected for image processing. Images were processed initially by JPG conversion of SVS scanned virtual slides or by direct JPG photomicrograph capture. Following image acquisition, images were screened for quality, enhanced and processed. The algorithm-based values for estrogen receptor and progesterone receptor percentage nuclear positivity both strongly correlated with the subjective measurements (intraclass correlation: 0.77; 95% confidence interval: 0.59, 0.95) yet exhibited no interobserver, intraobserver or fatigue variability. In addition, the algorithms provided measurements of nuclear estrogen receptor and progesterone receptor staining intensity (mean, mode and median staining intensity of positively staining nuclei), parameters that subjective review could not assess. Other semi-automated image analysis systems have been used to measure estrogen receptor and progesterone receptor immunoreactivity, but these have either required proprietary hardware or been based on luminosity differences alone. By contrast, our algorithms are independent of proprietary hardware and are based not just on luminosity and colour but also on many other imaging features, including epithelial pattern recognition and nuclear morphology. These features provide a more accurate, versatile and robust imaging analysis platform that can be fully automated in the near future. Because of all these properties, our semi-automated imaging system 'adds value' as a means of measuring these important nuclear biomarkers of human breast cancer.
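A much-simplified sketch of the colorimetric measurement is given below using colour deconvolution in scikit-image: the haematoxylin channel marks nuclei, the DAB channel marks positive staining, and a pixel-area approximation of percentage positivity plus a DAB intensity statistic are reported. The paper's epithelial pattern recognition and nuclear morphology features are not reproduced, the positivity is counted per pixel rather than per nucleus, and both channel thresholds are illustrative assumptions.

```python
# Pixel-level ER/PR positivity estimate via H-E-D colour deconvolution.
import numpy as np
from skimage.color import rgb2hed

def er_pr_positivity(rgb, hema_thr=0.03, dab_thr=0.03):
    hed = rgb2hed(rgb)                            # stain-separated channels (H, E, DAB)
    nuclei = hed[..., 0] > hema_thr               # haematoxylin: any nucleus
    positive = nuclei & (hed[..., 2] > dab_thr)   # DAB on a nucleus: positive staining
    pct = 100.0 * positive.sum() / max(nuclei.sum(), 1)
    mean_intensity = float(hed[..., 2][positive].mean()) if positive.any() else 0.0
    return pct, mean_intensity                    # % positive area, mean DAB intensity
```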

20.
We present a region-based segmentation method in which seeds representing both object and background pixels are created by combining morphological filtering of both the original image and the gradient magnitude of the image. The seeds are then used as starting points for watershed segmentation of the gradient magnitude image. The fully automatic seeding is done in a generous fashion, so that at least one seed will be set in each foreground object. If more than one seed is placed in a single object, the watershed segmentation will lead to an initial over-segmentation, i.e. a boundary is created where there is no strong edge. Thus, the result of the initial segmentation is further refined by merging based on the gradient magnitude along the boundary separating neighbouring objects. This step also makes it easy to remove objects with poor contrast. As a final step, clusters of nuclei are separated, based on the shape of the cluster. The number of input parameters to the full segmentation procedure is only five. These parameters can be set manually using a test image and thereafter be used on a large number of images created under similar imaging conditions. This automated system was verified by comparison with manual counts from the same image fields. About 90% correct segmentation was achieved for two- as well as three-dimensional images.
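The seeding-plus-watershed core described above can be sketched as follows with scikit-image: regional maxima of the smoothed image serve as object seeds (a simple stand-in for the morphological seed construction), low-intensity pixels provide a background seed, and the watershed runs on the gradient magnitude. The gradient-based merging and shape-based cluster separation are not reproduced; sigma and h are illustrative assumptions.

```python
# Generous automatic seeding followed by watershed on the gradient magnitude.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, sobel, threshold_otsu
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def seeded_watershed(gray, sigma=2.0, h=0.05):
    smooth = gaussian(gray, sigma=sigma)
    seeds, _ = ndi.label(h_maxima(smooth, h))            # one seed per bright object
    background = smooth < threshold_otsu(smooth)
    seeds[background & (seeds == 0)] = seeds.max() + 1   # single background seed
    return watershed(sobel(smooth), seeds)               # may over-segment; merge later
```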
