Similar Articles

20 similar articles found (search time: 31 ms).
1.
The presence of systematic noise in images in high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate its performance at two levels: (a) the Z′-factor, to evaluate the effect of the image correction on a univariate readout representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments.
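The paper's full pipeline ships with its data, but the core retrospective step it describes — estimating a smooth illumination function from many images of one channel and dividing it out — can be sketched as follows (function names, the Gaussian-smoothing choice and the `sigma` value are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_illumination(images, sigma=50):
    # Per-pixel mean over many images from one channel, heavily
    # smoothed so only the slowly varying illumination field remains.
    mean_img = np.mean(np.stack(images, axis=0), axis=0)
    illum = gaussian_filter(mean_img, sigma=sigma)
    return illum / illum.mean()  # normalize to unit mean

def correct_illumination(image, illum, eps=1e-6):
    # Retrospective multiplicative correction: divide the field out.
    return image / np.maximum(illum, eps)
```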

2.
Image-based, high-throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variation in image quality, which makes manual analysis unreasonably time-consuming. Effective techniques for automatic image analysis are therefore urgently needed, and segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei segmentation and cytoplasm segmentation. Nuclei are extracted and labelled to initialize the cytoplasm segmentation. Since RNAi image quality is rather poor, a novel scale-adaptive steerable filter is designed to enhance the images so that long, thin protrusions on spiky cells can be extracted. A constraint-factor GCBAC method and morphological algorithms are then combined into an integrated method to segment tightly clustered cells. Compared with results obtained using seeded watershed and with the ground truth (manual labelling by experts on RNAi screening data), our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. These positive results indicate that the proposed method can be applied in the automatic image analysis of multi-channel image screening data.

3.
With the availability of high-throughput imaging machines and large numbers of zebrafish embryos, zebrafish are clearly among the most cost-effective vertebrate systems for high-throughput or high-content screens, with applications in drug discovery and biological pathway analysis. Given the tremendous volume of images generated by large numbers of zebrafish screens, computerized image analysis becomes essential for accurate and efficient data interpretation. This paper presents an automated algorithm for a high-throughput screening pipeline for quantification of zebrafish somites. First, the main body is segmented using the level set method; then the head is removed; after that, the body is aligned and a coherence-enhancing filter is applied to facilitate somite detection. Finally, the somites can be easily extracted. Preliminary evaluation results are reported to demonstrate the good performance of the algorithm.
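The body-segmentation step could be prototyped with scikit-image's morphological Chan–Vese level set (a sketch only; the paper's exact level-set formulation, initialization and head-removal step are not reproduced):

```python
from skimage.segmentation import morphological_chan_vese

def segment_zebrafish_body(image, iterations=100):
    # Morphological Chan-Vese evolves a level set towards a two-phase
    # partition of the image; returns a binary body mask.
    return morphological_chan_vese(image, iterations, smoothing=3)
```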

4.
Background: High content screening (HCS) via automated fluorescence microscopy is a powerful technology for generating cellular images that are rich in phenotypic information. RNA interference is a revolutionary approach for silencing gene expression and has become an important method for studying genes through RNA interference-induced cellular phenotype analysis. The convergence of the two technologies has led to large-scale, image-based studies of cellular phenotypes under systematic perturbations of RNA interference. However, existing high content screening image analysis tools are inadequate for extracting content regarding cell morphology from these complex images, limiting genome-wide RNA interference high content screening to simple marker readouts. In particular, over-segmentation is one of the persistent problems in cell segmentation; this paper describes a new method to alleviate it. Methods: To address over-segmentation, we propose a novel feedback system with a hybrid model for automated cell segmentation of high content screening images. A hybrid learning model is developed based on three scoring models to capture specific characteristics of over-segmented cells. Dead nuclei are also removed through a statistical model. Results: Experimental validation showed that the proposed method had 93.7% sensitivity and 94.23% specificity. When applied to a set of images of F-actin-stained Drosophila cells, 91.3% of over-segmented cells were detected and only 2.8% were under-segmented. Conclusions: The proposed feedback system significantly reduces over-segmentation of cell bodies caused by over-segmented nuclei, dead nuclei and dividing cells. This system can be used in automated analysis systems for high content screening images.

5.
Robotic, high-throughput microscopy is a powerful tool for small-molecule screening and for classifying cell phenotype, proteomic and genomic data. An important hurdle in the field is the automated classification and visualization of results collected from data sets of tens of thousands of images. We present a method, with supporting open-source code, that approaches these problems from the perspective of flow cytometry. Image analysis software was created that allows high-throughput microscopy data to be analysed in a manner similar to flow cytometry. Each cell in an image is treated as an object, and a series of gates, as in flow cytometry, is used to classify and quantify cell properties, including size and fluorescence intensity. The method is released with open-source software and code that demonstrates its implementation. The accuracy of the software was determined by measuring the level of apoptosis in a primary murine myoblast cell line after exposure to staurosporine and comparing the results to flow cytometry.
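The gating idea — treating each segmented cell as a flow-cytometry event and filtering sequentially on size and then intensity — might look like this in outline (gate values and function names are placeholders, not the published software's API):

```python
import numpy as np
from skimage.measure import label, regionprops

def gate_objects(mask, fluor, min_area=100, max_area=2000, intensity_gate=500.0):
    # Each connected component in the mask is one "event".
    cells = regionprops(label(mask), intensity_image=fluor)
    # Gate 1: size, analogous to a scatter gate.
    sized = [c for c in cells if min_area <= c.area <= max_area]
    # Gate 2: mean fluorescence intensity, analogous to a marker gate.
    positive = [c for c in sized if c.mean_intensity > intensity_gate]
    return len(positive), len(sized)  # e.g. apoptotic count vs. total
```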

6.
P. Jin & X. Li, Journal of Microscopy, 2015, 260(3): 268–280
Continuous research on small-scale mechanical structures and systems has created strong demand for ultrafine deformation and strain measurements. Conventional optical microscopes cannot meet such requirements owing to their limited spatial resolution, so the high-resolution scanning electron microscope has become the preferred instrument for high-spatial-resolution imaging and measurement. However, scanning electron microscope images are usually contaminated by distortion and drift aberrations, which introduce serious errors into the precise imaging and measurement of tiny structures. This paper develops a new method to correct the drift and distortion aberrations of scanning electron microscope images, and evaluates the effect of the correction by comparing corrected images with a scanning electron microscope image of a standard sample. The drift correction is based on an interpolation scheme: a series of images is captured at one location of the sample, and image correlation between the first image and each subsequent image is used to interpolate the drift–time relationship of the scanning electron microscope images. The distortion correction applies the axial-symmetry model of charged-particle imaging theory to two images of the same location on one object taken under different imaging fields of view; the difference between these two images, apart from rigid displacement, yields the distortion parameters. Third-order precision is considered in the model, and experiments show a maximum correction of one pixel for the high-resolution electron microscope system employed.
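The interpolation-based drift estimate can be sketched with a standard cross-correlation registration routine (a simplified linear-drift fit under assumed parameters; the paper's interpolation scheme may differ):

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def estimate_drift_rate(frames, dt):
    # Register every frame to the first one; phase_cross_correlation
    # returns the (row, col) shift in pixels.
    shifts = np.array([phase_cross_correlation(frames[0], f)[0] for f in frames])
    t = np.arange(len(frames)) * dt
    # Fit shift vs. time to get a drift velocity (px/s) for each axis.
    vy = np.polyfit(t, shifts[:, 0], 1)[0]
    vx = np.polyfit(t, shifts[:, 1], 1)[0]
    return vy, vx  # interpolate drift at any acquisition time from these
```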

7.
New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21 000×21 000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and to several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines and a wide range of foreground/background densities (requirement 2), and all fail the requirement for robust parameters that do not need re-adjustment over time (requirement 5). We present a novel, empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference data set with a 10-fold cross-validation method. EGT segments cells or colonies with resulting Dice accuracy index measurements above 0.92 for all cross-validation data sets. EGT results have also been visually verified on a much larger data set that includes bright field and Differential Interference Contrast (DIC) images, 16 cell lines and 61 time-sequence data sets, for a total of 17 479 images. The method is implemented as an open-source plugin to ImageJ as well as a standalone executable that can be downloaded from the following link: https://isg.nist.gov/ .
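EGT's empirically derived threshold rule comes from its cross-validation on the reference data and is not reproduced here; a generic gradient-threshold segmentation of the kind it improves upon looks like this (the fixed percentile is a placeholder assumption, exactly the sort of hand-tuned parameter EGT eliminates):

```python
import numpy as np
from scipy import ndimage

def gradient_threshold_segment(image, percentile=90.0):
    img = image.astype(float)
    # Sobel gradient magnitude; strong gradients mark cell/colony edges.
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    thr = np.percentile(grad, percentile)  # EGT derives this empirically
    mask = ndimage.binary_fill_holes(grad > thr)
    return ndimage.binary_opening(mask, iterations=2)  # remove speckle
```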

8.
Comparative evaluation of retrospective shading correction methods
Because of the inherent imperfections of the image formation process, microscopical images are often corrupted by spurious intensity variations. This phenomenon, known as shading or intensity inhomogeneity, may have an adverse effect on automatic image processing, such as segmentation and registration. Shading correction methods may be prospective or retrospective. The former require an acquisition protocol tuned to shading correction, whereas the latter can be applied to any image, because they use only the information already present in an image. Nine retrospective shading correction methods were implemented, evaluated and compared on three sets of differently structured synthetic shaded and shading-free images and on three sets of real microscopical images acquired by different acquisition set-ups. The performance of a method was expressed quantitatively by the coefficient of joint variations between two different object classes. The results show that all methods, except the entropy minimization method, work well for certain images but perform poorly for others. The entropy minimization method outperforms the other methods in terms of reduction of true intensity variations and preservation of the intensity characteristics of shading-free images. Its strength is especially apparent when applied to images containing large-scale objects.
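A minimal rendering of the entropy-minimization idea — fitting a smooth multiplicative shading surface whose removal minimizes the intensity entropy of the image — under the assumption of a second-order polynomial shading model (the evaluated method's exact parameterization may differ):

```python
import numpy as np
from scipy.optimize import minimize

def _entropy(img, bins=256):
    p, _ = np.histogram(img, bins=bins, density=True)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def correct_shading(img):
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xx = (xx - w / 2) / w
    yy = (yy - h / 2) / h
    basis = np.stack([xx, yy, xx * yy, xx**2, yy**2])  # polynomial terms

    def cost(c):
        shading = 1.0 + np.tensordot(c, basis, axes=1)
        return _entropy(img / np.clip(shading, 0.1, None))

    res = minimize(cost, np.zeros(5), method="Nelder-Mead")
    shading = 1.0 + np.tensordot(res.x, basis, axes=1)
    return img / np.clip(shading, 0.1, None)
```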

9.
Image analysis is an important tool for characterizing nano/micro network structures. To understand the connection, organization and alignment of network structures, knowledge of the segments that represent the materials in the image is essential. Image segmentation is generally carried out using statistical methods. In this study, we developed a simple and reliable entropy-based masking method that improves the performance of the indicator kriging method. The method selectively chooses important pixels in an image (optical or electron microscopy image) according to the amount of information they carry, to assist the thresholding step. Reasonable threshold values can thus be obtained even for a complex network image composed of extremely large numbers of thin, narrow objects, and the overall segmentation improves as the number of disconnected objects in the network is minimized. Moreover, we also propose a new scheme for analyzing high-resolution images on a large scale: time-consuming steps such as covariance estimation are performed on a low-resolution image obtained by an affine transformation of the high-resolution image, while the segmentation itself is executed on the original high-resolution image. This low-resolution, entropy-based masking significantly decreases the analysis time without sacrificing accuracy.

10.
Segmentation of crossing fibres is a complex problem in image processing. In the present paper, various solutions are presented based on tools of morphological image processing. Two new image transforms are introduced – the lineal distance transform and the chord length transform. Both transforms are applied to two-dimensional images and their results are three-dimensional images. Thus, the segmentation problem originally formulated for crossing fibres observed in a two-dimensional image can be reformulated, and solved, as a segmentation problem in a three-dimensional image. Algorithms for the lineal distance transform and the chord length transform are given and their use in image analysis is demonstrated. Furthermore, the chord length distribution function of the foreground of a binary image can be estimated efficiently via the chord length transform.
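One direction-slice of the chord length transform can be computed with a simple run-length scan; the full transform stacks such slices over a set of directions to form the three-dimensional result (a naive sketch, horizontal chords only):

```python
import numpy as np

def horizontal_chord_lengths(binary):
    # Assign every foreground pixel the length of the horizontal
    # foreground run (chord) that contains it.
    out = np.zeros(binary.shape, dtype=int)
    for i, row in enumerate(binary):
        j = 0
        while j < len(row):
            if row[j]:
                k = j
                while k < len(row) and row[k]:
                    k += 1
                out[i, j:k] = k - j  # one chord, length k - j
                j = k
            else:
                j += 1
    return out
```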

11.
Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker in the diagnosis of muscular dystrophy. A primary challenge in nuclei segmentation is to correctly separate clustered nuclei. In this article, we develop an image processing pipeline to automatically detect, segment and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background using a local Otsu threshold. Based on morphological features of the isolated nuclei, including area, compactness and major axis length, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all images. A two-step refined watershed algorithm is then applied to segment the clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmentation results with those of manual analysis and an existing technique, we find that our proposed pipeline achieves good performance with high accuracy and precision. The presented pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. Microsc. Res. Tech. 77:547–559, 2014. © 2014 Wiley Periodicals, Inc.
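The clustered-nuclei step follows the classic distance-transform watershed pattern; this sketch uses a global Otsu threshold for brevity and does not reproduce the paper's two-step refinement or Bayesian isolated/clustered classifier:

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_clustered_nuclei(image, min_distance=10):
    mask = image > threshold_otsu(image)
    dist = ndimage.distance_transform_edt(mask)   # peaks ~ nucleus centres
    peaks = peak_local_max(dist, min_distance=min_distance, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=mask)   # one label per nucleus
```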

12.
The width of the emission spectrum of a common fluorophore allows only a limited number of spectrally distinct fluorescent markers in the visible spectrum, which is also the regime where CCD cameras are used in microscopy. For imaging of cells or tissues, an image is required from which the morphology of the whole cell can be extracted. This is usually achieved by differential interference contrast (DIC) microscopy. DIC images have a pseudo-3D appearance, easily interpreted by the human brain. In the age of high-throughput and high-content screening, manual image processing is not an option. Conventional algorithms for image processing often use threshold-based criteria to identify objects of interest. These algorithms fail for DIC images, whose intensities range from dim to bright with an intermediate level equal to the background, producing no clear object boundary. In this article we compare different reconstruction methods for DIC images up to 100 MB in size and implement a new iterative reconstruction method, based on the Hilbert transform, that enables identification of cell boundaries with standard threshold algorithms.
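The core of the Hilbert-transform idea is that DIC contrast behaves like a derivative along the shear direction, so an analytic-signal envelope along that axis turns the bipolar image into a unipolar one that ordinary thresholds can handle. A single, non-iterative pass as a rough sketch (the paper's iterative method is not reproduced; the shear axis is assumed):

```python
import numpy as np
from scipy.signal import hilbert

def reconstruct_dic(dic_image, shear_axis=1):
    img = dic_image.astype(float)
    img -= img.mean()                        # background -> zero level
    analytic = hilbert(img, axis=shear_axis) # analytic signal per line
    return np.abs(analytic)                  # envelope: bright on objects
```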

13.
Because of its high spatial resolution, energy-filtering transmission electron microscopy (EFTEM) has become widely used for the analysis of the chemical composition of nanostructures. To obtain the best spatial resolution, precise correction of instrumental influences and optimization of the data acquisition procedure are very important. In this publication, we discuss a modified image acquisition procedure that optimizes the acquisition of EFTEM images, especially for long exposure times and measurements affected by large spatial drift. To alleviate the blurring of the image caused by spatial drift, we propose to take several EFTEM images with a shorter exposure time (sub-images) and merge these sub-images afterwards. To correct for the drift between these sub-images, elastically filtered images are acquired between two subsequent sub-images. These elastically filtered images are highly suitable for spatial drift correction based on the cross-correlation method. Using the drift information between two elastically filtered images makes it possible to merge the drift-corrected sub-images automatically and with high accuracy, resulting in sharper edges and improved signal intensity in the final EFTEM image. Artefacts caused by prominent noise peaks in the dark reference image were suppressed by calculating the dark reference image from three images. Furthermore, using the information given by the elastically filtered images, it is possible to drift-correct a set of EFTEM images already during acquisition. This simplifies the post-processing for elemental mapping and offers the possibility of active drift correction using the image shift function of the microscope, leading to an increased field of view.

14.
Background: The most commonly used molecular cytogenetic technique is fluorescence in situ hybridization (FISH). It has been widely applied in many areas of diagnosis and research, including pre-natal and post-natal screening of chromosomal aberrations, pre-implantation genetic diagnosis, cancer cytogenetics, gene mapping, molecular pathology and developmental molecular biology. The analysis of FISH images consists of detecting fluorescent dots, after which the number of dots per cell can be counted or their relative positions measured. A major impediment in the analysis of FISH specimens is signal (dot) quality, which is influenced by the hybridization efficiency and/or the sensitivity of the camera that records the images. Method: In this paper, we present an approach to improve the efficiency of detecting fluorescent signals in FISH images by recovering the radiance map of the camera. This allows us to generate a high-dynamic-range image in which an extended range of the sample radiance captured by the camera can be visualized at distinct intensity values. The resulting higher-order numeric complexity of the transformed image is adjusted (or simplified) by examining the intensity distribution in each of the three colour channels (red, green and blue) and remapping the intensity values to generate a high-contrast image with a lower-order (compressed) dynamic range. The remapping is based on a criterion that optimizes the detection of the hybridized signals, attenuating saturated intensity values while amplifying low-intensity signals. Results: A simple dot-counting algorithm is used to automatically process 2000 FISH images. The images are taken of lymphocytes from cultured blood specimens for cytogenetic testing and are manually analyzed by an expert to obtain ground truth for the dot counts. A quantitative analysis is performed by comparing the results of automated dot detection on images before and after enhancement with the developed algorithms. In addition, common errors in dot counting due to split dots, dust, poor segmentation and overlapping signals are analyzed, and the robustness of the developed approach against these errors is evaluated. Dot-detection efficiency is increased by an average of 9% across all colour channels while reducing missed and false dot counts. Conclusions: Our method and results demonstrate that dot-counting specificity and sensitivity can be improved by pre-processing and enhancing the image using the radiance curve of the camera and generating a high-contrast, remapped high-dynamic-range image prior to applying any dot-counting algorithm.
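With OpenCV, the radiance-map recovery and dynamic-range compression stages might be assembled as follows (the paper's channel-wise remapping criterion is approximated here by a generic tone-mapping operator; parameters are illustrative assumptions):

```python
import cv2
import numpy as np

def hdr_remap(exposures, exposure_times):
    # exposures: list of 8-bit images of one field at bracketed times.
    times = np.asarray(exposure_times, dtype=np.float32)
    # Recover the camera response curve, then merge into a radiance map.
    response = cv2.createCalibrateDebevec().process(exposures, times)
    hdr = cv2.createMergeDebevec().process(exposures, times, response)
    # Compress the radiance map so dim dots are amplified and
    # saturated dots attenuated before dot counting.
    ldr = cv2.createTonemapDrago(gamma=1.0).process(hdr)
    return np.clip(ldr * 255, 0, 255).astype(np.uint8)
```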

15.
Correction of vignetting in images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for retrospective correction of vignetting in medical microscopy images and compare them with a prospective correction method. One digital image from each of four different tissues was used, and a vignetting effect was applied to each of these images. The resulting vignetted image was replicated four times, and in each replica a different vignetting correction method was applied using the Fiji and GIMP software tools. Comparing each corrected image with the original, the prospective method gave the highest peak signal-to-noise ratio in all tissues, and the morphological filtering method provided the highest peak signal-to-noise ratio amongst the retrospective methods. The prospective method is therefore suggested as the method of choice for correction of vignetting; if it is not applicable, morphological filtering may be suggested as the retrospective alternative.
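A morphological-filtering correction of this kind, together with the PSNR evaluation, can be sketched in a few lines (the structuring-element radius is an assumption; the study's exact filter settings are not given in the abstract):

```python
import numpy as np
from skimage.morphology import opening, disk
from skimage.metrics import peak_signal_noise_ratio

def correct_vignetting(img, radius=75):
    # A large opening suppresses tissue detail, leaving an estimate of
    # the smooth vignetting field, which is then divided out.
    field = opening(img, disk(radius)).astype(float)
    field /= field.mean()
    return np.clip(img / np.maximum(field, 1e-6), 0, 255)

# Evaluation against the original, vignetting-free image:
# psnr = peak_signal_noise_ratio(original, corrected, data_range=255)
```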

16.
In this paper, a probabilistic technique for compensation of intensity loss in confocal microscopy images is presented. For single-colour-labelled specimens, confocal microscopy images are modelled as a mixture of two Gaussian probability distribution functions, one representing the background and the other the foreground. Images are segmented into foreground and background by applying the Expectation-Maximization algorithm to the mixture. Final intensity compensation is carried out by scaling and shifting the original intensities using the parameters estimated for the foreground. Since the foreground is separated before the compensation parameters are calculated, the method is effective even when the image structure changes from frame to frame. As no intensity decay function is used, the complexity associated with estimating its parameters is eliminated. In addition, images can be compensated out of order, as only information from the reference image is required to compensate any image. These properties make our method an ideal tool for intensity compensation of confocal microscopy images that suffer intensity loss due to absorption/scattering of light as well as photobleaching, and whose structure changes from one optical/temporal section to the next owing to changes in specimen depth or to a live specimen. The proposed method was tested on a number of confocal microscopy image stacks, and results are presented to demonstrate its effectiveness.
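A compact rendering of the mixture-based compensation, using scikit-learn's EM implementation (the scale-and-shift rule is paraphrased from the abstract; the reference statistics are assumed to come from the designated reference image):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def compensate_intensity(image, ref_mean, ref_std):
    # Fit a two-component 1-D Gaussian mixture by EM: background vs.
    # foreground intensities.
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(image.reshape(-1, 1).astype(float))
    fg = int(np.argmax(gmm.means_))                 # brighter component
    mu = gmm.means_[fg, 0]
    sd = float(np.sqrt(gmm.covariances_[fg].ravel()[0]))
    # Scale and shift so the foreground matches the reference image.
    return (image - mu) * (ref_std / sd) + ref_mean
```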

17.
This paper addresses the problem of intensity correction of fluorescent confocal laser scanning microscope images. Confocal laser scanning microscope images are frequently used in medicine to obtain 3D information about specimen structures by imaging a set of 2D cross-sections and performing 3D volume reconstruction afterwards. However, the acquired 2D images demonstrate significant intensity heterogeneity, for example due to photo-bleaching and fluorescence attenuation with depth. We developed an intensity heterogeneity correction technique that (a) adjusts the intensity heterogeneity of 2D images, (b) preserves fine structural details and (c) enhances image contrast, by performing spatially adaptive mean-weight filtering. Our solution is obtained by formulating an optimization problem, followed by filter design and automated selection of the filtering parameters. The proposed filtering method is experimentally compared with several existing techniques using four quality metrics: contrast, intensity heterogeneity (entropy) in a low-frequency domain, intensity distortion in a high-frequency domain, and saturation. Based on our experiments and these four quality metrics, the developed mean-weight filtering outperforms the other intensity correction methods by at least a factor of 1.5 when applied to fluorescent confocal laser scanning microscope images.

18.
A fluorescence image calibration method is presented based on the use of standardized uniformly fluorescing reference layers. It is demonstrated to be effective for the correction of non-uniform imaging characteristics across the image (shading correction) as well as for relating fluorescence intensities between images taken with different microscopes or imaging conditions. The variation of the illumination intensity over the image can be determined from the uniform bleaching characteristics of the layers; this permits correction for that variation and makes bleach-rate-related imaging practical. The significant potential of these layers for calibration in quantitative fluorescence microscopy is illustrated with a series of applications. As the illumination and imaging properties of a microscope can be evaluated separately, the methods presented are also valuable for general microscope testing and characterization.

19.
To address the distortion of semantic image content and the blurring of foreground-background boundaries that arise during convolutional neural style transfer, we propose a convolutional neural artistic stylization algorithm that suppresses image distortion. First, the VGG-19 network model extracts feature maps from the input content and style images and performs content reconstruction and style reconstruction. Then, the transformation from the input content and style images to the output image is constrained to be locally affine in colour space: a Laplacian matting matrix is built on the RGB channels of the input image, and for each output patch an affine transformation maps the input image's RGB values to the corresponding output location, thereby constraining the semantic content and controlling the spatial layout. Finally, the synthesized image is superimposed on a white-noise image and iteratively updated with the back-propagation algorithm until the loss function is minimized, completing the stylization. Experimental results show that images generated by this method have distinct foreground-background edges and clear textures, suppress semantic content distortion, achieve spatial constraints and colour mapping of the transferred image, and are visually satisfactory.

20.
We present a new method for segmenting phase contrast images of NIH 3T3 fibroblast cells that is accurate even when cells are physically in contact with each other. Segmentation when cells are in contact poses a challenge to the accurate automation of cell counting, tracking and lineage modelling in cell biology. The segmentation method presented in this paper consists of (1) background reconstruction to obtain noise-free foreground pixels and (2) incorporation of biological insight about dividing and nondividing cells into the segmentation process to achieve reliable separation of foreground pixels, defined as pixels associated with individual cells. The segmentation results for a time-lapse image stack were compared against 238 manually segmented images (8219 cells) provided by experts, which we consider as reference data. We chose two metrics to measure the accuracy of segmentation: the Adjusted Rand Index, which compares pixel-level similarity between masks resulting from manual and automated segmentation, and the Number of Cells per Field (NCF), which compares the number of cells identified in the field by manual versus automated analysis. Our results show that, compared with manual segmentation, the automated segmentation has an average Adjusted Rand Index of 0.96 (1 being a perfect match) with a standard deviation of 0.03, and an average difference in the number of cells per field of 5.39% with a standard deviation of 4.6%.
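The two reported metrics are straightforward to compute once automated and manual label masks are available (a sketch; label 0 is assumed to be background):

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

def segmentation_metrics(auto_labels, manual_labels):
    # Pixel-level Adjusted Rand Index between the two labelings.
    ari = adjusted_rand_score(manual_labels.ravel(), auto_labels.ravel())
    # Relative difference in Number of Cells per Field (NCF), percent.
    n_auto = len(np.unique(auto_labels)) - 1
    n_manual = len(np.unique(manual_labels)) - 1
    ncf_diff = abs(n_auto - n_manual) / n_manual * 100.0
    return ari, ncf_diff
```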
