Similar Documents
20 similar documents were retrieved.
1.
The change in morphology, diameter, branching pattern or tortuosity of retinal blood vessels is an important indicator of various clinical disorders of the eye and the body. This paper reports an automated method for segmentation of blood vessels in retinal images. A unique combination of techniques for vessel centerline detection and morphological bit plane slicing is presented to extract the blood vessel tree from retinal images. The centerlines are extracted by applying the first-order derivative of a Gaussian filter in four orientations and then evaluating the derivative signs and average derivative values. Mathematical morphology has emerged as a proficient technique for quantifying the blood vessels in the retina. The shape and orientation map of the blood vessels is obtained by applying a multidirectional morphological top-hat operator with a linear structuring element, followed by bit plane slicing of the vessel-enhanced grayscale image. The centerlines are combined with these maps to obtain the segmented vessel tree. The methodology is tested on three publicly available databases: DRIVE, STARE and MESSIDOR. The results demonstrate that the performance of the proposed algorithm is comparable with state-of-the-art techniques in terms of accuracy, sensitivity and specificity.
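A minimal Python sketch of the two building blocks this abstract names — a first-order derivative-of-Gaussian filter applied in four orientations and a multidirectional morphological top-hat with a linear structuring element — is given below. It is an illustration under assumed parameter values (sigma, element length, angle set), not the authors' implementation, and it omits the derivative-sign evaluation and bit-plane-slicing steps.

```python
import numpy as np
import cv2
from scipy.ndimage import gaussian_filter, rotate

def dog_responses(green_inverted, sigma=1.5):
    """First-order derivative-of-Gaussian responses in four orientations
    (0, 45, 90, 135 degrees), computed by rotating the image instead of the kernel."""
    responses = []
    for angle in (0, 45, 90, 135):
        rotated = rotate(green_inverted.astype(float), angle, reshape=False, mode='nearest')
        d = gaussian_filter(rotated, sigma=sigma, order=(1, 0))   # derivative along rows
        responses.append(rotate(d, -angle, reshape=False, mode='nearest'))
    return responses

def multidirectional_tophat(green_inverted, length=15, n_angles=12):
    """Sum of morphological top-hat responses over rotated linear structuring elements;
    the input is assumed to be the inverted green channel (vessels appear bright)."""
    enhanced = np.zeros(green_inverted.shape, dtype=float)
    base = np.zeros((length, length), np.uint8)
    base[length // 2, :] = 1                                      # horizontal line element
    for angle in np.linspace(0, 180, n_angles, endpoint=False):
        se = rotate(base, angle, reshape=False, order=0).astype(np.uint8)
        se[length // 2, length // 2] = 1                          # keep the anchor pixel set
        enhanced += cv2.morphologyEx(green_inverted, cv2.MORPH_TOPHAT, se).astype(float)
    return enhanced
```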

2.
Pattern Analysis and Applications - Segmentation of retinal vessels is challenging, yet detecting the blood vessels is essential for diagnosing diseases such as hypertension, diabetes, and glaucoma. Retinal vessel...

3.
To address the low accuracy of existing retinal vessel extraction methods on small vessels, a retinal vessel segmentation method is proposed that combines a multi-scale line detector with local and global enhancement. The multi-scale line detector is split into a small-scale part and a large-scale part; the small scales are applied to a locally enhanced image and the large scales to a globally enhanced image, yielding response functions at different scales; the responses at the different scales are then fused to obtain the final retinal vessel structure. Experiments on the STARE and DRIVE databases show average accuracies of 96.62% and 96.45% and average true positive rates of 75.52% and 83.07%, respectively; the segmentation accuracy is high and good vessel segmentation results are obtained.
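As a reading aid, the sketch below implements the basic line-detector response such methods build on: the mean intensity along the best-oriented line of length L minus the mean of the surrounding W×W window, computed on the inverted green channel. The scales, window size and angle step are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import cv2

def line_detector_response(green_inverted, L, W=15, n_angles=12):
    """R = (mean along the best-oriented line of length L) - (mean of the W x W window)."""
    img = green_inverted.astype(np.float32)
    window_mean = cv2.blur(img, (W, W))
    best_line_mean = np.full(img.shape, -np.inf, dtype=np.float32)
    for angle in np.linspace(0, 180, n_angles, endpoint=False):
        kernel = np.zeros((L, L), np.float32)
        kernel[L // 2, :] = 1.0                                    # horizontal line of length L
        M = cv2.getRotationMatrix2D(((L - 1) / 2.0, (L - 1) / 2.0), angle, 1.0)
        kernel = cv2.warpAffine(kernel, M, (L, L))
        kernel /= kernel.sum()
        best_line_mean = np.maximum(best_line_mean, cv2.filter2D(img, -1, kernel))
    return best_line_mean - window_mean

# Multi-scale fusion as the abstract describes: small L on a locally enhanced image
# (e.g. CLAHE), large L on a globally enhanced image, then combine the responses.
```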

4.
薛寺中, 谈锐, 陈秀宏. 《计算机应用》 (Journal of Computer Applications), 2012, 32(8): 2235-2244
To capture the nonlinear characteristics of data effectively, a new nonlinear dimensionality-reduction algorithm, Kernel Semi-Supervised Locality Preserving Projection (KSSLPP), is proposed. The method redefines between-class and within-class similarity using the label information of the labeled samples and the structure of all training samples, then maps the original data into a high-dimensional kernel space, where the between-class separation is maximized and the within-class separation is minimized. The method preserves the local and global structure of the data, as well as its label information, in the kernel space. Comparative experiments on the Olivetti face database and UCI datasets verify the effectiveness of the algorithm.
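The sketch below illustrates the kind of kernelized graph-embedding step such a method involves: an RBF kernel matrix, semi-supervised within-/between-class affinities built from labels plus neighborhood structure, and a generalized eigenproblem whose leading eigenvectors give the projection. The affinity definitions and regularization are simplified assumptions, not the published KSSLPP formulation.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def ksslpp_sketch(X, y, n_components=10, gamma=0.1, k=5):
    """y uses -1 for unlabelled samples; returns kernel-space projection coefficients."""
    n = X.shape[0]
    D2 = cdist(X, X, 'sqeuclidean')
    K = np.exp(-gamma * D2)                                 # RBF kernel matrix
    knn = np.argsort(D2, axis=1)[:, 1:k + 1]                # local (unsupervised) structure
    Ww = np.zeros((n, n)); Wb = np.zeros((n, n))
    for i in range(n):
        Ww[i, knn[i]] = 1.0                                 # neighbours count as "within"
        for j in range(n):
            if y[i] >= 0 and y[j] >= 0:                     # both samples labelled
                (Ww if y[i] == y[j] else Wb)[i, j] = 1.0
    Ww = np.maximum(Ww, Ww.T); Wb = np.maximum(Wb, Wb.T)
    Lw = np.diag(Ww.sum(1)) - Ww                            # within-class graph Laplacian
    Lb = np.diag(Wb.sum(1)) - Wb                            # between-class graph Laplacian
    A = K @ Lb @ K                                          # between-class scatter in kernel space
    B = K @ Lw @ K + 1e-6 * np.eye(n)                       # within-class scatter, regularised
    _, vecs = eigh(A, B)                                    # generalised eigenproblem
    return vecs[:, ::-1][:, :n_components]                  # directions maximising the ratio
```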

5.
Detection of blood vessels in retinal fundus images is the preliminary step in diagnosing several retinal diseases. Several methods exist to automatically detect blood vessels from retinal images with the aid of different computational techniques; however, they all require lengthy processing time. The method proposed here acquires binary vessels from an RGB retinal fundus image in almost real time. Initially, the phase congruency of a retinal image is generated, which is a soft classification of blood vessels. Phase congruency is a dimensionless quantity that is invariant to changes in image brightness or contrast; hence, it provides an absolute measure of the significance of feature points. This experiment acquires the phase congruency of an image using Log-Gabor wavelets. To acquire a binary segmentation, thresholds are applied to the phase congruency image. The best threshold value is determined from the area under the receiver operating characteristic (ROC) curve. The proposed method is able to detect blood vessels in a retinal fundus image within 10 s on a PC with (accuracy, area under ROC curve) = (0.91, 0.92) and (0.92, 0.94) for the STARE and DRIVE databases, respectively.
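The following sketch shows the two ingredients in isolation: a 2-D log-Gabor filter constructed in the frequency domain and a threshold picked from the ROC curve against a ground-truth mask (Youden's J is used here as a simple selection rule). It is not a full phase-congruency computation, and the filter parameters are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve

def log_gabor_magnitude(img, wavelength=8.0, sigma_on_f=0.55):
    """Magnitude response of a radial 2-D log-Gabor filter applied in the frequency domain."""
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                                   # avoid log(0) at the DC term
    f0 = 1.0 / wavelength
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    lg[0, 0] = 0.0                                       # zero DC response
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * lg))

def roc_threshold(score_img, gt_mask):
    """Pick the threshold maximising TPR - FPR (Youden's J) against a ground-truth mask."""
    fpr, tpr, thr = roc_curve(gt_mask.ravel().astype(int), score_img.ravel())
    return thr[np.argmax(tpr - fpr)]
```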

6.
Segmentation of vessels from mammograms using a deformable model
Vessel extraction is a fundamental step in certain medical imaging applications such as angiograms. Different methods are available to segment vessels in medical images, but they are not fully automated (initial vessel points are required) or they are very sensitive to noise in the image. Unfortunately, the presence of noise, the variability of the background, and the low and varying contrast of vessels in many imaging modalities such as mammograms make it quite difficult to obtain reliable fully automatic or even semi-automatic vessel detection procedures. In this paper a fully automatic algorithm for the extraction of vessels in noisy medical images is presented and validated for mammograms. The main issue in this research is the negative influence of noise on segmentation algorithms. A two-stage procedure was designed for noise reduction. First, a global phase including edge detection and thresholding is applied. Then, the local phase performs vessel segmentation using a deformable model with a new energy term that reduces the noise still remaining in the image after the first stage. Experimental results on mammograms show that this method has an excellent performance level in terms of accuracy, sensitivity, and specificity. The computation time also makes it suitable for real-time applications within a clinical environment.
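A rough two-stage sketch along these lines is shown below; Canny edge detection plus Otsu thresholding stand in for the global phase, and a generic scikit-image snake stands in for the deformable model — the paper's custom noise-reducing energy term is not reproduced.

```python
import numpy as np
import cv2
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def two_stage_sketch(img_u8):
    # Stage 1 (global): edge detection plus thresholding to suppress background noise.
    _, coarse = cv2.threshold(img_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(img_u8, 50, 150)
    candidate_mask = cv2.bitwise_and(coarse, edges)

    # Stage 2 (local): refine a candidate region with a deformable contour;
    # a circle around the image centre is used as an illustrative initialisation.
    s = np.linspace(0, 2 * np.pi, 200)
    r0, c0 = img_u8.shape[0] / 2.0, img_u8.shape[1] / 2.0
    init = np.column_stack([r0 + 60 * np.sin(s), c0 + 60 * np.cos(s)])   # (row, col) points
    snake = active_contour(gaussian(img_u8, 3), init, alpha=0.015, beta=10, gamma=0.001)
    return candidate_mask, snake
```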

7.
This paper presents a semi-automatic tool, called IGAnn (Interactive image ANNotation), that assists users in annotating images with textual labels. IGAnn performs an interactive retrieval-like procedure: the system presents the user with images that have higher confidence values, and the user then determines which images are actually relevant or irrelevant for a specified label. By collecting the relevant and irrelevant images across iterations, a hierarchical classifier associated with the specified label is built using our proposed semi-supervised approach to compute confidence values of unlabeled images. This paper describes the system interface of IGAnn and also presents quantitative experiments on our proposed approach.

8.

Diseases of the eye require manual segmentation and examination of the optic disc by ophthalmologists. Although image segmentation using deep learning techniques achieves remarkable results, it relies on large-scale labeled datasets, which are difficult to acquire in the field of medical imaging. Hence, this article proposes a novel deep learning model that automatically segments the optic disc in retinal fundus images using the concepts of semi-supervised learning and transfer learning. Initially, a convolutional autoencoder (CAE) is trained to automatically learn features from a large number of unlabeled fundus images available from Kaggle's diabetic retinopathy (DR) dataset. The autoencoder (AE) learns features from the unlabeled images by reconstructing the input images and becomes a pre-trained network (model). The pre-trained autoencoder network is then converted into a segmentation network. Later, using transfer learning, the segmentation network is trained with retinal fundus images along with their corresponding optic disc ground truth images from the DRISHTI GS1 and RIM-ONE datasets. The trained segmentation network is then tested on retinal fundus images from the test sets of the DRISHTI GS1 and RIM-ONE datasets. The experimental results show that the proposed method performs on par with state-of-the-art methods, achieving dice score coefficients of 0.967 and 0.902 on the test sets of the DRISHTI GS1 and RIM-ONE datasets, respectively. The proposed method also shows that transfer learning and semi-supervised learning overcome the barrier imposed by the need for a large labeled dataset. The proposed segmentation model can be used in automatic retinal image processing systems for diagnosing diseases of the eye.
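A compact PyTorch sketch of the pretrain-then-transfer idea follows: a small convolutional autoencoder is trained to reconstruct unlabeled fundus images, and its encoder is then reused with a fresh decoder head as a segmentation network. The layer widths, depths and losses are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # shared feature extractor
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
recon_head = nn.Sequential(                   # reconstruction head used during pretraining
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)
autoencoder = nn.Sequential(encoder, recon_head)

def pretrain_cae(loader, epochs=5):
    """Unsupervised pretraining: reconstruct unlabelled fundus images."""
    opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x in loader:                      # x: batch of unlabelled images, (B, 3, H, W)
            opt.zero_grad()
            mse(autoencoder(x), x).backward()
            opt.step()

# Transfer: reuse the pretrained encoder and attach a one-channel segmentation head,
# then fine-tune on labelled optic-disc masks (e.g. with BCE or Dice loss).
seg_head = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
)
segmentation_net = nn.Sequential(encoder, seg_head)
```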


9.
The problem of determining the directions of blood vessels in the optic disk is considered. The proposed method for estimating the vessel directions is based on analyzing local minima of the gray-scale profile of the eye-fundus image. Results of tests on real images are presented. Mikhail Anan'in. Born 1984. Graduated from the Samara State Aerospace University in 2007 and is currently a post-graduate student at the same university. From 2006 to the present, a junior researcher at the Image Processing Systems Institute, Russian Academy of Sciences. Scientific interests: image processing, image reconstruction, pattern recognition, wavelet analysis, and differential geometry. Author of more than ten papers. Nataliya Il'yasova. Born 1966. Graduated from the Samara State Aerospace University in 1991, where in 1997 she received her candidate's degree (Eng.). Currently a senior researcher at the Image Processing Systems Institute, Russian Academy of Sciences, and a senior lecturer at Samara State Aerospace University. Scientific interests: digital image processing and recognition, pattern recognition, information systems in biomedical applications, computer-aided systems for monitoring eye fundus microvascular morphology, and analysis of cardiac coronary vessels. Author of more than 60 papers in the field of image processing and pattern recognition. Aleksandr Kupriyanov. Born 1978. Graduated from the Samara State Aerospace University and received his candidate's degree (Eng.) from the same university. Currently a researcher at the Image Processing Systems Institute, Russian Academy of Sciences. Scientific interests: digital image processing and recognition, pattern recognition, information systems in biomedical applications, computer-aided systems for monitoring eye fundus microvascular morphology, analysis of cardiac coronary vessels, evaluation of diagnostic features, and retinal image analysis. Author of more than 30 papers in the field of image processing and pattern recognition.

10.
Literature on supervised machine-learning (ML) approaches for classifying text-based safety reports in the construction sector has been growing. Recent studies have emphasized the need to build ML approaches that balance high classification accuracy with performance on management criteria, such as resource intensiveness. However, despite being highly accurate, the extensively studied supervised ML approaches may not perform well on management criteria, as many factors contribute to their resource intensiveness. Alternatively, the potential for semi-supervised ML approaches to achieve balanced performance has rarely been explored in the construction safety literature. The current study contributes to the scarce knowledge on semi-supervised ML approaches by demonstrating the applicability of a state-of-the-art semi-supervised learning approach, i.e., Yet Another Keyword Extractor (YAKE) integrated with Guided Latent Dirichlet Allocation (GLDA), for construction safety report classification. Construction-safety-specific knowledge is extracted as keywords through YAKE, relying on accessible literature with minimal manual intervention. Keywords from YAKE are then seeded into the GLDA model for automatic classification of safety reports without requiring a large quantity of pre-labeled data. The YAKE-GLDA classification performance (F1 score of 0.66) is superior to existing unsupervised methods on the benchmark data containing injury narratives from the Occupational Safety and Health Administration (OSHA). The YAKE-GLDA approach is also applied to near-miss safety reports from a construction site. The study demonstrates a high degree of generality of the YAKE-GLDA approach through a moderately high F1 score of 0.86 for a few categories in the near-miss data. The current research demonstrates that, unlike the existing supervised approaches, the semi-supervised YAKE-GLDA approach can consistently achieve reasonably good classification performance across various construction-specific safety datasets while remaining resource-efficient. Results from an objective comparative and sensitivity analysis contribute much-needed insights into the functioning and applicability of YAKE-GLDA. The results of the current study will help construction organizations implement and optimize an efficient ML-based knowledge-mining strategy for domains beyond safety and across sites where the availability of a pre-labeled dataset is a significant limitation.

11.
Analysis of retinal blood vessels is one of the most efficient clinical methods for diagnosing diseases such as diabetes, hypertension and atherosclerosis. In this paper, an efficient algorithm is proposed that improves segmentation by employing skeletonization and a threshold selection based on fuzzy entropy. In the first step, the blurring caused by hand shake during ophthalmoscopy and color fundus photography is removed by a designed Wiener filter. In the second step, a basic extraction of the blood vessels from the retina is obtained with adaptive filtering. In the last step of the proposed method, an optimal threshold for discriminating the main vessels of the retina from the rest of the tissue is obtained by employing fuzzy entropy. Finally, an assessment procedure based on four different measurement techniques over the retinal fundus colors is established and applied to DRIVE and STARE database images. According to the comparative evaluation results, the proposed extraction of retinal blood vessels enables specialists to determine the progression stage of potential diseases more accurately and in real time.
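The sketch below illustrates one standard fuzzy-entropy threshold selection (a Huang-style membership based on distance to the class mean); it is an assumption about the kind of criterion meant here, not the paper's exact formulation, and the Wiener and adaptive filtering stages are omitted.

```python
import numpy as np

def fuzzy_entropy_threshold(gray_u8):
    """Return the grey level minimising a Huang-style fuzzy entropy criterion."""
    hist = np.bincount(gray_u8.ravel(), minlength=256).astype(float)
    levels = np.arange(256, dtype=float)
    C = 255.0                                         # normalisation constant
    best_t, best_e = 0, np.inf
    for t in range(1, 255):
        w0, w1 = hist[:t + 1].sum(), hist[t + 1:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:t + 1] * levels[:t + 1]).sum() / w0      # background class mean
        mu1 = (hist[t + 1:] * levels[t + 1:]).sum() / w1      # vessel class mean
        dist = np.where(levels <= t, np.abs(levels - mu0), np.abs(levels - mu1))
        mem = 1.0 / (1.0 + dist / C)                          # membership in (0.5, 1]
        s = -(mem * np.log(mem) + (1 - mem) * np.log(1 - mem + 1e-12))
        e = (hist * s).sum() / hist.sum()                     # histogram-weighted fuzzy entropy
        if e < best_e:
            best_t, best_e = t, e
    return best_t
```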

12.
To address eye rotation during retinal fundus image acquisition, a rotation- and translation-invariant retinal vessel pattern recognition method based on the nearest-neighbor structure of vessel nodes is proposed. The method exploits the stability of each node's surrounding structure to extract node structural features and to decide whether the corresponding structures in two images match. Experimental results demonstrate the effectiveness and reliability of the recognition algorithm; it requires no orientation alignment, offers good flexibility and practicality, and achieves a correct recognition rate of 98.57%.

13.
Phonemes are the smallest distinguishable units of a speech signal. Segmenting a phoneme from its word is a fundamental and crucial part of speech processing, because an initial phoneme is used to activate words starting with that phoneme. This work describes an artificial neural network-based algorithm developed for segmentation and classification of consonant phonemes of the Assamese language. The algorithm uses weight vectors, obtained by training a self-organising map (SOM) with different numbers of iterations, as segments of the different phonemes constituting the word whose linear prediction coefficient samples are used for training. The algorithm shows a marked rise in success rate over conventional discrete-wavelet-based speech segmentation. A two-class probabilistic neural network, trained with clean Assamese phonemes, is used to identify the phoneme segments. Classification of the phoneme segments follows the consonant phoneme structure of the Assamese language, which consists of six phoneme families. Experimental results establish the superiority of the SOM-based segmentation over the discrete wavelet transform-based approach.

14.
An efficient two-level algorithm is developed for parameter estimation using the multiple projection approach. The optimal minimum-variance estimate is achieved using a fixed number of iterations. Both recursive and non-recursive versions of the algorithm are presented. Simulation results for two examples indicate that the new two-level algorithm provides accurate estimates while requiring less computational effort.

15.
Semi-supervised clustering is gaining importance because neither supervised nor unsupervised learning methods on their own provide satisfactory results. Existing semi-supervised clustering techniques are mostly based on pair-wise constraints, which can be misleading. These semi-supervised clustering algorithms also fail to address the problem of attributes having different weights: in most real-life applications, attributes do not have equal importance, and hence the same weight cannot be assigned to each attribute. In this paper, a novel distance-based semi-supervised clustering algorithm is proposed that uses a functional link neural network (FLNN) to learn attribute weights from a small amount of labeled data, for further use in the parametric Minkowski model for clustering. In the FLNN, nonlinearity is captured by enhancing the input using orthonormal basis functions. The effectiveness of the approach is illustrated on a number of datasets taken from the UCI machine learning repository. Comparative performance evaluation demonstrates that the proposed approach outperforms existing semi-supervised clustering algorithms. The proposed approach has also been successfully used to cluster crime locations and to find crime hot spots in India on data provided by the National Crime Records Bureau (NCRB).
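The weighted (parametric) Minkowski distance at the core of the clustering step can be written in a few lines; the attribute weights w are assumed to come from the FLNN trained on the labeled subset, which is not reproduced here.

```python
import numpy as np

def weighted_minkowski(x, y, w, p=2.0):
    """d(x, y) = ( sum_j w_j * |x_j - y_j|**p ) ** (1/p), with learned attribute weights w."""
    return np.power(np.sum(w * np.abs(x - y) ** p), 1.0 / p)

def assign_to_clusters(X, centers, w, p=2.0):
    """Generic assignment step of a k-means-style loop using the weighted distance."""
    d = np.array([[weighted_minkowski(x, c, w, p) for c in centers] for x in X])
    return d.argmin(axis=1)
```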

16.
Objective: Canonical correlation analysis (CCA) is a classical multi-view learning method. To improve the discriminative power of the projection directions, existing CCA methods usually introduce sample label information. However, obtaining label information requires substantial human and material resources. A semi-supervised CCA algorithm that jointly performs label prediction and discriminative projection learning is therefore proposed. Method: Label prediction is fused with model construction; specifically, label prediction is embedded into the CCA framework, the label matrix learned by the joint framework is used to update the projection directions, and the learned projection directions in turn update the label matrix. Label prediction and projection learning depend on each other and are updated alternately, so the predicted labels continually approach the true labels, which helps learn optimal projection directions. Results: Experiments were conducted on four face datasets: AR, Extended Yale B, Multi-PIE and ORL. With a feature dimension of 20, recognition rates of 87%, 55%, 83% and 85% were obtained on AR, Extended Yale B, Multi-PIE and ORL, respectively. When 2 (3, 4, 5) face images per person in the training set are used as supervised samples, the proposed method achieves higher recognition rates than the other methods on all four datasets. With 5 supervised face images per person, recognition rates of 94.67%, 68%, 83% and 85% were obtained on AR, Extended Yale B, Multi-PIE and ORL, respectively. The experimental results show that, when little label information is available in the training samples and when the reduced feature dimension is low, the joint learning model preserves the most useful information in the reduced data and yields good recognition results. Conclusion: The proposed joint learning method improves the discriminative power of the learned projection directions, handles the case of few labeled and many unlabeled samples effectively, and overcomes the drawbacks of two-step learning strategies.

17.
The emergence of the MapReduce (MR) framework for scaling data mining and machine learning algorithms provides for Volume, while handling of Variety and Velocity needs to be skilfully crafted into algorithms. So far, scalable clustering algorithms have focused solely on Volume, taking advantage of the MR framework. In this paper we present a MapReduce algorithm—data aware scalable clustering (DASC)—which is capable of handling the 3 Vs of big data by virtue of being (i) single-scan and distributed, to handle Volume, (ii) incremental, to cope with Velocity, and (iii) versatile in handling numeric and categorical data, to accommodate Variety. The DASC algorithm incrementally processes an infinitely growing data set stored on a distributed file system and delivers a quality clustering scheme while ensuring recency of patterns. An up-to-date synopsis is preserved by the algorithm for the data seen so far. Each new data increment is processed and merged with the synopsis. Since the synopsis itself may grow very large, the algorithm stores it as a file, which makes DASC truly scalable. Exclusive clusters are obtained on demand by applying a connected component analysis (CCA) algorithm over the synopsis. CCA presents a subtle roadblock to effective parallelism during clustering. This problem is overcome by accomplishing the task in two stages. In the first stage, hyperclusters are identified based on prevailing data characteristics. The second stage utilizes this knowledge to determine the degree of parallelism, thereby making DASC data-aware. Hyperclusters are distributed over the available compute nodes for discovering embedded clusters in parallel. The staged approach to clustering yields the dual advantage of improved parallelism and the desired complexity in the \(\mathcal{MRC}^0\) class. The DASC algorithm is empirically compared with the incremental K-means and Scalable K-means++ algorithms. Experimentation on real-world and synthetic data with approximately 1.2 billion data points demonstrates the effectiveness of DASC. Empirical observations of DASC execution are in consonance with the theoretical analysis with respect to stability in resource utilization and execution time.
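The connected-component analysis (CCA) step that turns the synopsis into exclusive clusters can be sketched as below, assuming — purely for illustration — that the synopsis reduces to a sparse adjacency between micro-clusters.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def synopsis_to_clusters(edges, n_nodes):
    """edges: iterable of (i, j) pairs linking micro-clusters recorded in the synopsis."""
    rows, cols = zip(*edges)
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n_nodes, n_nodes))
    n_clusters, labels = connected_components(adj, directed=False)
    return n_clusters, labels           # labels[i] = exclusive cluster of micro-cluster i
```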

18.
To achieve accurate segmentation of abnormal nuclei in cervical liquid-based cytology images, a new adaptive local nucleus segmentation method is proposed. In the adaptive stage, a fast adaptive thresholding algorithm that uses grayscale and texture information roughly detects the nucleus regions; in the local stage, for each connected region obtained from the coarse segmentation, a graph-cut method based on a Poisson probability distribution, using both boundary and region information, refines the segmentation within the region's local neighborhood. Applied to hematoxylin-and-eosin-stained cervical liquid-based cytology images, the proposed method requires an average of 1.6 seconds per image and improves both the nucleus detection rate and the segmentation accuracy of abnormal nuclei by 19.7% compared with the method in reference [3].

19.
In this paper we describe a new approach for segmentation of the liver from CT images, and further segmentation of the liver vessels, to create a visualization model for surgical purposes. Since the usual approaches based on density models or edge detection do not work well for the liver, we investigate the texture of the liver to classify each pixel according to whether it lies on the liver-background boundary or not. The classifier outputs the boundaries of the liver in each slice, which are then used to create the organ volume. Vessels are then segmented inside the liver volume using a single automatically selected threshold. The result is morphologically closed and then smoothed with a Gaussian kernel.
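The vessel step at the end of the abstract maps naturally onto a few array operations; the sketch below uses Otsu's method as a stand-in for the paper's automatic threshold selection.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_liver_vessels(ct_slice, liver_mask, closing_iters=2, sigma=1.0):
    """Threshold vessels inside the liver mask, close small gaps, then smooth."""
    t = threshold_otsu(ct_slice[liver_mask])              # single automatic threshold
    vessels = (ct_slice > t) & liver_mask                 # contrast-enhanced vessels are bright
    struct = ndimage.generate_binary_structure(2, 1)
    vessels = ndimage.binary_closing(vessels, structure=struct, iterations=closing_iters)
    smoothed = ndimage.gaussian_filter(vessels.astype(float), sigma=sigma)
    return smoothed > 0.5
```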

20.
Segmentation of three-dimensional retinal image data
We have combined methods from volume visualization and data analysis to support better diagnosis and treatment of human retinal diseases. Many diseases can be identified by abnormalities in the thicknesses of various retinal layers captured using optical coherence tomography (OCT). We used a support vector machine (SVM) to perform semi-automatic segmentation of retinal layers for subsequent analysis, including a comparison of layer thicknesses to known healthy parameters. We have extended and generalized an older SVM approach to perform better in a clinical setting through performance enhancements and graceful handling of the noise inherent in OCT data by considering statistical characteristics at multiple levels of resolution. The addition of the multi-resolution hierarchy extends the SVM to have "global awareness", so that a feature such as a retinal layer can be modeled.
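A minimal sketch of the core classification step — an SVM labeling OCT voxels from multi-resolution statistics — is given below; the feature set (local mean and standard deviation at a few Gaussian scales) is an assumption for illustration, not the paper's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter
from sklearn.svm import SVC

def multiresolution_features(volume, sigmas=(1, 2, 4)):
    """Per-voxel features: intensity plus local mean/std at several Gaussian scales."""
    vol = volume.astype(float)
    feats = [vol]
    for s in sigmas:
        smoothed = gaussian_filter(vol, s)
        local_mean = uniform_filter(smoothed, size=2 * s + 1)
        local_var = uniform_filter(smoothed ** 2, size=2 * s + 1) - local_mean ** 2
        feats += [smoothed, np.sqrt(np.maximum(local_var, 0.0))]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def segment_layers(volume, user_marked, layer_labels):
    """Train on the voxels the user marked, then label every voxel in the volume."""
    X = multiresolution_features(volume)
    train = user_marked.ravel()                            # boolean mask of labelled voxels
    clf = SVC(kernel='rbf', C=1.0, gamma='scale')
    clf.fit(X[train], layer_labels.ravel()[train])
    return clf.predict(X).reshape(volume.shape)
```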

