Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this article, we develop an automatic detection method for non-isolated pulmonary nodules as part of a computer-aided diagnosis (CAD) system for lung cancers in chest X-ray computed tomography (CT) images. An essential core of the method is to separate non-isolated nodules from connecting structures such as the chest wall and blood vessels; the isolated nodules can then be detected more easily by previously developed CAD systems. To this end, we propose a preprocessing technique for nodule candidate detection using double-threshold binarization. We evaluate the performance using receiver operating characteristic (ROC) analysis on clinical chest CT images. The results suggest that the detection rate for non-isolated nodules by the proposed method is superior to that of conventional preprocessing methods.
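The double-threshold idea can be read as hysteresis-style binarization: voxels above a high threshold seed candidate regions, and voxels above a lower threshold survive only if connected to a seed. A minimal 2-D NumPy sketch (the thresholds and 4-neighbour growth are illustrative, not the paper's parameters):

```python
import numpy as np

def double_threshold(img, low, high, n_iter=50):
    """Hysteresis-style double-threshold binarization (illustrative sketch).
    Pixels >= high are strong seeds; pixels >= low are kept only if they
    connect to a seed region, grown here by repeated 4-neighbour dilation."""
    strong = img >= high
    weak = img >= low
    grown = strong.copy()
    for _ in range(n_iter):
        d = grown.copy()
        # dilate by one pixel in each of the four directions
        d[1:, :] |= grown[:-1, :]
        d[:-1, :] |= grown[1:, :]
        d[:, 1:] |= grown[:, :-1]
        d[:, :-1] |= grown[:, 1:]
        d &= weak                      # never grow past the low threshold
        if np.array_equal(d, grown):   # converged
            break
        grown = d
    return grown
```

A weak-but-connected pixel is kept, while an equally weak isolated pixel is discarded, which is exactly what separates attached candidates from background.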

2.
A computer-aided diagnosis (CAD) system for effective and accurate pulmonary nodule detection is required to detect nodules at an early stage. This paper proposes a novel technique to detect and classify pulmonary nodules based on statistical features of intensity values using a support vector machine (SVM). The significance of the proposed technique is that it uses nodule features in both 2D and 3D, together with an SVM classifier well suited to classifying the nodules extracted from the image. The lung volume is extracted from lung CT using thresholding, background removal, hole filling, and contour correction of the lung lobe. Candidate nodules are extracted and pruned using rules based on the ground truth of nodules, and statistical features of intensity values are extracted from the candidates. The nodule data are up-sampled to reduce bias, and the SVM classifier is trained on the resulting samples. The efficiency of the proposed CAD system is tested and evaluated using the Lung Image Database Consortium (LIDC) database, a standard dataset used in CAD systems for lung nodule classification. The results obtained from the proposed CAD system compare favorably with previous CAD systems, achieving a sensitivity of 96.31%.
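Two of the data-side steps can be sketched concretely: statistical intensity features per candidate, and up-sampling the minority class. The feature list and duplication-based oversampling below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def intensity_features(region):
    """Statistical features of a candidate region's intensity values
    (an assumed feature set: mean, std, min, max, skewness, excess kurtosis)."""
    v = np.asarray(region, dtype=float).ravel()
    m, s = v.mean(), v.std()
    skew = ((v - m) ** 3).mean() / s ** 3 if s > 0 else 0.0
    kurt = ((v - m) ** 4).mean() / s ** 4 - 3.0 if s > 0 else 0.0
    return np.array([m, s, v.min(), v.max(), skew, kurt])

def upsample_minority(X, y, rng=None):
    """Duplicate minority-class rows at random until both classes balance."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[counts.argmin()]
    need = counts.max() - counts.min()
    idx = rng.choice(np.flatnonzero(y == minority), size=need, replace=True)
    return np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])
```

The balanced feature matrix would then be fed to any SVM implementation for training.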

3.
4.
Statistics show that lung cancer has among the highest incidence and mortality rates of any disease worldwide. As computer-aided diagnosis (CAD) systems and convolutional neural networks (CNNs) mature, diagnosis and treatment in medicine are becoming increasingly intelligent. This paper proposes an automatic pulmonary nodule detection method based on an object detection algorithm, together with a lung parenchyma CT image processing pipeline that combines threshold segmentation with digital morphological processing. A YOLO V3 model was trained on 1186 pulmonary nodules from the LUNA16 dataset and validated against the dataset's evaluation results; the experiments achieved an accuracy of 92.18% with an average detection time of 0.035 s per image. Comparative experiments against existing nodule detection models such as SSD, CNN, and U-Net verify the effectiveness of the YOLO V3 model. Based on CAD techniques, a pulmonary nodule auxiliary diagnosis system was also designed, providing human-computer interaction and a simple, clear diagnostic aid for physicians.
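Detector accuracy for a model such as YOLO V3 is conventionally scored by matching predictions to ground-truth boxes with intersection-over-union (IoU); a minimal sketch (the 0.5 threshold is the common convention, not necessarily this paper's setting):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_hit(pred, gt, thresh=0.5):
    """A prediction counts as a true positive if its IoU with a
    ground-truth nodule box reaches the threshold."""
    return iou(pred, gt) >= thresh
```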

5.
In several computer-aided diagnosis (CAD) applications of image processing, there is no sufficiently sensitive and specific method for determining what constitutes a normal versus an abnormal classification of a chest radiograph. In the case of lung nodule detection, or in classifying the profusion of opacities in pneumoconiosis, multiple radiograph readers (radiologists) are asked to examine and score specific regions of interest (ROIs). The readers provide size, shape, and profusion grades for the presence of opacities in each region and then use all the ROI grades to classify the lung as normal or abnormal. The combined grades from all readers are then used to arrive at a consensus normal or abnormal classification. In this paper, using the area under the ROC curve, we evaluate new mathematical models based on mathematical statistics, logic functions, and several statistical classifiers to analyze reader performance in grading chest radiographs for pneumoconiosis, as a first step toward applying this technique to early detection of the nodules found in lung cancer. In pneumoconiosis, rounded opacities are on the order of 1-10 mm in size, while lung nodules are often not diagnosed until they reach a size on the order of 1 cm.
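The area under the ROC curve can be computed from reader scores without sweeping thresholds, via the Mann-Whitney rank statistic (a standard equivalence; this sketch assumes higher scores mean "abnormal"):

```python
import numpy as np

def auc(scores, labels):
    """AUC as the probability that a randomly chosen abnormal case scores
    higher than a randomly chosen normal one (ties count half)."""
    s = np.asarray(scores, dtype=float)
    l = np.asarray(labels, dtype=bool)
    pos, neg = s[l], s[~l]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```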

6.
The detection of solitary pulmonary nodules is key to early lung cancer diagnosis. Traditional dot-enhancement filters are highly sensitive to nodules but produce many false-positive regions. To address this, a method is proposed that identifies pulmonary nodules by computing a 3D enhancement density index and applying discrimination rules. First, an adaptive bilateral filter is used to denoise and smooth the CT image sequence. Then the corresponding Hessian matrix and its eigenvalues are computed to obtain pre-enhancement coefficients and volumes of interest, and analysis of the pre-enhancement coefficients yields the 3D enhancement density index. Finally, discrimination rules are applied to classify the volumes of interest. The method was tested on two lung CT image datasets, and the results show that it is effective at identifying solitary pulmonary nodules.
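The Hessian-eigenvalue enhancement step can be sketched as: evaluate the Hessian at every voxel and respond only where all three eigenvalues are negative (a bright blob). The response z = λ3²/|λ1| below follows the classic dot-enhancement form; it is an illustration, not the paper's exact 3D enhancement density index:

```python
import numpy as np

def dot_enhancement(vol):
    """Blob (dot) enhancement from Hessian eigenvalues of a 3D volume.
    Eigenvalues are sorted ascending, so l1 <= l2 <= l3; a bright blob
    has l3 < 0, and the response is |l3|^2 / |l1| there."""
    g = np.gradient(vol)                    # first derivatives
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(g[i])              # second derivatives
        for j in range(3):
            H[..., i, j] = gi[j]
    eig = np.linalg.eigvalsh(H)             # ascending eigenvalues
    l1, l3 = eig[..., 0], eig[..., 2]
    bright = l3 < 0                         # all eigenvalues negative
    out = np.zeros_like(vol, dtype=float)
    out[bright] = l3[bright] ** 2 / np.abs(l1[bright])
    return out
```

On a synthetic Gaussian blob, the response peaks at the blob centre and vanishes on flat background, which is why such filters are sensitive to nodules but also fire on other blob-like structures.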

7.
The automatic detection of construction materials in images acquired on a construction site is regarded as a critical topic. Recently, several data mining techniques have been used to solve the problem of detecting construction materials. These studies applied single classifiers to detect construction materials, distinguishing them from the background, using color as a feature. Recent studies suggest that combining multiple classifiers (into what is called a heterogeneous ensemble classifier) shows better performance than using a single classifier. However, the performance of ensemble classifiers in construction material detection is not fully understood. In this study, we investigated the performance of six single classifiers and potential ensemble classifiers on three data sets: one each for concrete, steel, and wood. A heterogeneous voting-based ensemble classifier was created by selecting base classifiers that are diverse and accurate; their prediction probabilities for each target class were averaged to yield a final decision for that class. In comparison with the single classifiers, the ensemble classifiers performed better across the three data sets overall. This suggests that an ensemble classifier is the better choice for enhancing the detection of construction materials in images acquired on a construction site.
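The averaging step described here is plain soft voting: each base classifier emits per-class probabilities, the ensemble averages them, and the arg-max class wins. A minimal sketch:

```python
import numpy as np

def soft_vote(prob_list):
    """Average class-probability matrices (one [n_samples, n_classes]
    matrix per base classifier) and return the winning class per sample."""
    avg = np.mean(prob_list, axis=0)
    return avg.argmax(axis=1), avg
```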

8.
Bayesian networks are important knowledge representation tools for handling uncertain pieces of information, and the success of these models is strongly related to their capacity to represent and handle dependence relations. Some forms of Bayesian networks have been successfully applied to many classification tasks; in particular, naive Bayes classifiers have been used for intrusion detection and alert correlation. This paper analyzes the advantage of adding expert knowledge to probabilistic classifiers in the context of intrusion detection and alert correlation. As examples of probabilistic classifiers, we consider the well-known naive Bayes, Tree-Augmented Naive Bayes (TAN), Hidden Naive Bayes (HNB), and decision tree classifiers. Our approach can be applied to any classifier whose outcome is a probability distribution over a set of classes (or decisions). In particular, we study how additional expert knowledge such as "it is expected that 80% of traffic will be normal" can be integrated into classification tasks. Our aim is to revise the probabilistic classifiers' outputs to fit the expert knowledge. Experimental results show that our approach improves existing results on different benchmarks from the intrusion detection and alert correlation areas.
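One standard way to make posterior outputs fit an expert-supplied class prior (e.g. "80% of traffic is normal") is prior correction: re-weight each posterior by the ratio of the expert prior to the prior implicit in the training data, then renormalize. This is an illustrative sketch of the general idea, not necessarily the paper's revision rule:

```python
import numpy as np

def revise_with_prior(posteriors, model_prior, expert_prior):
    """Re-weight each class posterior by expert_prior / model_prior and
    renormalize each row so the revised outputs still sum to one."""
    post = np.asarray(posteriors, dtype=float)
    w = np.asarray(expert_prior, dtype=float) / np.asarray(model_prior, dtype=float)
    revised = post * w
    return revised / revised.sum(axis=1, keepdims=True)
```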

9.
This paper focuses on outlier detection and its application to process monitoring. The main contribution is a dynamic ensemble detection model in which one-class classifiers are used as base learners. Developing a dynamic ensemble model for one-class classification is challenging due to the absence of labeled training samples. To this end, we propose a procedure that generates pseudo outliers, after first transforming the outputs of all base classifiers into probabilities. A probabilistic model is then used to evaluate the competence of all base classifiers. The Friedman test, together with the Nemenyi post-hoc test, is used to construct a switching mechanism that determines whether a single classifier should be nominated to make the decision or a fusion method should be applied instead. Extensive experiments on 20 data sets and an industrial application verify the effectiveness of the proposed method.
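Pseudo outliers for validating one-class learners are often drawn uniformly from a bounding box around the unlabeled target data, expanded by some margin; a common trick when no labeled outliers exist. The margin below is an illustrative assumption, not the paper's generation procedure:

```python
import numpy as np

def pseudo_outliers(X, n, margin=0.5, seed=0):
    """Sample n pseudo outliers uniformly from the target data's bounding
    box expanded by `margin` times its span on every feature."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = hi - lo
    return rng.uniform(lo - margin * span, hi + margin * span,
                       size=(n, X.shape[1]))
```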

10.
Detection of anomalies is a broad field of study applied in areas such as data monitoring, navigation, and pattern recognition. In this paper we propose two measures to detect anomalous behaviors in an ensemble of classifiers by monitoring their decisions: one based on the Mahalanobis distance and another based on information theory. These approaches are useful when an ensemble of classifiers is used and a decision is made by ordinary classifier fusion methods, while each classifier is devoted to monitoring part of the environment. Upon detection of anomalous classifiers, we propose a strategy that attempts to minimize the adverse effects of faulty classifiers by excluding them from the ensemble. We applied this method to an artificial dataset and to sensor-based human activity datasets, with different sensor configurations and two types of noise (additive and rotational, on inertial sensors). We compared our method with two other well-known approaches, the generalized likelihood ratio (GLR) and the one-class support vector machine (OCSVM), which detect anomalies at the data/feature level.
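The Mahalanobis-based measure can be sketched as: model each classifier's decision vector (e.g. its class-probability output) by the mean and covariance observed during normal operation, then flag and exclude a classifier whose current decision is too far from that model. The threshold below is illustrative:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of vector x from a Gaussian (mean, cov)."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def anomalous(decisions, mean, cov, thresh=3.0):
    """Indices of classifiers whose decision vectors look anomalous;
    these would be excluded from the ensemble before fusion."""
    return [i for i, x in enumerate(decisions)
            if mahalanobis(x, mean, cov) > thresh]
```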

11.
To address the high resource consumption, long detection cycles, and poor classification performance of current mainstream malicious-webpage detection techniques, a Stacking-based ensemble detection method is proposed, applying heterogeneous classifier ensembles to the field of malicious webpage detection. A detection model is obtained by extracting and analyzing webpage features and applying ensemble learning: the level-0 classifiers are built with k-nearest neighbors (KNN), logistic regression, and decision tree algorithms, and the level-1 meta-classifier is built with a support vector machine (SVM). Compared with traditional malicious-webpage detection approaches, the method raises recognition accuracy by 0.7%, reaching 98.12%, while consuming fewer resources and running faster. Experimental results show that the constructed detection model can identify malicious webpages efficiently and accurately.
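Stacking's data flow is simple: level-0 predictions become level-1 features. In this sketch the models are plain callables standing in for the paper's KNN / logistic-regression / decision-tree bases and SVM meta-classifier:

```python
import numpy as np

def stack_predict(base_models, meta_model, X):
    """Stacking inference: level-0 outputs are stacked column-wise and
    fed to the level-1 (meta) model as its feature matrix."""
    level1 = np.column_stack([m(X) for m in base_models])
    return meta_model(level1)
```

In a real system the meta-model would be trained on out-of-fold level-0 predictions to avoid leaking the bases' training data into the meta-learner.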

12.
AdaBoost-based algorithm for network intrusion detection.
Network intrusion detection aims at distinguishing attacks on the Internet from normal use of the Internet, and is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm, in which decision stumps are used as weak classifiers. Decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and those for categorical features into a strong classifier, the relations between these two types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and low error rates compared with algorithms of higher computational complexity, as tested on benchmark sample data.
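The core of decision stumps boosted by AdaBoost can be sketched as follows. This minimal version handles only continuous features with ±1 labels; the paper additionally defines rules for categorical features and its own initial-weight and anti-overfitting schemes:

```python
import numpy as np

def stump_fit(X, y, w):
    """Best weighted decision stump over continuous features (labels ±1)."""
    best = (0, 0.0, 1, np.inf)          # (feature, threshold, polarity, error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(X[:, j] <= t, pol, -pol)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def adaboost(X, y, rounds=10):
    """Boost stumps: upweight the samples the current stump gets wrong."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    model = []
    for _ in range(rounds):
        j, t, pol, err = stump_fit(X, y, w)
        err = max(err, 1e-10)           # clip to avoid log(0) on perfect stumps
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(X[:, j] <= t, pol, -pol)
        w = w * np.exp(-alpha * y * pred)
        w = w / w.sum()
        model.append((j, t, pol, alpha))
    return model

def adaboost_predict(model, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * np.where(X[:, j] <= t, p, -p) for j, t, p, a in model)
    return np.sign(score)
```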

13.
Machine learning techniques used in computer-aided diagnosis (CAD) systems learn a hypothesis to help medical experts make diagnoses in the future. Learning a well-performing hypothesis requires a large number of expert-diagnosed examples, which places a heavy burden on experts. By exploiting large amounts of undiagnosed examples and the power of ensemble learning, the co-training-style random forest (Co-Forest) relieves this burden and produces well-performing hypotheses. However, Co-Forest may suffer from a problem common to other co-training-style algorithms: wrongly labeled examples may accumulate during the training process. This is because the limited number of originally labeled examples usually produces poor component classifiers, which lack diversity and accuracy. In this paper, a new Co-Forest algorithm named Co-Forest with Adaptive Data Editing (ADE-Co-Forest) is proposed. It not only exploits a specific data-editing technique to identify and discard possibly mislabeled examples throughout the co-labeling iterations, but also employs an adaptive strategy to decide whether to trigger the editing operation in different cases. The adaptive strategy combines five pre-conditional theorems, all of which ensure an iterative reduction of classification error and an increase in the size of new training sets under PAC learning theory. Experiments on UCI datasets and an application to the detection of small pulmonary nodules in chest CT images show that ADE-Co-Forest enhances the performance of a learned hypothesis more effectively than Co-Forest and DE-Co-Forest (Co-Forest with data editing but without the adaptive strategy).
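The editing idea can be illustrated with a nearest-neighbor rule: drop any co-labeled example whose label disagrees with the majority of its k nearest neighbors. This is a generic edited-nearest-neighbor sketch, not ADE-Co-Forest's specific editing technique or its adaptive trigger:

```python
import numpy as np

def edit_labels(X, y, k=3):
    """Return indices of examples whose label agrees with at least half of
    their k nearest neighbours; the rest are treated as possibly mislabeled."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]        # skip the point itself
        votes = y[nn]
        if (votes == y[i]).sum() * 2 >= len(votes):
            keep.append(i)
    return np.array(keep)
```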

14.
This paper proposes a new approach to using particle swarm optimisation (PSO) within an AdaBoost framework for object detection. Instead of exhaustively searching for good features from which to construct weak classifiers in AdaBoost, we propose two PSO-based methods. The first uses PSO to evolve and select good features only, with a simple decision stump as the weak classifier. The second uses PSO both to select good features and to evolve weak classifiers in parallel. These two methods are examined and compared on two challenging object detection tasks in images: detection of individual pasta pieces and detection of a face. The experimental results suggest that both approaches can successfully detect object positions, and that using PSO to select good individual features and evolve the associated weak classifiers in AdaBoost is more effective than using it for feature selection alone. We also show that PSO can evolve and select meaningful features in the face detection task.
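A minimal global-best PSO, the search component both variants build on (the inertia and acceleration constants below are common textbook values, not the paper's settings):

```python
import numpy as np

def pso(f, dim, n=20, iters=100, seed=0):
    """Global-best PSO minimising f over [-5, 5]^dim (illustrative sketch).
    Each particle is pulled toward its personal best and the swarm best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better] = x[better]
        pval[better] = val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())
```

In the paper's setting, f would score a candidate feature (or feature-plus-weak-classifier encoding) by its weighted training error inside the AdaBoost round.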

15.
Computed tomographic (CT) colonography is a promising alternative to traditional invasive colonoscopic methods used in the detection and removal of cancerous growths, or polyps, in the colon. Existing computer-aided diagnosis (CAD) algorithms used in CT colonography typically employ a classifier to discriminate between true and false positives generated by a polyp candidate detection system, based on a set of features extracted from the candidates. However, these classifiers often suffer from a phenomenon termed the curse of dimensionality, whereby the performance of a classifier degrades markedly as the number of features used is increased. An increase in the number of features also raises computational complexity and storage demands. This paper investigates the benefits of feature selection on a polyp candidate database, with the aim of increasing specificity while preserving sensitivity. Two new mutual-information methods for feature selection are proposed in order to select a subset of features for optimum performance. Initial results show that the performance of the widely used support vector machine (SVM) classifier is indeed better with a small set of features, with area under the receiver operating characteristic curve (AUC) measures reaching 0.78-0.88.
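The quantity a mutual-information selector ranks features by can be estimated empirically for discrete feature/label pairs as follows; the paper's two proposed methods add their own selection criteria on top of this basic measure:

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete
    variables, computed from joint and marginal frequencies."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                px, py = np.mean(x == xv), np.mean(y == yv)
                mi += pxy * np.log(pxy / (px * py))
    return mi
```

A feature identical to the label carries maximal information about it, while an independent feature carries none; ranking features by this score is the simplest MI-based selector.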

16.
To address the low detection sensitivity caused by the large size variation, small size, and irregular shape of pulmonary nodules in computed tomography (CT) images, a nodule detection method based on the feature pyramid network (FPN) is proposed. First, the FPN extracts multi-scale nodule features and reinforces the features of small objects and of object boundary details. Second, a semantic segmentation network (named Mask FPN) is designed on top of the FPN to quickly and accurately segment the lung parenchyma, which serves as the localization image for candidate regions. A deconvolution layer is also added at the top of the FPN, and a multi-scale prediction strategy is used to improve the detection performance of Faster R-CNN. Finally, to address the imbalance between positive and negative samples in the nodule dataset, a focal loss function is adopted in the region proposal network (RPN) module to raise the nodule detection rate. Experiments on the public LUNA16 dataset show that the network improved with the FPN and the deconvolution layer helps nodule detection, as does the focal loss function. Combining all improvements, the method reaches a detection sensitivity of 95.7% at an average of 46.7 candidate nodules per scan, higher than other convolutional neural network methods such as Faster R-CNN and U-Net. The method extracts nodule features well at different scales, improves the sensitivity of pulmonary nodule detection in CT images, detects even small nodules effectively, and can more effectively assist the diagnosis and treatment of lung cancer.
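The imbalance fix here is the standard binary focal loss, which down-weights easy (mostly background) examples so the rare nodule samples dominate the gradient. A sketch with the usual γ = 2, α = 0.25 defaults, which may differ from the paper's settings:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss FL = -alpha_t * (1 - p_t)^gamma * log(p_t),
    where p is the predicted foreground probability and y is 0/1."""
    p_t = np.where(y == 1, p, 1 - p)
    a_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-a_t * (1 - p_t) ** gamma * np.log(p_t)))
```

The (1 - p_t)^γ factor is what suppresses well-classified background: a confidently correct sample contributes far less loss than a confidently wrong one.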

17.
Lung cancer is the deadliest cancer in the world, and detecting pulmonary nodules in chest CT images is of great significance for its early diagnosis and treatment. To reduce radiologists' workload while lowering both false-positive and missed-detection rates, researchers have proposed computer-aided detection (CAD) systems to assist radiologists in detecting and diagnosing pulmonary nodules. Researchers are currently exploring different deep learning techniques to improve the performance of CAD systems in CT-based lung cancer screening. This work reviews the deep learning algorithms and frameworks now typical of CAD systems for lung cancer detection, covering six aspects: datasets, 2D deep learning methods, 3D deep learning methods, handling of data imbalance, model training methods, and model interpretability. Finally, the main characteristics and performance of the various methods are comprehensively compared and analyzed, and prospects for improving nodule detection performance are discussed.

19.
Pulmonary nodule detection in CT images has long been a key difficulty in lung cancer CAD systems. An automatic detection algorithm for solitary pulmonary nodules is proposed. First, the lung parenchyma is segmented effectively and accurately from the original CT images, and regions of interest (ROIs) are initially segmented by searching for local gray-level maxima. Features are then extracted from each segmented ROI, each feature is quantitatively characterized with an SVM, and the Mahalanobis distance is improved by weighting it according to the single-feature SVM classification accuracy. Finally, nodules are classified using the improved weighted Mahalanobis distance. Experimental results show that the algorithm extracts solitary pulmonary nodules from CT images well, with high sensitivity and a low missed-detection rate, and can provide helpful information for physicians diagnosing early lung cancer lesions.
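The weighting step can be sketched as scaling each feature's deviation by a weight derived from that feature's single-feature SVM accuracy before taking the Mahalanobis distance. The exact weighting form below is an illustrative assumption, not necessarily the paper's formula:

```python
import numpy as np

def weighted_mahalanobis(x, mean, cov, w):
    """Mahalanobis distance with per-feature weights w (e.g. each feature's
    single-feature SVM classification accuracy); deviations are scaled by w
    before the quadratic form is evaluated."""
    d = (np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)) \
        * np.asarray(w, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

With unit weights this reduces to the ordinary Mahalanobis distance; larger weights make the corresponding, more discriminative features count more in the classification.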


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号