Similar Articles
A total of 20 similar articles were found.
1.
Objective: Mitotic nucleus counting is one of the three key scoring indicators in breast cancer diagnosis and histological grading. Deep learning-based automatic detection methods can effectively assist pathologists in identifying and counting mitotic nuclei in breast pathology images. However, the public datasets used in current research were mostly built for competitions and curated jointly by the organizers and data providers, so they differ considerably from the data encountered in clinical hospital practice, which hampers the testing and validation of model performance and generalization. To address this problem, this paper releases GZMH (Ganzhou Municipal Hospital), a dataset collected from the clinical environment of Ganzhou Municipal Hospital, China. Methods: The curated and publicly released GZMH dataset contains 55 whole slide images (WSIs) of clinical breast cancer pathology and provides two kinds of annotations, for mitotic nucleus object detection and for semantic segmentation; the annotations produced by three junior pathologists were reviewed by two senior physicians. Five mainstream object detection methods and five classical segmentation methods were trained and tested on GZMH to examine their performance on this clinical dataset. Results: Among the object detection methods, the SSD (single shot multibox detector) model achieved the best result, with an F1 score of 0.511; among the segmentation methods, R2U-Net (recurrent residual convolutional neural network based on U-Net) performed best, with an F1 score of 0.430. All methods performed clearly worse on the relatively large clinical GZMH dataset than on some public datasets. Conclusion: The proposed GZMH dataset can be used for mitosis object detection and semantic segmentation research, and its images are much closer to real application scenarios, making it valuable for advancing both research and clinical application of mitotic nucleus segmentation in breast pathology images. The dataset is available online at https://doi.org/10.57760/sciencedb.08547.
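The detection F1 scores above come from matching predicted nuclei to ground-truth annotations and counting hits, false alarms, and misses. A minimal sketch of such an evaluation is given below; the greedy matching strategy and the 30-pixel matching radius are illustrative assumptions, not values taken from the GZMH paper.

```python
import numpy as np

def detection_f1(pred_centers, gt_centers, max_dist=30.0):
    """Greedy nearest-neighbour matching of predicted to ground-truth nuclei
    within max_dist pixels, then precision / recall / F1.
    The 30-pixel radius is an illustrative assumption."""
    pred_centers = np.asarray(pred_centers, dtype=float)
    gt_centers = np.asarray(gt_centers, dtype=float)
    matched, tp = set(), 0
    for p in pred_centers:
        if len(gt_centers) == 0:
            break
        d = np.linalg.norm(gt_centers - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist and j not in matched:
            matched.add(j)
            tp += 1
    fp = len(pred_centers) - tp
    fn = len(gt_centers) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```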

2.
Accurate and efficient classification of breast cancer pathology images is an important topic in computer-aided diagnosis. With the development of machine learning, deep learning has become an effective approach to breast cancer pathology image classification. This paper analyzes existing breast cancer pathology image classification methods and their remaining problems; introduces four relevant deep learning models and surveys deep learning-based classification methods for breast cancer pathology images, comparing the performance of existing models experimentally; and finally summarizes the key issues in breast cancer pathology image classification and discusses future research trends.

3.
Breast cancer (BC) is one of the most widespread and deadly cancers; it is mostly diagnosed in middle-aged women worldwide and affects more than half a million people every year. Newly diagnosed BC-positive cases reached 2.1 million worldwide in 2018, with a death rate of 11.6% of total cases. Early diagnosis and detection of breast cancer, followed by proper treatment, may reduce the number of deaths. The gold standard for BC detection is biopsy analysis, which requires an expert for correct diagnosis; manual diagnosis of BC is a complex and challenging task. This work proposes a deep learning (DL)-based solution for the early detection of this deadly disease from histopathology images. To evaluate the robustness of the proposed method, a large publicly available breast histopathology image database containing 277,524 histopathology images is used. The proposed automatic BC detection and classification involves three main steps. First, a DL model is proposed for feature extraction. Second, the extracted feature vector (FV) is passed to the proposed novel feature selection (FS) framework to select the best features. Finally, different machine learning (ML) algorithms are used to classify BC into invasive ductal carcinoma (IDC) and normal classes. The proposed methodology achieved a highest accuracy of 92.7%, which shows that the technique can be implemented for BC detection to aid pathologists in early and accurate diagnosis of BC.
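As a rough illustration of the three-step pipeline described above (deep feature extraction, feature selection, then an ML classifier), here is a minimal sketch assuming PyTorch/torchvision and scikit-learn; the ResNet-18 backbone, the SelectKBest selector, and the SVM are generic stand-ins for the paper's own DL model, novel FS framework, and classifier set.

```python
import torch
import torchvision.models as models
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Frozen ImageNet backbone as a generic feature extractor (512-D per patch).
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(batch):          # batch: (N, 3, 224, 224) float tensor
    return backbone(batch).cpu().numpy()

# Feature selection followed by a classifier (IDC vs. normal).
clf = make_pipeline(SelectKBest(f_classif, k=128), SVC(kernel="rbf"))
# clf.fit(extract_features(train_patches), train_labels)
# preds = clf.predict(extract_features(test_patches))
```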

4.
龚磊  徐军  王冠皓  吴建中  唐金海 《计算机应用》2015,35(12):3570-3575
To assist pathologists in diagnosing breast cancer quickly and efficiently and to provide prognostic information, a computer-aided method for automatic histopathological grading of breast tumors is proposed. The method uses a deep convolutional neural network with a sliding window to automatically detect cells in pathology images; it then combines color separation based on sparse non-negative matrix factorization, a marker-controlled watershed algorithm, and ellipse fitting to obtain the contour of each cell. Based on the detected cells and fitted contours, 203 features are extracted, covering tumor tissue-architecture features and texture and shape features of epithelial cells, and a support vector machine (SVM) classifier is trained on these features for automatic grading of histopathology images. Evaluation with 100 runs of 10-fold cross-validation on 49 H&E-stained breast cancer histopathology images from 17 patients shows that the cell shape features combined with the spatial structure features of the tissue achieve an overall accuracy of 90.20% for classifying images into high, intermediate, and low differentiation grades, with per-grade accuracies of 92.87%, 82.88%, and 93.61%, respectively. Compared with methods that use only structural or only texture features, the proposed method achieves higher accuracy, can accurately grade high and low tumor differentiation in histopathology images, and shows smaller accuracy differences across grades.
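A minimal sketch of the sliding-window cell detection step is shown below; `cell_net` stands for any trained patch classifier, and the window size, stride, and threshold are illustrative assumptions rather than the paper's settings (the subsequent color separation, watershed, ellipse fitting, and 203-dimensional feature extraction are not reproduced here).

```python
import numpy as np
import torch

def sliding_window_detect(image, cell_net, win=64, stride=16, thr=0.9):
    """image: (H, W, 3) float array in [0, 1]; cell_net: patch -> cell logit.
    Window size, stride and threshold are illustrative assumptions."""
    H, W, _ = image.shape
    rows = (H - win) // stride + 1
    cols = (W - win) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = image[i*stride:i*stride + win, j*stride:j*stride + win]
            x = torch.from_numpy(patch).permute(2, 0, 1).unsqueeze(0).float()
            with torch.no_grad():
                heat[i, j] = torch.sigmoid(cell_net(x)).item()
    ys, xs = np.where(heat > thr)
    # approximate cell centres in image coordinates
    return [(y*stride + win // 2, x*stride + win // 2) for y, x in zip(ys, xs)]
```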

5.
Mitosis detection and recognition in phase-contrast microscopy image sequences is a fundamental problem in many biomedical applications. Traditionally, researchers detect all mitotic cells in these image sequences by eye, which is tedious and time consuming. In recent years, many computer vision techniques have been proposed to automate mitosis detection. In this paper, we present an approach that uses the evolution of features in the time domain to represent mitosis. First, the feature of each cell image is extracted by different methods (GIST, SIFT, CNN). Second, we construct levels of mitosis according to the steps of the mitotic process, and pooling is used to fuse the features in each dimension and across different time segments. Third, the pooled features are combined into one vector to represent the video. Finally, the traditional machine learning method SVM is used to handle the mitosis recognition problem. To demonstrate the performance of our approach, mitosis event detection is performed on several microscopy image sequences, with some classic methods used as comparison baselines. The corresponding experiments demonstrate the superiority of our approach.
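The temporal pooling idea above can be sketched as follows; the three-segment split and the use of max pooling are illustrative choices, and the per-frame descriptors (GIST, SIFT, or CNN features) are assumed to be precomputed.

```python
import numpy as np
from sklearn.svm import SVC

def temporal_pooling(frame_feats, n_segments=3):
    """frame_feats: (T, D) array of per-frame descriptors (GIST/SIFT/CNN).
    Pool within each temporal segment, then concatenate into one video vector."""
    segments = np.array_split(frame_feats, n_segments, axis=0)
    return np.concatenate([seg.max(axis=0) for seg in segments])  # (n_segments*D,)

# X = np.stack([temporal_pooling(seq) for seq in all_sequences])
# SVC(kernel="rbf").fit(X, labels)   # mitosis vs. non-mitosis sequences
```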

6.
Objective: To address the scarcity and limitations of existing underwater image quality assessment methods, a no-reference method that requires no handcrafted features is proposed. Method: The proposed method combines a deep learning network with a random forest regression model. A deep neural network first extracts features from underwater images; the extracted features and annotated quality scores are then used to train the regression model; finally, the trained regression model predicts the quality of underwater images. Results: The method is evaluated on an underwater image dataset collected for this work and on the outputs of underwater image enhancement algorithms, and compared with multiple quality assessment methods, including comparisons of predictions against subjective quality scores, evaluation of enhancement results, correlation between predictions and subjective scores, and robustness. Subjective experiments show that the proposed method produces underwater image quality scores that agree relatively well with human visual perception and is more robust. Quantitative experiments show that its predicted scores correlate more strongly with subjective scores than those of other methods. Conclusion: The proposed method requires no reference image, avoids handcrafted features, and makes full use of the learning and representation capability of deep networks. It is accurate, general, and robust, its predicted scores are highly consistent with human visual perception, and it is applicable to both raw underwater images and the outputs of underwater image enhancement algorithms.
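A minimal sketch of the deep-feature-plus-random-forest-regression scheme, assuming PyTorch/torchvision and scikit-learn; the VGG-16 backbone and global average pooling are illustrative stand-ins, since the abstract does not name the network used.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestRegressor

vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()

@torch.no_grad()
def deep_feature(img):                     # img: (1, 3, 224, 224) float tensor
    fmap = vgg(img)                        # (1, 512, 7, 7)
    return fmap.mean(dim=(2, 3)).squeeze(0).numpy()   # global average pooling

rf = RandomForestRegressor(n_estimators=200)
# rf.fit(np.stack([deep_feature(i) for i in train_imgs]), subjective_scores)
# predicted_score = rf.predict([deep_feature(test_img)])
```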

7.
Transductive cost-sensitive lung cancer image classification
Previous computer-aided lung cancer image classification methods are all cost-blind: they assume that the costs of misdiagnosis (categorizing a cancerous image as normal or a normal image as cancerous) are equal. In addition, previous methods usually require experienced pathologists to label a large number of images as training samples. To this end, a novel transductive cost-sensitive method is proposed for lung cancer image classification on needle biopsy specimens, which only requires the pathologist to label a small number of images. The proposed method analyzes lung cancer images through the following procedures: (i) an image capturing procedure to capture images from the needle biopsy specimens; (ii) a preprocessing procedure to segment individual cells from the captured images; (iii) a feature extraction procedure to extract features (i.e., shape, color, texture, and statistical information) from the obtained individual cells; (iv) a codebook learning procedure that applies k-means clustering to the extracted features, so that each image can be represented as a histogram over different codewords; (v) an image classification procedure to predict labels for testing images using the proposed multi-class cost-sensitive Laplacian regularized least squares (mCLRLS). We evaluate the proposed method on a real image set provided by Bayi Hospital, which contains 271 images including normal ones and four types of cancerous ones (squamous carcinoma, adenocarcinoma, small cell cancer, and nuclear atypia). The experimental results demonstrate that the proposed method achieves a lower cancer-misdiagnosis rate and lower total misdiagnosis costs than previous methods, including supervised learning approaches (kNN, mcSVM, and MCMI-AdaBoost), a semi-supervised learning approach (LapRLS), and a cost-sensitive approach (CS-SVM). The experiments also show that both the transductive and the cost-sensitive settings are useful when only a small number of training images are available.
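Step (iv) above, learning a codebook with k-means and representing each image as a histogram over codewords, can be sketched as follows; the codebook size is an illustrative assumption, and the cost-sensitive mCLRLS classifier itself is specific to the paper and not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_codebook(cell_features, n_codewords=256):
    """cell_features: (N_cells, D) features pooled from all training images.
    The codebook size is an illustrative assumption."""
    return KMeans(n_clusters=n_codewords, n_init=10).fit(cell_features)

def image_histogram(codebook, image_cell_features):
    """Represent one image as a normalised histogram over codewords."""
    words = codebook.predict(image_cell_features)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```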

8.
Efficiently representing and recognizing the semantic classes of the subregions of large-scale high spatial resolution (HSR) remote-sensing images are challenging and critical problems. Most of the existing scene classification methods concentrate on the feature coding approach with handcrafted low-level features or the low-level unsupervised feature learning approaches, which essentially prevent them from better recognizing the semantic categories of the scene due to their limited mid-level feature representation ability. In this article, to overcome the inadequate mid-level representation, a patch-based spatial-spectral hierarchical convolutional sparse auto-encoder (HCSAE) algorithm, based on deep learning, is proposed for HSR remote-sensing imagery scene classification. The HCSAE framework uses an unsupervised hierarchical network based on a sparse auto-encoder (SAE) model. In contrast to the single-level SAE, the HCSAE framework utilizes the significant features from the single-level algorithm in a feedforward and full connection approach to the maximum extent, which adequately represents the scene semantics in the high level of the HCSAE. To ensure robust feature learning and extraction during the SAE feature extraction procedure, a ‘dropout’ strategy is also introduced. The experimental results using the UC Merced data set with 21 classes and a Google Earth data set with 12 classes demonstrate that the proposed HCSAE framework can provide better accuracy than the traditional scene classification methods and the single-level convolutional sparse auto-encoder (CSAE) algorithm.
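A single-level sparse auto-encoder, the building block of the HCSAE framework, can be sketched in PyTorch as below; the layer sizes and sparsity parameters are illustrative assumptions, and the hierarchical stacking and 'dropout' strategy of the full framework are omitted.

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """Single-level sparse auto-encoder; layer sizes are illustrative."""
    def __init__(self, in_dim, hid_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        h = self.enc(x)
        return self.dec(h), h

def sparse_loss(x, x_rec, h, rho=0.05, beta=3.0):
    """Reconstruction error plus a KL sparsity penalty on mean hidden activations."""
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    kl = (rho * torch.log(rho / rho_hat)
          + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()
    return nn.functional.mse_loss(x_rec, x) + beta * kl
```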

9.

Remote sensing scene classification has gained much interest in recent years for strategic fields such as security and land cover and land use monitoring. Several methods have been proposed in the literature, and they can be divided into three main classes based on the features used: handcrafted features, features obtained by unsupervised learning, and features obtained from deep learning. Handcrafted features are generally time consuming and suboptimal. Unsupervised-learning-based features, proposed later, gave better results, but their performance is still limited because they mainly rely on shallow networks and cannot extract powerful features. Deep-learning-based features have been investigated recently and gave interesting results, but they often cannot be used because of the scarcity of labelled remote sensing images, and they are also computationally expensive. Most importantly, whatever kind of feature is used, its neighbourhood information is ignored. In this paper, we propose a novel remote sensing scene representation and classification approach called Bag of Visual SubGraphs (BoVSG). First, each image is segmented into superpixels in order to summarize the image content while retaining relevant information. Then, the superpixels from all images are clustered according to their colour and texture features, and a random label is assigned to each cluster, which typically corresponds to some material or land cover type; superpixels belonging to the same cluster thus share the same label. Afterwards, each image is modelled as a graph whose nodes correspond to labelled superpixels and whose edges model spatial neighbourhoods. Finally, each image is represented by a histogram of the most frequent subgraphs, corresponding to land cover adjacency patterns; in this way, local spatial relations between the nodes are also taken into account. The resulting feature vectors are classified using standard classification algorithms. The proposed approach is tested on three popular datasets, and its performance outperforms state-of-the-art methods, including deep learning methods. Besides its accuracy, the proposed approach is computationally much less expensive than deep learning methods.
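The first two BoVSG stages (superpixel segmentation, then clustering superpixels by appearance so that each superpixel receives a cluster label) might look like the sketch below, assuming scikit-image and scikit-learn; texture features and the subgraph mining step, which is the core of BoVSG, are omitted.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans

def superpixel_cluster_labels(image, n_segments=300, n_clusters=30):
    """image: (H, W, 3) float array in [0, 1].  Segment into superpixels,
    describe each by its mean colour, cluster the descriptors, and return
    the per-pixel superpixel map plus a cluster label per superpixel."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    feats = np.stack([image[segments == s].mean(axis=0)
                      for s in range(segments.max() + 1)])
    cluster_of = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    return segments, cluster_of
```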

10.
Automation in plant disease detection and diagnosis is one of the challenging research areas that has gained significant attention in the agricultural sector. Traditional disease detection methods rely on extracting handcrafted features from the acquired images to identify the type of infection, and the performance of these works depends solely on the nature of the handcrafted features selected. This can be addressed by learning the features automatically with the help of convolutional neural networks (CNNs). This research presents two different deep architectures for detecting the type of infection in tomato leaves. The first architecture applies residual learning to learn significant features for classification; the second applies an attention mechanism on top of the residual deep network. Experiments were conducted using the Plant Village dataset, comprising three diseases: early blight, late blight, and leaf mold. The proposed work exploits the features learned by the CNN at various levels of the processing hierarchy using the attention mechanism and achieved an overall accuracy of 98% on the validation sets in 5-fold cross-validation.
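The abstract does not detail the attention design, so as a hedged illustration, here is a minimal squeeze-and-excitation-style channel attention block in PyTorch of the kind commonly placed on top of residual features; it is an illustrative stand-in, not the paper's exact module.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention over residual features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction),
                                nn.ReLU(inplace=True),
                                nn.Linear(channels // reduction, channels),
                                nn.Sigmoid())

    def forward(self, x):                 # x: (N, C, H, W) residual feature map
        w = self.fc(x.mean(dim=(2, 3)))   # global average pool -> channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)
```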

11.

Immunoglobulin A (IgA) nephropathy (IgAN) is one of the major causes of renal failure and provides vital clues for estimating the stage and progression rate of end-stage kidney disease. The IgAN stage can be estimated with the help of the MEST-C score, but manual estimation of the MEST-C score from whole slide kidney images is a very tedious and difficult task. This study uses convolutional neural network (CNN)-based models to detect mesangial hypercellularity (the M score in MEST-C). A CNN learns features directly from image data without requiring analytical data, and it trains efficiently when the image data for each class is large enough. For smaller datasets, transfer learning can be used, in which a CNN pre-trained on general images is then fine-tuned on the subject images; this saves the time needed to collect a large dataset and also reduces training time because the model is already pre-trained. This research work aims at detecting mesangial hypercellularity from biopsy images with a small dataset by utilizing transfer learning. The dataset used consists of 138 individual glomerulus (×20 magnification digital biopsy) images of IgA patients received from the All India Institute of Medical Sciences, Delhi. Machine learning classifiers (k-nearest neighbour (KNN) and support vector machine (SVM)) operating on deep extracted image features are compared with transfer-learning CNN methods using several evaluation parameters. The work concludes that the transfer-learning deep CNN method can improve the detection of mesangial hypercellularity compared to the KNN and SVM methods when using the small dataset. This model could help pathologists understand the stages of kidney failure.
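A minimal transfer-learning sketch in the spirit described above: a CNN pre-trained on general images is frozen and only a new classification head is trained on the small glomerulus set. The ResNet-18 backbone and the binary head are assumptions, since the study does not name its architecture in this abstract.

```python
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                       # freeze pre-trained layers
model.fc = nn.Linear(model.fc.in_features, 2)     # hypercellular vs. normal glomerulus

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# A standard training loop over the small labelled glomerulus set follows.
```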


12.
Liu  Huafeng  Han  Xiaofeng  Li  Xiangrui  Yao  Yazhou  Huang  Pu  Tang  Zhenmin 《Multimedia Tools and Applications》2019,78(17):24269-24283

Robust road detection is a key challenge in safe autonomous driving. Recently, with the rapid development of 3D sensors, more and more researchers are trying to fuse information across different sensors to improve the performance of road detection. Although much successful work has been done in this field, data fusion under a deep learning framework is still an open problem. In this paper, we propose a Siamese deep neural network based on FCN-8s to detect the road region. Our method uses data collected from a monocular color camera and a Velodyne-64 LiDAR sensor. We project the LiDAR point clouds onto the image plane to generate LiDAR images and feed them into one branch of the network, while the RGB images are fed into the other branch. The feature maps that these two branches extract at multiple scales are fused before each pooling layer via additional fusion layers. Extensive experimental results on the public KITTI ROAD dataset demonstrate the effectiveness of our proposed approach.
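A toy two-branch fusion sketch in the spirit of the approach above (RGB and projected-LiDAR images processed by parallel branches whose feature maps are fused before pooling); this is an illustrative PyTorch stand-in, not the paper's FCN-8s-based Siamese architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(inplace=True))

class TwoBranchFusion(nn.Module):
    """Toy two-branch network: RGB and projected-LiDAR images are processed in
    parallel and their feature maps are fused (summed) before pooling."""
    def __init__(self):
        super().__init__()
        self.rgb_branch = conv_block(3, 32)
        self.lidar_branch = conv_block(3, 32)   # assumes a 3-channel LiDAR image
        self.pool = nn.MaxPool2d(2)
        self.head = nn.Conv2d(32, 1, 1)         # coarse per-pixel road score

    def forward(self, rgb, lidar_img):
        fused = self.rgb_branch(rgb) + self.lidar_branch(lidar_img)
        return self.head(self.pool(fused))
```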


13.
In recent years, salient object detection methods based on fully convolutional networks have made considerable progress over handcrafted-feature methods, but several problems remain for images of complex scenes. A new salient object detection model guided by global features is proposed to study the important role of deep semantic features in multi-scale, multi-level feature representation. Built on the encoder-decoder structure of the feature pyramid network, the bottom-up path includes a global feature generation module (GGM) that accurately extracts the location information of salient objects; a residual module (RM) that strengthens contextual connections is constructed to extract multi-scale features from each side output; and a guidance flow (GF) fuses the global feature generation module with the residual modules, using deep semantic features to guide shallow feature extraction, highlighting salient objects while suppressing background noise. Experimental results on five benchmark datasets show that the model outperforms 11 mainstream methods.

14.
Breast cancer is one of the fastest-growing diseases seriously affecting women's health, so it is highly essential to identify and detect it at an early stage. This paper uses deep learning, a more advanced methodology than conventional machine learning algorithms, to classify breast cancer accurately. Deep learning algorithms learn, extract, and classify features fully automatically and are well suited to any kind of image, from natural to medical. Existing methods have focused on various conventional and machine learning techniques for processing natural and medical images, which is inadequate for images where the coarse structure matters most: most input images are downscaled, making it impossible to recover all the hidden details needed for accurate classification. Deep learning algorithms, in contrast, are highly efficient, fully automatic, have greater learning capacity through additional hidden layers, extract as much hidden information as possible from the input images, and provide accurate predictions. Hence, this paper uses AlexNet, a deep convolutional neural network, for classifying breast cancer in mammogram images. The performance of the proposed convolutional network structure is evaluated by comparing it with existing algorithms.

15.
In recent years, breast cancer has seriously threatened women's health worldwide, and mammography is an effective imaging examination for breast cancer screening. Computer-aided diagnosis (CAD) of mammograms applies advanced artificial intelligence techniques such as computer vision, image processing, and machine learning to automatically analyze and process mammographic images, providing an important diagnostic reference for physicians in clinical practice. Focusing mainly on mass and microcalcification lesion detection, ...

16.
With computers and the Internet being essential in everyday life, malware poses serious and evolving threats to their security, making the detection of malware of utmost concern. Accordingly, there has been much research on intelligent malware detection applying data mining and machine learning techniques. Although great results have been achieved with these methods, most of them are built on shallow learning architectures. Due to its superior ability in feature learning through multilayer deep architectures, deep learning is starting to be leveraged in industrial and academic research for different applications. In this paper, based on the Windows application programming interface calls extracted from portable executable files, we study how a deep learning architecture can be designed for intelligent malware detection. We propose a heterogeneous deep learning framework composed of an AutoEncoder stacked with multilayer restricted Boltzmann machines and a layer of associative memory to detect newly unknown malware. The proposed deep learning model performs greedy layer-wise training for unsupervised feature learning, followed by supervised parameter fine-tuning. Unlike existing works that only use files with class labels (either malicious or benign) during the training phase, we utilize both labeled and unlabeled file samples to pre-train multiple layers in the heterogeneous deep learning framework from the bottom up for feature learning. A comprehensive experimental study on a real and large file collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed deep learning framework can further improve overall malware detection performance compared with traditional shallow learning methods, deep learning methods with a homogeneous framework, and other existing anti-malware scanners. The proposed heterogeneous deep learning framework can also be readily applied to other malware detection tasks.

17.
The heterogeneous gap among different modalities is one of the critical issues in multimedia retrieval. Unlike traditional unimodal cases, where raw features are extracted and directly measured, the heterogeneous nature of crossmodal tasks requires intrinsic semantic representations to be compared in a unified framework. Based on a flexible “feature up-lifting and down projecting” mechanism, this paper studies the learning of crossmodal semantic features that can be retrieved across different modalities. Two effective methods are proposed to mine semantic correlations: one for traditional handcrafted features and the other based on a deep neural network. We treat them respectively as the normal and deep versions of our proposed shared discriminative semantic representation learning (SDSRL) framework. We evaluate both methods on two public multimodal datasets for crossmodal and unimodal retrieval tasks. The experimental results demonstrate that our proposed methods outperform the compared baselines and achieve state-of-the-art performance in most scenarios.

18.
Lung cancer is a malignant tumor that seriously threatens patients' lives. Survival prediction analysis for lung cancer patients and the design of targeted treatment plans based on it are of great significance for improving survival rates. A survival prediction method for lung cancer patients based on histopathology images is proposed. First, deep learning is used to automatically detect lung cancer cells in pathology images, and features are extracted from the detected cells. In feature selection, a method for extracting topological features that reflect the relationships and spatial distribution among lung cancer cells is introduced, and these topological features are used as predictors for survival analysis. Finally, the Cox-Lasso method is applied to survival prediction analysis of lung cancer patients. Experimental results show that the method improves the efficiency and accuracy of cell detection and achieves good survival prediction performance for lung cancer patients.
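The final Cox-Lasso step could be sketched with the lifelines package as below; the column names, the penalty strength, and the use of lifelines itself are illustrative assumptions rather than the paper's implementation.

```python
import pandas as pd
from lifelines import CoxPHFitter

# df: one row per patient, feature columns (e.g. topological descriptors)
# plus survival time and event indicator; column names are illustrative.
# df = pd.DataFrame({"topo_feat_1": [...], "topo_feat_2": [...],
#                    "survival_months": [...], "event_observed": [...]})

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # l1_ratio=1.0 -> Lasso penalty
# cph.fit(df, duration_col="survival_months", event_col="event_observed")
# cph.print_summary()          # coefficients of the selected prognostic features
```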

19.
Research progress in video smoke detection
Objective: Video smoke detection has the advantages of fast response, robustness to environmental factors, wide applicability, and low cost, providing strong support for early fire warning. Numerous video-based detection methods have emerged in recent years, and although detection rates have improved, they still suffer from high false alarm and miss rates. To comprehensively reflect the state and latest progress of video smoke detection research, this paper reviews and analyzes the main literature published at home and abroad from 2014 to 2017. Method: Based on an extensive literature survey and the basic framework of video smoke detection, the latest literature is systematically analyzed and summarized around the processing stages of video image preprocessing, candidate smoke region extraction, smoke feature description, and smoke classification and recognition; deep learning-based detection methods that depart from the traditional framework are also summarized. Results: Candidate smoke region extraction methods are reviewed in two categories, smoke motion features and smoke static features; smoke feature description methods are reviewed from three aspects, statistical features, transform-domain features, and local pattern features, and summarized from seven perspectives including color and shape; smoke recognition and decision methods are reviewed from rule-based and learning-based viewpoints; finally, deep learning-based methods are described separately. Through this systematic review, the recent progress and remaining shortcomings of video smoke detection are distilled, and its development prospects are discussed. Conclusion: Research on video smoke detection continues to attract attention, and more and more high-performing detection algorithms keep emerging. Through a comprehensive review and systematic analysis of existing research, it is hoped that video smoke detection will make greater progress and be better applied in industry, providing stronger support for fire early warning.

20.
