Similar Literature
20 similar documents retrieved.
1.
Colon cancer has been reported to be one of the most frequently diagnosed cancers and a leading cause of cancer deaths. Early detection and removal of malignant polyps, which are precursors of colon cancer, can greatly reduce the fatality rate. The detection and segmentation of polyps in colonoscopy is a challenging task even for an experienced colonoscopist, due to variations in the size, shape, and texture of polyps and their close resemblance to the colon lining. Machine-assisted detection, localization, and segmentation of polyps during the screening procedure can greatly assist clinicians. Autoencoder-based architectures used in polyp segmentation are not efficient at incorporating both local and long-range pixel dependencies. To address the challenges in the automatic segmentation of colon polyps, we propose an autoencoder architecture augmented with a feature attention module in the decoder. The salient features of RGB colonoscopic images are extracted using a residual skip-connected autoencoder. The decoder attention module joins the spatial subspace with the feature subspace extracted from the deep residual convolutional neural network and enhances the feature weights for precise segmentation of polyp regions. Extensive experiments on four publicly available polyp datasets demonstrate that the proposed architecture delivers very strong performance in terms of segmentation metrics (Dice and Jaccard scores) when compared with state-of-the-art polyp segmentation approaches.
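
A minimal PyTorch sketch of how a decoder feature-attention block of this kind might re-weight features along both the channel (feature) subspace and the spatial subspace; the module layout, kernel sizes, and parameter names below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical decoder feature-attention block (not the paper's code):
# channel attention re-weights feature maps, spatial attention re-weights locations.
import torch
import torch.nn as nn

class DecoderAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel (feature-subspace) attention: squeeze-and-excitation style gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial-subspace attention: a single-channel map over locations.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)   # enhance informative feature channels
        x = x * self.spatial_gate(x)   # emphasise likely polyp locations
        return x

# Example: re-weight a batch of decoder feature maps.
feats = torch.randn(2, 64, 88, 88)
out = DecoderAttention(64)(feats)
print(out.shape)  # torch.Size([2, 64, 88, 88])
```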

2.
Detection and classification of blurred and non-blurred regions in images is a challenging task due to the limited available information about the blur type, the scenario, and the level of blurriness. In this paper, we propose an effective method for blur detection and segmentation based on the transfer learning concept. The proposed method consists of two separate steps. In the first step, a genetic programming (GP) model is developed that quantifies the amount of blur for each pixel in the image. The GP model uses the multi-resolution features of the image and provides an improved blur map. In the second step, the blur map is segmented into blurred and non-blurred regions using an adaptive threshold. A model based on a support vector machine (SVM) is developed to compute the adaptive threshold for the input blur map. The performance of the proposed method is evaluated using two different datasets and compared with various state-of-the-art methods. The comparative analysis reveals that the proposed method outperforms the state-of-the-art techniques.
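
A hedged sketch of the second stage, under the assumption that simple summary statistics of the blur map feed an SVM regressor that predicts the adaptive threshold; the feature set, training data, and hyperparameters below are placeholders rather than the paper's actual choices.

```python
# Illustrative sketch (assumed details): an SVM regressor maps simple statistics
# of a blur map to an adaptive threshold used to binarise the map.
import numpy as np
from sklearn.svm import SVR

def blur_map_features(blur_map):
    # Hand-picked summary statistics; the actual feature set in the paper may differ.
    return [blur_map.mean(), blur_map.std(),
            np.median(blur_map), np.percentile(blur_map, 90)]

rng = np.random.default_rng(0)
# Placeholder training data: blur maps paired with known "good" thresholds.
train_maps = [rng.random((64, 64)) for _ in range(50)]
train_thresholds = [float(np.percentile(m, 60)) for m in train_maps]

svr = SVR(kernel="rbf").fit([blur_map_features(m) for m in train_maps], train_thresholds)

test_map = rng.random((64, 64))
t = svr.predict([blur_map_features(test_map)])[0]
segmentation = test_map >= t   # True = blurred region, False = non-blurred region
print(f"adaptive threshold = {t:.3f}, blurred fraction = {segmentation.mean():.2f}")
```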

3.
The coronavirus disease 2019 (COVID-19) epidemic has had devastating effects on personal health around the world. Accurate segmentation of pulmonary infection regions, an early indicator of the disease, is therefore important. To solve this problem, a deep learning model, namely the content-aware pre-activated residual UNet (CAPA-ResUNet), was proposed for segmenting COVID-19 lesions from CT slices. In this network, the pre-activated residual block was used for down-sampling to cope with the complex foreground and the large fluctuations in data distribution during training and to avoid vanishing gradients. An area loss function based on the falsely segmented regions was proposed to address the fuzzy boundary of the lesion area. The model was evaluated on a public dataset (COVID-19 Lung CT Lesion Segmentation Challenge—2020) and its performance was compared with those of classical models. Our method outperforms the other models on multiple metrics: for the Dice coefficient, specificity (Spe), and intersection over union (IoU), CAPA-ResUNet obtained 0.775, 0.972, and 0.646, respectively. The Dice coefficient of our model was 2.51% higher than that of the content-aware residual UNet (CARes-UNet). The code is available at https://github.com/malu108/LungInfectionSeg .
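
As a reference for the pre-activated residual block mentioned above, here is a minimal PyTorch sketch of the standard pre-activation ordering (BN, then ReLU, then convolution) with an identity shortcut; the channel sizes and stride are illustrative and not taken from the paper.

```python
# Minimal pre-activated residual block (BN -> ReLU -> Conv ordering).
import torch
import torch.nn as nn

class PreActResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        # 1x1 projection so the skip path matches shape when stride/channels change.
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)
                         if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return out + self.shortcut(x)  # identity path keeps gradients flowing

x = torch.randn(1, 32, 128, 128)
print(PreActResidualBlock(32, 64, stride=2)(x).shape)  # torch.Size([1, 64, 64, 64])
```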

4.
Tissue segmentation is a fundamental and important task in nasopharyngeal image analysis. However, it is challenging to accurately and quickly segment the various tissues in the nasopharynx region because of the small differences in gray value between tissues in nasopharyngeal images and the complexity of the tissue structure. In this paper, we propose a novel tissue segmentation approach based on a two-stage learning framework and U-Net. In the proposed methodology, the network consists of two segmentation modules: the first performs rough segmentation and the second performs accurate segmentation. Considering the training time and the limitation of computing resources, the structure of the second module is simpler and has fewer network layers. In addition, our segmentation module is based on U-Net and incorporates a skip structure, which makes full use of the original features of the data and avoids feature loss. We evaluated the proposed method on the nasopharyngeal dataset provided by West China Hospital of Sichuan University. The experimental results show that the proposed method is superior to many standard segmentation architectures and to a recently proposed nasopharyngeal tissue segmentation method, and can be easily generalized across different tissue types in various organs.

5.
Unsupervised texture segmentation is a challenging topic in computer vision. It is difficult to obtain the boundaries of texture regions automatically in real time, especially for cluttered images. This paper presents a new fast unsupervised texture segmentation method. First, the Texel similarity map (TSM) is used to compare the changes in intensity and gray level of neighboring pixels to determine whether they are identical. Then, a scheme called multiple directions integral images (MDII) is proposed to quickly evaluate the TSM. With the aid of MDII, a pixel's similarity value can be computed in constant time. Finally, the proposed segmentation method is tested on both artificial texture images and natural images. Experimental results show that the proposed method performs well on a wide range of images and outperforms state-of-the-art methods in segmentation speed.
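
To illustrate the constant-time evaluation that MDII builds on, here is a small NumPy sketch of the standard axis-aligned integral image, where the sum over any rectangle is obtained from four table lookups; the directional extension used in the paper is not reproduced here.

```python
# Standard integral image: after one cumulative-sum pass, any rectangular sum
# costs O(1). MDII extends this idea to multiple directions.
import numpy as np

def integral_image(img):
    # Pad with a zero row/column so box_sum needs no boundary checks.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using four lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 3, 3), img[1:3, 1:3].sum())  # 30.0 30.0
```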

6.
Assessing the age of an individual from bones serves as a reliable method for determining true chronological age. Several attempts have been reported in the past to assess the chronological age of an individual based on a variety of discriminative features found in wrist radiograph images; permutations and combinations of these features achieved satisfactory accuracies for a limited set of groups. In this paper, gender assessment for individuals of chronological age between 1 and 17 years is performed using left-hand wrist radiograph images. A fully automated approach is proposed for the removal of noise caused by non-uniform illumination during radiograph acquisition. Subsequently, a computational technique for extraction of the wrist region is proposed using operations on specific bit planes of the image. A deep convolutional neural network framework called GeNet is applied for classification of the extracted wrist regions into male and female. Experiments are conducted on the Radiological Society of North America (RSNA) dataset of about 12,442 images. The efficiency of the preprocessing and segmentation techniques resulted in a correlation of about 99.09%. The performance of GeNet, evaluated on the extracted wrist regions, resulted in an accuracy of 82.18%.
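
A brief NumPy sketch of bit-plane slicing, the kind of operation on specific bit planes referred to above; which planes are actually used for wrist-region extraction is not stated here, so the plane indices and the synthetic image below are purely illustrative.

```python
# Bit-plane slicing of an 8-bit image (illustrative; not the paper's pipeline).
import numpy as np

rng = np.random.default_rng(0)
radiograph = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # placeholder image

def bit_plane(img, k):
    """Return bit plane k (0 = least significant) of an 8-bit image as a 0/1 mask."""
    return (img >> k) & 1

# Higher-order planes carry most of the structural information; recombining only
# those planes suppresses low-amplitude illumination noise.
msb_planes = sum(bit_plane(radiograph, k) << k for k in (7, 6, 5))
print(bit_plane(radiograph, 7))
print(msb_planes.dtype, msb_planes.max())
```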

7.
Automatic road damage detection using image processing is an important aspect of road maintenance. It is also a challenging problem due to the inhomogeneity of road damage and the complicated backgrounds in road images. In recent years, methods based on deep convolutional neural networks have been used to address the challenges of road damage detection and classification. In this paper, we propose a new approach to address those challenges. The approach uses densely connected convolutional networks as the backbone of Mask R-CNN to effectively extract image features, a feature pyramid network for combining multi-scale features, a region proposal network to generate road damage regions, and a fully convolutional network to classify the road damage region and refine its bounding box. This method can not only detect and classify road damage but also create a mask of the damaged region. Experimental results show that the proposed approach achieves better results than other existing methods.

8.
Mass detection is a critical process in the examination of mammograms. The shape and texture of the mass are key parameters used in the diagnosis of breast cancer. To recover the shape of the mass, semantic segmentation is more useful than mere object detection or localization. The main challenges involved in mass segmentation include: (a) low signal-to-noise ratio, (b) indiscernible mass boundaries, and (c) a high number of false positives. These problems arise from the significant overlap in the intensities of the normal parenchymal region and the mass region. To address these challenges, a deeply supervised U-Net model (DS U-Net) coupled with dense conditional random fields (CRFs) is proposed. Here, the input images are preprocessed using CLAHE, and a modified encoder-decoder-based deep learning model is used for segmentation. In general, the encoder captures the textural information of the various regions in an input image, whereas the decoder recovers the spatial location of the desired region of interest. Encoder-decoder-based models lack the ability to recover non-conspicuous and spiculated mass boundaries. In the proposed work, deep supervision is integrated with a popular encoder-decoder model (U-Net) to improve the attention of the network toward the boundary of the suspicious regions. The final segmentation map is created as a linear combination of the intermediate feature maps and the output feature map. The dense CRF is then used to fine-tune the segmentation map for the recovery of definite edges. The DS U-Net with dense CRF is evaluated on two publicly available benchmark datasets, CBIS-DDSM and INBREAST, yielding Dice scores of 82.9% for CBIS-DDSM and 79% for INBREAST.
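
A hedged PyTorch sketch of the deep-supervision idea described above: intermediate decoder features are projected to single-channel maps, upsampled to full resolution, and fused as a learnable linear combination; the channel sizes, number of side outputs, and mixing scheme are assumptions for illustration, not the DS U-Net specification.

```python
# Deep-supervision head: side outputs are combined linearly with learnable weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepSupervisionHead(nn.Module):
    def __init__(self, side_channels, out_size):
        super().__init__()
        self.out_size = out_size
        self.side_convs = nn.ModuleList(nn.Conv2d(c, 1, 1) for c in side_channels)
        # One mixing weight per side output (the "linear combination").
        self.mix = nn.Parameter(torch.full((len(side_channels),), 1.0 / len(side_channels)))

    def forward(self, side_feats):
        maps = [F.interpolate(conv(f), size=self.out_size, mode="bilinear",
                              align_corners=False)
                for conv, f in zip(self.side_convs, side_feats)]
        fused = sum(w * m for w, m in zip(self.mix, maps))
        return torch.sigmoid(fused)  # final probability map

# Example: three decoder stages at decreasing resolution.
feats = [torch.randn(1, 256, 32, 32), torch.randn(1, 128, 64, 64),
         torch.randn(1, 64, 128, 128)]
mask = DeepSupervisionHead([256, 128, 64], out_size=(256, 256))(feats)
print(mask.shape)  # torch.Size([1, 1, 256, 256])
```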

9.
Retinal vessel segmentation is of great significance for assisting doctors in the diagnosis of ophthalmological diseases such as diabetic retinopathy, macular degeneration, and glaucoma. This article proposes a new retinal vessel segmentation algorithm using generative adversarial learning with a large receptive field. A generative network maps an input retinal fundus image to a realistic vessel image, while a discriminative network differentiates between images drawn from the database and from the generative network. First, the proposed generative network combines shallow features with upsampled deep features to assemble a more precise vessel image. Second, the residual modules in the proposed generative and discriminative networks make the deep networks easier to optimize. Moreover, the dilated convolutions in the proposed generative network effectively enlarge the receptive field without increasing the amount of computation. Experiments conducted on two publicly available datasets (DRIVE and STARE) achieve segmentation accuracy rates of 95.63% and 96.84%, and average areas under the ROC curve of 98.12% and 98.53%. The performance results show that the proposed automatic retinal vessel segmentation algorithm outperforms state-of-the-art algorithms on many validation metrics. The proposed algorithm can not only detect tiny blood vessels but also capture large-scale, high-level semantic vessel features.
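
A small PyTorch sketch of the dilated-convolution point above: stacking 3x3 convolutions with increasing dilation enlarges the receptive field without adding parameters beyond those of ordinary 3x3 convolutions; the dilation rates and layer count are illustrative rather than the paper's configuration.

```python
# Dilated 3x3 stack: same parameter count as plain 3x3 convs, larger receptive field.
import torch
import torch.nn as nn

def dilated_stack(channels, dilations=(1, 2, 4)):
    layers = []
    for d in dilations:
        # padding = dilation keeps the spatial size unchanged for 3x3 kernels.
        layers += [nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

block = dilated_stack(32)
x = torch.randn(1, 32, 64, 64)
print(block(x).shape)                       # torch.Size([1, 32, 64, 64])

# Receptive field of the stack: each 3x3 layer with dilation d adds 2*d pixels.
rf = 1 + sum(2 * d for d in (1, 2, 4))
print("receptive field:", rf, "x", rf)      # 15 x 15, vs 7 x 7 for three plain convs
```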

10.
Image segmentation is widely applied in biomedical image analysis. However, segmentation of medical images is challenging due to the many image modalities, such as CT, X-ray, MRI, and microscopy, among others. Additional challenges are the high variability, inconsistent regions with missing edges, the absence of texture contrast, and the high noise in the background of biomedical images. Thus, many segmentation approaches have been investigated to address these issues and to transform medical images into meaningful information. During the past decade, finite mixture models have proven to be among the most flexible and popular approaches to data clustering. In this article, we propose a statistical framework for online variational learning of a finite inverted Beta-Liouville (IBL) mixture model for clustering medical images. The online variational learning framework is used to estimate the parameters and the number of mixture components simultaneously, thus decreasing the computational complexity of the model. To this end, we evaluated the proposed algorithm on five different biomedical image datasets, covering optic disc detection and localization in diabetic retinopathy, melanoma lesion detection and segmentation in digital imaging, brain tumor detection, colon cancer detection, and computer-aided detection (CAD) of malaria. Furthermore, we compared the proposed algorithm with three other popular algorithms. Our results show that the proposed online variational learning algorithm for the finite IBL mixture model performs accurately on multiple modalities of medical images and detects disease patterns with high confidence. Computational and statistical approaches like the one presented in this article have a significant impact on medical image analysis and interpretation in both clinical applications and scientific research. We believe that the proposed algorithm has the capacity to address multimodal biomedical image datasets and can be further applied by researchers to analyze disease patterns.

11.
We propose a novel object-of-interest (OOI) segmentation algorithm for various images that is based on human attention and semantic region clustering. As object-based image segmentation is beyond current computer vision techniques, the proposed method segments an image into regions, which are then merged into a semantic object. At the same time, an attention window (AW) is created based on the saliency map and saliency points of the image. Within the AW, a support vector machine is used to select the salient regions, which are then clustered into the OOI using the proposed region merging. Unlike other algorithms, the proposed method allows multiple OOIs to be segmented according to the saliency map.

12.
Objective: To address the large model size and limited segmentation accuracy of convolutional neural networks in semantic segmentation of RGB-D (color-depth) images, a lightweight semantic segmentation network incorporating an efficient channel attention mechanism is proposed. Methods: The network is based on RefineNet and uses depthwise separable convolution to lighten the model, with an efficient channel attention mechanism fused into both the encoder and the decoder. First, the RGB-D image passes through the encoder network with channel attention, which extracts features from the RGB image and the depth image separately; a fusion module then merges the two kinds of features across multiple dimensions; finally, the fused features pass through the lightweight decoder network to produce the segmentation result, which is compared with the results of six networks including RefineNet. Results: Experiments on public datasets commonly used for semantic segmentation show that the proposed model has 90.41 MB of parameters and reaches a mean intersection over union (mIoU) of 45.3%, 1.7% higher than RefineNet. Conclusion: The experimental results show that the proposed network improves semantic segmentation accuracy while greatly reducing the number of parameters.
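
A hedged PyTorch sketch of the two ingredients mentioned above: a depthwise separable convolution (depthwise 3x3 followed by pointwise 1x1) and an ECA-style efficient channel attention gate (a 1-D convolution over per-channel descriptors). Details such as the kernel size are illustrative and may differ from the paper's modules.

```python
# Depthwise separable convolution + efficient channel attention (ECA-style) sketch.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class EfficientChannelAttention(nn.Module):
    def __init__(self, k=3):
        super().__init__()
        # 1-D conv over the channel dimension captures local cross-channel interaction.
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        y = x.mean(dim=(2, 3))                     # (B, C) global descriptor
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # channel-wise gating logits
        return x * torch.sigmoid(y)[:, :, None, None]

x = torch.randn(2, 64, 56, 56)
out = EfficientChannelAttention()(DepthwiseSeparableConv(64, 64)(x))
print(out.shape)  # torch.Size([2, 64, 56, 56])
```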

13.
This paper presents a handwritten document recognition system based on the convolutional neural network technique. Handwritten document recognition is rapidly attracting the attention of researchers because of its promise as an assistive technology for visually impaired users; it is also helpful for automatic data entry systems. For the proposed system, a dataset of English handwritten character images was prepared. The system was trained on a large set of sample data and tested on sample images of user-defined handwritten documents, and multiple experiments produced very good recognition results. The proposed system first performs image pre-processing to prepare the data for training with a convolutional neural network. After this processing, the input document is segmented into lines, words, and characters; the system achieves a character segmentation accuracy of up to 86%. The segmented characters are then passed to a convolutional neural network for recognition. The recognition and segmentation techniques proposed in this paper provide accurate results on the given dataset: the training accuracy of the convolutional neural network reaches up to 93%, decreasing slightly to 90.42% on validation.
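
A hedged sketch of one common way to perform the line-segmentation step mentioned above, using horizontal projection profiles; this is not necessarily the authors' exact method, and the binary "document" below is a synthetic placeholder.

```python
# Projection-profile line segmentation (illustrative; assumptions, not the paper's code).
import numpy as np

def segment_lines(binary, min_ink=1):
    """Return (start, end) row ranges of text lines in a binary image (1 = ink)."""
    profile = binary.sum(axis=1)          # ink pixels per row
    rows = profile >= min_ink
    lines, start = [], None
    for r, has_ink in enumerate(rows):
        if has_ink and start is None:
            start = r
        elif not has_ink and start is not None:
            lines.append((start, r))
            start = None
    if start is not None:
        lines.append((start, len(rows)))
    return lines

doc = np.zeros((20, 40), dtype=int)
doc[2:5, 5:35] = 1      # first "text line"
doc[9:13, 3:30] = 1     # second "text line"
print(segment_lines(doc))  # [(2, 5), (9, 13)]
```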

14.
Fully automatic brain tumor segmentation is one of the critical tasks for magnetic resonance imaging (MRI) images. This work aims to develop an automatic method for brain tumor segmentation using wavelet transformation and a clustering technique. The proposed method uses the discrete wavelet transform (DWT) for pre- and post-processing and fuzzy c-means (FCM) for brain tissue segmentation. Initially, the MRI images are preprocessed by the DWT to sharpen them and enhance the tumor region. This speeds up the FCM clustering, which classifies the tissues into four major classes: gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and background (BG). Abnormality is then detected using a fuzzy symmetric measure on the GM, WM, and CSF classes. Finally, the DWT is applied to the segmented abnormal region of each image to extract the tumor portion. The proposed method used 30 multimodal MRI training datasets from the BraTS2012 database. Several quantitative measures were calculated and compared with existing methods. The proposed method yielded mean similarity index values of 0.73 for the complete tumor, 0.53 for the core tumor, and 0.35 for the enhancing tumor. It gives better results than existing challenging methods on the publicly available training dataset from the MICCAI multimodal brain tumor segmentation challenge, with a minimal processing time for tumor segmentation. © 2016 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 26, 305–314, 2016
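
A minimal NumPy sketch of the fuzzy c-means clustering step described above, grouping pixel intensities into four classes; the DWT sharpening stage is omitted, and the intensity data below are synthetic placeholders rather than MRI values.

```python
# Fuzzy c-means (FCM) on 1-D intensities, four classes (GM, WM, CSF, BG).
import numpy as np

def fcm(x, c=4, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)   # membership-weighted class means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)                  # re-normalize memberships
    return centers, u

intensities = np.concatenate([np.random.default_rng(1).normal(mu, 5, 500)
                              for mu in (10, 60, 120, 200)])
centers, u = fcm(intensities)
labels = u.argmax(axis=0)                   # hard assignment to the 4 tissue classes
print(np.sort(centers).round(1))
print(np.bincount(labels))                  # roughly 500 pixels per class
```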

15.
Most image segmentation techniques efficiently segment images with prominent edges but are less effective for images with low-frequency content and overlapping regions of homogeneous intensities. A recently proposed selective segmentation model often works well, but not for such challenging images. In this paper, we introduce a new model using the coefficient of variation as a fidelity term, and our test results show that it performs much better in these challenging cases.

16.
Dietary assessment is emerging as a system for evaluating a person's food intake. In this paper, multiple-hypothesis image segmentation and a feed-forward neural network classifier are proposed for dietary assessment to enhance performance. Initially, segmentation is applied to the input image to determine the regions where a particular food item is located, using salient region detection, multi-scale segmentation, and fast rejection. Then, the significant features of the food items are extracted by global and local feature extraction methods. Once the features are obtained, classification is performed for each segmented region using a feed-forward neural network model. Finally, the calorie value is computed with the aid of (i) the food area volume and (ii) a calorie and nutrition measure based on the mass value. The proposed method attains an accuracy of 96%, which provides better classification performance.

17.
In this article, we propose a novel model to overcome the drawbacks of the modified Chan–Vese (MCV) model. Our model is devoted to finding an optimal partition of inhomogeneous regions accurately and with computational efficiency. The MCV model was proposed on the concept of using one level-set function per region; it needs fewer iterations and improves the efficiency of image segmentation compared with the multiphase Chan–Vese model. The MCV model, however, is highly dependent on the placement of the initial curves and often leads to erroneous segmentations on images with intensity inhomogeneity. In our model, to eliminate the effect of background information on the curve evolution and to speed it up, we first use the k-means algorithm to presegment the image and obtain the initial curves, and then add local image information to the total energy functional of the MCV model to deal with the intensity inhomogeneity. Extensive experiments were conducted, and the segmentation results on homogeneous multiphase images verify that the proposed method has better accuracy and efficiency than the MCV model. Moreover, we show results on challenging multiphase inhomogeneous images to illustrate the robust and accurate segmentation that is possible with this novel model. © 2012 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 22, 103–113, 2012
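
A hedged sketch of the k-means pre-segmentation step described above: pixel intensities are clustered with k-means and the resulting label map supplies the initial curves (region boundaries) for the subsequent level-set evolution, which is not shown here. The image and the number of phases are synthetic placeholders.

```python
# k-means pre-segmentation to obtain initial regions/curves (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = np.block([[rng.normal(0.2, 0.05, (32, 32)), rng.normal(0.5, 0.05, (32, 32))],
                  [rng.normal(0.5, 0.05, (32, 32)), rng.normal(0.8, 0.05, (32, 32))]])

k = 3  # number of phases / level-set regions (assumed)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    image.reshape(-1, 1)).reshape(image.shape)

# Each label region yields an initial contour; a signed distance function of each
# region boundary would then initialise one level-set function per region.
for lbl in range(k):
    print(f"region {lbl}: {int((labels == lbl).sum())} pixels")
```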

18.
Classification of brain neoplasm images is one of the most challenging research areas in the field of medical image processing. The main objective of this study is to design a brain neoplasm classification system that can be trained using MR images of various sizes from different institutions. The proposed method is a generalized classification system; it can be used in a single institution or in a number of institutions at the same time, without any restriction on image size. The generalized and unbiased capability of the proposed method can bring researchers onto a single platform to work on standard forms of computer-aided diagnosis systems with more efficient diagnostic capabilities. In this study, a moveable rectangular window of suitable size is used between the segmentation and feature extraction stages. A semiautomatic, localized region-based active contour method is used for segmentation of the brain neoplasm region. The discrete wavelet transform is used for feature extraction, principal component analysis for feature selection, and a support vector machine as the classifier. For the first time, MR images of two sizes and from different institutions are used in the training and testing of a brain neoplasm classifier. Three glioma grades were classified using 92 MR images. The proposed method achieved a highest accuracy of 88.26%, a highest sensitivity of 92.23%, and a maximum specificity of 93.93%. In addition, the proposed method is computationally less complex, requires shorter processing time, and is more efficient in terms of storage capacity.

19.
In medical imaging, segmenting brain tumors is a vital task that provides a way toward early diagnosis and treatment. Manual segmentation of brain tumors in magnetic resonance (MR) images is time-consuming and challenging; hence, there is a need for a computer-aided brain tumor segmentation approach. Using deep learning algorithms, a robust brain tumor segmentation approach is implemented by integrating a convolutional neural network (CNN) and multiple-kernel K-means clustering (MKKMC). In this proposed CNN-MKKMC approach, the classification of MR images into normal and abnormal is performed by the CNN algorithm. Next, the MKKMC algorithm is employed to segment the brain tumor from the abnormal brain image. The proposed CNN-MKKMC algorithm is evaluated both visually and objectively in terms of accuracy, sensitivity, and specificity against existing segmentation methods. The experimental results demonstrate that the proposed CNN-MKKMC approach yields better accuracy in segmenting brain tumors at a lower time cost.

20.
The precise detection and segmentation of the pectoral muscle area in mediolateral oblique (MLO) views is an essential step in the development of a computer-aided diagnosis system for assessing malignant breast lesions or parenchyma. The goal of this article is to develop a robust and fully automatic algorithm for pectoral muscle segmentation from mammography images. This paper presents an image enhancement approach that improves the quality of mammogram scans, together with a fully convolutional network architecture enhanced with residual connections for automatic segmentation of the pectoral muscle from the MLO views of a digital mammogram. For this purpose, the model is trained and tested on three different mammogram datasets: MIAS, INBREAST, and DDSM. The ground truth labels of the pectoral muscle were identified under the supervision of experienced radiologists. Ten-fold cross-validation was used for training and testing. The proposed model was compared with a baseline U-Net-based architecture. Finally, a postprocessing step was used to find the actual boundary of the pectoral muscle. The presented architecture achieved a mean intersection over union (IoU) of 97%, a Dice similarity coefficient (DSC) of 96%, and 98% accuracy on the testing data. The proposed architecture, which segments the pectoral muscle from MLO mammogram views with high accuracy and Dice score, can readily be carried over to the breast tumor segmentation problem.
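
For reference, a small NumPy sketch of how the two overlap metrics reported above (IoU and the Dice similarity coefficient) are computed from binary masks; the masks here are random placeholders rather than mammogram segmentations.

```python
# IoU and Dice similarity coefficient from binary masks.
import numpy as np

def iou_and_dice(pred, target):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice

rng = np.random.default_rng(0)
pred = rng.random((256, 256)) > 0.5
target = rng.random((256, 256)) > 0.5
print("IoU = %.3f, Dice = %.3f" % iou_and_dice(pred, target))
```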
