A total of 20 similar documents were retrieved (search time: 0 ms)
1.
Balakumaresan Ragupathy Manivannan Karunakaran 《International journal of imaging systems and technology》2021,31(1):118-127
In medical imaging, brain tumor segmentation is a vital task that enables early diagnosis and treatment. Manual segmentation of brain tumors in magnetic resonance (MR) images is time-consuming and challenging, so a computer-aided brain tumor segmentation approach is needed. Using deep learning algorithms, a robust brain tumor segmentation approach is implemented by integrating a convolutional neural network (CNN) and multiple-kernel K-means clustering (MKKMC). In the proposed CNN-MKKMC approach, the CNN classifies MR images as normal or abnormal; the MKKMC algorithm then segments the brain tumor from the abnormal brain image. The proposed CNN-MKKMC algorithm is evaluated both visually and objectively, in terms of accuracy, sensitivity, and specificity, against existing segmentation methods. The experimental results demonstrate that the proposed CNN-MKKMC approach yields better segmentation accuracy at lower time cost.
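For illustration, a minimal Python sketch of the two-stage idea described above, with ordinary K-means from scikit-learn standing in for the multiple-kernel variant (MKKMC) and an assumed 128×128 single-channel slice size; this is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a small Keras CNN flags a slice as
# normal/abnormal, then plain K-means (standing in for MKKMC) clusters pixel
# intensities of an abnormal slice into a candidate tumor mask.
import numpy as np
from sklearn.cluster import KMeans
from tensorflow.keras import layers, models

def build_classifier(input_shape=(128, 128, 1)):
    # Plain CNN for normal-vs-abnormal MR slice classification.
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

def segment_tumor(slice_2d, n_clusters=3):
    # Cluster pixel intensities; treat the brightest cluster as the tumor candidate.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(slice_2d.reshape(-1, 1))
    tumor_cluster = np.argmax(km.cluster_centers_.ravel())
    return (labels == tumor_cluster).reshape(slice_2d.shape)
```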
2.
Ngangbam Herojit Singh;N. R. Gladiss Merlin;R. Thandaiah Prabu;Deepak Gupta;Meshal Alharbi; 《International journal of imaging systems and technology》2024,34(1):e22951
Brain tumors are still diagnosed and classified based on histopathological examination of biopsy samples. This approach requires extra effort from the user, takes too long, and is prone to error. These limitations underline the need for a fully automated deep learning system for the multi-classification of brain tumors. To facilitate early detection, this study employs convolutional neural networks (CNNs) to multi-classify brain tumors. We present three distinct CNN models for three separate classification tasks. The first CNN model categorizes brain tumors correctly 99.74% of the time. The second CNN model is 96.27% accurate in differentiating between normal, glioma, meningioma, pituitary, and metastatic brain tumors. The third CNN model distinguishes between Grade II, III, and IV brain tumors 99.18% of the time. The Hybrid Particle Swarm Grey Wolf Optimization (HPSGWO) technique is used to quickly and accurately determine optimal values for the most important hyperparameters of all CNN models, fine-tuning them for optimal classification performance. The results are compared with standard existing CNN models across a range of performance measures. The proposed models are trained on publicly available large clinical datasets. Clinicians and radiologists could use the proposed CNN models to verify their initial multi-classification of brain tumors.
3.
4.
Dan Huang;Luyi Qiu;Zifeng Liu;Yi Ding;Mingsheng Cao; 《International journal of imaging systems and technology》2024,34(4):e23135
In clinical diagnosis and surgical planning, extracting brain tumors from magnetic resonance imaging (MRI) is very important. Nevertheless, given the high variability and class imbalance of brain tumor datasets, designing a deep neural network that segments brain tumors accurately still challenges researchers. Moreover, as the number of convolutional layers increases, the deep feature maps lose the fine-grained spatial information that is useful for segmenting brain tumors from MRI. To address this problem, a residual multilevel and multiscale framework (Res-MulFra) for brain tumor segmentation is proposed in this article. In the proposed framework, the multilevel aspect is realized by stacking the proposed RMFM-based segmentation network (RMFMSegNet), which is mainly used to leverage prior knowledge for better brain tumor segmentation performance. The multiscale aspect is implemented by the RMFMSegNet, which includes both parallel and serial multibranch structures and is mainly designed to obtain multiscale feature information. Moreover, a residual multiscale feature fusion module (RMFM) is proposed to effectively combine contextual feature information from various receptive fields, and a channel attention module is adopted to further improve segmentation performance. Extensive experiments on the BraTS dataset and comparisons with other advanced methods verify the effectiveness of Res-MulFra. On the BraTS2015 testing dataset, the proposed method achieves Dice values of 0.85 for the complete region, 0.72 for the core region, and 0.62 for the enhancing region.
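A hedged PyTorch sketch of a residual multiscale fusion block in the spirit of the RMFM described above (parallel branches with different receptive fields, fusion, channel attention, residual connection); the branch choices and channel-reduction ratio are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Parallel branches with 1x1, 3x3, and dilated 3x3 convolutions (different receptive fields).
        self.branch1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch_dil = nn.Conv2d(channels, channels, kernel_size=3,
                                    padding=2, dilation=2)
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        # Simple squeeze-and-excitation style channel attention.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        fused = self.fuse(torch.cat(
            [self.branch1(x), self.branch3(x), self.branch_dil(x)], dim=1))
        return x + fused * self.attn(fused)   # residual connection
```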
5.
Gurinderjeet Kaur Prashant Singh Rana Vinay Arora 《International journal of imaging systems and technology》2023,33(1):340-361
To propose and implement an automated machine learning (ML) based methodology to predict the overall survival of glioblastoma multiforme (GBM) patients. In the proposed methodology, we used a deep learning (DL) based 3D U-shaped convolutional neural network, an encoder-decoder architecture, to segment the brain tumor. Feature extraction was then performed on the segmented and raw magnetic resonance imaging (MRI) scans using a pre-trained 2D residual neural network. The dimension-reduced principal components were integrated with clinical data and handcrafted features of tumor subregions to compare the performance of regression-based automated ML techniques. The proposed methodology achieved a mean squared error (MSE) of 87 067.328, a median squared error of 30 915.66, and a Spearman correlation of 0.326 for survival prediction (SP) on the validation set of the Multimodal Brain Tumor Segmentation 2020 dataset, an MSE far better than existing automated techniques for the same patients. Automated SP of GBM patients is a crucial topic with clear clinical relevance. The results show that DL-based feature extraction using pre-trained 2D networks is better than many heavily trained 3D and 2D prediction models built from scratch, and that the ensemble approach produces better results than single models. According to the feature importance plots presented in this work, the most influential feature affecting GBM patients' survival is patient age, and the most critical MRI modality for SP is T2 fluid-attenuated inversion recovery.
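A sketch of the feature-extraction and regression stages under stated assumptions: a pretrained 2D ResNet50 (the paper's exact residual network is not specified here), PCA to 32 components, and scikit-learn's GradientBoostingRegressor standing in for the automated ML regressors that were compared.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def deep_features(slices_rgb):
    # slices_rgb: (n, 224, 224, 3) array of MRI slices replicated to 3 channels.
    return backbone.predict(preprocess_input(slices_rgb), verbose=0)

def fit_survival_model(slices_rgb, age, survival_days):
    feats = deep_features(slices_rgb)                  # 2048-d deep descriptors
    reduced = PCA(n_components=32).fit_transform(feats)
    X = np.hstack([reduced, age.reshape(-1, 1)])       # fuse with clinical age
    return GradientBoostingRegressor().fit(X, survival_days)
```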
6.
Vehicle type classification is considered a central part of an intelligent traffic system. In recent years, deep learning has played a vital role in object detection in many computer vision tasks. To learn high-level deep features and semantics, deep learning offers powerful tools for problems that traditional architectures address with handcrafted feature-extraction techniques. Unlike algorithms that use handcrafted visual features, a convolutional neural network is able to automatically learn good features for vehicle type classification. This study develops an optimized automatic surveillance and auditing system to detect and classify vehicles of different categories. Transfer learning is used to quickly learn features from a small number of training images of vehicle frontal views. The proposed system employs extensive data-augmentation techniques for effective training while avoiding the problem of data shortage. To capture rich and discriminative information about vehicles, the convolutional neural network is fine-tuned for vehicle type classification using the augmented data. The network extracts feature maps from the entire dataset and generates a label for each object (vehicle) in an image, which aids vehicle-type detection and classification. Experimental results on a public dataset and our own dataset demonstrate that the proposed method is effective in detecting and classifying different types of vehicles, achieving 96.04% accuracy on vehicle type classification.
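A minimal Keras transfer-learning sketch of the pipeline described (frozen pretrained backbone, augmentation layers, new softmax head); the MobileNetV2 backbone, augmentation settings, and six-class head are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Data augmentation applied on the fly during training.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                          # transfer learning: freeze the backbone

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    augment,
    layers.Rescaling(1.0 / 127.5, offset=-1.0), # scale pixels to [-1, 1] for MobileNetV2
    base,
    layers.Dense(6, activation="softmax"),      # six vehicle categories (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```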
7.
3D-CNNs extract spatiotemporal features from video well but demand heavy computation and memory. To address this, this paper designs an efficient 3D convolution block to replace the computationally expensive 3×3×3 convolutional layers, and proposes a densely connected residual network fused with these 3D convolution blocks (3D-EDRNs) for human action recognition. The efficient 3D convolution block combines a 1×3×3 convolutional layer that captures spatial features of the video with a 3×1×1 convolutional layer that captures temporal features. The efficient 3...
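The factorized block described above translates directly into PyTorch; the batch-norm and ReLU placement in this sketch is an assumption.

```python
import torch.nn as nn

class Efficient3DBlock(nn.Module):
    """1x3x3 spatial convolution followed by 3x1x1 temporal convolution,
    replacing a full 3x3x3 layer to save computation and memory."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1), bias=False)
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0), bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                      # x: (N, C, T, H, W)
        return self.relu(self.bn(self.temporal(self.spatial(x))))
```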
8.
Objective: To address the strong reliance on manual inspection and low efficiency of traditional nonwoven fabric defect classification, a convolutional neural network classification method that meets factory requirements is proposed. Methods: First, a sample library of 70,000 nonwoven fabric images covering five classes (dirty spots, wrinkles, breaks, missing yarn, and defect-free) was built. Next, a neural network with convolutional and pooling layers of varying numbers of neurons was constructed; the weights were updated layer by layer with the backpropagation algorithm, and the loss function was minimized by gradient descent. Finally, a Softmax classifier performed the defect classification. Results: A 12-layer convolutional neural network was built and tested on 20,000 samples; accuracy reached 100% on defect-free samples and above 95% on every defect class, with detection time within 35 ms. Conclusion: The method meets the requirements of real-time classification of nonwoven fabric defects on industrial production lines.
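An illustrative Keras sketch of a small softmax CNN trained with gradient descent for the five fabric classes; the paper's exact 12-layer configuration and input size are not reproduced here.

```python
from tensorflow.keras import layers, models, optimizers

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    # Five classes: dirty spot, wrinkle, break, missing yarn, defect-free.
    layers.Dense(5, activation="softmax"),
])
# Backpropagation with plain gradient descent and a softmax cross-entropy loss.
model.compile(optimizer=optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```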
9.
Emotion recognition systems are helpful in human–machine interaction and intelligent medical applications. The electroencephalogram (EEG) is closely related to the activity of the brain's central nervous system and, compared with other signals, is more closely associated with emotional activity, so studying EEG-based emotion recognition is essential. A common problem in EEG-based emotion recognition is that individual emotion classification results vary greatly under the same recognition scheme, which hinders the engineering application of emotion recognition. To improve the overall recognition rate of the emotion classification system, we propose the CSP_VAR_CNN (CVC) emotion recognition system, which classifies emotions from EEG signals with a convolutional neural network (CNN). First, the system uses common spatial patterns (CSP) to reduce the EEG data; the standardized variance (VAR) is then selected as the parameter to form the emotion feature vectors; finally, a 5-layer CNN model classifies the EEG signal. The classification results show that this system improves the overall emotion recognition rate: the variance is reduced to 0.0067, a decrease of 64% compared with the CSP_VAR_SVM (CVS) system, while the average accuracy reaches 69.84%, which is 0.79% higher than that of the CVS system. The overall emotion recognition rate of the proposed system is therefore both more stable and higher.
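A sketch of the CSP→variance→classifier chain, using MNE's CSP implementation and a small dense network in place of the paper's 5-layer CNN; the component count and network size are assumptions.

```python
import numpy as np
from mne.decoding import CSP
from tensorflow.keras import layers, models

def extract_features(epochs, labels, n_components=8):
    # epochs: (n_trials, n_channels, n_samples); labels: (n_trials,)
    csp = CSP(n_components=n_components, transform_into="csp_space")
    filtered = csp.fit_transform(epochs, labels)   # spatially filtered signals
    return np.var(filtered, axis=2)                # per-component variance features

def build_classifier(n_components=8, n_classes=2):
    return models.Sequential([
        layers.Input(shape=(n_components,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
```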
10.
A brain tumor is a formation of abnormal cells in the brain and can be benign or malignant. The main diagnostic methods for brain tumors are plain X-ray film, magnetic resonance imaging (MRI), and so on; however, these manual diagnosis methods are easily affected by external factors. Researchers have made impressive progress in brain tumor classification using convolutional neural networks (CNNs). However, some problems remain: (i) CNNs have many parameters and require heavy computation; (ii) brain tumor datasets are relatively small, which may lead to overfitting. In this paper, our team proposes a novel model (RBEBT) for the automatic classification of brain tumors. We use a fine-tuned ResNet18 to extract features from brain tumor images. Unlike traditional CNN models, the RBEBT uses a randomized neural network (RNN) as the classifier, and the bat algorithm (BA) is selected to optimize the RNN parameters. Five-fold cross-validation is used to verify the superiority of the RBEBT. The accuracy (ACC), specificity (SPE), precision (PRE), sensitivity (SEN), and F1-score (F1) are 99.00%, 95.00%, 99.00%, 100.00%, and 100.00%, respectively. With classification performance above 95%, the RBEBT is an effective model for classifying brain tumors.
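A hedged sketch of the pipeline: torchvision's pretrained ResNet18 as a frozen 512-dimensional feature extractor, with an extreme-learning-machine-style randomized network (random hidden weights, closed-form ridge readout) standing in for the paper's bat-algorithm-tuned RNN classifier.

```python
import numpy as np
import torch
import torchvision.models as tvm

resnet = tvm.resnet18(weights=tvm.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()          # expose the 512-d penultimate features
resnet.eval()

@torch.no_grad()
def features(batch):                     # batch: (N, 3, 224, 224), ImageNet-normalized
    return resnet(batch).numpy()

class RandomizedClassifier:
    """Randomized network: fixed random hidden layer, ridge-regression readout."""
    def __init__(self, hidden=512, ridge=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((512, hidden))   # matches ResNet18 feature size
        self.ridge = ridge

    def fit(self, X, y_onehot):
        H = np.tanh(X @ self.W)
        self.beta = np.linalg.solve(H.T @ H + self.ridge * np.eye(H.shape[1]),
                                    H.T @ y_onehot)
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W) @ self.beta, axis=1)
```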
11.
To address noise suppression in seismic exploration, a convolutional neural network model suited to classifying and identifying seismic wavelets is constructed. First, the activation function, convolution kernel size, and normalization layers of the model are designed; then the constructed network extracts features from time-frequency spectrograms of seismic signals; finally, classification and identification of different types of noisy seismic signals are realized. Experimental results show that the model achieves high classification and recognition rates and good robustness to interference, ...
12.
Objective: To address the many types of coating defects on lithium battery electrode sheets, the low classification accuracy of traditional methods, and their strong reliance on manual inspection, an automatic coating-defect classification algorithm based on a convolutional neural network is proposed. Methods: The network structure and model parameters are first optimized; skip connections are then added to the network so that the multi-scale features extracted by dilated convolutions are fused with high-level features to capture more defect features, and the LeakyReLU (Leaky Rectified Linear Unit) activation function is adopted to retain negative-valued feature information in the images; finally, the model is trained on the constructed dataset to classify coating defects accurately. Results: Experiments show that the method reaches a recognition accuracy of 99.34% with an average detection time of 51 ms. Conclusion: The improved method accurately classifies 18 types of coating defects on lithium battery electrode sheets and meets the requirements of real-time classification in industrial production.
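A hedged PyTorch sketch of the ingredients named above (dilated convolutions for multi-scale features, concatenation-style skip fusion with deeper features, LeakyReLU activations); channel counts and the 18-class head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CoatingDefectNet(nn.Module):
    def __init__(self, n_classes=18):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.LeakyReLU(0.1),
            nn.MaxPool2d(2))
        # Dilated branch captures multi-scale context.
        self.dilated = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.LeakyReLU(0.1))
        # Deeper branch extracts high-level features.
        self.deep = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(0.1))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x):
        x = self.stem(x)
        # Skip-style fusion: concatenate multi-scale and deep features.
        fused = torch.cat([self.dilated(x), self.deep(x)], dim=1)
        return self.head(fused)
```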
13.
R. Harikumar B. Vinoth kumar 《International journal of imaging systems and technology》2015,25(1):33-40
In this article, we analyze the performance of artificial neural networks in classifying medical images using wavelets as the feature extractor. This work classifies mammographic, MRI, CT, and ultrasound images as either normal or abnormal. We tested the proposed approach using 50 mammogram images (13 normal and 37 abnormal), 24 MRI brain images (9 normal and 15 abnormal), 33 CT images (11 normal and 22 abnormal), and 20 ultrasound images (6 normal and 14 abnormal). Four kinds of neural network models were chosen for study: BPN (back propagation network), Hopfield, RBF (radial basis function), and PNN (probabilistic neural network). To improve diagnostic accuracy, features extracted using wavelets such as Haar, Daubechies (db2, db4, and db8), Biorthogonal, and Coiflet are given as input to the neural network models. A good classification rate of 96% was achieved with the RBF network when Daubechies (db4) wavelet-based feature extraction was used, and the classification rate remained consistently high under the RBF network for all the datasets considered. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 33–40, 2015
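A sketch of wavelet-energy feature extraction with PyWavelets; an RBF-kernel SVM from scikit-learn stands in for the paper's RBF neural network, which scikit-learn does not provide, so this is an illustrative substitution rather than the authors' classifier.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(image, wavelet="db4", level=2):
    # 2D discrete wavelet decomposition; use mean absolute value of each sub-band.
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]               # approximation band
    for detail_level in coeffs[1:]:                    # (cH, cV, cD) per level
        feats.extend(np.mean(np.abs(band)) for band in detail_level)
    return np.asarray(feats)

def train(images, labels):
    X = np.stack([wavelet_energy_features(img) for img in images])
    return SVC(kernel="rbf", gamma="scale").fit(X, labels)
```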
14.
Recently, the effectiveness of neural networks, especially convolutional neural networks, has been validated in natural language processing, where sentiment classification of online reviews is an important and challenging task. Existing convolutional neural networks extract the important features of a sentence without preserving local features or their sequence, so these models do not perform well, especially on transition sentences. To this end, we propose a Piecewise Pooling Convolutional Neural Network (PPCNN) for sentiment classification. First, with a sentence represented by word vectors, a convolution operation produces the convolution feature map vectors. Second, these vectors are segmented according to the positions of transition words in the sentence. Third, the most significant feature of each local segment is extracted by max pooling, so that features of different aspects can be extracted while their relative sequence is preserved. Finally, after dropout, a softmax classifier is trained for sentiment classification. Experimental results show that the proposed PPCNN is effective and superior to other baseline methods, especially on datasets with transition sentences.
15.
Muhammad Javaid Iqbal Muhammad Waseem Iqbal Muhammad Anwar Muhammad Murad Khan Abd Jabar Nazimi Mohammad Nazir Ahmad 《Computers, Materials & Continua》2023,74(3):5267-5281
A brain tumour is a mass in which tissues become old or damaged but do not die or vacate their space; such masses are mainly malignant. These tissues must die so that new tissues can be born and take their place. Tumour segmentation is a complex and time-consuming problem because of variation in tumour size, shape, and appearance. Manually finding such masses by analyzing Magnetic Resonance Images (MRI) is a demanding task for experts and radiologists, who cannot work through large volumes of images simultaneously, and overwhelming image analysis leads to many errors. The main objective of this research is the segmentation of tumours in brain MRI images with the help of digital image processing and deep learning approaches. This study proposes an automatic model for tumour segmentation in MRI images. The proposed model has a few significant steps. First, a pre-processing method converts the Neuroimaging Informatics Technology Initiative (NIfTI) volumes of the whole dataset into 3D NumPy arrays. Second, the model adopts the U-Net deep learning segmentation algorithm with an improved layered structure and updated parameters. Third, the model uses the state-of-the-art Medical Image Computing and Computer-Assisted Intervention (MICCAI) BraTS 2018 dataset with the MRI modalities T1, T1Gd, T2, and fluid-attenuated inversion recovery (FLAIR). Tumour types in the MRI images are classified according to the tumour masses, labelled following the standard convention: enhancing tumour (label 4), edema (label 2), necrotic and non-enhancing tumour core (label 1), and the remaining region (label 0), from which the whole tumour, tumour core, and active tumour regions are derived. The evaluated model obtains Dice Coefficient (DSC) values for High-grade glioma (HGG) volumes of 0.9795, 0.9855, and 0.9793 on test set-a, test set-b, and test set-c, respectively, and a DSC of 0.9950 for Low-grade glioma (LGG) volumes on the test set, which shows that the proposed model achieves significant results in segmenting tumours in MRI using deep learning approaches. The model is fully automatic and can be deployed in clinics, where human experts spend considerable time identifying the tumorous region of a brain MRI, allowing tumour segmentation to proceed rapidly.
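A minimal preprocessing sketch for the NIfTI-to-NumPy step, assuming the usual BraTS file naming (caseID_modality.nii.gz) and per-volume z-score normalization over brain voxels; the U-Net itself is not reproduced here.

```python
import numpy as np
import nibabel as nib

MODALITIES = ("t1", "t1ce", "t2", "flair")

def load_case(case_dir, case_id):
    # Load the four MRI modalities of one BraTS case and stack them as channels.
    volumes = []
    for m in MODALITIES:
        vol = nib.load(f"{case_dir}/{case_id}_{m}.nii.gz").get_fdata().astype(np.float32)
        brain = vol[vol > 0]                              # normalize over brain voxels only
        vol = (vol - brain.mean()) / (brain.std() + 1e-8)
        volumes.append(vol)
    return np.stack(volumes, axis=0)                      # (4, H, W, D) NumPy array
```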
16.
Sourav Kumar Tanwar;Prakash Choudhary; Priyanka;Tarun Agrawal; 《International journal of imaging systems and technology》2024,34(4):e23149
Classifying fetal ultrasound images into different anatomical categories, such as the abdomen, brain, femur, thorax, and so forth, can contribute to the early identification of potential anomalies or dangers during prenatal care, whereas overlooking major abnormalities can lead to fetal death or permanent disability. This article proposes a novel hybrid capsule network architecture for identifying fetal ultrasound images. The proposed architecture increases the precision of fetal image categorization by combining the benefits of a capsule network with a convolutional neural network. The proposed hybrid model surpasses conventional convolutional-network-based techniques, with an overall accuracy of 0.989 when tested on a publicly accessible dataset of prenatal ultrasound images. The results indicate that the proposed hybrid architecture is a promising approach for precisely and consistently classifying fetal ultrasound images, with potential uses in clinical settings.
17.
Phong Thanh Nguyen Vy Dang Bich Huynh Khoa Dang Vo Phuong Thanh Phan Eunmok Yang Gyanendra Prasad Joshi 《Computers, Materials & Continua》2021,66(3):2815-2830
Diabetic Retinopathy (DR) is a significant blinding disease that poses a serious and rapidly growing threat to human vision. Classification and severity grading of DR are difficult. Traditionally, grading depends on ophthalmoscopically visible symptoms of increasing severity, ranked on a stepwise scale from no retinopathy to various levels of DR severity. This paper presents an ensemble of Orthogonal Learning Particle Swarm Optimization (OPSO) algorithm-based Convolutional Neural Network (CNN) models, EOPSO-CNN, to perform DR detection and grading. The proposed EOPSO-CNN model involves three main processes: preprocessing, feature extraction, and classification. The model first applies a preprocessing stage that removes noise from the input image; the watershed algorithm then segments the preprocessed images; feature extraction follows, leveraging the EOPSO-CNN model; finally, the extracted feature vectors are provided to a Decision Tree (DT) classifier to classify the DR images. Experiments were carried out on the Messidor DR dataset, and the results showed a considerable performance advantage of the proposed method over the compared methods, with maximum classification accuracy, sensitivity, and specificity of 98.47%, 96.43%, and 99.02%, respectively.
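An illustrative sketch of two stages the abstract names, built from standard tools: scikit-image's watershed for segmentation and scikit-learn's decision tree for the final classification; the OPSO-tuned CNN feature extractor and the marker thresholds are assumptions.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed
from sklearn.tree import DecisionTreeClassifier

def segment_fundus(gray_image):
    # Watershed on the gradient image, with crude intensity-based markers
    # (assumes a grayscale fundus image scaled to [0, 1]; thresholds are illustrative).
    gradient = sobel(gray_image)
    markers = np.zeros_like(gray_image, dtype=np.int32)
    markers[gray_image < 0.2] = 1        # background marker
    markers[gray_image > 0.8] = 2        # bright lesion-candidate marker
    return watershed(gradient, markers)

def train_grader(feature_vectors, grades):
    # Decision tree on extracted feature vectors (here assumed precomputed by a CNN).
    return DecisionTreeClassifier(max_depth=8, random_state=0).fit(feature_vectors, grades)
```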
18.
With the rapid development of computer technology, millions of images are produced every day by different sources. Efficiently processing these images and accurately discerning the scenes in them is an important but difficult task. In this paper, we propose a novel supervised learning framework based on adaptive binary coding for scene classification. Specifically, we first extract high-level features of the images under consideration using available models trained on public datasets. We then design a binary encoding method, a form of one-hot encoding, to make the feature representation more efficient. Thanks to the proposed adaptive binary coding, our method requires no time to train or fine-tune a deep network and can effectively handle different applications. Experimental results on three public datasets, namely the UIUC sports event dataset, the MIT Indoor dataset, and the UC Merced dataset, with three different classifiers demonstrate that our method is superior to state-of-the-art methods by large margins.
19.
Jainy Sachdeva;Deepanshu Sharma;Chirag Kamal Ahuja;Arnavdeep Singh; 《International journal of imaging systems and technology》2024,34(5):e23170
A multiscale feature fusion network, Efficient-Residual Net, is proposed for classifying brain tumors on magnetic resonance images with solid or cystic masses, inadequate borders, unpredictable cystic and necrotic regions, and variable heterogeneity. Efficient-Residual Net is built by efficaciously amalgamating the features of two deep convolutional neural networks, ResNet50 and EfficientNetB0. The skip connections in ResNet50 considerably reduce the chances of vanishing gradients and overfitting, allowing a larger number of features to be learned from the input MR images. In addition, EfficientNetB0 uses a compound scaling coefficient to uniformly scale the dimensions of the network, such as depth, width, and resolution. The hybrid model improves classification results on brain tumors with similar morphology and is tested on three internet repository datasets, namely Kaggle, BraTS 2018, and BraTS 2021, as well as a real-time dataset from the Post Graduate Institute of Medical Education and Research (PGIMER), Chandigarh. The proposed system delivers overall accuracies of 96.40%, 97.59%, 97.75%, and 97.99% on the four datasets, respectively, and reassuring results of 98%-99% on other statistical parameters such as precision, negative predictive value, and F1 score. A cloud-based web page was also created, using the Django framework in the Python programming language, for accurate prediction and classification of different types of brain tumors.
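A hedged Keras sketch of the fusion idea: globally pooled features from pretrained ResNet50 and EfficientNetB0 concatenated into one classification head; per-backbone input preprocessing is omitted for brevity, and the dense sizes and four-class head are assumptions.

```python
from tensorflow.keras import applications, layers, models

inp = layers.Input(shape=(224, 224, 3))
resnet = applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")
effnet = applications.EfficientNetB0(include_top=False, weights="imagenet", pooling="avg")

# Concatenate the 2048-d ResNet50 and 1280-d EfficientNetB0 pooled features.
fused = layers.Concatenate()([resnet(inp), effnet(inp)])
x = layers.Dense(256, activation="relu")(fused)
out = layers.Dense(4, activation="softmax")(x)   # e.g., glioma/meningioma/pituitary/no tumour (assumed)
model = models.Model(inputs=inp, outputs=out)
```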
20.
Saad Awadh Alanazi 《Computers, Materials & Continua》2022,72(1):37-55
Melanoma, also called malignant melanoma, is a form of skin cancer triggered by an abnormal proliferation of the pigment-producing cells that give the skin its color. It is one of the most dangerous skin diseases globally, and skin lesions are considered a serious condition. Early recognition and detection based on dermoscopy are fundamental for melanoma treatment, and early detection of melanoma using dermoscopy images significantly improves survival rates. At the same time, the precision of diagnosis depends on well-experienced dermatologists, and precise melanoma recognition is incredibly hard due to several factors: low contrast between lesions and the surrounding skin, visual similarity between melanoma and non-melanoma lesions, and so on. Thus, reliable automatic detection of skin tumors is critical for pathologists' effectiveness and precision. To address this issue, numerous research centers around the world are creating autonomous image-processing frameworks. In this article we apply deep learning methods to significant tasks in skin lesion image processing: we provide a Convolutional Neural Network (CNN) based framework using an Inception-v3 (INCP-v3) melanoma detection scheme and achieve very high precision (98.96%) for melanoma detection. The CNN classification framework is built using Keras with a TensorFlow backend (in Python) and uses a Transfer Learning (TL) approach. It is trained on data gathered from the International Skin Imaging Collaboration (ISIC) repositories. The experiments show that the suggested technique outperforms state-of-the-art methods in terms of predictive performance.
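A minimal Keras sketch of the transfer-learning recipe the abstract outlines (Inception-v3 backbone with frozen ImageNet weights and a new binary head); the dropout rate and head size are assumptions.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                          # transfer learning: reuse ImageNet features

model = models.Sequential([
    layers.Input(shape=(299, 299, 3)),
    layers.Rescaling(1.0 / 127.5, offset=-1.0), # Inception-v3 expects inputs in [-1, 1]
    base,
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),      # melanoma probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```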