Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
Abnormal growth of brain tissue is the underlying cause of a brain tumor. Diagnosing a brain tumor at an early stage is one of the key steps in saving a patient's life. Manual segmentation of brain tumor magnetic resonance images (MRIs) is time-consuming, and results vary significantly in low-level features. To address this issue, we propose a ResNet-50 feature extractor coupled with a multi-level deep convolutional neural network (CNN) for reliable image segmentation that takes the low-level features of MRI into account. In this model, features are extracted through the ResNet-50 architecture and the resulting feature maps are fed to the multi-level CNN. For the classification task, MRI scans from a total of 2043 patients covering normal, benign, and malignant cases were collected. Three models, a plain CNN, a multi-level CNN, and the ResNet-50-based multi-level CNN, were used for detection and classification of brain tumors. All model results are reported in terms of precision (P), recall (R), accuracy (Acc), and F1-score (F1-S). The obtained average results are considerably better than those of existing methods. This modified transfer learning architecture may help radiologists and doctors as a more reliable system for tumor diagnosis.
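A minimal sketch of the described idea follows, assuming PyTorch/torchvision; the head layout, class count, and input size are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch (not the authors' code): ResNet-50 backbone used as a feature extractor,
# followed by a small multi-level convolutional head for 3-class classification
# (normal / benign / malignant).
import torch
import torch.nn as nn
from torchvision import models

class ResNet50MultiLevelCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        backbone = models.resnet50(weights=None)                         # pretrained weights optional
        self.features = nn.Sequential(*list(backbone.children())[:-2])   # 2048-channel feature maps
        self.head = nn.Sequential(                                       # assumed multi-level conv head
            nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = ResNet50MultiLevelCNN()
logits = model(torch.randn(1, 3, 224, 224))   # -> shape (1, 3)
```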

2.
With the development of Deep Convolutional Neural Networks (DCNNs), the features extracted for image recognition tasks have shifted from low-level features to the high-level semantic features of DCNNs. Previous studies have shown that the deeper the network, the more abstract the features. However, the recognition ability of deep features can be limited by insufficient training samples. To address this problem, this paper proposes an improved Deep Fusion Convolutional Neural Network (DF-Net) that makes full use of the differences and complementarities between subnetworks during learning and enhances feature expression under limited-data conditions. Specifically, DF-Net organizes two identical subnets that extract features from the input image in parallel, and a well-designed fusion module is introduced into the deep layers of DF-Net to fuse the subnets' features at multiple scales. More complex mappings are thus created, and richer, more accurate fusion features can be extracted to improve recognition accuracy. Furthermore, a corresponding training strategy is proposed to speed up convergence and reduce the computational overhead of network training. Finally, DF-Nets based on the well-known ResNet, DenseNet, and MobileNetV2 are evaluated on CIFAR100, Stanford Dogs, and UECFOOD-100. Theoretical analysis and experimental results demonstrate that DF-Net enhances the performance of DCNNs and increases image recognition accuracy.
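A minimal sketch of the parallel-subnet-plus-fusion idea, assuming PyTorch; the ResNet-18 subnets, 1x1-convolution fusion module, and class count are illustrative assumptions, not the published DF-Net configuration.

```python
# Sketch: two identical subnets process the same image in parallel and a simple
# fusion module merges their deep feature maps before classification.
import torch
import torch.nn as nn
from torchvision import models

class DFNetSketch(nn.Module):
    def __init__(self, num_classes=100):
        super().__init__()
        def subnet():
            m = models.resnet18(weights=None)
            return nn.Sequential(*list(m.children())[:-2])   # 512-channel feature maps
        self.subnet_a, self.subnet_b = subnet(), subnet()
        self.fuse = nn.Sequential(                            # assumed fusion module
            nn.Conv2d(512 * 2, 512, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):
        fa, fb = self.subnet_a(x), self.subnet_b(x)
        return self.classifier(self.fuse(torch.cat([fa, fb], dim=1)))

logits = DFNetSketch()(torch.randn(1, 3, 224, 224))   # -> shape (1, 100)
```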

3.
4.
Leaf species identification enables a multitude of societal applications, and there is extensive research on plant identification using pattern recognition. With robust leaf identification algorithms, rural medicine has the potential to re-emerge as it did in previous decades. This paper discusses CNN-based approaches for identifying Indian leaf species photographed on a white background using smartphones. Variations of CNN models over traditional features such as shape, texture, color, and venation, along with finer features such as uniformity of edge patterns, leaf tip, margin, and other statistical descriptors, are explored for efficient leaf classification.

5.
Content-aware image resizing (CAIR) is an excellent technology widely used for image retargeting. It can also be used to tamper with images, undermining public trust in image content. Once an image has been processed by CAIR, the correlation among local neighborhood pixels is destroyed. Although local binary patterns (LBP) can effectively describe local texture, they cannot describe the magnitude information of local neighborhood pixels and are also vulnerable to noise. Therefore, to detect CAIR, a novel forensic method based on an improved local ternary patterns (ILTP) feature and a gradient energy feature (GEF) is proposed in this paper. First, the adaptive threshold of the original local ternary patterns (LTP) operator is improved, and the ILTP operator is used to describe the change in correlation among local neighborhood pixels caused by CAIR. Second, ILTP histogram features and gradient energy features are extracted from the candidate image for CAIR forgery detection. The ILTP features and gradient energy features are then concatenated into a combined feature vector used to train the classifier. Finally, a support vector machine (SVM) is trained and tested on these features to distinguish whether an image has undergone CAIR. Candidate images are drawn from the Uncompressed Color Image Database (UCID), from which training and testing sets are created. Experimental results on many test images show that the proposed method detects CAIR tampering effectively and achieves better performance than state-of-the-art approaches.
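A minimal sketch of the overall pipeline (texture histogram plus gradient energy, then SVM) is shown below. A standard LBP histogram from scikit-image stands in for the paper's ILTP operator, and the gradient-energy feature is approximated by the mean squared Sobel gradient; feature sizes and SVM settings are assumptions.

```python
# Sketch: texture + gradient-energy features feeding an SVM tamper detector.
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import sobel
from sklearn.svm import SVC

def texture_energy_features(gray, P=8, R=1.0):
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    grad_energy = np.mean(sobel(gray) ** 2)          # crude gradient-energy stand-in
    return np.append(hist, grad_energy)

def train_detector(images, labels):
    # images: list of 2-D grayscale arrays; labels: 1 = CAIR-tampered, 0 = untouched
    X = np.array([texture_energy_features(img) for img in images])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    return clf.fit(X, labels)
```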

6.
Diabetes, or Diabetes Mellitus (DM), is a disorder caused by a high glucose level in the body. Over time, this polygenic disease leads to an eye complication known as Diabetic Retinopathy (DR), which can cause major loss of vision. Symptoms typically originate in the retinal area in the form of enlarged veins, fluid leakage, exudates, haemorrhages, and microaneurysms. In modern medicine, images are the key tool for an accurate diagnosis of a patient's illness, yet the assessment of new medical imagery remains complex. Recently, Computer Vision (CV) with deep neural networks has made it possible to train models with high accuracy. The idea behind this paper is to propose an automated learning model to identify the key precursors of DR. The proposed deep learning framework leverages the strengths of selected models (VGG and Inception V3) by fusing their extracted features. To select the most discriminant features from the pooled features, an entropy-based criterion is employed before the classification step. The deep learning models can grade features such as veins, fluid leakage, exudates, haemorrhages, and microaneurysms into various classes. The model determines weights that indicate the severity level of the patient's eye and is useful for identifying the correct severity class of diabetic retinopathy images.
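A minimal sketch of the fusion-plus-entropy-selection step, assuming feature vectors have already been extracted by the two backbones; the concatenation fusion, per-dimension Shannon entropy score, and the value of k are illustrative assumptions.

```python
# Sketch: fuse two backbones' feature vectors and keep the highest-entropy dimensions.
import numpy as np

def fuse_and_select(feats_vgg, feats_inception, k=256, bins=16):
    """feats_*: arrays of shape (n_samples, n_features_i)."""
    X = np.concatenate([feats_vgg, feats_inception], axis=1)   # simple fusion by concatenation
    scores = []
    for col in X.T:                                            # per-dimension Shannon entropy
        p, _ = np.histogram(col, bins=bins)
        p = p / max(p.sum(), 1)
        p = p[p > 0]
        scores.append(-(p * np.log2(p)).sum())
    keep = np.argsort(scores)[-k:]                             # keep most informative dimensions
    return X[:, keep], keep

# Example: X_sel, idx = fuse_and_select(np.random.rand(100, 512), np.random.rand(100, 2048))
```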

7.
Skin cancer (melanoma) is one of the most aggressive cancers, and its prevalence has increased significantly due to greater exposure to ultraviolet radiation. Timely detection and management of lesions are therefore critical to improving quality of life and reducing mortality. To this end, we have designed, implemented, and analyzed a hybrid approach combining convolutional neural networks (CNN) and local binary patterns (LBP). Experiments were performed on the publicly accessible ISIC 2017, 2018, and 2019 (HAM10000) datasets with data augmentation for in-distribution generalization. As a novel contribution, the CNN architecture is enhanced with an intelligible layer, LBP, that extracts the pertinent visual patterns. Classification of Basal Cell Carcinoma, Actinic Keratosis, Melanoma, and Squamous Cell Carcinoma was evaluated on 8035 training and 3494 testing cases. Cross-validated experimental outcomes show strong performance, with an average accuracy of 97.29%, sensitivity of 95.63%, and specificity of 97.90%. Hence, the proposed approach can be used in research and clinical settings to provide second opinions, closely approximating experts' intuition.
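A minimal sketch of one way to combine LBP with a CNN: the LBP map is stacked with the RGB channels as an extra input channel. This stacking scheme, the tiny CNN, and the input size are assumptions for illustration, not the paper's exact hybrid design.

```python
# Sketch: add an LBP channel to the image before feeding a small 4-class lesion CNN.
import numpy as np
import torch
import torch.nn as nn
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

def with_lbp_channel(rgb):                      # rgb: HxWx3 float array in [0, 1]
    lbp = local_binary_pattern(rgb2gray(rgb), P=8, R=1.0, method="uniform")
    lbp = lbp / lbp.max()                       # normalize the LBP codes to [0, 1]
    x = np.concatenate([rgb, lbp[..., None]], axis=-1)          # HxWx4
    return torch.from_numpy(x).permute(2, 0, 1).float().unsqueeze(0)

cnn = nn.Sequential(                            # small 4-channel CNN, 4 lesion classes
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 4),
)
logits = cnn(with_lbp_channel(np.random.rand(224, 224, 3)))
```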

8.
In this study, a novel hybrid Water Cycle Moth-Flame Optimization (WCMFO) algorithm is proposed for multilevel thresholding segmentation of brain Magnetic Resonance (MR) image slices. WCMFO is a hybrid of the water cycle and moth-flame optimization algorithms. The optimal thresholds are obtained by maximizing the between-class variance (Otsu's function) of the image. To test the threshold-searching process, the proposed algorithm was evaluated on a standard benchmark of ten axial T2-weighted brain MR images. The experimental outcomes show that it produces better optimal threshold values at a faster convergence rate. In contrast to other state-of-the-art methods, namely Adaptive Wind Driven Optimization (AWDO), Adaptive Bacterial Foraging (ABF), and Particle Swarm Optimization (PSO), the proposed algorithm produces better objective function values, Peak Signal-to-Noise Ratio (PSNR), and Standard Deviation (STD), with lower computation time. Further, the segmented image shows greater detail as the threshold level increases. Moreover, the statistical test confirms that the best and mean values are nearly identical, with an average difference of 1.86 between them over the 30 executions of the proposed algorithm. These images thus lead to better segmentation of gray matter, white matter, and cerebrospinal fluid, enabling better clinical decisions and diagnoses with the proposed algorithm.
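A minimal sketch of the fitness function being maximized (Otsu's between-class variance for multiple thresholds); the WCMFO search itself is not reproduced, and any optimizer can call this objective on a candidate threshold vector.

```python
# Sketch: multilevel Otsu objective for a candidate set of thresholds.
import numpy as np

def between_class_variance(hist, thresholds):
    """hist: normalized 256-bin grayscale histogram; thresholds: ints in (0, 255)."""
    edges = [0] + sorted(int(t) for t in thresholds) + [256]
    levels = np.arange(256)
    mu_total = (hist * levels).sum()
    sigma_b = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum()                      # class probability
        if w <= 0:
            continue
        mu = (hist[lo:hi] * levels[lo:hi]).sum() / w
        sigma_b += w * (mu - mu_total) ** 2        # weighted squared distance to global mean
    return sigma_b                                 # the optimizer maximizes this value

# hist = np.bincount(mr_slice.ravel(), minlength=256) / mr_slice.size
```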

9.
One of the leading causes of mortality worldwide is liver cancer, and the earlier hepatic tumors are detected, the lower the mortality rate. This paper introduces a computer-aided diagnosis system that extracts hepatic tumors from computed tomography (CT) scans and classifies them as malignant or benign. Segmenting hepatic tumors from CT scans is a challenging task due to the fuzziness of the liver pixel range, the overlap in intensity values between the liver and neighboring organs, high noise from the CT scanner, and the large variance in tumor shapes. The proposed method consists of three main stages: liver segmentation using Fast Generalized Fuzzy C-Means, tumor segmentation using dynamic thresholding, and tumor classification into malignant or benign using a support vector machine classifier. The performance of the proposed system was evaluated on three liver benchmark datasets: MICCAI-Sliver07, LiTS17, and 3Dircadb. The proposed computer-aided diagnosis system achieved an average accuracy of 96.75%, sensitivity of 96.38%, specificity of 95.20%, and Dice similarity coefficient of 95.13%.
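A minimal sketch of a plain fuzzy C-means step on image intensities; the paper uses a Fast Generalized FCM variant, so this standard update only illustrates the clustering idea behind the liver segmentation stage.

```python
# Sketch: standard fuzzy C-means on the 1-D intensity values of a CT slice.
import numpy as np

def fuzzy_cmeans(intensities, c=3, m=2.0, iters=50):
    x = intensities.reshape(-1, 1).astype(float)
    u = np.random.dirichlet(np.ones(c), size=len(x))          # membership matrix (N x c)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]        # weighted cluster centers
        d = np.abs(x - centers.T) + 1e-9                      # distances to centers (N x c)
        u = 1.0 / (d ** (2 / (m - 1)))                        # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers.ravel(), u

# centers, memberships = fuzzy_cmeans(ct_slice.ravel())
```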

10.
COVID-19 continues to spread rapidly around the world. It has significantly affected public health, the world economy, and people's lives. Hence, there is a need to speed up diagnosis and precautions for dealing with COVID-19 patients. With the explosion of this pandemic, automated diagnosis tools based on medical images are needed to help specialists. This paper presents a hybrid Convolutional Neural Network (CNN)-based classification and segmentation approach for COVID-19 detection from Computed Tomography (CT) images. The proposed approach is employed to classify and segment COVID-19, pneumonia, and normal CT images. The classification stage is applied first to detect and classify the input medical CT images. Then, the segmentation stage is performed to distinguish between pneumonia and COVID-19 CT images. The classification stage is implemented with a simple and efficient CNN deep learning model comprising four Rectified Linear Unit (ReLU) activations, four batch normalization layers, and four convolutional (Conv) layers with 64, 32, 16, and 8 filters, respectively. A 2 × 2 window and a stride of 2 are employed in the four max-pooling layers. A soft-max activation function and a Fully-Connected (FC) layer perform the detection in the classification stage. For the segmentation process, the Simplified Pulse Coupled Neural Network (SPCNN) is utilized in the proposed hybrid approach; the segmentation is based on salient object detection to accurately localize the COVID-19 or pneumonia region. To summarize the contributions of the paper, the classification process with a CNN model can be the first stage of a highly effective automated diagnosis system. Once images are accepted by the system, further processing through segmentation isolates the regions of interest, which can be assessed both automatically and by experts. This strategy greatly saves specialists' time and effort given the scale of the COVID-19 pandemic. The proposed classification approach is evaluated with 80%, 70%, or 60% of the data for training and 20%, 30%, or 40% of the data for testing, respectively. In these scenarios, the proposed approach achieves classification accuracies of 100%, 99.45%, and 98.55%, respectively. The obtained results demonstrate the efficacy of the proposed approach for assisting specialists in automated medical diagnosis services.
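A minimal sketch of the classification CNN as described (four Conv+BN+ReLU+MaxPool blocks with 64/32/16/8 filters, then a fully connected soft-max layer), assuming PyTorch; the input size and channel count are assumptions.

```python
# Sketch: the described 4-block CNN classifier for COVID-19 / pneumonia / normal CT slices.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(),
                         nn.MaxPool2d(kernel_size=2, stride=2))   # 2x2 window, stride 2

covid_cnn = nn.Sequential(
    block(1, 64), block(64, 32), block(32, 16), block(16, 8),
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 3),        # assumes 224x224 grayscale input, 3 classes
    nn.Softmax(dim=1),
)
probs = covid_cnn(torch.randn(1, 1, 224, 224))   # -> shape (1, 3)
```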

11.
The Internet of Things (IoT) defines a network of devices connected to the internet that share massive amounts of data with each other and with a central location. Because these IoT devices are connected to a network, they are prone to attacks. Various management tasks and network operations, such as security, intrusion detection, Quality-of-Service provisioning, performance monitoring, resource provisioning, and traffic engineering, require traffic classification. Due to the ineffectiveness of traditional classification schemes, such as port-based and payload-based methods, researchers have proposed machine learning-based traffic classification systems built on shallow neural networks; however, such models tend to misclassify internet traffic due to improper feature selection. In this research, an efficient multilayer deep learning-based classification system is presented to overcome these challenges and classify internet traffic. To examine the performance of the proposed technique, the Moore dataset is used for training the classifier. The proposed scheme takes the pre-processed data and extracts flow features using a deep neural network (DNN); a maximum entropy classifier is then used to classify the internet traffic. The experimental results show that the proposed hybrid deep learning algorithm is effective and achieves high accuracy for internet traffic classification, i.e., 99.23%. Furthermore, the proposed algorithm achieves higher accuracy than support vector machine (SVM)-based and k-nearest neighbours (KNN)-based classification techniques.
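A minimal sketch of the two-step idea described above: a small DNN maps flow statistics to a compact representation, and a maximum-entropy classifier (multinomial logistic regression) performs the final traffic classification. The layer sizes and the 248-dimensional flow-feature input are assumptions.

```python
# Sketch: DNN feature extraction followed by a maximum-entropy traffic classifier.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

encoder = nn.Sequential(                      # flow features -> learned representation
    nn.Linear(248, 128), nn.ReLU(),           # 248: assumed per-flow feature count
    nn.Linear(128, 32), nn.ReLU(),
)

def classify_flows(X_train, y_train, X_test):
    with torch.no_grad():
        Z_train = encoder(torch.tensor(X_train, dtype=torch.float32)).numpy()
        Z_test = encoder(torch.tensor(X_test, dtype=torch.float32)).numpy()
    maxent = LogisticRegression(max_iter=1000)      # maximum-entropy / multinomial logit
    maxent.fit(Z_train, y_train)
    return maxent.predict(Z_test)
```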

12.
In production-line sorting systems, object movement, a fixed viewing angle, light intensity, and other factors lead to blurred images, resulting in a low barcode recognition rate and poor real-time performance. To address these problems, a progressive compressed barcode recognition algorithm is proposed. First, assuming the source image is not tilted, direct recognition is used to quickly identify the compressed source image; failure indicates that the compression ratio is improper or the image is skewed. Then, the source image is enhanced and recognized directly. Finally, the inclination of the compressed image is detected by the barcode region recognition method, the source image is corrected, and the barcode information is located in the recognized barcode region. Experiments on multiple image types show that the proposed method improves computational efficiency by more than five times compared with previous methods and recognizes blurred images better.

13.
A brain tumor is a mass or growth of abnormal cells in the brain and is considered one of the leading causes of death in children and adults. There are several types of brain tumors, including benign (non-cancerous) and malignant (cancerous) tumors. Diagnosing brain tumors as early as possible is essential, as this can improve the chances of successful treatment and survival. Considering this problem, we present a hybrid intelligent deep learning technique that uses several pre-trained models (Resnet50, Vgg16, Vgg19, U-Net) and their integration for computer-aided detection and localization of brain tumors. These pre-trained and integrated deep learning models are applied to the publicly available dataset from The Cancer Genome Atlas, which consists of 120 patients. The pre-trained models are used to classify tumor versus no-tumor images, while the integrated models segment the tumor region. Performance is evaluated in terms of loss, accuracy, intersection over union, Jaccard distance, Dice coefficient, and Dice coefficient loss. Among the pre-trained models, U-Net achieves the highest performance with 95% accuracy, while among the integrated models, U-Net with a ResNet-50 encoder outperforms all others and correctly classifies and segments the tumor region.
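A minimal sketch of a U-Net with a ResNet-50 encoder, assuming the segmentation_models_pytorch library as a stand-in tooling choice (the paper does not specify an implementation); input size and threshold are illustrative.

```python
# Sketch: U-Net with a ResNet-50 encoder producing a single-channel tumor mask.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="resnet50", encoder_weights=None,
                 in_channels=3, classes=1)
mask_logits = model(torch.randn(1, 3, 256, 256))     # -> (1, 1, 256, 256)
pred_mask = torch.sigmoid(mask_logits) > 0.5         # binary tumor / no-tumor mask
```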

14.
15.
Diabetic retinopathy (DR) is a disease of increasing prevalence and the major cause of blindness among the working-age population. The possibility of severe vision loss can be greatly reduced by timely diagnosis and treatment. Automated screening for DR has been identified as an effective method for early detection, decreasing the workload associated with manual grading and saving diagnosis costs and time. Several studies have been carried out to develop automated detection and classification models for DR. This paper presents a new IoT- and cloud-based deep learning model for healthcare diagnosis of DR. The proposed model incorporates several processes: data collection, preprocessing, segmentation, feature extraction, and classification. First, in the IoT-based data collection step, the patient wears a head-mounted camera that captures the retinal fundus image and sends it to a cloud server. Then, the contrast of the input DR image is increased in the preprocessing stage using the Contrast Limited Adaptive Histogram Equalization (CLAHE) model. Next, the preprocessed image is segmented using an Adaptive Spatial Kernel distance measure-based Fuzzy C-Means clustering (ASKFCM) model. Afterwards, a deep Convolutional Neural Network (CNN)-based Inception v4 model is applied as a feature extractor, and the resulting feature vectors are classified with a Gaussian Naive Bayes (GNB) model. The proposed model was tested on the benchmark MESSIDOR DR image dataset, and the results showed superior performance compared with the other models considered in the study.
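A minimal sketch of the preprocessing and classification ends of the described pipeline, assuming OpenCV for CLAHE and scikit-learn for the Gaussian Naive Bayes stage; the ASKFCM segmentation and Inception v4 feature extraction steps are omitted, and the CLAHE settings are assumptions.

```python
# Sketch: CLAHE contrast enhancement, then GNB classification of extracted feature vectors.
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB

def preprocess(gray_fundus):
    # gray_fundus: 8-bit single-channel retinal image
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_fundus)               # contrast-enhanced image for segmentation

def train_grader(features, labels):
    # features: (n_samples, n_features) vectors from the CNN extractor; labels: DR grades
    gnb = GaussianNB()
    return gnb.fit(np.asarray(features), labels)
```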

16.
In this work, an inverse approach based on depth-sensing instrumented indentation tests is proposed to determine the Young's modulus, yield strength, and strain-hardening exponent of materials whose elastoplastic stress-strain behaviour can be described by a power function. Numerical verifications performed on typical engineering metals demonstrate the effectiveness of the new method. The sensitivity of the method to data noise and to some experimental uncertainties is also discussed, which may provide useful information for applying the method in practice.
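A minimal sketch of the assumed material law is shown below: linear elasticity up to the yield point followed by power-law hardening, which is the form of stress-strain curve the inverse method identifies; the parameter values in the example are illustrative.

```python
# Sketch: elastic-plastic power-law stress-strain curve (sigma = K * eps**n beyond yield).
import numpy as np

def stress(strain, E, sigma_y, n):
    eps_y = sigma_y / E                          # yield strain
    K = sigma_y / eps_y ** n                     # chosen so the curve is continuous at yield
    return np.where(strain <= eps_y, E * strain, K * strain ** n)

# Example: stress(np.linspace(0, 0.1, 50), E=200e3, sigma_y=400.0, n=0.2)   # units: MPa
```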

17.
Image segmentation is vital when analyzing medical images, especially magnetic resonance (MR) images of the brain. Recently, several image segmentation techniques based on multilevel thresholding have been proposed for medical image segmentation; however, these algorithms become trapped in local minima and have low convergence speeds, particularly as the number of threshold levels increases. Consequently, in this paper we develop a new multilevel thresholding image segmentation technique based on the jellyfish search algorithm (JSA), an optimizer. We modify the JSA to prevent descents into local minima and to accelerate convergence toward optimal solutions. The improvement is achieved by applying two novel strategies: ranking-based updating and an adaptive method. Ranking-based updating replaces undesirable solutions with solutions generated by a novel updating scheme that improves the quality of the removed solutions. The new adaptive strategy exploits the JSA's ability to find a best-so-far solution while allowing a small amount of exploration to avoid descents into local minima. The two strategies are integrated with the JSA to produce an improved JSA (IJSA) that optimally thresholds brain MR images. To compare the performance of the IJSA and JSA, seven brain MR images were segmented at threshold levels of 3, 4, 5, 6, 7, 8, 10, 15, 20, 25, and 30. IJSA was compared with several other recent image segmentation algorithms, including the improved and standard marine predator algorithms, the modified salp and standard salp swarm algorithms, the equilibrium optimizer, and the standard JSA, in terms of fitness, the Structural Similarity Index Metric (SSIM), the peak signal-to-noise ratio (PSNR), the standard deviation (SD), and the Feature Similarity Index Metric (FSIM). The experimental outcomes and the Wilcoxon rank-sum test demonstrate the superiority of the proposed algorithm in terms of FSIM, PSNR, objective values, and SD; in terms of SSIM, IJSA was competitive with the others.
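A minimal sketch of how two of the quality metrics listed above (PSNR and SSIM) can be computed for a thresholded result against the original image, assuming scikit-image; both images are assumed to share the same uint8 data range.

```python
# Sketch: segmentation-quality metrics for a thresholded MR image.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(original, segmented):
    # original, segmented: 2-D uint8 arrays of equal shape
    return {
        "PSNR": peak_signal_noise_ratio(original, segmented),
        "SSIM": structural_similarity(original, segmented),
    }
```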

18.
A non-linear fracture mechanics-based approach is proposed to describe a typical fracture mechanism from initiation through growth and eventually to failure. The concept is developed for a lightly reinforced beam in flexure. The proposed model integrates the existing Stress Intensity Factor equilibrium equation with the bridging forces developed in the concrete cover and rebar. The model and solution algorithm outlined here present a detailed understanding of the mechanism involved and are significant for predicting the behaviour of flexural members. The analysis is performed in MATLAB. The proposed approach establishes a maximum tolerable crack length and crack width for flexural members to prevent catastrophic failure. Such an approach has the potential to serve as an analysis and design tool for reinforced concrete components under normal conditions and for deciding on rehabilitation and strengthening measures.

19.
Thermoelastic stress analysis (TSA) and digital image correlation (DIC) are used to examine the stress and strain distributions around the geometric discontinuity in a composite double butt strap joint. A well-known major limitation of TSA is that it provides a metric related only to the sum of the principal stresses and cannot provide the component stresses or strains. The stress metric is related to the thermoelastic response by a combination of material properties known as the thermoelastic constant (the coefficient of thermal expansion divided by density and specific heat), which is usually obtained by a calibration process. For calibration with orthotropic materials, the thermoelastic constant must be obtained in the principal material directions, as the principal stress directions for a general structure are unknown. It is often assumed that the principal stress directions coincide with the principal material directions; clearly, this assumption is not valid in complex stress systems, so a means of obtaining the thermoelastic constants in the principal stress directions is required. Such a region is the neighbourhood of the discontinuities in a bonded lap joint. A methodology is presented that employs a point-wise transformation of the thermoelastic constants from the material directions to the principal stress directions using full-field DIC strain data obtained from the neighbourhood of the discontinuity. A comparison of stress metrics generated from the TSA and DIC data provides an independent experimental validation of the two-dimensional DIC analysis. The accuracy of a two-dimensional plane strain finite element model of the joint is assessed against the two experimental data sets. Excellent agreement is found between the experimental and numerical results in the adhesive layer; the adhesive is the only component of the joint whose material properties were not obtained experimentally. The reason for the discrepancy is discussed in the paper.
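A minimal sketch of the point-wise rotation idea under an assumed form: thermoelastic constants known in the principal material directions (1, 2) are transformed to the principal stress directions using the angle between the two coordinate systems obtained from the DIC strain field. The tensor-rotation formula below is a generic second-order transformation of the normal components, not necessarily the paper's exact expression.

```python
# Sketch: rotate direction-dependent constants from material axes to principal stress axes.
import numpy as np

def rotate_constants(k1, k2, theta):
    # k1, k2: constants in the principal material directions; theta: angle (radians)
    c2, s2 = np.cos(theta) ** 2, np.sin(theta) ** 2
    kx = k1 * c2 + k2 * s2          # constant along the first principal stress direction
    ky = k1 * s2 + k2 * c2          # constant along the second principal stress direction
    return kx, ky
```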

20.
Lamb wave tomography (LWT) is a promising and efficient technique for non-destructive tomographic reconstruction of damage images in structural components and materials. This paper proposes a new two-stage inverse algorithm that quickly reconstructs damage images in aluminum and CFRP laminated plates from a small amount of scanning data. Owing to its high sensitivity to damage, the amplitude decrease of transmitted Lamb waves after travelling through the inspected region is employed as the key signal parameter, related to the attenuation of Lamb waves along their propagation routes. A through-thickness circular hole and a through-thickness elliptical hole in two aluminum plates, together with an impact-induced, invisible internal delamination in a CFRP laminated plate, were used to validate the effectiveness and reliability of the proposed method. The new algorithm successfully reconstructed images of the various damages mentioned above with much less experimental data than required by some traditional techniques.
