Similar Literature
20 similar records found (search time: 31 ms)
1.
Dietary assessment is emerging as a system for evaluating a person’s food intake. In this paper, multiple-hypothesis image segmentation and a feed-forward neural network classifier are proposed for dietary assessment to enhance performance. Initially, segmentation is applied to the input image to determine the regions where a particular food item is located, using salient region detection, multi-scale segmentation, and fast rejection. Then, significant features of the food items are extracted using global and local feature extraction methods. After the features are obtained, classification is performed for each segmented region using a feed-forward neural network model. Finally, the calorie value is computed with the aid of (i) the food area/volume and (ii) the calorie and nutrition measure based on mass value. The proposed method attains 96% accuracy, which provides better classification performance.
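As an illustration of the final calorie-computation step, the sketch below converts a segmented region's pixel area into an approximate calorie value via a per-food density and calorie table. The table values, the depth assumption, and the function names are hypothetical, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): estimating calories from a
# segmented food region, assuming a per-gram calorie table and a simple
# area -> volume -> mass conversion. All constants here are hypothetical.
FOOD_DB = {"rice": {"density_g_per_cm3": 0.85, "kcal_per_g": 1.3},
           "apple": {"density_g_per_cm3": 0.80, "kcal_per_g": 0.52}}

def estimate_calories(pixel_area, pixels_per_cm2, food_label, depth_cm=1.0):
    """Convert a segmented region's pixel area into an approximate calorie value."""
    area_cm2 = pixel_area / pixels_per_cm2           # image area in cm^2
    volume_cm3 = area_cm2 * depth_cm                 # crude volume estimate
    info = FOOD_DB[food_label]
    mass_g = volume_cm3 * info["density_g_per_cm3"]  # mass from density
    return mass_g * info["kcal_per_g"]               # calories from mass

print(round(estimate_calories(12000, 900, "rice"), 1))
```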

2.

This paper proposes multiple-hypothesis image segmentation and a feed-forward neural network classifier for food recognition to improve performance. Initially, the food or meal image is given as input. Segmentation is then applied to identify the regions where a particular food item is located, using salient region detection, multi-scale segmentation, and fast rejection. Next, the features of every food item are extracted using global and local feature extraction. After the features are obtained, classification is performed for each segmented region using a feed-forward neural network model. Finally, the calorie value is computed with the aid of (i) the food volume and (ii) the calorie and nutrition measure based on mass value. The experimental results and performance evaluation are validated. The proposed method attains 0.947 Macro Average Accuracy (MAA) and 0.959 Standard Accuracy (SA), which provides better classification performance.
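A minimal sketch of the two reported metrics, assuming the usual definitions: Standard Accuracy (SA) as the overall fraction of correct predictions and Macro Average Accuracy (MAA) as the mean of per-class recalls, so every food class counts equally. The labels below are dummy values.

```python
# SA and MAA computed on placeholder predictions with scikit-learn.
from sklearn.metrics import accuracy_score, recall_score

y_true = ["rice", "rice", "salad", "soup", "salad", "soup"]
y_pred = ["rice", "salad", "salad", "soup", "salad", "rice"]

sa = accuracy_score(y_true, y_pred)                   # standard accuracy
maa = recall_score(y_true, y_pred, average="macro")   # macro average accuracy
print(f"SA={sa:.3f}  MAA={maa:.3f}")
```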

3.
Computer vision is one of the significant trends in computer science. It plays a vital role in many applications, especially in the medical field. Early detection and segmentation of different tumors is a big challenge in the medical world. The proposed framework uses ultrasound images from Kaggle, applying five diverse models to denoise the images, using the best noise-free image as input to a U-Net model for segmentation of the tumor, and then using a Convolutional Neural Network (CNN) model to classify whether the tumor is benign, malignant, or normal. The main challenge faced by the framework in segmentation is speckle noise, a multiplicative artifact in breast ultrasound imaging; because of this noise, image resolution and contrast are reduced, which affects the diagnostic value of this imaging modality. As a result, speckle noise reduction is vital for the segmentation process. The framework uses five models, namely Generative Adversarial Denoising Network (DGAN-Net), Denoising U-Shaped Net (D-U-NET), Batch Renormalization U-Net (Br-U-NET), Generative Adversarial Network (GAN), and Nonlocal Neutrosophic Wiener Filtering (NLNWF), to reduce the speckle noise in the breast ultrasound images, then chooses the best image according to the peak signal-to-noise ratio (PSNR) at each level of speckle noise. The five methods were compared with classical filters such as Bilateral, Frost, Kuan, and Lee and proved their efficiency in terms of PSNR at different noise levels. At speckle-noise levels (0.1, 0.25, 0.5, 0.75), the five models achieved PSNR values of (33.354, 29.415, 27.218, 24.115), (31.424, 28.353, 27.246, 24.244), (32.243, 28.42, 27.744, 24.893), (31.234, 28.212, 26.983, 23.234) and (33.013, 29.491, 28.556, 25.011) for DGAN, Br-U-NET, D-U-NET, GAN and NLNWF respectively. According to the PSNR value and the level of speckle noise, the best image is passed to segmentation using U-Net and classification using CNN to detect the tumor type. The experiments demonstrated the quality of U-Net and CNN in segmentation and classification, achieving a Dice score of 95.11 and accuracy of 95.13 in segmentation, and a Dice score of 95.55 and accuracy of 95.67 in classification.
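A sketch of the PSNR-based selection step described above: given several denoised versions of the same ultrasound image, keep the one with the highest PSNR against a reference. The denoisers themselves (DGAN-Net, D-U-NET, etc.) are not reproduced; function names are illustrative.

```python
# Select the best denoised image by PSNR (higher is better).
import numpy as np

def psnr(reference, test, max_val=255.0):
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def pick_best_denoised(reference, candidates):
    """candidates: dict mapping model name -> denoised image array."""
    scores = {name: psnr(reference, img) for name, img in candidates.items()}
    best = max(scores, key=scores.get)
    return best, scores
```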

4.
Medical image steganography aims to increase data security by concealing patient-personal information as well as diagnostic and therapeutic data in the spatial or frequency domain of radiological images. The discipline of image steganalysis, on the other hand, generally provides a classification based on whether or not an image carries hidden data. Inspired by previous studies on image steganalysis, this study proposes a deep ensemble learning model for medical image steganalysis to detect malicious hidden data in medical images and to support the development of medical image steganography methods aimed at securing personal information. With this purpose in mind, a dataset containing brain Magnetic Resonance (MR) images of healthy individuals and epileptic patients was built. The Spatial Version of the Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Stego (HUGO), and Minimizing the Power of Optimal Detector (MIPOD) spatial-domain embedding techniques were adapted to the problem, and various payloads of confidential data were hidden in the medical images. The architectures of the medical image steganalysis networks were transferred separately from eleven Dense Convolutional Network (DenseNet), Residual Neural Network (ResNet), and Inception-based models. The steganalysis outputs of these networks were determined by ensembling the models separately for each spatial embedding method and payload ratio. The study demonstrated the success of pre-trained ResNet, DenseNet, and Inception models in the cover-stego mismatch scenario for each hiding technique with different payloads. Due to the high detection accuracy achieved, the proposed model has the potential to lead to the development of novel medical image steganography algorithms that existing deep learning-based steganalysis methods cannot detect. The experiments and evaluations clearly support this claim.
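A minimal sketch of the output-level ensembling idea, assuming each pretrained steganalysis network produces a stego probability per image; the probabilities below are dummy values and the soft-voting rule is an assumption, not the paper's exact aggregation.

```python
# Average the stego-probability outputs of several detectors, then threshold.
import numpy as np

def ensemble_stego_decision(prob_matrix, threshold=0.5):
    """prob_matrix: shape (n_models, n_images), P(stego) per model and image."""
    mean_prob = prob_matrix.mean(axis=0)        # soft voting across models
    return (mean_prob >= threshold).astype(int), mean_prob

probs = np.array([[0.92, 0.10, 0.55],           # ResNet-based detector
                  [0.88, 0.25, 0.40],           # DenseNet-based detector
                  [0.95, 0.15, 0.62]])          # Inception-based detector
labels, mean_prob = ensemble_stego_decision(probs)
print(labels, np.round(mean_prob, 2))
```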

5.
The Internet of Medical Things (IoMT) emerges from the vision of the Wireless Body Sensor Network (WBSN) to improve health monitoring systems and has an enormous impact on healthcare by recognizing levels of risk/severity (early diagnosis, treatment, and supervision of chronic diseases such as cancer) via wearable/electronic health sensors such as the wireless endoscopic capsule. AI-assisted endoscopy, in particular, plays a significant role in the detection of gastric cancer. Convolutional Neural Networks (CNNs) have been widely used to diagnose gastric cancer based on various feature extraction models, which limits identification and categorization performance in terms of the cancerous stages and grades associated with each type of gastric cancer. This paper proposes an optimized AI-based approach to diagnose and assess the risk factor of gastric cancer based on its type, stage, and grade in endoscopic images for smart healthcare applications. The proposed method comprises five phases: image pre-processing, Four-Dimensional (4D) image conversion, image segmentation, K-Nearest Neighbour (K-NN) classification, and multi-grading and staging of image intensities. The performance of the proposed method was evaluated on two datasets consisting of color and black-and-white endoscopic images. The simulation results verified that the proposed approach is capable of detecting gastric cancer with 88.09% sensitivity, 95.77% specificity, and 96.55% overall accuracy.
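A minimal sketch of the K-NN classification phase on extracted endoscopic image features; the feature vectors, labels, and neighbour count below are synthetic placeholders, not the paper's configuration.

```python
# K-NN classification of image-region feature vectors.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(120, 16))            # 16-D features per image region
y_train = rng.integers(0, 3, size=120)          # 0=normal, 1=early stage, 2=advanced
X_test = rng.normal(size=(10, 16))

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(knn.predict(X_test))
```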

6.
A PCNN-Based Image Segmentation Method
To address the problem that the pulse-coupled neural network (PCNN) cannot determine the optimal segmentation on its own, an image segmentation method combining the PCNN with the between-class variance criterion is proposed. At each iteration, the pixels corresponding to the fired PCNN neurons are taken as the object and the pixels corresponding to the unfired neurons as the background; the between-class variance between object and background is computed, and the segmented image with the maximum between-class variance is taken as the final result. Experimental results show that the method obtains visually satisfactory segmentation results and generalizes well; segmenting a 256×256 image takes about 0.8 seconds.
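A sketch of the selection criterion only: given the binary firing map produced at each iteration (faked here by simple thresholding rather than a real PCNN), compute the between-class variance of object versus background and keep the iteration that maximizes it.

```python
# Between-class variance selection over a sequence of binary segmentations.
import numpy as np

def between_class_variance(image, mask):
    obj, bkg = image[mask], image[~mask]
    if obj.size == 0 or bkg.size == 0:
        return 0.0
    w1, w2 = obj.size / image.size, bkg.size / image.size
    return w1 * w2 * (obj.mean() - bkg.mean()) ** 2

image = np.random.default_rng(1).integers(0, 256, size=(64, 64)).astype(float)
best_score, best_mask = -1.0, None
for t in range(30, 230, 10):                    # stand-in for PCNN iterations
    mask = image > t                            # "fired" neurons as object
    score = between_class_variance(image, mask)
    if score > best_score:
        best_score, best_mask = score, mask
print(round(best_score, 2))
```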

7.
This paper proposes a new color edge detection model that uses second-derivative operators and a data fusion mechanism. The second-order neighborhood captures the connection between the current pixel and its surroundings, computed for each RGB component of the input image. Once edges are detected for the three primary colors (red, green, and blue), the results are merged using a combination rule, and the final decision is applied to obtain the segmentation. This process allows different data sources to be combined, which is essential to improve image information quality and obtain an optimal image segmentation. Finally, the segmentation results of the proposed model are validated. Moreover, the classification accuracy on the tested data is assessed and compared with other current models. The comparison results show that the proposed model outperforms the existing models in image segmentation.
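A sketch of the per-channel second-derivative step under simple assumptions: a Laplacian response is computed for each RGB channel and the three responses are fused, here by taking the maximum magnitude, before a crude edge decision. The image, fusion rule, and threshold are illustrative, not the paper's combination rule.

```python
# Per-channel Laplacian responses fused into a single edge map.
import numpy as np, cv2

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)

responses = [np.abs(cv2.Laplacian(img[:, :, c], cv2.CV_64F)) for c in range(3)]
fused = np.maximum.reduce(responses)            # simple fusion rule (assumed)
edges = (fused > fused.mean() + 2 * fused.std()).astype(np.uint8) * 255
print(edges.shape, int(edges.sum() / 255), "edge pixels")
```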

8.
A computer software system is designed for the segmentation and classification of benign and malignant tumor slices in brain computed tomography images. In this paper, we present a texture analysis method to find and select the texture features of the tumor region of each slice to be segmented by a support vector machine (SVM). The images considered for this study belong to 208 benign and malignant tumor slices. The features are extracted and selected using Student's t-test. The reduced optimal features are used to model and train a probabilistic neural network (PNN) classifier, and the classification accuracy is evaluated using the k-fold cross-validation method. The segmentation results are also compared with an experienced radiologist's ground truth. Quantitative analysis between ground truth and segmented tumor is presented in terms of segmentation accuracy and the Jaccard-index overlap similarity measure. The proposed system identifies newly found texture features that contribute substantially to segmenting and classifying benign and malignant tumor slices efficiently and accurately. The experimental results show that the proposed hybrid texture feature analysis method with the PNN-based classifier achieves high segmentation and classification accuracy as measured by the Jaccard index, sensitivity, and specificity.
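A sketch of the t-test feature selection step, assuming the usual two-sample test per texture feature: features whose means differ significantly between benign and malignant slices are retained. The feature matrices and threshold below are synthetic stand-ins.

```python
# Student's t-test based selection of texture features.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(100, 20))        # 20 texture features
malignant = rng.normal(0.3, 1.0, size=(108, 20))

t_stat, p_val = ttest_ind(benign, malignant, axis=0, equal_var=False)
selected = np.where(p_val < 0.05)[0]                  # indices of retained features
print("selected features:", selected)
```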

9.
Objective: To address the large parameter counts and limited segmentation accuracy of convolutional neural networks on RGB-D (color-depth) semantic segmentation tasks, a lightweight semantic segmentation network incorporating an efficient channel attention mechanism is proposed. Methods: The network is based on RefineNet, uses depthwise separable convolution to reduce the model size, and integrates an efficient channel attention mechanism into both the encoder and the decoder. First, the RGB-D image passes through the attention-equipped encoder, which extracts features from the RGB image and the depth image separately; a fusion module then fuses the two kinds of features across multiple dimensions; finally, the fused features pass through the lightweight decoder to produce the segmentation result, which is compared against six networks including RefineNet. Results: Experiments on public datasets commonly used for semantic segmentation show that the model size is 90.41 MB and the mean Intersection over Union (mIoU) reaches 45.3%, an improvement of 1.7% over RefineNet. Conclusion: The network improves semantic segmentation accuracy while greatly reducing the number of parameters.
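A minimal PyTorch sketch of an efficient channel attention (ECA-style) block of the kind integrated into the encoder and decoder; the kernel size and placement are assumptions, not the authors' exact configuration.

```python
# ECA-style channel attention: global pooling, 1-D conv across channels, sigmoid gate.
import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                       # x: (N, C, H, W)
        y = self.pool(x)                        # (N, C, 1, 1)
        y = y.squeeze(-1).transpose(1, 2)       # (N, 1, C)
        y = self.conv(y)                        # local cross-channel interaction
        y = self.sigmoid(y).transpose(1, 2).unsqueeze(-1)  # (N, C, 1, 1)
        return x * y                            # reweight channels

feat = torch.randn(2, 64, 32, 32)
print(EfficientChannelAttention()(feat).shape)
```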

10.
Recent developments in digital cameras and electronic gadgets, coupled with Machine Learning (ML)- and Deep Learning (DL)-based automated apple leaf disease detection models, offer reasonable alternatives to traditional visual inspection. In this background, the current paper devises an Effective Sailfish Optimizer with EfficientNet-based Apple Leaf disease detection (ESFO-EALD) model. The goal of the proposed ESFO-EALD technique is to identify the occurrence of plant leaf diseases automatically. A Median Filtering (MF) approach is utilized to boost the quality of the apple plant leaf images. Moreover, an SFO with Kapur's entropy-based segmentation technique is utilized to identify the affected plant region in the test image. Furthermore, Adam-optimized EfficientNet-based feature extraction and Spiking Neural Network (SNN)-based classification are employed to detect and classify the apple plant leaf images. A wide range of simulations was conducted to validate the outcomes of the ESFO-EALD technique on a benchmark dataset. The results showed the superiority of the proposed ESFO-EALD approach over existing approaches.
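A sketch of Kapur's entropy criterion underlying the segmentation step: choose the gray-level threshold that maximizes the sum of the foreground and background histogram entropies. This is the single-threshold case with an exhaustive search, without the SFO optimizer.

```python
# Kapur's entropy thresholding (exhaustive single-threshold search).
import numpy as np

def kapur_threshold(gray):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

gray = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print("Kapur threshold:", kapur_threshold(gray))
```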

11.
Diabetic retinopathy (DR) is a disease with an increasing prevalence and a major cause of blindness among the working-age population. The possibility of severe vision loss can be greatly reduced by timely diagnosis and treatment. Automated screening for DR has been identified as an effective method for early DR detection, which can decrease the workload associated with manual grading as well as save diagnosis costs and time. Several studies have been carried out to develop automated detection and classification models for DR. This paper presents a new IoT- and cloud-based deep learning model for healthcare diagnosis of Diabetic Retinopathy (DR). The proposed model incorporates different processes, namely data collection, preprocessing, segmentation, feature extraction, and classification. First, the IoT-based data collection takes place: the patient wears a head-mounted camera to capture the retinal fundus image and send it to a cloud server. Then, the contrast level of the input DR image is increased in the preprocessing stage using the Contrast Limited Adaptive Histogram Equalization (CLAHE) model. Next, the preprocessed image is segmented using the Adaptive Spatial Kernel distance measure-based Fuzzy C-Means clustering (ASKFCM) model. Afterwards, a deep Convolutional Neural Network (CNN)-based Inception v4 model is applied as a feature extractor, and the resulting feature vectors are classified with a Gaussian Naive Bayes (GNB) model. The proposed model was tested using the benchmark MESSIDOR DR image dataset, and the obtained results showed superior performance of the proposed model over the other models compared in the study.
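A sketch of the CLAHE preprocessing stage on a fundus image's green channel (a common choice for retinal images); the clip limit, tile size, and stand-in image are assumptions.

```python
# CLAHE contrast enhancement of the green channel of a fundus image.
import numpy as np, cv2

rng = np.random.default_rng(0)
fundus = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)  # stand-in image

green = fundus[:, :, 1]
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)                   # contrast-limited equalization
print(green.std(), "->", enhanced.std())
```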

12.
The Imaging Science Journal, 2013, 61(7): 592-600

Segmentation is one of the most complicated procedures in image processing and plays an important role in image analysis. In this paper, an improved pixon-based method for image segmentation is proposed. In the proposed algorithm, partial differential equations (PDEs) are used as a kernel function to construct the pixonal image. Using this kernel function reduces image noise and prevents over-segmentation when the pixon-based method is used. Utilising the PDE-based method eliminates some unnecessary details and results in fewer pixons, faster performance, and more robustness against unwanted environmental noise. As the next step, the appropriate pixons are extracted, and eventually the image is segmented using a Markov random field. The experimental results indicate that the proposed pixon-based approach has a reduced computational load and better accuracy compared with other existing pixon-based image segmentation techniques. To evaluate the proposed algorithm and compare it with the best existing algorithms, many experiments on standard images were performed. The results indicate that the proposed algorithm is faster than other methods, with the highest segmentation accuracy.
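As a sketch of what a PDE-based smoothing kernel can look like before pixon extraction, the snippet below implements classical Perona-Malik anisotropic diffusion; this is one common PDE chosen for illustration and is not the paper's exact kernel function.

```python
# Perona-Malik anisotropic diffusion (illustrative PDE smoothing).
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.15):
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # differences toward the four neighbours (periodic border via np.roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance keeps diffusion weak across strong edges
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

noisy = np.random.default_rng(0).normal(128, 20, size=(64, 64))
print(round(noisy.std(), 2), "->", round(anisotropic_diffusion(noisy).std(), 2))
```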

13.
Lung cancer is a dangerous disease that causes many deaths. Precise classification and differential diagnosis of lung cancer are essential, yet stable and accurate cancer identification remains challenging. A classification scheme for lung cancer in CT images was developed using a Kernel-based Non-Gaussian Convolutional Neural Network (KNG-CNN). KNG-CNN comprises three convolutional, two fully connected, and three pooling layers. Kernel-based non-Gaussian computation is used to handle false positives and errors encountered in the work. Initially, the Lung Image Database Consortium image collection (LIDC-IDRI) dataset is used for input images, and ROI-based segmentation using an efficient CLAHE technique is carried out as a preprocessing step, enhancing images for better feature extraction. Morphological features are extracted after the segmentation process. Finally, the KNG-CNN method is used for effective classification of tumours larger than 30 mm. An accuracy of 87.3% was obtained using this technique. This method is effective for classifying lung cancer from CT scan images.
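A sketch of the morphological feature extraction step on a binary tumour mask, using scikit-image region properties; the mask below is a synthetic blob and the chosen properties are illustrative.

```python
# Morphological features from a segmented binary mask.
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((128, 128), dtype=np.uint8)
mask[40:90, 50:95] = 1                           # stand-in segmented region

for region in regionprops(label(mask)):
    features = {
        "area": region.area,
        "perimeter": region.perimeter,
        "eccentricity": region.eccentricity,
        "solidity": region.solidity,
        "extent": region.extent,
    }
    print(features)
```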

14.
The need for a general-purpose Content Based Image Retrieval (CBIR) system for huge image databases has drawn information-technology researchers and institutions toward the development of CBIR techniques. These techniques include image feature extraction, segmentation, feature mapping, representation, semantics, indexing and storage, and image similarity-distance measurement and retrieval, making CBIR system development a challenge. Since medical images are large, running to megabits of data, they are compressed to reduce their size for storage and transmission. This paper investigates the medical image retrieval problem for compressed images, and an improved image classification algorithm for CBIR is proposed. In the proposed method, RAW images are compressed using the Haar wavelet. Features are extracted using a Gabor filter and a Sobel edge detector. The extracted features are classified using a Partial Recurrent Neural Network (PRNN). Since tuning the training parameters of a neural network is NP-hard, a hybrid Particle Swarm Optimization (PSO) - Cuckoo Search (CS) algorithm is proposed to optimize the learning rate of the neural network.
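A sketch of the feature extraction stage under simple assumptions: Gabor filter responses at a few orientations plus Sobel gradient magnitude, pooled into a small feature vector. The filter parameters and the synthetic image are illustrative, not the paper's settings.

```python
# Gabor + Sobel feature extraction on a grayscale image.
import numpy as np, cv2

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

features = []
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    # ksize, sigma, theta, lambda, gamma, psi
    kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
    resp = cv2.filter2D(img, cv2.CV_64F, kern)
    features += [resp.mean(), resp.std()]

gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
features += [np.hypot(gx, gy).mean()]            # edge-strength summary
print(np.round(features, 2))
```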

15.
To improve the accuracy with which the characteristic points (peak stress and peak strain) of the stress-strain model of FRP-confined concrete columns are computed, and to address the shortcomings of the approximate formulas for these characteristic points proposed in the existing literature, radial basis functions are introduced. Taking the axial compressive strength of the concrete, the tensile strength of the FRP, the volumetric ratio of FRP hoop confinement, the ratio of corner radius to the shorter side of the cross-section, and the aspect ratio of the cross-section as input parameters, and the peak stress ratio and peak strain ratio as output parameters, a radial basis function network model of the characteristic points is established. The calculated results of the model...
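A minimal numpy sketch of a radial basis function network of the kind described: a Gaussian RBF hidden layer with fixed centres maps the five input parameters to the two output ratios, and the output weights are fit by least squares. The data, centre selection, and bandwidth are synthetic assumptions.

```python
# RBF network regression: Gaussian hidden layer + linear output weights.
import numpy as np

def rbf_design(X, centres, sigma):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(size=(80, 5))                   # 5 normalized input parameters
y = rng.uniform(size=(80, 2))                   # peak stress ratio, peak strain ratio

centres = X[rng.choice(len(X), 15, replace=False)]   # 15 RBF centres from data
Phi = rbf_design(X, centres, sigma=0.5)
W, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # output-layer weights

pred = rbf_design(X[:3], centres, 0.5) @ W
print(np.round(pred, 3))
```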

16.
Due to the difficulties of brain tumor segmentation, this paper proposes a strategy for extracting brain tumors from three-dimensional Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans utilizing a 3D U-Net design and ResNet50, followed by conventional classification strategies. In this work, ResNet50 achieved 98.96% accuracy and the 3D U-Net scored 97.99% among the deep learning methods evaluated; a traditional Convolutional Neural Network (CNN) gives 97.90% accuracy on the 3D MRI data. In addition, the image fusion approach combines the multimodal images into a fused image to extract more features from the medical images. Besides that, the loss function was evaluated using several Dice-based measures, and Dice results were obtained on specific test cases. The average of the Dice coefficient and soft Dice loss scores over three test cases was 0.0980. For two test cases, the sensitivity and specificity were recorded as 0.0211 and 0.5867 using patch-level predictions. A software integration pipeline was also built to deploy the model to a web server so that it can be accessed from the software system via a Representational State Transfer (REST) API. Eventually, the suggested models were validated through the Area Under the Curve - Receiver Operating Characteristic (AUC-ROC) curve and the confusion matrix, and compared with existing research articles to understand the underlying problem. Through comparative analysis, meaningful insights regarding brain tumour segmentation were extracted and potential gaps were identified. The proposed model can be adapted in daily life and the healthcare domain to identify affected regions and brain cancer through various imaging modalities.
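A sketch of the Dice coefficient and soft Dice loss used to evaluate the segmentation output, assuming the standard definitions; the masks and probabilities below are random stand-ins.

```python
# Dice coefficient (hard masks) and soft Dice loss (probability map).
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    inter = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

def soft_dice_loss(pred_prob, true_mask, eps=1e-7):
    inter = (pred_prob * true_mask).sum()
    return 1.0 - (2.0 * inter + eps) / (pred_prob.sum() + true_mask.sum() + eps)

rng = np.random.default_rng(0)
truth = (rng.random((64, 64)) > 0.7).astype(float)
prob = np.clip(truth * 0.8 + rng.random((64, 64)) * 0.2, 0, 1)

print(round(dice_coefficient(prob > 0.5, truth > 0.5), 3),
      round(soft_dice_loss(prob, truth), 3))
```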

17.
Sound quality prediction models for diesel engine noise were built using multiple linear regression and an Artificial Neural Network (ANN), and the predictions of the two models were compared with measured values. The test results show that the ANN system can capture the nonlinear relationship between objective parameters and subjective satisfaction, and can be used to predict and describe the sound quality of diesel engine noise.
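A sketch of the comparison described above: fit a multiple linear regression and a small ANN (MLP) on the same objective noise parameters and compare their fit to a subjective rating. The data are synthetic, with a deliberately nonlinear term that the linear model cannot capture.

```python
# Multiple linear regression vs. a small neural network on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))                        # objective noise metrics
y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) - X[:, 2] ** 2 + 0.05 * rng.normal(size=200)

lin = LinearRegression().fit(X, y)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)
print("linear R^2:", round(lin.score(X, y), 3), " ANN R^2:", round(ann.score(X, y), 3))
```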

18.
By efficiently and accurately predicting the adoptability of pets, shelters and rescuers can be guided toward improving the attractiveness of pet profiles, reducing animal suffering and euthanasia. Previous prediction methods usually used only a single type of content for training. However, many pet profiles contain not only textual content but also images. To make full use of textual and visual information, this paper proposes a novel method to process pet profiles that contain multimodal information. We employed several CNN (Convolutional Neural Network)-based models and other methods to extract features from images and texts to obtain the initial multimodal representation, then reduced their dimensionality and fused them. Finally, we trained the fused features with two GBDT (Gradient Boosting Decision Tree)-based models and a Neural Network (NN), and compared the performance of the individual models and their ensemble. The evaluation results demonstrate that the proposed ensemble learning can improve prediction accuracy.
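A sketch of the fusion-and-ensemble idea: concatenate (dummy) image and text feature vectors, reduce dimensionality with PCA, then soft-vote a GBDT and a small neural network. The feature extractors and the specific GBDT implementation are not reproduced; the choices below are assumptions.

```python
# Early fusion of multimodal features + GBDT/NN soft-voting ensemble.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
img_feat = rng.normal(size=(300, 128))            # CNN image features (stand-in)
txt_feat = rng.normal(size=(300, 64))             # text features (stand-in)
y = rng.integers(0, 2, size=300)                  # adopted quickly or not

X = PCA(n_components=32).fit_transform(np.hstack([img_feat, txt_feat]))
gbdt = GradientBoostingClassifier(random_state=0).fit(X, y)
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

ensemble_prob = (gbdt.predict_proba(X) + nn.predict_proba(X)) / 2
print("ensemble train accuracy:", ((ensemble_prob[:, 1] > 0.5) == y).mean())
```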

19.
In medical imaging, segmenting brain tumors is a vital task that provides a way toward early diagnosis and treatment. Manual segmentation of brain tumors in magnetic resonance (MR) images is a time-consuming and challenging task, so there is a need for a computer-aided brain tumor segmentation approach. Using deep learning algorithms, a robust brain tumor segmentation approach is implemented by integrating a convolutional neural network (CNN) and multiple-kernel K-means clustering (MKKMC). In this proposed CNN-MKKMC approach, classification of MR images into normal and abnormal is performed by the CNN algorithm. Next, the MKKMC algorithm is employed to segment the brain tumor from the abnormal brain image. The proposed CNN-MKKMC algorithm is evaluated both visually and objectively in terms of accuracy, sensitivity, and specificity against existing segmentation methods. The experimental results demonstrate that the proposed CNN-MKKMC approach yields better accuracy in segmenting brain tumors with less time cost.
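A compact numpy sketch in the spirit of multiple-kernel K-means: several RBF kernels are combined into one kernel matrix, and kernel k-means assignments are run on it. The bandwidths, weights, and data are illustrative assumptions, not the paper's MKKMC formulation.

```python
# Combined-kernel construction followed by kernel k-means assignments.
import numpy as np

def combined_rbf_kernel(X, gammas, weights):
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return sum(w * np.exp(-g * d2) for g, w in zip(gammas, weights))

def kernel_kmeans(K, k, n_iter=30, seed=0):
    labels = np.random.default_rng(seed).integers(k, size=len(K))
    for _ in range(n_iter):
        dist = np.full((len(K), k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                continue
            # ||phi(x) - mu_c||^2 up to a constant, in kernel space
            dist[:, c] = np.diag(K) - 2 * K[:, idx].mean(1) + K[np.ix_(idx, idx)].mean()
        labels = dist.argmin(axis=1)
    return labels

X = np.random.default_rng(1).normal(size=(150, 8))      # voxel/patch features
K = combined_rbf_kernel(X, gammas=(0.1, 1.0), weights=(0.5, 0.5))
print(np.bincount(kernel_kmeans(K, k=2)))
```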

20.
Internet of Things (IoT) paves a new direction in the domain of smart farming and precision agriculture. Smart farming is an upgraded version of agriculture aimed at improving cultivation practices and yield. In smart farming, IoT devices are linked with one another and with new technologies to improve agricultural practices and contribute to effective decision making. Rice is a major food source in many countries, so it is essential to detect rice plant diseases at early stages with the help of automated tools and IoT devices. The development and application of Deep Learning (DL) models in agriculture offer a way to detect rice diseases early and increase yield and profit. This study presents a new Convolutional Neural Network-based Inception with ResNet v2 model and Optimal Weighted Extreme Learning Machine (CNNIR-OWELM)-based rice plant disease diagnosis and classification model in a smart farming environment. The proposed CNNIR-OWELM method involves a set of IoT devices which capture images of rice plants and transmit them to a cloud server via the Internet. The CNNIR-OWELM method uses a histogram segmentation technique to determine the affected regions in the rice plant image. In addition, a DL-based Inception with ResNet v2 model is used to extract features. In OWELM, the Weighted Extreme Learning Machine (WELM), optimized by the Flower Pollination Algorithm (FPA), is employed for classification. The FPA is incorporated into WELM to determine the optimal parameters, such as the regularization coefficient C and the kernel parameter. The presented model was validated against a benchmark image dataset and the results were compared with one another. The simulation results showed that the presented model effectively diagnosed the disease with a high sensitivity of 0.905, specificity of 0.961, and accuracy of 0.942.
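A minimal numpy sketch of a weighted extreme learning machine (WELM) of the kind described above: a random hidden layer, per-sample class weights that up-weight minority classes, and a ridge-regularized closed-form output layer. The FPA tuning of C and the kernel parameter is omitted, and the data are synthetic.

```python
# Weighted ELM: beta = (H^T W H + I/C)^{-1} H^T W T, with W a diagonal class-weight matrix.
import numpy as np

def welm_train(X, y, n_hidden=50, C=10.0, seed=0):
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W_in + b)                         # hidden-layer outputs
    T = np.eye(y.max() + 1)[y]                        # one-hot targets
    counts = np.bincount(y)
    w = 1.0 / counts[y]                               # up-weight minority classes
    HW = H * w[:, None]
    beta = np.linalg.solve(H.T @ HW + np.eye(n_hidden) / C, H.T @ (T * w[:, None]))
    return W_in, b, beta

def welm_predict(X, W_in, b, beta):
    return (np.tanh(X @ W_in + b) @ beta).argmax(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12)); y = rng.integers(0, 3, size=200)
params = welm_train(X, y)
print((welm_predict(X, *params) == y).mean())
```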

