Similar Documents
20 similar documents retrieved.
1.
In the machine learning (ML) paradigm, data augmentation serves as a regularization technique when building ML models. Increasing the diversity of training samples improves generalization, which in turn enhances the prediction performance of classifiers on unseen examples. Deep learning (DL) models contain many parameters and therefore overfit easily; augmenting the training data is one of the main ways recent DL advances avoid overfitting. Nevertheless, reliable data collection remains a major limiting factor, and the problem is usually addressed by combining data augmentation, transfer learning, dropout, and batch normalization. In this paper, we apply data augmentation to image classification using Random Multi-model Deep Learning (RMDL), which ensembles multiple randomly configured deep models for classification. We present a methodology that uses Generative Adversarial Networks (GANs) to generate images for data augmentation. Through experiments, we find that feeding GAN-generated samples into RMDL improves both accuracy and model efficiency. Experiments on the MNIST and CIFAR-10 datasets show that the proposed approach reduces the error rate across different random models.
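Below is a minimal sketch of the augmentation idea described above: samples from an already trained GAN generator are appended to the real training set before a classifier is fitted. It is not the paper's RMDL pipeline; the single CNN, the `fake_label_fn` callback, and the latent-dimension assumption are all illustrative placeholders.

```python
# Hypothetical sketch: mixing GAN-generated samples into the training set
# of an image classifier. A single CNN stands in for one RMDL member.
import numpy as np
import tensorflow as tf

def augment_with_gan(generator, x_train, y_train, n_fake, fake_label_fn):
    """Generate n_fake images from a trained GAN generator and append them
    to the real training set; fake_label_fn assigns labels to the samples
    (e.g., from a class-conditional generator)."""
    noise = tf.random.normal((n_fake, generator.input_shape[-1]))
    x_fake = generator.predict(noise, verbose=0)
    y_fake = fake_label_fn(x_fake)
    return (np.concatenate([x_train, x_fake]),
            np.concatenate([y_train, y_fake]))

def build_random_cnn(input_shape, n_classes, n_conv_blocks=2, filters=32):
    """One randomly sized CNN, standing in for a single RMDL model."""
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=input_shape)])
    for _ in range(n_conv_blocks):
        model.add(tf.keras.layers.Conv2D(filters, 3, activation="relu",
                                         padding="same"))
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```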

2.
From fraud detection to speech recognition and price prediction, Machine Learning (ML) applications are manifold and can significantly improve many areas. Nevertheless, machine learning models are vulnerable and exposed to various security and privacy attacks, so these issues must be addressed to preserve the security and privacy of the data used. There is a particular need to secure ML models during the training phase, to protect the privacy of the training datasets and minimise information leakage. In this paper, we present an overview of ML threats and vulnerabilities and highlight current progress in research on defence techniques against ML security and privacy attacks. The relevant background for attacks occurring in both the training and testing/inference phases is introduced before a detailed overview of Membership Inference Attacks (MIA) and the related countermeasures. We then introduce a countermeasure against MIA on Convolutional Neural Networks (CNN) based on dropout and L2 regularization. Through experimental analysis, we demonstrate that this defence technique can mitigate the risks of MIA while maintaining acceptable model accuracy. Training CNN models on the CIFAR-10 and CIFAR-100 datasets, we empirically verify the ability of our defence strategy to decrease the impact of MIA and compare the results of five different classifiers. Moreover, we present a solution that achieves a trade-off between model performance and MIA mitigation.
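The two defence mechanisms named above, dropout and L2 regularization, can be combined in a standard CNN as sketched below. This is not the authors' exact architecture; the layer layout, the L2 coefficient, and the dropout rate are assumptions for illustration.

```python
# Minimal sketch: a CIFAR-style CNN combining dropout and L2 weight
# regularization, the two mechanisms used to dampen membership inference.
import tensorflow as tf

def build_regularized_cnn(input_shape=(32, 32, 3), n_classes=10,
                          l2_coeff=1e-4, dropout_rate=0.5):
    reg = tf.keras.regularizers.l2(l2_coeff)
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(dropout_rate),   # dropout limits memorization
        tf.keras.layers.Dense(128, activation="relu", kernel_regularizer=reg),
        tf.keras.layers.Dropout(dropout_rate),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Both knobs trade training accuracy for a smaller gap between training and test confidence, which is what membership inference exploits.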

3.
In this study, multi-objective genetic algorithms (GAs) are introduced into partial least squares (PLS) model building. The method aims to improve the performance and robustness of the PLS model by removing samples with systematic errors, including outliers, from the original data; the multi-objective GA optimizes the combination of samples to be removed. Separate training and validation sets are used to reduce the undesirable effects of over-fitting to the training set by the multi-objective GA, and reducing this over-fitting leads to accurate and robust PLS models. To clearly visualize the sources of the systematic errors, an index defined from the original PLS model and a specific Pareto-optimal solution is also introduced. The method is applied to three kinds of near-infrared (NIR) spectra to build PLS models. The results demonstrate that the multi-objective GA significantly improves the performance of the PLS models, and that sample selection by the multi-objective GA enhances the ability of the PLS models to detect samples with systematic errors.
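The core of such a scheme is the pair of objectives scored for each candidate subset of removed samples. The sketch below shows one plausible objective function using scikit-learn's PLS regression; the GA itself (e.g., an NSGA-II loop), the exact objective definitions, and the number of latent variables are assumptions, not the paper's implementation.

```python
# Illustrative sketch: two objectives a multi-objective GA might evaluate
# for a candidate boolean mask of retained calibration samples.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

def pls_objectives(keep_mask, X_train, y_train, X_val, y_val, n_components=5):
    """Return (training RMSE, validation RMSE) for a PLS model built on the
    samples selected by keep_mask; minimizing both discourages solutions
    that merely overfit the training set."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train[keep_mask], y_train[keep_mask])
    rmse_train = np.sqrt(mean_squared_error(y_train[keep_mask],
                                            pls.predict(X_train[keep_mask])))
    rmse_val = np.sqrt(mean_squared_error(y_val, pls.predict(X_val)))
    return rmse_train, rmse_val
```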

4.
Cybersecurity solutions have become commonplace in this digital era because they ensure security and privacy against cyberattacks. Malicious Uniform Resource Locators (URLs) can be embedded in email or Twitter posts and used to lure vulnerable internet users into introducing malicious data into their systems. This may result in compromised systems, scams, and other cyberattacks that hijack huge quantities of data and incur heavy financial losses. At the same time, Machine Learning (ML) and Deep Learning (DL) models have paved the way for designing models that can accurately detect and classify malicious URLs. With this motivation, the current article develops an Artificial Fish Swarm Algorithm (AFSA) with Deep Learning Enabled Malicious URL Detection and Classification (AFSADL-MURLC) model. The presented AFSADL-MURLC model aims to differentiate malicious URLs from genuine URLs. To attain this, the model first carries out data preprocessing and applies a GloVe-based word embedding technique. The resulting vector representation is then passed to a Gated Recurrent Unit (GRU) classifier to recognize the malicious URLs. Finally, AFSA is applied to enhance the efficiency of the GRU model. The proposed AFSADL-MURLC technique was experimentally validated using a benchmark dataset sourced from the Kaggle repository. The simulation results confirmed the supremacy of the proposed AFSADL-MURLC model over recent approaches under distinct measures.
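A minimal sketch of the embedding-plus-GRU classification stage is given below, assuming URL tokens have already been integer-encoded and a GloVe-derived embedding matrix is available. The layer sizes are assumptions, and the AFSA hyperparameter search is not shown.

```python
# Sketch of an embedding + GRU binary URL classifier (malicious vs. benign).
import tensorflow as tf

def build_gru_url_classifier(vocab_size, embedding_dim, embedding_matrix=None):
    if embedding_matrix is not None:
        # Initialize the embedding layer from pre-trained (e.g., GloVe) vectors.
        init = tf.keras.initializers.Constant(embedding_matrix)
    else:
        init = "uniform"
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  embeddings_initializer=init),
        tf.keras.layers.GRU(64),                         # sequence encoder
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # malicious probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```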

5.
Data fusion is one of the challenging issues the healthcare sector has faced in recent years, and proper diagnosis from digital imagery and treatment are deemed to be the right solution. Intracerebral Haemorrhage (ICH), a condition characterized by injury to blood vessels in brain tissue, is one of the major causes of stroke. Images generated by X-rays and Computed Tomography (CT) are widely used for estimating the size and location of haemorrhages, and radiologists use manual planimetry, a time-consuming process, to segment CT scan images. Deep Learning (DL) is the most preferred method for increasing the efficiency of diagnosing ICH. In this paper, the researchers present a unique multi-modal data fusion-based feature extraction technique with a Deep Learning model for Intracranial Haemorrhage detection and classification, abbreviated FFEDL-ICH. The proposed FFEDL-ICH model has four stages: preprocessing, image segmentation, feature extraction, and classification. The input image is first preprocessed using the Gaussian Filtering (GF) technique to remove noise. Secondly, the Density-based Fuzzy C-Means (DFCM) algorithm is used to segment the images. Furthermore, the fusion-based feature extraction model combines handcrafted features (Local Binary Patterns) and deep features (Residual Network-152) to extract useful features. Finally, a Deep Neural Network (DNN) is implemented as the classification technique to differentiate multiple classes of ICH. The researchers used a benchmark Intracranial Haemorrhage dataset and simulated the FFEDL-ICH model to assess its diagnostic performance. The findings revealed that the proposed FFEDL-ICH model outperforms existing models with a significant improvement in performance. For future research, the researchers recommend improving the performance of the FFEDL-ICH model using learning-rate scheduling techniques for the DNN.
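The fusion of handcrafted and deep features described above can be sketched for a single image as follows. This is an assumption-laden illustration: the LBP parameters, histogram binning, and input size are placeholders, and the preprocessing, DFCM segmentation, and final DNN classifier are omitted.

```python
# Hedged sketch: concatenating an LBP histogram (handcrafted) with
# ResNet-152 features (deep) for one 224x224 RGB image.
import numpy as np
import tensorflow as tf
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

resnet = tf.keras.applications.ResNet152(weights="imagenet",
                                         include_top=False, pooling="avg")

def fused_features(image_rgb_224):
    """image_rgb_224: float array of shape (224, 224, 3) with values in [0, 255]."""
    # Handcrafted branch: uniform LBP histogram of the grayscale image.
    lbp = local_binary_pattern(rgb2gray(image_rgb_224), P=8, R=1,
                               method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Deep branch: 2048-D global-average-pooled ResNet-152 features.
    x = tf.keras.applications.resnet.preprocess_input(
        image_rgb_224[np.newaxis].copy())
    deep = resnet.predict(x, verbose=0)[0]
    return np.concatenate([hist, deep])
```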

6.
Coronavirus disease (COVID-19) was first identified in Wuhan, China, and was subsequently acknowledged as a global pandemic. The World Health Organization (WHO) stated that COVID-19 is an epidemic with a death rate of about 3.4%. Chest X-Ray (CXR) and Computerized Tomography (CT) screening of infected persons are essential for diagnosis. There are numerous ways to identify positive COVID-19 cases, and one of the fundamental ones is radiology imaging through CXR or CT images. A comparison of CT and CXR scans revealed that CT scans are more effective in the diagnosis process due to their high quality. Hence, automated classification techniques are required to facilitate the diagnosis process. Deep Learning (DL) is an effective tool for detecting and classifying this type of medical image: deep Convolutional Neural Networks (CNNs) can learn and extract essential features from different medical image datasets. In this paper, a CNN architecture for automated COVID-19 detection from CXR and CT images is proposed. Three activation functions as well as three optimizers are tested and compared for this task. The proposed architecture is built from scratch, and the COVID-19 image datasets are fed directly to train it. Performance is tested and investigated on the CT and CXR datasets. The three activation functions, Tanh, Sigmoid, and ReLU, are compared using a constant learning rate and different batch sizes, and different optimizers are studied with different batch sizes and a constant learning rate. Finally, a comparison between different combinations of activation functions and optimizers is presented and the optimal configuration is determined. The main objective is thus to improve the detection accuracy of COVID-19 from CXR and CT images using DL by employing CNNs to classify medical COVID-19 images at an early stage. The proposed model achieves a classification accuracy of 91.67% on the CXR image dataset and 100% on the CT dataset, with training times of 58 min and 46 min on the CXR and CT datasets, respectively. The best results are obtained using the ReLU activation function combined with the SGDM optimizer at a learning rate of 10⁻⁵ and a mini-batch size of 16.
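The reported best configuration (ReLU activations, SGD with momentum, learning rate 1e-5, mini-batch size 16) can be wired up as sketched below. The layer layout of the paper's from-scratch CNN is not given in the abstract, so the architecture here is an assumption.

```python
# Sketch of a small from-scratch CNN using the reported best configuration.
import tensorflow as tf

def build_covid_cnn(input_shape=(224, 224, 1), n_classes=2):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    sgdm = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9)  # SGDM
    model.compile(optimizer=sgdm, loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training with the reported mini-batch size:
# model.fit(x_train, y_train, batch_size=16, epochs=50, validation_split=0.1)
```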

7.
Industry 4.0 production environments and smart manufacturing systems integrate both the physical and decision-making aspects of manufacturing operations into autonomous and decentralized systems. One of the key aspects of these systems is production planning, specifically scheduling operations on the machines. To cope with this problem, this paper proposes Deep Reinforcement Learning with an Actor-Critic algorithm (DRLAC). We model the Job-Shop Scheduling Problem (JSSP) as a Markov Decision Process (MDP), represent the state of the JSSP with a Graph Isomorphism Network (GIN) that extracts node features during scheduling, and derive an optimal scheduling policy that maps these node features to the best next scheduling action. In addition, we adopt the Actor-Critic (AC) training algorithm from reinforcement learning to achieve the optimal scheduling policy. To prove the proposed model's effectiveness, we first present a case study that illustrates a conflict between two job schedules, and we then apply the proposed model to a known benchmark dataset and compare the results with traditional scheduling methods and trending approaches. The numerical results indicate that the proposed model can adapt to real-time production scheduling: the average percentage deviation (APD) of our model reached values between 0.009 and 0.21 compared with heuristic methods, and between 0.014 and 0.18 compared with other trending approaches.
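The actor-critic idea can be sketched independently of the scheduling specifics. The following is a generic one-step advantage actor-critic update in PyTorch; the GIN state encoder, the JSSP environment, and the `policy_net`/`value_net` definitions are assumptions standing in for the paper's components, and a single optimizer over both networks is assumed.

```python
# Illustrative one-step advantage actor-critic update (not the DRLAC model).
import torch
import torch.nn.functional as F

def a2c_update(policy_net, value_net, optimizer, state, action,
               reward, next_state, done, gamma=0.99):
    """The critic's TD error serves as the advantage weighting the actor's
    log-probability gradient; the critic regresses toward the TD target."""
    value = value_net(state)
    with torch.no_grad():
        target = reward + gamma * value_net(next_state) * (1.0 - done)
    advantage = (target - value).detach()

    log_probs = F.log_softmax(policy_net(state), dim=-1)
    actor_loss = -(log_probs[action] * advantage).sum()  # policy gradient
    critic_loss = F.mse_loss(value, target)              # value regression

    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()
```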

8.
Presently, suspect prediction for crime scenes can be considered a classification task that predicts suspects based on the time, place, and type of crime. Performing digital forensic investigation in a big data environment poses several challenges to the investigating officer. Besides, facial sketches are widely employed by law enforcement agencies to assist in identifying suspects involved in crime scenes. The sketches used in forensic investigations are either drawn by forensic artists or generated by computer programs (composite sketches) based on the verbal description given by the eyewitness or victim. Since this suspect identification process is slow and difficult, a technique for quick and automated facial sketch generation is required. Machine Learning (ML) and deep learning (DL) models are useful for automatically supporting the decisions of forensics experts; the challenge is to incorporate domain-expert knowledge into DL models to develop efficient techniques that make better decisions. In this view, this study develops a new artificial intelligence (AI) based DL model with face sketch synthesis (FSS) for suspect identification (DLFSS-SI) in a big data environment. The proposed method performs preprocessing in the primary stage to improve image quality. In addition, the proposed model uses a DL-based MobileNet (MN) model as the feature extractor, and the hyperparameters of the MobileNet are tuned by a quasi-oppositional firefly optimization (QOFFO) algorithm. The proposed model automatically draws sketches of the input facial images. Moreover, a qualitative similarity assessment is carried out against the sketch drawn by a professional artist from the eyewitness description; if there is a high resemblance between the two sketches, the suspect is determined. To validate the performance of the DLFSS-SI method, a detailed qualitative and quantitative examination was carried out. The experimental outcomes show that the DLFSS-SI model outperforms the compared methods in terms of mean square error (MSE), peak signal-to-noise ratio (PSNR), average accuracy, and average computation time.

9.
Cyberbullying (CB) is a challenging issue in social media, and it is important to effectively identify its occurrence. Recently developed deep learning (DL) models pave the way to design CB classifier models with maximum performance, while optimal hyperparameter tuning plays a vital role in enhancing overall results. This study introduces a Teacher Learning Genetic Optimization with Deep Learning Enabled Cyberbullying Classification (TLGODL-CBC) model for social media. The proposed TLGODL-CBC model aims to identify the existence or non-existence of CB in a social media context. Initially, the input data is cleaned and pre-processed to make it compatible for further processing. Next, an independent recurrent autoencoder (IRAE) model is utilized for the recognition and classification of CB. Finally, the TLGO algorithm is used to optimally adjust the parameters of the IRAE model, which constitutes the novelty of the work. To confirm the improved outcomes of the TLGODL-CBC approach, a wide range of simulations is executed and the outcomes are investigated from several aspects. The simulation outcomes confirm the improvements of the TLGODL-CBC model over recent approaches.

10.
With the increase in research on AI (Artificial Intelligence), the importance of DL (Deep Learning) in various fields, such as materials, biotechnology, genomes, and new drugs, is increasing significantly, and with it the number of deep-learning framework users. However, designing a deep neural network requires considerable understanding of the framework. To solve this problem, GUI (Graphical User Interface)-based DNN (Deep Neural Network) design tools are being actively researched and developed: they allow DNNs to be designed quickly and easily. However, existing GUI-based DNN design tools have certain limitations, such as poor usability, framework dependency, and difficulty in changing GUI components. In this study, a template-based deep learning design approach was developed to address the usability problem and increase accessibility for users, and the proposed tool was built to save and share only the necessary parts for quick operation. To remove framework dependency, we applied ONNX (Open Neural Network Exchange), an exchange standard for neural networks, and configured the tool so that DNNs designed with existing deep-learning frameworks can be imported. Finally, to address the difficulty of changing GUI components, we defined and developed a JSON format that allows quick responses to version updates. The developed DL neural network designer was validated by running it on KISTI's supercomputer-based AI Studio.
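The ONNX interchange step that removes framework dependency can be sketched as below: a network built in one framework (PyTorch here, purely as an example) is exported to ONNX and re-loaded in a framework-neutral way for inspection by any ONNX-aware tool. The toy model and the file name are illustrative, not part of the described designer.

```python
# Sketch of ONNX export/import for framework-independent model exchange.
import torch
import onnx

model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
)
dummy_input = torch.randn(1, 16)

# Export the designed network to the framework-neutral ONNX format.
torch.onnx.export(model, dummy_input, "designed_dnn.onnx")

# Any ONNX-aware tool (such as a GUI designer) can re-import and inspect it.
reloaded = onnx.load("designed_dnn.onnx")
onnx.checker.check_model(reloaded)
print([node.op_type for node in reloaded.graph.node])
```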

11.
In recent years, progressive developments have been observed in emerging technologies, and production costs have been continuously decreasing. In this scenario, Internet of Things (IoT) networks composed of sets of Unmanned Aerial Vehicles (UAVs) have received increasing attention in applications ranging from civilian to military. But network security poses a serious challenge to UAV networks, and the intrusion detection system (IDS) is found to be an effective way to secure them. Classical IDSs are not adequate for the latest computer networks, which carry high bandwidth and heavy data traffic. In order to improve detection performance and reduce the false alarms generated by IDSs, several researchers have employed Machine Learning (ML) and Deep Learning (DL) algorithms to address the intrusion detection problem. In this view, the current research article presents a deep reinforcement learning technique optimized by the Black Widow Optimization (DRL-BWO) algorithm for UAV networks. The DRL component involves an improved reinforcement learning-based Deep Belief Network (DBN) for intrusion detection, and the BWO algorithm is applied for parameter optimization of the DRL technique, which helps improve the intrusion detection performance of UAV networks. An extensive set of experimental analyses was performed to highlight the supremacy of the proposed model. The simulation values show that the proposed method is appropriate, as it attained high precision, recall, F-measure, and accuracy values of 0.985, 0.993, 0.988, and 0.989, respectively.

12.
Deep learning (DL) techniques, which do not need complex pre-processing and feature analysis, are used in many areas of medicine and achieve promising results. On the other hand, in medical studies, a limited dataset decreases the abstraction ability of the DL model. In this context, we aimed to produce synthetic brain images covering three tumor types (glioma, meningioma, and pituitary), unlike traditional data augmentation methods, and to classify them with DL. This study proposes a tumor classification model built on a Dense Convolutional Network (DenseNet121)-based DL model, chosen to prevent forgetting problems in deep networks and to maintain information flow between layers. By comparing models trained on two different datasets, we demonstrate the effect of synthetic images generated by a Cycle Generative Adversarial Network (CycleGAN) on the generalization of DL: one model is trained only on the original dataset, while the other is trained on the combined dataset of synthetic and original images. Synthetic data generated by CycleGAN improved the best accuracy values for the glioma, meningioma, and pituitary tumor classes from 0.9633, 0.9569, and 0.9904 to 0.9968, 0.9920, and 0.9952, respectively. The model developed using synthetic data obtained a higher accuracy value than related studies in the literature. Additionally, beyond pixel-level and affine-transform data augmentation, synthetic data has been generated for the figshare brain dataset for the first time.
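A hedged sketch of the classification side of this setup follows: a DenseNet121-based classifier fitted on the union of original and CycleGAN-generated images. The CycleGAN itself, the input resolution, and the training schedule are assumptions; array names are placeholders.

```python
# Sketch: DenseNet121 classifier trained on original + synthetic images.
import numpy as np
import tensorflow as tf

def build_densenet_classifier(n_classes=3, input_shape=(224, 224, 3)):
    base = tf.keras.applications.DenseNet121(weights="imagenet",
                                             include_top=False,
                                             pooling="avg",
                                             input_shape=input_shape)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Combined dataset = original images + CycleGAN-generated images:
# x_combined = np.concatenate([x_original, x_synthetic])
# y_combined = np.concatenate([y_original, y_synthetic])
# build_densenet_classifier().fit(x_combined, y_combined, epochs=30, batch_size=32)
```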

13.
The Internet of Things (IoT) paves a new direction in the domain of smart farming and precision agriculture. Smart farming is an upgraded version of agriculture aimed at improving cultivation practices and yield; in smart farming, IoT devices are linked with one another and with new technologies to improve agricultural practices and contribute to effective decision making. Rice is the major food source in most countries, so it is essential to detect rice plant diseases at early stages with the help of automated tools and IoT devices. The development and application of Deep Learning (DL) models in agriculture offers a way to detect rice diseases early and increase yield and profit. This study presents a new Convolutional Neural Network-based Inception with ResNet v2 model and Optimal Weighted Extreme Learning Machine (CNNIR-OWELM)-based rice plant disease diagnosis and classification model for the smart farming environment. The proposed CNNIR-OWELM method involves a set of IoT devices that capture images of rice plants and transmit them to a cloud server via the internet. The method uses a histogram segmentation technique to determine the affected regions in a rice plant image, and a DL-based Inception with ResNet v2 model is engaged to extract the features. In the OWELM stage, a Weighted Extreme Learning Machine (WELM), optimized by the Flower Pollination Algorithm (FPA), is employed for classification; the FPA is incorporated into the WELM to determine optimal parameters such as the regularization coefficient C and the kernel parameter. The presented model was validated against a benchmark image dataset and compared with other methods. The simulation results show that the presented model effectively diagnoses the disease, with a high sensitivity of 0.905, specificity of 0.961, and accuracy of 0.942.

14.
Nowadays, the Internet of Things (IoT) has penetrated all facets of human life, while IoT devices remain heavily prone to cyberattacks. It has therefore become important to develop an accurate system that can detect malicious attacks in IoT environments in order to mitigate security risks. The botnet is one of the most dreadful malicious entities and has affected many users over the past few decades. Botnets are challenging to recognize because of their strong carrying and hiding capacities, and various approaches have been employed to identify botnet sources at earlier stages. Machine Learning (ML) and Deep Learning (DL) techniques have been developed under the heavy influence of botnet detection methodology. In spite of this, detecting botnets at early stages is still challenging due to the low number of features accessible from botnet datasets. The current study devises an IoT with Cloud Assisted Botnet Detection and Classification utilizing Rat Swarm Optimizer with Deep Learning (BDC-RSODL) model. The presented BDC-RSODL model includes a series of processes: pre-processing, feature subset selection, classification, and parameter tuning. Initially, the network data is pre-processed to make it compatible for further processing. Next, the RSO algorithm is exploited for effective selection of a feature subset. Additionally, a Long Short Term Memory (LSTM) algorithm is utilized for both identification and classification of botnets. Finally, the Sine Cosine Algorithm (SCA) is executed to fine-tune the hyperparameters of the LSTM model. To validate the promising performance of the BDC-RSODL system, a comprehensive comparison analysis was conducted, and the obtained results confirmed the supremacy of the BDC-RSODL model over recent approaches.
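A minimal sketch of the LSTM classification stage is shown below, assuming network flows have already been pre-processed into fixed-length sequences of the selected features. The layer sizes are assumptions, and the RSO feature selection and SCA hyperparameter tuning steps are not reproduced.

```python
# Sketch of an LSTM-based botnet/benign classifier over flow sequences.
import tensorflow as tf

def build_botnet_lstm(timesteps, n_features):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(timesteps, n_features)),
        tf.keras.layers.LSTM(64),                        # temporal encoder
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # botnet probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```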

15.
White blood cells (WBC), or leukocytes, are a vital component of the blood that forms the immune system, which is responsible for fighting foreign elements. WBC images can be analysed with different data analysis approaches that categorize the different kinds of WBC. Conventionally, laboratory tests are carried out to determine the kind of WBC, which is error-prone and time consuming. Recently, deep learning (DL) models have made it possible to investigate WBC images automatically and within a short duration. Therefore, this paper introduces an Aquila Optimizer with Transfer Learning based Automated White Blood Cells Classification (AOTL-WBCC) technique. The presented AOTL-WBCC model executes data normalization and data augmentation (rotation and zooming) in the initial stage. In addition, a residual network (ResNet) approach is used for feature extraction, in which the initial hyperparameter values of the ResNet model are tuned by the AO algorithm. Finally, a Bayesian neural network (BNN) classification technique is applied to assign WBC images to distinct classes. The experimental validation of the AOTL-WBCC methodology is performed with the help of a Kaggle dataset. The experimental results show that the AOTL-WBCC model outperforms other techniques based on image processing and manual feature engineering approaches under different dimensions.

16.
This study proposes a measurement platform for continuous blood pressure estimation based on dual photoplethysmography (PPG) sensors and a deep learning (DL) model that can be used for continuous, rapid measurement of blood pressure and analysis of cardiovascular-related indicators. The proposed platform measures signal changes in PPG and converts them into physiological indicators such as pulse transit time (PTT), pulse wave velocity (PWV), perfusion index (PI), and heart rate (HR); these indicators are then fed into the DL model to calculate blood pressure. The experimental hardware comprised two PPG components, a Raspberry Pi 3 Model B, and an analog-to-digital converter (MCP3008), connected using a serial peripheral interface. The DL algorithm converts the stable dual PPG signals, acquired through a strictly standardized experimental process, into the physiological indicators used as input parameters and finally obtains the systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP). To increase the robustness of the DL model, this study entered data from 100 Asian participants into the training database, with participants with and without cardiovascular disease each accounting for approximately 50%. The experimental results revealed mean absolute errors and standard deviations of 0.17 ± 0.46 mmHg for SBP, 0.27 ± 0.52 mmHg for DBP, and 0.16 ± 0.40 mmHg for MAP.
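An illustrative sketch of the indicator-to-pressure mapping follows: a small regression network taking the four derived indicators (PTT, PWV, PI, HR) and producing SBP, DBP, and MAP. The abstract does not specify the network architecture, so the layer layout and loss are assumptions.

```python
# Sketch: regression network mapping (PTT, PWV, PI, HR) -> (SBP, DBP, MAP).
import tensorflow as tf

def build_bp_regressor():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(4,)),               # PTT, PWV, PI, HR
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(3),                        # SBP, DBP, MAP
    ])
    model.compile(optimizer="adam", loss="mae")          # errors reported in mmHg
    return model
```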

17.
Data fusion is a multidisciplinary research area spanning several domains. It is used to attain the minimum detection-error probability and maximum reliability with the help of data retrieved from multiple healthcare sources. The generation of huge quantities of data from medical devices has resulted in big data, for which data fusion techniques become essential. Securing medical data is a crucial issue in today's rapidly evolving computing world and can be achieved with Intrusion Detection Systems (IDS). Since a single modality is not adequate to attain a high detection rate, there is a need to merge diverse techniques using a decision-based multimodal fusion process. In this view, this research article presents a new multimodal fusion-based IDS to secure healthcare data using Spark. The proposed model involves a decision-based fusion model with several stages: initialization, pre-processing, Feature Selection (FS), and multimodal classification for effective detection of intrusions. In the FS process, a chaotic Butterfly Optimization (BO) algorithm called CBOA is introduced: although the classic BO algorithm offers effective exploration, it fails to achieve fast convergence, so this work modifies the required parameters of the BO algorithm using chaos theory to improve the convergence rate. Finally, to detect intrusions, a multimodal classifier is applied by incorporating three Deep Learning (DL)-based classification models. Hadoop MapReduce and Spark are also utilized to achieve faster computation of big data on a parallel computing platform. To validate the outcome of the presented model, a series of experiments was performed on the benchmark NSLKDDCup99 dataset. The proposed model demonstrated effective results on the applied dataset, offering a maximum accuracy of 99.21%, precision of 98.93%, and detection rate of 99.59%, which confirms its superiority.

18.
In recent years, convolutional neural networks (CNNs) have proven to be powerful tools for a broad range of computer vision tasks. However, training a CNN from scratch is difficult because it requires a large amount of labeled training data, which remains a challenge in the medical imaging domain. To this end, deep transfer learning (TL) techniques are widely used for many medical image tasks. In this paper, we propose a novel multisource transfer learning CNN model for lymph node detection. The mechanism behind it is straightforward: point-wise (1 × 1) convolution is used to fuse multisource transfer learning knowledge. Concretely, we view the transferred features as prior domain knowledge, and a 1 × 1 convolutional operation is applied after the pre-trained convolution layers to adaptively combine the transferred information for the target task. In order to learn non-linear transferred features and prevent over-fitting, we present an encoding process for the pre-trained convolution kernels. Finally, based on a convolutional factorization technique, we train the proposed CNN model and the encoding process jointly, which improves the feasibility of our approach. The effectiveness of the proposed method is verified on a lymph node (LN) dataset: 388 mediastinal LNs labeled by radiologists in 90 patient CT scans, and 595 abdominal LNs in 86 patient CT scans for LN detection. Our method demonstrates sensitivities of about 85%/71% at 3 FP/vol. and 92%/85% at 6 FP/vol. for the mediastinum and abdomen respectively, which compares favorably to previous methods.
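The point-wise fusion step can be sketched in isolation: feature maps transferred from two pre-trained sources are concatenated along the channel axis and re-weighted by a 1 × 1 convolution. The feature-map shape and channel count below are assumptions, and the kernel encoding and joint factorized training described above are not shown.

```python
# Sketch: fusing two same-sized transferred feature maps with a 1x1 conv.
import tensorflow as tf

def pointwise_fusion(n_fused_channels, feat_shape=(8, 8, 256)):
    feat_a = tf.keras.layers.Input(shape=feat_shape)   # features from source A
    feat_b = tf.keras.layers.Input(shape=feat_shape)   # features from source B
    stacked = tf.keras.layers.Concatenate(axis=-1)([feat_a, feat_b])
    # The point-wise (1x1) convolution learns how to weight and mix sources.
    fused = tf.keras.layers.Conv2D(n_fused_channels, kernel_size=1,
                                   activation="relu")(stacked)
    return tf.keras.Model([feat_a, feat_b], fused)
```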

19.
Event detection (ED) aims to detect event occurrences and categorize them. This task has previously been solved via recognition and classification of event triggers (ETs), defined as the phrase or word most clearly expressing the event occurrence; current approaches therefore require both annotated triggers and event types in the training data. Nevertheless, triggers are non-essential to ED, and it is time-consuming for annotators to identify the “most clearly” expressing word in a sentence, particularly in longer sentences. To decrease manual effort, we evaluate event detection without triggers. We propose a novel framework that combines Type-aware Attention and Graph Convolutional Networks (TA-GCN) for event detection. Specifically, the task is cast as a multi-label classification problem: we first encode the input sentence using a novel type-aware neural network with attention mechanisms, and then a Graph Convolutional Network (GCN)-based multi-label classification model is exploited for event detection. Experimental results demonstrate the effectiveness of the proposed framework.

20.
The detection of alcoholism is of great importance due to its effects on individuals and society. Automatic alcoholism detection systems (AADS) based on electroencephalogram (EEG) signals are effective, but designing a robust AADS is a challenging problem. Current AADS designs are based on conventional, hand-engineered methods with restricted performance. Driven by the excellent success of deep learning (DL) in many recognition tasks, we implement an AAD system based on EEG signals using DL. A DL model requires a huge number of learnable parameters and also needs a large dataset of EEG signals for training, which is not easy to obtain for the AAD problem. To solve this problem, we propose a multi-channel pyramidal convolutional neural network (MP-CNN) that requires fewer learnable parameters. Using this deep CNN model, we build an AAD system to detect from EEG signal segments whether the subject is alcoholic or normal. We validate the robustness and effectiveness of the proposed AADS using KDD, a benchmark dataset for the alcoholism detection problem. In order to find the brain region that plays the most significant role in AAD, we investigated the effects of 19 selected EEG channels (SC-19), channels from the whole brain (ALL-61), and five brain regions, i.e., TEMP, OCCIP, CENT, FRONT, and PERI. The results show that SC-19 plays a significant role in AAD, with an accuracy of 100%. The comparison reveals that the proposed AADS outperforms state-of-the-art systems. The proposed AADS will be useful in medical diagnosis research and healthcare systems.
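A hedged sketch of such a classifier is given below: a compact 1-D convolutional network over multi-channel EEG segments shaped (time samples, channels) with a binary alcoholic/normal output. It is not the paper's MP-CNN; the segment length, filter sizes, and depth are assumptions, while channel counts such as 19 or 61 mirror the SC-19 and ALL-61 settings described above.

```python
# Illustrative 1-D CNN for EEG segment classification (alcoholic vs. normal).
import tensorflow as tf

def build_eeg_cnn(n_samples=256, n_channels=19):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_samples, n_channels)),
        tf.keras.layers.Conv1D(16, 7, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(32, 5, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # alcoholic probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```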
