Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Applied linguistics is an interdisciplinary domain which identifies, investigates, and offers solutions to language-related real-life problems. The novel coronavirus disease (COVID-19) has severely affected the everyday life of people all over the world. Specifically, since access to vaccines was insufficient and there was no direct or reliable treatment for coronavirus infection, governments initiated appropriate preventive measures (such as lockdowns, physical distancing, and masking) to combat this highly transmissible disease. As a result, individuals spent more time on online social media platforms (e.g., Twitter, Facebook, Instagram, LinkedIn, and Reddit) and expressed their thoughts and feelings about coronavirus infection. Twitter has become one of the most popular social media platforms and allows anyone to post tweets. This study proposes a sine cosine optimization with bidirectional gated recurrent unit-based sentiment analysis (SCOBGRU-SA) on COVID-19 tweets. The SCOBGRU-SA technique aims to detect and classify the various sentiments in Twitter data during the COVID-19 pandemic. To accomplish this, the SCOBGRU-SA technique follows data pre-processing and the FastText word-embedding process. Moreover, the BGRU model is utilized to recognize and classify the sentiments present in the tweets. Furthermore, the SCO algorithm is exploited for tuning the BGRU method's hyperparameters, which helps attain improved classification performance. The experimental validation of the SCOBGRU-SA technique takes place using a benchmark dataset, and the results signify its promising performance compared to other DL models.
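As an illustration only (not the authors' code), a minimal sketch of the BGRU classification stage in Keras could look as follows; the vocabulary size, sequence length, embedding matrix, and layer sizes are assumptions, and the sine cosine optimization step is omitted:

    # Minimal bidirectional-GRU sentiment classifier sketch (illustrative only).
    # Assumes tweets are already tokenized, padded, and mapped to integer ids.
    import numpy as np
    from tensorflow.keras import layers, models, initializers

    vocab_size, embed_dim, max_len, n_classes = 20000, 300, 60, 3   # assumed values
    fasttext_matrix = np.random.rand(vocab_size, embed_dim)         # placeholder for FastText vectors

    model = models.Sequential([
        layers.Input((max_len,)),
        layers.Embedding(vocab_size, embed_dim,
                         embeddings_initializer=initializers.Constant(fasttext_matrix),
                         trainable=False),                          # frozen FastText embeddings
        layers.Bidirectional(layers.GRU(128)),                      # BGRU layer; units would be tuned (e.g., by SCO)
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10)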

2.
With new developments in Internet of Things (IoT), wearable, and sensing technology, the value of healthcare services has been enhanced. This evolution has brought significant changes from conventional medicine-based healthcare to real-time observation-based healthcare. Biomedical Electrocardiogram (ECG) signals are generally utilized in the examination and diagnosis of Cardiovascular Diseases (CVDs) since they are quick and non-invasive in nature. With the increasing number of patients in recent years, classifier efficiency is reduced by the high variance observed in ECG signal patterns obtained from patients. In such scenarios, computer-assisted automated diagnostic tools are important for the classification of ECG signals. The current study devises an Improved Bat Algorithm with Deep Learning Based Biomedical ECG Signal Classification (IBADL-BECGC) approach. To accomplish this, the proposed IBADL-BECGC model initially pre-processes the input signals. Besides, the IBADL-BECGC model applies the NasNet model to derive features from the test ECG signals. In addition, the Improved Bat Algorithm (IBA) is employed to optimally fine-tune the hyperparameters of the NasNet approach. Finally, the Extreme Learning Machine (ELM) classification algorithm is executed to perform ECG classification. The presented IBADL-BECGC model was experimentally validated using a benchmark dataset. The comparison study outcomes established the improved performance of the IBADL-BECGC model over other existing methodologies, since the former achieved a maximum accuracy of 97.49%.
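For illustration, an Extreme Learning Machine classifier of the kind named above can be sketched in a few lines of NumPy; the feature matrix (e.g., NasNet-derived features), hidden-layer size, and class count are assumptions, and the bat-algorithm tuning is not shown:

    # Minimal Extreme Learning Machine (ELM) classifier sketch (illustrative only).
    import numpy as np

    def elm_fit(X, y, n_hidden=500, n_classes=5, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
        b = rng.normal(size=n_hidden)                 # random biases
        H = np.tanh(X @ W + b)                        # hidden-layer activations
        T = np.eye(n_classes)[y]                      # one-hot targets
        beta = np.linalg.pinv(H) @ T                  # output weights via pseudo-inverse
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)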

3.
The sentiment of a text depends on the clausal structure of the sentence and the discourse arguments of its connectives. In this work, the clause boundaries, discourse arguments, and syntactic and semantic information of the sentence are used to assign the text's sentiment. The clause boundaries identify the spans of the text, and the discourse connectives identify the arguments. Since the lexicon-based analysis of traditional sentiment analysis can assign the wrong sentiment to a sentence, a deeper-level semantic analysis is required for the correct analysis of sentiments. Hence, in this study, explicit connectives in Malayalam are considered to identify the discourse arguments. A supervised method, conditional random fields, is used to identify the clause boundaries and discourse arguments. For the study, 1,000 sentiment sentences from Malayalam documents were analyzed. Experimental results show that integrating the discourse structure considerably improves sentiment analysis performance over the baseline system.
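A hedged sketch of a conditional random fields tagger for clause-boundary labels, using the sklearn-crfsuite package, is shown below; the toy features and tags are illustrative and do not reproduce the paper's Malayalam-specific feature set:

    # Sketch of a CRF tagger for clause-boundary labels (illustrative only).
    import sklearn_crfsuite

    def word_features(sent, i):
        # Toy features: the token and its neighbours; a real system would add
        # POS tags, connective cues, and morphological features.
        return {
            "word": sent[i],
            "prev": sent[i - 1] if i > 0 else "<BOS>",
            "next": sent[i + 1] if i < len(sent) - 1 else "<EOS>",
        }

    # X: list of sentences, each a list of per-token feature dicts;
    # y: matching lists of tags such as "B-CLAUSE", "I-CLAUSE".
    sentences = [["he", "came", "but", "left"]]
    X = [[word_features(s, i) for i in range(len(s))] for s in sentences]
    y = [["B-CLAUSE", "I-CLAUSE", "B-CLAUSE", "I-CLAUSE"]]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
    crf.fit(X, y)
    print(crf.predict(X))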

4.
Nowadays, the amount of web data is increasing at a rapid pace, which presents a serious challenge to web monitoring. Text sentiment analysis, an important research topic in the area of natural language processing, is a crucial task in the web monitoring area. The accuracy of traditional text sentiment analysis methods may degrade when dealing with massive data. Deep learning has been a hot research topic in artificial intelligence in recent years. By now, several research groups have studied the sentiment analysis of English texts using deep learning methods. In contrast, relatively few works have so far considered Chinese text sentiment analysis in this direction. In this paper, a method for analyzing Chinese text sentiment is proposed based on the convolutional neural network (CNN) in deep learning in order to improve the analysis accuracy. The feature values of the CNN after the training process are non-uniformly distributed. In order to overcome this problem, a method for normalizing the feature values is proposed. Moreover, the dimensions of the text features are optimized through simulations. Finally, a method for updating the learning rate during the training process of the CNN is presented in order to achieve better performance. Experimental results on typical datasets indicate that the accuracy of the proposed method is improved compared with that of traditional supervised machine learning methods, e.g., the support vector machine method.
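A rough sketch of such a CNN-based text sentiment classifier is given below; the batch-normalization layer and step-wise learning-rate decay are generic stand-ins for the normalization and learning-rate update methods described in the abstract, and all sizes are assumptions:

    # Sketch of a 1-D CNN text-sentiment classifier with feature normalization
    # and a decaying learning rate (illustrative stand-ins only).
    from tensorflow.keras import layers, models, optimizers, callbacks

    vocab_size, max_len, n_classes = 50000, 100, 2   # assumed values

    model = models.Sequential([
        layers.Input((max_len,)),
        layers.Embedding(vocab_size, 128),
        layers.Conv1D(128, 5, activation="relu"),
        layers.BatchNormalization(),                 # stand-in for the paper's feature-value normalization
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])

    def schedule(epoch, lr):
        # Halve the learning rate every 5 epochs -- one simple update rule.
        return lr * 0.5 if epoch > 0 and epoch % 5 == 0 else lr

    lr_callback = callbacks.LearningRateScheduler(schedule)
    # model.fit(X_train, y_train, epochs=20, callbacks=[lr_callback])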

5.
The text classification process has been extensively investigated in various languages, especially English. Text classification models are vital in several Natural Language Processing (NLP) applications. The Arabic language is highly significant: for instance, it is the fourth most-used language on the internet and the sixth official language of the United Nations. However, only a few text classification studies have been published for Arabic. In general, researchers face two challenges in the Arabic text classification process: low accuracy and high dimensionality of the features. In this study, an Automated Arabic Text Classification using Hyperparameter Tuned Hybrid Deep Learning (AATC-HTHDL) model is proposed. The major goal of the proposed AATC-HTHDL method is to identify different class labels for Arabic text. The first step in the proposed model is to pre-process the input data to transform it into a useful format. The Term Frequency-Inverse Document Frequency (TF-IDF) model is applied to extract the feature vectors. Next, the Convolutional Neural Network with Recurrent Neural Network (CRNN) model is utilized to classify the Arabic text. In the final stage, the Crow Search Algorithm (CSA) is applied to fine-tune the CRNN model's hyperparameters, which constitutes the novelty of the work. The proposed AATC-HTHDL model was experimentally validated under different parameters, and the outcomes established the supremacy of the proposed AATC-HTHDL model over other approaches.
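The TF-IDF feature-extraction step can be sketched with scikit-learn as follows; the placeholder documents and vocabulary cap are assumptions, and the CRNN and CSA stages are not shown:

    # Sketch of TF-IDF feature extraction with scikit-learn (illustrative only).
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["وثيقة عربية أولى", "وثيقة عربية ثانية"]   # placeholder Arabic documents
    vectorizer = TfidfVectorizer(max_features=5000)    # cap the vocabulary size
    X = vectorizer.fit_transform(docs)                 # sparse TF-IDF feature vectors
    print(X.shape)                                     # (n_documents, n_features)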

6.
Stroke and cerebral haemorrhage are the second leading causes of death in the world after ischaemic heart disease. In this work, a dataset containing medical, physiological and environmental tests for stroke was used to evaluate the efficacy of machine learning, deep learning and a hybrid technique combining deep learning and machine learning, alongside a Magnetic Resonance Imaging (MRI) dataset for cerebral haemorrhage. In the first dataset (medical records), two features, namely diabetes and obesity, were created on the basis of the values of the corresponding features. The t-Distributed Stochastic Neighbour Embedding algorithm was applied to represent the high-dimensional dataset in a low-dimensional data space. Meanwhile, the Recursive Feature Elimination (RFE) algorithm was applied to rank the features according to their priority and correlation with the target feature and to remove the unimportant features. The features were fed into various classification algorithms, namely Support Vector Machine (SVM), K Nearest Neighbours (KNN), Decision Tree, Random Forest, and Multilayer Perceptron. All algorithms achieved good results. The Random Forest algorithm achieved the best performance amongst the algorithms; it reached an overall accuracy of 99%. This algorithm classified stroke cases with Precision, Recall and F1 score of 98%, 100% and 99%, respectively. In the second dataset, the MRI image dataset was evaluated by using the AlexNet model and the AlexNet + SVM hybrid technique. The hybrid AlexNet + SVM model performed better than the AlexNet model; it reached accuracy, sensitivity, specificity and Area Under the Curve (AUC) of 99.9%, 100%, 99.80% and 99.86%, respectively.
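For the tabular stroke dataset, the feature-ranking and classification steps can be sketched with scikit-learn as below; the synthetic data and the number of selected features are assumptions, not the study's actual records:

    # Sketch of RFE feature selection followed by a random-forest classifier
    # (illustrative; stand-in data replaces the medical-records dataset).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0), n_features_to_select=10)
    X_tr_sel = rfe.fit_transform(X_tr, y_tr)          # keep the 10 highest-ranked features
    X_te_sel = rfe.transform(X_te)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr_sel, y_tr)
    print(classification_report(y_te, clf.predict(X_te_sel)))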

7.
Sentiment analysis (SA) is one of the basic research directions in natural language processing (NLP) and is widely adopted for news, product reviews, and politics. Aspect-based sentiment analysis (ABSA) aims at identifying the sentiment polarity of a given target context. Previous sentiment analysis models suffer from insufficient extraction of features, which results in low accuracy. Hence, this research work develops a deep-semantic and contextual knowledge network (DSCNet). DSCNet exploits semantic and contextual knowledge to understand the context and enhance the accuracy for given aspects. First, temporal relationships are established; then, deep semantic knowledge and contextual knowledge are introduced. Further, a deep integration layer is introduced to measure the importance of features for efficient extraction across different dimensions. The novelty of the DSCNet model lies in introducing deep contextual knowledge. DSCNet is evaluated on three datasets, i.e., the Restaurant, Laptop, and Twitter datasets, considering different deep learning (DL) metrics such as precision, recall, accuracy, and Macro-F1 score. Also, a comparative analysis is carried out with different baseline methods in terms of accuracy and Macro-F1 score. DSCNet achieves 92.59% accuracy on the Restaurant dataset, 86.99% accuracy on the Laptop dataset and 78.76% accuracy on the Twitter dataset.

8.
Early recognition of breast cancer is crucial to decrease its severity and optimize the survival rate. One of the commonly utilized imaging modalities for breast cancer is histopathological imaging. Since manual inspection of histopathological images is a challenging task, automated tools using deep learning (DL) and artificial intelligence (AI) approaches need to be designed. The latest advances in DL models help accomplish maximum image classification performance in several application areas. In this view, this study develops a Deep Transfer Learning with Rider Optimization Algorithm for Histopathological Classification of Breast Cancer (DTLRO-HCBC) technique. The proposed DTLRO-HCBC technique aims to categorize the existence of breast cancer using histopathological images. To accomplish this, the DTLRO-HCBC technique undergoes pre-processing and data augmentation to increase the quantity of data available for analysis. Then, an optimal SqueezeNet model is employed as the feature extractor, and the hyperparameter tuning process is carried out using the Adadelta optimizer. Finally, the rider optimization with deep feed-forward neural network (RO-DFFNN) technique is utilized for breast cancer classification. The RO algorithm is applied for optimally adjusting the weight and bias values of the DFFNN technique. To demonstrate the greater performance of the DTLRO-HCBC approach, a sequence of simulations was carried out, and the outcomes reported its promising performance over current state-of-the-art approaches.

9.
In the machine learning (ML) paradigm, data augmentation serves as a regularization approach for creating ML models. Increasing the diversity of training samples increases generalization capability, which enhances the prediction performance of classifiers when tested on unseen examples. Deep learning (DL) models have many parameters and frequently overfit. To avoid overfitting, data augmentation plays a major role in the latest improvements in DL. Nevertheless, reliable data collection is a major limiting factor. Frequently, this problem is addressed by combining data augmentation, transfer learning, dropout, and batch normalization methods. In this paper, we introduce the application of data augmentation in the field of image classification using Random Multi-model Deep Learning (RMDL), which combines multiple DL approaches to yield random models for classification. We present a methodology for using Generative Adversarial Networks (GANs) to generate images for data augmentation. Through experiments, we discover that samples generated by GANs, when fed into RMDL, improve both accuracy and model efficiency. Experiments on both the MNIST and CIFAR-10 datasets show that the error rate of the proposed approach decreases across different random models.
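A minimal sketch of the augmentation step, assuming a trained Keras-style generator already exists, is shown below; the function name and latent dimension are illustrative:

    # Sketch of GAN-based data augmentation: synthetic samples from a trained
    # generator are appended to the real training set before fitting a classifier.
    # `generator` is assumed to exist; shapes and names are illustrative.
    import numpy as np

    def augment_with_gan(x_real, y_real, generator, n_synthetic, label, latent_dim=100):
        """Append n_synthetic generator samples carrying a fixed class label."""
        noise = np.random.normal(size=(n_synthetic, latent_dim))
        x_fake = generator.predict(noise)               # assumed Keras-style generator
        y_fake = np.full(n_synthetic, label)
        x_aug = np.concatenate([x_real, x_fake], axis=0)
        y_aug = np.concatenate([y_real, y_fake], axis=0)
        idx = np.random.permutation(len(x_aug))         # shuffle real and synthetic samples
        return x_aug[idx], y_aug[idx]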

10.
The outbreak of the pandemic caused by Coronavirus Disease 2019 (COVID-19) has affected the daily activities of people across the globe. During the COVID-19 outbreak and the successive lockdowns, Twitter was heavily used and the number of tweets regarding COVID-19 increased tremendously. Several studies used Sentiment Analysis (SA) to analyze the emotions expressed in tweets about COVID-19. Therefore, in the current study, a new Artificial Bee Colony (ABC) with Machine Learning-driven SA (ABCML-SA) model is developed for conducting sentiment analysis of COVID-19 Twitter data. The prime focus of the presented ABCML-SA model is to recognize the sentiments expressed in tweets about COVID-19. It involves data pre-processing at the initial stage, followed by n-gram based feature extraction to derive the feature vectors. For identification and classification of the sentiments, the Support Vector Machine (SVM) model is exploited. At last, the ABC algorithm is applied to fine-tune the parameters of the SVM. To demonstrate the improved performance of the proposed ABCML-SA model, a sequence of simulations was conducted. The comparative assessment results confirmed the effectual performance of the proposed ABCML-SA model over other approaches.
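The n-gram plus SVM stage can be sketched with scikit-learn as follows; the example tweets are made up, and the fixed C and gamma values stand in for parameters that the ABC algorithm would tune:

    # Sketch of n-gram feature extraction with an SVM classifier (illustrative only).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline

    tweets = ["lockdown again, feeling low", "vaccines bring hope", "so tired of covid news"]
    labels = ["negative", "positive", "negative"]

    pipeline = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),        # unigram + bigram features
        SVC(kernel="rbf", C=1.0, gamma="scale"),    # C and gamma would be tuned (e.g., by ABC)
    )
    pipeline.fit(tweets, labels)
    print(pipeline.predict(["covid cases rising again"]))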

11.
This survey paper aims to present methods for analyzing and classifying field satellite images using deep learning and machine learning algorithms. Planned application scenarios include the use of deep learning-based Convolutional Neural Network (CNN) technology to extract harvest fields from satellite images or to generate regions of interest (ROI). Using machine learning, the satellite image is taken as the input image, segmented, and then tagged. In contemporary categorization, the field size ratio, Local Binary Pattern (LBP) histograms, and color data are taken into account. Field satellite image localization has several practical applications, including pest management, scene analysis, and field tracking. The relationship between satellite images in a specific area, or contextual information, is essential to comprehending the field as a whole.
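As an illustration, LBP histogram extraction for an image tile can be sketched with scikit-image; the random tile, neighbourhood size, and radius below are assumptions:

    # Sketch of Local Binary Pattern (LBP) histogram extraction (illustrative only).
    import numpy as np
    from skimage.feature import local_binary_pattern

    tile = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in grayscale tile
    P, R = 8, 1                                                 # neighbours and radius
    lbp = local_binary_pattern(tile, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)  # P + 2 uniform patterns
    print(hist)                                                 # texture descriptor for classification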

12.
Melanoma remains a serious illness and a common form of skin cancer. Since earlier detection of melanoma reduces the mortality rate, it is essential to design a reliable and automated disease diagnosis model using dermoscopic images. The recent advances in deep learning (DL) models are useful for examining medical images and making proper decisions. In this study, an automated deep learning based melanoma detection and classification (ADL-MDC) model is presented. The goal of the ADL-MDC technique is to examine dermoscopic images to determine the existence of melanoma. The ADL-MDC technique performs contrast enhancement and data augmentation at the initial stage. Besides, the k-means clustering technique is applied for the image segmentation process. In addition, an Adagrad optimizer based Capsule Network (CapsNet) model is derived for the effective feature extraction process. Lastly, the crow search optimization (CSO) algorithm with a sparse autoencoder (SAE) model is utilized for the melanoma classification process. The exploitation of the Adagrad and CSO algorithms helps accomplish improved performance. A wide range of simulation analyses is carried out on benchmark datasets, and the results are inspected under several aspects. The simulation results reported the enhanced performance of the ADL-MDC technique over recent approaches.
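The k-means segmentation step can be sketched as follows; the random stand-in image, cluster count, and centre-pixel heuristic for picking the lesion cluster are assumptions:

    # Sketch of k-means colour segmentation for a dermoscopic image (illustrative only).
    import numpy as np
    from sklearn.cluster import KMeans

    image = np.random.rand(64, 64, 3)                    # stand-in RGB dermoscopic image
    pixels = image.reshape(-1, 3)                        # one row per pixel
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
    segments = kmeans.labels_.reshape(image.shape[:2])   # per-pixel cluster map
    lesion_mask = segments == segments[32, 32]           # assume the lesion covers the image centre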

13.
Cybersecurity-related solutions have become familiar since they ensure security and privacy against cyberattacks in this digital era. Malicious Uniform Resource Locators (URLs) can be embedded in email or Twitter and used to lure vulnerable internet users into installing malicious data on their systems. This may result in compromised system security, scams, and other such cyberattacks. These attacks hijack huge quantities of the available data, incurring heavy financial loss. At the same time, Machine Learning (ML) and Deep Learning (DL) models have paved the way for designing models that can accurately detect and classify malicious URLs. With this motivation, the current article develops an Artificial Fish Swarm Algorithm (AFSA) with Deep Learning Enabled Malicious URL Detection and Classification (AFSADL-MURLC) model. The presented AFSADL-MURLC model intends to differentiate malicious URLs from genuine URLs. To attain this, the AFSADL-MURLC model initially carries out data preprocessing and makes use of a GloVe-based word embedding technique. In addition, the created vector model is then passed on to a Gated Recurrent Unit (GRU) classifier to recognize the malicious URLs. Finally, AFSA is applied to the proposed model to enhance the efficiency of the GRU model. The proposed AFSADL-MURLC technique was experimentally validated using a benchmark dataset sourced from the Kaggle repository. The simulation results confirmed the supremacy of the proposed AFSADL-MURLC model over recent approaches under distinct measures.
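A minimal sketch of the GRU classification stage for URL sequences is shown below; the character-vocabulary size, sequence length, and embedding setup are assumptions, and the AFSA tuning step is omitted:

    # Sketch of a GRU-based malicious-URL classifier (illustrative only).
    from tensorflow.keras import layers, models

    n_chars, max_len = 100, 200     # assumed character-vocabulary size and padded URL length

    model = models.Sequential([
        layers.Input((max_len,)),
        layers.Embedding(n_chars, 64),            # could instead be seeded with GloVe-style vectors
        layers.GRU(64),
        layers.Dense(1, activation="sigmoid"),    # 1 = malicious, 0 = benign
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(X_urls, y_labels, epochs=5, validation_split=0.1)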

14.
Indian agriculture is striving to achieve sustainable intensification, a system aiming to increase agricultural yield per unit area without harming natural resources and the ecosystem. Modern farming employs technology to improve productivity. Early and accurate analysis and diagnosis of plant disease is very helpful in reducing plant diseases and improving plant health and food crop productivity. Plant disease experts are not available in remote areas; thus, there is a need for automatic, low-cost, accessible and reliable solutions to identify plant diseases without laboratory inspection or expert opinion. Deep learning-based computer vision techniques such as the Convolutional Neural Network (CNN) and traditional machine learning-based image classification approaches are being applied to identify plant diseases. In this paper, a CNN model is proposed for the classification of rice and potato plant leaf diseases. Rice leaves are diagnosed with bacterial blight, blast, brown spot and tungro diseases. Potato leaf images are classified into three classes: healthy leaves, early blight and late blight diseases. A rice leaf dataset with 5,932 images and 1,500 potato leaf images are used in the study. The proposed CNN model was able to learn hidden patterns from the raw images and classify rice images with 99.58% accuracy and potato leaves with 97.66% accuracy. The results demonstrate that the proposed CNN model performed better than other machine learning image classifiers such as Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Decision Tree and Random Forest.
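A small CNN of the kind described can be sketched in Keras as follows; the input resolution and layer sizes are assumptions, while the three output classes follow the potato setting above:

    # Sketch of a small CNN for leaf-disease image classification (illustrative only).
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input((128, 128, 3)),               # assumed input resolution
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(3, activation="softmax"),     # healthy / early blight / late blight
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_images, train_labels, validation_split=0.1, epochs=20)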

15.
Diabetic Retinopathy (DR) has become a widespread illness among diabetics across the globe. Retinal fundus images are generally used by physicians to detect and classify the stages of DR. Since manual examination of DR images is a time-consuming process with the risk of biased results, automated tools using Artificial Intelligence (AI) to diagnose the disease have become essential. In this view, the current study develops an Optimal Deep Learning-enabled Fusion-based Diabetic Retinopathy Detection and Classification (ODL-FDRDC) technique. The intention of the proposed ODL-FDRDC technique is to identify DR and categorize its different grades using retinal fundus images. In addition, the ODL-FDRDC technique involves a region-growing segmentation technique to determine the infected regions. Moreover, the fusion of two DL models, namely CapsNet and MobileNet, is used for feature extraction. Further, the hyperparameter tuning of these models is performed via the Coyote Optimization Algorithm (COA). A Gated Recurrent Unit (GRU) is also utilized to identify DR. The experimental analysis of the ODL-FDRDC technique against a benchmark DR dataset established the supremacy of the technique over existing methodologies under different measures.

16.
In the field of natural language processing (NLP), the advancement of neural machine translation has paved the way for cross-lingual research. Yet, most studies in NLP have evaluated the proposed language models on well-refined datasets. We investigate whether a machine translation approach is suitable for multilingual analysis of unrefined datasets, particularly chat messages on Twitch. To address this, we collected a dataset that included 7,066,854 and 3,365,569 chat messages from English and Korean streams, respectively. We employed several machine learning classifiers and neural networks with two different types of embedding: word-sequence embedding and the final layer of a pre-trained language model. The results of the employed models indicate that the accuracy difference between English and English-to-Korean data was relatively high, ranging from 3% to 12%. For Korean data (Korean, and Korean-to-English), it ranged from 0% to 2%. Therefore, the results imply that translation from a low-resource language (e.g., Korean) into a high-resource language (e.g., English) shows higher performance than the reverse. Several implications and limitations of the presented results are also discussed. For instance, we suggest the feasibility of translating from resource-poor languages in order to use the tools of resource-rich languages in further analysis.

17.
Atherosclerosis diagnosis is an intricate and complicated cognitive process. Research on medical diagnosis necessitates maximum accuracy and performance to support optimal clinical decisions. Since medical diagnostic outcomes need to be prompt and accurate, the recently developed artificial intelligence (AI) and deep learning (DL) models have received considerable attention among research communities. This study develops a novel Metaheuristics with Deep Learning Empowered Biomedical Atherosclerosis Disease Diagnosis and Classification (MDL-BADDC) model. The proposed MDL-BADDC technique encompasses several stages of operation, such as pre-processing, feature selection, classification, and parameter tuning. Besides, the proposed MDL-BADDC technique designs a novel Quasi-Oppositional Barnacles Mating Optimizer (QOBMO) based feature selection technique. Moreover, a deep stacked autoencoder (DSAE) based classification model is designed for the detection and classification of atherosclerosis disease. Furthermore, the krill herd algorithm (KHA) based parameter tuning technique is applied to properly adjust the parameter values. In order to showcase the enhanced classification performance of the MDL-BADDC technique, a wide range of simulations takes place on three benchmark biomedical datasets. The comparative result analysis reported the better performance of the MDL-BADDC technique over the compared methods.

18.
Electroencephalography (EEG) eye state classification has become an essential tool for identifying the cognitive state of humans. It can be used in several fields such as motor imagery recognition, drug effect detection, emotion categorization, seizure detection, etc. With the latest advances in deep learning (DL) models, it is possible to design an accurate and prompt EEG eye state classification model. In this view, this study presents a novel compact bat algorithm with deep learning model for biomedical EEG eye state classification (CBADL-BEESC). The major intention of the CBADL-BEESC technique is to categorize the EEG eye state. The CBADL-BEESC model performs feature extraction using the AlexNet model, which helps produce useful feature vectors. In addition, an extreme learning machine autoencoder (ELM-AE) model is applied to classify the EEG signals, and the parameter tuning of the ELM-AE model is performed using the CBA. The experimental result analysis of the CBADL-BEESC model was carried out on benchmark data, and the comparative outcomes reported the supremacy of the CBADL-BEESC model over recent methods.

19.
As the amount of online video content increases, consumers are becoming increasingly interested in the various product names appearing in videos, particularly cosmetic-product names in videos related to fashion, beauty, and style. Thus, identifying such products using image recognition technology may aid in identifying current commercial trends. In this paper, we propose a two-stage deep-learning detection and classification method for cosmetic products. Specifically, variants of the YOLO network are used for detection, where the bounding box for each given input product is predicted and subsequently cropped for classification. We use four state-of-the-art classification networks, namely ResNet, InceptionResNetV2, DenseNet, and EfficientNet, and compare their performance. Furthermore, we employ dilated convolution in these networks to obtain better feature representations and improve performance. Extensive experiments demonstrate that YOLOv3 and its tiny version achieve higher speed and accuracy. Moreover, the dilated networks marginally outperform the base models or achieve similar performance in the worst case. We conclude that the proposed method can effectively detect and classify cosmetic products.
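The two-stage flow (detect, crop, classify) can be sketched as below; the detector output format, classifier object, and input size are assumptions rather than the paper's exact interfaces:

    # Sketch of the two-stage pipeline: each detected box is cropped from the
    # frame and passed to an image classifier (illustrative only).
    import numpy as np
    import tensorflow as tf

    def classify_detections(frame, detections, classifier, size=(224, 224)):
        """detections: list of (x1, y1, x2, y2) pixel boxes from a YOLO-style detector."""
        results = []
        for (x1, y1, x2, y2) in detections:
            crop = frame[y1:y2, x1:x2]                             # stage 1: cropped product region
            crop = tf.image.resize(crop, size).numpy()             # resize to the classifier input size
            probs = classifier.predict(np.expand_dims(crop, 0))[0] # stage 2: classification
            results.append(int(np.argmax(probs)))
        return results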

20.
Lung cancer is the main cause of cancer-related death owing to its destructive nature and delayed detection at advanced stages. Early recognition of lung cancer is essential to increase the survival rate of patients, and it remains a crucial problem in the healthcare sector. Computer-aided diagnosis (CAD) models can be designed to effectually identify and classify the existence of lung cancer using medical images. The recently developed deep learning (DL) models offer a way to perform accurate lung nodule classification. Therefore, this article presents a deer hunting optimization with deep convolutional neural network for lung cancer detection and classification (DHODCNN-LCC) model. The proposed DHODCNN-LCC technique initially undergoes pre-processing in two stages, namely contrast enhancement and noise removal. Besides, the feature extraction process on the pre-processed images takes place using the Nadam optimizer with the RefineDet model. In addition, a denoising stacked autoencoder (DSAE) model is employed for lung nodule classification. Finally, the deer hunting optimization algorithm (DHOA) is utilized for optimal hyperparameter tuning of the DSAE model, thereby resulting in improved classification performance. The experimental validation of the DHODCNN-LCC technique was carried out against a benchmark dataset, and the outcomes were assessed under various aspects. The experimental outcomes reported the superior performance of the DHODCNN-LCC technique over recent approaches with respect to distinct measures.
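For illustration, one denoising autoencoder stage can be sketched in Keras as follows; a stacked version would repeat the pattern with progressively smaller codes, the sizes are assumptions, and the DHOA tuning is not shown:

    # Sketch of a single denoising autoencoder stage (illustrative only).
    from tensorflow.keras import layers, models

    input_dim, code_dim = 1024, 128                  # assumed feature and code sizes
    inp = layers.Input((input_dim,))
    noisy = layers.GaussianNoise(0.1)(inp)           # corrupt the input during training
    code = layers.Dense(code_dim, activation="relu")(noisy)
    out = layers.Dense(input_dim, activation="sigmoid")(code)
    autoencoder = models.Model(inp, out)
    autoencoder.compile(optimizer="adam", loss="mse")
    # autoencoder.fit(X, X, epochs=20)               # reconstruct clean inputs from noisy ones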
