Similar Documents
20 similar records found (search time: 0 ms)
1.
    
The growing number of COVID-19 cases puts pressure on healthcare services and public institutions worldwide. The pandemic has brought great uncertainty to the global economy and to the situation in general. Forecasting methods and modeling techniques are important tools that help governments manage critical situations caused by pandemics, which have a negative impact on public health. The main purpose of this study is to obtain short-term forecasts of disease epidemiology that could be useful for policymakers and public institutions in making necessary short-term decisions. To evaluate the effectiveness of the proposed attention-based method, which combines certain data mining algorithms with the classical ARIMA model for short-term forecasting, data on the spread of COVID-19 in Lithuania is used; the study examines forecasts of epidemic dynamics and presents the results. The approach, however, can be applied to any country and to other pandemic situations. The COVID-19 outbreak started at different times in different countries, so some countries have a longer disease history, with more historical data, than others. The paper proposes a novel approach to data registration and machine learning-based analysis, using data from attention-based countries for forecast validation, to predict trends in the spread of COVID-19 and assess risks.
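As an illustration of the classical time-series component mentioned in the abstract above, here is a minimal sketch of an AR(1) forecast, the simplest special case of the ARIMA family the study builds on. The case counts are made-up placeholders, not the Lithuanian data, and the fitting is plain least squares rather than the paper's attention-based pipeline.

```python
# Minimal AR(1) forecast sketch -- a special case of the ARIMA family.
# Coefficients are fit by ordinary least squares on lagged pairs;
# all data below is illustrative.

def fit_ar1(series):
    """Fit y_t = a + b * y_{t-1} by least squares on (y_{t-1}, y_t) pairs."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    b = cov / var
    a = my - b * mx
    return a, b

def forecast(series, steps, a, b):
    """Roll the AR(1) recurrence forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last
        out.append(last)
    return out

cases = [10, 12, 15, 19, 24, 30, 37]   # hypothetical daily case counts
a, b = fit_ar1(cases)
print(forecast(cases, 3, a, b))
```

In practice a full ARIMA implementation (differencing plus moving-average terms) would replace this, but the recurrence above is the autoregressive core.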

2.
    
From late 2019 to the present day, the coronavirus outbreak has tragically affected the whole world and killed tens of thousands of people. Many countries have taken very stringent measures to alleviate the effects of coronavirus disease 2019 (COVID-19), many of which are still in force. In this study, various machine learning techniques are implemented to predict possible confirmed case and mortality numbers for the future. Based on these models, we have tried to shed light on the future in terms of possible measures to be taken or updates to the current measures. Support Vector Machine (SVM), Holt-Winters, Prophet, and Long Short-Term Memory (LSTM) forecasting models are applied to the novel COVID-19 dataset. According to the results, the Prophet model gives the lowest Root Mean Squared Error (RMSE) score compared to the other three models. Based on this model, a projection of future COVID-19 cases in Turkey has been drawn, with the aim of shaping the current measures against the coronavirus.
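The model comparison above ranks forecasters by RMSE. A minimal sketch of that ranking step follows; the "model outputs" are made-up placeholder numbers, not the paper's Turkish forecasts.

```python
import math

# RMSE sketch for ranking forecast models. The candidate forecasts
# below are illustrative placeholders only.

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [100, 120, 150, 200]
candidates = {
    "prophet": [105, 118, 148, 205],   # hypothetical forecasts
    "lstm":    [90, 130, 160, 190],
}
best = min(candidates, key=lambda name: rmse(actual, candidates[name]))
print(best, round(rmse(actual, candidates[best]), 3))
```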

3.
    
The COVID-19 outbreak originated in the Chinese city of Wuhan and eventually affected almost every nation around the globe, spreading from China to the rest of the world. After China, Italy became the next epicentre of the virus and witnessed a very high death toll. Soon, nations like the USA were severely hit by the SARS-CoV-2 virus. On 11th March 2020, the World Health Organisation declared COVID-19 a pandemic. To combat the epidemic, nations from every corner of the world have instituted various policies, such as physical distancing, isolation of the infected population, and research on potential SARS-CoV-2 vaccines. To identify the impact of the various policies implemented by the affected countries on the pandemic's spread, a myriad of AI-based models have been presented to analyse and predict the epidemiological trends of COVID-19. In this work, the authors present a detailed study of different artificial intelligence frameworks applied to the predictive analysis of COVID-19 patient records. The forecasting models acquire information from the records to detect the pandemic's spread, enabling immediate action to reduce the spread of the virus. This paper addresses the research issues, and corresponding solutions, associated with the prediction and detection of infectious diseases like COVID-19. It further focuses on the study of vaccination as a way to cope with the pandemic. Finally, the research challenges in terms of data availability, reliability, the accuracy of existing prediction models, and other open issues are discussed to outline the future course of this study.

4.
    
《Quality Engineering》2012,24(3):169-181

5.
    
E-commerce refers to a system that allows individuals to purchase and sell goods online. The primary goal of e-commerce is to offer customers the convenience of making a purchase without going to a physical store: they buy the item online and have it delivered to their home within a few days. The goal of this research was to develop machine learning algorithms that can predict e-commerce platform sales. A case study is designed in this paper based on a proposed continuous Stochastic Fractal Search (SFS) combined with a Guided Whale Optimization Algorithm (WOA) to optimize the parameter weights of Bidirectional Recurrent Neural Networks (BRNN). Furthermore, a time-series dataset is tested in the e-commerce demand forecasting experiments. Finally, the results are compared to several versions of state-of-the-art optimization techniques, namely Particle Swarm Optimization (PSO), the Whale Optimization Algorithm (WOA), and the Genetic Algorithm (GA). A one-way analysis of variance (ANOVA) test confirms that the proposed ensemble model performs significantly better, at a P-value below 0.05. The proposed algorithm achieved an RMSE of 0.0000359, a mean of 0.00003593, and a standard deviation of 0.000002162.
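For readers unfamiliar with the optimizer family above, the following is a stripped-down Whale Optimization Algorithm sketch minimizing a toy sphere function. It shows only the standard encircling/spiral update rules; the paper's SFS guidance and the coupling to BRNN weight training are not reproduced, so this should be read as a conceptual illustration, not the authors' implementation.

```python
import math, random

# Simplified WOA: each "whale" is a candidate solution; whales either
# encircle the current best, explore around a random whale, or spiral
# toward the best, with exploration shrinking over time.

def woa_minimize(f, dim, pop=20, iters=100, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    whales = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(whales, key=f)[:]
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters          # linearly decreasing coefficient
        for w in whales:
            A = 2 * a * rng.random() - a
            C = 2 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:             # exploit: encircle the best
                    for j in range(dim):
                        w[j] = best[j] - A * abs(C * best[j] - w[j])
                else:                      # explore: move around a random whale
                    other = whales[rng.randrange(pop)]
                    for j in range(dim):
                        w[j] = other[j] - A * abs(C * other[j] - w[j])
            else:                          # spiral update toward the best
                l = rng.uniform(-1, 1)
                for j in range(dim):
                    d = abs(best[j] - w[j])
                    w[j] = d * math.exp(l) * math.cos(2 * math.pi * l) + best[j]
            if f(w) < f(best):
                best = w[:]
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)
pos, val = woa_minimize(sphere, dim=3)
print(val)
```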

6.
    
In the financial sector, data are highly confidential and sensitive, and ensuring data privacy is critical. Sample fusion is the basis of horizontal federated learning, but it is suitable only for scenarios where customers have the same format but different targets, namely scenarios with strong feature overlap and weak user overlap. To overcome this limitation, this paper proposes a federated learning-based model with local data sharing and differential privacy. The indexing mechanism of differential privacy is used to obtain privacy budgets of different degrees, which are applied to the gradient according to the contribution degree, ensuring privacy without affecting accuracy. In addition, data sharing is performed to improve the utility of the global model. Further, the distributed prediction model is used to predict customers' loan propensity while protecting user privacy. An aggregation mechanism based on federated learning helps train the model on distributed data without exposing local data. The proposed method is verified by experiments, whose results show that, for non-iid data, it can effectively improve accuracy and reduce the impact of sample tilt. The proposed method can be extended to edge computing, blockchain, and Industrial Internet of Things (IIoT) settings. The theoretical analysis and experimental results show that the proposed method ensures the privacy and accuracy of the federated learning process and improves model utility on non-iid data by 7% compared to the federated averaging method (FedAvg).
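The core privacy step described above is perturbing gradients under a privacy budget. The following Laplace-mechanism sketch illustrates that step under simplifying assumptions: a single fixed epsilon stands in for the paper's per-contribution budget allocation, and the sensitivity value is an assumption.

```python
import math, random

# Laplace mechanism sketch: add Laplace(0, sensitivity/epsilon) noise to
# each gradient entry before it leaves the client. Epsilon and the
# sensitivity bound are illustrative assumptions.

def privatize_gradient(grad, epsilon, sensitivity=1.0, seed=0):
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    noisy = []
    for g in grad:
        u = rng.random() - 0.5                      # inverse-CDF sampling
        noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
        noisy.append(g + noise)
    return noisy

print(privatize_gradient([0.5, -1.2, 0.3], epsilon=1.0))
```

A smaller epsilon (tighter budget) yields a larger noise scale, which is the privacy/accuracy trade-off the paper tunes per contribution.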

7.
Traditional theoretical study, experimental study, and computational simulation can no longer satisfy scientists' needs in exploring and designing new materials. Data-driven machine learning algorithms help drive materials screening and property prediction. This work applies machine learning algorithms to materials informatics: based on an existing thermal-conductivity dataset, a machine learning model for predicting thermal conductivity is built, and the regression models are evaluated by cross-validation. The learned mapping between descriptors and the thermal-conductivity property can be used for large-scale materials screening, thereby guiding experimental research.
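The evaluation step described above is k-fold cross-validation. Here is a minimal sketch of that procedure; the "model" is a trivial mean-predictor baseline on made-up numbers, since the descriptor/thermal-conductivity dataset itself is not part of this abstract.

```python
# k-fold cross-validation sketch: split the data into k folds, train on
# k-1 folds, score on the held-out fold, and average the errors.

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(xs, ys, k=3):
    errors = []
    for test in k_fold_indices(len(xs), k):
        train_y = [y for i, y in enumerate(ys) if i not in test]
        pred = sum(train_y) / len(train_y)      # mean-predictor baseline
        errors.extend((ys[i] - pred) ** 2 for i in test)
    return sum(errors) / len(errors)            # mean squared error

ys = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(cross_validate(list(range(6)), ys, k=3))
```

A real screening pipeline would swap the mean predictor for a descriptor-based regressor; the fold bookkeeping stays the same.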

8.
    
The widespread use of smartwatches has expanded their specific and complementary roles in the health sector for patient prognosis. In this study, we propose a framework, referred to as smart forecasting CardioWatch (SCW), to measure heart-rate variability (HRV) for patients with myocardial infarction (MI) who live alone or are away from home. HRV is used as a vital alarming sign for patients with MI. The performance of the proposed framework is measured using machine learning and deep learning techniques, namely support vector machine, logistic regression, and decision-tree classification. The results indicate that analysis of the heart rate can help health services located remotely from the patient to render timely emergency care, and that taking more cardiac parameters into account can lead to more accurate results. On the basis of our findings, we recommend the development of health-related software to aid researchers in building frameworks such as SCW for the effective provision of emergency care.
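As a concrete stand-in for the HRV measurement above, the sketch below computes SDNN, a standard time-domain HRV measure (the standard deviation of RR intervals). The RR values are made-up milliseconds, and the classifiers the paper evaluates (SVM, logistic regression, decision tree) are omitted.

```python
import math

# SDNN sketch: standard deviation of RR (beat-to-beat) intervals, a
# common time-domain HRV metric. Input values are illustrative.

def sdnn(rr_intervals_ms):
    n = len(rr_intervals_ms)
    mean = sum(rr_intervals_ms) / n
    return math.sqrt(sum((r - mean) ** 2 for r in rr_intervals_ms) / n)

rr = [820, 810, 830, 790, 840]     # hypothetical RR intervals (ms)
print(round(sdnn(rr), 1))
```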

9.
    
Due to the widespread use of the internet and smart devices, attacks such as intrusions, zero-day exploits, malware, and security breaches are a constant threat to any organization's network infrastructure. A Network Intrusion Detection System (NIDS) is therefore required to detect attacks in network traffic. This paper proposes a new hybrid method for intrusion detection and attack categorization. The proposed approach comprises three steps, aimed at reducing both false-positive and false-negative rates. In the first step, the dataset is preprocessed with a data transformation technique and min-max scaling. Secondly, random forest recursive feature elimination is applied to identify the optimal features that positively impact the model's performance. Next, we use various Support Vector Machine (SVM) types to detect intrusions and an Adaptive Neuro-Fuzzy Inference System (ANFIS) to categorize probe, U2R, R2U, and DDoS attacks. The proposed method is validated with a Fine Gaussian SVM (FGSVM), which reaches 99.3% for the binary class. The Mean Square Error (MSE) is 0.084964 for training data, 0.0855203 for testing, and 0.084964 for validating the multiclass categorization.
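The min-max step named in the preprocessing stage above can be sketched in a few lines; the column values are illustrative, not drawn from the intrusion dataset.

```python
# Min-max scaling sketch: map each column linearly onto [0, 1].
# Constant columns are mapped to 0.0 to avoid division by zero.

def min_max_scale(column):
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]

print(min_max_scale([3, 7, 5, 11]))
```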

10.
    
The extremely imbalanced data problem is the core issue in anomaly detection: the amount of abnormal data is so small that we cannot extract adequate information from it. Mainstream methods focus on taking full advantage of the normal data, treating anything that does not belong to the normal data distribution as an anomaly. From a data-science point of view, we concentrate on the abnormal data instead and generate artificial abnormal samples by machine learning. Among such techniques, the Synthetic Minority Over-sampling Technique (SMOTE) and its improved algorithms are representative milestones; they generate synthetic examples at random positions on selected line segments. In our work, we break the limitation of the line segment and propose an Imbalanced Triangle Synthetic Data method. In theory, our method covers a wider range, and in experiments with real-world data it performs better than SMOTE and its refinements.
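The contrast drawn above can be sketched directly: SMOTE interpolates on the segment between a minority point and a neighbor, while a triangle-based scheme samples a random convex combination of three minority points (a point inside their triangle). This is an illustration of the geometric idea only, not the authors' exact method.

```python
import random

# Segment vs. triangle synthesis for minority-class oversampling.

def smote_point(a, b, rng):
    """SMOTE-style sample on the segment between points a and b."""
    t = rng.random()
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

def triangle_point(a, b, c, rng):
    """Sample uniformly inside triangle (a, b, c) via barycentric weights."""
    u, v = rng.random(), rng.random()
    if u + v > 1:                      # reflect to stay inside the triangle
        u, v = 1 - u, 1 - v
    w = 1 - u - v
    return [u * ai + v * bi + w * ci for ai, bi, ci in zip(a, b, c)]

rng = random.Random(42)
print(smote_point([0, 0], [1, 1], rng))
print(triangle_point([0, 0], [2, 0], [0, 2], rng))
```

The triangle sample ranges over a two-dimensional region instead of a one-dimensional segment, which is the "wider coverage" argument in the abstract.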

11.
    
Owing to their outstanding ability to process large quantities of high-dimensional data, machine learning models have been used in many settings, such as pattern recognition, classification, spam filtering, data mining, and forecasting. As an outstanding machine learning algorithm, K-Nearest Neighbor (KNN) has been widely applied in different situations, yet its use in selecting qualified applicants for funding is almost new. The major problem lies in how to accurately determine the importance of attributes. In this paper, we propose a Feature-weighted Gradient Descent K-Nearest Neighbor (FGDKNN) method to classify funding applicants into two types: approved or not approved. FGDKNN is based on a gradient descent learning algorithm that updates the feature weights iteratively by minimizing the error ratio, so that the importance of each attribute is described better. We investigate the performance of FGDKNN on the Beijing Innofund dataset. The results show that FGDKNN performs about 23%, 20%, 18%, and 15% better than KNN, SVM, DT, and ANN, respectively. Moreover, FGDKNN converges quickly under different training scales and performs well under different settings.
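The core idea above, weighting features inside the kNN distance, can be sketched as follows. The weights here are fixed illustrative values; the paper's gradient-descent update of those weights is omitted, and the toy data is invented.

```python
import math
from collections import Counter

# Feature-weighted kNN sketch: each feature gets a weight in the
# Euclidean distance, then the k nearest training points vote.

def weighted_knn(train, labels, weights, query, k=3):
    def dist(p):
        return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, p, query)))
    ranked = sorted(range(len(train)), key=lambda i: dist(train[i]))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

train = [[1, 0], [2, 1], [8, 9], [9, 8]]            # hypothetical applicants
labels = ["reject", "reject", "approve", "approve"]
print(weighted_knn(train, labels, weights=[1.0, 0.5], query=[8, 8], k=3))
```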

12.
    
Atherosclerosis diagnosis is an intricate and complicated cognitive process, and research on medical diagnosis demands maximum accuracy and performance for making optimal clinical decisions. Since medical diagnostic outcomes need to be prompt and accurate, recently developed artificial intelligence (AI) and deep learning (DL) models have received considerable attention among research communities. This study develops a novel Metaheuristics with Deep Learning Empowered Biomedical Atherosclerosis Disease Diagnosis and Classification (MDL-BADDC) model. The proposed MDL-BADDC technique encompasses several stages of operation: pre-processing, feature selection, classification, and parameter tuning. It designs a novel Quasi-Oppositional Barnacles Mating Optimizer (QOBMO) based feature selection technique, and a deep stacked autoencoder (DSAE) based classification model is designed for the detection and classification of atherosclerosis. Furthermore, a krill herd algorithm (KHA) based parameter tuning technique is applied to properly adjust the parameter values. To showcase the enhanced classification performance of the MDL-BADDC technique, a wide range of simulations is run on three benchmark biomedical datasets. The comparative analysis reports better performance of the MDL-BADDC technique over the compared methods.

13.
In recent years, machine learning techniques, represented by deep learning, have developed rapidly and, owing to their outstanding learning ability, have shown unique advantages in modeling problems under complex environmental conditions. Research on machine learning-based underwater acoustic communication is flourishing, with some progress in channel estimation and equalization and in typical communication-system applications, but studies under the constraints of real underwater acoustic environments remain scarce. Focusing on channel estimation, a key technology of underwater acoustic communication, this paper addresses problems such as insufficient samples, difficult label calibration, and source-domain/target-domain mismatch caused by the spatiotemporal variability of the underwater acoustic environment. It discusses development directions for cross-cutting research combining underwater acoustic channel estimation with models and methods such as data augmentation, unlabeled learning, and few-shot learning, and presents preliminary simulation and experimental results. The paper is an initial exploration of the key and difficult problems at the intersection of channel estimation and machine learning in underwater acoustic communication, providing a reference for the development of autonomous, intelligent communication technology for various underwater platforms.

14.
    
Forecasting future outbreaks can help minimize their spread. Influenza is a disease primarily found in animals but transferred to humans through pigs. In 1918, influenza became a pandemic and spread rapidly all over the world, killing one-third of the human population and one-fourth of the pig population. Afterwards, influenza became a pandemic several more times at local and global levels. In 2009, influenza 'A' subtype H1N1 again took many human lives as the disease quickly spread like a pandemic. This paper proposes a forecasting modeling system for influenza pandemics using a feed-forward propagation neural network (MSDII-FFNN). The model helps us predict an outbreak and determines which type of influenza becomes a pandemic, as well as which geographical area is infected. Data collection for the model is done using IoT devices. The model is divided into two phases, a training phase and a validation phase, connected through the cloud. In the training phase, the model is trained using an FFNN and is updated on the cloud. In the validation phase, whenever input is submitted through the IoT devices, the system model is updated through the cloud and predicts the pandemic alert. In our dataset, the data is divided into an 85% training ratio and a 15% validation ratio. Applying the proposed model to our dataset yields a prediction precision of 90%.
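The forward pass of the feed-forward architecture family named above can be sketched in a few lines. The weights below are illustrative numbers, and the IoT/cloud data pipeline and the training loop of MSDII-FFNN are beyond this fragment.

```python
import math

# Single-hidden-layer feed-forward pass sketch: weighted sums plus
# sigmoid activations, producing one output (e.g. an alert probability).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)) + b_out)

# 2 inputs -> 2 hidden units -> 1 output; all weights are made up
w_hidden = [[0.5, -0.4], [0.3, 0.8]]
b_hidden = [0.0, -0.1]
w_out = [1.2, -0.7]
b_out = 0.05
print(forward([0.9, 0.2], w_hidden, b_hidden, w_out, b_out))
```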

15.
    
Data is always a crucial concern, especially for prediction and computation in the digital revolution. This paper provides an efficient learning mechanism for accurate prediction and for reducing redundant data communication. It also discusses a Bayesian analysis that finds the conditional probability of at least two parameter-based predictions for the data. The paper presents a method for improving the performance of Bayesian classification using a combination of the Kalman filter and K-means. The method is applied to a small dataset simply to establish that the proposed algorithm can reduce the time needed to compute clusters from the data. The proposed Bayesian learning probabilistic model is used to check for statistical noise and other inaccuracies involving unknown variables, a scenario implemented with an efficient machine learning algorithm that perpetuates the Bayesian probabilistic approach. The paper also demonstrates the generative function for the Kalman-filter-based prediction model and its observations. The algorithm is implemented on the open-source Python platform, and the different modules are efficiently integrated into one piece of code via Common Platform Enumeration (CPE) for Python.
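A one-dimensional Kalman filter, the smoothing component named above, can be sketched as a predict/correct loop. The process and measurement variances below are illustrative assumptions, and the K-means clustering stage of the paper is omitted.

```python
# 1-D Kalman filter sketch with a constant-state model: the variance
# grows by the process noise q at each predict step, then each
# measurement pulls the estimate toward it by the Kalman gain.

def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                      # predict: add process noise
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # correct toward the measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates

print(kalman_1d([1.2, 0.9, 1.1, 1.0, 0.95]))
```

With a small measurement variance r the filter trusts measurements almost completely; with a large r it smooths heavily, which is the noise-checking behavior the abstract refers to.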

16.
    
Epilepsy is a brain disorder that causes recurrent seizures; it is the second most common neurological disease after Alzheimer's. The effects of epilepsy in children are serious, since it causes a slower growth rate and a failure to develop certain skills. In the medical field, specialists record brain activity using an electroencephalogram (EEG) to observe epileptic seizures. The detection of these seizures is performed by specialists, but the results might not be accurate due to human error; automated detection of pediatric epileptic seizures might therefore be the optimal solution. This paper investigates the detection of epileptic seizures by applying supervised machine learning techniques to data from patients aged seven years and below in the Children's Hospital Boston-Massachusetts Institute of Technology (CHB-MIT) scalp EEG database of epileptic pediatric signals. Naïve Bayes (NB), Support Vector Machine (SVM), Logistic Regression (LR), k-Nearest Neighbor (KNN), Linear Discriminant (LD), Decision Tree (DT), and ensemble learning methods were applied to the classification process. The ensemble learning model achieved 100% for all parameters, outperforming state-of-the-art studies in the literature. Similarly, the SVM model achieved 98.3% sensitivity, 97.7% specificity, and 98% accuracy. The LD and LR models performed worse, with sensitivity at 66.9%-68.9%, specificity at 73.5%-77.1%, and accuracy at 70.2%-73%.

17.
吴攀 《发电技术》2020,41(3):231
To address the problem of large errors in the output power of photovoltaic (PV) generation systems under different conditions, a new method for forecasting PV system output power is proposed. By analyzing the structure of the PV generation system, the factors influencing its output power are studied. Season and weather type are used as the criteria for selecting the historical-sample source; for the hourly weather forecast data for the prediction day provided by the meteorological department, similar data points are retrieved from the historical database as historical samples. An offline parameter-optimization dataset is built from the historical samples, an output-power forecasting model is constructed with a kernel extreme learning machine, and the model parameters are optimized with particle swarm optimization. Experimental results show that the mean absolute percentage errors of the proposed method in forecasting PV output power under different conditions are 1.47% and 6.39%, respectively, and that the relative variation of the forecasting error for PV modules under combined abnormal conditions is below 1%, demonstrating that the proposed method meets practical forecasting requirements.
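The error metric quoted above (1.47% and 6.39%) is the mean absolute percentage error, which is simple to sketch; the power values below are invented, not the paper's data.

```python
# MAPE sketch: mean absolute percentage error, in percent.
# Actual values must be nonzero.

def mape(actual, predicted):
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

actual = [50.0, 80.0, 120.0]       # hypothetical PV output, kW
predicted = [49.0, 82.0, 118.0]
print(round(mape(actual, predicted), 2))
```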

18.
    
As big data, its technologies, and its applications continue to advance, the Smart Grid (SG) has become one of the most successful pervasive and fixed computing platforms, one that efficiently uses a data-driven approach and employs efficient information and communication technology (ICT) and cloud computing. As a result of the complicated architecture of cloud computing, the distinctive workings of advanced metering infrastructure (AMI), and the use of sensitive data, it has become challenging to secure the SG. Faults of the SG fall into two main categories: Technical Losses (TLs) and Non-Technical Losses (NTLs). Hardware failure, communication issues, ohmic losses, and energy burnout during transmission and propagation are TLs. NTLs are human-induced errors for malicious purposes, such as attacks on sensitive data and electricity theft, along with tampering with AMI for bill reduction by fraudulent customers. This research proposes a data-driven methodology based on principles of computational intelligence and big data analysis to identify fraudulent customers from their load profiles. In our proposed methodology, a hybrid Genetic Algorithm and Support Vector Machine (GA-SVM) model is used to extract the relevant subset of features from a large, unsupervised public smart-grid project dataset from London, UK, for theft detection. A subset of 26 out of 71 features is obtained, with a classification accuracy of 96.6%, compared to studies conducted on small and limited datasets.
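The GA half of the GA-SVM feature selection above can be sketched as evolving feature bitmasks against a fitness function. In the paper the fitness would be SVM classification accuracy; here a made-up fitness that rewards a known "relevant" subset stands in, so everything below is a toy illustration.

```python
import random

# Toy GA feature selection: individuals are feature bitmasks; elitism
# keeps the top half, one-point crossover and mutation fill the rest.

RELEVANT = {0, 2, 5}            # hypothetical ground-truth relevant features

def fitness(mask):
    chosen = {i for i, bit in enumerate(mask) if bit}
    # reward overlap with the relevant set, penalize extra features
    return len(chosen & RELEVANT) - 0.1 * len(chosen - RELEVANT)

def evolve(n_features=8, pop=30, gens=40, seed=3):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                    # mutation
                j = rng.randrange(n_features)
                child[j] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print([i for i, bit in enumerate(best) if bit])
```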

19.
    
Designing future‐proof materials goes beyond a quest for the best. The next generation of materials needs to be adaptive, multipurpose, and tunable. This is not possible by following the traditional experimentally guided trial‐and‐error process, as this limits the search for untapped regions of the solution space. Here, a computational data‐driven approach is followed for exploring a new metamaterial concept and adapting it to different target properties, choice of base materials, length scales, and manufacturing processes. Guided by Bayesian machine learning, two designs are fabricated at different length scales that transform brittle polymers into lightweight, recoverable, and supercompressible metamaterials. The macroscale design is tuned for maximum compressibility, achieving strains beyond 94% and recoverable strengths around 0.1 kPa, while the microscale design reaches recoverable strengths beyond 100 kPa and strains around 80%. The data‐driven code is available to facilitate future design and analysis of metamaterials and structures ( https://github.com/mabessa/F3DAS ).

20.
    
In 2018, 1.76 million people worldwide died of lung cancer. Most of these deaths are due to late diagnosis; early-stage diagnosis significantly increases the likelihood of successful treatment. Machine learning is a branch of artificial intelligence that allows computers to quickly identify patterns within complex and large datasets by learning from existing data. Machine-learning techniques have been improving rapidly and are increasingly used by medical professionals for the classification and diagnosis of early-stage disease; they are widely used in cancer diagnosis, and in particular in the diagnosis of lung cancer, due to the benefits they offer doctors and patients. In this context, we performed a study of machine-learning techniques to increase the classification accuracy of lung cancer, using 32 × 56 numerical data from the Machine Learning Repository website of the University of California, Irvine. In this study, the precision of the classification model was increased by the effective use of pre-processing methods rather than the direct use of classification algorithms. Nine datasets were derived with pre-processing methods, and six machine-learning classification methods were used to achieve this improvement. The results suggest that the accuracy of the k-nearest neighbors algorithm is superior to random forest, naïve Bayes, logistic regression, decision tree, and support vector machines. The performance of the pre-processing methods was assessed on the lung cancer dataset: the most successful were Z-score (83% accuracy) among normalization methods, principal component analysis (87% accuracy) among dimensionality-reduction methods, and information gain (71% accuracy) among feature-selection methods.
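The Z-score normalization reported above as the best normalization method (83% accuracy) can be sketched in a few lines; the column values are illustrative, and the population standard deviation is used here as a simplifying assumption.

```python
import math

# Z-score normalization sketch: shift a column to zero mean and scale
# to unit (population) standard deviation. Constant columns map to 0.0.

def z_score(column):
    n = len(column)
    mean = sum(column) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in column) / n)
    if std == 0:
        return [0.0 for _ in column]
    return [(v - mean) / std for v in column]

print(z_score([2, 4, 4, 4, 5, 5, 7, 9]))
```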
