Similar Documents
Found 20 similar documents (search time: 62 ms)
1.
The continuing growth in size and complexity of electric power systems requires applicable load forecasting models that estimate future electrical energy demand accurately. This paper presents a novel load forecasting approach, the genetic-based adaptive neuro-fuzzy inference system (GBANFIS), for constructing short-term load forecasting expert systems and controllers. In the first stage, all data records are searched by a novel genetic algorithm (GA) to find the most suitable input features for the model. The selected inputs are then fed into an adaptive neuro-fuzzy inference system to evolve the initial knowledge base of the expert system. Finally, the initial knowledge base is searched by another robust GA to induce better cooperation among the rules through rule-weight derivation and rule-selection mechanisms. We show the superiority and applicability of our approach by applying it to the Iranian monthly electrical energy demand problem and comparing it with the approaches most frequently adopted in this field. Results indicate that GBANFIS outperforms its rivals and is a promising tool for short-term load forecasting problems.
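The first-stage GA feature search can be sketched roughly as follows. This is a toy illustration, not the authors' implementation; the binary-mask encoding, operator choices, and the `fitness` callback (which should score a candidate feature subset, lower being better) are all assumptions:

```python
import random

def ga_select_features(n_features, fitness, pop_size=20, gens=30, seed=0):
    """Toy GA: evolve binary feature masks; lower fitness = better subset."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop_size // 2]        # elitist truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                   # bit-flip mutation
                i = rng.randrange(n_features)
                child[i] ^= 1
            children.append(child)
        population = parents + children
    return min(population, key=fitness)
```

In practice the fitness would be a forecasting error obtained by training a model on the masked inputs.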

2.
This paper proposes a new methodology for carbon price forecasting. It posits a finite distributed lag (FDL) model and then applies a GA-ridge algorithm to determine a set of proper predictors with coefficient estimates. An empirical study conducted in the European Union Greenhouse Gas Emissions Trading market reveals that the methodology not only yields good forecasting results but also provides some interesting analysis of the carbon price market. The combination of the FDL model and the GA-ridge algorithm is well suited to forecasting and analyzing the complicated carbon price market because of its capability to select proper predictors from a candidate class.
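A ridge-regularized distributed-lag fit of the kind underlying such a model can be sketched as follows. This is illustrative only: the paper's GA-based predictor selection is omitted, and the function name, lag depth, and penalty value are assumptions:

```python
def ridge_fdl(x, y, lags, lam=1e-6):
    """Fit y_t ~ sum_{j=0..lags} b_j * x_{t-j} with an L2 (ridge) penalty."""
    rows = [[x[t - j] for j in range(lags + 1)] for t in range(lags, len(x))]
    targets = y[lags:]
    p = lags + 1
    # Normal equations: (X'X + lam*I) b = X'y
    A = [[sum(r[i] * r[j] for r in rows) + (lam if i == j else 0.0)
          for j in range(p)] for i in range(p)]
    b = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * p
    for r in range(p - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, p))) / A[r][r]
    return coef
```

The GA-ridge step in the paper additionally searches over which lagged predictors enter the model; here all lags up to `lags` are kept.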

3.
End-stage renal disease increases the risk of cardiovascular disease. Mortality rates among hemodialysis patients are 20% higher than in the general population, so in recent years preservation of the cardiovascular system has become a major focus of nephrology care. Cardiovascular events jeopardize the life of a dialysis patient and must therefore be prevented. The aim of this study is to develop forecast models that predict the cardiovascular outcome of incident hemodialysis (HD) patients. Data on treatment methods and the physiological condition of patients were collected during the first 18 months of renal replacement therapy and then used to predict the onset of cardiovascular events within a 6-month time window. Information on 4246 incident hemodialysis patients was collected. A Lasso logistic regression model and a random forest model were developed and compared. Every forecast model was tested on 20% of the data, and a 5-fold cross-validation approach was used to validate the random forest model. The random forest showed higher performance, with an AUC of the ROC curve and a sensitivity above 70% in both temporal-window models, proving that random forests can exploit non-linear patterns in the feature space. Out-of-bag estimates of variable importance and regression coefficients were used to gain insight into the models. We found that malnutrition and an inflammatory condition strongly influence cardiovascular outcome in incident HD patients. Indeed, the most important variables in the model were blood test values such as total protein content, percentage of albumin, creatinine, and C-reactive protein. Patient age and weight loss in the first six months of renal replacement therapy were also highly involved in the prediction.
A greater understanding of the mechanisms involved in the onset of cardiovascular events in dialysis patients can enable physicians to intervene appropriately when a high-risk cardiovascular condition is identified.
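The evaluation protocol above (hold-out testing, k-fold splitting, and AUC of the ROC curve) can be illustrated with two small helpers; these are generic sketches, not the study's code:

```python
def roc_auc(y_true, scores):
    """AUC = P(score of a random positive > score of a random negative);
    ties count as 0.5 (the rank-statistic definition of the ROC AUC)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def k_fold_indices(n, k=5):
    """Split range(n) into k contiguous folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds
```

In a real pipeline the indices would be shuffled (or stratified by outcome) before folding.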

4.
The aim of the present study is to comparatively assess the performance of different machine learning and statistical techniques with regard to their ability to estimate the risk of developing type 2 diabetes mellitus (Case 1) and cardiovascular disease complications (Case 2). This is the first work investigating the application of ensembles of artificial neural networks (EANN) towards producing the 5-year risk of developing type 2 diabetes mellitus and cardiovascular disease as a long-term diabetes complication. The performance of the proposed models has been comparatively assessed with the performance obtained by applying logistic regression, Bayesian-based approaches, and decision trees. The models' discrimination and calibration have been evaluated using the classification accuracy (ACC), the area under the curve (AUC) criterion, and the Hosmer-Lemeshow goodness of fit test. The obtained results demonstrate the superiority of the proposed models (EANN) over the other models. In Case 1, EANN with different topologies has achieved high discrimination and good calibration performance (ACC = 80.20%, AUC = 0.849, p value = .886). In Case 2, EANN based on bagging has resulted in good discrimination and calibration performance (ACC = 92.86%, AUC = 0.739, p value = .755).

5.
Classification and regression models are widely used by mainstream credit-granting institutions to assess the risk of customer default. In practice, the objectives used to derive model parameters and the business objectives used to assess models differ. Model parameters are determined by minimising some function of error or by maximising likelihood, but performance is assessed using global measures such as the GINI coefficient, or the misclassification rate at a specific point in the score distribution. This paper seeks to determine the impact on performance of having different objectives for model construction and model assessment. To do this, a genetic algorithm (GA) is utilized to generate linear scoring models that directly optimise the business measures of interest. The performance of the GA models is then compared with that of models constructed using logistic and linear regression. Empirical results show that all models perform similarly well, suggesting that modelling and business objectives are well aligned.
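A business measure such as the GINI coefficient mentioned above follows directly from the ranking a scorecard produces; a small generic sketch (not the paper's code):

```python
def gini_coefficient(y_true, scores):
    """GINI = 2*AUC - 1, where AUC is the probability that a randomly chosen
    defaulter (y=1) scores higher than a randomly chosen non-defaulter (y=0);
    ties count as 0.5."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return 2.0 * wins / (len(pos) * len(neg)) - 1.0
```

A GA fitness function in the spirit of the paper would simply return this value (or a misclassification rate at a chosen cutoff) for the scores produced by a candidate linear model.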

6.
Genetic algorithms and job shop scheduling
We describe applications of Genetic Algorithms (GAs) to the Job Shop Scheduling (JSS) problem. More specifically, the task of generating inputs to the GA process for schedule optimization is addressed.

We believe GAs can be employed as an additional tool in the Computer Integrated Manufacturing (CIM) cycle. Our technique employs an extension to the Group Technology (GT) method for generating manufacturing process plans. It positions the GA scheduling process to receive outputs from both the automated process planning function and the order entry function. The GA scheduling process then passes its results to the factory floor in terms of optimal schedules.

An introduction to the GA process is discussed first. Then, an elementary n-task, one processor (machine) problem is provided to demonstrate the GA methodology in the JSS problem arena. The technique is then demonstrated on an n-task, two processor problem, and finally, the technique is generalized to the n-tasks on m-processors (serial) case.
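The elementary n-task, one-processor case can be illustrated with a small permutation GA. This is a generic sketch: the total-tardiness objective, the crossover and mutation operators, and all parameter values are assumptions, not the paper's formulation:

```python
import random

def ga_schedule(durations, due_dates, pop_size=30, gens=60, seed=1):
    """GA over task orderings on one machine, minimizing total tardiness."""
    rng = random.Random(seed)
    n = len(durations)

    def total_tardiness(order):
        t = cost = 0
        for task in order:
            t += durations[task]
            cost += max(0, t - due_dates[task])
        return cost

    population = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=total_tardiness)
        parents = population[: pop_size // 2]        # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            head = a[:cut]                           # order-preserving crossover
            child = head + [g for g in b if g not in head]
            if rng.random() < 0.2:                   # swap mutation
                i, j = rng.randrange(n), rng.randrange(n)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = parents + children
    best = min(population, key=total_tardiness)
    return best, total_tardiness(best)
```

The crossover keeps a prefix of one parent and fills the rest in the other parent's order, so every child remains a valid permutation.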


7.
This article proposes a new genetic algorithm (GA) methodology, GA-PARSIMONY, for obtaining parsimonious support vector regression (SVR) models capable of predicting highly precise setpoints in a continuous annealing furnace. The proposal combines feature selection, model tuning, and parsimonious model selection in order to achieve robust SVR models. To this end, a novel GA selection procedure is introduced based on separate cost and complexity evaluations. The best individuals are initially sorted by an error fitness function, and afterwards, models with similar costs are rearranged according to a model complexity measurement so as to foster models of lesser complexity. Therefore, the user-supplied penalty parameter, utilized to balance cost and complexity in other fitness functions, is rendered unnecessary. GA-PARSIMONY performed similarly to classical GA on twenty benchmark datasets from public repositories, but used fewer features in a striking 65% of the models. Moreover, our proposal also proved useful in a real industrial process, predicting three temperature setpoints for a continuous annealing furnace. The results demonstrated that GA-PARSIMONY generated more robust SVR models with fewer input features than classical GA.
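The described rearrangement, sort by error first, then prefer lower complexity among models of similar cost, might be sketched as follows. This is an illustrative simplification: complexity is reduced to a feature count, and `tol` is an assumed cost-similarity threshold:

```python
def parsimony_sort(models, tol=0.01):
    """Sort models by error; within runs whose errors differ from the run's
    leader by less than tol, prefer models with fewer features."""
    models = sorted(models, key=lambda m: m["error"])
    out, i = [], 0
    while i < len(models):
        j = i
        # extend the run of models with near-identical cost
        while j + 1 < len(models) and models[j + 1]["error"] - models[i]["error"] < tol:
            j += 1
        # within the run, lower complexity wins
        out.extend(sorted(models[i:j + 1], key=lambda m: m["n_features"]))
        i = j + 1
    return out
```

No cost-complexity penalty weight is needed: complexity only breaks near-ties in cost, which is the point of the GA-PARSIMONY selection step.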

8.
Acute coronary syndrome (ACS) is a leading cause of mortality and morbidity in the Arabian Gulf. In this study, in-hospital mortality amongst patients admitted with ACS to Arabian Gulf hospitals is predicted using a comprehensive modelling framework that combines powerful machine-learning methods: support vector machines (SVM), Naïve Bayes (NB), artificial neural networks (NN), and decision trees (DT). The performance of the machine-learning methods is compared with that of a commonly used statistical method, logistic regression (LR). The study follows the current practice of computing mortality risk using risk scores such as the Global Registry of Acute Coronary Events (GRACE) score, which has not been validated for Arabian Gulf patients. Cardiac registry data of 7,000 patients from 65 hospitals located in Arabian Gulf countries are used for the study. The study is unique in its use of a contemporary data-analytics framework. A k-fold (k = 10) cross-validation is utilized to generate training and validation samples from the GRACE dataset. Machine-learning-based predictive models are often biased by imbalanced training data. To mitigate the imbalance due to scarce observations of in-hospital mortality, we utilized specialized methods, namely random undersampling (RUS) and the synthetic minority oversampling technique (SMOTE). A detailed simulation experiment was carried out to build models with each of the five predictive methods (LR, NN, NB, SVM, and DT) for each of the k-fold subsamples generated. The predictive models were developed under three schemes: no imbalance treatment, RUS, and SMOTE.
We implemented an information fusion method that computes weighted impact scores for individual medical-history attributes from each of the simulated predictive models, yielding a collective recommendation based on an impact score specific to each predictor. Finally, we grouped the predictors, using the fuzzy c-means clustering method, into three categories: high-, medium-, and low-risk factors for in-hospital mortality due to ACS. Our study revealed that patients with a medical history of peripheral artery disease, congestive heart failure, transient ischemic attack, valvular disease, and coronary artery bypass grafting, amongst others, have the greatest risk of in-hospital mortality.
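SMOTE, one of the imbalance treatments mentioned above, can be sketched in a few lines. This is a generic illustration of the technique (interpolating between a minority sample and one of its nearest minority neighbours), not the study's implementation:

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples: pick a minority point,
    find its k nearest minority neighbours (squared Euclidean distance),
    and interpolate a random fraction of the way toward one of them."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: sq_dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()
        synthetic.append([u + gap * (v - u) for u, v in zip(base, nb)])
    return synthetic
```

Because each synthetic point is a convex combination of two real minority points, it stays inside the minority region of feature space rather than duplicating existing records, which is what distinguishes SMOTE from plain oversampling.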

9.
The analytical hierarchical process/data envelopment analysis (AHP/DEA) methodology for ranking decision-making units (DMUs) has some problems: it illogically compares two DMUs in a DEA model; it is not compatible with DEA ranking in the case of multiple inputs/multiple outputs; and it leads to weak discrimination in cases where the number of inputs and outputs is large. In this paper, we propose a new two-stage AHP/DEA methodology for ranking DMUs that removes these problems. In the first stage, we create a pairwise comparison matrix different from AHP/DEA methodology; the second stage is the same as AHP/DEA methodology. Numerical examples are presented in the paper to illustrate the advantages of the new AHP/DEA methodology.
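The AHP side of such a methodology derives priority weights from a pairwise comparison matrix, classically via its principal eigenvector. A minimal power-method sketch (generic AHP machinery, not the paper's two-stage procedure):

```python
def ahp_weights(M, iters=100):
    """Principal-eigenvector priority weights of an n x n pairwise
    comparison matrix M (M[i][j] ~ importance of i relative to j),
    computed by power iteration with normalization at each step."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w
```

For a perfectly consistent matrix (M[i][j] = w_i / w_j) the true weights are recovered exactly; for real judgments the eigenvector smooths the inconsistencies.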

10.
This paper presents a new problem-solving approach, termed simulation-based policy generation (SPG), that is able to generate solutions to problems that may otherwise be computationally intractable. The SPG method uses a simulation of the original problem to create an approximating Markov decision process (MDP) model, which is then solved via traditional MDP solution approaches. Since this approximating MDP is a fairly rich and robust sequential optimization model, solution policies can be created which represent an intelligent and structured search of the policy space. An important feature of the SPG approach is its adaptive nature: it uses the original simulation model to generate improved aggregation schemes, allowing the approach to be applied in situations where the underlying problem structure is largely unknown. In order to illustrate the performance of the SPG methodology, we apply it to a common but computationally complex problem of inventory control, and we briefly discuss its application to a large-scale telephone network routing problem.
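Once the approximating MDP is built, it can be solved by standard means such as value iteration. A minimal sketch of that traditional solution step (generic; the data layout for `P` and `R` is an assumption):

```python
def value_iteration(P, R, gamma=0.9, eps=1e-8):
    """Solve a small MDP: P[a][s][t] = transition probability from state s
    to state t under action a; R[s][a] = immediate reward. Returns the
    (approximate) optimal values and a greedy policy."""
    n_states, n_actions = len(R), len(P)
    V = [0.0] * n_states
    while True:
        Q = [[R[s][a] + gamma * sum(P[a][s][t] * V[t] for t in range(n_states))
              for a in range(n_actions)] for s in range(n_states)]
        new_V = [max(q) for q in Q]
        if max(abs(a - b) for a, b in zip(new_V, V)) < eps:
            policy = [q.index(max(q)) for q in Q]
            return new_V, policy
        V = new_V
```

In the SPG setting, `P` and `R` would be estimated from simulation runs over aggregated states rather than specified exactly.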

11.
Cardiovascular disease is a common threat to human health. To predict it more accurately, this paper optimizes and improves the traditional DNN model, proposing a targeted-regularization deep neural network (TR-DNN) model. By remedying deficiencies of the original deep neural network, the model can better train on and test against cardiovascular disease datasets, carrying out the cardiovascular disease prediction task. Experiments show that the model performs well in training and achieves excellent results on the test set. Finally, TR-DNN was compared with SVM, RF, and XGBoost models on the same dataset; TR-DNN outperformed the other models on every evaluation metric, improving on the traditional DNN model by 1.507 percentage points in accuracy, 1.57 in recall, 2.54 in specificity, and 1.51 in precision. The TR-DNN model can therefore be applied to the prediction of cardiovascular disease.
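The four evaluation metrics reported above (accuracy, recall, specificity, precision) follow directly from the binary confusion matrix; a generic sketch, not the paper's code:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, recall (sensitivity), specificity, and precision
    from 0/1 true labels and 0/1 predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy":    (tp + tn) / len(y_true),
        "recall":      tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "precision":   tp / (tp + fp) if tp + fp else 0.0,
    }
```

Reporting specificity alongside recall matters here because, in disease screening, false negatives and false positives carry very different costs.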

12.
In this paper, we show that genetic algorithms (GA) can be used to control the output of procedural modeling algorithms. We propose an efficient way to encode the choices that have to be made during a procedural generation as a hierarchical genome representation. In combination with mutation and reproduction operations specifically designed for controlled procedural modeling, our GA can evolve a population of individual models close to any high-level goal. Possible scenarios include a volume that should be filled by a procedurally grown tree or a painted silhouette that should be followed by the skyline of a procedurally generated city. These goals are easy to set up for an artist compared to the tens of thousands of variables that describe the generated model and are chosen by the GA. Previous approaches for controlled procedural modeling either use Reversible Jump Markov Chain Monte Carlo (RJMCMC) or Stochastically-Ordered Sequential Monte Carlo (SOSMC) as workhorse for the optimization. While RJMCMC converges slowly, requiring multiple hours for the optimization of larger models, it produces high quality models. SOSMC shows faster convergence under tight time constraints for many models, but can get stuck due to choices made in the early stages of optimization. Our GA shows faster convergence than SOSMC and generates better models than RJMCMC in the long run.

13.
As a methodology, computing with words (CW) allows the use of words, instead of numbers or symbols, in the process of computing and reasoning and thus conforms more to humans' inference when it is used to describe real-world problems. In the line of developing a computational theory for CW, in this paper we develop a formal general type-2 fuzzy model of CW by exploiting general type-2 fuzzy sets (GT2 FSs), since GT2 FSs bear greater potential to model linguistic uncertainty. On the one hand, we generalize the interval type-2 fuzzy sets (IT2 FSs)-based formal model of CW into general type-2 fuzzy environments. Concretely, we present two kinds of general type-2 fuzzy automata (i.e., general type-2 fuzzy finite automata and general type-2 fuzzy pushdown automata) as computational models of CW. On the other hand, we also give a somewhat universally general type-2 fuzzy model of computing with (some special) words and establish a retraction principle from computing with words to computing with values for handling crisp inputs in a general type-2 fuzzy setting, and a generalized extension principle from computing with words to computing with all words for handling general type-2 fuzzy inputs.

14.
A new technique for behavioral modeling of power amplifiers (PA) with short- and long-term memory effects is presented here using recurrent neural networks (RNNs). An RNN can be trained directly with only the input-output data, without having to know the internal details of the circuit. The trained models can reflect the behavior of nonlinear circuits. In our proposed technique, we extract slow-changing signals from the inputs and outputs of the PA and use these signals as extra inputs of the RNN model to effectively represent long-term memory effects. The methodology using the proposed RNN for modeling short-term and long-term memory effects is discussed. Examples of behavioral modeling of PAs with short- and long-term memory using both the existing dynamic neural networks and the proposed RNN techniques are shown. © 2014 Wiley Periodicals, Inc. Int J RF and Microwave CAE 25:289-298, 2015.

15.
The correct diagnosis of cardiovascular disease is a key factor in reducing social and economic costs. In this context, cardiovascular disease risk assessment tools are of fundamental importance. This work addresses two major drawbacks of current cardiovascular risk score systems: the reduced number of risk factors considered by each individual tool, and the inability of these tools to deal with incomplete information. To achieve these goals, a two-phase strategy was followed. In the first phase, a common representation procedure, based on a Naïve-Bayes classifier methodology, was applied to a set of current risk assessment tools. The classifiers' individual parameters and conditional probabilities were initially evaluated through a frequency estimation method. In the second phase, a combination scheme was proposed exploiting the particular features of Bayesian probabilistic reasoning, followed by optimization of the conditional probabilities based on a genetic algorithm approach. This strategy was applied to describe and combine the ASSIGN and Framingham models. Validation results were obtained based on the individual models, assuming their statistical correctness. The achieved results are very promising, showing the potential of the strategy to accomplish the desired goals.

16.
Diagnosing cardiovascular disease is one of the biggest medical challenges of recent years. Coronary heart disease (CHD) is a disease of the heart and blood vessels, and predicting this sort of cardiac illness leads to more precise decisions for cardiac disorders. Training machine-learning models with grid search optimization (GSO) is therefore a useful way to forecast the sickness as early as possible. The state of the art is to tune the hyperparameters together with feature selection, using model search to minimize the false-negative rate. Three models with a cross-validation approach perform the required task. Feature selection is based on statistical and correlation matrices for multivariate analysis. Extensive comparisons between the random search and grid search models are produced using recall, F1 score, and precision measurements. The models are evaluated using these metrics and kappa statistics, which illustrate the comparability of the three models. The study focuses on optimizing feature selection and tweaking hyperparameters to improve model accuracy, predicting heart disease by examining the Framingham dataset with random forest classification. Tuning the hyperparameters via grid search thus decreases the error rate and approaches global optimization.
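The exhaustive search at the core of grid search optimization can be sketched generically (the parameter names and the `score` callback, which would wrap cross-validated model accuracy, are assumptions):

```python
from itertools import product

def grid_search(param_grid, score):
    """Evaluate every combination of hyperparameter values in param_grid
    (a dict of name -> list of candidate values) and return the
    combination with the highest score."""
    names = sorted(param_grid)
    best, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score(params)
        if s > best_score:
            best, best_score = params, s
    return best, best_score
```

Random search differs only in sampling a fixed number of combinations instead of enumerating them all, which is why the two are natural baselines for each other.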

17.
This article first presents several formulas of chance distributions for trapezoidal fuzzy random variables and their functions, then develops a new class of chance model (C-model for short) for data envelopment analysis (DEA) in fuzzy random environments, in which the inputs and outputs are assumed to be characterized by fuzzy random variables with known possibility and probability distributions. Since the objective and constraint functions contain the chance of fuzzy random events, for general fuzzy random inputs and outputs we suggest an approximation method to compute the chance. When the inputs and outputs are mutually independent trapezoidal fuzzy random variables, we can turn the chance constraints and the chance objective into their equivalent stochastic counterparts by applying the established formulas for the chance distributions. In the case when the inputs and outputs are mutually independent trapezoidal fuzzy random vectors, the proposed C-model can be transformed into an equivalent stochastic program, in which the objective and constraint functions include a number of standard normal distribution functions. To solve such an equivalent stochastic program, we design a hybrid algorithm integrating Monte Carlo (MC) simulation and a genetic algorithm (GA), in which MC simulation is used to calculate the standard normal distribution functions, and the GA is used to solve the optimization problems. Finally, a numerical example is presented to demonstrate the proposed modeling idea and the efficiency of the proposed model.

18.
This research framework investigates the application of a cluster-based neuro-fuzzy system to nonlinear dynamic system modeling from a set of input-output training patterns. It concentrates on modeling via the Takagi-Sugeno (T-S) technique and employs fuzzy clustering to generate suitable initial membership functions. The initial memberships so created are then employed to construct suitable T-S sub-models. Furthermore, the T-S fuzzy models have been validated and checked using standard model validation techniques (such as correlation functions). Compared with other well-known approximation techniques, such as artificial neural networks, fuzzy systems provide a more transparent representation of the system under study, mainly because of the possible linguistic interpretation in the form of rules. Such an intelligent modeling scheme is very useful for making complicated systems linguistically transparent in terms of fuzzy if-then rules. The developed T-S fuzzy modeling system has then been applied to model a nonlinear antenna dynamic system with two coupled inputs and outputs. Validation results yielded sub-models very close to the original nonlinear antenna system. The suggested technique is useful for developing transparent linear control systems even for highly nonlinear dynamic systems.

19.
The omega ratio, a performance measure, is the ratio of the expected upside deviation of return to the expected downside deviation of return from a predetermined threshold specified by an investor. It has been shown that omega ratio optimization is equivalent to a linear program under a mild condition and is thus easily tractable. But omega ratio optimization fails to hedge against many other risks in portfolio return that may adversely affect the interests of a risk-averse investor. On the other hand, there are widely accepted mean-risk models for portfolio selection that seek to maximize mean return and minimize the associated risk but in general fail to maximize the relative performance ratio around the threshold return. In this paper, we propose a model called 'extended omega ratio optimization' that combines the features of the omega ratio optimization model and mean-risk models. The proposed model introduces a constraint on a general risk function into the omega ratio optimization model in such a way that the resulting model remains linear and thus tractable. Our empirical experience with real data from the S&P BSE Sensex index shows that optimal portfolios from the extended omega ratio optimization model(s) improved on those from omega ratio optimization by having less associated risk, and on those from the corresponding mean-risk model(s) by having a higher omega ratio.
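The omega ratio itself is straightforward to compute over an empirical return sample; a minimal sketch (the paper's optimization models are not reproduced here):

```python
def omega_ratio(returns, threshold=0.0):
    """Omega(tau) = E[(R - tau)+] / E[(tau - R)+] over an empirical sample
    of returns, with threshold tau chosen by the investor."""
    up = sum(max(r - threshold, 0.0) for r in returns)
    down = sum(max(threshold - r, 0.0) for r in returns)
    return up / down if down else float("inf")
```

A value above 1 means expected gains beyond the threshold outweigh expected shortfalls below it; the optimization models in the paper search for portfolio weights that maximize this ratio subject to risk constraints.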

20.
The National Aerospace Laboratory (NAL) and the National Space Development Agency (NASDA) of Japan launched the hypersonic flight experiment vehicle (HYFLEX) in 1996 for a flight experiment on the reentry phase from the top of the atmosphere. The flight condition in the experiment varied from an altitude of 107 km and a speed of Mach 15 to an altitude of 30 km and a speed of Mach 2. This paper describes the design of the HYFLEX flight control system and evaluates its robustness against variations in command inputs and aerodynamic coefficients. A nonlinear simulation model describing the vehicle motion was built using data obtained from wind tunnel experiments by NAL and NASDA. Linear models are computed from the nonlinear model for every second of the flight, assuming quasi-trimmed flight. Because the vehicle has a cylinder-like configuration and takes a large angle of attack and bank angle, coupling between longitudinal and lateral-directional motions cannot be neglected; hence, the linear models include coupling terms. Averaging the linear models yields a nominal model for controller design, where a type-1 linear quadratic (LQ) servo controller is employed for attitude control. Since the flight condition varies so much, it is difficult to control the vehicle with a single set of control gains. Therefore the flight period is segmented into eight intervals; for each, a nominal linear model is computed, and eight sets of feedback gain matrices are computed and scheduled as a function of flight time. The effectiveness and robustness of the flight control system are examined through computer simulation. Simulation results indicate that the control system works well even when attitude commands and aerodynamic parameters deviate considerably from their nominal values.
