Similar Documents
Found 20 similar documents (search time: 0 ms)
1.
2.
Chord progressions are the building blocks from which tonal music is constructed. The choice of a particular representation for chords has a strong impact on statistical modeling of the dependence between chord symbols and the actual sequences of notes in polyphonic music. Melodic prediction is used in this paper as a benchmark task to evaluate the quality of four chord representations using two probabilistic model architectures derived from Input/Output Hidden Markov Models (IOHMMs). Likelihoods and conditional and unconditional prediction error rates are used as complementary measures of the quality of each of the proposed chord representations. We observe empirically that different chord representations are optimal depending on the chosen evaluation metric. Also, representing chords only by their roots appears to be a good compromise in most of the reported experiments.

3.
The predictive control developments in the literature, in particular those giving a priori stability guarantees, have often been based on realigned models whereas industrial packages have often made much use of independent models. However, the transferal of many of these stability results to the independent model case is not straightforward especially for the important case of unstable open-loop processes. This note shows how terminal constraints can be deployed with unstable independent models and illustrates briefly the necessity and benefits of these developments.

4.
Techniques for evaluating fault prediction models
Many statistical techniques have been proposed to predict fault-proneness of program modules in software engineering. Choosing the “best” candidate among many available models involves performance assessment and detailed comparison, but these comparisons are not simple due to the applicability of varying performance measures. Classifying a software module as fault-prone implies the application of some verification activities, thus adding to the development cost. Misclassifying a module as fault free carries the risk of system failure, also associated with cost implications. Methodologies for precise evaluation of fault prediction models should be at the core of empirical software engineering research, but have attracted sporadic attention. In this paper, we provide an overview of model evaluation techniques. In addition to many techniques that have been used in software engineering studies before, we introduce and discuss the merits of cost curves. Using the data from a public repository, our study demonstrates the strengths and weaknesses of performance evaluation techniques and points to a conclusion that the selection of the “best” model cannot be made without considering project cost characteristics, which are specific in each development environment.
Bojan Cukic
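The abstract's conclusion, that the "best" model depends on project cost characteristics, can be illustrated with a small sketch. The confusion-matrix counts and unit costs below are hypothetical, chosen only to show how the ranking of two models flips with the cost ratio:

```python
def expected_cost(tp, fp, fn, tn, c_fp, c_fn):
    """Expected misclassification cost per module, given unit costs of a
    false positive (needless verification) and a false negative (field failure)."""
    n = tp + fp + fn + tn
    return (fp * c_fp + fn * c_fn) / n

# Two hypothetical fault-prediction models evaluated on 200 modules.
model_a = dict(tp=40, fp=30, fn=10, tn=120)   # liberal: flags more modules
model_b = dict(tp=30, fp=10, fn=20, tn=140)   # conservative: flags fewer

# With equal unit costs, the conservative model looks better...
cheap_a = expected_cost(**model_a, c_fp=1, c_fn=1)
cheap_b = expected_cost(**model_b, c_fp=1, c_fn=1)
# ...but when a missed fault costs 10x a needless inspection, the ranking flips.
dear_a = expected_cost(**model_a, c_fp=1, c_fn=10)
dear_b = expected_cost(**model_b, c_fp=1, c_fn=10)
```

Cost curves generalize this comparison by sweeping over the whole range of cost ratios rather than evaluating a few points.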

5.
Network reliability models for determining optimal network topology have been presented and solved by many researchers. This paper presents some new types of topological optimization model for communication network with multiple reliability goals. A stochastic simulation-based genetic algorithm is also designed for solving the proposed models. Some numerical examples are finally presented to illustrate the effectiveness of the algorithm.
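As a generic illustration of the genetic-algorithm idea (not the paper's formulation), a minimal bitstring GA can search over candidate link subsets; in the paper's setting the fitness would combine stochastically simulated reliability with cost, both of which are omitted here in favor of a supplied fitness function:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, generations=60,
                      p_mut=0.02, seed=0):
    """Minimal bitstring GA sketch for topology search: each bit marks
    whether a candidate link is included. Truncation selection keeps the
    top half, so the best individual never degrades between generations."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([bit ^ (rng.random() < p_mut) for bit in child])
        pop = elite + children
    return max(pop, key=fitness)

# Sanity check on a toy objective (maximize the number of set bits).
best = genetic_algorithm(sum, n_bits=12)
```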

6.
We tackle the problem of new users or documents in collaborative filtering. Generalization over users by grouping them into user groups is beneficial when a rating is to be predicted for a relatively new document having only few observed ratings. Analogously, generalization over documents improves predictions in the case of new users. We show that when users, documents, or both are new, two-way generalization becomes necessary. We demonstrate the benefits of grouping of users, grouping of documents, and two-way grouping, with artificial data and in two case studies with real data. We introduce a probabilistic latent grouping model for predicting the relevance of a document to a user. The model assumes a latent group structure for both users and items. We compare the model against a state-of-the-art method, the User Rating Profile model, where only the users have a latent group structure. We compute the posterior of both models by Gibbs sampling. The Two-Way Model predicts relevance more accurately when the target consists of both new documents and new users. The reason is that generalization over documents becomes beneficial for new documents and at the same time generalization over users is needed for new users.

7.
Confidence interval prediction for neural network models
To derive an estimate of a neural network's accuracy as an empirical modeling tool, a method to quantify the confidence intervals of a neural network model of a physical system is desired. In general, a model of a physical system has error associated with its predictions due to the dependence of the physical system's output on uncontrollable or unobservable quantities. A confidence interval can be computed for a neural network model with the assumption of normally distributed error for the neural network. The proposed method accounts for the accuracy of the data with which the neural network model is trained.
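Under the stated normal-error assumption, an interval around a network's point prediction can be sketched from held-out residuals; this is a generic illustration, not the paper's exact estimator, and the 1.96 factor assumes a 95% interval with a well-estimated variance:

```python
import math

def confidence_interval(pred, residuals, z=1.96):
    """Approximate 95% CI for a point prediction, assuming i.i.d. zero-mean
    normal errors with variance estimated from held-out residuals."""
    n = len(residuals)
    sigma = math.sqrt(sum(r * r for r in residuals) / (n - 1))
    return pred - z * sigma, pred + z * sigma

lo, hi = confidence_interval(5.0, [1.0, -1.0, 2.0, -2.0])
```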

8.
Supporting human decision-making is a major goal of data mining. The more decision-making is critical, the more interpretability is required in the predictive model. This paper proposes a new framework to build a fully interpretable predictive model for questionnaire data, while maintaining a reasonable prediction accuracy with regard to the final outcome. Such a model has applications in project risk assessment, in healthcare, in social studies, and, presumably, in any real-world application that relies on questionnaire data for informative and accurate prediction. Our framework is inspired by models in item response theory (IRT), which were originally developed in psychometrics with applications to standardized academic tests. We extend these models, which are essentially unsupervised, to the supervised setting. For model estimation, we introduce a new iterative algorithm by combining Gauss–Hermite quadrature with an expectation–maximization algorithm. The learned probabilistic model is linked to the metric learning framework for informative and accurate prediction. The model is validated by three real-world data sets: Two are from information technology project failure prediction and the other is an international social survey about people’s happiness. To the best of our knowledge, this is the first work that leverages the IRT framework to provide informative and accurate prediction on ordinal questionnaire data.
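The Gauss–Hermite building block can be illustrated on its own: expectations over a standard-normal latent trait, such as the marginal likelihood integrals arising in IRT estimation, are approximated by a weighted sum over quadrature nodes. A minimal sketch (the change of variables handles the fact that the nodes target the weight function e^(-x²)):

```python
import math
import numpy as np

def normal_expectation(f, n_points=20):
    """E[f(theta)] for theta ~ N(0, 1) via Gauss-Hermite quadrature.
    The nodes/weights target integrals against exp(-x^2), so substitute
    theta = sqrt(2) * x and rescale by 1 / sqrt(pi)."""
    x, w = np.polynomial.hermite.hermgauss(n_points)
    return float(np.sum(w * f(np.sqrt(2.0) * x)) / math.sqrt(math.pi))
```

For instance, `normal_expectation(lambda t: t**2)` recovers the variance of the standard normal, and smooth integrands such as `np.exp` are handled accurately with a modest number of nodes; inside an EM loop the same sum approximates the E-step integral at each iteration.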

9.
This paper describes a novel modelling approach based on a hybrid structure developed for predicting the material properties of aluminium alloys for different deformation conditions. The model is based on physical equations and neuro-fuzzy models. The paper describes the methodology for developing the hybrid model and the validation process which covers a wide range of alloys, treatment temperatures and deformation conditions (e.g. plane strain compression (PSC) tests, strain rate).

10.
This paper presents a study of three forecasting models: a multilayer perceptron, a support vector machine, and a hierarchical model. The hierarchical model is made up of a self-organizing map and a support vector machine, the latter on top of the former. The models are trained and assessed on a time series of a Brazilian stock market fund. The results from the experiments show that the performance of the hierarchical model is better than that of the support vector machine, and much better than that of the multilayer perceptron.

11.
A summary is presented of a study on two-dimensional linear prediction models for image sequence processing and its application to change detection and scene coding. The study focused on two-dimensional joint process modeling of interframe relationships, the derivation of computationally efficient matching algorithms, and the implementation of a block-adaptive interframe predictor for use in interframe predictive coding and change detection. In the approach presented, the spatial nonstationarity is handled by an underlying quadtree segmentation structure. A maximum-likelihood criterion and a simpler minimum-variance criterion are discussed as detection and segmentation rules. The results of this research indicate that a constrained joint process model involving only a single gain parameter and a shift parameter is the best tradeoff between performance and computational complexity.

12.
The hybrid grey-based models for temperature prediction
In this paper several grey-based models are applied to temperature prediction problems. Standard normal distribution, linear regression, and fuzzy techniques are respectively integrated into the grey model to enhance the prediction capability of the embedded GM(1, 1), a single variable first order grey model. The original data are preprocessed by the statistical method of standard normal distribution such that they become normally distributed with a mean of zero and a standard deviation of one. The normalized data are then used to construct the grey model. Due to the inherent error between the predicted and actual outputs, the grey model is further supplemented by either the linear regression or fuzzy method or both to improve the prediction accuracy. Results from predicting the monthly temperatures for two different cities demonstrate that each proposed hybrid methodology can somewhat reduce the prediction errors. When both the statistical and fuzzy methods are incorporated with the grey model, the prediction capability of the hybrid model is quite satisfactory. We repeat the prediction problems with neural networks and the results are also presented for comparison.
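The embedded GM(1,1) model itself is compact enough to sketch: accumulate the series, fit the grey differential equation's parameters by least squares against the background values, and invert the accumulation to forecast. This is the textbook form, without the statistical, regression, or fuzzy enhancements the paper adds:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Single-variable first-order grey model GM(1,1).
    x0: positive 1-D series. Returns the fitted series plus `steps`
    out-of-sample forecasts (assumes the fitted development coefficient
    `a` is nonzero)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

out = gm11_forecast([1.0, 2.0, 3.0, 4.0], steps=2)
```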

13.
The molecular holographic distance vector method was used to describe the molecular structures of 40 aminoquinoline compounds, and principal component regression was used to build quantitative structure-activity relationship models for predicting their antiplasmodial activity. For the two sets of activity data, the resulting correlation coefficients were 0.9438 and 0.9737, and the cross-validation correlation coefficients were 0.8305 and 0.9098, respectively. These results indicate that the multi-parameter models are stable and can predict the antiplasmodial activity of aminoquinoline drugs well, providing a solid basis for guiding the design of new antimalarial drugs with high efficacy and low toxicity.
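Principal component regression of the kind used here can be sketched with a plain SVD: project centred descriptors onto the leading principal components, then fit ordinary least squares in the reduced space. The descriptors and data below are illustrative, not the paper's:

```python
import numpy as np

def pcr_fit_predict(X, y, n_components, X_new):
    """Principal component regression sketch: SVD of the centred descriptor
    matrix gives the loadings; regress y on the leading component scores."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                   # loadings of top components
    T = Xc @ V                                # training scores
    design = np.column_stack([np.ones(len(T)), T])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    T_new = (X_new - mean) @ V
    return np.column_stack([np.ones(len(T_new)), T_new]) @ coef

# Toy descriptors with an exactly linear activity: y = 1 + 2*x1 + 3*x2.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 3.0, 4.0, 6.0])
pred = pcr_fit_predict(X, y, n_components=2, X_new=np.array([[2.0, 2.0]]))
```

With fewer components than descriptors, the projection discards low-variance directions, which is what stabilizes the multi-parameter model when descriptors are collinear.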

14.
The emotional feedback evoked by the topics of a text is correlated with user characteristics. To fully exploit the value of user features and improve the accuracy of emotion prediction, this work adds a sampling layer for user-feature information to the two-level topic models MSTM and SLTM, yielding the user-feature-based three-level "user-topic-emotion" models UMSTM and USLTM. The effect of user features on emotion prediction is examined by comparing the three-level models with the baseline models on two measures: top-emotion hit rate and the correlation coefficient of predicted emotion probabilities. Experiments confirm that UMSTM and USLTM improve on MSTM and SLTM under both measures.

15.
Cloud computing allows dynamic resource scaling for enterprise online transaction systems, one of the key characteristics that differentiates the cloud from the traditional computing paradigm. However, initializing a new virtual instance in a cloud is not instantaneous; cloud hosting platforms introduce several minutes delay in the hardware resource allocation. In this paper, we develop prediction-based resource measurement and provisioning strategies using Neural Network and Linear Regression to satisfy upcoming resource demands. Experimental results demonstrate that the proposed technique offers more adaptive resource management for applications hosted in the cloud environment, an important mechanism to achieve on-demand resource allocation in the cloud.
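A minimal version of the prediction-based provisioning idea: fit a trend to the recent utilization window and provision for the forecast several intervals ahead, so capacity is requested before the multi-minute instance start-up delay elapses. Only the linear-regression variant is sketched; the neural-network variant and the headroom factor are assumptions of this sketch:

```python
import numpy as np

def provision_ahead(history, lead, headroom=1.2):
    """Forecast resource demand `lead` intervals ahead with a linear trend
    fitted to the recent utilization window, padded by a safety factor
    (hypothetical) so capacity is ready despite VM start-up delay."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    forecast = slope * (len(history) - 1 + lead) + intercept
    return headroom * max(forecast, 0.0)

# Utilization rising 10 units per interval; provision 2 intervals ahead.
capacity = provision_ahead([10.0, 20.0, 30.0, 40.0], lead=2)
```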

16.
Ergonomics, 2012, 55(12): 1419-1429
The main objective of this research was to compare three representative methods of predicting the compressive forces on the lumbosacral disc: LP-based model, double LP-based model, and EMG-assisted model. Two subjects simulated lifting tasks that are frequently performed in the refractories industry of Korea, in which vertical and lateral distances, and weight of load were varied. To calculate the L5/S1 compressive forces, EMG signals from six trunk muscles were measured, and postural data and locations of load were recorded using the Motion Analysis System. The EMG-assisted model was shown to reflect well all three factors considered here, whereas the compressive forces from the two LP-based models were only significantly affected by weight of load. In addition, low lifting index (LI) values were observed for relatively high L5/S1 compressive forces from the EMG-assisted model, suggesting that the 1991 NIOSH lifting equations may not fully evaluate the risk of dynamic asymmetric lifting tasks.

17.
Data-driven techniques such as Auto-Regressive Moving Average (ARMA), K-Nearest-Neighbors (KNN), and Artificial Neural Networks (ANN), are widely applied to hydrologic time series prediction. This paper investigates different data-driven models to determine the optimal approach of predicting monthly streamflow time series. Four sets of data from different locations of People’s Republic of China (Xiangjiaba, Cuntan, Manwan, and Danjiangkou) are applied for the investigation process. Correlation integral and False Nearest Neighbors (FNN) are first employed for Phase Space Reconstruction (PSR). Four models, ARMA, ANN, KNN, and Phase Space Reconstruction-based Artificial Neural Networks (ANN-PSR) are then compared by one-month-ahead forecast using Cuntan and Danjiangkou data. The KNN model performs the best among the four models, but only exhibits weak superiority to ARMA. Further analysis demonstrates that a low correlation between model inputs and outputs could be the main reason to restrict the power of ANN. A Moving Average Artificial Neural Networks (MA-ANN), using the moving average of streamflow series as inputs, is also proposed in this study. The results show that the MA-ANN has a significant improvement on the forecast accuracy compared with the original four models. This is mainly due to the improvement of correlation between inputs and outputs depending on the moving average operation. The optimal memory lengths of the moving average were three and six for Cuntan and Danjiangkou, respectively, when the optimal model inputs are recognized as the previous twelve months.
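The moving-average preprocessing behind the MA-ANN improvement is simple to state: feed the network trailing averages of the streamflow series rather than raw values (window lengths of three and six months were reported optimal for the two stations). A minimal sketch of the transformation:

```python
def moving_average(series, window):
    """Trailing moving average of a series; the smoothed values replace
    the raw streamflow as model inputs, raising input-output correlation."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

smoothed = moving_average([1.0, 2.0, 3.0, 4.0, 5.0], window=3)
```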

18.
This paper introduces two robust forecasting models for efficient prediction of different exchange rates for future months ahead. These models employ Wilcoxon artificial neural network (WANN) and Wilcoxon functional link artificial neural network (WFLANN). The learning algorithms required to train the weights of these models are derived by minimizing a robust norm called Wilcoxon norm. These models offer robust exchange rate predictions in the sense that the training of weight parameters of these models is not influenced by outliers present in the training samples. The Wilcoxon norm considers the rank or position of an error value rather than its amplitude. Simulation-based experiments have been conducted using real life data and the results indicate that both models, unlike conventional models, demonstrate consistently superior prediction performance under different densities of outliers present in the training samples. Further, comparison of performance between the two proposed models reveals that both provide almost identical performance, but the latter involves lower computational complexity and is hence preferable to the WANN model.
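The Wilcoxon norm can be sketched directly from its definition as a rank-weighted sum of residuals: each residual is weighted by a bounded score depending only on its rank, so a single outlier contributes linearly rather than quadratically as under squared error. The linear score function below is the common textbook choice, assumed here rather than taken from the paper:

```python
import math

def wilcoxon_norm(errors):
    """Wilcoxon norm of a residual vector: sum of residuals weighted by
    the score a(i) = sqrt(12) * (i / (n + 1) - 1/2) of their rank i."""
    n = len(errors)
    order = sorted(range(n), key=lambda i: errors[i])
    norm = 0.0
    for rank, i in enumerate(order, start=1):
        score = math.sqrt(12.0) * (rank / (n + 1) - 0.5)
        norm += score * errors[i]
    return norm
```

Because the scores are symmetric about zero, a symmetric residual vector like `[-1, 0, 1]` yields a small norm, and shrinking the spread of residuals shrinks the norm, which is what gradient-based training minimizes.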

19.
Many current technological challenges require the capacity of forecasting future measurements of a phenomenon. This, in most cases, leads directly to solving a time series prediction problem. Statistical models are the classical approaches for tackling this problem. More recently, neural approaches such as Backpropagation, Radial Basis Functions and recurrent networks have been proposed as an alternative. Most neural-based predictors have chosen a global modelling approach, which tries to approximate a goal function adjusting a unique model. This philosophy of design could present problems when data is extracted from a phenomenon that continuously changes its operational regime or represents distinct operational regimes in an unbalanced manner. In this paper, two alternative neural-based local modelling approaches are proposed. Both follow the divide and conquer principle, splitting the original prediction problem into several subproblems, adjusting a local model for each one. In order to check their adequacy, these methods are compared with other global and local modelling classical approaches using three benchmark time series and different sizes (medium and high) of training data sets. As shown, both models prove to be useful pragmatic paradigms to improve forecasting accuracy, with the advantages of a relatively low computational time and scalability to data set size.
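The divide-and-conquer principle the authors follow can be sketched with the simplest possible local models: partition the input range into regions and fit an independent linear model per region, a stand-in for the paper's neural local models:

```python
import numpy as np

def fit_local_models(x, y, n_regions):
    """Split the input range into equal segments and fit one linear model
    per segment instead of a single global model."""
    edges = np.linspace(x.min(), x.max(), n_regions + 1)
    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        models.append((lo, hi, np.polyfit(x[mask], y[mask], 1)))
    return models

def predict_local(models, x_new):
    """Evaluate the local model whose region contains x_new."""
    for lo, hi, coefs in models:
        if lo <= x_new <= hi:
            return np.polyval(coefs, x_new)
    # outside the training range: fall back to the nearest region's model
    lo, hi, coefs = models[0] if x_new < models[0][0] else models[-1]
    return np.polyval(coefs, x_new)

# Piecewise-linear data a single global line cannot fit: y = x, then y = 2 - x.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0])
y = np.where(x <= 1.0, x, 2.0 - x)
models = fit_local_models(x, y, n_regions=2)
```

With two regions each local fit is exact here, whereas any single global line would leave large residuals on one regime or the other.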

20.
The design, operation, and control of chemical separation processes heavily rely on the knowledge of the vapor-liquid equilibrium (VLE). Often, conducting experiments to gain an insight into the separation behavior becomes tedious and expensive. Thus, standard thermodynamic models are used in the VLE prediction. Sometimes, exclusively data-driven models are also used in VLE prediction although this method too possesses drawbacks such as a trial and error approach in specifying the data-fitting function. For overcoming these difficulties, this paper employs a machine learning (ML) formalism namely “genetic programming (GP)” possessing certain attractive features for the VLE prediction. Specifically, three case studies have been performed wherein GP-based models have been developed using experimental data, for predicting the vapor phase composition of a ternary, and a group of non-ideal binary systems. The inputs to the models consist of three pure-component attributes (acentric factor, critical temperature, and critical pressure), and three intensive thermodynamic parameters (liquid phase composition, pressure, and temperature). A comparison of the VLE prediction and generalization performance of the GP-based models with the corresponding standard thermodynamic models reveals that the former class of models possesses either superior or closely comparable performance vis-à-vis thermodynamic models. Noteworthy features of this study are: (i) a single GP-based model can predict VLE of a group of binary systems, and (ii) applicability of a GP-based model trained on an alcohol-acetate series data for its higher homolog. The VLE modeling approach exemplified here can be gainfully extended to other ternary and non-ideal binary systems, and for designing corresponding experiments in different pressure and temperature ranges.
