Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
A common statistical model for paired comparisons is the Bradley–Terry model. This research re-parameterizes the Bradley–Terry model as a single-layer artificial neural network (ANN) and shows how it can be fitted using the delta rule. The ANN model is appealing because it makes using and extending the Bradley–Terry model accessible to a broader community. It also leads to natural incremental and iterative updating methods. Several extensions are presented that allow the ANN model to learn to predict the outcome of complex, uneven two-team group competitions by rating individuals; no other published model currently does this. An incremental-learning Bradley–Terry ANN yields probability estimates within 5% of the actual values after training on 3,379 multi-player online matches of a popular team- and objective-based first-person shooter.
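As a rough illustration of the idea (not the paper's exact network or hyper-parameters), the Bradley–Terry win probability is a logistic function of the rating difference, and the delta rule nudges ratings in proportion to the prediction error. The learning rate, player names, and toy match stream below are assumptions:

```python
import math

def bt_prob(r_i, r_j):
    # Bradley-Terry win probability of i over j, with ratings as log-strengths
    return 1.0 / (1.0 + math.exp(-(r_i - r_j)))

def delta_update(ratings, i, j, outcome, lr=0.1):
    # Delta rule: move both ratings by (observed - predicted) * learning rate
    p = bt_prob(ratings[i], ratings[j])
    err = outcome - p
    ratings[i] += lr * err
    ratings[j] -= lr * err

ratings = {"a": 0.0, "b": 0.0}
for _ in range(200):
    delta_update(ratings, "a", "b", 1.0)  # "a" keeps winning
print(round(bt_prob(ratings["a"], ratings["b"]), 2))
```

Because each match updates the ratings in place, the same loop works incrementally as new match results arrive, which is the property the abstract highlights.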

2.
A challenge in building pervasive and smart spaces is to learn and recognize human activities of daily living (ADLs). In this paper, we address this problem and argue that in dealing with ADLs, it is beneficial to exploit both their typical duration patterns and inherent hierarchical structures. We exploit efficient duration modeling using the novel Coxian distribution to form the Coxian hidden semi-Markov model (CxHSMM) and apply it to the problem of learning and recognizing ADLs with complex temporal dependencies. The Coxian duration model has several advantages over existing duration parameterizations using multinomial or exponential family distributions, including its denseness in the space of nonnegative distributions, low number of parameters, computational efficiency and the existence of closed-form estimation solutions. Further, we combine both hierarchical and duration extensions of the hidden Markov model (HMM) to form the novel switching hidden semi-Markov model (SHSMM), and empirically compare its performance with existing models. The model can learn what an occupant normally does during the day from unsegmented training data and then perform online activity classification, segmentation and abnormality detection. Experimental results show that Coxian modeling outperforms a range of baseline models for the task of activity segmentation. We also achieve recognition accuracy competitive with the current state-of-the-art multinomial duration model, while gaining a significant reduction in computation. Furthermore, cross-validation model selection on the number of phases K in the Coxian indicates that only a small K is required to achieve optimal performance. Finally, our models are further tested in a more challenging setting in which the tracking is often lost and the activities considerably overlap. With a small amount of labeled data supplied during training in a partially supervised mode, our models again deliver reliable performance with a small number of phases, making the proposed framework an attractive choice for activity modeling.
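A Coxian distribution is a chain of exponential phases with possible early absorption after each phase. A minimal sampler conveys the structure; the two-phase rates and continuation probability below are illustrative, not values from the paper:

```python
import random

def sample_coxian(rates, cont_probs):
    # One draw from a Coxian distribution: pass through exponential phases
    # with the given rates; after phase k, continue to phase k+1 with
    # probability cont_probs[k], otherwise absorb (stop).
    t = 0.0
    for k, rate in enumerate(rates):
        t += random.expovariate(rate)
        if k == len(rates) - 1 or random.random() >= cont_probs[k]:
            break
    return t

random.seed(0)
samples = [sample_coxian([2.0, 1.0], [0.5]) for _ in range(10000)]
print(round(sum(samples) / len(samples), 2))
```

For these parameters the theoretical mean duration is 1/2 + 0.5 * 1 = 1.0, so the sample mean should land close to 1. The small number of free parameters (one rate per phase plus continuation probabilities) is the economy the abstract refers to.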

3.

The drift capacity of reinforced concrete (RC) columns is a crucial factor in displacement-based and seismic design procedures for RC structures, since columns must withstand the applied loads or dissipate energy through deformation and ductility. Given the high cost of testing methods for observing the drift capacity and ductility of RC structural members, and the large number of influential parameters, numerical analyses and predictive modeling techniques have been widely adopted by researchers and engineers in this field. This study provides an alternative approach, linear genetic programming (LGP), for predictive modeling of the lateral drift capacity (Δmax) of circular RC columns. A new model is developed by LGP incorporating the key variables in the experimental database employed, as well as those used in well-known models presented by various researchers. The LGP model is examined from various perspectives. Comparison of the results with those obtained by previously proposed models confirms the precision of the LGP model in estimating Δmax. The results reveal that the LGP model outperforms the existing models in predictability and performance and can be used for further engineering purposes, supporting the applicability of the LGP technique for numerical analysis and modeling of complicated engineering problems.


4.
Antilock braking systems (ABS), traction control systems, etc. are used in modern automobiles for enhanced safety and reliability. An autonomous ABS can take over the traction control of the vehicle either completely or partially. An antilock braking system using an on–off control strategy to maintain the wheel slip within a predefined range is studied here. The controller design needs to be integrated with the vehicle dynamics model. A single-wheel or bicycle vehicle model considers only constant normal loading on the wheels. On the other hand, a four-wheel vehicle model that accounts for dynamic normal loading on the wheels and generates correct lateral forces is suitable for reliable brake system design. This paper describes an integrated vehicle braking system dynamics and control modeling procedure for a four-wheel vehicle. The vehicle system comprises several energy domains. The interdisciplinary modeling technique called bond graph is used to integrate models in different energy domains and control systems. The bond graph model of the integrated vehicle dynamic system is developed in a modular and hierarchical modeling environment and is simulated to evaluate the performance of the ABS under various operating conditions.
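The on–off strategy itself is simple hysteresis on the wheel slip. The slip bounds below (10–20%) are typical textbook values, not the paper's; the full model would couple this logic to the bond graph vehicle dynamics:

```python
def wheel_slip(v_vehicle, v_wheel):
    # Longitudinal slip during braking: (vehicle speed - wheel rim speed) / vehicle speed
    return (v_vehicle - v_wheel) / max(v_vehicle, 1e-9)

def abs_on_off(slip, braking=True, lower=0.1, upper=0.2):
    # On-off logic with hysteresis: release the brake when slip exceeds the
    # upper bound, re-apply it below the lower bound, otherwise hold state.
    if slip > upper:
        return False   # release
    if slip < lower:
        return True    # apply
    return braking     # inside the band: keep the current state

# vehicle at 20 m/s, wheel rim speed 14 m/s -> slip 0.3, so release the brake
print(abs_on_off(wheel_slip(20.0, 14.0)))
```

The hysteresis band prevents rapid chattering between apply and release when the slip sits near a single threshold, which is why two bounds are used instead of one.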

5.

6.
Most methods of selecting an appropriate log-linear model for categorical data are sensitive to the underlying distributional assumptions. However, there are many situations in which the assumption that the data are randomly chosen from an underlying Poisson, multinomial or product-multinomial distribution cannot be sustained. In these cases we propose a criterion to select among log-linear models that is an analogue of the Cp statistic for regression models and describe a method to estimate the denominator of this statistic.
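For reference, the classical Mallows' Cp that the proposed criterion is an analogue of compares a submodel's residual sum of squares against the full model's error variance estimate, with a penalty for model size. The numbers below are made up purely for illustration:

```python
def mallows_cp(sse_p, p, sigma2_hat, n):
    # Mallows' Cp for a regression submodel with p parameters:
    # SSE of the submodel scaled by the full-model error variance
    # estimate (the "denominator"), penalized for model size.
    return sse_p / sigma2_hat - n + 2 * p

# a well-fitting p-parameter submodel should give Cp close to p;
# here Cp = 48/2 - 20 + 6 = 10, well above p = 3, suggesting lack of fit
print(mallows_cp(sse_p=48.0, p=3, sigma2_hat=2.0, n=20))
```

The paper's contribution is precisely that in the log-linear setting, without a trustworthy sampling distribution, the denominator (the analogue of sigma2_hat) must itself be estimated.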

7.

For the first time, analytical modeling of a nanoscale work-function-engineered-gate recessed-S/D SOI MOSFET including quantum mechanical effects is presented, based on the solution of the 1-D Schrödinger and 2-D Poisson equations. As classical models are insufficient in the nanoscale regime, the quantization effect is incorporated in this model to explore the actual potential profile along the film thickness. An extensive calculation with proper boundary conditions is carried out to solve the 2-D Poisson equation for the device parameters. The quantum deviation of the threshold voltage is calculated from the classical model, and the two are added to obtain the final quantum threshold voltage. Channel length modulation is also taken into consideration in the drain current modeling for this structure. A comparative study based on threshold voltage, drain current, transconductance and drain conductance is presented for the classical and quantum models. The results are also compared with simulations from the SILVACO DeckBuild device simulator (Deck Editor Version 4.2.5.R) to validate the proposed model.


8.
9.

In this work, the performance of a rapid prototyping (RP) based rapid tool is investigated during electrical discharge machining (EDM) of titanium as the workpiece using EDM 30 oil as the dielectric medium. Selective laser sintering, an RP technique, is used to produce the tool electrode made of AlSi10Mg. The performance of the rapid tool is compared with conventional solid copper and graphite tool electrodes. The machining performance measures considered in this study are material removal rate, tool wear rate and surface integrity of the machined surface, measured in terms of average surface roughness (Ra), white layer thickness, surface crack density and micro-hardness of the white layer. Since the machining process is a complex one, the potential of a predictive tool such as the least squares support vector machine is explored to provide guidelines for practitioners to predict the various machining performance measures before actual machining. The predictive model is robust, with root mean square errors in the range 0.11–0.34 for the various performance measures. A hybrid optimization technique, desirability-based grey relational analysis combined with the firefly algorithm, is adopted to simultaneously optimize the performance measures. It is observed that peak current and tool type are the significant parameters influencing all the performance measures.
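The grey relational analysis step reduces multiple performance measures to a single grade. A minimal sketch of the standard grade computation follows; the sequences and the conventional distinguishing coefficient zeta = 0.5 are illustrative, not the paper's data:

```python
def grey_relational_grade(reference, comparison, zeta=0.5):
    # Grey relational coefficients between a normalized reference sequence
    # (the ideal) and a comparison sequence (one experimental setting),
    # averaged into a single grade in (0, 1]; higher means closer to ideal.
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    dmin, dmax = min(deltas), max(deltas)
    coeffs = [(dmin + zeta * dmax) / (d + zeta * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)

# three normalized performance measures: ideal vs. one machining setting
grade = grey_relational_grade([1.0, 0.8, 0.6], [0.9, 0.7, 0.6])
print(round(grade, 3))
```

In the hybrid scheme described in the abstract, a desirability transform would normalize each raw measure before grading, and the firefly algorithm would search the process parameters that maximize this grade.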


10.
This paper examines various constraints of networked control systems (NCSs) such as network-induced random delays, successive packet dropouts and Poisson noise. Time delays are represented as modes of a Markov chain and successive packet dropouts are modeled using a Poisson probability distribution. For each delay mode, a separate Poisson distribution is used with the help of an indicator function. Poisson noise is incorporated in the design to account for sudden network link failures and power shutdowns. After modeling these constraints, a stability criterion is proposed using a Lyapunov–Krasovskii functional. On the basis of this stability criterion, sufficient conditions for the existence of a robust H∞ state feedback controller are given in terms of bilinear matrix inequalities (BMIs). The BMIs are then converted into quasi-convex linear matrix inequalities (LMIs) and solved using a cone complementarity linearization algorithm. The effectiveness of the proposed scheme is illustrated with two simulation examples. Moreover, the effects of successive packet dropouts and Poisson noise on H∞ performance are analyzed.
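The delay/dropout model can be sketched as a two-state Markov chain over delay modes, with a mode-dependent Poisson count of successive dropouts at each step. The transition probabilities and dropout rates below are invented for illustration, and the Poisson draw uses Knuth's method:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below e^(-lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_delays_and_drops(steps, trans, drop_lambda, rng):
    # trans[m] = probability of moving to delay mode 0 from mode m;
    # drop_lambda[m] = Poisson rate of successive dropouts in mode m
    mode, delays, drops = 0, [], []
    for _ in range(steps):
        delays.append(mode)
        drops.append(poisson_sample(drop_lambda[mode], rng))
        mode = 0 if rng.random() < trans[mode] else 1
    return delays, drops

rng = random.Random(42)
delays, drops = simulate_delays_and_drops(
    1000, trans=[0.9, 0.3], drop_lambda=[0.2, 1.0], rng=rng)
print(len(delays), sum(drops) / len(drops))
```

With these rates the chain spends about 75% of its time in the low-delay mode, so the long-run mean dropout count per step is roughly 0.75 * 0.2 + 0.25 * 1.0 = 0.4. The controller synthesis itself (the BMI/LMI machinery) is beyond a sketch like this.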

11.
12.
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
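The base model being extended is the ordinary log-linear classifier: class scores linear in the features, normalized by a softmax. A minimal sketch (toy weights and features, not the paper's image models; the latent-variable extension would sum such terms over mixture components or deformations):

```python
import math

def log_linear_probs(features, weights):
    # Log-linear model: score(c) = w_c . x, normalized with a softmax.
    # Subtracting the max score keeps the exponentials numerically stable.
    scores = {c: sum(w * f for w, f in zip(ws, features))
              for c, ws in weights.items()}
    z = max(scores.values())
    exps = {c: math.exp(s - z) for c, s in scores.items()}
    total = sum(exps.values())
    return {c: e / total for c, e in exps.items()}

probs = log_linear_probs([1.0, 2.0], {"a": [1.0, 0.0], "b": [0.0, 1.0]})
print(round(probs["b"], 3))
```

In the mixture extension, p(c|x) becomes a sum over latent components of such softmax terms, which is what makes alternating optimization (over component assignments and weights) the natural training scheme.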

13.

This paper investigates the size scale effect on the buckling and post-buckling of a single-walled carbon nanotube (SWCNT) resting on nonlinear elastic foundations using an energy-equivalent model (EEM). CNTs are modelled as a beam with higher-order shear deformation to capture the shear effect and eliminate the shear correction factor, which appears in Timoshenko beam theory and is absent in Euler–Bernoulli beam theory. The energy-equivalent model is proposed to bridge the chemical energy between atoms with the mechanical strain energy of the beam structure. Accordingly, Young's and shear moduli and Poisson's ratio for zigzag (n, 0) and armchair (n, n) carbon nanotubes (CNTs) are presented as functions of orientation and force constants. The conservation of energy principle is exploited to derive the governing equations of motion in terms of the primary displacement variable. The differential–integral quadrature method (DIQM) is used to discretize the problem in the spatial domain and transform the integro-differential equilibrium equations into algebraic equations. The static problem is solved for critical buckling loads and post-buckling deformation as functions of the applied axial load, CNT length, orientation and elastic foundation parameters. Numerical results show that the effects of chirality angle, boundary conditions, tube length and elastic foundation constants on the buckling and post-buckling behavior of armchair and zigzag CNTs are significant. This model is helpful especially in the mechanical design of NEMS manufactured from CNTs.


14.
We develop computationally intensive Bayesian methods to estimate the size of a closed population and apply these methods to estimate the number of children born in upstate New York with spina bifida from 1969 to 1974. The names of these children may appear on three different administrative lists: medical, birth, and death records. We assume diffuse prior distributions on the marginal probabilities of a name appearing on each record and on the various odds ratios modeling the interactions of these lists. Samples from the posterior distribution are generated using a modified sample-resample technique. A Bayesian log-linear model is also developed, and its posterior distribution is sampled from a Markov chain generated using the Metropolis algorithm. These two approaches are compared in terms of their interpretability and computational complexity.
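To convey the flavor of Metropolis sampling for a closed population size, here is a deliberately simplified two-list (capture-recapture) sketch with a flat prior, not the paper's three-list log-linear model with interaction terms; the counts n1, n2, m are invented:

```python
import math
import random

def log_lik(N, n1, n2, m):
    # Relative log-likelihood of m recaptures when n2 individuals are drawn
    # from a closed population of size N containing n1 previously listed ones
    # (hypergeometric terms that depend on N only).
    if N < n1 + n2 - m:
        return float("-inf")
    return (math.lgamma(N - n1 + 1) - math.lgamma(N - n1 - n2 + m + 1)
            - math.lgamma(N + 1) + math.lgamma(N - n2 + 1))

def metropolis_N(n1, n2, m, steps, rng):
    # Random-walk Metropolis over the integer population size, flat prior
    N = n1 + n2  # start above the minimum feasible size n1 + n2 - m
    chain = []
    for _ in range(steps):
        prop = N + rng.randint(-10, 10)
        if math.log(rng.random()) < log_lik(prop, n1, n2, m) - log_lik(N, n1, n2, m):
            N = prop
        chain.append(N)
    return chain

rng = random.Random(7)
chain = metropolis_N(n1=60, n2=50, m=20, steps=20000, rng=rng)
est = sum(chain[5000:]) / len(chain[5000:])  # posterior mean after burn-in
print(int(est))
```

The posterior concentrates near the Lincoln-Petersen point estimate n1*n2/m = 150. The paper's actual sampler works over log-linear interaction parameters for three lists, but the accept/reject mechanics are the same.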

15.
16.
Parameter uncertainty and sensitivity for a watershed-scale simulation model in Portugal were explored to identify the model parameters most critical to calibration and prediction. The research is intended to provide guidance on allocating limited data collection and model parameterization resources for modelers working in any data- and resource-limited environment. The watershed-scale hydrology and water quality simulation model Hydrologic Simulation Program – FORTRAN (HSPF) was used to predict the hydrology of the Lis River basin in Portugal. The model was calibrated for a 5-year period (1985–1989) and validated for a 4-year period (2003–2006). Agreement between simulated and observed streamflow data was satisfactory according to performance measures such as Nash–Sutcliffe efficiency (E), deviation of runoff (Dv) and coefficient of determination (R2). The Generalized Likelihood Uncertainty Estimation (GLUE) method was used to establish uncertainty bounds for the simulated flow using the Nash–Sutcliffe coefficient as a performance likelihood measure. Sensitivity analysis results indicate that runoff estimates are most sensitive to parameters related to climate conditions, soil and land use. These results suggest that even though climate conditions are generally most significant in water balance modeling, attention should also be given to land use characteristics. Specifically for HSPF, the two most sensitive parameters, INFILT and LZSN, are both directly dependent on soil and land use characteristics.
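The GLUE idea can be sketched in a few lines: score each Monte Carlo run with the Nash–Sutcliffe efficiency, keep the "behavioural" runs above a threshold, and take the envelope of their simulated values as rough uncertainty bounds. The tiny series, the 0.5 threshold, and the min/max envelope (rather than likelihood-weighted quantiles) are simplifying assumptions:

```python
def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations;
    # 1 is a perfect fit, values <= 0 are no better than the mean.
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    svar = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / svar

def glue_bounds(obs, simulations, threshold=0.5):
    # GLUE sketch: retain runs with NSE above the behavioural threshold,
    # then report the envelope of their values at each time step.
    kept = [sim for sim in simulations if nse(obs, sim) > threshold]
    return [(min(s[t] for s in kept), max(s[t] for s in kept))
            for t in range(len(obs))]

obs = [1.0, 2.0, 3.0, 2.0]
sims = [[1.1, 2.1, 2.9, 2.0],   # behavioural
        [0.9, 1.8, 3.2, 2.1],   # behavioural
        [3.0, 0.5, 1.0, 0.0]]   # rejected (NSE well below threshold)
bounds = glue_bounds(obs, sims)
print(bounds[0])
```

Full GLUE applications weight each behavioural run by its likelihood measure and report weighted quantiles, but the filter-then-envelope step above is the core of the method.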

17.
ABSTRACT

In this article, we modify the Mumford–Shah level-set model to handle speckle and blur in synthetic aperture radar (SAR) imagery. The proposed model is formulated in a non-local regularization framework and therefore preserves local gradient oscillations (corresponding to fine details and textures) during the evolution process. The speckle intensity is assumed to be gamma distributed when designing a maximum a posteriori estimator of the functional. The parameters of the gamma distribution (i.e. scale and shape) are estimated using a maximum likelihood estimator, and the regularization parameter of the model is evaluated adaptively from these estimates at each iteration. The split-Bregman iterative scheme is employed to improve the convergence rate of the model. The proposed and state-of-the-art despeckling models are experimentally verified and compared on a large number of speckled and blurred SAR images, with statistical quantifiers used to numerically evaluate performance.
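Estimating the gamma shape and scale from speckle samples can be sketched with the method of moments, which is also the usual starting point for the full maximum-likelihood fit the paper uses (the MLE for the shape has no closed form and needs a digamma-based iteration). The synthetic sample below stands in for observed speckle intensities:

```python
import random

def gamma_moment_estimates(samples):
    # Method-of-moments estimates for a gamma distribution:
    # shape = mean^2 / variance, scale = variance / mean.
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean * mean / var, var / mean

rng = random.Random(3)
data = [rng.gammavariate(4.0, 2.0) for _ in range(20000)]  # true shape 4, scale 2
shape, scale = gamma_moment_estimates(data)
print(round(shape, 1), round(scale, 1))
```

With 20,000 samples the estimates land close to the true (4.0, 2.0); in the despeckling model these per-image estimates then set the adaptive regularization parameter at each iteration.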

18.
A software reliability growth model is one of the fundamental techniques for assessing software reliability quantitatively. Such a model is required to perform well in terms of goodness-of-fit, predictability, and so forth. In this paper, we propose discretized software reliability growth models. In particular, discretized nonhomogeneous Poisson process models are investigated for accurate software reliability assessment. We show that the discretized nonhomogeneous Poisson process models perform better than the discretized deterministic software reliability growth models proposed so far.

19.
The concepts of faithfulness and strong-faithfulness are important for statistical learning of graphical models. Graphs are not sufficient for describing the association structure of a discrete distribution. Hypergraphs representing hierarchical log-linear models are considered instead, and the concept of parametric (strong-)faithfulness with respect to a hypergraph is introduced. The strength of association in a discrete distribution can be quantified with various measures, leading to different concepts of strong-faithfulness. It is proven that strong-faithfulness defined in terms of interaction parameters ensures the existence of uniformly consistent parameter estimators and enables building uniformly consistent procedures for a hypergraph search. Lower and upper bounds for the proportions of distributions that do not satisfy strong-faithfulness are computed for different parameterizations and measures of association.

20.

In recent years, the importance of computationally efficient surrogate models has grown as the use of high-fidelity simulation models increases. However, high-dimensional models require many samples for surrogate modeling. To reduce the computational burden of surrogate modeling, we propose an integrated algorithm that combines accurate variable selection with surrogate modeling. One of the main strengths of the proposed method is that it requires fewer samples than conventional surrogate modeling methods, by excluding dispensable variables while maintaining model accuracy. In the proposed method, the importance of selected variables is evaluated using the quality of the model approximated with the selected variables only. Nonparametric probabilistic regression is adopted as the modeling method to deal with the inaccuracy caused by modeling with only the selected variables. In particular, Gaussian process regression (GPR) is utilized because its model performance indices are well suited to the variable selection criterion. Outstanding variables that result in distinctly superior model performance are finally selected as essential variables. The proposed algorithm utilizes a conservative selection criterion and appropriate sequential sampling to prevent incorrect variable selection and sample overuse. Performance of the proposed algorithm is verified on two test problems with challenging properties such as high dimension, nonlinearity, and interaction terms. A numerical study shows that the proposed algorithm is more effective when the fraction of dispensable variables is high.
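As a crude stand-in for the paper's GPR-based model-quality criterion, variable selection can be illustrated by scoring candidate inputs against the response and keeping the strongest; here a simple correlation screen on synthetic data where one of two variables is dispensable (the variable names and data-generating process are invented):

```python
import random

def pearson(xs, ys):
    # Sample Pearson correlation between two equal-length sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def screen_variables(data, response, keep):
    # Rank candidate inputs by |correlation| with the response and keep
    # the strongest ones; the paper instead scores subsets by the quality
    # of a GPR surrogate built from those variables alone.
    scores = {name: abs(pearson(col, response)) for name, col in data.items()}
    return sorted(scores, key=scores.get, reverse=True)[:keep]

rng = random.Random(0)
x1 = [rng.random() for _ in range(500)]
x2 = [rng.random() for _ in range(500)]          # dispensable variable
noise = [rng.gauss(0.0, 0.05) for _ in range(500)]
y = [3.0 * a + e for a, e in zip(x1, noise)]     # response depends on x1 only
print(screen_variables({"x1": x1, "x2": x2}, y, keep=1))
```

A correlation screen misses interaction effects, which is exactly why the paper evaluates whole variable subsets through the surrogate's performance indices instead of scoring variables one at a time.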

