20 similar documents found (search time: 0 ms)
1.
M. Martinelli 《Computers & Fluids》2010,39(6):953-964
This article addresses the delicate issue of estimating physical uncertainties in aerodynamics. Flow simulations are usually performed in a fully deterministic way, although in real-life operation uncertainty arises from unpredictable factors that alter the flow conditions. In this article, we present and compare two methods to account for uncertainty in aerodynamic simulation. Firstly, automatic differentiation tools are used to estimate first- and second-order derivatives of aerodynamic coefficients with respect to uncertain variables, yielding estimates of expectation and variance (Method of Moments). Secondly, metamodelling techniques (radial basis functions, kriging) are employed in conjunction with Monte-Carlo simulations to derive statistical information. These methods are demonstrated on 3D Eulerian flows around the wing of a business aircraft at different regimes, subject to uncertain Mach number and angle of attack.
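The Method of Moments mentioned in this abstract propagates derivative information through a Taylor expansion of the output. As a minimal sketch (not the authors' implementation; the derivatives are supplied by hand here rather than by automatic differentiation, and the half-squared second-derivative variance term assumes a Gaussian input):

```python
def moment_estimates(f_mu, df_mu, d2f_mu, sigma2):
    """Second-order Taylor (Method of Moments) estimates of E[f(X)] and Var[f(X)].

    f_mu, df_mu, d2f_mu: f and its first/second derivatives at the input mean;
    sigma2: variance of the uncertain input (assumed Gaussian for the variance term).
    """
    mean = f_mu + 0.5 * d2f_mu * sigma2
    var = df_mu ** 2 * sigma2 + 0.5 * d2f_mu ** 2 * sigma2 ** 2
    return mean, var

# For a quadratic output f(x) = x**2 with X ~ N(2, 0.25) these estimates are
# exact: E[X^2] = mu^2 + sigma^2, Var[X^2] = 4*mu^2*sigma^2 + 2*sigma^4.
m, v = moment_estimates(f_mu=4.0, df_mu=4.0, d2f_mu=2.0, sigma2=0.25)
```

For a quadratic response the second-order expansion is exact, which makes it a convenient sanity check before applying the method to aerodynamic coefficients.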
2.
Jinwen Ma, Jianfeng Liu 《Pattern recognition》2009,42(11):2659-2670
Finite mixtures are widely used in information processing and data analysis. However, model selection, i.e., choosing the number of components in the mixture for a given sample data set, remains a rather difficult task. Recently, Bayesian Ying-Yang (BYY) harmony learning has provided a new approach to Gaussian mixture modeling with the favorable feature that model selection can be made automatically during parameter learning. In this paper, based on the same BYY harmony learning framework for finite mixtures, we propose an adaptive gradient BYY learning algorithm for Poisson mixtures with automated model selection. Simulation experiments demonstrate that this adaptive gradient BYY learning algorithm can automatically determine the number of actual Poisson components in a sample data set, with a good estimate of the parameters of the original or true mixture, provided the components are separated to a certain degree. Moreover, the adaptive gradient BYY learning algorithm is successfully applied to texture classification.
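The paper's contribution is BYY harmony learning with automatic selection of the number of components, which is beyond a short sketch. For orientation, here is plain EM for a Poisson mixture with a *fixed* number of components k (a standard baseline, not the BYY algorithm; the initialization scheme is an illustrative assumption):

```python
import math
import random

def poisson_mixture_em(data, k, iters=200, seed=0):
    """Plain EM for a k-component Poisson mixture (fixed k).

    Returns mixing weights pi and component rates lam.
    """
    rng = random.Random(seed)
    n = len(data)
    pi = [1.0 / k] * k
    lam = [rng.uniform(min(data) + 0.5, max(data) + 0.5) for _ in range(k)]
    for _ in range(iters):
        # E-step: responsibilities; the x! term of the Poisson pmf cancels
        resp = []
        for x in data:
            logw = [math.log(pi[j]) + x * math.log(lam[j]) - lam[j] for j in range(k)]
            m = max(logw)
            w = [math.exp(v - m) for v in logw]
            s = sum(w)
            resp.append([v / s for v in w])
        # M-step: re-estimate weights and rates from the responsibilities
        for j in range(k):
            nj = sum(r[j] for r in resp)
            pi[j] = nj / n
            lam[j] = sum(r[j] * x for r, x in zip(resp, data)) / max(nj, 1e-12)
    return pi, lam

# Two well-separated clusters of counts around rates ~2 and ~10
data = [1, 2, 2, 3, 1, 2] * 10 + [9, 10, 11, 10, 9, 12] * 10
pi, lam = poisson_mixture_em(data, k=2)
```

The BYY harmony approach differs in that a component's mixing weight can be driven to zero during learning, discarding the component and thereby selecting k automatically.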
3.
Spatially explicit demographic models are increasingly being used to forecast the effect of global change on the range dynamics of species. These models are typically complex, with the structure and parameter values often estimated with considerable uncertainty. If not properly accounted for, this can lead to bias or false precision in projections of changes to species range dynamics and extinction risk. Here we present a new open-source freeware tool, “Sensitivity Analysis of Range Dynamics Models” (SARDM), that provides an all-in-one approach for: (i) determining the implications of integrating complex and often uncertain information into spatially explicit demographic models compiled in RAMAS GIS, and (ii) identifying and ranking the relative importance of different sources of parameter uncertainty. The sensitivity and uncertainty analysis techniques built into SARDM will help ecologists and conservation scientists establish confidence in forecasts of range movement and abundance.
4.
To further the understanding of the classification principle of a hydraulic overflow classifier, the Mixture model (a two-fluid model) was adopted and the Fluent software was used to simulate the classification of silicon carbide micro-powder in a hydraulic overflow classifier 1.8 m in diameter. The spatial and temporal distribution of the volume fraction of particles of different sizes inside the classifier was studied, and the particle size distributions of the settled and overflow particles at different classification times were examined. The simulation results show that fine particles smaller than 3.36 μm in diameter and particles with diameters in the range of 3...
5.
Modeling distribution of Amazonian tree species and diversity using remote sensing measurements
Sassan Saatchi Wolfgang Buermann Hans ter Steege Scott Mori Thomas B. Smith 《Remote sensing of environment》2008,112(5):2000-2017
The availability of a wide range of satellite measurements of environmental variables at different spatial and temporal resolutions, together with an increasing number of digitized and georeferenced species occurrences, has created the opportunity to model and monitor species geographic distribution and richness at regional to continental scales. In this paper, we examine the application of recently developed global data products from satellite observations in modeling the potential distribution of tree species and diversity in the Amazon basin. We use data from satellite sensors, including MODIS, QSCAT, SRTM, and TRMM, to develop different environmental variables related to vegetation, landscape, and climate. These variables are used in a maximum entropy method (Maxent) to model the geographical distribution of five commercial trees and to classify the patterns of tree alpha-diversity in the Amazon basin. Maxent simulations are analyzed using binomial tests of omission rates and the area under the receiver operating characteristics (ROC) curves to examine the model performance, the accuracy of geographic distributions, and the significance of environmental variables for discriminating suitable habitats. To evaluate the importance of satellite data, we used the Maxent jackknife test to quantify the training gains from data layers and to compare the results with model simulations using climate-only data. For all species and tree alpha-diversity, modeled distributions are in agreement with historical data and field observations. The results compare with climate-derived patterns, but provide better spatial resolution and detailed information on the habitat characteristics. Among satellite data products, QSCAT backscatter, representing canopy moisture and roughness, and MODIS leaf area index (LAI) are the most important variables in almost all cases. 
Model simulations suggest that climate and remote sensing results are complementary and that the best distribution patterns can be achieved when the two data sets are combined.
6.
Vidroha Debroy, W. Eric Wong 《Journal of Systems and Software》2011,84(4):587-602
Test set size in terms of the number of test cases is an important consideration when testing software systems. Using too few test cases might result in poor fault detection and using too many might be very expensive and suffer from redundancy. We define the failure rate of a program as the fraction of test cases in an available test pool that result in execution failure on that program. This paper investigates the relationship between failure rates and the number of test cases required to detect the faults. Our experiments based on 11 sets of C programs suggest that an accurate estimation of failure rates of potential fault(s) in a program can provide a reliable estimate of adequate test set size with respect to fault detection and should therefore be one of the factors kept in mind during test set construction. Furthermore, the model proposed herein is fairly robust to incorrect estimations in failure rates and can still provide good predictive quality. Experiments are also performed to observe the relationship between multiple faults present in the same program using the concept of a failure rate. When predicting the effectiveness against a program with multiple faults, results indicate that not knowing the number of faults in the program is not a significant concern, as the predictive quality is typically not affected adversely.
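Under the simplest reading of the failure-rate idea (an illustrative simplification, not the paper's full model), if each test case is drawn independently and fails with probability theta, the detection probability and the test set size needed to reach a target detection probability follow directly:

```python
import math

def detection_probability(theta, n):
    # P(at least one of n independently drawn test cases triggers a failure)
    return 1.0 - (1.0 - theta) ** n

def required_test_set_size(theta, target_prob):
    # smallest n whose detection probability is at least target_prob
    return math.ceil(math.log(1.0 - target_prob) / math.log(1.0 - theta))
```

For example, a fault with failure rate 0.01 needs roughly 300 random test cases to be detected with 95% probability, which illustrates why low failure rates dominate test set size.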
7.
Starting from the Gauss divergence theorem applied to a circular control volume, as put forward by Isshiki (2011) in deriving the MPS-based differential operators, the current work deduces a more general Laplacian model that involves a proposed altered kernel function. The Laplacians of several functions are evaluated, and the accuracies of various MPS Laplacian models in solving the Poisson equation subject to both Dirichlet and Neumann boundary conditions are assessed. For regular grids, the Laplacian model with smaller N is generally more accurate, owing to the reduction of the leading errors due to the higher-order derivatives appearing in the modified equation. For irregular grids, an optimal N value exists that ensures better global accuracy, and this optimal value of N increases for cases employing highly irregular grids. Finally, the accuracies of these MPS Laplacian models are assessed on an incompressible flow problem.
8.
9.
Partially adaptive estimation based on an assumed error distribution has emerged as a popular approach for estimating a regression model with non-normal errors. In this approach, if the assumed distribution is flexible enough to accommodate the shape of the true underlying error distribution, the efficiency of the partially adaptive estimator is expected to be close to that of the maximum likelihood estimator based on knowledge of the true error distribution. In this context, maximum entropy (MaxEnt) distributions have attracted interest, since such distributions have a very flexible functional form and nest most statistical distributions. Therefore, several flexible MaxEnt distributions under certain moment constraints are determined for use within the partially adaptive estimation procedure, and their performance is evaluated relative to well-known estimators. The simulation results indicate that the resulting partially adaptive estimators perform well for non-normal error distributions. In particular, some can be useful in dealing with small sample sizes. In addition, various linear regression applications with non-normal errors are provided.
10.
Andreas Lindemann Christian L. Dunis Paulo Lisboa 《Neural computing & applications》2005,14(3):256-271
Dunis and Williams (Derivatives: use, trading and regulation 8(3):211–239, 2002; Applied quantitative methods for trading and investment. Wiley, Chichester, 2003) have shown the superiority of a Multi-layer perceptron network (MLP), outperforming benchmark models such as a moving average convergence divergence technical model (MACD), an autoregressive moving average model (ARMA) and a logistic regression model (LOGIT) on a Euro/Dollar (EUR/USD) time series. The motivation for this paper is to investigate the use of different neural network architectures. This is done by benchmarking three different neural network designs representing a level estimator, a classification model and a probability distribution predictor. More specifically, we present the Multi-layer perceptron network, the Softmax cross entropy model and the Gaussian mixture model and benchmark their respective performance on the Euro/Dollar (EUR/USD) time series as reported by Dunis and Williams. As it turns out, the Multi-layer perceptron does best when used without confirmation filters and leverage, while the Softmax cross entropy model and the Gaussian mixture model outperform the Multi-layer perceptron when more sophisticated trading strategies and leverage are used. This might be due to the ability of both probability-distribution-based models to successfully identify trades with a high Sharpe ratio.
11.
A new method is proposed for optimally estimating the window size in the m_ary algorithm for large-number modular exponentiation and point multiplication. The method differs from the traditional brute-force search, and also from determining the optimal window size by running the program once for every candidate value in the window's range. It is based on the following analysis: the basic operation of the m_ary modular exponentiation algorithm is large-number multiplication, comprising large-number squaring and general large-number multiplication; the m_ary point-multiplication algorithm in elliptic curve cryptography follows the same steps as the m_ary modular exponentiation algorithm, with point doubling and point addition as its basic operations. From the number of calls to these basic operations, an estimation formula for the optimal window size is derived. The m_ary algorithm was implemented experimentally, and the measured running time with the window size computed from the estimation formula agrees well with the theoretical analysis.
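The abstract does not reproduce the estimation formula itself. As an illustrative cost model (an assumption for exposition, not the paper's exact formula): an m_ary exponentiation with window width w needs about 2^w − 2 precomputation multiplications plus roughly one window multiplication per w exponent bits, while the n squarings are common to every choice of w and can be omitted from the comparison. Minimizing this cost over w gives the window size:

```python
def mary_cost(n_bits, w):
    # 2^w - 2 multiplications to precompute the table of window values,
    # plus roughly one general multiplication per w exponent bits;
    # the n_bits squarings are identical for all w and therefore omitted
    return (2 ** w - 2) + n_bits / w

def optimal_window(n_bits, w_max=16):
    # brute-force the simplified cost model over candidate window widths
    return min(range(1, w_max + 1), key=lambda w: mary_cost(n_bits, w))
```

For a 1024-bit exponent this model picks w = 6, matching the window sizes typically used in practice for exponents of that length.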
12.
In this paper, finite mixtures of the union of the Logarithmic Series and Discrete Rectangular families are proved to be identifiable. The sufficient condition for the identifiability of finite mixtures given by Atienza et al. (Metrika 63:215–221, 2006) is applied instead of the more commonly used Teicher's condition (Teicher, Ann Math Stat 34:1265–1269, 1963). The choice of distributions is made in order to stress the fact that, contrary to Teicher's approach, the Atienza et al. theorem does apply to heterogeneous parametric families.
13.
H. Schneeweiss 《Computational statistics & data analysis》2007,52(2):1143-1148
The problem of consistent estimation in measurement error models in a linear relation with not necessarily normally distributed measurement errors is considered. Three estimators, constructed as different combinations of the estimators arising from direct and inverse regression, are considered. The efficiency properties of these three estimators are derived, and the effect of non-normally distributed measurement errors is analyzed. A Monte-Carlo experiment is conducted to study the performance of these estimators in finite samples.
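The abstract does not specify the three combinations studied; one classical way to combine direct and inverse regression (shown purely as an illustration, not necessarily one of the paper's estimators) is the geometric mean of the two slope estimates:

```python
def slope_estimates(x, y):
    """Direct slope Sxy/Sxx, inverse slope Syy/Sxy, and their geometric mean."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b_dir = sxy / sxx                       # regression of y on x
    b_inv = syy / sxy                       # from the regression of x on y
    sign = 1.0 if sxy >= 0 else -1.0
    b_gm = sign * (b_dir * b_inv) ** 0.5    # geometric-mean combination
    return b_dir, b_inv, b_gm

# On exactly linear data all three estimates coincide with the true slope
b_dir, b_inv, b_gm = slope_estimates([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```

With measurement error in x, the direct slope is attenuated toward zero and the inverse slope inflated away from it, which is why combinations of the two are natural candidates for consistent estimation.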
14.
On the application of cross correlation function to subsample discrete time delay estimation
The cross correlation function (CCF) of signals is an important tool in multi-sensor signal processing. Parabola functions are commonly used as parametric models of the CCF in time delay estimation; the parameters are determined by fitting samples near the maximum of the CCF to a parabola. In this paper we analyze the CCF for stationary processes with exponential auto-correlation functions, with respect to two important types of sensor sampling kernels. Our analysis explains why the parabola is an acceptable model of the CCF in estimating the time delay. More importantly, we demonstrate that the Gaussian function is a better and more robust approximation of the CCF than the parabola. This new approximation leads to higher precision in time delay estimation using the CCF peak-locating strategy. Simulations are also carried out to evaluate the performance of the proposed estimation method for different sample window sizes and signal-to-noise ratios. The new method offers significant improvement over the current parabola-based method.
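The standard three-point subsample peak estimators being compared can be sketched directly: the parabola fit uses the CCF samples at lags −1, 0, +1 around the maximum, and the Gaussian fit applies the same formula to the logarithms of those samples (a sketch of the textbook formulas, not the paper's full method):

```python
import math

def parabolic_peak_offset(ym, y0, yp):
    # subsample offset of the peak from a parabola through samples at -1, 0, +1
    return 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)

def gaussian_peak_offset(ym, y0, yp):
    # same three-point formula applied to log-samples: exact when the CCF
    # near its peak is Gaussian-shaped (samples must be positive)
    return parabolic_peak_offset(math.log(ym), math.log(y0), math.log(yp))

# Sample a Gaussian-shaped CCF peak at true delay 0.3 lags
d, s = 0.3, 0.8
ym, y0, yp = (math.exp(-((x - d) ** 2) / (2 * s * s)) for x in (-1, 0, 1))
```

On this Gaussian-shaped peak the Gaussian estimator recovers the delay exactly, while the parabola fit is biased by several hundredths of a lag, illustrating the precision gap the abstract reports.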
15.
I. S. Lowndes T. Fogarty Z. Y. Yang 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2005,9(7):493-506
This paper presents an application of genetic algorithms (GAs) to the solution of a real-world optimisation problem. The proposed GA method investigates the optimisation of a mine ventilation system to minimise the operational fan power costs by determining the most effective combination of fan operational duties and locations. The paper examines the influence that both the encoding method and the population size have on the performance of the GA. The relative performance of the GA produced by the use of two different encoding methods (a binary and a hybrid code) and various solution population sizes is assessed by performing a two-way ANOVA analysis. It is concluded that the genetic algorithm approach offers both an effective and efficient optimisation method for the selection and evaluation of cost-effective solutions in the planning and operation of mine ventilation systems.
16.
The combination of road accident frequencies before and after a similar change at a given number of sites is considered. Each target site includes different accident types and is linked to a specific control area. At any one target site it is assumed that the total number of accidents recorded is multinomially distributed between the before period and the after period, and also between several mutually exclusive types. The parameter of the distribution depends on the different accident risks in the control area linked to each site as well as on the average effect of the change. A method of estimating the average effect and the accident risks in the control areas simultaneously is suggested. Simulated accident data allow us to study the existence and consistency of the linearly constrained estimator of the unknown vector parameter.
17.
An effective estimation of distribution algorithm for the multi-mode resource-constrained project scheduling problem
In this paper, an estimation of distribution algorithm (EDA) is proposed to solve the multi-mode resource-constrained project scheduling problem (MRCPSP). In the EDA, the individuals are encoded based on the activity-mode list (AML) and decoded by the multi-mode serial schedule generation scheme (MSSGS), and a novel probability model and an updating mechanism are proposed to sample the promising search region well. To further improve the search quality, a multi-mode forward-backward iteration (MFBI) and a multi-mode permutation-based local search method (MPBLS) are proposed and incorporated into the EDA-based search framework to enhance the exploitation ability. Based on a design-of-experiment (DOE) test, suitable parameter combinations are determined and some guidelines for setting the parameters are provided. Simulation results based on a set of benchmarks, and comparisons with some existing algorithms, demonstrate the effectiveness of the proposed EDA.
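The AML encoding, MSSGS decoding, and the paper's probability model are problem-specific, but the core EDA loop — maintain a probability model over solutions, sample a population from it, and shift the model toward the best samples — can be illustrated with a minimal PBIL-style sketch on a toy bit-string problem (all names and parameter values here are illustrative, not the paper's algorithm):

```python
import random

def pbil(fitness, n_bits, pop=50, iters=100, lr=0.1, seed=1):
    """Minimal PBIL-style EDA: returns the learned per-bit probability model."""
    rng = random.Random(seed)
    p = [0.5] * n_bits  # probability that each bit is 1
    for _ in range(iters):
        # sample a population from the current probability model
        population = [[1 if rng.random() < pj else 0 for pj in p]
                      for _ in range(pop)]
        best = max(population, key=fitness)
        # shift the model toward the best individual (the "update mechanism")
        p = [(1 - lr) * pj + lr * bj for pj, bj in zip(p, best)]
    return p

# On the onemax problem (maximize the number of 1-bits) the model should
# concentrate near all-ones
p = pbil(fitness=sum, n_bits=8)
```

In the paper's setting the model is over activity-mode lists rather than bits, and the sampled individuals are repaired and improved by MFBI and MPBLS before the model update, but the sample-select-update cycle is the same.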
18.
This article notes that it is now practical to use the method of enumeration to analyse the performance of estimators and hypothesis tests of fully parametric binary data models. The general method is presented and then employed to investigate the power performance of a common misspecification test for the Probit model. The advantages, disadvantages and limitations of enumeration compared with standard Monte Carlo simulation are then discussed. Finally, an example from experimental economics is used to demonstrate that the methodology can also be used in small empirical studies.
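The enumeration method replaces Monte Carlo sampling with an exact sum over all 2^n outcome vectors of the binary model, weighting each by its probability under the model. A minimal sketch (the statistic and probabilities are illustrative placeholders for whatever estimator or test is being analysed):

```python
from itertools import product

def enumerate_binary(p):
    """Yield (outcome, probability) over all 2^n binary outcome vectors,
    given per-observation success probabilities p."""
    for y in product((0, 1), repeat=len(p)):
        prob = 1.0
        for yi, pi in zip(y, p):
            prob *= pi if yi else (1.0 - pi)
        yield y, prob

def exact_expectation(p, stat):
    # exact finite-sample expectation of a statistic, no simulation error
    return sum(prob * stat(y) for y, prob in enumerate_binary(p))

result = exact_expectation([0.2, 0.5, 0.7], stat=sum)
```

The 2^n sum is exact but grows exponentially, which is precisely the trade-off against Monte Carlo simulation that the article discusses: feasible, and free of simulation noise, only for the small samples typical of small empirical studies.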
19.
Our interest in this paper is in the choice of spatial and categorical scale, and their interaction, in creating classifications of land cover from remotely sensed measurements. We note that in discussing categorical scale the concept of spatial scale naturally arises, and in discussing spatial scale the issue of aggregation of measurements must be considered. Therefore, working towards the ultimate goal of producing multiscale, multigranular characterizations of land cover, we address here, successively and cumulatively, the topics of (1) aggregation of measurements across multiple scales, (2) adaptive choice of spatial scale, and (3) adaptive choice of categorical scale jointly with spatial scale. We show that the use of statistical finite mixture models with groups of original pixel-scale measurements, at successive spatial scales, offers improved pixel-wise classification accuracy compared to the commonly used technique of label aggregation. We then show how a statistical model selection strategy may be used with the finite mixture models to provide a data-adaptive choice of spatial scale, varying by location (i.e., multiscale), from which classifications at least as accurate as those of any single spatial scale may be achieved. Finally, we extend this paradigm to allow for jointly adaptive selection of spatial and categorical scale. Our emphasis throughout is on the empirical quantification of the role of the various elements above, and a comparison of their performance with standard methods, using various artificial landscapes. The methods proposed in this paper should be useful for a variety of scale-related land cover classification tasks.
20.
A hybrid estimation of distribution algorithm for solving the resource-constrained project scheduling problem
In this paper, a hybrid estimation of distribution algorithm (HEDA) is proposed to solve the resource-constrained project scheduling problem (RCPSP). In the HEDA, the individuals are encoded based on the extended active list (EAL) and decoded by the serial schedule generation scheme (SGS), and a novel probability model updating mechanism is proposed to sample the promising search region well. To further improve the search quality, a forward-backward iteration (FBI) and a permutation-based local search method (PBLS) are incorporated into the EDA-based search to enhance the exploitation ability. Simulation results based on benchmarks, and comparisons with some existing algorithms, demonstrate the effectiveness of the proposed HEDA.