Similar Documents
20 similar documents found.
1.
Designing gaits and corresponding control policies is a key challenge in robot locomotion. Even with a viable controller parametrization, finding near-optimal parameters can be daunting. Typically, this kind of parameter optimization requires specific expert knowledge and extensive robot experiments. Automatic black-box gait optimization methods greatly reduce the need for human expertise and time-consuming design processes. Many different approaches for automatic gait optimization have been suggested to date. However, no extensive comparison among them has yet been performed. In this article, we thoroughly discuss multiple automatic optimization methods in the context of gait optimization. We extensively evaluate Bayesian optimization, a model-based approach to black-box optimization under uncertainty, on both simulated problems and real robots. This evaluation demonstrates that Bayesian optimization is particularly suited for robotic applications, where it is crucial to find a good set of gait parameters in a small number of experiments.
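To make the approach concrete, here is a minimal sketch of such a Bayesian-optimization loop in Python, using scikit-learn's Gaussian process regressor and a lower-confidence-bound acquisition rule. The `gait_cost` function is a synthetic stand-in for a real robot experiment, and the parameter ranges are invented for illustration.

```python
# Minimal Bayesian-optimization sketch for gait-parameter tuning.
# `gait_cost` is a hypothetical stand-in for one robot experiment.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gait_cost(x):
    return np.sin(3 * x[0]) + 0.5 * (x[1] - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 2))     # initial random gait parameters
y = np.array([gait_cost(x) for x in X])

for _ in range(20):                    # each iteration = one robot trial
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(500, 2))
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - 1.96 * sd)]   # lower confidence bound
    X = np.vstack([X, x_next])
    y = np.append(y, gait_cost(x_next))

print("best gait parameters:", X[np.argmin(y)], "cost:", y.min())
```

Each loop iteration corresponds to one physical trial, which is exactly why the sample efficiency of Bayesian optimization matters in this setting.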

2.
Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. When used for regression fitting, NN models can effectively represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a black-box model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. A regularization strategy based on principal component analysis is proposed to suppress multicollinearities and make these Jacobians robust and physically meaningful.
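The Monte Carlo procedure for assessing Jacobian robustness can be illustrated on a toy network: draw weight samples from an assumed Gaussian posterior around the trained weights and re-evaluate the input–output Jacobian for each draw. The network size, input, and posterior width below are invented for the example.

```python
# Monte Carlo estimate of the spread of a network's Jacobian when the
# weights themselves are uncertain (toy one-hidden-layer tanh network).
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(1, 4))  # "trained" weights
sigma_w = 0.05          # assumed posterior std of the weights
x = np.array([0.2, -0.1, 0.5])

def jacobian(W1, W2, x):
    h = np.tanh(W1 @ x)
    return (W2 * (1 - h ** 2)) @ W1    # dy/dx for y = W2 tanh(W1 x)

samples = np.array([
    jacobian(W1 + sigma_w * rng.normal(size=W1.shape),
             W2 + sigma_w * rng.normal(size=W2.shape), x)
    for _ in range(1000)
])
print("Jacobian mean:", samples.mean(axis=0))
print("Jacobian std :", samples.std(axis=0))   # robustness indicator
```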

3.
Decision support systems (DSSs) are increasingly being used in water management for the evaluation of impacts of policy measures under different scenarios. The exact impacts are generally unknown and surrounded by considerable uncertainty. It may therefore be difficult to make a selection of measures relevant for a particular water management problem. In order to support policy makers in making a strategic selection between different measures in a DSS while taking uncertainty into account, a methodology for the ranking of measures has been developed. The methodology has been applied to a pilot DSS for flood control in the Red River basin in Vietnam and China. The decision variable is the total flood damage, and the possible flood-reducing measures are dike heightening, reforestation, and the construction of a retention basin. The methodology consists of a Monte Carlo uncertainty analysis employing Latin Hypercube Sampling and a ranking procedure based on the significance of the difference between output distributions for different measures. The mean flood damage in the base situation is about 2.2 billion US$ for the year 1996, with a standard deviation due to parameter uncertainty of about 1 billion US$. Selected applications of the measures reforestation, dike heightening, and the construction of a retention basin reduce the flood damage by about 5, 55, and 300 million US$, respectively. The construction of a retention basin significantly reduces flood damage in the Red River basin, while dike heightening and reforestation reduce flood damage, but not significantly.
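A sketch of the Monte Carlo step with Latin Hypercube Sampling, using SciPy's `qmc` module; the `flood_damage` function and the parameter ranges are placeholders, not the DSS's actual damage equations.

```python
# Latin Hypercube sampling of uncertain parameters feeding a toy
# flood-damage model, mirroring the uncertainty analysis described above.
import numpy as np
from scipy.stats import qmc

def flood_damage(rain, dike, forest):      # placeholder damage model
    return np.maximum(0.0, 3.0 * rain - 0.8 * dike - 0.05 * forest)

sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n=1000)                 # stratified sample in [0,1)^3
# scale to (hypothetical) physical ranges: rainfall, dike height, forest cover
p = qmc.scale(u, l_bounds=[0.5, 0.0, 0.0], u_bounds=[1.5, 1.0, 30.0])

damage = flood_damage(p[:, 0], p[:, 1], p[:, 2])
print(f"mean damage {damage.mean():.2f}, std {damage.std():.2f} (billion US$)")
```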

4.
The increasing trend towards delegating tasks to autonomous artificial agents in safety-critical socio-technical systems makes monitoring an action selection policy of paramount importance. Agent behavior monitoring may profit from a stochastic specification of an optimal policy under uncertainty. A probabilistic monitoring approach is proposed to assess whether an agent's behavior (or policy) respects its specification. The desired policy is modeled by a prior distribution for state transitions in an optimally controlled stochastic process. Bayesian surprise is defined as the Kullback–Leibler divergence between the state transition distribution for the observed behavior and the distribution for optimal action selection. To provide a sensitive on-line estimation of Bayesian surprise with small samples, twin Gaussian processes are used. Timely detection of deviant behavior or anomalies in an artificial pancreas highlights the sensitivity of Bayesian surprise to meaningful discrepancies from the stochastic optimal policy in the presence of excessive glycemic variability, sensor errors, controller ill-tuning, and infusion-pump malfunctioning. To reject outliers and leave out redundant information, on-line sparsification of data streams is proposed.
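For Gaussian transition models, Bayesian surprise as defined above has a closed form; the sketch below computes the KL divergence between an observed transition distribution and the optimal-policy distribution, with all means and standard deviations invented for illustration.

```python
# Bayesian surprise as the KL divergence between the observed transition
# distribution and the optimal-policy transition distribution (Gaussian case).
import numpy as np

def kl_gaussian(mu_p, s_p, mu_q, s_q):
    """KL(P || Q) for univariate Gaussians."""
    return np.log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 0.5

mu_opt, s_opt = 100.0, 10.0     # transitions expected under the optimal policy
mu_obs, s_obs = 140.0, 25.0     # transitions estimated from observed behavior

surprise = kl_gaussian(mu_obs, s_obs, mu_opt, s_opt)
print(f"Bayesian surprise: {surprise:.3f} nats")   # large => deviant behavior
```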

5.
6.
The advantage of multi-protocol label switching (MPLS) is its capability to route packets through explicit paths, but the nodes along those paths may be attacked by an adversary. To address this problem, this paper proposes a novel routing mechanism for MPLS networks under adversarial uncertainty that draws on techniques from artificial intelligence. First, initial label-switched paths (LSPs) are found using the A* algorithm. Second, during data transmission, the transmission path is switched in a timely manner using a non-monotonic reasoning mechanism. Compared with traditional routing mechanisms, experimental results show that this mechanism markedly improves the security of data transmission in MPLS networks.
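A minimal A* search of the kind used for the initial LSP computation; the topology, link costs, and heuristic below are purely illustrative.

```python
# A* search over a toy MPLS topology to find an initial label-switched
# path (LSP); graph, costs, and heuristic are invented for the example.
import heapq

graph = {'A': {'B': 1, 'C': 4}, 'B': {'C': 1, 'D': 5},
         'C': {'D': 1}, 'D': {}}
h = {'A': 2, 'B': 1, 'C': 1, 'D': 0}     # admissible heuristic to 'D'

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            heapq.heappush(frontier, (g + w + h[nxt], g + w, nxt, path + [nxt]))
    return None, float('inf')

print(a_star('A', 'D'))   # (['A', 'B', 'C', 'D'], 3)
```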

7.
Neural Computing and Applications - Artificial intelligence systems are becoming ubiquitous in everyday life as well as in high-risk environments, such as autonomous driving, medical treatment, and...

8.
The analysis of network effects in technology-based networks continues to be of significant managerial importance in e-commerce and traditional IS operations. Competitive strategy, economics and IS researchers share this interest, and have been exploring technology adoption, development and product launch contexts where understanding the issues is critical. This article examines settings involving countervailing and complementary network effects, which act as drivers of business value at several levels of analysis: the industry or market level, the firm or process level, the individual or product level, and the technology level. It leverages real options analysis for managerial decision-making under uncertainty across these contexts. We also identify a set of real options—compatibility, sponsorship and ownership options—which are unique to these settings, and which provide a template for managerial thinking and analysis when it is possible to delay an investment decision. We employ a hybrid jump-diffusion process to model countervailing and complementary network effects from the perspective of a user or a firm joining a network. We also do this from the perspective of a network developer. Our analysis shows that when countervailing and complementary network effects occur in the same network technology context, they give rise to real option value effects that may be used to control or modify the valuation trajectory of a network technology. The option value of waiting in these contexts jumps when the related business environment experiences shocks. Further, we find that the functional relationship between network value and the option value is not linear, and that taking a risk premium into account may not always result in a risk-neutral investment. We also provide a managerial decision-making template through the different kinds of deferral options that we identify for this IT analysis context.
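A minimal simulation of a jump-diffusion value process of the kind described above, where Poisson-timed shocks are superimposed on geometric Brownian motion; the drift, volatility, and jump parameters are assumed for illustration.

```python
# Simulation of a jump-diffusion process for network value: diffusion
# captures smooth growth, Poisson jumps capture environment shocks.
import numpy as np

rng = np.random.default_rng(2)
T, n, paths = 1.0, 252, 5000
dt = T / n
mu, sigma = 0.08, 0.25                     # drift and diffusion vol (assumed)
lam, jump_mu, jump_s = 2.0, -0.05, 0.10    # jump intensity and jump sizes

V = np.full(paths, 100.0)                  # initial network value
for _ in range(n):
    dW = rng.normal(0, np.sqrt(dt), paths)
    jumps = rng.poisson(lam * dt, paths) * rng.normal(jump_mu, jump_s, paths)
    V *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * dW + jumps)

print(f"mean terminal value {V.mean():.1f}, 5% quantile {np.quantile(V, .05):.1f}")
```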

9.
Bayesian networks (BNs) have gained increasing attention in recent years. One key issue in Bayesian networks is parameter learning. When training data is incomplete or sparse, or when multiple hidden nodes exist, learning parameters in Bayesian networks becomes extremely difficult. Under these circumstances, the learning algorithms are required to operate in a high-dimensional search space and can easily get trapped in one of numerous local maxima. This paper presents a learning algorithm that incorporates domain knowledge into the learning to regularize the otherwise ill-posed problem, limit the search space, and avoid local optima. Unlike conventional approaches, which typically exploit quantitative domain knowledge such as prior probability distributions, our method systematically incorporates qualitative constraints on some of the parameters into the learning process. Specifically, the problem is formulated as a constrained optimization problem, where the objective function is a combination of the likelihood function and penalty functions constructed from the qualitative domain knowledge. A gradient-descent procedure is then systematically integrated with the E-step and M-step of the EM algorithm to estimate the parameters iteratively until convergence. Experiments with both synthetic data and real data for facial action recognition show that our algorithm improves the accuracy of the learned BN parameters significantly over the conventional EM algorithm.
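The penalized-likelihood idea can be sketched on a single pair of CPT parameters with the qualitative constraint theta1 >= theta2: the likelihood gradient is combined with a quadratic penalty that activates only when the constraint is violated. The counts and penalty weight below are invented; a full implementation would embed this step inside the EM iterations.

```python
# Penalized M-step sketch: maximize the likelihood of two Bernoulli
# parameters subject to theta1 >= theta2, via gradient ascent.
import numpy as np

counts = {'n1': 100, 'k1': 55, 'n2': 100, 'k2': 70}  # toy sufficient statistics
w = 50.0                                             # penalty weight (assumed)

def objective_grad(t1, t2):
    g1 = counts['k1'] / t1 - (counts['n1'] - counts['k1']) / (1 - t1)
    g2 = counts['k2'] / t2 - (counts['n2'] - counts['k2']) / (1 - t2)
    if t2 > t1:                    # constraint violated: quadratic penalty
        g1 += 2 * w * (t2 - t1)
        g2 -= 2 * w * (t2 - t1)
    return g1, g2

t1, t2 = 0.5, 0.5
for _ in range(2000):
    g1, g2 = objective_grad(t1, t2)
    t1 = np.clip(t1 + 1e-4 * g1, 1e-3, 1 - 1e-3)
    t2 = np.clip(t2 + 1e-4 * g2, 1e-3, 1 - 1e-3)

print(f"theta1={t1:.3f}, theta2={t2:.3f}")   # estimates pulled toward t1 >= t2
```

Without the penalty, the maximum-likelihood estimates would be 0.55 and 0.70, violating the constraint; the penalty pulls the two estimates together.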

10.
A large number of distance metrics have been proposed to measure the difference between two instances. Among these metrics, the Short and Fukunaga metric (SFM) and the minimum risk metric (MRM) are two probability-based metrics widely used to find a reasonable distance between each pair of instances with nominal attributes only. For simplicity, existing works use naive Bayesian (NB) classifiers to estimate the class membership probabilities in SFM and MRM. However, it has been shown that NB classifiers are poor class probability estimators. To improve the classification performance of NB classifiers, many augmented NB classifiers have been proposed. In this paper, we study the class probability estimation performance of these augmented NB classifiers and then use them to estimate the class membership probabilities in SFM and MRM. Experimental results on a large number of University of California, Irvine (UCI) datasets show that using these augmented NB classifiers to estimate the class membership probabilities in SFM and MRM can significantly enhance their generalisation ability.
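A hedged sketch of a probability-based distance in the spirit of MRM: the probability that two instances carry different class labels, computed from the class-membership estimates P(c|x) that an (augmented) NB classifier would supply. The exact definitions of SFM and MRM vary in the literature; the probabilities here are made up for illustration.

```python
# A risk-style, probability-based distance between two instances:
# the chance that x and y belong to different classes, given the
# class-membership estimates produced by any probabilistic classifier.
import numpy as np

def prob_distance(p_x, p_y):
    """Probability that x and y carry different class labels."""
    return 1.0 - np.dot(p_x, p_y)

p_x = np.array([0.7, 0.2, 0.1])   # P(c|x) from the classifier (invented)
p_y = np.array([0.6, 0.3, 0.1])   # P(c|y)
print(f"distance(x, y) = {prob_distance(p_x, p_y):.3f}")
```

Better class-probability estimates directly sharpen such a distance, which is the article's motivation for plugging augmented NB classifiers into SFM and MRM.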

11.
The need for error modeling, multisensor fusion, and robust algorithms is becoming increasingly recognized in computer vision. Bayesian modeling is a powerful, practical, and general framework for meeting these requirements. This article develops a Bayesian model for describing and manipulating the dense fields, such as depth maps, associated with low-level computer vision. Our model consists of three components: a prior model, a sensor model, and a posterior model. The prior model captures a priori information about the structure of the field. We construct this model using the smoothness constraints from regularization to define a Markov Random Field. The sensor model describes the behavior and noise characteristics of our measurement system. We develop a number of sensor models for both sparse and dense measurements. The posterior model combines the information from the prior and sensor models using Bayes' rule. We show how to compute optimal estimates from the posterior model and also how to compute the uncertainty (variance) in these estimates. To demonstrate the utility of our Bayesian framework, we present three examples of its application to real vision problems. The first application is the on-line extraction of depth from motion. Using a two-dimensional generalization of the Kalman filter, we develop an incremental algorithm that provides a dense on-line estimate of depth whose accuracy improves over time. In the second application, we use a Bayesian model to determine observer motion from sparse depth (range) measurements. In the third application, we use the Bayesian interpretation of regularization to choose the optimal smoothing parameter for interpolation. The uncertainty modeling techniques that we develop, and the utility of these techniques in various applications, support our claim that Bayesian modeling is a powerful and practical framework for low-level vision.
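The incremental depth-from-motion idea reduces, for a single pixel, to a scalar Kalman update in which each new measurement refines both the depth estimate and its uncertainty; the numbers below are illustrative.

```python
# Incremental (Kalman-style) fusion of noisy depth measurements for one
# pixel: each new frame refines the estimate and shrinks its variance.
depth, var = 5.0, 4.0              # prior depth estimate and variance
measurements = [(5.8, 1.0), (5.2, 1.0), (5.5, 0.5)]  # (z, measurement var)

for z, r in measurements:
    k = var / (var + r)            # Kalman gain
    depth += k * (z - depth)       # fuse the measurement into the estimate
    var *= (1 - k)                 # posterior variance always decreases
    print(f"depth={depth:.3f}, var={var:.3f}")
```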

12.
Hybrid Bayesian Network Structure Learning for Causal Analysis
Current approaches to hybrid Bayesian network structure learning mainly combine an extended entropy-based discretization method with score-and-search, which is inefficient and unreliable and tends to get trapped in locally optimal structures. To address these problems, a new iterative learning method for hybrid Bayesian network structures is proposed. In each iteration, mixed data are clustered based on the parent-node structure and Gibbs sampling, which discretizes the continuous variables; the Bayesian network structure is then optimized and adjusted, so that the sequence of network structures gradually stabilizes. This avoids the main problems introduced by extended entropy discretization and score-and-search.

13.
In this paper, the classical problem of supply chain network design is reconsidered to emphasize the role of contracts in uncertain environments. The supply chain addressed consists of four layers—suppliers, manufacturers, warehouses, and customers—acting within a single period. The single owner of the manufacturing plants signs a contract with each of the suppliers to satisfy demand from downstream. Available contracts consist of long-term and option contracts, and unmet demand is satisfied by purchasing from the spot market. In this supply chain, customer demand, supplier capacity, plants and warehouses, transportation costs, and spot prices are uncertain. Two models are proposed: a risk-neutral two-stage stochastic model and a risk-averse model that considers risk measures. A solution strategy based on sample average approximation is then proposed to handle large-scale problems. Extensive computational studies prove the important role of contracts in the design process, especially a portfolio of contracts. For instance, we show that a long-term contract alone has an impact similar to having no contract, and that an option contract alone gives inferior results compared to a combination of option and long-term contracts. We also show that the proposed solution methodology is able to obtain good-quality solutions for large-scale problems.
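A sample-average-approximation sketch for one first-stage decision (the long-term contract quantity) evaluated against sampled demand and spot-price scenarios; all distributions and prices are hypothetical, not the paper's model.

```python
# Sample-average-approximation sketch: choose a long-term contract
# quantity against sampled demand/spot-price scenarios.
import numpy as np

rng = np.random.default_rng(3)
N = 5000
demand = rng.normal(100, 25, N).clip(0)   # uncertain demand scenarios
spot = rng.lognormal(np.log(12), 0.3, N)  # uncertain spot price
c_long = 10.0                             # long-term contract unit price

def avg_cost(q):                          # SAA objective for quantity q
    shortfall = np.maximum(demand - q, 0) # unmet demand bought on the spot
    return c_long * q + np.mean(spot * shortfall)

qs = np.arange(0, 201, 5)
best = min(qs, key=avg_cost)
print(f"best contract quantity ~ {best}, expected cost ~ {avg_cost(best):.1f}")
```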

14.
Neural network classification: a Bayesian interpretation
The relationship between minimizing a mean squared error and finding the optimal Bayesian classifier is reviewed. This provides a theoretical interpretation for the process by which neural networks are used in classification. A number of confidence measures are proposed to evaluate the performance of the neural network classifier within a statistical framework.
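The core result—that the minimizer of the mean squared error against 0/1 targets is the posterior P(class | x)—can be checked empirically; the toy posterior below is invented for the demonstration.

```python
# Demonstration that minimizing squared error against 0/1 targets recovers
# P(class=1 | x): the per-x least-squares fit is the class frequency,
# which converges to the true posterior.
import numpy as np

rng = np.random.default_rng(4)
true_posterior = {0: 0.2, 1: 0.7}            # P(y=1 | x) for two inputs
x = rng.integers(0, 2, 100_000)
y = (rng.random(x.size) < np.vectorize(true_posterior.get)(x)).astype(float)

for xi in (0, 1):
    f_hat = y[x == xi].mean()                # the MSE-optimal prediction at xi
    print(f"x={xi}: fitted {f_hat:.3f} vs true {true_posterior[xi]}")
```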

15.
Uncertainties exist in every aspect of a collaborative multidisciplinary design process. These uncertainties have a great influence on design negotiations between various disciplines and may force designers to make conservative decisions. In this paper, a novel collaborative robust optimization (CRO) method based on a constraints network under uncertainty is presented. A generalized dynamic constraints network (GDCN) is developed for the analysis and management of uncertainties, and to ensure parameter consistency in the collaborative design process. Given the feasible consistent parameter region, the CRO problem is formulated as a multi-criteria optimization problem, which simultaneously accounts for both objective robustness and the feasibility robustness of the constraints. GDCN-based CRO thus combines dynamic consistency management of the design parameters with robust optimization, assuring product reliability and quality robustness. The efficiency of the proposed method is demonstrated on the design of the crank and connecting rod of a V6 engine.
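A generic robust-optimization sketch in the spirit of the CRO formulation: the score trades off mean performance against its spread (objective robustness) and penalizes designs whose constraint is violated within the uncertainty band (feasibility robustness). The functions and uncertainty model below are invented, not the paper's GDCN machinery.

```python
# Robust optimization sketch: minimize mean + k*std of the objective
# while requiring the constraint to hold across the uncertainty band.
import numpy as np

rng = np.random.default_rng(7)

def f(d, p):                      # performance: design d, uncertain param p
    return (d - 2.0) ** 2 + p * d

def g(d, p):                      # constraint g <= 0 must hold robustly
    return d + p - 3.0

def robust_score(d, n=2000, k=2.0):
    p = rng.normal(0.0, 0.2, n)   # assumed uncertainty in the parameter
    fv, gv = f(d, p), g(d, p)
    penalty = max(0.0, gv.mean() + k * gv.std())    # feasibility robustness
    return fv.mean() + k * fv.std() + 1e3 * penalty

ds = np.linspace(0.0, 3.0, 61)
best = min(ds, key=robust_score)
print(f"robust optimum d ~ {best:.2f}")
```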

16.
17.
Jing Yang, Lian Li, Aiguo Wang, 《Knowledge》 2011, 24(7): 963-976
A new algorithm, the PCB (partial correlation-based) algorithm, is presented for Bayesian network structure learning. The algorithm effectively combines ideas from local learning with partial correlation techniques. It reconstructs the skeleton of a Bayesian network based on partial correlation and then performs a greedy hill-climbing search to orient the edges. Specifically, we make three contributions. First, we prove that when datasets are generated by a linear SEM (simultaneous equation model) with uncorrelated errors, subject to arbitrary distributional disturbances, partial correlation can be used as the criterion for the conditional independence (CI) test. Second, we perform a series of experiments to find the best threshold value for the partial correlation. Finally, we show how partial correlation can be used in Bayesian network structure learning under a linear SEM. The effectiveness of the method is compared with current state-of-the-art methods on eight networks. A simulation shows that the PCB algorithm outperforms existing algorithms in both accuracy and run time.
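Under a linear SEM, the partial correlation used as the CI-test criterion can be read off the inverse covariance (precision) matrix; the synthetic data below make x and y dependent only through z, so their partial correlation given z should vanish.

```python
# Partial correlation from the precision matrix — the CI-test statistic
# used by skeleton-recovery algorithms like PCB (synthetic data).
import numpy as np

rng = np.random.default_rng(5)
z = rng.normal(size=2000)
x = 0.8 * z + rng.normal(scale=0.5, size=2000)
y = 0.8 * z + rng.normal(scale=0.5, size=2000)   # x,y linked only via z

P = np.linalg.inv(np.cov(np.vstack([x, y, z])))
pcorr_xy_given_z = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])
print(f"partial corr(x, y | z) = {pcorr_xy_given_z:.3f}")   # close to 0
```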

18.
Evolutionary theory states that stronger genetic characteristics reflect the organism's ability to adapt to its environment and to survive the harsh competition faced by every species. Evolution normally takes millions of generations to assess and measure changes in heredity. Determining the connections which constrain genotypes and lead superior ones to survive is an interesting problem. In order to accelerate this process, we develop an artificial genetic dataset based on an artificial life (AL) environment, genetic expression (ALGAE). ALGAE can provide a useful and unique set of meaningful data, which can not only describe the characteristics of genetic data but also simplify its complexity for later analysis. To explore the hidden dependencies among the variables, Bayesian networks (BNs) are used to analyze genotype data derived from simulated evolutionary processes and provide a graphical model to describe the various connections among genes. There are a number of models available for data analysis, such as artificial neural networks, decision trees, factor analysis, BNs, and so on. Yet BNs have distinct advantages as analytical methods that can discern hidden relationships among variables. Two main approaches, constraint-based and score-based, have been used to learn the BN structure; however, each suits either sparse structures or dense structures. First, we introduce a hybrid algorithm, called the E-algorithm, to complement the benefits and limitations of both approaches to BN structure learning. Testing the E-algorithm against the standardized benchmark dataset ALARM suggests valid and accurate results. BAyesian Network ANAlysis (BANANA) is then developed, which incorporates the E-algorithm to analyze the genetic data from ALGAE. The resulting BN topological structure with conditional probability distributions reveals how survivors adapt during evolution, producing an optimal genetic profile for evolutionary fitness.

19.
For many optimization applications a complicated computational simulation is replaced with a simpler response surface model. These models are built by fitting a limited number of evaluations of the full simulation with a simple function that captures the trends in the evaluated data. In many cases the values of the data at the evaluation points have some uncertainty. This paper uses Bayesian model selection to derive two objective metrics that can be used to determine which response surface model provides the most appropriate representation of the evaluated data given the associated uncertainty. These metrics are shown to be consistent with modelling intuition based on Occam's principle. The uncertainty may be due to numerical error, approximations, uncertain input conditions, or to higher-order effects in the simulation that do not need to be fit by the response surface. Two metrics, Q and G, are derived in this paper. The metric Q assumes that a good estimate of the simulation uncertainty is available. The metric G assumes the uncertainty, although present, is unknown. Applications of these metrics in one and two dimensions are demonstrated.
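As a rough stand-in for the metric Q (which assumes the simulation uncertainty is known), one can score candidate response surfaces by their chi-squared fit plus an Occam penalty per parameter; this BIC-style criterion is an assumption for illustration, not the paper's derivation.

```python
# Comparing response-surface models of different complexity when the
# data noise is known — a BIC-style stand-in for a Q-like metric.
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0, 1, 12)
sigma = 0.05                                  # known simulation uncertainty
y = 1.0 + 0.5 * x - 1.2 * x**2 + rng.normal(0, sigma, x.size)

for degree in (1, 2, 3, 5):
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    chi2 = np.sum((resid / sigma) ** 2)       # data fit given known noise
    score = chi2 + (degree + 1) * np.log(x.size)  # Occam penalty per parameter
    print(f"degree {degree}: score {score:.1f} (lower is better)")
```

Consistent with Occam's principle, the quadratic model (the true form here) should score best: higher degrees fit the noise without earning their extra parameters.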

20.