Similar Articles
20 similar articles retrieved (search time: 46 ms).
1.
《Advanced Robotics》2013,27(4):437-450
This paper presents a methodology for building a high-accuracy environmental map using a mobile robot. The design approach uses low-cost infrared range-finder sensors combined with neural networks. To enhance map quality, the errors arising from the sensors are corrected: the non-linearity error of the sensors is compensated using a backpropagation neural network, and the random error of the readings, including environmental uncertainty, is captured in a probabilistic sensor model. The map is represented in an occupancy grid framework and updated by a Bayesian estimation mechanism. The effectiveness of the proposed method is verified through a series of experiments.
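As a rough illustration of the occupancy-grid update described in this abstract (not the authors' implementation), the sketch below applies the standard log-odds form of the Bayesian cell update with a hypothetical inverse sensor model; the probabilities `p_occ` and `p_free` are assumed values, not taken from the paper.

```python
import numpy as np

# Minimal occupancy-grid sketch: each cell stores the log-odds of being occupied.
# p_occ / p_free are hypothetical inverse-sensor-model probabilities.
class OccupancyGrid:
    def __init__(self, width, height, p_occ=0.7, p_free=0.3):
        self.log_odds = np.zeros((height, width))        # prior p = 0.5 -> log-odds 0
        self.l_occ = np.log(p_occ / (1.0 - p_occ))        # update for "hit" cells
        self.l_free = np.log(p_free / (1.0 - p_free))     # update for "pass-through" cells

    def update_cell(self, row, col, hit):
        """Bayesian update of one cell given a range reading ('hit' = beam endpoint)."""
        self.log_odds[row, col] += self.l_occ if hit else self.l_free

    def probabilities(self):
        """Convert log-odds back to occupancy probabilities."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))

grid = OccupancyGrid(10, 10)
grid.update_cell(5, 5, hit=True)    # cell at the measured range
grid.update_cell(5, 4, hit=False)   # cell the beam passed through
print(grid.probabilities()[5, 4:6])
```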

2.
We present a robust fault diagnosis method based on uncertain multiple input–multiple output (MIMO) linear parameter varying (LPV) parity equations. The fault detection methodology checks whether measurements lie inside the prediction bounds provided by the uncertain MIMO LPV parity equations, and it takes into account the existing couplings between the different measured outputs. Modelling and prediction uncertainty bounds are computed using zonotopes. Also proposed is an identification algorithm that estimates the model parameters and their uncertainty such that all fault-free measured data lie inside the predicted bounds. The fault isolation and estimation algorithm is based on residual fault sensitivity. Finally, two case studies (one based on a water distribution network and the other on a four-tank system) illustrate the effectiveness of the proposed approach.
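The core detection test, "is the measurement inside the prediction bounds?", can be sketched as below. Note this sketch replaces the paper's zonotopic uncertainty set with a simple parameter box (interval) for brevity; regressor and parameter values are hypothetical.

```python
import numpy as np

def predict_bounds(phi, theta_nom, theta_radius):
    """Interval prediction y = phi^T theta with theta in a box [nom - r, nom + r].
    A simplified stand-in for the zonotope-based bounds used in the paper."""
    center = phi @ theta_nom
    spread = np.abs(phi) @ theta_radius   # worst-case deviation over the parameter box
    return center - spread, center + spread

def fault_detected(y_meas, phi, theta_nom, theta_radius):
    """Flag a fault when the measurement falls outside the prediction interval."""
    lo, hi = predict_bounds(phi, theta_nom, theta_radius)
    return not (lo <= y_meas <= hi)

# Hypothetical regressor and parameter estimates (illustrative values only).
phi = np.array([1.0, 0.5])
theta_nom = np.array([2.0, -1.0])
theta_radius = np.array([0.1, 0.05])
print(fault_detected(1.6, phi, theta_nom, theta_radius))   # inside bounds -> False
print(fault_detected(2.5, phi, theta_nom, theta_radius))   # outside bounds -> True
```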

3.
This paper presents a methodology for structural design optimization with multiple objectives, or attributes. The method improves on Pareto-optimization-based methods by quantitatively representing trade-offs between conflicting objectives in a single multi-attribute objective function. Classical utility analysis is first used to determine a multi-attribute evaluation function for a particular structure from the designer's viewpoint; this viewpoint accounts for the attribute trade-offs appropriate for a specific project. Since attributes are controlled only indirectly through the specification of design decision variables, a new objective function is then formulated that expresses design utility directly in terms of the parameters over which the designer has direct control. A one-bay, three-storey steel-frame building example demonstrates the methodology for determining the design configuration with the best combination of cost and drift index.
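A minimal sketch of combining single-attribute utilities into one multi-attribute objective, assuming a simple additive form with linear single-attribute utilities; the weights, cost bounds, and drift-index bounds are illustrative assumptions, not the paper's elicited utility function.

```python
def drift_utility(drift_index, worst=0.005, best=0.001):
    """Single-attribute utility for drift index: 1 at 'best', 0 at 'worst' (assumed linear)."""
    u = (worst - drift_index) / (worst - best)
    return max(0.0, min(1.0, u))

def cost_utility(cost, worst=2.0e6, best=1.0e6):
    """Single-attribute utility for construction cost (assumed linear bounds)."""
    u = (worst - cost) / (worst - best)
    return max(0.0, min(1.0, u))

def multi_attribute_utility(cost, drift_index, w_cost=0.6, w_drift=0.4):
    """Additive multi-attribute utility; the weights encode the designer's trade-off."""
    return w_cost * cost_utility(cost) + w_drift * drift_utility(drift_index)

# Compare two hypothetical frame designs.
print(multi_attribute_utility(cost=1.4e6, drift_index=0.002))
print(multi_attribute_utility(cost=1.2e6, drift_index=0.004))
```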

4.
A copula density is the joint probability density function (PDF) of a random vector with uniform marginals. An approach to bivariate copula density estimation is introduced that is based on maximum penalized likelihood estimation (MPLE) with a total variation (TV) penalty term. The marginal unity and symmetry constraints on the copula density are enforced as linear equality constraints. The TV-MPLE problem subject to linear equality constraints is solved by an augmented Lagrangian and operator-splitting algorithm, which offers an order-of-magnitude improvement in computational efficiency over an unconstrained TV-MPLE method solved by the log-barrier method for the second-order cone program. The regularization parameter is selected in a data-driven manner via K-fold cross-validation (CV). Simulations and a real data application show the effectiveness of the proposed approach. MATLAB code implementing the methodology is available online.
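To make the marginal-unity and symmetry constraints concrete, the sketch below checks them numerically for a copula density evaluated on a grid over the unit square; the independence copula (density identically 1) is used as a trivial stand-in for an estimated density. This is only an illustration of the constraints, not the paper's estimation algorithm.

```python
import numpy as np

# Discretize the unit square and evaluate a copula density at cell midpoints.
n = 200
u = (np.arange(n) + 0.5) / n
U, V = np.meshgrid(u, u, indexing="ij")
c = np.ones_like(U)                  # replace with an estimated density on the same grid

h = 1.0 / n
marginal_u = (c * h).sum(axis=1)     # integrate over v -> should be ~1 for every u
marginal_v = (c * h).sum(axis=0)     # integrate over u -> should be ~1 for every v

print(np.allclose(marginal_u, 1.0), np.allclose(marginal_v, 1.0))   # marginal unity
print(np.allclose(c, c.T))                                          # symmetry c(u,v) = c(v,u)
```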

5.
In part I of this series of articles, the concept of overall system reliability was presented for two applications: reliable estimation of variables for steady-state linear flow processes, and reliable fault detection and diagnosis for any process. In this part, systematic generation of the proposed system-wide reliability expression is discussed. In particular, an approach for generating the system reliability using the sum-of-disjoint-products method is presented. This expression serves as the objective function to be maximized in various constrained optimization formulations for sensor network design, which are also proposed in this work for both applications. A heuristic is proposed to solve the resulting nonlinear integer programming problems. The sensor network design formulations are applied to two benchmark case studies: (1) the Tennessee Eastman process and (2) a steam metering process. Results indicate the utility of the proposed approach in designing optimal sensor networks that maximize system reliability.

6.
One technique used frequently by quality practitioners seeking solutions to multi-response optimization problems is the desirability function approach. The technique involves modeling each characteristic using response surface designs and then transforming the characteristics into a single performance measure. The traditional procedure, however, calls for estimating only the mean response; the variability among the characteristics is not considered. Furthermore, the approach typically relies on second-order polynomials for estimation, which are not always suitable. This paper, in contrast, proposes a methodology that utilizes higher-order estimation techniques and incorporates the concepts of robust design to account for process variability. Several examples are provided to illustrate the effectiveness of the proposed methodology.
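For context, the baseline transformation the abstract refers to is the classical desirability approach: each predicted response is mapped to a [0, 1] desirability and the individual desirabilities are combined by a geometric mean. The sketch below shows that baseline (a Derringer–Suich-style larger-is-better transform) with assumed bounds; it does not include the paper's higher-order estimation or robust-design extensions.

```python
def desirability_larger_is_better(y, low, high, weight=1.0):
    """Map a response to [0, 1]: 0 at or below 'low', 1 at or above 'high'."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** weight

def composite_desirability(desirabilities):
    """Geometric mean of individual desirabilities: one poor response drags down the whole."""
    prod = 1.0
    for d in desirabilities:
        prod *= d
    return prod ** (1.0 / len(desirabilities))

# Two hypothetical responses predicted at a candidate factor setting.
d1 = desirability_larger_is_better(72.0, low=60.0, high=80.0)   # e.g. yield
d2 = desirability_larger_is_better(0.9, low=0.5, high=1.0)      # e.g. strength
print(composite_desirability([d1, d2]))
```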

7.
In this paper, a diagnosis method for hybrid systems based on a parity space approach that considers model uncertainty is proposed. The hybrid diagnoser is composed of modules that carry out the mode recognition and diagnosis tasks and interact with each other, since the diagnosis module adapts to the current hybrid system mode. Moreover, the methodology takes into account the unknown but bounded uncertainty in parameters and additive errors (including noise and discretisation errors) using a passive robust strategy based on the set-membership approach. An adaptive threshold that bounds the effect of model uncertainty on the residuals is generated for residual evaluation using zonotopes, and the parity space approach is used to design a set of residuals for each mode. The proposed fault diagnosis approach for hybrid systems is illustrated on a part of the Barcelona sewer network.

8.
An R package is developed for the Generalized Extreme Value conditional density estimation network (GEVcdn). Parameters in a GEV distribution are specified as a function of covariates using a probabilistic variant of the multilayer perceptron neural network. If the covariate is time or is dependent on time, then the GEVcdn model can be used to perform nonlinear, nonstationary extreme value analysis. Due to the flexibility of the neural network architecture, the model is capable of representing a wide range of nonstationary relationships, including those involving interactions between covariates. Model parameters are estimated by generalized maximum likelihood, an approach that is tailored to the analysis of hydroclimatological extremes. Functions are included to assist in the calculation of parameter uncertainty via bootstrapping.
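To illustrate the idea of covariate-dependent GEV parameters (not the GEVcdn package's API, and simplified to a linear location trend rather than a neural network), here is a hedged Python sketch that fits a GEV whose location varies with a covariate by maximum likelihood; note that scipy's shape parameter `c` is the negative of the usual ξ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)                      # standardized covariate (e.g. time)
y = genextreme.rvs(c=-0.1, loc=10 + 2 * t, scale=1.5, random_state=rng)

def neg_log_lik(params, t, y):
    """GEV negative log-likelihood with location linear in t; scale and shape constant
    (a simple special case of a conditional-density model)."""
    b0, b1, log_scale, xi = params
    loc = b0 + b1 * t
    scale = np.exp(log_scale)                       # keep scale positive
    logpdf = genextreme.logpdf(y, c=-xi, loc=loc, scale=scale)
    return np.inf if not np.all(np.isfinite(logpdf)) else -logpdf.sum()

fit = minimize(neg_log_lik, x0=[np.mean(y), 0.0, 0.0, 0.05],
               args=(t, y), method="Nelder-Mead")
print(fit.x)   # estimated [b0, b1, log(scale), xi]
```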

9.
Classical Bayesian spatial interpolation methods are based on the Gaussian assumption and therefore lead to unreliable results when applied to extreme valued data. Specifically, they give wrong estimates of the prediction uncertainty. Copulas have recently attracted much attention in spatial statistics and are used as a flexible alternative to traditional methods for non-Gaussian spatial modeling and interpolation. We adopt this methodology and show how it can be incorporated in a Bayesian framework by assigning priors to all model parameters. In the absence of simple analytical expressions for the joint posterior distribution we propose a Metropolis-Hastings algorithm to obtain posterior samples. The posterior predictive density is approximated by averaging the plug-in predictive densities. Furthermore, we discuss the deficiencies of the existing spatial copula models with regard to modeling extreme events. It is shown that the non-Gaussian χ2-copula model suffers from the same lack of tail dependence as the Gaussian copula and thus offers no advantage over the latter with respect to modeling extremes. We illustrate the proposed methodology by analyzing a dataset here referred to as the Helicopter dataset, which includes strongly skewed radioactivity measurements in the city of Oranienburg, Germany.
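Since the abstract relies on Metropolis-Hastings for posterior sampling, here is a generic random-walk Metropolis-Hastings sketch. The target used below is a stand-in two-dimensional Gaussian log-posterior just to exercise the sampler; it is not the spatial copula model of the paper.

```python
import numpy as np

def random_walk_mh(log_post, theta0, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis-Hastings: propose theta' = theta + N(0, step^2 I),
    accept with probability min(1, exp(log_post(theta') - log_post(theta)))."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    samples = np.empty((n_iter, theta.size))
    lp = log_post(theta)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

# Stand-in log-posterior (a 2-D standard Gaussian), only to demonstrate the sampler.
log_post = lambda th: -0.5 * np.sum(th ** 2)
draws = random_walk_mh(log_post, theta0=[3.0, -3.0])
print(draws[1000:].mean(axis=0), draws[1000:].std(axis=0))   # roughly [0, 0] and [1, 1]
```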

10.
11.
Conceptual process planning (CPP) is an important technique for assessing manufacturability and estimating the cost of a conceptual design in the early product design stage. This paper presents an approach to developing quality/cost-based conceptual process planning (QCCPP). The approach aims to determine key process resources and estimate manufacturing cost while taking into account the risk cost associated with the process plan, and it can serve as a useful methodology to support decision making during the initial planning stage of the product development cycle. The quality function deployment (QFD) method is used to select process alternatives by incorporating a capability function for process elements called the composite process capability index (CCP). The quality characteristics and process elements from the QFD method are taken as inputs to complete the process failure mode and effects analysis (FMEA) table. To estimate manufacturing cost, the proposed approach deploys the activity-based costing (ABC) method. Then, an extension of the classical FMEA method, called cost-based FMEA, is employed to estimate the cost of the risks associated with the studied process plan. For each resource combination, the output data are gathered in a selection table that supports detailed process planning in order to improve the product quality/cost ratio. A case study is presented to illustrate the approach.
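As a simple illustration of how a risk cost can be aggregated and added to an activity-based manufacturing cost (a hedged arithmetic sketch; the failure modes, probabilities, and costs below are hypothetical and not taken from the paper):

```python
# Illustrative cost-based FMEA aggregation: expected risk cost of a process plan is
# the sum over failure modes of (occurrence probability x failure cost).
failure_modes = [
    {"name": "tool wear out of spec", "probability": 0.02, "cost": 800.0},
    {"name": "fixture misalignment",  "probability": 0.01, "cost": 1500.0},
]

manufacturing_cost = 120.0   # per part, e.g. from activity-based costing (assumed value)
batch_size = 500

risk_cost = sum(m["probability"] * m["cost"] for m in failure_modes)   # per batch
total_cost = manufacturing_cost * batch_size + risk_cost
print(f"risk cost per batch: {risk_cost:.2f}, total cost: {total_cost:.2f}")
```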

12.
Multivariate time series are ubiquitous among a broad array of applications and often include both categorical and continuous series. Further, in many contexts, the continuous variable behaves nonlinearly conditional on a categorical time series. To accommodate the complexity of this structure, we propose a multi-regime smooth transition model where the transition variable is derived from the categorical time series and the degree of smoothness in transitioning between regimes is estimated from the data. The joint model for the continuous and ordinal time series is developed using a Bayesian hierarchical approach and thus, naturally, quantifies different sources of uncertainty. Additionally, we allow a general number of regimes in the smooth transition model and, for estimation, propose an efficient Markov chain Monte Carlo algorithm by blocking the parameters. Moreover, the model can be effectively used to draw inference on the behavior within and between regimes, as well as inference on regime probabilities. In order to demonstrate the frequentist properties of the proposed Bayesian estimators, we present the results of a comprehensive simulation study. Finally, we illustrate the utility of the proposed model through the analysis of two macroeconomic time series.
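To make the smooth-transition idea concrete, the sketch below shows the logistic transition function commonly used in such models and a conditional mean that blends two linear regimes according to it. The transition variable, threshold, smoothness, and regime coefficients are assumed illustrative values, not the paper's specification.

```python
import numpy as np

def transition_weight(s, gamma, c):
    """Logistic transition G(s; gamma, c) in [0, 1]; gamma controls smoothness,
    c is the threshold.  gamma -> infinity recovers an abrupt regime switch."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def two_regime_mean(x, s, beta1, beta2, gamma, c):
    """Conditional mean that blends two linear regimes according to G(s)."""
    g = transition_weight(s, gamma, c)
    return (1.0 - g) * (x @ beta1) + g * (x @ beta2)

# Hypothetical example: s derived from the categorical series (e.g. a state indicator score).
x = np.array([1.0, 0.4])                 # intercept + one lagged value
print(two_regime_mean(x, s=0.2, beta1=np.array([0.1, 0.5]),
                      beta2=np.array([-0.2, 0.9]), gamma=8.0, c=0.5))
```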

13.
We propose a new methodology for designing decentralized random field estimation schemes that takes into account the trade-off between estimation accuracy and the cost of communications. We consider a sensor network in which nodes perform bandwidth-limited two-way communications with other nodes located within a certain range. The in-network processing starts with each node measuring its local variable and sending messages to its immediate neighbors, followed by evaluating its local estimation rule based on the received messages and measurements. Local rule design for this two-stage strategy can be cast as a constrained optimization problem with a Bayesian risk capturing the cost of transmissions and the penalty for estimation errors. A similar problem has been studied previously for decentralized detection. We adopt that framework for estimation; however, the corresponding optimization schemes involve integral operators that, in general, cannot be evaluated exactly. We employ an approximation framework using Monte Carlo methods and obtain an optimization procedure based on particle representations and approximate computations. The procedure operates in a message-passing fashion and generates results for any distributions from which samples can be produced, e.g., the marginals. We demonstrate graceful degradation of the estimation accuracy as communication becomes more costly.

14.
Smooth-CAR mixed models for spatial count data (total citations: 1; self-citations: 0; citations by others: 1)
Penalized splines (P-splines) and individual random effects are used for the analysis of spatial count data. P-splines are represented as mixed models to give a unified approach to the model estimation procedure. First, a model is considered in which the spatial variation is modelled by a two-dimensional P-spline at the centroids of the areas or regions. In addition, individual area effects are incorporated as random effects to account for individual variation among regions. Finally, the model is extended by considering a conditional autoregressive (CAR) structure for the random effects; these are the so-called "Smooth-CAR" models, which aim to separate the large-scale geographical trend from local spatial correlation. The proposed methodology is applied to the analysis of lip cancer incidence rates in Scotland.
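As a schematic (not the paper's exact formulation), the Smooth-CAR structure described above can be written in standard disease-mapping notation, with Poisson counts, a two-dimensional P-spline trend evaluated at the area centroids, and CAR-distributed area effects; the expected counts E_i and the neighbourhood notation are conventional assumptions added here.

```latex
y_i \mid \mu_i \sim \operatorname{Poisson}(E_i\,\mu_i), \qquad
\log \mu_i = f(x_{1i}, x_{2i}) + u_i, \qquad
u_i \mid u_{-i} \sim \mathcal{N}\!\left(\frac{1}{n_i}\sum_{j \sim i} u_j,\ \frac{\sigma_u^2}{n_i}\right),
```

where f is the two-dimensional P-spline (represented as a mixed model), j ~ i indexes the neighbours of area i, and n_i is the number of neighbours.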

15.
When two interventions are randomized to multiple sub-clusters within a whole cluster, accounting for the within-sub-cluster (intra-cluster) and between-sub-cluster (inter-cluster) correlations is needed to produce valid analyses of the effect of the interventions. With the growing interest in copulas and their applications in statistical research, we demonstrate, through applications, how copula functions may be used to account for the correlation among responses across sub-clusters. Copulas with an asymmetric dependence property may prove useful for modeling the relationship between random variables, especially in the clinical, health and environmental sciences, where response data are generally skewed. These functions can be used to study scale-free measures of dependence, and they can serve as a starting point for constructing families of bivariate distributions with a view to simulation. The core contribution of this paper is an alternative approach that uses copulas to estimate the inter-cluster correlation and thereby accurately estimate the treatment effect when the outcome variable is measured on a dichotomous scale. Two data sets are used to illustrate the proposed methodology.

16.
In this paper, we propose a new likelihood-based methodology to represent epistemic uncertainty described by sparse point and/or interval data for input variables in uncertainty analysis and design optimization problems. A worst-case maximum-likelihood-based approach is developed for the representation of epistemic uncertainty, which is able to estimate the distribution parameters of a random variable described by sparse point and/or interval data. This likelihood-based approach is general and can estimate the parameters of any known probability distribution. The likelihood-based representation of epistemic uncertainty is then used within an existing framework for robustness-based design optimization to achieve computational efficiency. The proposed uncertainty representation and design optimization methodologies are illustrated with two numerical examples: a mathematical problem and a real engineering problem.
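A minimal sketch of the underlying likelihood construction for mixed point and interval data, assuming a normal distribution: point observations contribute density terms and interval observations contribute CDF differences. The data values are illustrative, and the paper's worst-case formulation over the intervals is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Sparse epistemic data for one input variable: a few point values plus intervals.
points = np.array([4.8, 5.3, 5.1])                       # illustrative values
intervals = np.array([[4.0, 5.5], [4.5, 6.0]])           # illustrative [lower, upper] pairs

def neg_log_lik(params):
    """Point data contribute pdf terms; interval data contribute CDF(b) - CDF(a) terms."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = norm.logpdf(points, mu, sigma).sum()
    probs = norm.cdf(intervals[:, 1], mu, sigma) - norm.cdf(intervals[:, 0], mu, sigma)
    ll += np.log(np.clip(probs, 1e-300, None)).sum()
    return -ll

fit = minimize(neg_log_lik, x0=[5.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)
```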

17.
The detection and identification of faults in dynamic continuous processes has received considerable recent attention from researchers in academia and industry. In this paper, a canonical variate analysis (CVA)-based sensor fault detection and identification method via variable reconstruction is described. Several previous studies have shown that CVA-based monitoring techniques can effectively detect faults in dynamic processes. Here we define two monitoring indices, in the state and noise spaces, for fault detection and, for sensor fault identification, we propose three variable reconstruction algorithms based on the proposed monitoring indices. The variable reconstruction algorithms are based on the concepts of conditional mean replacement and objective function minimization. The proposed approach is applied to a simulated continuous stirred tank reactor, and the results are compared to those obtained using the traditional dynamic monitoring technique, dynamic principal component analysis (PCA). The results indicate that the proposed methodology is quite effective for monitoring dynamic processes in terms of sensor fault detection and identification.
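The two monitoring indices mentioned above (one in the retained canonical state space, one in its complement) can be sketched roughly as follows. This is a simplified CVA-style construction under assumed lag and state-order values, not the paper's exact algorithm, and the synthetic data and fault are purely illustrative.

```python
import numpy as np

def cva_monitoring_indices(train, test, lag=3, n_states=2):
    """Simplified CVA-style monitoring: stack 'lag' past samples, whiten them,
    take an SVD of the whitened past/future cross-covariance, then compute a T^2
    index in the retained canonical state space and a Q index in its complement."""
    def past_future(data, lag, mean):
        d = data - mean
        past = np.hstack([d[lag - 1 - k: len(d) - 1 - k] for k in range(lag)])
        future = d[lag:]
        return past, future

    mu = train.mean(axis=0)
    P, F = past_future(train, lag, mu)
    n = len(P)
    Spp = P.T @ P / (n - 1)
    Sff = F.T @ F / (n - 1)
    Sfp = F.T @ P / (n - 1)
    A = np.linalg.inv(np.linalg.cholesky(Spp))      # whitening: A Spp A^T = I
    B = np.linalg.inv(np.linalg.cholesky(Sff))
    _, _, Vt = np.linalg.svd(B @ Sfp @ A.T)
    Vr = Vt[:n_states]                               # retained canonical directions

    P_test, _ = past_future(test, lag, mu)
    W = P_test @ A.T                                 # whitened past vectors (rows)
    Z = W @ Vr.T                                     # canonical states
    T2 = np.sum(Z ** 2, axis=1)                      # state-space index
    E = W - Z @ Vr                                   # residual (noise-space) part
    Q = np.sum(E ** 2, axis=1)
    return T2, Q

# Tiny synthetic demonstration with an additive sensor bias in the test data.
rng = np.random.default_rng(0)
train = rng.standard_normal((300, 2))
test = rng.standard_normal((100, 2))
test[50:, 0] += 3.0                                  # sensor bias fault from sample 50 on
T2, Q = cva_monitoring_indices(train, test)
print(T2[:5].round(2), Q[:5].round(2))               # fault-free region
print(T2[-5:].round(2), Q[-5:].round(2))             # faulty region: indices grow
```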

18.
Joint modeling of multiple health related random variables is essential to develop an understanding for the public health consequences of an aging population. This is particularly true for patients suffering from multiple chronic diseases. The contribution is to introduce a novel model for multivariate data where some response variables are discrete and some are continuous. It is based on pair copula constructions (PCCs) and has two major advantages over existing methodology. First, expressing the joint dependence structure in terms of bivariate copulas leads to a computationally advantageous expression for the likelihood function. This makes maximum likelihood estimation feasible for large multidimensional data sets. Second, different and possibly asymmetric bivariate (conditional) marginal distributions are allowed which is necessary to accurately describe the limiting behavior of conditional distributions for mixed discrete and continuous responses. The advantages and the favorable predictive performance of the model are demonstrated using data from the Second Longitudinal Study of Aging (LSOA II).

19.
This paper presents a methodology for groundwater quality monitoring network design. The design takes into account uncertainties in aquifer properties, pollution transport processes, and climate. The methodology utilizes a statistical learning algorithm called relevance vector machines (RVM), a sparse Bayesian framework that can be used to solve regression and classification tasks. Application of the methodology is illustrated using the Eocene Aquifer in the northern part of the West Bank, Palestine. The procedure utilizes a Monte Carlo (MC) simulation process to capture the uncertainties in recharge, hydraulic conductivity, and nitrate reaction processes through the application of a groundwater flow model and a nitrate fate and transport model. This MC modeling approach provides several thousand realizations of the nitrate distribution in the aquifer. Subsets of these realizations are then used to design the monitoring network. This is done by building a best-fit model of the nitrate concentration distribution everywhere in the aquifer for each Monte Carlo subset using RVM. The outputs from the RVM model are the distribution of nitrate concentration everywhere in the aquifer, the uncertainty in the characterization of those concentrations, and the number and locations of "relevance vectors" (RVs). The RVs form the basis of the optimal characterization of nitrate throughout the aquifer and represent the optimal locations of monitoring wells. In this paper, the number of monitoring wells and their locations were chosen based on the performance of the RVM model runs. The results from 100 model runs show the consistency of the model in selecting the number and locations of RVs. After implementing the design, the data collected from the monitoring sites can be used to estimate the nitrate concentration distribution throughout the entire aquifer and to quantify the uncertainty in those estimates.

20.
On Robust H2 Estimation (total citations: 1; self-citations: 0; citations by others: 1)
The problem of state estimation for uncertain systems has attracted recurring interest over the past decade. In this paper, we shall give an overview of some of the recent developments in the area by focusing on robust H2 (Kalman) filtering of uncertain discrete-time systems. Robust H2 estimation is concerned with the design of a fixed estimator for a family of plants under consideration such that the estimation error covariance has a minimal upper bound. The uncertainty under consideration includes norm-bounded uncertainty and polytopic uncertainty. In the finite-horizon case, we shall discuss a parameterized difference Riccati equation approach for systems with norm-bounded uncertainty and pinpoint the differences in state estimation between systems without uncertainty and those with uncertainty. In the infinite-horizon case, we shall deal with both the norm-bounded and polytopic uncertainties using a linear matrix inequality (LMI) approach. In particular, we shall demonstrate how the conservatism of the design can be reduced using a slack-variable technique. We also propose an iterative algorithm to refine a designed estimator. An example is given to compare estimators designed using various techniques.
