Similar Literature
20 similar documents found (search time: 15 ms)
1.
In robust design, it is common to estimate empirical models that relate an output response variable to controllable input variables and uncontrollable noise variables from experimental data. However, when determining the optimal input settings that minimise output variability, parameter uncertainties in noise factors and response models are typically neglected. This article presents an interval robust design approach that takes parameter uncertainties into account through the confidence regions for these unknown parameters. To avoid obtaining an overly conservative design, the worst and best cases of mean squared error are both adopted to build an optimisation approach. The midpoint and radius of the interval are used to measure the location and dispersion performances, respectively. Meanwhile, a data-driven method is applied to obtain the relative weights of the location and dispersion performances in the optimisation approach. A simulation example and a case study using automobile manufacturing data from the dimensional tolerance design process are used to demonstrate the effectiveness of the proposed approach. The proposed approach, which accounts for both sources of uncertainty, is shown to perform better than approaches that neglect them.
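
To make the midpoint/radius idea concrete, here is a minimal sketch under stated assumptions: a hypothetical linear response model with one controllable and one noise variable, coefficient confidence intervals of assumed width, and a coarse vertex check of the confidence box standing in for the paper's full optimisation over the confidence region.

```python
# A coarse sketch, not the paper's algorithm: vertices of the confidence box
# approximate the worst/best-case MSE; the paper optimises over the full region.
import itertools
import numpy as np

# Hypothetical model y = b0 + b1*x + b2*z + b3*x*z, noise z ~ N(0, sigma_z^2)
b_hat = np.array([10.0, 2.0, 1.5, -0.8])    # coefficient point estimates (assumed)
b_hw = np.array([0.5, 0.3, 0.4, 0.2])       # confidence half-widths (assumed)
sigma_z2, target = 1.0, 12.0                # noise variance and response target

def mse(x, b):
    mean = b[0] + b[1] * x                  # mean response at setting x
    dy_dz = b[2] + b[3] * x                 # sensitivity to the noise variable
    return (mean - target) ** 2 + dy_dz ** 2 * sigma_z2

def interval_mse(x):
    vals = [mse(x, b_hat + np.array(s) * b_hw)
            for s in itertools.product([-1.0, 1.0], repeat=4)]
    lo, hi = min(vals), max(vals)
    return 0.5 * (hi + lo), 0.5 * (hi - lo)  # midpoint (location), radius (dispersion)

w = 0.6                                      # relative weight; data-driven in the paper
xs = np.linspace(-2.0, 2.0, 401)
x_best = min(xs, key=lambda x: w * interval_mse(x)[0] + (1 - w) * interval_mse(x)[1])
print(f"chosen input setting: x* = {x_best:.3f}")
```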

2.
We study the two-parameter maximum likelihood estimation (MLE) problem for the Weibull distribution with consideration of interval data. Without interval data, the problem can be solved easily by regular MLE methods because the restricted MLE of the scale parameter β for a given shape parameter α has an analytical form, so α can be solved efficiently from its profile score function by traditional numerical methods. In the presence of interval data, however, the analytical form for the restricted MLE of β does not exist, and directly applying regular MLE methods can be less efficient and effective. To improve efficiency and effectiveness in handling interval data in the MLE problem, a new approach is developed in this paper. The new approach combines the Weibull-to-exponential transformation technique with the equivalent failure and lifetime technique. The concept of equivalence is developed to estimate exponential failure rates from uncertain data, including interval data. Since the definition of equivalent failures and lifetimes follows the EM algorithm, convergence of the failure rate estimation obtained by applying equivalent failures and lifetimes is proved mathematically. The new approach is demonstrated and validated through two published examples, and its performance under different conditions is studied by Monte Carlo simulations. The simulations indicate that the profile score function for α has only one maximum in most cases, a property that enables an efficient search for the optimal value of α.
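
As context for why interval data complicate the likelihood, here is a minimal sketch of direct numerical MLE for a Weibull sample mixing exact and interval-censored observations; the data are hypothetical, and this brute-force optimisation is exactly the less efficient baseline that the paper's equivalent-failure/EM approach improves on.

```python
# Direct MLE sketch for mixed exact/interval Weibull data (hypothetical values).
# Parametrisation: F(t) = 1 - exp(-(t/beta)**alpha).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

exact = np.array([105.0, 240.0, 310.0, 455.0])          # observed failure times
intervals = np.array([[80.0, 150.0], [200.0, 400.0]])   # failures known only to intervals

def neg_loglik(theta):
    alpha, beta = np.exp(theta)              # optimise on log scale for positivity
    ll = weibull_min.logpdf(exact, alpha, scale=beta).sum()
    # Interval observations contribute F(upper) - F(lower)
    ll += np.log(weibull_min.cdf(intervals[:, 1], alpha, scale=beta)
                 - weibull_min.cdf(intervals[:, 0], alpha, scale=beta)).sum()
    return -ll

res = minimize(neg_loglik, x0=np.log([1.0, exact.mean()]), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(f"alpha = {alpha_hat:.3f}, beta = {beta_hat:.3f}")
```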

3.
The results of this paper show that neural networks could be a very promising tool for reliability data analysis. Identifying the underlying distribution of a set of failure data and estimating its distribution parameters are necessary tasks in reliability engineering studies. In general, either a chi-square or a non-parametric goodness-of-fit test is used in the distribution identification process, which includes the pattern interpretation of the failure data histograms. However, those procedures can guarantee neither an accurate distribution identification nor a robust parameter estimation when only small data samples are available. Basically, the graphical approach to distribution fitting is a pattern recognition problem and parameter estimation is a classification problem, areas where neural networks have proved to be a suitable tool. This paper presents an exploratory study of a neural network approach, validated by simulated experiments, for analysing small-sample reliability data. A counter-propagation network is used to classify normal, uniform, exponential and Weibull distributions. A back-propagation network is used for the parameter estimation of a two-parameter Weibull distribution.
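
A rough illustration of the distribution-identification step follows. Counter-propagation networks are not available in common libraries, so a scikit-learn MLP classifier stands in here; the features are the sorted, standardised sample, and all sample sizes are assumptions.

```python
# MLP stand-in for the paper's counter-propagation network (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, m = 20, 500                    # small sample size per set; m training sets per class

def sample(dist):
    if dist == 0: return rng.normal(5.0, 1.0, n)       # normal
    if dist == 1: return rng.uniform(0.0, 1.0, n)      # uniform
    if dist == 2: return rng.exponential(2.0, n)       # exponential
    return 3.0 * rng.weibull(1.8, n)                   # Weibull

def features(x):
    x = np.sort(x)
    return (x - x.mean()) / x.std()   # location/scale-free pattern of the sample

X = np.array([features(sample(c)) for c in range(4) for _ in range(m)])
y = np.repeat(np.arange(4), m)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```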

4.
We develop here a three-stage nonparametric method to estimate the common, group and individual effects in a longitudinal data setting. Our three-stage additive model assumes that the dependence between performance in an audiologic test and time is a sum of three components. One of them is the same for all individuals, the second one corresponds to the group effect and the last one to the individual effects. We estimate these functional components by nonparametric kernel smoothing techniques. We give theoretical results concerning rates of convergence of our estimates. This method is then applied to the data set that motivated the methods proposed here, the speech recognition data from the Iowa Cochlear Implant Project.
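
The staged decomposition can be sketched as follows with a Nadaraya-Watson smoother on synthetic longitudinal data; the bandwidths and group structure are assumptions, not the paper's choices.

```python
# Three-stage additive decomposition via kernel smoothing (synthetic stand-in data).
import numpy as np

def nw_smooth(t_query, t, y, h):
    # Nadaraya-Watson estimator with a Gaussian kernel and bandwidth h
    w = np.exp(-0.5 * ((t_query[:, None] - t[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(1)
t = np.tile(np.linspace(0, 36, 13), 6)             # 6 subjects, 13 visits (months)
group = np.repeat([0, 0, 0, 1, 1, 1], 13)           # two groups
subj = np.repeat(np.arange(6), 13)
y = np.log1p(t) + 0.3 * group * np.sin(t / 6) + rng.normal(0, 0.1, t.size)

common = nw_smooth(t, t, y, h=4.0)                  # stage 1: common effect
r1 = y - common
group_eff = np.zeros_like(y)
for g in (0, 1):                                    # stage 2: group effects
    idx = group == g
    group_eff[idx] = nw_smooth(t[idx], t[idx], r1[idx], h=4.0)
r2 = r1 - group_eff
indiv = np.zeros_like(y)
for s in range(6):                                  # stage 3: individual effects
    idx = subj == s
    indiv[idx] = nw_smooth(t[idx], t[idx], r2[idx], h=6.0)
print(f"residual sd after three stages: {(r2 - indiv).std():.3f}")
```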

5.
Principal Component Analysis (PCA) is a well-known technique, the aim of which is to synthesize large amounts of numerical data by means of a small number of unobserved variables, called components. In this paper, an extension of PCA to deal with interval-valued data is proposed. The method, called Midpoint Radius Principal Component Analysis (MR-PCA), recovers the underlying structure of interval-valued data by using both the midpoint (or center) and the radius (a measure of the interval width) information. In order to analyze how MR-PCA works, the results of a simulation study and two applications on chemical data are presented.
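
A simplified sketch of the midpoint/radius idea follows: run PCA on the midpoints and propagate the radii through the absolute loadings to obtain interval component scores. The toy interval data are hypothetical, and this illustrates the principle rather than the full MR-PCA method.

```python
# Midpoint PCA plus interval-arithmetic propagation of radii (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2)
lower = rng.normal(0, 1, (30, 4))
upper = lower + rng.uniform(0.2, 1.0, (30, 4))
mid, rad = (lower + upper) / 2.0, (upper - lower) / 2.0

midc = mid - mid.mean(axis=0)
_, _, Vt = np.linalg.svd(midc, full_matrices=False)   # PCA of midpoints via SVD
scores_mid = midc @ Vt.T
scores_rad = rad @ np.abs(Vt.T)       # radii propagated through |loadings|
pc1 = np.stack([scores_mid[:, 0] - scores_rad[:, 0],
                scores_mid[:, 0] + scores_rad[:, 0]], axis=1)
print("first observation's interval score on PC1:", pc1[0].round(3))
```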

6.
Traditional traffic accident simulation selects simulation parameters empirically and adjusts the model repeatedly through parameter correction, so the resulting simulations can deviate substantially from the real accident process; in particular, it is difficult to determine how the interval distributions of the input parameters affect the simulation results, leaving the results short on credibility and persuasiveness. This paper investigates reliability analysis of accident simulation results by integrating traffic accident simulation with optimisation computation, obtaining confidence-interval distributions for the input parameters. For a real vehicle-pedestrian collision case, the confidence-interval analysis of the input parameters shows that, within each confidence interval, the reliability of the accident reconstruction results and the distributions of the output parameters conform to the laws of the collision process and faithfully reflect how changes in the input parameters affect the reconstruction results.
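
The integration idea can be sketched as Monte Carlo propagation: sample the uncertain inputs, run the reconstruction model, and read confidence intervals off the output distribution. The throw-distance relation below is a deliberately simplified, hypothetical stand-in for a real accident simulation.

```python
# Monte Carlo propagation of input-parameter uncertainty (illustrative model only).
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
mu = rng.uniform(0.35, 0.55, n)        # road friction coefficient (uncertain input)
throw = rng.normal(18.0, 1.5, n)       # pedestrian throw distance, m (uncertain input)

# Hypothetical reconstruction relation: v = sqrt(2 * mu * g * d)
v = np.sqrt(2.0 * mu * 9.81 * throw) * 3.6   # impact speed, km/h

lo, hi = np.percentile(v, [2.5, 97.5])
print(f"95% interval for reconstructed impact speed: [{lo:.1f}, {hi:.1f}] km/h")
```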

7.
Hysteresis parameter identification with limited experimental data
The Preisach operator and its variants have been successfully used in the modeling of hysteresis observed in ferromagnetic, magnetostrictive, and piezoelectric materials. However, in designing with these "smart" materials, one has to determine a density function for the Preisach operator by using the input-output behavior of the material at hand. In this paper, we describe a method for numerically determining an approximation of the density function when there is not enough experimental data to uniquely solve for the actual density function by Mayergoyz's method. We present theoretical justification for our method by establishing links to regularization methods for ill-posed problems. We also present numerical results where we estimate an approximate density function from data published in the literature for a magnetostrictive actuator and two electroactive polymers.
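
A minimal sketch of the regularisation idea the authors link to: with the Preisach density discretised on a grid, identification becomes an ill-posed linear system, which a Tikhonov-regularised nonnegative least-squares solve stabilises. The matrices below are random stand-ins, not actual Preisach kernels.

```python
# Tikhonov-regularised, nonnegative least squares for an ill-posed A @ rho = b.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(4)
m, p = 15, 100                        # few measurements, many density cells
A = rng.random((m, p))
rho_true = rng.random(p)
b = A @ rho_true + rng.normal(0, 0.01, m)

lam = 0.1                             # regularisation weight (tuning parameter)
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(p)])   # stack the penalty rows
b_aug = np.concatenate([b, np.zeros(p)])
res = lsq_linear(A_aug, b_aug, bounds=(0.0, np.inf))  # rho >= 0 for a density
print("relative residual:", np.linalg.norm(A @ res.x - b) / np.linalg.norm(b))
```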

8.
Pelegrín, Mercedes; Pelegrín, Blas. OR Spectrum, 2017, 39(3): 775–791
We study the existence and determination of Nash equilibria (NE) in location games where firms compete for the market with the aim of profit maximization. Each competing firm locates...

9.
This paper deals with the integrated facility location and supplier selection decisions for the design of a supply chain network with reliable and unreliable suppliers. Two problems are addressed: (1) facility location/supplier selection; and (2) facility location/supplier reliability. We first consider the facility location and supplier selection problem where all the suppliers are reliable. The decisions concern the selection of suppliers, the location of distribution centres (DCs), the allocation of suppliers to DCs and the allocation of retailers to DCs. The objective is to minimise fixed DC location costs, inventory and safety stock costs at the DCs, and ordering and transportation costs across the network. The introduction of inventory costs and safety stock costs leads to a non-linear NP-hard optimisation problem. To solve this problem, a Lagrangian relaxation-based approach is developed. For the second problem, a two-period decision model is proposed in which selected suppliers are reliable in the first period and can fail in the second period. The corresponding facility location/supplier reliability problem is formulated as a non-linear stochastic programming problem. A Monte Carlo optimisation approach combining the sample average approximation scheme and the Lagrangian relaxation-based approach is proposed. Computational results are presented to evaluate the efficiency of the proposed approaches.
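
For the second problem, the sample average approximation step can be sketched as follows: enumerate supplier selections on a tiny instance, sample period-2 failure scenarios, and average a two-period cost. The cost data and recourse rule are illustrative assumptions, and no Lagrangian relaxation is shown here.

```python
# Sample average approximation (SAA) sketch for supplier selection under failures.
import itertools
import numpy as np

rng = np.random.default_rng(5)
fixed = np.array([100.0, 120.0, 90.0])    # supplier contracting costs (hypothetical)
unit = np.array([5.0, 4.0, 6.0])          # per-unit supply costs (hypothetical)
pfail = np.array([0.1, 0.3, 0.05])        # period-2 failure probabilities (hypothetical)
demand, penalty, n_scen = 50.0, 20.0, 2000

scen = rng.random((n_scen, 3)) < pfail    # True = supplier fails in period 2

def saa_cost(select):
    sel = np.array(select, dtype=bool)
    if not sel.any():
        return np.inf
    cost1 = fixed[sel].sum() + np.where(sel, unit, np.inf).min() * demand
    # Period 2: cheapest selected supplier that survived, else pay a shortage penalty
    c2 = np.where(sel & ~scen, unit, np.inf).min(axis=1)
    cost2 = np.where(np.isfinite(c2), c2 * demand, penalty * demand)
    return cost1 + cost2.mean()

best = min(itertools.product([0, 1], repeat=3), key=saa_cost)
print("selected suppliers:", best, "SAA cost:", round(saa_cost(best), 1))
```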

10.
To address the problem that line-loss anomalies in distribution networks currently cannot be traced to their source and are hard to locate, this paper constructs an analysis and localisation method for feeder losses and transformer-area losses based on data collected by the metering automation system. After cleansing the basic data on substations, lines, transformers and customers, an automatic optimal clustering algorithm classifies customers' electricity consumption behaviour, and random forests model the relationships among the various line losses. A metering automation operation and maintenance platform based on precise localisation of line-loss anomalies is also developed. Analysis and experimental validation on data from 2,516 customers in the service area of a power supply bureau in Guizhou Province show that the proposed line-loss analysis and localisation method can trace and precisely locate line-loss anomalies in distribution networks.
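
A rough sketch of the two modelling steps on synthetic data: the silhouette score stands in for the paper's automatic optimal clustering, and a random forest relates feeder loss to transformer-area losses so that large residuals flag anomalous days or areas.

```python
# Clustering of load profiles + random-forest line-loss model (synthetic stand-ins).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
profiles = rng.random((300, 24))                   # daily load profiles of customers
best_k = max(range(2, 8), key=lambda k: silhouette_score(
    profiles, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(profiles)))
labels = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(profiles)

area_loss = rng.random((365, 8))                   # daily losses of 8 transformer areas
feeder_loss = area_loss @ rng.random(8) + rng.normal(0, 0.05, 365)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(area_loss, feeder_loss)
residual = feeder_loss - rf.predict(area_loss)     # large residual -> anomaly candidate
print(f"k = {best_k}; flagged days: {(np.abs(residual) > 3 * residual.std()).sum()}")
```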

11.
This paper presents the application of the spectral parameter power series method [Pauli, Math Method Appl Sci 33:459–468 (2010)] for constructing the Green's function for the elliptic operator $-\nabla \cdot I \nabla$ in a rectangular domain $\Omega \subset \mathbb{R}^{2}$, where $I$ admits separation of variables. This operator appears in the transport-of-intensity equation (TIE) for undulatory phenomena, which relates the phase of a coherent wave to the axial derivative of its intensity in the Fresnel regime. We present a method for solving the TIE with Dirichlet boundary conditions. In particular, we discuss the case of an inhomogeneous boundary condition, a problem that has not been addressed specifically in other works, under the restricted assumption that the intensity $I$ admits separation of variables. Several simulations show the validity of the method.
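
For reference, a standard form of the transport-of-intensity equation is sketched below (notation assumed here: $\varphi$ the phase, $k$ the wave number, $z$ the optical axis); read as an equation for $\varphi$, its left-hand side involves the elliptic operator from the abstract.

```latex
% Transport-of-intensity equation in the Fresnel regime (standard form;
% the sign convention varies between references):
\nabla \cdot \bigl( I \, \nabla \varphi \bigr) = -k \, \frac{\partial I}{\partial z}
\quad\Longleftrightarrow\quad
-\nabla \cdot I \nabla \varphi = k \, \frac{\partial I}{\partial z}
```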

12.
俞靖, 杨春亭, 王学礼. 声学技术, 1998, 17(3): 103–107
This paper proposes constructing a matrix from the data received by a uniform linear array and applying singular value decomposition to it, so that the frequency and bearing parameters can be estimated separately: the source frequency is estimated from the frequency spectrum, and the source bearing from the bearing spectrum. On this basis, the noise source position is determined by shifting the array centre position. Computer simulations and acoustic experiments demonstrate the feasibility of the method.
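
The subspace idea can be sketched as follows for one simulated narrowband source: stack array snapshots into a matrix, take its SVD, read the frequency from the principal right singular vector and the bearing from the phase slope of the principal left singular vector. This illustrates the separate estimation of the two parameters, not the paper's exact matrix construction.

```python
# SVD-based separate frequency/bearing estimation for a uniform linear array.
import numpy as np

fs, f0, theta = 8000.0, 1000.0, 30.0     # sample rate, source frequency, bearing (deg)
M, N, c, d = 8, 512, 1500.0, 0.5         # sensors, snapshots, sound speed, spacing (m)
rng = np.random.default_rng(7)

t = np.arange(N) / fs
tau = d * np.sin(np.deg2rad(theta)) / c * np.arange(M)      # inter-sensor delays
X = np.exp(2j * np.pi * f0 * (t[None, :] - tau[:, None]))   # M x N data matrix
X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
# Frequency from the principal right singular vector (temporal subspace)
f_est = np.abs(np.fft.fftfreq(N, 1 / fs)[np.argmax(np.abs(np.fft.fft(Vt[0])))])
# Bearing from the linear phase across the principal left singular vector
phase = np.unwrap(np.angle(U[:, 0]))
slope = np.polyfit(np.arange(M), phase, 1)[0]               # = -2*pi*f0*d*sin(theta)/c
theta_est = np.rad2deg(np.arcsin(-slope * c / (2 * np.pi * f_est * d)))
print(f"frequency = {f_est:.1f} Hz, bearing = {theta_est:.1f} deg")
```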

13.
Reliability test plans are important for producing precise and accurate assessments of reliability characteristics. This article explores different strategies for choosing between possible inspection plans for interval-censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of the total cost of an inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adaptable method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show that several general patterns exist across a wide variety of scenarios. Given a fixed total cost, plans that inspect more units with less frequency at equally spaced time points are favored due to the ease of implementation and consistently good performance across a large number of case study scenarios. Plans with inspection times chosen at equally spaced probabilities offer improved estimates of the shape of the distribution, the mean lifetime, and the failure time of a small fraction of the population only for applications with high infant mortality rates. This article uses a Monte Carlo simulation-based approach in addition to the commonly used approach based on the asymptotic variance, and offers comparisons and recommendations for applications with different objectives. In addition, the article outlines a variety of reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, and provides case study results for common scenarios.
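
The two inspection-time strategies are easy to contrast numerically; the sketch below uses hypothetical Weibull parameters with shape less than 1 (high infant mortality) to show how the probability-based plan front-loads inspections.

```python
# Equally spaced inspection times vs. times at equally spaced failure probabilities.
import numpy as np
from scipy.stats import weibull_min

alpha, beta, t_end, k = 0.8, 1000.0, 2000.0, 5    # shape < 1: high infant mortality
equal_time = np.linspace(t_end / k, t_end, k)

p_end = weibull_min.cdf(t_end, alpha, scale=beta)
probs = np.linspace(p_end / k, p_end, k)
equal_prob = weibull_min.ppf(probs, alpha, scale=beta)   # inverse CDF

print("equally spaced times:        ", equal_time.round(0))
print("equally spaced probabilities:", equal_prob.round(0))
# With alpha < 1 the probability-based plan front-loads inspections, which is
# why it captures early failures better.
```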

14.
Measurement of the QT interval parameters on the electrocardiogram (ECG) for calculating an index of repolarization inhomogeneity in the cardiac ventricles, the QT interval dispersion (QTD), is considered in terms of vectorcardiographic and biophysical models of the electrical activity of the heart. In accordance with the laws of electrodynamics, the length of the ventricular complex should be the same in all leads, with the possible exception of some special cases. In practical measurements, QTD arises from various objective and subjective factors that influence the error in measuring the ECG repolarization parameters. Translated from Izmeritel’naya Tekhnika, No. 2, pp. 68–73, February, 2007.

15.
Modelling the location decision of two competing firms that intend to build a new facility in a planar market can be done by a Huff-like Stackelberg location problem. In a Huff-like model, the market share captured by a firm is given by a gravity model determined by distance calculations to facilities. In a Stackelberg model, the leader is the firm that locates first and takes into account the actions of the competing chain (follower), which locates a new facility after the leader. The follower problem is known to be a hard global optimisation problem. The leader problem is even harder, since the leader has to decide on location given the optimal action of the follower. So far, only heuristic approaches to the leader problem have been tested in the literature. Our research question is to solve the leader problem rigorously, in the sense of having a guarantee on the reached accuracy. To answer this question, we develop a branch-and-bound approach. Essentially, the bounding is based on the zero-sum concept: what is gain for one chain is loss for the other. We also discuss several ways of creating bounds for the underlying (follower) sub-problems, and show their performance for numerical cases. This work has been supported by the Ministry of Education and Science of Spain through grant SEJ2005/06273/ECON. M. Elena Sáiz was supported by a junior research grant of Mansholt Graduate School (Wageningen Universiteit).
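
The Huff market-share computation at the core of both the leader and follower problems can be sketched as follows; the coordinates, buying powers, attractiveness values, and distance-decay exponent are all hypothetical.

```python
# Huff gravity model: capture probability proportional to attractiveness / distance^lam.
import numpy as np

customers = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0]])
buying_power = np.array([10.0, 20.0, 15.0])
facilities = np.array([[1.0, 1.0], [3.0, 4.0]])   # e.g. leader's and follower's sites
attract = np.array([2.0, 1.5])
lam = 2.0                                          # distance decay exponent

d = np.linalg.norm(customers[:, None, :] - facilities[None, :, :], axis=2)
u = attract / d ** lam                             # utility of each facility
share = u / u.sum(axis=1, keepdims=True)           # Huff capture probabilities
captured = buying_power @ share
print("market captured per facility:", captured.round(2))
```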

16.
An efficient analytical method is presented for the calculation of blocking probabilities in a tandem queuing network with simultaneous resource possession. This queuing network model is motivated from the need to model optical burst switching networks, where the size of the data bursts varies and the link distance between two adjacent network elements also varies depending on the network's topology. A fast single-node decomposition algorithm is developed to compute the blocking probabilities in the network. The algorithm extends the popular link-decomposition method from teletraffic theory by allowing dynamic simultaneous link possession. Simulation is used to validate the accuracy of the algorithm.
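
As background for the decomposition approach the paper extends, here is a minimal sketch of classical link decomposition: per-link blocking from the Erlang B recursion, combined under an independence approximation. Loads and capacities are hypothetical, and the paper's dynamic simultaneous link possession is not modelled.

```python
# Classical link decomposition with the Erlang B loss formula (illustrative values).
def erlang_b(load, servers):
    # Stable recursion: B(0) = 1, B(n) = load*B(n-1) / (n + load*B(n-1))
    b = 1.0
    for n in range(1, servers + 1):
        b = load * b / (n + load * b)
    return b

links = [(40.0, 48), (35.0, 40), (50.0, 56)]     # (offered load in Erlangs, channels)
per_link = [erlang_b(a, c) for a, c in links]
carry = 1.0
for b in per_link:
    carry *= 1.0 - b                             # independence approximation
print("per-link blocking:", [round(b, 4) for b in per_link])
print("approximate end-to-end blocking:", round(1.0 - carry, 4))
```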

17.
In social science research there are a number of instruments that utilize a rating scale, such as a Likert response scale. For a number of reasons, a respondent's response vector may not contain responses to every item. This study investigated the effect on a respondent's location estimate when a respondent is presented an item, has ample time to answer it, but decides not to respond. For these situations, different strategies have been developed for handling missing data. In this study, four approaches for handling missing data were investigated for their capability to mitigate the effect of omitted responses on person location estimation. These methods included ignoring the omitted response, selecting the "midpoint" response category, hot-decking, and a likelihood-based approach. A Monte Carlo study was performed and the effect of different levels of omission on the simulees' location estimates was determined. Results showed that the hot-decking procedure performed the best of the methods examined. Implications for practitioners were discussed.
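
Of the four strategies, hot-decking is the least standard, so a minimal sketch follows: an omitted response is filled with the answer of the donor respondent who is most similar on the items the person did answer. The Likert data are hypothetical, with 0 marking an omission.

```python
# Hot-deck imputation of omitted Likert responses (hypothetical 5-point data).
import numpy as np

resp = np.array([[4, 5, 0, 3],     # respondent 0 omitted item 2
                 [4, 5, 4, 3],
                 [1, 2, 2, 1],
                 [5, 4, 5, 4]], dtype=float)

filled = resp.copy()
for i, j in zip(*np.where(resp == 0)):
    observed = resp[i] > 0
    donors = [r for r in range(len(resp)) if r != i and resp[r, j] > 0]
    # Donor = respondent closest on the items this person actually answered
    best = min(donors, key=lambda r: np.abs(resp[r, observed] - resp[i, observed]).sum())
    filled[i, j] = resp[best, j]
print(filled)
```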

18.
This application note describes a computer-based method for automated evaluation of Weibull-distributed lifetime data. It uses commercially available software, MathCad, to set up equations and graphs. The computer application is a simple and inexpensive way of evaluating and displaying your lifetime data, taking into consideration both parametric and non-parametric confidence intervals. It is valid for arbitrarily censored data.

19.
An equivalent network representation is derived for thickness vibration modes in piezoelectric plates with a linearly graded parameter. The network is composed of a transmission line of finite length, which is linked to the electric excitation port via ideal transformers connected serially at both ends. The ratios of the two transformers differ from each other and are frequency dependent. Two frequency-dependent capacitors of the same value but opposite sign appear at the electric port. The frequency characteristic of the input electric admittance of the resonator, which shows the unique feature of the graded piezoelectric plate, is demonstrated by the equivalent network analysis.

20.
Fatal neurodegenerative diseases such as bovine spongiform encephalopathy in cattle, scrapie in sheep and Creutzfeldt-Jakob disease in humans are caused by prions. The prion is a protein encoded by a normal cellular gene. The cellular form of the prion, PrP(C), is benign but can be converted into a disease-causing form (named scrapie), PrP(Sc), by a conformational change from α-helices to β-sheets. Prions replicate by this conformational change; that is, PrP(Sc) interacts with PrP(C), producing a new molecule of PrP(Sc). This kind of replication is modelled in this contribution as an autocatalytic process. The kinetic model accounts for two of the three epidemiological manifestations: sporadic and infectious. By assuming irreversibility of the PrP(Sc) replication and describing a first-order reaction for the degradation of cellular tissue, the authors explore dynamical scenarios for prion progression, such as oscillations and conditions for multiplicity of equilibria. Feinberg's chemical reaction network theory is exploited to identify multiple steady states and their associated kinetic constants.
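
The autocatalytic replication step can be sketched as a small ODE model; the rate constants and the production/degradation structure below are illustrative assumptions, not the authors' exact kinetic scheme.

```python
# Autocatalytic prion replication as a two-species ODE (hypothetical rate constants).
import numpy as np
from scipy.integrate import solve_ivp

k_prod, k_deg_c, k_auto, k_deg_sc = 1.0, 0.5, 0.8, 0.3

def rhs(t, y):
    c, s = y                                 # [PrP_C], [PrP_Sc]
    conv = k_auto * c * s                    # autocatalytic conversion PrP_C -> PrP_Sc
    return [k_prod - k_deg_c * c - conv, conv - k_deg_sc * s]

sol = solve_ivp(rhs, (0.0, 50.0), [2.0, 0.01])
c_end, s_end = sol.y[:, -1]
print(f"approximate steady state: PrP_C = {c_end:.3f}, PrP_Sc = {s_end:.3f}")
```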
