Similar documents
19 similar documents found (search time: 31 ms)
1.
Direct simulation for estimating the unreliability of a highly reliable stochastic network often requires a huge sample size to obtain statistically significant results. In this paper, a simple and efficient importance sampling estimator for network unreliability, based on the capacity of the minimum cut, is proposed. Under mild conditions, the proposed estimator guarantees variance reduction, and an upper bound on its relative error is derived for the case when the network edges have a common functioning probability. Empirical results show that the proposed importance sampling estimator achieves significant variance reduction, especially for highly reliable networks.
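
As a rough illustration of the min-cut biasing idea (not the paper's exact estimator), the sketch below inflates the failure probability of the edges on one minimum s-t cut and corrects with a likelihood ratio; the use of networkx and the probabilities q and q_biased are illustrative assumptions.

```python
import random
import networkx as nx  # assumption: networkx for min cuts and connectivity tests

def is_unreliability(G, s, t, q, q_biased, n_samples, seed=0):
    """Importance-sampling estimate of P(s-t disconnected) when each edge
    fails independently with probability q.  Edges on a minimum s-t cut
    fail with the inflated probability q_biased; the bias is undone by a
    likelihood-ratio weight."""
    rng = random.Random(seed)
    cut = nx.minimum_edge_cut(G, s, t)          # a minimum-cardinality s-t cut
    total = 0.0
    for _ in range(n_samples):
        weight, surviving = 1.0, []
        for u, v in G.edges():
            qe = q_biased if ((u, v) in cut or (v, u) in cut) else q
            if rng.random() < qe:               # edge fails
                weight *= q / qe
            else:                               # edge works
                weight *= (1 - q) / (1 - qe)
                surviving.append((u, v))
        H = nx.Graph(surviving)
        down = not (H.has_node(s) and H.has_node(t) and nx.has_path(H, s, t))
        total += weight * down
    return total / n_samples
```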

2.
Risk difference (RD) plays an important role in many biological and epidemiological investigations that compare the risks of developing a certain disease or tumor under two drugs or treatments. When the disease is rare and acute, inverse sampling (rather than binomial sampling) is usually recommended for collecting the binary outcomes. In this paper, we derive an asymptotic confidence interval estimator for RD based on the score statistic. To compare its performance with three existing confidence interval estimators, we employ Monte Carlo simulation to evaluate their coverage probabilities, expected confidence interval widths, and the mean difference of the coverage probabilities from the nominal confidence level. Our simulation results suggest that the score-test-based confidence interval estimator is generally more appealing than the Wald, uniformly minimum variance unbiased, and likelihood ratio confidence interval estimators, because it maintains a coverage probability close to the desired confidence level and yields the shortest expected width in most cases. We illustrate these confidence interval construction methods with real data sets from a drug comparison study and a congenital heart disease study.
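
The score interval itself is derived in the paper; as a hedged stand-in, the sketch below runs the kind of Monte Carlo coverage evaluation the abstract describes, using a simple Wald-type interval under inverse (negative binomial) sampling, where each group is sampled until c cases occur.

```python
import numpy as np

def wald_rd_ci(c1, n1, c2, n2):
    """Wald-type 95% CI for RD = p1 - p2 under inverse sampling; the
    delta-method variance of p_hat = c/N is p^2 (1 - p) / c."""
    p1, p2 = c1 / n1, c2 / n2
    se = np.sqrt(p1**2 * (1 - p1) / c1 + p2**2 * (1 - p2) / c2)
    d = p1 - p2
    return d - 1.96 * se, d + 1.96 * se

def coverage(p1, p2, c=20, reps=20000, seed=1):
    """Empirical coverage probability, mirroring the paper's comparison."""
    rng = np.random.default_rng(seed)
    n1 = c + rng.negative_binomial(c, p1, reps)   # subjects until c-th case
    n2 = c + rng.negative_binomial(c, p2, reps)
    lo, hi = wald_rd_ci(c, n1, c, n2)
    return np.mean((lo <= p1 - p2) & (p1 - p2 <= hi))

print(coverage(0.05, 0.02))   # should sit near, often below, the nominal 0.95
```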

3.
Cluster sampling is a viable sampling design for collecting reference data for the purpose of conducting an accuracy assessment of land-cover classifications obtained from remotely sensed data. The formulas for estimating various accuracy parameters such as the overall proportion of pixels correctly classified, the kappa coefficient of agreement, and user's and producer's accuracy are the same under cluster sampling and simple random sampling, but the formulas for estimating standard errors differ between the two designs. If standard error formulas appropriate for cluster sampling are not employed in an accuracy assessment based on this design, the reported variability of map accuracy statistics is likely to be grossly misleading. The proper standard error formulas for common map accuracy statistics are derived for one-stage cluster sampling. The validity of these standard error formulas is verified by a small simulation study, and the standard errors computed according to the usual simple random sampling formulas are shown to underestimate the true cluster sampling standard errors by 20–70% if the intracluster correlation is moderate.
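
A minimal sketch of that point, under the assumption of a one-stage cluster sample with a ratio estimator of overall accuracy: the cluster-based standard error uses between-cluster variation, while the naive SRS formula ignores intracluster correlation.

```python
import numpy as np

def accuracy_se(cluster_correct, cluster_size):
    """Overall accuracy with its one-stage cluster-sampling SE (ratio
    estimator over clusters) and the naive SRS SE for comparison."""
    c = np.asarray(cluster_correct, float)   # correct pixels per cluster
    m = np.asarray(cluster_size, float)      # pixels per cluster
    n = len(c)
    p = c.sum() / m.sum()                    # overall proportion correct
    resid = c - p * m                        # ratio-estimator residuals
    se_cluster = np.sqrt(resid.var(ddof=1) / n) / m.mean()
    se_srs = np.sqrt(p * (1 - p) / m.sum())  # ignores intracluster correlation
    return p, se_cluster, se_srs

# toy clusters with strongly correlated errors inside two of them
print(accuracy_se([25, 24, 25, 10, 25, 25, 12, 25, 25, 24], [25] * 10))
```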

4.
Despite their capability for monitoring process variability, control charts are not effective tools for identifying the real time at which a change occurred. Identifying the real time of a change in a process is known as the change-point estimation problem. Most change-point models in the literature are limited to fixed-sampling control charts, which are only a special case of the more effective charts known as variable-sampling charts. In this paper, we develop a general fuzzy-statistical clustering approach for estimating change points in different types of control charts with either a fixed or a variable sampling strategy. For this purpose, we devise and evaluate a new similarity measure based on the definitions of the operating characteristic and power functions. We also develop and examine a new objective function and discuss its relation to the maximum-likelihood estimator. Finally, we conduct extensive simulation studies to evaluate the performance of the proposed approach for different types of control charts with different sampling strategies.
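
For context on the maximum-likelihood connection, here is a minimal sketch (not the paper's fuzzy-clustering method) of the classical MLE change-point estimator for a step change in the mean of a normal sequence: pick the split that minimizes the within-segment sum of squares.

```python
import numpy as np

def mle_change_point(x):
    """MLE of a single change point for a mean shift with normal noise:
    the split minimizing the pooled within-segment sum of squares."""
    x = np.asarray(x, float)
    best_tau, best_sse = 1, np.inf
    for tau in range(1, len(x)):            # change after observation tau
        a, b = x[:tau], x[tau:]
        sse = ((a - a.mean())**2).sum() + ((b - b.mean())**2).sum()
        if sse < best_sse:
            best_tau, best_sse = tau, sse
    return best_tau

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 120), rng.normal(1.5, 1, 80)])
print(mle_change_point(x))                  # close to the true change point, 120
```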

5.
Assessing the accuracy of land cover maps is often prohibitively expensive because of the difficulty of collecting a statistically valid probability sample from the classified map. Even when post-classification sampling is undertaken, cost and accessibility constraints may result in imprecise estimates of map accuracy. If the map is constructed via supervised classification, then the training sample provides a potential alternative source of data for accuracy assessment. Yet unless the training sample is collected by probability sampling, the estimates are, at best, of uncertain quality, and may be substantially biased. This article discusses a new approach to map accuracy assessment based on maximum posterior probability estimators. Maximum posterior probability estimators are resistant to bias induced by non-representative sampling, and so are intended for situations in which the training sample is collected without a statistical sampling design. The maximum posterior probability approach may also be used to increase the precision of estimates obtained from a post-classification sample. In addition to discussing maximum posterior probability estimators, this article reports on a simulation study comparing three approaches to estimating map accuracy: 1) post-classification sampling, 2) resampling the training sample via cross-validation, and 3) maximum posterior probability estimation. The simulation study showed substantial reductions in bias and improvements in precision for the maximum posterior probability estimator relative to the cross-validation estimator when the training sample was not representative of the map. In addition, combining an ordinary post-classification estimator with the maximum posterior probability estimator produced an estimator that was at least as precise as, and usually more precise than, the ordinary post-classification estimator.
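
Approach 2 from that comparison is easy to sketch; assuming scikit-learn and a hypothetical training sample of spectral features and labels, cross-validation resamples the training data in place of a post-classification probability sample (and inherits the training sample's bias when it is not representative of the map).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# hypothetical training sample: spectral features X, land-cover labels y
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)

# map accuracy estimated by resampling the training sample via cross-validation
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(scores.mean(), scores.std())
```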

6.
Different multi-fidelity surrogate (MFS) frameworks have been used for optimization and uncertainty quantification. This paper investigates differences between various MFS frameworks with the aid of examples including algebraic functions and a borehole example. These MFS frameworks include three Bayesian frameworks using 1) a model discrepancy function, 2) low-fidelity model calibration, and 3) a comprehensive approach combining both. Three counterparts with simple frameworks are also included, which have the same functional form but can be built with ready-made surrogates. The sensitivity of the frameworks to the choice of design of experiments (DOE) is investigated by repeating calculations with 100 different DOEs. Computational cost savings and accuracy improvement over a single-fidelity surrogate model are investigated as a function of the ratio of sampling costs between low- and high-fidelity simulations. For the examples considered, MFS frameworks were found to be more useful for saving computational time than for improving accuracy. For the Hartmann 6 function example, the maximum cost saving for the same accuracy was 86%, while the maximum accuracy improvement for the same cost was 51%. It was also found that the DOE can substantially change the relative standing of the different frameworks. The cross-validation error appears to be a reasonable candidate for detecting poor MFS frameworks for a specific problem, but it does not perform well for the choice between an MFS and a single-fidelity surrogate.
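
One of the simple (non-Bayesian) frameworks in that taxonomy, a low-fidelity surrogate plus an additive discrepancy built from ready-made surrogates, can be sketched as follows; the two toy fidelity functions and the polynomial surrogates are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import Polynomial

def f_low(x):   # cheap, biased model (illustrative)
    return np.sin(8 * x) + 0.3 * x

def f_high(x):  # expensive "truth" (illustrative)
    return f_low(x) + 0.25 * (x - 0.5)**2 + 0.1

x_lf = np.linspace(0, 1, 40)    # many cheap low-fidelity runs
x_hf = np.linspace(0, 1, 6)     # few expensive high-fidelity runs

s_lf = Polynomial.fit(x_lf, f_low(x_lf), 9)                 # LF surrogate
delta = Polynomial.fit(x_hf, f_high(x_hf) - s_lf(x_hf), 2)  # discrepancy term
mfs = lambda x: s_lf(x) + delta(x)                          # y = s_LF + delta

xg = np.linspace(0, 1, 200)
print(np.abs(mfs(xg) - f_high(xg)).max())   # MFS error on a dense grid
```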

7.
We investigate the effect of a martingale control as a smoother for MC/QMC methods. Numerical results for estimating low-biased solutions of American put option prices under the Black-Scholes model demonstrate that using QMC methods alone can be problematic, but this can be fixed by adding a (local) martingale control variate to the least-squares estimator to gain accuracy and efficiency. In examples of estimating European option prices under multi-factor stochastic volatility models, randomized QMC methods improve the variance by merely a single-digit factor; after adding a martingale control, the variance reduction ratio rises to about 700 for randomized QMC and about 50 for MC simulations. When the delta estimation problem is considered, the efficiency of the martingale control variate method decreases; we propose an importance sampling method that performs better, particularly in the presence of rare events.
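
The flavor of a martingale control variate is easy to show in the plain Black-Scholes setting (a much simpler case than the paper's American and stochastic-volatility examples): under the risk-neutral measure the discounted stock price is a martingale with known mean S0, so it can serve as a control.

```python
import numpy as np

def bs_call_mc(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, n=100_000, seed=0):
    """European call by plain MC and by MC with the discounted stock
    price exp(-rT) S_T (known mean S0) as a martingale control variate."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
    control = np.exp(-r * T) * ST
    C = np.cov(payoff, control)
    beta = C[0, 1] / C[1, 1]                  # variance-minimizing coefficient
    adjusted = payoff - beta * (control - S0)
    return (payoff.mean(), payoff.std() / np.sqrt(n),
            adjusted.mean(), adjusted.std() / np.sqrt(n))

print(bs_call_mc())   # the control variate shrinks the standard error
```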

8.
The shape of elements in a finite element analysis may be the most important of the many factors that induce discretization error. In particular, the efficiency of adaptive refinement analysis depends on the shape of the elements, so estimating the quality of element shape is essential while the adaptive analysis is performed. Unfortunately, most a posteriori error estimates cannot evaluate the shape error of an element, so some difficulties remain in the application of adaptive analysis. For this purpose, an error estimator which can separately evaluate size error and distortion error, derived from Zienkiewicz-Zhu's error estimator, is presented for bilinear and quadratic isoparametric finite elements. As deduced from the results of numerical experiments, the suggested estimator gives a reasonable evaluation of the error due to element shape as well as the discretization error.

9.
When a Markov system is large, Monte Carlo methods are needed to compute its transient unavailability, and when the system unavailability is very small, highly efficient Monte Carlo methods are required. Based on the integral equation of the Markov system's life process, this paper gives a unified description of Monte Carlo methods for computing transient unavailability, and from it designs a direct statistical estimation method and a weighted statistical estimation method for the transient unavailability of Markov systems. The transient unavailability of a repairable Con/3/30:F system is computed with direct simulation, quasi-simulation, statistical estimation based on direct simulation, statistical estimation based on quasi-simulation, and the weighted statistical estimation method. The results show that, because it uses both a biased sampling space and a successive-event estimator, the weighted statistical estimation method has the smallest variance and is the most efficient when the system unavailability is very small.
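
A minimal sketch of the direct (analogue) simulation baseline that the weighted estimator improves on, with illustrative failure and repair rates: each of the 30 components alternates exponential up and repair times, and the Con/3/30:F system is down at time t when any 3 consecutive components are down.

```python
import numpy as np

def component_up(t, lam, mu, rng):
    """State at time t of a repairable component that alternates
    exp(lam) up-times and exp(mu) repair times, starting up."""
    clock, up = 0.0, True
    while True:
        clock += rng.exponential(1 / lam if up else 1 / mu)
        if clock > t:
            return up
        up = not up

def unavailability_con3_30F(t=10.0, lam=0.05, mu=1.0, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    down = 0
    for _ in range(n):
        run = 0
        for _ in range(30):
            run = 0 if component_up(t, lam, mu, rng) else run + 1
            if run == 3:          # 3 consecutive failed components
                down += 1
                break
    return down / n

print(unavailability_con3_30F())  # small probability: direct MC needs a huge n
```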

10.
Crude simulation for estimating the reliability of a stochastic network often requires a large sample size to obtain statistically significant results. In this paper, we propose a simple recursive importance and stratified sampling estimator, which is shown to be unbiased and to achieve smaller variance. Preallocating a sampling effort of size two to each undetermined subnetwork at each stage makes it possible to estimate the variance of the proposed estimator, and it significantly enhances the variance reduction from stratification by deferring the termination of the recursive stratification. Empirical results show that the proposed estimator achieves significant variance reduction, especially for highly reliable networks.

11.
The inadequacy of the standard notions of detectability and observability for ascertaining robust state estimation is shown. The notion of robust state estimation is defined, and for a class of processes the conditions under which robust state estimation is possible are given. A method of robust, nonlinear, multi-rate state estimator design is presented; it can be used to improve robustness in an existing estimator or to design a new robust estimator. Estimator tuning guidelines that ensure the asymptotic stability of the estimator error dynamics are given. To ensure that the estimation error does not exceed a desired limit, the sampling period of infrequent measurements should be less than an upper bound that depends on factors such as the size of the process's dominant time constant, the magnitude of measurement noise, and the level of process-model mismatch. An expression for calculating this upper bound is presented; the upper bound is the latest time at which the next infrequent measurements should arrive to keep the estimation error within the desired limit. The expression also allows one to calculate the highest quality of estimation achievable in a given process. A binary distillation flash tank and a free-radical polymerization reactor are considered to show the application and performance of the estimator.

12.
A method for estimating the speedup of asynchronous bottom-up parallel parsing is presented. Two models for bottom-up parallel parsing are proposed, and the speedup of each is estimated. The speedup obtained for model A is very close to the simulation result already available in the literature; however, the model is restrictive because each processor can communicate only with its immediate left and right neighbors, which increases the processor coordination and interprocessor communication times. Model B, while showing greater speedup, is expensive to construct when the number of processors is large.

13.
This paper studies the problem of blind amplitude estimation of sinusoidal signals in electronic reconnaissance signal processing, and proposes a blind amplitude estimation algorithm based on correlation accumulation and linear regression. The frequency of the received signal is first estimated and a reference signal is constructed; the received signal is correlated with the reference, converted to baseband, and accumulated; the correlation-accumulation curve is then fitted by least-squares linear regression, and the slope of the fitted line is taken as the amplitude estimate. Simulation results show that when the SNR is above -3 dB, the root-mean-square error of the estimate is below 1.1 times the Cramér-Rao bound, so the method achieves blind amplitude estimation of sinusoidal signals at low SNR.
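
A sketch of the described pipeline under illustrative parameters: take the FFT-peak frequency, mix the signal to baseband with a reference tone, accumulate, and regress the accumulation curve on the sample index; the magnitude of the baseband mean is A/2, so twice the fitted slope estimates the amplitude.

```python
import numpy as np

def blind_amplitude(x, fs):
    n = np.arange(len(x))
    f_hat = np.argmax(np.abs(np.fft.rfft(x))) * fs / len(x)  # coarse frequency
    baseband = x * np.exp(-2j * np.pi * f_hat / fs * n)      # correlate with reference
    curve = np.abs(np.cumsum(baseband))                      # accumulation curve
    slope = np.polyfit(n + 1, curve, 1)[0]                   # LS linear regression
    return 2.0 * slope                                       # slope ~= A / 2

rng = np.random.default_rng(0)
fs, A, f, N = 1e4, 1.0, 1250.0, 4096     # f on an FFT bin keeps the toy simple
n = np.arange(N)
x = A * np.cos(2 * np.pi * f / fs * n) + rng.normal(0, 1.0, N)  # about -3 dB SNR
print(blind_amplitude(x, fs))            # close to A even at this low SNR
```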

14.
This paper analyzes a shortcoming of current mainstream sampled-simulation techniques, namely their use of fixed-length samples, and proposes a sampled-simulation technique based on compiler metadata (BigLoopSP). First, the compiler collects boundary information for every possible periodic behavior as metadata. Then, to handle the large amount of dynamic behavior present in programs, periodic behaviors are partitioned and sampling points are selected by combining the compiler-generated metadata with the program's dynamic behavior. The variable-length candidate samples obtained from this partitioning effectively reduce the total number of feature samples required while preserving sample quality. Consequently, compared with the fixed-length sampling technique SimPoint, BigLoopSP improves accuracy while further reducing the required simulation time (an average speedup of 2.63 over SimPoint).

15.
We study how to perform model selection for time series data where millions of candidate ARMA models may be eligible for selection. We propose a feasible computing method based on the Gibbs sampler. With this method, model selection is performed through a random sample generation algorithm, and given a model of fixed dimension, parameter estimation is done through the maximum likelihood method. Our method takes into account several computing difficulties encountered in estimating ARMA models. Under some regularity conditions, the method selects the best candidate model with probability tending to 1 in the limit. We then propose several empirical rules for implementing our computing method in applications. Finally, a simulation study and an example on modelling China's Consumer Price Index (CPI) data are presented for illustration and verification.
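
The authors' sampler is specified in the paper; as a loosely related sketch only, one can mimic a Gibbs-style coordinate sweep over the orders (p, q), drawing each coordinate from a distribution that favours low AIC, with ML estimation at fixed dimension done by statsmodels.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def order_search(y, p_max=4, q_max=4, sweeps=10, seed=0):
    """Gibbs-flavoured stochastic search over ARMA(p, q) orders; a toy
    stand-in for the paper's algorithm, not a reimplementation of it."""
    rng = np.random.default_rng(seed)
    cache = {}
    def aic(p, q):
        if (p, q) not in cache:
            try:
                cache[p, q] = ARIMA(y, order=(p, 0, q)).fit().aic
            except Exception:
                cache[p, q] = np.inf
        return cache[p, q]
    p, q = 1, 1
    for _ in range(sweeps):
        scores = np.array([aic(k, q) for k in range(p_max + 1)])
        w = np.exp(-(scores - scores.min()) / 2)
        p = rng.choice(p_max + 1, p=w / w.sum())     # draw p given q
        scores = np.array([aic(p, k) for k in range(q_max + 1)])
        w = np.exp(-(scores - scores.min()) / 2)
        q = rng.choice(q_max + 1, p=w / w.sum())     # draw q given p
    return min(cache, key=cache.get)                 # best (p, q) visited

y = np.random.default_rng(1).standard_normal(400)
print(order_search(y))   # white noise: low orders should win
```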

16.
Distributed parallel simulation is one of the effective techniques for accelerating architectural simulation. This paper first establishes a general performance-analysis model for distributed parallel simulation, theoretically analyzes properties such as parallel speedup and parallel efficiency for typical systems, and draws some useful conclusions. On this basis, it proposes SEDSim (scalable and evenly distributed simulation), a balanced and scalable distributed parallel simulation approach. To address load imbalance across simulation nodes, SEDSim introduces CoMEPA (cost model guided evenly partition and allocation), a cost-model-guided policy for evenly partitioning and allocating instruction intervals; for efficient integration of distributed parallel simulation with non-contiguous sampled simulation intervals of arbitrary number, it introduces MinEC (minimum equivalent cost), an instruction-interval allocation policy based on minimum equivalent cost. SEDSim was implemented on sim-outorder, and its speed and accuracy were tested with selected SPEC CPU2000 programs. Both the theoretical analysis and the test results demonstrate SEDSim's advantages: compared with commonly used methods or policies, CoMEPA and MinEC achieve performance improvements of up to about 1.6x and 1.4x, respectively.
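
The CoMEPA policy itself is defined in the paper; as a hedged sketch of the general idea of cost-model-guided even allocation, the snippet below greedily assigns the costliest instruction intervals to the currently least-loaded simulation node (the classic LPT heuristic), with the interval costs assumed to come from a simulation-cost model.

```python
import heapq

def even_allocation(interval_costs, n_nodes):
    """Greedy LPT allocation: the largest-cost interval goes to the node
    with the smallest current load.  A sketch in the spirit of
    cost-model-guided even partitioning, not CoMEPA itself."""
    heap = [(0.0, node, []) for node in range(n_nodes)]   # (load, id, intervals)
    heapq.heapify(heap)
    for idx, cost in sorted(enumerate(interval_costs), key=lambda t: -t[1]):
        load, node, items = heapq.heappop(heap)
        items.append(idx)
        heapq.heappush(heap, (load + cost, node, items))
    return sorted(heap)

for load, node, items in even_allocation([9, 7, 6, 5, 5, 4, 3, 2], 3):
    print(node, load, items)   # loads 13/14/14: close to even
```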

17.
Research on the positional accuracy of points after thinning of scanned images
This paper presents a method for computing the mean square error of point positions after a scanned image is thinned. The main conclusions drawn from experiments are: the positional accuracy after thinning consists of a fixed error component and a proportional error component; different thinning algorithms have little effect on positional accuracy, but the scanner itself may introduce systematic errors, whose magnitude requires further study. Image thinning is the basis of scanned-image vectorization, so studying the accuracy of thinned scanned images is of real significance for certain application areas (such as converting images into data for a GIS database).

18.
It is not uncommon to encounter a randomized clinical trial (RCT) in which we need to account for both patients' noncompliance with their assigned treatment and confounders to avoid making a misleading inference. In this paper, we focus on estimating the relative treatment efficacy measured by the odds ratio (OR) in large strata for a stratified RCT with noncompliance. We develop five asymptotic interval estimators for the OR and employ Monte Carlo simulation to evaluate their finite-sample performance in a variety of situations. We note that the interval estimator based on the weighted least squares (WLS) method may perform well when the number of strata is small, but tends to be liberal when the number of strata is large. We find that an interval estimator using weights that are not functions of unknown parameters estimated from the data can improve the accuracy of the WLS-based interval estimator, but loses precision. We note that the estimator using the logarithmic transformation of the WLS point estimator and the interval estimator using the logarithmic transformation of the Mantel-Haenszel (MH) type of point estimator can perform well with respect to both coverage probability and average length in all the situations considered here. We further note that the interval estimator derived from a quadratic equation using a randomization-based method can be of use when the number of strata is large. Finally, we use data taken from a multiple risk factor intervention trial to illustrate the use of the interval estimators appropriate when the number of strata is small or moderate.
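
The MH interval with a log transformation is standard enough to sketch; assuming the usual 2x2 layout per stratum and the Robins-Breslow-Greenland variance for log(OR_MH):

```python
import numpy as np

def mh_or_ci(tables, z=1.96):
    """Mantel-Haenszel common OR with a CI built on the logarithmic
    transformation (Robins-Breslow-Greenland variance).  tables: one
    (a, b, c, d) row per stratum, with stratum OR = a*d / (b*c)."""
    t = np.asarray(tables, float)
    a, b, c, d = t[:, 0], t[:, 1], t[:, 2], t[:, 3]
    n = a + b + c + d
    R, S = a * d / n, b * c / n
    P, Q = (a + d) / n, (b + c) / n
    or_mh = R.sum() / S.sum()
    var_log = ((P * R).sum() / (2 * R.sum()**2)
               + (P * S + Q * R).sum() / (2 * R.sum() * S.sum())
               + (Q * S).sum() / (2 * S.sum()**2))
    half = z * np.sqrt(var_log)
    return or_mh, or_mh * np.exp(-half), or_mh * np.exp(half)

# two strata: (exposed cases, exposed controls, unexposed cases, unexposed controls)
print(mh_or_ci([(15, 85, 5, 95), (30, 70, 12, 88)]))  # OR near 3.2
```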

19.
In coverage-driven simulation verification, the time spent on simulation coverage analysis directly affects verification efficiency. To address the low simulation-replay efficiency of existing coverage-analysis methods based on value change dump (VCD) files, this paper improves the replay process and proposes an efficient VCD-based simulation coverage analysis method in which replay evaluates only the control statements of the HDL description. A prototype coverage-analysis system and analysis methods for various coverage metrics were implemented. Experimental results show that the method achieves the same coverage-analysis precision as existing methods while improving replay efficiency by more than a factor of two.
