Similar Documents
20 similar documents found
1.
A vertex reconstruction algorithm based on the Gaussian-sum filter (GSF) was developed and implemented in the framework of the CMS reconstruction program. While linear least-squares estimators are optimal when all observation errors are Gaussian, the GSF offers a better treatment of non-Gaussian distributions of track parameter errors when these are modeled by Gaussian mixtures. The algorithm has been verified and evaluated with simulated data, and the results are compared to the Kalman filter and to an adaptive vertex estimator.
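For illustration only, here is a minimal one-dimensional sketch of the Gaussian-sum update (not the CMS implementation): each mixture component receives an ordinary Kalman measurement update and is reweighted by its marginal likelihood. The state layout, measurement model, and all numbers are assumptions made for the example:

    import numpy as np

    def gsf_update(means, variances, weights, z, r):
        """One Gaussian-sum filter measurement update for a 1-D state.

        Each component (m, v, w) gets a Kalman update against measurement z
        with noise variance r; weights are rescaled by the component's
        marginal likelihood of z and renormalized.
        """
        new_m, new_v, new_w = [], [], []
        for m, v, w in zip(means, variances, weights):
            s = v + r                      # innovation variance
            k = v / s                      # Kalman gain
            new_m.append(m + k * (z - m))  # updated component mean
            new_v.append((1 - k) * v)      # updated component variance
            lik = np.exp(-0.5 * (z - m) ** 2 / s) / np.sqrt(2 * np.pi * s)
            new_w.append(w * lik)
        new_w = np.asarray(new_w)
        return new_m, new_v, (new_w / new_w.sum()).tolist()

    # Two-component mixture modelling a non-Gaussian track-parameter error:
    print(gsf_update([0.0, 0.0], [1.0, 9.0], [0.8, 0.2], z=1.5, r=0.5))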

2.
《Pattern recognition》2006,39(5):812-826
A feature selection methodology based on a novel Bhattacharyya space is presented and illustrated with a texture segmentation problem. The Bhattacharyya space is constructed from the Bhattacharyya distances of different measurements extracted with sub-band filters from training samples. The marginal distributions of the Bhattacharyya space yield an ordering of the most discriminant sub-bands, which can be used as a search path for a wrapper algorithm. When this feature selection is combined with a multiresolution classification algorithm on a standard set of texture mosaics, it produces the lowest misclassification errors reported.
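For reference, the Bhattacharyya distance underlying that space is straightforward to compute; a minimal sketch for two normalized histograms (the reduction of sub-band measurements to histograms is an assumption of the example):

    import numpy as np

    def bhattacharyya_distance(p, q, eps=1e-12):
        """D_B = -ln(sum_i sqrt(p_i * q_i)) for normalized histograms p, q."""
        p = np.asarray(p, float); p = p / p.sum()
        q = np.asarray(q, float); q = q / q.sum()
        bc = np.sum(np.sqrt(p * q))       # Bhattacharyya coefficient in [0, 1]
        return -np.log(max(bc, eps))      # distance; 0 iff p == q

    # Identical histograms give ~0; disjoint ones give a large distance.
    print(bhattacharyya_distance([4, 2, 1], [4, 2, 1]))   # ~0.0
    print(bhattacharyya_distance([1, 0, 0], [0, 0, 1]))   # large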

3.
Fault injection is the traditional means of studying soft error propagation, but as program complexity keeps growing, using fault injection to study the propagation of large numbers of soft errors incurs an enormous time cost. A method is proposed that analyzes and models a program's dynamic instructions to obtain soft error outcomes quickly. The program is converted into a dynamic instruction sequence, and architecturally-correct-execution analysis partitions all possible soft errors into those that affect the program's result and those that do not. A soft error fault propagation analysis model is built on the dynamic dependence graph, together with a criterion for judging program crashes; an algorithm is then proposed that performs fault propagation analysis for any specified soft error capable of affecting the program's result, with emphasis on predicting crashes. Experiments show that the prediction's false-negative rate and the average time to analyze a single soft error are markedly lower than those of existing methods.

4.
《Computers & chemistry》1999,23(1):75-84
The vibrational energy distribution can now be determined confidently from experimental data thanks to a new implementation of the previously published algorithm, which has been reprogrammed and improved to the point that it is practical for experimental work. The program has been thoroughly tested using mathematically well-defined functions as initial energy distributions. The influence of unavoidable errors in the data has been carefully studied: random errors of up to 2% in the experimental results introduce practically negligible changes in the recovered energy distributions, and even data with 5% errors produce valuable approximations to the true distributions.

5.
The standard discrete-time autoregressive model is poorly suited for modeling series obtained by sampling continuous-time processes at fairly rapid rates. Large computational errors can occur when the Levinson algorithm is used to estimate the parameters of this model, because the Toeplitz covariance matrix is ill-conditioned for inversion. An alternative model is developed based on an incremental difference operator rather than the shift operator. It is shown that, as the sampling period goes to zero, the coefficients of this model, unlike the standard autoregressive parameters, converge to parameters that depend directly on the statistics of the continuous-time process. A Levinson-type algorithm for efficiently estimating the parameters of this model is derived. Numerical examples show that when the sampling interval is small, this algorithm is considerably less sensitive to arithmetic roundoff errors than the Levinson algorithm.
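For context, a sketch of the classical Levinson-Durbin recursion that the paper identifies as numerically fragile at fast sampling rates (this is the baseline, not the paper's incremental-difference algorithm):

    import numpy as np

    def levinson_durbin(r, order):
        """Solve the Toeplitz normal equations for AR coefficients.

        r : autocorrelation sequence r[0], ..., r[order]
        Returns (a, e): a[0] = 1 and AR coefficients a[1..order],
        plus the final prediction error variance e.
        """
        a = np.zeros(order + 1)
        a[0] = 1.0
        e = r[0]
        for k in range(1, order + 1):
            lam = -(r[k] + np.dot(a[1:k], r[1:k][::-1])) / e
            a[1:k + 1] = a[1:k + 1] + lam * a[k - 1::-1]  # order-k update
            e *= 1.0 - lam ** 2                           # shrink error power
        return a, e

    # Example with made-up autocorrelations of an AR(2)-like process:
    print(levinson_durbin(np.array([1.0, 0.5, 0.1]), 2))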

6.
A mixture vector autoregressive model has recently been introduced in the literature. Although this model is a promising candidate for nonlinear multiple time series modeling, the high dimensionality of its parameters and the lack of a method for computing standard errors of the estimates have limited its application to real data. The contribution of this paper is threefold. First, a form of parameter constraints is introduced, with an efficient EM algorithm for estimation. Second, an accurate method for computing standard errors is presented for the model with and without parameter constraints. Lastly, a hypothesis-testing approach based on likelihood ratio tests is proposed, which aids in identifying unnecessary parameters and leads to greater efficiency in estimation. A case study employing U.S. Treasury constant maturity rates illustrates the applicability of the mixture vector autoregressive model with parameter constraints and the importance of using a reliable method to compute standard errors.
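The likelihood-ratio tests used for parameter selection follow the standard recipe; a minimal sketch, with placeholder log-likelihoods and degrees of freedom standing in for fitted mixture VAR models:

    from scipy.stats import chi2

    def likelihood_ratio_test(ll_full, ll_constrained, df):
        """LR statistic 2*(ll_full - ll_constrained) against chi-square(df)."""
        lr = 2.0 * (ll_full - ll_constrained)
        return lr, chi2.sf(lr, df)   # (statistic, p-value)

    # Hypothetical fits with and without a block of parameter constraints:
    lr, p = likelihood_ratio_test(ll_full=-1234.5, ll_constrained=-1238.1, df=4)
    print(f"LR = {lr:.2f}, p = {p:.3f}")  # large p => constraints acceptable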

7.
GRAINUM is a program for the genetic interpretation of grain-size distributions of sands and coarse silts. The algorithm is based on transforming weight-percent distributions to a number-frequency basis, followed by determination of the mean, standard deviation, skewness, and excess kurtosis. These statistics are plotted in two diagrams and interpreted in terms of the principal physical environmental factors (currents, waves, aeolian activity, and relevant combinations). The program is menu driven and easy to use.
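As a sketch of the statistics GRAINUM plots, the four moments of a number-frequency distribution can be computed as follows (class midpoints and frequencies are invented example values, not program output):

    import numpy as np

    def distribution_moments(sizes, freqs):
        """Mean, standard deviation, skewness, and excess (kurtosis - 3)
        of a number-frequency grain-size distribution."""
        f = np.asarray(freqs, float) / np.sum(freqs)
        x = np.asarray(sizes, float)
        mean = np.sum(f * x)
        sd = np.sqrt(np.sum(f * (x - mean) ** 2))
        skew = np.sum(f * ((x - mean) / sd) ** 3)
        excess = np.sum(f * ((x - mean) / sd) ** 4) - 3.0
        return mean, sd, skew, excess

    # Grain-size classes (e.g. phi-unit midpoints) with number frequencies:
    print(distribution_moments([1.0, 2.0, 3.0, 4.0], [5, 30, 50, 15]))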

8.
A BASIC program for the Macintosh family of computers has been developed that provides an iterative nonlinear least-squares analysis of biological rhythm data using Marquardt's modification of the Gauss-Newton algorithm. A cosinor model is used, and multiple cosine functions can be fit simultaneously to any equi- or unequispaced time-series data. The program computes the optimal frequency, mesor, amplitude, and phase, with standard errors of measurement and both parametric and nonparametric estimates of goodness of fit. Multiple time series can be analyzed simultaneously, allowing one to test for shared characteristics between series; statistical differences between parameter values of selected curves are ascertained by ANOVA. The program is interactive and designed to be user-friendly for the scientist or clinician.
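A minimal single-component cosinor fit in the same spirit, using scipy's Levenberg-Marquardt in place of the program's Marquardt/Gauss-Newton code; the data are synthetic and the parameterization is an assumption of the sketch:

    import numpy as np
    from scipy.optimize import curve_fit

    def cosinor(t, mesor, amplitude, period, phase):
        """Single-component cosinor model."""
        return mesor + amplitude * np.cos(2 * np.pi * t / period + phase)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 72, 100)        # hours; unequispaced times work too
    y = cosinor(t, 10, 3, 24, 1.0) + rng.normal(0, 0.5, t.size)

    popt, pcov = curve_fit(cosinor, t, y, p0=[y.mean(), y.std(), 24, 0])
    se = np.sqrt(np.diag(pcov))        # standard errors of the estimates
    for name, v, s in zip(["mesor", "amplitude", "period", "phase"], popt, se):
        print(f"{name}: {v:.3f} +/- {s:.3f}")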

9.
An extension of the Shapiro-Wilk test for verifying the hypothesis of normality in the presence of nuisance regression and scale has previously been considered. Such a test is typically based on the pair of the maximum likelihood and BLUE estimators of the standard deviation in the linear regression model. It has been shown that the asymptotic null distribution of the test criterion, extended to the regression model, is equivalent to that of the original Shapiro-Wilk test for the location-scale model. A simulation study shows that the two criteria are close under the normality hypothesis for moderate as well as large data sets. The power of the test against various alternative distributions of the model errors is illustrated. Furthermore, it is shown that the probabilities of errors of both the first and second kinds depend neither on the design matrix nor on the parameters of the linear model.
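A minimal sketch of the setting being tested: fit a linear regression, then apply the (original location-scale) Shapiro-Wilk test to the residuals. This illustrates the idea only; the paper's extended criterion is not reproduced here:

    import numpy as np
    from scipy.stats import shapiro

    rng = np.random.default_rng(1)
    n = 200
    X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # intercept + slope
    beta = np.array([2.0, 0.5])
    y = X @ beta + rng.normal(0, 1.0, n)                      # normal errors

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta_hat

    stat, p = shapiro(residuals)
    print(f"W = {stat:.4f}, p = {p:.3f}")  # large p: no evidence against normality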

10.
A model is introduced for measurements obtained in collaborative interlaboratory studies, comprising measurement errors and random laboratory effects that have Laplace distributions, possibly with heterogeneous, laboratory-specific variances. Estimators are suggested for the common median and for its standard deviation. We provide predictors of the laboratory effects, and of their pairwise differences, along with the standard errors of these predictors. Explicit formulas are given for all estimators, whose sampling performance is assessed in a Monte Carlo simulation study.
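A minimal Monte Carlo check in the same spirit: the sampling performance of the sample median under Laplace errors with heterogeneous, laboratory-specific scales. All numbers are invented, and the plain median stands in for the paper's estimators:

    import numpy as np

    rng = np.random.default_rng(2)
    true_median = 5.0
    lab_scales = np.array([0.2, 0.5, 1.0, 0.3, 0.8])   # heterogeneous labs

    estimates = []
    for _ in range(10_000):
        # One measurement per laboratory, Laplace error with lab-specific scale:
        x = true_median + rng.laplace(0.0, lab_scales)
        estimates.append(np.median(x))

    estimates = np.asarray(estimates)
    print(f"bias = {estimates.mean() - true_median:+.4f}")
    print(f"std  = {estimates.std(ddof=1):.4f}")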

11.
Sequence segmentation is a well-studied problem: given a sequence of elements, an integer K, and some measure of homogeneity, the task is to split the sequence into K contiguous segments that are maximally homogeneous. A classic approach to finding the optimal solution is dynamic programming. Unfortunately, the execution time of this program is quadratic in the length of the input sequence, which makes the algorithm slow for sequences of non-trivial length. In this paper we study segmentations whose measure of goodness is based on log-linear models, a rich family that contains many of the standard distributions. We present a theoretical result that allows us to prune many suboptimal segmentations. Using this result, we modify the standard dynamic program for 1D log-linear models and thereby reduce the computation time. We demonstrate empirically that this approach can significantly reduce the computational burden of finding the optimal segmentation.
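For context, a sketch of the quadratic baseline being accelerated: the classic O(K·n²) dynamic program, here with within-segment sum of squared errors (the Gaussian member of the log-linear family) as the cost:

    import numpy as np

    def optimal_segmentation(x, K):
        """Split x into K contiguous segments minimizing within-segment SSE."""
        n = len(x)
        p1 = np.concatenate([[0.0], np.cumsum(x)])            # prefix sums
        p2 = np.concatenate([[0.0], np.cumsum(np.square(x))])

        def sse(i, j):  # cost of segment x[i:j]
            s, s2, m = p1[j] - p1[i], p2[j] - p2[i], j - i
            return s2 - s * s / m

        INF = float("inf")
        cost = [[INF] * (n + 1) for _ in range(K + 1)]
        back = [[0] * (n + 1) for _ in range(K + 1)]
        cost[0][0] = 0.0
        for k in range(1, K + 1):
            for j in range(k, n + 1):
                for i in range(k - 1, j):                     # last breakpoint
                    c = cost[k - 1][i] + sse(i, j)
                    if c < cost[k][j]:
                        cost[k][j], back[k][j] = c, i
        bounds, j = [], n                                     # recover segments
        for k in range(K, 0, -1):
            i = back[k][j]
            bounds.append((i, j))
            j = i
        return cost[K][n], bounds[::-1]

    x = np.concatenate([np.zeros(20), np.ones(20) * 5, np.ones(20) * 2])
    print(optimal_segmentation(x, 3))   # recovers the three level segments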

12.
An empirical study of a model for program error prediction
A model is presented for estimating the number of errors remaining in a program at the beginning of the testing phase of development. The relationships between the errors occurring in a program and the various factors that affect software development, such as programmer skill, are statistically analyzed, and the model is derived from the factors identified as significant in that analysis. On the basis of data collected during the development of large-scale software systems, it is shown that factors such as the frequency of program specification changes, programmer skill, and the volume of program design documentation are significant, and that a model based on these factors is more reliable than conventional error prediction methods based on program size alone.

13.
A computer-assisted procedure for statistically analyzing red cell survival experiments is presented. The procedure uses a widely available statistical software package (BMDP). The program allows a wide range of parametric curves to be fitted to the survival data; it calculates estimates of the parameters and their standard errors, and produces residual plots that can be used to objectively pick the "best"-fitting curve for the data. A listing of the computer statements required to produce the analysis is given in the Appendix.

14.
The goal of this paper is to design a new control algorithm for open-loop control of complex systems. The approach is based on a genealogical decision tree and addresses both regulation and tracking control problems. The idea behind the control strategy is to associate Gaussian distributions with both the norms of the control actions and the tracking errors. The resulting stochastic search can be interpreted as a simple genetic particle evolution model with a natural birth-and-death interpretation, and it converges in probability. A numerical example dealing with the control of a fluidized-bed combustion power plant illustrates the feasibility and performance of the algorithm. (K. Najim was partially supported by UK EPSRC Research cluster project, grant no. GR/S63779/. E. Ikonen was supported by the Academy of Finland, projects nos. 48545 and 203231.)

15.
We will describe a wirelist compare program that, together with a VLSI node extractor, is used to verify VLSI IC layout connectivity. Engineers at Digital Equipment Corporation have successfully used this tool in a production environment to debug layout errors. The program is based on a graph isomorphism algorithm and provides graphical and textual guides to pinpoint errors. We will examine this algorithm, its error outputs, and provide run-time statistics.
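A toy version of the comparison step, using networkx's generic isomorphism test on hypothetical netlists; the production tool described here uses a far more refined algorithm with error localization:

    import networkx as nx

    def wirelist_graph(instances):
        """Bipartite graph: device nodes connected to the nets on their pins.

        instances: list of (device_name, device_type, [net, net, ...])
        """
        g = nx.Graph()
        for name, dtype, nets in instances:
            g.add_node(("dev", name), kind=dtype)
            for pin, net in enumerate(nets):
                g.add_node(("net", net), kind="net")
                g.add_edge(("dev", name), ("net", net), pin=pin)
        return g

    # Hypothetical layout and schematic wirelists (same circuit, renamed nets):
    layout = [("M1", "nmos", ["out", "in", "gnd"]), ("M2", "pmos", ["out", "in", "vdd"])]
    schematic = [("T1", "nmos", ["y", "a", "gnd"]), ("T2", "pmos", ["y", "a", "vdd"])]

    g1, g2 = wirelist_graph(layout), wirelist_graph(schematic)
    same = nx.is_isomorphic(
        g1, g2,
        node_match=lambda a, b: a["kind"] == b["kind"],
        edge_match=lambda a, b: a["pin"] == b["pin"],
    )
    print("connectivity matches:", same)   # True: nets differ only in naming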

16.
张协力  祝跃飞  顾纯祥  陈熹 《软件学报》2021,32(6):1581-1596
Formal methods provide theoretical tools for security protocol analysis, but a protocol standard that has passed formal verification may no longer satisfy the corresponding security properties once it is turned into a concrete program implementation. To address this, a formal verification method for detecting semantic logic errors in security protocol code is proposed: the protocol's C source code is automatically abstracted into a Pi-calculus model, and the protocol's security properties are formally verified against that model. Finally, a correctness proof of the model transformation is given, and the approach is demonstrated on Kerbero...

17.
In the space radiation environment, cosmic rays frequently cause transient faults in onboard computers, and these transient faults lead to data errors or control-flow errors during program execution. Targeting program errors caused by transient faults, this paper proposes SITFT, a software-implemented fault detection algorithm that combines software re-execution with signature analysis, so that it can detect both data errors during program execution and control-flow errors effectively. Fault injection experiments show that, at the cost of a 58%-111% performance overhead and a 153%-225% storage overhead relative to the original program, SITFT reduces the cases in which execution produces wrong results by 49.0%-73.2%.
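The re-execution half of the idea is easy to sketch in isolation (a conceptual toy only; the actual SITFT algorithm instruments the compiled program and adds signature checks for control flow):

    def duplicated(f):
        """Run f twice and compare: a transient data error that corrupts only
        one execution is detected by the mismatch."""
        def wrapper(*args, **kwargs):
            first = f(*args, **kwargs)
            second = f(*args, **kwargs)
            if first != second:
                raise RuntimeError("soft error detected: results differ")
            return first
        return wrapper

    @duplicated
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    print(dot([1, 2, 3], [4, 5, 6]))  # 32; would raise if one run were corrupted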

18.
邹宇  薛小平  张芳  潘勇  潘腾 《计算机应用》2015,35(12):3450-3455
To ensure correct program execution in safety-critical systems, and targeting the loop-control problems of non-updated memory, premature loop termination, and late loop termination, a loop-control error detection algorithm based on arithmetic codes with signatures and timestamps (ANBD codes) is proposed. The algorithm encodes program variables as signed code words via the ANBD code; checking a code word's signature detects loop-control errors, and the ANBD code's time label detects non-updated memory errors. In addition, on top of the ANBD code, an online statement-block signature assignment algorithm, a statement-block signature function, and a variable signature compensation function are used to detect premature and late loop terminations. The theoretical probability of an undetected error is 1/A, where A is the encoding prime; testing with primes from 97 to 10993 yields a normalized mean square error (NMSE) between the theoretical model and the measurements of about -30 dB. The results show that the algorithm detects all classes of loop-control errors, and when the encoding prime A approaches 2^32 the probability of an undetected error reaches 10^-9, satisfying the requirements of safety-critical applications.
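The arithmetic-code backbone of the scheme, stripped of the signature and timestamp parts, is easy to demonstrate together with the 1/A undetected-error rate: a corrupted code word escapes detection only if it happens to remain divisible by A. Everything below is a sketch under that simplification:

    import random

    A = 97                        # encoding prime (the paper tests primes 97..10993)

    def encode(x):    return A * x
    def is_valid(cw): return cw % A == 0
    def decode(cw):   return cw // A

    # Inject random corruptions of a code word and count undetected errors:
    random.seed(0)
    cw, trials, missed = encode(123456), 200_000, 0
    for _ in range(trials):
        corrupted = cw ^ random.getrandbits(32)    # random bit-pattern error
        if corrupted != cw and is_valid(corrupted):
            missed += 1                            # still divisible by A
    print(f"missed ~ {missed / trials:.5f}  (theory ~ 1/A = {1 / A:.5f})")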

19.
A high-assurance software design model and set of algorithms for space robots are proposed. Software error detection for space robots is divided into two levels, unit level and system level. For unit-level detection, a program basic-block model is designed, along with data-flow and control-flow error detection algorithms based on that model. For system-level error detection in distributed software, a multi-node adaptive redundancy model is designed, and on top of it a micro-checkpoint-based error detection algorithm, forming complete error detection coverage of the space robot software system. The method has been applied successfully.
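A minimal sketch of basic-block control-flow checking of the kind described for the unit level; a table of legal edges stands in for real signature arithmetic, and the block names are invented:

    # Toy basic-block control-flow checker: each transfer of control is
    # validated against the static control-flow graph, so a fault-induced
    # illegal jump is detected at the next block entry.
    ALLOWED = {
        "entry": {"loop"},
        "loop":  {"body", "exit"},
        "body":  {"loop"},
        "exit":  set(),
    }

    class ControlFlowError(RuntimeError):
        pass

    class Checker:
        """Tracks the current basic block and rejects illegal transfers."""
        def __init__(self, start):
            self.current = start

        def enter(self, block):
            if block not in ALLOWED[self.current]:
                raise ControlFlowError(f"illegal transfer {self.current} -> {block}")
            self.current = block

    chk = Checker("entry")
    for _ in range(3):
        chk.enter("loop"); chk.enter("body")
    chk.enter("loop"); chk.enter("exit")
    print("control flow verified")
    # A fault-induced jump, e.g. chk.enter("body") here, would raise.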

20.
The use of satellites to monitor the color of the ocean requires effective removal of the atmospheric signal. This can be performed by extrapolating the aerosol optical properties into the visible from the near-infrared (NIR) spectral region, assuming that seawater is totally absorbing in that part of the spectrum. However, the non-negligible water-leaving radiance in the NIR that is characteristic of turbid waters may lead to an overestimate of the atmospheric radiance across the whole visible spectrum, with increasing severity at shorter wavelengths. This may result in significant errors, if not complete failure, of the various algorithms for retrieving chlorophyll-a concentration, inherent optical properties, and biogeochemical parameters of surface waters.

This paper presents the results of an inter-comparison of three methods that compensate for NIR water-leaving radiances and rest on very different hypotheses: 1) the standard SeaWiFS algorithm (Stumpf et al., 2003; Bailey et al., 2010), based on a bio-optical model and an iterative process; 2) the algorithm of Ruddick et al. (2000), based on the spatial homogeneity of the NIR ratios of the aerosol and water-leaving radiances; and 3) the algorithm of Kuchinke et al. (2009), based on a fully coupled atmosphere-ocean spectral-optimization inversion. They are compared using the normalized water-leaving radiance nLw in the visible, with ground-based measurements from three AERONET-Ocean Color sites (one in the Adriatic Sea and two on the East Coast of the USA) as the reference.

In the matchup exercise, the best overall estimates of nLw are obtained with the latest version of the standard SeaWiFS algorithm, with relative errors varying from 14.97% at λ = 490 nm to 35.27% at λ = 670 nm. The least accurate estimates are given by the algorithm of Ruddick, with relative errors between 16.36% at λ = 490 nm and 42.92% at λ = 412 nm. The algorithm of Kuchinke is the most accurate at 412 nm (30.02%), 510 nm (15.54%), and 670 nm (32.32%) using its default optimization and bio-optical model coefficient settings. Similar conclusions hold for the aerosol optical properties: the aerosol optical thickness τ(865) and the Ångström exponent α(510, 865) are retrieved more accurately with the standard SeaWiFS algorithm (relative errors of 33% and 54.15%, respectively).

A detailed analysis of each method's hypotheses explains the differences between the algorithms. The determination of the aerosol parameters is critical for the algorithm of Ruddick et al. (2000), the bio-optical model is critical for the algorithm of Stumpf et al. (2003) used in the standard SeaWiFS atmospheric correction, and both are critical for the coupled atmosphere-ocean algorithm of Kuchinke, whose model aerosol-size distributions differ from the real aerosol-size distributions pertaining to the measurements. In conclusion, the results show that for the atmospheric and oceanic conditions of this study, the SeaWiFS atmospheric correction algorithm is the most appropriate for estimating the marine and aerosol parameters in the given turbid-water regions.
