Found 20 similar documents; search took 15 ms
1.
Using regression trees to classify fault-prone software modules   Cited by: 3 (self-citations: 0; citations by others: 3)
Software faults are defects in software modules that might cause failures. Software developers tend to focus on faults because they are closely related to the amount of rework necessary to prevent future operational software failures. The goal of this paper is to predict which modules are fault-prone, and to do it early enough in the life cycle to be useful to developers. A regression tree is an algorithm represented by an abstract tree, where the response variable is a real quantity. Software modules are classified as fault-prone or not by comparing the predicted value to a threshold. A classification rule is proposed that allows one to choose a preferred balance between the two types of misclassification rates. A case study of a very large telecommunications system considered software modules to be fault-prone if any faults were discovered by customers. Our research shows that classifying fault-prone modules with regression trees, using the classification rule proposed in this paper, resulted in predictions with satisfactory accuracy and robustness.
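The thresholding step described above can be sketched as follows; this is a minimal illustration using synthetic module metrics and an arbitrary threshold, not the paper's telecommunications dataset or its proposed classification rule:

```python
# A minimal sketch of the approach: fit a regression tree to synthetic
# module metrics (stand-ins for the paper's dataset) and flag a module as
# fault-prone when its predicted fault count exceeds a chosen threshold.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(200, 3))        # e.g. size, complexity, churn
faults = 0.05 * X[:, 0] + 0.1 * X[:, 1] + rng.poisson(1.0, 200)

tree = DecisionTreeRegressor(max_depth=4).fit(X, faults)

# The threshold trades off the two misclassification rates; 5.0 is arbitrary.
threshold = 5.0
pred_fault_prone = tree.predict(X) >= threshold
print(pred_fault_prone[:5])
```

In practice the threshold would be chosen on held-out data to balance the false-positive and false-negative rates, which is the role of the paper's classification rule.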
2.
Confidence intervals for regression (MEM) spectral estimates   Cited by: 2 (self-citations: 0; citations by others: 2)
IEEE Transactions on Information Theory, 1976, 22(5): 534-545
The probability density and confidence intervals for the maximum entropy (or regression) method (MEM) of spectral estimation are derived using a Wishart model for the estimated covariance. It is found that the density for the estimated transfer function of the regression filter may be interpreted as a generalization of Student's t distribution. Asymptotic expressions are derived which are the same as those of Akaike. These expressions allow a direct comparison between the performance of the maximum entropy (regression) and maximum likelihood methods under these asymptotic conditions. Confidence intervals are calculated for an example consisting of several closely spaced tones in a background of white noise. These intervals are compared with those for the maximum likelihood method (MLM). It is demonstrated that, although the MEM has higher peak-to-background ratios than the MLM, the confidence intervals are correspondingly larger. Generalizations are introduced for frequency-wavenumber spectral estimation and for the joint density at different frequencies.
3.
IEEE Transactions on Information Theory, 1986, 32(5): 668-679
Both nonrecursive and recursive nonparametric regression estimates are studied. The rates of weak and strong convergence of kernel estimates, as well as the corresponding multiple classification errors, are derived without assuming the existence of the density of the measurements. An application of the obtained results to nonparametric Bayes prediction is presented.
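A minimal Nadaraya-Watson kernel estimate, one standard form of the kernel regression estimates studied in this line of work (the Gaussian kernel and the bandwidth h are illustrative choices, not the paper's assumptions):

```python
# A minimal Nadaraya-Watson kernel regression estimate (Gaussian kernel;
# bandwidth h is an illustrative choice, not tuned).
import numpy as np

def kernel_regress(x_train, y_train, x_query, h=0.3):
    # kernel weights between each query point and each training sample
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + 0.1 * np.random.default_rng(1).normal(size=100)
xq = np.array([np.pi / 2, np.pi])
print(kernel_regress(x, y, xq))                # close to sin at the queries
```

Note that no density assumption on the inputs is used here; the estimate is a locally weighted average of the observed responses.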
4.
5.
6.
A novel, Lyapunov-based visual servo controller is presented that stabilizes both the entire image and pose error vectors simultaneously, rather than a subset of the errors. Furthermore, the controller uses adaptive depth estimation to eliminate the need to measure depth or obtain knowledge of the scene. A stability proof is presented. Simulation and experimental results compare the performance of the proposed method to PBVS, IBVS and 2.5D VS approaches.
7.
Salimi-Khorshidi G, Nichols TE, Smith SM, Woolrich MW. IEEE Transactions on Medical Imaging, 2011, 30(7): 1401-1416
The purpose of neuroimaging meta-analysis is to localize the brain regions that are activated consistently in response to a certain intervention. As a commonly used technique, current coordinate-based meta-analyses (CBMA) of neuroimaging studies utilize relatively sparse information from published studies, typically only using (x,y,z) coordinates of the activation peaks. Such CBMA methods have several limitations. First, there is no way to jointly incorporate deactivation information when available, which has been shown to result in an inaccurate statistic image when assessing a difference contrast. Second, the scale of a kernel reflecting spatial uncertainty must be set without taking the effect size (e.g., Z-stat) into account. To address these problems, we employ Gaussian-process regression (GPR), explicitly estimating the unobserved statistic image given the sparse peak activation "coordinate" and "standardized effect-size estimate" data. In particular, our model allows estimation of effect size at each voxel, something existing CBMA methods cannot produce. Our results show that GPR outperforms existing CBMA techniques and is capable of more accurately reproducing the (usually unavailable) full-image analysis results.
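The GPR interpolation step can be illustrated in one dimension; the squared-exponential kernel, its length scale, and the toy peak data below are assumptions for the sketch, not the authors' settings:

```python
# A one-dimensional toy of the GPR step: estimate a statistic "image"
# from sparse peak coordinates and standardized effect sizes. The kernel,
# its length scale, and the data are illustrative assumptions.
import numpy as np

def gp_predict(x_obs, y_obs, x_new, length=1.0, noise=0.1):
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_obs, x_obs) + noise ** 2 * np.eye(len(x_obs))
    return k(x_new, x_obs) @ np.linalg.solve(K, y_obs)

peaks_x = np.array([-2.0, 0.0, 3.0])           # sparse peak coordinates
peaks_z = np.array([3.1, 4.5, 2.8])            # standardized effect sizes
grid = np.linspace(-5, 5, 11)                  # "voxel" grid
print(gp_predict(peaks_x, peaks_z, grid))
```

The posterior mean interpolates the sparse peaks into a full field, which is the sense in which the paper recovers an effect-size estimate at every voxel.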
8.
A method is described to represent the human-torso geometry, as obtained from, e.g., MR imaging, in terms of a surface harmonic expansion. Three specific torso geometries, two male and one female, were reconstructed with a root-mean-square (rms) error of <5 mm using 168 and 248 parameters, respectively. The method can be used in radiation therapy and enhances the accuracy of forward and inverse modeling in electrocardiology.
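A sketch of the expansion idea: the surface radius is written as a linear combination of surface harmonics and the coefficients are fit by least squares to sampled surface points. The synthetic axisymmetric surface below stands in for MR-derived torso data, and the basis is written out only up to degree 1 with normalization constants dropped:

```python
# A least-squares fit of a surface harmonic expansion,
# r(theta, phi) = sum_lm c_lm Y_lm(theta, phi), to sampled surface points.
# Synthetic surface; low-degree unnormalized basis for illustration only.
import numpy as np

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 300)        # azimuth of surface samples
phi = rng.uniform(0.1, np.pi - 0.1, 300)      # polar angle
r = 1.0 + 0.2 * np.cos(phi)                   # synthetic surface radius

basis = np.column_stack([
    np.ones_like(phi),                        # degree 0
    np.cos(phi),                              # degree 1, m = 0
    np.sin(phi) * np.cos(theta),              # degree 1, m = 1
    np.sin(phi) * np.sin(theta),              # degree 1, m = -1
])
coef, *_ = np.linalg.lstsq(basis, r, rcond=None)
rms = np.sqrt(np.mean((basis @ coef - r) ** 2))
print(f"rms error: {rms:.2e}")
```

A real torso requires many more harmonics (the paper uses 168 and 248 parameters), but the fitting step is the same least-squares problem.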
9.
Electrically conductive adhesives (ECAs) are an alternative to toxic lead-based solders. However, unstable electrical conductivity has long been a persistent problem. Galvanic corrosion at the ECA/pad interface has recently been identified as the major mechanism behind this degradation. Applying a more active metal or alloy to a dissimilar metal couple in contact can inhibit galvanic corrosion. In this study, powders of aluminum, magnesium, zinc, and two aluminum alloys were added to an ECA, which was applied to five pad surfaces. The aging of the bulk resistivity and the contact resistance of the ECA/metal surface pairs were studied. The two alloys significantly suppressed the increase in contact resistance on all tested metal surfaces.
10.
A method for stabilizing the frequency of a single-J-value CO2 laser to the center of its output power versus frequency curve, based upon the variation of the impedance of the plasma tube with the optical power extracted, is described. Frequency modulation of the laser produces an AC component of the voltage drop across the plasma tube, which is synchronously detected to generate a frequency-error signal.
11.
Use of the finite element method to determine epicardial from body surface potentials under a realistic torso model   Cited by: 1 (self-citations: 0; citations by others: 1)
This paper presents a new method of solution for the inverse problem in electrocardiography using the finite element procedure. It is an application of the authors' earlier work, which derived a solution method by means of an integral equation under a generalized configuration of geometry and conductivity of the torso. Based on prior geometry information, the human torso region is discretized into a series of finite elements and, then, electric fields are computed when a set of linearly independent functions chosen as a basis is imposed on the epicardial surface. The set of these forward solutions defines the forward transfer coefficients which relate epicardial to body surface potentials. By the use of the forward transfer coefficients, a constrained least-squares estimate of the epicardial potential distribution can be obtained from measured body surface potentials. The solution method is examined through numerical experiments carried out for a realistic model of the human torso. It is demonstrated that the rapid decrease in voltage far from the heart generator makes this inverse problem ill conditioned and, as a result, the accuracy of the inverse epicardial potentials calculated depends greatly upon both the signal-to-noise ratio and the number of lead points in measuring the body surface potentials.
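The transfer-coefficient inverse step reduces to a regularized least-squares problem; the sketch below uses a random synthetic transfer matrix and a simple Tikhonov constraint in place of the paper's FEM-derived coefficients and specific constraint:

```python
# A toy version of the transfer-matrix inverse step: body-surface
# potentials b = A e, with A playing the role of the forward transfer
# coefficients; e is recovered by Tikhonov-regularized least squares.
# A, the noise level, and the weight lam are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(64, 16))                  # forward transfer coefficients
e_true = rng.normal(size=16)                   # "epicardial" potentials
b = A @ e_true + 0.01 * rng.normal(size=64)    # noisy body-surface data

lam = 1e-3                                     # regularization weight
e_hat = np.linalg.solve(A.T @ A + lam * np.eye(16), A.T @ b)
print(np.linalg.norm(e_hat - e_true))          # small at this SNR
```

The ill conditioning noted in the abstract shows up here as sensitivity of e_hat to the noise level: a realistic A has rapidly decaying singular values, so the regularization weight matters far more than in this well-conditioned toy.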
12.
Stroke, commonly known as cerebral apoplexy, is a common disease that seriously threatens human health and life, and it has become a major public health problem in China. Understanding the risk factors and pathogenesis of stroke is therefore of great importance for its effective prevention and treatment. First, we processed the data statistically and obtained the number of cases and the incidence rate for each month of each year, as well as incidence data by sex, age group, and occupation. We found that the incidence rate is higher for men than for women; people aged 60-89 are the most susceptible; and farmers have the highest incidence rate. Next, we integrated the results above to obtain four years of monthly data on mean air pressure, pressure range, mean temperature, temperature range, mean humidity, and incidence rate, and built a mathematical model using multivariate stepwise regression analysis. Finally, based on these data, we summarized the causes of stroke and proposed an early-warning scheme for high-risk groups.
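The stepwise regression step can be sketched as a forward selection loop; the synthetic monthly weather matrix and incidence series below are placeholders for the study's four-year records, and the choice of two predictors is arbitrary:

```python
# An illustrative forward stepwise multiple regression: at each step, add
# the predictor that most reduces the residual sum of squares. Synthetic
# placeholder data; column meanings are assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(4)
n = 48                                        # 4 years x 12 months
weather = rng.normal(size=(n, 5))             # pressure, dP, temp, dT, humidity
incidence = 0.8 * weather[:, 2] + 0.5 * weather[:, 3] + 0.1 * rng.normal(size=n)

def rss(cols):
    # residual sum of squares for a regression on the chosen columns
    X = np.column_stack([np.ones(n)] + [weather[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(X, incidence, rcond=None)
    return float(np.sum((incidence - X @ beta) ** 2))

selected, remaining = [], list(range(5))
for _ in range(2):                            # keep the two best predictors
    best = min(remaining, key=lambda c: rss(selected + [c]))
    selected.append(best)
    remaining.remove(best)
print(selected)
```

A full stepwise procedure would also test each candidate with an F-statistic entry/exit criterion rather than adding a fixed number of variables.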
13.
Currently, constraint-free realistic-context training is nonexistent in infant physiotherapy. In order to enhance the vocational learning of novices, we designed, in close collaboration with expert physiotherapists, an innovative simulator dedicated to the training of infant respiratory physiotherapy. This paper describes the simulator's functionalities and the method used to design its physical structure and the learning paradigm. First, using a cognitive approach, relevant vocational and didactic criteria were defined in order to characterize the gesture and determine its limits for nondangerous practice. Subsequently, we chose physical parameters to assess the criteria and define the specifications of the simulator. The mechatronic functions arose from a didactic transposition of the expected simulation-based functionalities. A physical structure representing a 6-month-old infant torso was designed with the use of finite element simulations. Its mechanical behaviour makes it possible to deform the mannequin like a real infant during physiotherapy manoeuvres. A prototype has been realized and validated.
14.
IEEE Transactions on Information Theory, 1971, 17(6): 665-669
This paper contains an analysis of the performance of Bayes conditional-mean parameter estimators. The main result is that on a finite parameter space such estimates exhibit a mean-square error that diminishes exponentially with the number of observations, the observations being assumed to be independent. Two situations are discussed: true parameter included in the parameter space and true parameter not included in the parameter space. In the former instance only very general assumptions are required to demonstrate the exponential convergence rate. In the latter case the existence of an information function must be invoked. Comments on the continuous-parameter-space realization of the estimator and a discussion of the convergence mechanism are also included.
15.
Vaclav Dolezal. Circuits, Systems, and Signal Processing, 1994, 13(5): 545-570
The estimates derived in this paper strengthen the available results on sensitivity and robust stability of input-output systems.
Two types of estimates are discussed: the “sensitivity type”, which establishes a bound for the output change when the system
is perturbed but the input remains the same, and the “robustness type”, which gives a bound for the output change when the
input changes but the perturbation does not. First, estimates for general systems over abstract extended spaces are derived;
these results are then applied to (1) two frequently used control configurations, and (2) systems governed by vector integral
and differential equations on the time domain [0, ∞). The applications of the estimates are illustrated by several examples.
This research was supported by the National Science Foundation under Grant #DMS-9102910.
16.
17.
Vajda I., van der Meulen E.C. IEEE Transactions on Information Theory, 2001, 47(5): 1867-1883
We investigate a nonparametric estimator of the probability density introduced by Barron (1988, 1989). Earlier papers established its consistency in a strong sense, e.g., in the expected information divergence or expected chi-square divergence. This paper pays main attention to the expected chi-square divergence criterion. We give a new motivation for the Barron estimator by showing that a maximum-likelihood estimator (MLE) of a density from a family important in practice is consistent in expected information divergence but not in expected chi-square divergence. We also present new and practically applicable conditions for consistency in the expected chi-square divergence. Main attention is paid to optimization (in the sense of the mentioned criterion) of the two objects specifying the Barron estimator: the dominating probability density and the decomposition of the observation space into finitely many bins. Both problems are explicitly solved under certain regularity assumptions about the estimated density. A simulation study illustrates the results in exponential, Rayleigh, and Weibull families.
18.
Relative bias comparisons between the PSD-MINQMBE (positive semidefinite minimum norm quadratic minimum biased estimates) introduced by Hartung (1981) and the BNNQE (biased nonnegative quadratic estimates) introduced by Chauby (1983) for σ_a^2 in the unbalanced one-way random-effects model with two groups are investigated. The efficiency of the MINQUE (minimum norm quadratic unbiased estimate) of σ_a^2 introduced by Rao (1971), relative to the PSD-MINQMBE and BNNQE, is also studied.
19.
Spectral estimates of heart rate variability (HRV) often involve the use of techniques such as the fast Fourier transform (FFT), which require an evenly sampled time series. HRV is calculated from the variations in the beat-to-beat (RR) interval timing of the cardiac cycle, which are inherently irregularly spaced in time. In order to produce an evenly sampled time series prior to FFT-based spectral estimation, linear or cubic spline resampling is usually employed. In this paper, by using a realistic artificial RR interval generator, interpolation and resampling are shown to result in consistent overestimation of the power spectral density (PSD) compared with the theoretical solution. The Lomb-Scargle (LS) periodogram, a more appropriate spectral estimation technique for unevenly sampled time series that uses only the original data, is shown to provide a superior PSD estimate. Ectopy removal or replacement is shown to be essential regardless of the spectral estimation technique. Resampling and phantom beat replacement are shown to decrease the accuracy of PSD estimation, even at low levels of ectopy or artefact. A linear relationship between the frequency of ectopy/artefact and the error (mean and variance) of the PSD estimate is demonstrated. Comparisons of PSD estimation techniques performed on real RR interval data during minimally active segments (sleep) demonstrate that the LS periodogram provides a less noisy spectral estimate of HRV.
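Applying the LS periodogram directly to unevenly sampled data can be sketched as follows; a jittered 0.1 Hz sine stands in for a real RR-interval tachogram:

```python
# The Lomb-Scargle periodogram evaluated directly on unevenly spaced
# samples, with no interpolation or resampling.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 100, 400))          # irregular sample times (s)
x = np.sin(2 * np.pi * 0.1 * t)                # 0.1 Hz oscillation

freqs = np.linspace(0.01, 0.5, 500)            # Hz
pgram = lombscargle(t, x - x.mean(), 2 * np.pi * freqs)
print(freqs[np.argmax(pgram)])                 # peak near 0.1 Hz
```

Because the periodogram uses only the original samples, it avoids the low-pass and PSD-biasing effects the paper attributes to spline resampling.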
20.
Local learning methods, such as local linear regression and nearest neighbor classifiers, base estimates on nearby training samples, neighbors. Usually, the number of neighbors used in estimation is fixed to be a global "optimal" value, chosen by cross validation. This paper proposes adapting the number of neighbors used for estimation to the local geometry of the data, without need for cross validation. The term enclosing neighborhood is introduced to describe a set of neighbors whose convex hull contains the test point when possible. It is proven that enclosing neighborhoods yield bounded estimation variance under some assumptions. Three such enclosing neighborhood definitions are presented: natural neighbors, natural neighbors inclusive, and enclosing k-NN. The effectiveness of these neighborhood definitions with local linear regression is tested for estimating lookup tables for color management. Significant improvements in error metrics are shown, indicating that enclosing neighborhoods may be a promising adaptive neighborhood definition for other local learning tasks as well, depending on the density of training samples.
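A simplified reading of the enclosing-k-NN idea: grow k until the convex hull of the k nearest neighbors contains the test point, testing hull membership with a small linear program. The specific growth rule below is an assumption for illustration, not the paper's exact definition:

```python
# Simplified enclosing-k-NN sketch: increase k until the test point lies
# in the convex hull of its k nearest neighbors. Hull membership is
# checked by the feasibility of convex combination weights.
import numpy as np
from scipy.optimize import linprog

def in_hull(points, x):
    # x lies in conv(points) iff weights w >= 0 exist with
    # points^T w = x and sum(w) = 1
    n = len(points)
    res = linprog(np.zeros(n),
                  A_eq=np.vstack([points.T, np.ones(n)]),
                  b_eq=np.append(x, 1.0),
                  bounds=[(0, 1)] * n, method="highs")
    return res.success

def enclosing_knn(train, x):
    order = np.argsort(np.linalg.norm(train - x, axis=1))
    for k in range(3, len(train) + 1):
        if in_hull(train[order[:k]], x):
            return k                          # smallest enclosing neighborhood
    return len(train)                         # fall back to all samples

pts = np.array([[0.0, 0], [2, 0], [0, 2], [2, 2], [5, 5]])
print(enclosing_knn(pts, np.array([1.0, 1.0])))
```

Adapting the neighborhood this way makes k larger where the test point is poorly surrounded by training data, which is the geometric intuition behind the bounded-variance result.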