Similar Documents
20 similar documents found (search time: 15 ms)
1.
The authors (Proc. Eighth Int. Conf. Software Eng., London, England, p. 343-4, 1985) previously introduced a nonparametric model for software-reliability growth based on complete monotonicity of the failure rate. Here they extend the completely monotone software-reliability model by developing a method, based on the model, for making long-range predictions of reliability growth. They derive upper and lower bounds on extrapolations of the failure rate and the mean function, which are then used to obtain estimates of the future software failure rate and the mean future number of failures. Preliminary evaluation indicates that the method is competitive with parametric approaches while being more robust.

2.
Bayes inference for a nonhomogeneous Poisson process with an S-shaped mean value function is studied. In particular, the authors consider the model of Ohba et al. (1983) and its generalization to a class of gamma-distribution growth curves. Two Gibbs sampling approaches are proposed to compute the Bayes estimates of the mean number of errors remaining and the current system reliability. One is a Metropolis-within-Gibbs algorithm; the other is a stochastic-substitution algorithm with data augmentation. Model selection based on the posterior Bayes factor is studied. A numerical example with simulated data is given.
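For intuition, a minimal numerical sketch of the kind of S-shaped NHPP discussed above, using Ohba's delayed S-shaped mean value function m(t) = a(1 - (1 + bt)e^(-bt)); the parameter values below are illustrative assumptions, not estimates from the paper, and no Gibbs sampler is implemented here.

```python
import numpy as np

def mean_value(t, a, b):
    """Ohba's delayed S-shaped mean value function m(t) = a*(1 - (1 + b*t)*exp(-b*t))."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

def intensity(t, a, b):
    """Failure intensity lambda(t) = dm/dt = a * b**2 * t * exp(-b*t)."""
    return a * b**2 * t * np.exp(-b * t)

# Illustrative (assumed) parameters: a = expected total number of errors, b = detection rate.
a, b = 100.0, 0.05
t = np.linspace(0.0, 200.0, 5)
print("m(t):     ", mean_value(t, a, b))
print("lambda(t):", intensity(t, a, b))
print("expected errors remaining at t=200:", a - mean_value(200.0, a, b))
```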

3.
This paper presents an NHPP-based SRGM (software reliability growth model) for NVP (N-version programming) systems, called NVP-SRGM, where NHPP denotes the nonhomogeneous Poisson process. Although many papers have been devoted to modeling NVP-system reliability, most consider only stable reliability, i.e., they do not consider the reliability growth in NVP systems due to the continuous removal of faults from the software versions. The model in this paper is the first reliability-growth model for NVP systems that considers both the error-introduction rate and the error-removal efficiency. During testing and debugging, when a software fault is found, a debugging effort is devoted to removing it. Because of the high complexity of the software, the fault might not be removed successfully, and new faults might be introduced into the software. By applying a generalized NHPP model to the NVP system, a new NVP-SRGM is established in which multi-version coincident failures are well modeled. A simplified software control logic for a water-reservoir control system illustrates how to apply the new model. The s-confidence bounds are provided for system-reliability estimation. The model can be used to evaluate the reliability and predict the performance of NVP systems. Further application is needed to fully validate the proposed NVP-SRGM for quantifying the reliability of fault-tolerant software systems in a general industrial setting. As the first model of its kind in NVP reliability-growth modeling, the proposed NVP-SRGM can be used to overcome the shortcomings of the independent reliability model. It predicts the system reliability more accurately than the independent model and can help determine when to stop testing, which is a key question in the testing and debugging phase of the NVP system-development life cycle.
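For orientation only, a sketch of one common imperfect-debugging NHPP formulation in which new faults enter the code in proportion to the faults already removed; this is not the exact NVP-SRGM of the paper, and the parameter names (a, b, alpha) and values are assumptions.

```python
import numpy as np

def mean_value(t, a, b, alpha):
    """Closed-form mean value function of dm/dt = b*(a + alpha*m - m), m(0) = 0, for alpha < 1."""
    return a / (1.0 - alpha) * (1.0 - np.exp(-b * (1.0 - alpha) * t))

def mean_value_euler(t_end, a, b, alpha, steps=20000):
    """Numerical check by forward Euler integration of the same ODE."""
    dt = t_end / steps
    m = 0.0
    for _ in range(steps):
        m += b * (a + alpha * m - m) * dt
    return m

# Illustrative (assumed) parameters: a initial faults, b detection rate, alpha introduction rate.
a, b, alpha = 50.0, 0.02, 0.1
for t in (50.0, 100.0, 200.0):
    print(t, mean_value(t, a, b, alpha), mean_value_euler(t, a, b, alpha))
```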

4.
A general theory of software reliability is developed which proposes that the software failure rate is the product of the software's average error size, apparent error density, and workload. Models of these factors that are consistent with the assumptions of classical software-reliability models are developed. The linear, geometric, and Rayleigh models are special cases of the general theory. Linear reliability models result from the assumptions that the average size of remaining errors and the workload are constant and that the apparent error density equals the real error density. Geometric reliability models differ from linear models in assuming that the average error size decreases geometrically as errors are corrected, whereas the Rayleigh model assumes that the average size of remaining errors increases linearly with time. The theory shows that the abstract proportionality constants of classical models are composed of more fundamental and more intuitively meaningful factors, namely the initial values of the average size of remaining errors, the real error density, the workload, and the error content. It is shown how the assumed behavior of the reliability primitives of software (average error size, error density, and workload) is modeled to accommodate diverse reliability factors.
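A small sketch of the product relationship described above: the failure rate is the product of average error size, apparent error density, and workload. The specific numbers and the simplified per-case behaviours (constant density for the geometric case, for instance) are illustrative assumptions, not the paper's parameterizations.

```python
def failure_rate(avg_error_size, apparent_error_density, workload):
    # General theory: the failure rate is the product of the three reliability primitives.
    return avg_error_size * apparent_error_density * workload

# Assumed illustrative parameters.
size0, density0, workload0 = 0.002, 30.0, 1.0   # initial error size, errors per KLOC, constant workload
k = 0.9                                          # assumed geometric reduction of error size per correction

for n_corrected in range(5):
    # Linear-model flavour: constant error size and workload, density falls as errors are corrected.
    linear = failure_rate(size0, density0 - n_corrected, workload0)
    # Geometric-model flavour: error size shrinks geometrically (density held fixed here for simplicity).
    geometric = failure_rate(size0 * k**n_corrected, density0, workload0)
    print(f"after {n_corrected} corrections: linear-style rate {linear:.5f}, "
          f"geometric-style rate {geometric:.5f}")
```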

5.
This paper presents a novel fuzzy-segmentation method for diffusion tensor (DT) and magnetic resonance (MR) images. Typical fuzzy-segmentation schemes, e.g., those based on fuzzy C-means (FCM), incorporate Gaussian class models that are inherently biased towards ellipsoidal clusters characterized by a mean element and a covariance matrix. Tensors in fiber bundles, however, inherently lie on specific manifolds in Riemannian spaces. Unlike FCM-based schemes, the proposed method represents these manifolds using nonparametric data-driven statistical models. The paper describes a statistically sound (consistent) technique for nonparametric modeling in Riemannian DT spaces. The proposed method produces an optimal fuzzy segmentation by maximizing a novel information-theoretic energy in a Markov-random-field framework. Results on synthetic and real DT and MR images show that the proposed method provides information about the uncertainties in the segmentation decisions, which stem from imaging artifacts including noise, partial voluming, and inhomogeneity. By enhancing the nonparametric model to capture the spatial continuity and structure of the fiber bundle, we exploit the framework to extract the cingulum fiber bundle. Typical tractography methods for tract delineation, which incorporate thresholds on fractional anisotropy and fiber curvature to terminate tracking, can face serious problems arising from partial voluming and noise. For these reasons, tractography often fails to extract thin tracts with sharp changes in orientation, such as the cingulum. The results demonstrate that the proposed method extracts this structure significantly more accurately than tractography.
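For contrast with the nonparametric approach above, a minimal sketch of the kind of FCM baseline the paper argues against (ellipsoidal clusters in a Euclidean feature space). The toy 2-D features stand in for tensor-derived features and are assumptions made for illustration.

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, n_iter=100, rng=None):
    """Standard fuzzy C-means on Euclidean features (the baseline, not the proposed method).
    Returns fuzzy memberships u (n_samples x n_clusters) and cluster centers."""
    rng = rng or np.random.default_rng(0)
    u = rng.dirichlet(np.ones(n_clusters), size=len(X))        # random initial fuzzy memberships
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]           # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)                       # normalize memberships per sample
    return u, centers

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])  # assumed toy features
u, centers = fcm(X, n_clusters=2)
print("centers:\n", centers)
```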

6.
Pattern recognition procedures based on the Cesàro mean of orthogonal series are presented and their Bayes risk consistency is established. No restrictions are placed on the class-conditional densities.
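A toy sketch of a density estimate built from the Cesàro (C,1) mean of an orthogonal cosine series on [0, 1], plugged into a plug-in Bayes rule; the basis, truncation point, and data below are assumptions made for illustration, not the paper's construction.

```python
import numpy as np

def phi(j, x):
    """Orthonormal cosine basis on [0, 1]."""
    return np.ones_like(x) if j == 0 else np.sqrt(2.0) * np.cos(j * np.pi * x)

def cesaro_density(sample, x, N=10):
    """Cesaro (C,1) mean of the orthogonal-series density estimate:
    sum_j (1 - j/(N+1)) * c_j * phi_j(x), with c_j estimated as the sample mean of phi_j."""
    est = np.zeros_like(x)
    for j in range(N + 1):
        c_j = np.mean(phi(j, sample))
        est += (1.0 - j / (N + 1.0)) * c_j * phi(j, x)
    return est

rng = np.random.default_rng(0)
# Two assumed class-conditional samples on [0, 1].
class0 = rng.beta(2, 5, size=500)
class1 = rng.beta(5, 2, size=500)

x = np.array([0.2, 0.5, 0.8])
f0, f1 = cesaro_density(class0, x), cesaro_density(class1, x)
# Plug-in Bayes rule with equal priors: pick the class with the larger estimated density.
print("decisions at x =", x, ":", np.where(f1 > f0, 1, 0))
```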

7.
This paper presents a novel method for Bayesian denoising of magnetic resonance (MR) images that bootstraps itself by inferring the prior, i.e., the uncorrupted-image statistics, from the corrupted input data and the knowledge of the Rician noise model. The proposed method relies on principles from empirical Bayes (EB) estimation. It models the prior in a nonparametric Markov random field (MRF) framework and estimates this prior by optimizing an information-theoretic metric using the expectation-maximization algorithm. The generality and power of nonparametric modeling, coupled with the EB approach for prior estimation, avoid imposing ill-fitting prior models for denoising. The results demonstrate that, unlike typical denoising methods, the proposed method preserves most of the important features in brain MR images. Furthermore, this paper presents a novel Bayesian-inference algorithm on MRFs, namely iterated conditional entropy reduction (ICER), and extends the application of the proposed method to denoising diffusion-weighted MR images. Validation results and quantitative comparisons with the state of the art in MR-image denoising clearly demonstrate the advantages of the proposed method.
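The Rician noise model that the method takes as known can be written down directly: the observed magnitude is the modulus of the true signal corrupted by two independent Gaussian components. A short sketch follows; the image content and noise level are assumed for illustration.

```python
import numpy as np

def add_rician_noise(signal, sigma, rng):
    """Rician noise model for magnitude MR data: the observed magnitude is
    sqrt((s + n1)**2 + n2**2) with n1, n2 i.i.d. zero-mean Gaussians of standard deviation sigma."""
    n1 = rng.normal(0.0, sigma, signal.shape)
    n2 = rng.normal(0.0, sigma, signal.shape)
    return np.sqrt((signal + n1) ** 2 + n2 ** 2)

rng = np.random.default_rng(6)
clean = np.full((64, 64), 100.0)                 # assumed homogeneous patch of true intensity 100
noisy = add_rician_noise(clean, sigma=20.0, rng=rng)
# Rician bias: at moderate SNR the mean of the noisy magnitude exceeds the true intensity.
print("true intensity 100, noisy mean:", noisy.mean())
```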

8.
A nonparametric generalization of the locally optimum Bayes (LOB) parametric theory of signal detection in additive non-Gaussian noise with independent sampling is presented. From a locally asymptotically normal (LAN) expansion of the log-likelihood ratio, the nonparametric detector structure, in both coherent and incoherent modes, is determined, and its statistics under both hypotheses are obtained. The nonparametric LAN log-likelihood ratio is then reduced to a least-informative (i.e., having minimum variance under the hypothesis H0) local parametric submodel, which is referred to as adaptive. In the adaptive submodel, certain nonlinearities are replaced by their efficient estimates. This is accomplished such that no information is lost when the noise first-order density is no longer parametrically defined. Adaptive nonparametric LOB detectors are thus shown to be asymptotically optimum (AO), canonical in signal waveform, distribution-free in noise statistics, and identical in form (in the symmetric cases) to their parametric counterparts. A numerical example is provided for the case where the underlying density is Middleton's Class-A noise (see ibid., vol. 45, p. 1129-49, May 1999), demonstrating that even with a relatively small sample size (on the order of 10^2), adaptive nonparametric LOB detectors perform nearly as well as the classical LOB detectors.
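A brief sketch of the locally optimum nonlinearity g(x) = -f'(x)/f(x) evaluated from a nonparametric (kernel) estimate of the noise density, in the spirit of replacing the parametric nonlinearity with an estimated one. The bandwidth, signal, and noise choices below are assumptions; this is not the paper's LAN construction, only the basic plug-in idea.

```python
import numpy as np

def kernel_density(x, data, h):
    """Gaussian kernel density estimate f_hat(x) from a noise-only training sample."""
    z = (x[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * z**2), axis=1) / (h * np.sqrt(2.0 * np.pi))

def lo_nonlinearity(x, data, h, eps=1e-3):
    """Estimated locally optimum nonlinearity g(x) = -f'(x)/f(x), via a central difference."""
    f_plus = kernel_density(x + eps, data, h)
    f_minus = kernel_density(x - eps, data, h)
    f = kernel_density(x, data, h)
    return -(f_plus - f_minus) / (2.0 * eps * f)

rng = np.random.default_rng(1)
noise_training = rng.laplace(scale=1.0, size=2000)   # assumed heavy-tailed noise-only sample
s = 0.1 * np.ones(100)                                # assumed weak known signal
x = s + rng.laplace(scale=1.0, size=100)              # observations under the signal hypothesis

# Coherent LO-type test statistic: sum_i s_i * g(x_i), compared against a detection threshold.
g = lo_nonlinearity(x, noise_training, h=0.3)
print("test statistic:", np.sum(s * g))
```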

9.
A birth-process approach to Moranda's geometric software-reliability model
To alleviate some of the objections to the basic Jelinski-Moranda (JM) model for software failures, Moranda proposed a geometric de-eutrophication model. This model assumes that the times between failures are statistically independent exponential random variables with given failure rates, and that the failure rates decrease geometrically with the detection of each fault. Using an intuitive approach, Musa, Iannino, and Okumoto (MIO; see also Farr) derived expressions for the mean and intensity functions of the process N(t) that counts the number of faults detected in the time interval [0, t] under Moranda's geometric de-eutrophication model. Here, N(t) is studied as a pure-birth stochastic process; its probability generating function is derived, as well as its mean, intensity, and reliability functions. The expressions for the mean and intensity functions derived by MIO are only approximations and can be quite different from the true functions for certain choices of the failure rates. The exact expressions for the mean and intensity functions of N(t) are used to find the optimum release time of the software based on a cost structure for Moranda's geometric de-eutrophication model.
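A Monte-Carlo sketch of the pure-birth view of Moranda's geometric de-eutrophication model: the time to the i-th failure is exponential with rate D*k^(i-1), and the mean function m(t) = E[N(t)] is estimated by simulation. D, k, and the horizon are assumed values; no claim is made of matching the paper's closed-form expressions.

```python
import numpy as np

def simulate_mean_function(D, k, t_grid, n_runs=5000, rng=None):
    """Estimate m(t) = E[N(t)] for the geometric de-eutrophication model by simulation.
    After i detections the hazard of the next failure is D * k**i."""
    rng = rng or np.random.default_rng(0)
    counts = np.zeros((n_runs, len(t_grid)))
    for r in range(n_runs):
        t, i = 0.0, 0
        while True:
            rate = D * k**i
            t += rng.exponential(1.0 / rate)          # exponential inter-failure time at the current rate
            if t > t_grid[-1]:
                break
            i += 1
            counts[r] += (t_grid >= t)                # this failure is counted at all later grid points
    return counts.mean(axis=0)

D, k = 0.5, 0.8                       # assumed initial failure rate and geometric reduction factor
t_grid = np.array([5.0, 10.0, 20.0, 40.0])
print("estimated mean function m(t):", simulate_mean_function(D, k, t_grid))
```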

10.
This paper presents an automated video analysis framework for the detection of colonic polyps in optical colonoscopy. Our proposed framework departs from previous methods in that we include both spatial frame-based analysis and temporal video analysis using time-course image sequences. We also provide a video quality assessment scheme that includes two measures of frame quality. We extract colon-specific anatomical features from different image regions using a windowing approach for intraframe spatial analysis. Anatomical features are described using an eigentissue model. We apply a conditional random field to model interframe dependences in tissue types and to handle variations in imaging conditions and modalities. We validate our method by comparing our polyp detection results to colonoscopy reports from physicians. Our method displays promising preliminary results and shows strong invariance when applied to both white-light and narrow-band video. Our proposed video analysis system can provide objective diagnostic support to physicians by locating polyps during colon cancer screening exams. Furthermore, our system can be used as a cost-effective video annotation solution for the large backlog of existing colonoscopy videos.

11.
The authors propose a simple and practical probabilistic model, using multiple incomplete test concepts, for fault location in distributed systems via a Bayes analysis procedure. Since it is easier to compare test results among processing units, their model is comparison-based. The approach is realistic and complete in the sense that it does not assume conditions such as permanently faulty units, complete tests, or perfect or nonmalicious environments. It can handle fault-free systems without any overhead, so the test procedure can also be used to monitor a functioning system. Given a system S with a specific test graph, the corresponding conditional distribution between the comparison test results (the syndrome) and the fault patterns of S can be generated. To avoid the complex global Bayes estimation process, the authors develop a simple bitwise Bayes algorithm for fault location in S, which locates system failures with linear complexity, making it suitable for hard real-time systems. Hence, their approach is appealing from both the practical and theoretical points of view.

12.
A new feature extraction method, called nearest-neighbour-line nonparametric discriminant analysis (NNL-NDA), is proposed. Previous nonparametric discriminant analysis methods use only point-to-point distances to measure the class difference. In NNL-NDA, the point-to-line distance of nearest-neighbour-line (NNL) theory is adopted, so that more of the intrinsic structural information of the training samples is preserved in the feature space. NNL-NDA does not assume that the class densities belong to any particular parametric family, nor does it encounter the singularity problem of the within-class scatter matrix. Experimental results on the ORL face database demonstrate the effectiveness of the proposed method.
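A small sketch of the point-to-line distance at the heart of the NNL idea: for a query point, a line is formed through its two nearest neighbours within a class, and the distance from the query to that line replaces the point-to-point distance. This illustrates the geometric primitive only, not the full NNL-NDA feature extraction; the toy data are assumed.

```python
import numpy as np

def point_to_line_distance(q, p1, p2):
    """Distance from query q to the line passing through p1 and p2 in feature space."""
    d = p2 - p1
    t = np.dot(q - p1, d) / np.dot(d, d)   # projection coefficient onto the line
    foot = p1 + t * d                       # foot of the perpendicular (may lie outside the segment)
    return np.linalg.norm(q - foot)

def nnl_distance(q, class_samples):
    """Distance from q to the nearest-neighbour line of one class:
    the line through the two training samples of that class closest to q."""
    dists = np.linalg.norm(class_samples - q, axis=1)
    i1, i2 = np.argsort(dists)[:2]
    return point_to_line_distance(q, class_samples[i1], class_samples[i2])

rng = np.random.default_rng(2)
class_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(20, 2))   # assumed toy training data
class_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(20, 2))
q = np.array([0.5, 0.4])
# Assign q to the class whose nearest-neighbour line is closer.
print("class:", "A" if nnl_distance(q, class_a) < nnl_distance(q, class_b) else "B")
```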

13.
In this work, we propose a method to segment a 1-D histogram without a priori assumptions about the underlying density function. Our approach relies on a rigorous definition of an admissible segmentation, avoiding over- and under-segmentation problems. A fast algorithm leading to such a segmentation is proposed. The approach is tested on both synthetic and real data. An application to the segmentation of written documents is also presented. We shall see that this application requires the detection of very small histogram modes, which the proposed method detects accurately.

14.
A conditional rank test for nonparametric detection
A one-input nonparametric detector employing the Wilcoxon signed-rank test and conditional testing is described. The rank test is applied only to input samples whose absolute value exceeds a positive constant, so that the number of samples ranked is a random variable. The conditional test allows nonparametric operation; the detector is shown to maintain efficient performance with reduced ranking requirements, especially when used in conjunction with the technique of "mixed" statistical tests.
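A minimal sketch of the conditional signed-rank idea described above: only samples whose absolute value exceeds a constant c are ranked, so the number ranked is random. The threshold, sample distribution, and the normal approximation used for the decision are assumptions made for illustration.

```python
import numpy as np

def conditional_signed_rank(x, c):
    """Wilcoxon signed-rank statistic applied only to samples with |x| > c.
    Returns the standardized statistic under H0 (symmetry about zero), conditional on
    the random number n of samples actually ranked."""
    kept = x[np.abs(x) > c]
    n = len(kept)
    if n == 0:
        return 0.0, 0
    ranks = np.argsort(np.argsort(np.abs(kept))) + 1   # ranks of |x_i| among the kept samples
    w_plus = np.sum(ranks[kept > 0])                    # sum of ranks of the positive samples
    mean = n * (n + 1) / 4.0
    var = n * (n + 1) * (2 * n + 1) / 24.0
    return (w_plus - mean) / np.sqrt(var), n

rng = np.random.default_rng(3)
x = 0.3 + rng.standard_cauchy(200)          # assumed data: weak positive shift in heavy-tailed noise
z, n_ranked = conditional_signed_rank(x, c=0.5)
print(f"ranked {n_ranked} of {len(x)} samples, standardized statistic z = {z:.2f}")
# Declare 'signal present' if z exceeds a threshold set by the desired false-alarm rate, e.g. 1.645.
```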

15.
Skilled cardiologists perform cardiac auscultation, acquiring and interpreting heart sounds, by implicitly carrying out a sequence of steps. These include discarding clinically irrelevant beats, selectively tuning in to particular frequencies, and aggregating information across time to make a diagnosis. In this paper, we formalize a series of analytical stages for processing heart sounds, propose algorithms that enable computers to approximate these steps, and investigate the effectiveness of each step in extracting relevant information from actual patient data. Through such reasoning, we provide insight into the relative difficulty of the various tasks involved in the accurate interpretation of heart sounds. We also evaluate the contribution of each analytical stage to the overall assessment of patients. We expect our framework and associated software to be useful to educators wanting to teach cardiac auscultation, and to primary care physicians, who can benefit from presentation tools for computer-assisted diagnosis of cardiac disorders. Researchers may also employ the comprehensive processing provided by our framework to develop more powerful, fully automated auscultation applications.

16.
Computer-aided design of horn arrays, even when using state-of-the-art electromagnetic (EM) formulations, requires considerable numerical effort and is therefore an ideal candidate for parallelization. We introduce a rigorous methodology, based on Petri nets and recursive bisection, for migrating a standard integral-equation code to parallel multiprocessor machines, following a data-flow design and multiple levels of parallelism. Significant (quasi-linear) speed-ups are demonstrated.

17.
The time between failures is a very useful measurement for analyzing the reliability of time-dependent systems. In many cases, the failure-generation process is assumed to be stationary, even though the process changes its statistics as time elapses. This paper presents a new procedure for estimating the probabilities of failures, based on estimates of the times between failures. The main characteristics of this procedure are that no probability distribution function is assumed for the failure process, and that the failure process is not assumed to be stationary. The model classifies the failures into Q different types and estimates the probability of each type of failure s-independently of the others. The method does not use histogram techniques to estimate the probabilities of occurrence of each failure type; rather, it estimates the probabilities directly from the time instants at which the failures occur. The method assumes quasistationarity only in the interval of time between the last two occurrences of the same failure type. An inherent characteristic of this method is that it assigns different sizes to the time windows used to estimate the probabilities of each failure type: for failure types with low probability the estimator uses wide windows, while for those with high probability it uses narrow windows. As an example, the model is applied to software-reliability data.
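An illustrative reading of the adaptive-window idea, as one plausible simple estimator rather than the paper's exact procedure: the rate of each failure type is estimated from the gap between its last two occurrences, so rare types are effectively estimated over wide windows and frequent types over narrow ones. The failure log and the normalization step below are assumptions.

```python
def estimate_type_probabilities(events):
    """events: list of (time, failure_type). For each type, estimate an occurrence rate from the
    gap between its last two occurrences (a wide implicit window for rare types, a narrow one
    for frequent types), then normalize the rates into probabilities. Illustrative only."""
    last_two = {}
    for t, q in events:
        last_two.setdefault(q, []).append(t)
        last_two[q] = last_two[q][-2:]                 # keep only the last two occurrence times
    rates = {}
    for q, times in last_two.items():
        if len(times) == 2 and times[1] > times[0]:
            rates[q] = 1.0 / (times[1] - times[0])     # quasistationary rate over the last gap
    total = sum(rates.values())
    return {q: r / total for q, r in rates.items()} if total > 0 else {}

# Assumed failure log of (time, type): type "A" occurs often, type "B" rarely.
log = [(1.0, "A"), (2.5, "A"), (3.0, "B"), (4.0, "A"), (5.5, "A"), (9.0, "B"), (10.0, "A")]
print(estimate_type_probabilities(log))
```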

18.
Zhao Dong, Ma Huadong, Li Qi, Tang Shaojie. Wireless Networks, 2018, 24(4): 1313-1325
Wireless Networks - The opportunistic data collection paradigm leverages human mobility to improve sensing coverage and data transmission for collecting data from a number of Points of Interest...

19.
The authors present a model for the behavior of software failures. Their model fits into the general framework of empirical-Bayes problems; however, they take a proper Bayes approach to inference by viewing the situation as a Bayes empirical-Bayes problem. An approximation due to D. V. Lindley (1980) plays a central role in the analysis. They show that the Littlewood-Verrall model (1973) is an empirical-Bayes model and discuss a fully Bayes analysis of it using the Bayes empirical-Bayes setup. Finally, they apply both models to actual software-failure data and compare their predictive performance.
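For orientation, a simulation sketch of the Littlewood-Verrall structure referred to above: the i-th inter-failure time is exponential with a rate lambda_i that is itself Gamma-distributed, with a scale function psi(i) that grows with i so the expected rate falls (reliability growth). The linear psi(i) and the numeric values are illustrative assumptions.

```python
import numpy as np

def simulate_lv(n_failures, alpha, beta0, beta1, rng):
    """Littlewood-Verrall-style simulation: lambda_i ~ Gamma(shape=alpha, rate=psi(i)),
    T_i | lambda_i ~ Exponential(lambda_i), with psi(i) = beta0 + beta1 * i increasing in i
    so that the expected failure rate alpha/psi(i) decreases over successive failures."""
    times = []
    for i in range(1, n_failures + 1):
        psi = beta0 + beta1 * i
        lam = rng.gamma(shape=alpha, scale=1.0 / psi)   # numpy's gamma takes a scale (= 1/rate)
        times.append(rng.exponential(1.0 / lam))
    return np.array(times)

rng = np.random.default_rng(4)
t = simulate_lv(n_failures=100, alpha=2.0, beta0=5.0, beta1=1.0, rng=rng)
# Later inter-failure times should be longer on average as the software improves.
print("mean of first 20 inter-failure times:", t[:20].mean())
print("mean of last 20 inter-failure times: ", t[-20:].mean())
```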

20.
Semantic video analysis is a key issue in digital video applications, including video retrieval, annotation, and management. Most existing work on semantic video analysis focuses on event detection for specific video genres, while genre classification is treated as a separate, independent issue. In this paper, we present a semantic framework that jointly performs weakly supervised video genre classification and event analysis using probabilistic models for MPEG video streams. Several computable semantic features that accurately reflect the event attributes are derived. Based on an intensive analysis of the connection between video genres and the contextual relationships among events, as well as the statistical characteristics of the dominant event, an analysis algorithm based on a hidden Markov model (HMM) and a naive Bayesian classifier (NBC) is proposed for video genre classification. A Gaussian mixture model (GMM) is built to detect the contained events using the same semantic features, and an event adjustment strategy is proposed based on an analysis of the GMM structure and the pre-definition of video events. Subsequently, a special event is recognized from the detected events by another HMM. Experiments on video genre classification and event analysis using a large number of video data sets demonstrate the promising performance of the proposed framework for semantic video analysis.
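A toy sketch of the NBC component of the genre-classification stage, using a Gaussian naive Bayes classifier over assumed "semantic" shot features. The feature names, values, and two-genre setup are illustrative and do not reproduce the paper's HMM/GMM pipeline.

```python
import numpy as np

def fit_gaussian_nbc(X, y):
    """Per-class feature means/variances and priors for a Gaussian naive Bayes classifier."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, len(Xc) / len(X))
    return params

def predict_nbc(params, x):
    """Pick the class maximizing log prior + sum of per-feature Gaussian log-likelihoods."""
    best, best_score = None, -np.inf
    for c, (mu, var, prior) in params.items():
        score = np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        if score > best_score:
            best, best_score = c, score
    return best

rng = np.random.default_rng(7)
# Assumed semantic features per shot (e.g., motion intensity, shot length, audio energy).
news = rng.normal([0.2, 5.0, 0.3], 0.1, (200, 3))
sports = rng.normal([0.8, 2.0, 0.7], 0.1, (200, 3))
X = np.vstack([news, sports])
y = np.array([0] * 200 + [1] * 200)      # 0 = news, 1 = sports (assumed labels)
params = fit_gaussian_nbc(X, y)
print("predicted genre:", predict_nbc(params, np.array([0.75, 2.2, 0.65])))
```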
