Similar Articles
20 similar articles found.
1.
2.
The effective design and engineering of communication networks require accurate mathematical models that capture the important statistical properties of traffic sources. The ATM Forum has specified QoS requirements for video traffic that are essential for the effective management and operation of ATM networks. In this paper, a model for VBR video sources based on a modified first-order autoregressive process is presented. Two levels of states with arbitrary average durations are used to represent the source bit-rate, and the ratio of the average durations in the two levels serves as a new parameter quantifying the activity behaviour of the sources. The model's statistical equations are derived and matched to experimental data, and the matched parameters are then plugged into the simulation model. The major results are presented for various parameter values, along with an assessment of their effects on the performance of an ATM multiplexer.
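A minimal sketch of the two-level AR(1) source idea described above. All numeric values (`a`, the level means, the stay probabilities, `sigma`) are illustrative assumptions, not the parameters matched in the paper:

```python
import random

def simulate_vbr(n_frames, a=0.88, means=(2.0, 6.0), stay=(0.95, 0.85),
                 sigma=0.3, seed=1):
    """Two-level first-order AR bit-rate source (sketch).

    The source switches between a low- and a high-activity level; within a
    level, the rate follows an AR(1) pull towards that level's mean."""
    rng = random.Random(seed)
    state = 0                    # 0 = low-activity level, 1 = high-activity
    rate = means[state]
    trace = []
    for _ in range(n_frames):
        if rng.random() > stay[state]:      # leave the current level
            state = 1 - state
        rate = a * rate + (1 - a) * means[state] + rng.gauss(0.0, sigma)
        trace.append(max(rate, 0.0))        # bit-rates cannot be negative
    return trace
```

The ratio of the two average durations, the paper's activity parameter, corresponds here to `(1 - stay[1]) / (1 - stay[0])` scaled appropriately.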

3.
A general approach to analyzing multivariate time-dependent system processes with discrete-valued (nominal and ordinal) and/or continuous-valued outcomes is presented. The approach is based on an event-covering method which selects (or covers) a subspace from the outcome space of an n-tuple of variables for estimation purposes. From the covered subspace, statistically interdependent events are selected as statistical knowledge for forecasting unknown events. The event-covering method is based on restricted variables, with only a subset of the outcomes considered; an extension based on the selection of joint outcomes is also discussed. The method is tested on climatic data and on simulated data modelling real-life situations. The experiments show that the method can detect statistically relevant information, describe it in a meaningful and comprehensible way, and use it for reliable estimation (or forecasting) of missing values that will occur at some future time.

4.
The LISp-Miner system for data mining and knowledge discovery uses the GUHA method to comb through a large database, finding 2 × 2 contingency tables that satisfy a condition given by generalised quantifiers and thereby suggest possible relations between attributes. In this paper, we show how a more detailed interpretation of the tables found by GUHA can be obtained using Bayesian statistical methods. Using a multinomial sampling model and a Dirichlet prior, we derive posterior distributions for the parameters that correspond to GUHA generalised quantifiers. Examples illustrate the new Bayesian post-processing tools implemented in LISp-Miner. A statistical model for the analysis of contingency tables for data from two subpopulations is also presented.
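The multinomial/Dirichlet setup can be sketched in a few lines. With a 2 × 2 table of counts and a Dirichlet prior, posterior means of the cell probabilities are just smoothed relative frequencies; the uniform prior and the `(a, b, c, d)` cell layout below are conventional assumptions, not taken from the paper:

```python
def posterior_cell_means(table, prior=(1.0, 1.0, 1.0, 1.0)):
    """Posterior means of the four cell probabilities of a 2x2 table
    under multinomial sampling with a Dirichlet prior (here uniform)."""
    counts = [n + g for n, g in zip(table, prior)]
    total = sum(counts)
    return [c / total for c in counts]

def posterior_confidence(table, prior=(1.0, 1.0, 1.0, 1.0)):
    """Posterior mean of P(succedent | antecedent) for table = (a, b, c, d),
    the quantity behind a confidence-style generalised quantifier."""
    a, b, c, d = (n + g for n, g in zip(table, prior))
    return a / (a + b)
```

Posterior draws from the full Dirichlet (rather than just the mean) would give credible intervals for the quantifier as well.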

5.
The use of a tree growth model to provide statistical information about the microwave scattering components of boreal-type forests (here Scots pine and Norway spruce), as an alternative to data obtained through intensive fieldwork, is described. The total backscatter from six test stands at C- and L-band frequencies was predicted for three polarization combinations (HH, VV and HV). Differences between measured C- and L-band data from a polarimetric airborne Synthetic Aperture Radar (EMISAR) and simulated backscatter values compare favourably with previous studies, with like- and cross-polarization differences generally less than 2.5 dB. Modelled backscatter values were consistently lower than those observed; a likely explanation is the unrealistic manner in which the model incorporates the spatial distribution of tree needles.

6.
7.
A simple model to forecast aerosol optical depth (AOD) over north India is presented. Forecasts are generated by integrating available high-resolution AOD data using time-series modelling, specifically the autoregressive integrated moving average (ARIMA) method. The modelled values fit the Multiangle Imaging SpectroRadiometer (MISR) data well over the years 2000-2010; this long-term statistical dependence shows that AOD over the north Indian region exhibits long memory. Forecasts for the next 12 months were produced at a 95% confidence level. The analysis confirms that prediction of AOD with time-series models is possible, particularly during the summer months when the region is dominated by dust aerosols. The chosen ARIMA model offers a simple and efficient way of determining future AOD values compared to more complex deterministic models.
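As a sketch of the difference-then-autoregress idea behind ARIMA, the following fits only the AR(1) coefficient of the first-differenced series by least squares, a stand-in for a full ARIMA(1,1,0) fit rather than the model actually chosen in the paper:

```python
def fit_ar1_diff(series):
    """Least-squares AR(1) coefficient of the first-differenced series,
    a minimal stand-in for a full ARIMA(1,1,0) fit."""
    d = [b - a for a, b in zip(series, series[1:])]     # first differences
    num = sum(x * y for x, y in zip(d, d[1:]))
    den = sum(x * x for x in d[:-1])
    return num / den if den else 0.0

def forecast(series, phi, steps):
    """Iterate the differenced AR(1) forward and re-integrate."""
    out = list(series)
    last_diff = out[-1] - out[-2]
    for _ in range(steps):
        last_diff = phi * last_diff
        out.append(out[-1] + last_diff)
    return out[len(series):]
```

In practice a library estimator (e.g. statsmodels' ARIMA) would also handle the MA terms, seasonality and confidence bands mentioned in the abstract.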

8.
Model averaging, or combining, is often considered an alternative to model selection. Frequentist Model Averaging (FMA) is considered extensively, and strategies for applying FMA methods in the presence of missing data, based on two distinct approaches, are presented. The first approach combines estimates from a set of appropriate models weighted by scores of a missing-data-adjusted criterion developed in the recent model-selection literature. The second approach averages the estimates of a set of models with weights based on conventional model selection criteria, but with the missing data replaced by imputed values before the models are estimated. For this purpose three easy-to-use imputation methods available in current statistical software are considered, and a simple recursive algorithm is adapted to implement a generalized regression imputation in which the missing values are predicted successively; the latter is particularly useful when two or more values are missing simultaneously in a given row of observations. Focusing on a binary logistic regression model, the properties of the resulting FMA estimators are explored by means of a Monte Carlo study. The results show that in many situations averaging after imputation is preferred to averaging with weights that adjust for the missing data, and model-average estimators often provide better estimates than any single model. As an illustration, the proposed methods are applied to a dataset from a study of Duchenne muscular dystrophy detection.
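The criterion-weighted averaging step can be sketched with standard smoothed-information-criterion weights (w_i proportional to exp(-(IC_i - IC_min)/2)); this is the generic FMA recipe, with the specific missing-data-adjusted criterion of the paper abstracted away:

```python
import math

def fma_weights(criteria):
    """Smoothed-IC weights: w_i proportional to exp(-(IC_i - IC_min)/2).
    Subtracting the minimum first avoids overflow."""
    m = min(criteria)
    raw = [math.exp(-0.5 * (c - m)) for c in criteria]
    s = sum(raw)
    return [r / s for r in raw]

def averaged_estimate(estimates, criteria):
    """Weighted average of per-model estimates of the same parameter."""
    w = fma_weights(criteria)
    return sum(wi * ei for wi, ei in zip(w, estimates))
```

With the paper's second strategy, `criteria` would be conventional AIC/BIC values computed after imputation; with the first, the missing-data-adjusted scores.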

9.
Most physical models are approximate, so it is important to find out how accurate a given model's predictions are. This can be done by validating the model, i.e., by comparing its predictions with experimental data. In some practical situations a direct comparison is difficult, since models usually contain (physically meaningful) parameters whose exact values are often not known. One way to overcome this difficulty is to obtain a statistical distribution of the corresponding parameters; substituting these distributions into the model yields statistical predictions, whose probability distribution can be compared with the actual distribution of measurement results. In this approach, however, all measurement results are pooled, ignoring the information that some of them correspond to the same parameter values, for example because they come from measuring the same specimen under different conditions. In this paper, we propose an interval approach that takes this important information into account. The approach is illustrated on a benchmark thermal problem presented at the Sandia Validation Challenge Workshop (Albuquerque, New Mexico, May 2006).
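One simple way to propagate interval-valued parameters through a model is to evaluate it at the corners of the parameter box; this is a generic sketch of interval prediction (exact only for models monotone in each parameter, an assumption stated in the code), not the paper's specific algorithm:

```python
from itertools import product

def interval_predict(model, param_boxes, x):
    """Range of model outputs over a box of parameter intervals, evaluated
    at the corners only -- exact when the model is monotone in each
    parameter, which this sketch assumes."""
    corners = product(*param_boxes)
    values = [model(params, x) for params in corners]
    return min(values), max(values)
```

Keeping one interval per specimen, instead of pooling all measurements, is what preserves the "same specimen, different conditions" information the abstract highlights.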

10.
Lin Huaiqing. 《微计算机信息》 (Microcomputer Information), 2007, 23(1Z): 75-76, 277
In designing an embedded antenna switching cabinet system, resources are limited: the system's processing capacity cannot be increased without bound, yet throughput must not fall so low that applications cannot run properly, so the optimal trade-off between performance and resource usage is a real problem. Based on the open queueing network model of queueing theory, this paper abstracts the embedded antenna switching cabinet system as a queueing service system in which data arriving over the network are treated as customers whose arrivals follow a given probability distribution. From the required system performance one can then derive a resource configuration scheme and determine the corresponding target parameters; allocating system resources according to this scheme achieves an optimal combination of performance and resource usage.
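The sizing idea can be sketched with the simplest building block of an open queueing network, an M/M/1 node; the formulas below are standard queueing theory, not the paper's specific network model:

```python
def mm1_metrics(lam, mu):
    """Utilisation, mean number in system and mean sojourn time of an
    M/M/1 queue with arrival rate lam and service rate mu."""
    rho = lam / mu
    if rho >= 1.0:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return rho, rho / (1.0 - rho), 1.0 / (mu - lam)

def required_service_rate(lam, target_delay):
    """Smallest service rate meeting a mean-delay target.
    From W = 1/(mu - lam):  mu = lam + 1/W_target."""
    return lam + 1.0 / target_delay
```

This is exactly the direction of reasoning in the abstract: from a performance target (here the mean delay `W`) back to the resource requirement (the service rate, i.e. processing capacity).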

11.
A novel model for Fisher discriminant analysis is developed. The new model simultaneously requires maximal Fisher-criterion values for the discriminant vectors and minimal statistical correlation between the feature components they extract. The model is transformed into an extreme-value problem in the form of an evaluation function, from which the optimal discriminant vectors are worked out. Experiments show that the presented method is competitive with the better of FSLDA and ULDA.

12.
A hierarchical view of fault-tolerant distributed computers is presented, viewing a distributed computing system as composed of interconnected, interacting functional modules. Each module, modeled by a directed state graph, is governed by internal random failure events and counteracting recovery processes, as well as by coupling of external random events from other modules. It is shown that, under certain assumptions, the system is governed by a multidimensional Markov process whose components are non-Markov module processes. Mathematical properties of this model are formally analyzed, and performance measures are obtained from the steady-state distribution and visitation rate of each system and module state. A numerical example illustrates its practical application; the results fit very well the actual statistical data collected on an AT&T Bell Laboratories Electronic Switching System.
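The steady-state distribution mentioned above can be computed, for a finite discrete-time chain, by power iteration on the transition matrix; this is the generic computation, not the paper's multidimensional construction:

```python
def steady_state(P, iters=200):
    """Steady-state distribution of a discrete-time Markov chain by power
    iteration on the row-stochastic transition matrix P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        # one step of pi <- pi P
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

For a two-state failure/recovery module with failure probability 0.1 and recovery probability 0.5 per step, the chain spends 5/6 of the time in the healthy state.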

13.
A model-based engineering diagnostic method is typically based on evaluating the residuals generated by comparing important variable values from a simulated system with the corresponding measured values from the actual system. Consequently, the model should describe the dynamic behaviour of the system as accurately as possible, using suitably selected parameter values, which implies the need to validate the model's performance against measurements of the actual system. This process is especially important when fault detection is performed in real time. In this paper, the modelling process for hydraulic systems is presented, together with a new parameter validation method, developed using the DASYLab data acquisition and control software, for estimating the uncertain parameter values of the model. This validation process led to a model-based expert system able to diagnose faults in real time, working in parallel with actual dynamic industrial automated processes.
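The residual-evaluation step at the core of such methods can be sketched as a threshold test; real systems, as the abstract notes, depend on a validated model and carefully chosen thresholds, which this sketch does not address:

```python
def residual_alarm(measured, simulated, threshold):
    """Flag each sample whose residual |measurement - model output|
    exceeds a fixed threshold -- the basic test behind model-based
    fault detection."""
    return [abs(m - s) > threshold for m, s in zip(measured, simulated)]
```

A poorly validated model inflates the residuals even without a fault, which is exactly why the parameter validation method is the paper's focus.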

14.
In basketball, statistical data reflect a player's ability, but not necessarily fully or objectively; processing and correcting the statistics allows a more objective evaluation of a player's individual ability. The PER algorithm was implemented using a MySQL database; the PER values of several representative players were compared with those published on hoopdata.com, and the algorithm was then applied to the CBA league's 2012-2013 season.
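As a sketch of the kind of statistic involved, the following computes a simplified linear-weights efficiency rating per minute. It is an illustrative stand-in, not Hollinger's full PER formula and not the paper's implementation (PER additionally applies pace and league adjustments):

```python
def simple_rating(stats, minutes):
    """Simplified linear-weights efficiency per minute: positive box-score
    contributions minus turnovers and missed shots."""
    value = (stats['pts'] + stats['reb'] + stats['ast']
             + stats['stl'] + stats['blk']
             - stats['to']
             - (stats['fga'] - stats['fgm'])      # missed field goals
             - (stats['fta'] - stats['ftm']))     # missed free throws
    return value / minutes
```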

15.
A Rule-based Approach for the Conflation of Attributed Vector Data
In this paper we present a complete approach for the conflation of attributed vector digital mapping data, such as the Vector Product Format (VPF) datasets produced and disseminated by the National Imagery and Mapping Agency (NIMA). While other work in the field of conflation has traditionally used statistical techniques based on the proximity of features, the approach presented here uses all the information associated with the data: attribute information such as feature codes from a standardized set, associated data-quality information of varying levels, and topology, as well as the more traditional measures of geometry and proximity. In particular, we address the issues associated with matching features and maintaining accuracy requirements. A hierarchical rule-based approach, augmented with capabilities for reasoning under uncertainty, is presented for feature matching as well as for determining the attribute sets and values of the resulting merged features. Additionally, an in-depth analysis of horizontal accuracy considerations with respect to point features is given. An implementation of the attribute- and geometrical-matching phases within the scope of an expert system has proven the efficacy of the approach and is discussed in the context of the VPF data.

16.
Scientists conducting microarray and other experiments use circular Venn and Euler diagrams to analyze and illustrate their results, but fitting circles whose areas and overlaps are proportional to observed set sizes is difficult. This paper introduces a statistical model for fitting area-proportional Venn and Euler diagrams to observed data. The model includes a statistical loss function and a minimization procedure that enable formal estimation of the area-proportional Venn/Euler model for the first time. A significance test of the null hypothesis is computed for the solution, and residuals from the model are available for inspection, so the algorithm can be used for both exploration and inference on real data sets. A Java program implementing the algorithm is available under the Mozilla Public License; an R function venneuler() is available as a CRAN package, and a plugin is available in Cytoscape.
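Two ingredients of such a fit can be sketched directly: the area of a two-circle overlap (standard geometry) and a stress-style loss comparing observed to fitted areas. The loss shown is a generic normalized residual sum of squares, offered as a plausible form of the paper's loss function rather than its exact definition:

```python
import math

def lens_area(r1, r2, d):
    """Area of the intersection of two circles of radii r1, r2 whose
    centres are distance d apart (standard lens formula)."""
    if d >= r1 + r2:                      # disjoint circles
        return 0.0
    if d <= abs(r1 - r2):                 # one circle inside the other
        return math.pi * min(r1, r2) ** 2
    a1 = math.acos((d * d + r1 * r1 - r2 * r2) / (2.0 * d * r1))
    a2 = math.acos((d * d + r2 * r2 - r1 * r1) / (2.0 * d * r2))
    return (r1 * r1 * (a1 - math.sin(2 * a1) / 2.0)
            + r2 * r2 * (a2 - math.sin(2 * a2) / 2.0))

def stress(observed, fitted):
    """Normalized residual sum of squares between observed region sizes
    and the areas realised by the fitted diagram."""
    num = sum((o - f) ** 2 for o, f in zip(observed, fitted))
    den = sum(o ** 2 for o in observed)
    return num / den
```

Minimizing `stress` over circle centres and radii (e.g. by gradient descent) is the shape of the estimation problem the abstract describes.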

17.
18.
User modelling within tutoring systems often concentrates on representing the learner's status with respect to the domain, paying little attention to the user's individual characteristics in terms of capabilities and preferences. A composite learner model, incorporating both domain-related data and information about personal attributes, is useful in determining not only which items should be presented, but also how the student may best be able to learn them. A model of users' individual characteristics has been developed using multivariate statistical techniques as a means of generating user stereotypes from empirical data; each stereotype has an associated profile of attributes useful for the application in which the model is used. This paper describes the development of the model of learner attributes and its use within an adaptive tutoring system, where the domain-related information was represented by a basic overlay model. The results of experiments using the system with two classes of students in two successive academic years are discussed, as are the possibilities for applying the user model in other areas and the potential effects of combining an attribute learner model of this type with more sophisticated domain models.

19.
In medical information systems, the data describing patient health records are often time-stamped. These data are subject to complexities such as missing values, observations at irregular time intervals and large attribute sets, which make mining clinical time-series data a challenging area of research. This paper proposes a bio-statistical mining framework, the statistical tolerance rough set induced decision tree (STRiD), which handles these complexities and builds an effective classification model; the model is used in a clinical decision support system (CDSS) to assist the physician in clinical diagnosis. STRiD provides temporal pre-processing, attribute selection and classification. In temporal pre-processing, an enhanced fuzzy-inference-based double exponential smoothing method imputes missing values and derives the temporal patterns for each attribute. In attribute selection, relevant attributes are selected using the tolerance rough set. A classification model is then constructed with the selected attributes using a temporal-pattern-induced decision tree classifier. Experiments on clinical time-series datasets of hepatitis and thrombosis patients demonstrate the effectiveness of the framework, with classification accuracies of 91.5% for hepatitis and 90.65% for thrombosis.
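The double exponential smoothing step can be sketched with plain Holt's method, whose one-step-ahead values can stand in for missing observations; the `alpha` and `beta` values are illustrative, and the paper's fuzzy-inference enhancement is not reproduced here:

```python
def double_exponential_smooth(series, alpha=0.5, beta=0.3):
    """Holt's double exponential smoothing.

    Returns the one-step-ahead fitted values; where an observation is
    missing, the fitted value can be used as its imputation."""
    level = series[0]
    trend = series[1] - series[0]
    fitted = [series[0]]
    for x in series[1:]:
        forecast = level + trend           # one-step-ahead prediction
        fitted.append(forecast)
        new_level = alpha * x + (1 - alpha) * forecast
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return fitted
```

On a perfectly linear series the level/trend pair locks onto the line, so the fitted values reproduce the series exactly.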

20.
A modification of the stochastic approximation estimation algorithm is presented. Under assumptions about the statistical distribution of the training data, it becomes possible to estimate not only the mean value but also well-directed deviating values of the data distribution. Detailed error models can thus be identified by means of a parameter-linear formulation of the new algorithm. By defining suitable probabilities, these parametric error models estimate soft error bounds, providing an experimental identification method that can support robust controller design. The method was applied to an industrial robot controlled by feedback linearisation; based on a dynamic model realised by a neural network, the approach is used for the robust design of the stabilising decentralised controllers.
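The contrast between estimating the mean and estimating "well-directed deviating values" can be sketched with two stochastic-approximation recursions: with gain 1/n the recursion recovers the running mean, while replacing the error term by an indicator turns it into a quantile estimator, one plausible reading of soft error bounds. The gain exponent 0.7 is an illustrative assumption, not the paper's choice:

```python
def sa_mean(stream):
    """Stochastic approximation with gain 1/n: the classical running mean."""
    theta = 0.0
    for n, x in enumerate(stream, start=1):
        theta += (x - theta) / n
    return theta

def sa_quantile(stream, p, power=0.7):
    """The same recursion with an indicator correction estimates the
    p-quantile of the stream, a deviating value usable as a soft bound."""
    theta = 0.0
    for n, x in enumerate(stream, start=1):
        theta += (p - (1.0 if x <= theta else 0.0)) / n ** power
    return theta
```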
