Similar Articles
20 similar articles found.
1.
High-level semi-Markov modelling paradigms such as semi-Markov stochastic Petri nets and process algebras are used to capture realistic performance models of computer and communication systems but often have the drawback of generating huge underlying semi-Markov processes. Extraction of performance measures such as steady-state probabilities and passage-time distributions therefore relies on sparse matrix-vector operations involving very large transition matrices. Previous studies have shown that exact state-by-state aggregation of semi-Markov processes can be applied to reduce the number of states. This can, however, lead to a dramatic increase in matrix density caused by the creation of additional transitions between remaining states. Our paper addresses this issue by presenting the concept of state space partitioning for aggregation. We present a new deterministic partitioning method which we term barrier partitioning. We show that barrier partitioning is capable of splitting very large semi-Markov models into a number of partitions such that first passage-time analysis can be performed more quickly and using up to 99% less memory than existing algorithms.
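For intuition about the fill-in effect that motivates partitioning, the following minimal Python sketch (not the paper's barrier-partitioning algorithm) exactly eliminates one state from a small discrete-time transition matrix; the correction term it adds between the remaining states is what drives up matrix density.

```python
import numpy as np

def eliminate_state(P, k):
    """Exactly remove state k from a transition matrix P.

    Remaining transition probabilities absorb paths that pass through k:
        P'[i, j] = P[i, j] + P[i, k] * P[k, j] / (1 - P[k, k])
    The correction term is generally dense, which is the fill-in effect
    described above.
    """
    keep = [s for s in range(P.shape[0]) if s != k]
    correction = np.outer(P[keep, k], P[k, keep]) / (1.0 - P[k, k])
    return P[np.ix_(keep, keep)] + correction

# Toy 4-state chain: eliminating state 3 introduces new nonzero entries.
P = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.2, 0.0, 0.8, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.3, 0.3, 0.4, 0.0]])
print(eliminate_state(P, 3))
```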

2.
The lengths of certain passage-time intervals (random time intervals) in discrete-event stochastic systems correspond to delays in computer, communication, manufacturing, and transportation systems. Simulation is often the only available means for analyzing a sequence of such lengths. It is sometimes possible to obtain meaningful estimates for the limiting average delay indirectly, that is, without measuring lengths of individual passage-time intervals. For general time-average limits of a sequence of delays, however, it is necessary to measure individual lengths and combine them to form point and interval estimates. We consider sequences of delays determined by state transitions of a generalized semi-Markov process and introduce a recursively generated sequence of real-valued random vectors, called start vectors, to provide the link between the starts and terminations of passage-time intervals. This method of start vectors for measuring delays avoids the need to tag entities in the system. We show that if the generalized semi-Markov process has a recurrent single state, then the sample paths of any sequence of delays can be decomposed into one-dependent, identically distributed cycles. We then show that an extension of the regenerative method for analysis of simulation output can be used to obtain meaningful point estimates and confidence intervals for time-average limits. This estimation procedure is valid not only when there are no ongoing passage times at any regeneration point but, unlike previous methods, also when the sequence of delays does not inherit regenerative structure. Application of these methods to a manufacturing cell with robots is discussed.
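As a rough illustration of the kind of output analysis referred to here, the sketch below implements the classical regenerative ratio estimator for a time-average delay. The function name and the synthetic cycle data are invented for the example, and the sketch assumes independent cycles, whereas the paper's extension additionally handles one-dependent cycles.

```python
import numpy as np

def regenerative_ratio_ci(cycle_sums, cycle_counts, z=1.96):
    """Classical regenerative point estimate and CI for a time-average limit.

    cycle_sums[i]   : total delay accumulated in cycle i
    cycle_counts[i] : number of delays observed in cycle i
    Estimates r = E[Y] / E[N] and a normal-approximation confidence
    interval via the ratio estimator (independent-cycle version).
    """
    Y = np.asarray(cycle_sums, dtype=float)
    N = np.asarray(cycle_counts, dtype=float)
    n = len(Y)
    r_hat = Y.sum() / N.sum()
    D = Y - r_hat * N                      # per-cycle residuals
    half_width = z * np.sqrt(D.var(ddof=1) / n) / N.mean()
    return r_hat, (r_hat - half_width, r_hat + half_width)

# Example with synthetic cycle data.
rng = np.random.default_rng(0)
counts = rng.integers(1, 10, size=500)
sums = np.array([rng.exponential(2.0, k).sum() for k in counts])
print(regenerative_ratio_ci(sums, counts))
```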

3.
Considerable amounts of data, including process events, are collected and stored by organisations nowadays. Discovering a process model from such event data and verification of the quality of discovered models are important steps in process mining. Many discovery techniques have been proposed, but none of them combines scalability with strong quality guarantees. We would like such techniques to handle billions of events or thousands of activities, to produce sound models (without deadlocks and other anomalies), and to guarantee that the underlying process can be rediscovered when sufficient information is available. In this paper, we introduce a framework for process discovery that ensures these properties while passing over the log only once and introduce three algorithms using the framework. To measure the quality of discovered models for such large logs, we introduce a model–model and model–log comparison framework that applies a divide-and-conquer strategy to measure recall, fitness, and precision. We experimentally show that these discovery and measuring techniques sacrifice little compared to other algorithms, while gaining the ability to cope with event logs of 100,000,000 traces and processes of 10,000 activities on a standard computer.
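To give a flavour of what a single pass over an event log can compute, the sketch below builds a directly-follows summary of a log; this is a generic building block of scalable discovery algorithms and is not the specific framework or algorithms introduced in the paper.

```python
from collections import Counter

def directly_follows(log):
    """Build a directly-follows summary in a single pass over an event log.

    log: iterable of traces, each trace a sequence of activity names.
    Returns counts of pairs (a, b) where b immediately follows a, plus
    start- and end-activity counts: the kind of one-pass statistic that
    scalable discovery algorithms can work from.
    """
    dfg, starts, ends = Counter(), Counter(), Counter()
    for trace in log:
        if not trace:
            continue
        starts[trace[0]] += 1
        ends[trace[-1]] += 1
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg, starts, ends

log = [("register", "check", "pay"),
       ("register", "pay"),
       ("register", "check", "check", "pay")]
print(directly_follows(log))
```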

4.
He, Hujun. Neurocomputing, 2009, 72(16–18): 3529.
In recent years a great deal of effort has been devoted to improving foreign exchange (FX) rate prediction. However, most existing techniques seldom outperform the simple random walk model in practical applications. This paper describes a self-organising network formed on the basis of a mixture of adaptive autoregressive models. The proposed network, termed the self-organising mixture autoregressive (SOMAR) model, can be used to describe and model nonstationary, nonlinear time series by means of a number of underlying local regressive models. An autocorrelation coefficient-based measure is proposed as the similarity measure for assigning input samples to the underlying local models. Experiments on both benchmark time series and several FX rates have been conducted. The results show that the proposed method consistently outperforms other local time series modelling techniques on a range of performance measures, including mean-square error, correct trend prediction percentage, accumulated profit and model variance.
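As a hedged illustration of the two ingredients mentioned above, the following sketch fits a local autoregressive model by least squares and scores how well it explains a window via residual autocorrelations; the actual SOMAR training and sample-assignment procedure is more involved and is not reproduced here, and all function names are illustrative.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model x[t] = a1*x[t-1] + ... + ap*x[t-p]."""
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    coeffs, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coeffs

def residual_autocorr_score(x, coeffs, max_lag=5):
    """Sum of absolute residual autocorrelations: near zero when the local
    AR model explains the window well (white residuals), large otherwise."""
    p = len(coeffs)
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    r = x[p:] - X @ coeffs
    r = r - r.mean()
    denom = np.dot(r, r)
    return sum(abs(np.dot(r[:-k], r[k:]) / denom) for k in range(1, max_lag + 1))

# Fit a window generated by a known AR(2) process and score the fit.
rng = np.random.default_rng(1)
x = np.zeros(300)
for t in range(2, 300):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
a = fit_ar(x, 2)
print(a, residual_autocorr_score(x, a))
```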

5.
As databases increasingly integrate different types of information such as multimedia, spatial, time-series, and scientific data, it becomes necessary to support efficient retrieval of multidimensional data. Both the dimensionality and the amount of data that needs to be processed are increasing rapidly. Reducing the dimension of the feature vectors to enhance the performance of the underlying technique is a popular solution to the infamous curse of dimensionality. Such techniques should preserve the quality of the distance measure when the similarity distance between two feature vectors is approximated by some notion of distance between two lower-dimensional transformed vectors. Thus, it is desirable to develop techniques that yield accurate approximations to the original similarity distance. We investigate dimensionality reduction techniques that directly target minimizing the errors made in the approximations. In particular, we develop dynamic techniques for efficient and accurate approximation of similarity evaluations between high-dimensional vectors based on inner-product approximations. The inner product, by itself, is used as a distance measure in a wide range of applications such as document databases. A first-order approximation to the inner product is obtained from the Cauchy-Schwarz inequality. We extend this idea to higher-order power symmetric functions of the multidimensional points. We show how to compute fixed coefficients that work as universal weights based on the moments of the probability density function of the data set. We also develop a dynamic model to compute the universal coefficients for data sets whose distribution is not known. Our experiments on synthetic and real data sets show that the similarity between two objects in high-dimensional space can be accurately approximated by a significantly lower-dimensional representation.
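A minimal sketch of the first-order idea follows, assuming the approximation takes the form of a single universal coefficient applied to the Cauchy-Schwarz bound; the paper derives its coefficients from distribution moments and extends to higher-order terms, and the function names here are purely illustrative.

```python
import numpy as np

def fit_universal_coefficient(sample):
    """Fit one weight alpha so that <x, y> is approximated by
    alpha * ||x|| * ||y|| (the Cauchy-Schwarz bound, scaled down).
    Here alpha is fitted from a data sample by least squares; the paper
    instead derives such coefficients from moments of the distribution."""
    num, den = 0.0, 0.0
    for i in range(len(sample)):
        for j in range(i + 1, len(sample)):
            x, y = sample[i], sample[j]
            nx, ny = np.linalg.norm(x), np.linalg.norm(y)
            num += np.dot(x, y) * nx * ny
            den += (nx * ny) ** 2
    return num / den

def approx_inner_product(x, y, alpha):
    """First-order approximation using only the two stored norms."""
    return alpha * np.linalg.norm(x) * np.linalg.norm(y)

rng = np.random.default_rng(0)
data = rng.random((200, 64))          # nonnegative features, e.g. term weights
alpha = fit_universal_coefficient(data[:50])
x, y = data[100], data[101]
print(np.dot(x, y), approx_inner_product(x, y, alpha))
```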

6.
A data mining algorithm builds a model that captures interesting aspects of the underlying data. We develop a framework for quantifying the difference, called the deviation, between two datasets in terms of the models they induce. In addition to being a quantitative, intuitively interpretable measure of difference, the deviation between two datasets can also be computed very fast. Our framework covers a wide variety of models including frequent itemsets, decision tree classifiers, and clusters, and captures standard measures of deviation such as the misclassification rate and the chi-squared metric as special cases. We also show how statistical techniques can be applied to the deviation measure to assess whether the difference between two models is significant (i.e., whether the underlying datasets have statistically significant differences in their characteristics), and discuss several practical applications.
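One concrete special case of such a deviation measure is sketched below: it compares the frequent-itemset models induced by two datasets by summing absolute support differences. The paper's framework is more general (it also covers decision trees, clusters, and significance testing), so this code is only an illustration of the idea.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_size=2):
    """Brute-force supports of all itemsets up to max_size (for illustration only)."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, max_size + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    n = len(transactions)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

def deviation(model_a, model_b):
    """Sum of absolute support differences over the union of itemsets: a
    simple instance of a model-based deviation between two datasets."""
    keys = set(model_a) | set(model_b)
    return sum(abs(model_a.get(k, 0.0) - model_b.get(k, 0.0)) for k in keys)

d1 = [("a", "b"), ("a", "c"), ("a", "b", "c")]
d2 = [("a",), ("b", "c"), ("b", "c")]
print(deviation(frequent_itemsets(d1, 0.3), frequent_itemsets(d2, 0.3)))
```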

7.

Soccer match attendance is an example of group behavior with noisy context that can only be approximated by a limited set of quantifiable factors. However, match attendance is representative of a wider spectrum of context-based behaviors for which only the aggregate effect of otherwise individual decisions is observable. Modeling of such behaviors is desirable from the perspective of economics, psychology, and other social studies with prospective use in simulators, games, product planning, and advertising. In this paper, we evaluate the efficiency of different neural network architectures as models of context in attendance behavior by comparing the achieved prediction accuracy of a multilayer perceptron (MLP), an Elman recurrent neural network (RNN), a time-lagged feedforward neural network (TLFN), and a radial basis function network (RBFN) against a multiple linear regression model, an autoregressive moving average model with exogenous inputs, and a naive cumulative mean model. We show that the MLP, TLFN, and RNN are superior to the RBFN and achieve comparable prediction accuracy on datasets of three teams from the English Football League Championship, which indicates weak importance of context transition modeled by the TLFN and the RNN. The experiments demonstrate that all neural network models outperform linear predictors by a significant margin. We show that neural models built on individual datasets achieve better performance than a generalized neural model constructed from pooled data. We analyze the input parameter influences extracted from trained networks and show that there is an agreement between nonlinear and linear measures about the most significant attributes.


8.
Recently proposed formal reliability analysis techniques have overcome the inaccuracies of traditional simulation-based techniques but can only handle problems involving discrete random variables. In this paper, we extend the capabilities of existing theorem-proving-based reliability analysis by formalizing several important statistical properties of continuous random variables, such as the second moment and the variance. We also formalize commonly used concepts of reliability theory such as the survival, hazard, cumulative hazard and fractile functions. With these extensions, it is now possible to formally reason about important measures of reliability (the probabilities of failure, the failure risks and the mean-time-to-failure) associated with the life of a system that operates in an uncertain and harsh environment and is usually continuous in nature. We illustrate the modeling and verification process with the help of examples involving the reliability analysis of essential electronic and electrical system components.
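For reference, the standard textbook definitions being formalized are evaluated below numerically for an exponentially distributed lifetime; this is a plain Python sketch, not the theorem-prover formalization the paper describes.

```python
import numpy as np

# For a lifetime with CDF F and pdf f:
#   survival             S(t) = 1 - F(t)
#   hazard               h(t) = f(t) / S(t)
#   cumulative hazard    H(t) = -ln S(t)
#   mean time to failure MTTF = integral of S(t) over [0, inf)
# Sketch for an exponential lifetime with rate lam.
lam = 0.5
t = np.linspace(0.0, 10.0, 1001)
S = np.exp(-lam * t)            # survival
h = lam * np.ones_like(t)       # hazard is constant for the exponential
H = lam * t                     # cumulative hazard
mttf = np.sum((S[:-1] + S[1:]) / 2 * np.diff(t))   # trapezoid rule, close to 1/lam = 2
print(mttf)
```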

9.
We address the problem of curvature estimation from sampled compact sets. The main contribution is a stability result: we show that the Gaussian, mean or anisotropic curvature measures of the offset of a compact set K with positive μ-reach can be estimated by the same curvature measures of the offset of a compact set K' close to K in the Hausdorff sense. We show how these curvature measures can be computed for finite unions of balls. The curvature measures of the offset of a compact set with positive μ-reach can thus be approximated by the curvature measures of the offset of a point-cloud sample.

10.
In the past, much emphasis has been given to the data throughput of VOD servers. In Interactive Video-on-Demand (IVOD) applications, such as digital libraries, service availability and response times are more visible to the user than the underlying data throughput. Data throughput is a measure of how efficiently resources are utilized. Higher throughput may be achieved at the expense of deteriorated user-perceived performance metrics such as the probability of admission and the queueing delay prior to admission. In this paper, we propose and evaluate a number of strategies to sequence the admission of pending video requests. Under different request arrival rates and buffer capacities, we measure the probability of admission, queueing delay and data throughput of each strategy. Results of our experiments show that simple hybrid strategies can increase the number of admitted requests and reduce the queueing time without jeopardizing the data throughput. The techniques we propose are independent of the underlying disk scheduling techniques used, so they can be employed to improve the user-perceived performance of VOD servers in general.

11.
In this paper we show that integrated environmental modelling (IEM) techniques can be used to generate a catastrophe model for groundwater flooding. Catastrophe models are probabilistic models built on sets of events representing the hazard; each event's likelihood is weighted by the impact of that event occurring, and the result is used to estimate future financial losses. These probabilistic loss estimates often underpin re-insurance transactions. Modelled loss estimates can vary significantly because of the assumptions used within the models. A rudimentary insurance-style catastrophe model for groundwater flooding has been created by linking seven individual components together. Each component is linked to the next using an open modelling framework (i.e. an implementation of OpenMI). Finally, we discuss how a flexible model integration methodology, such as the one described in this paper, facilitates a better understanding of the assumptions used within the catastrophe model by enabling the interchange of model components created using different, yet appropriate, assumptions.

12.
We consider the problem of estimating the relative orientation of a number of individual photocells – or pixels – that hold fixed relative positions. The photocells measure the intensity of light traveling on a pencil of lines. We assume that the light-field thus sampled is changing, e.g. as the result of motion of the sensors, and use the obtained measurements to estimate the orientations of the photocells. Our approach is based on correlation and information-theoretic dissimilarity measures. Experiments with real-world data show that the dissimilarity measures are strongly related to the angular separation between the photocells, and that the relation can be modeled quantitatively. In particular we show that this model allows us to estimate the angular separation from the dissimilarity. Although the resulting estimators are not very accurate, they maintain their performance throughout different visual environments, suggesting that the model encodes a very general property of our visual world. Finally, leveraging this method to estimate angles from signal pairs, we show how distance geometry techniques allow us to recover the complete sensor geometry.
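The final distance-geometry step can be illustrated with classical multidimensional scaling, which recovers a configuration of points (up to a rigid transform) from pairwise distances. The sketch below is generic and does not reproduce the paper's dissimilarity-to-angle model.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover point coordinates (up to rotation/reflection) from a matrix
    of pairwise Euclidean distances D: the basic distance-geometry step.
    In the photocell setting, D would be derived from the angular
    separations estimated from pairwise signal dissimilarities."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J           # double-centred squared distances
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]       # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Sanity check: recover a planar configuration from its distance matrix.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(classical_mds(D, dim=2))
```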

13.
14.
Queueing network formalisms are very good at describing the spatial movement of customers, but typically poor at describing how customers change as they move through the network. We present the PEPA Queues formalism, which uses the popular stochastic process algebra PEPA to represent the individual state and behaviour of customers and servers. We offer a formal semantics for PEPA Queues, plus a direct translation to PEPA, allowing access to the existing tools for analysing PEPA models. Finally, we use the ipc/DNAmaca tool-chain to provide passage-time analysis of a dual Web server example.

15.
16.
In this paper we present a new algorithm for accurate rendering of translucent materials under Spherical Gaussian (SG) lights. Our algorithm builds upon the quantized‐diffusion BSSRDF model recently introduced in [dI11]. Our main contribution is an efficient algorithm for computing the integral of the BSSRDF with an SG light. We incorporate both single and multiple scattering components. Our model improves upon previous work by accounting for the incident angle of each individual SG light. This leads to more accurate rendering results, notably elliptical profiles from oblique illumination. In contrast, most existing models only consider the total irradiance received from all lights, hence can only generate circular profiles. Experimental results show that our method is suitable for rendering of translucent materials under finite‐area lights or environment lights that can be approximated by a small number of SGs.
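For background, a spherical Gaussian and its standard closed-form integral over the sphere are shown below; these are well-known SG identities and not the paper's BSSRDF-with-SG-light integral. A Monte-Carlo check of the closed form is included.

```python
import numpy as np

def sg(v, axis, sharpness, amplitude):
    """Spherical Gaussian G(v) = amplitude * exp(sharpness * (dot(v, axis) - 1))."""
    return amplitude * np.exp(sharpness * (np.dot(v, axis) - 1.0))

def sg_sphere_integral(sharpness, amplitude):
    """Closed-form integral of an SG over the full sphere:
       2 * pi * amplitude * (1 - exp(-2 * sharpness)) / sharpness."""
    return 2.0 * np.pi * amplitude * (1.0 - np.exp(-2.0 * sharpness)) / sharpness

# Monte-Carlo check with uniformly sampled directions on the sphere.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(100000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
axis = np.array([0.0, 0.0, 1.0])
mc = 4.0 * np.pi * sg(dirs, axis, 8.0, 1.0).mean()
print(mc, sg_sphere_integral(8.0, 1.0))
```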

17.
In a manufacturing system, we need to capture collaborative processes among its components in order to clearly define the supporting functions of the system. However, prevalent process modeling techniques, including IDEF3, Petri Nets, and UML, are not sufficient for modeling collaborative processes. Therefore, we have developed a novel modeling method referred to as collaborative process modeling (CPM) to describe collaborative processes. CPM models can be transformed into marked graph models so that we can use the analysis power of Petri Nets. In this paper, we first briefly discuss these process modeling techniques. Then, we illustrate the CPM method and transformation rules with illustrative examples. CPM allows us to develop collaborative process models, understand and facilitate the realization of collaboration, and verify models before moving on to development.

18.
We present techniques for improving performance-driven facial animation, emotion recognition, and facial key-point or landmark prediction using learned identity-invariant representations. Established approaches to these problems can work well if sufficient examples and labels for a particular identity are available and factors of variation are highly controlled. However, labeled examples of facial expressions, emotions and key-points for new individuals are difficult and costly to obtain. In this paper we improve the ability of techniques to generalize to new and unseen individuals by explicitly modeling previously seen variations related to identity and expression. We use a weakly-supervised approach in which identity labels are used to learn the different factors of variation linked to identity separately from factors related to expression. We show how probabilistic modeling of these sources of variation allows one to learn identity-invariant representations for expressions which can then be used to identity-normalize various procedures for facial expression analysis and animation control. We also show how to extend the widely used techniques of active appearance models and constrained local models by replacing the underlying point distribution models, which are typically constructed using principal component analysis, with identity–expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation and facial key-point tracking.

19.
This paper deals with the finite approximation of first passage models for discrete-time Markov decision processes with varying discount factors. For a given control model \(\mathcal{M}\) with denumerable states and compact Borel action sets, and possibly unbounded reward functions, under reasonable conditions we prove that there exists a sequence of control models \(\mathcal{M}_{n}\) such that the first passage optimal rewards and policies of \(\mathcal{M}_{n}\) converge to those of \(\mathcal{M}\), respectively. Based on the convergence theorems, we propose a finite-state and finite-action truncation method for the given control model \(\mathcal{M}\), and show that the first passage optimal reward and policies of \(\mathcal{M}\) can be approximated by those of the solvable truncated finite control models. Finally, we give the corresponding value and policy iteration algorithms to solve the finite approximation models.
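A minimal sketch of value iteration for a first-passage problem on a finite (truncated) MDP is given below. It uses a single constant discount factor and invented example data, whereas the paper treats varying discount factors and proves convergence of the truncated models.

```python
import numpy as np

def first_passage_value_iteration(P, r, target, discount=0.9, tol=1e-8, max_iter=10000):
    """Value iteration for a first-passage problem on a finite MDP.

    P[a][s, s'] : transition probabilities under action a
    r[a][s]     : one-step reward for action a in state s
    target      : set of states where the passage ends (value fixed at 0)
    Returns the optimal value vector and a greedy policy.
    """
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    for _ in range(max_iter):
        Q = np.array([r[a] + discount * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        V_new[list(target)] = 0.0          # passage has ended in the target set
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = np.array([r[a] + discount * P[a] @ V for a in range(n_actions)]).argmax(axis=0)
    return V, policy

# Tiny 3-state example with 2 actions; state 2 is the target (absorbing).
P = [np.array([[0.5, 0.4, 0.1], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]]),
     np.array([[0.1, 0.1, 0.8], [0.3, 0.3, 0.4], [0.0, 0.0, 1.0]])]
r = [np.array([1.0, 2.0, 0.0]), np.array([0.5, 1.0, 0.0])]
print(first_passage_value_iteration(P, r, target={2}))
```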

20.
Molecular biology's advent in the 20th century has exponentially increased our knowledge about the inner workings of life. We have dozens of completed genomes and an array of high-throughput methods to characterize gene encodings and gene product operation. The question now is how we will assemble the various pieces. In other words, given sufficient information about a living cell's molecular components, can we predict its behavior? We introduce the major classes of cellular processes relevant to modeling, discuss software engineering's role in cell simulation, and identify cell simulation requirements. Our E-Cell project aims to develop the theories, techniques, and software platforms necessary for whole-cell-scale modeling, simulation, and analysis. Since the project's launch in 1996, we have built a variety of cell models, and we are currently developing new models that vary with respect to species, target subsystem, and overall scale.
