20 similar documents found (search time: 15 ms)
1.
Richard A. Hayden, Anton Stefanek, Jeremy T. Bradley 《Theoretical computer science》2012,413(1):106-141
Recent developments in the analysis of large Markov models facilitate the fast approximation of transient characteristics of the underlying stochastic process. Fluid analysis makes it possible to consider previously intractable models whose underlying discrete state space grows exponentially as model components are added. In this work, we show how fluid-approximation techniques may be used to extract passage-time measures from performance models. We focus on two types of passage measure: passage times involving individual components, as well as passage times which capture the time taken for a population of components to evolve. Specifically, we show that for models of sufficient scale, global passage-time distributions can be well approximated by a deterministic fluid-derived passage-time measure. Where models are not of sufficient scale, we are able to generate upper and lower approximations for the entire cumulative distribution function of these passage-time random variables, using moment-based techniques. Additionally, we show that, for passage-time measures involving individual components, the cumulative distribution function can be directly approximated by fluid techniques. Finally, using the GPA tool, we take advantage of the rapid fluid computation of passage times to show how a multi-class client-server system can be optimised to satisfy multiple service level agreements.
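As a concrete illustration of the fluid idea in this abstract, the sketch below replaces a two-mode component population CTMC (each of N components flips A → B at rate a and B → A at rate b) with its mean-field ODE, and reads off a deterministic "global passage time" as the first time the A-population crosses a threshold. The model, rates, and threshold are illustrative, not taken from the paper.

```python
# Fluid (mean-field) sketch: the population CTMC is approximated by the
# ODE dA/dt = b*B - a*A, and the global passage time is the deterministic
# time at which A(t) first drops below a threshold.

def fluid_passage_time(a, b, A0, B0, threshold, dt=1e-4, t_max=50.0):
    """Euler-integrate the fluid ODE; return the first time A < threshold."""
    A, B, t = A0, B0, 0.0
    while t < t_max:
        if A < threshold:
            return t
        dA = (b * B - a * A) * dt
        A, B = A + dA, B - dA
        t += dt
    return None

# Analytically A(t) = 50 + 50*exp(-2t) here, so the crossing of 60 is at
# t = ln(5)/2 ~= 0.805; the Euler estimate should land close to that.
t_pass = fluid_passage_time(a=1.0, b=1.0, A0=100.0, B0=0.0, threshold=60.0)
```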
2.
Brette R 《Neural computation》2006,18(8):2004-2027
Computational neuroscience relies heavily on the simulation of large networks of neuron models. There are essentially two simulation strategies: (1) using an approximation method (e.g., Runge-Kutta) with spike times binned to the time step and (2) calculating spike times exactly in an event-driven fashion. In large networks, the computation time of the best algorithm for either strategy scales linearly with the number of synapses, but each strategy has its own assets and constraints: approximation methods can be applied to any model but are inexact; exact simulation avoids numerical artifacts but is limited to simple models. Previous work has focused on improving the accuracy of approximation methods. In this article, we extend the range of models that can be simulated exactly to a more realistic model: an integrate-and-fire model with exponential synaptic conductances.
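To make the event-driven strategy concrete, here is a minimal sketch for a case where it is trivially exact: a leaky integrate-and-fire neuron with instantaneous (delta) synapses, where the membrane decays in closed form between events, so threshold crossings can only occur at spike arrivals. Brette's contribution is the much harder extension of exactness to exponential synaptic conductances; all parameters and the two-neuron network below are illustrative, and each neuron applies a single input weight to any spike it receives (a simplification).

```python
import heapq, math

TAU, THETA, V_RESET = 20.0, 1.0, 0.0

class Neuron:
    def __init__(self):
        self.v = 0.0
        self.t_last = 0.0
    def receive(self, t, w):
        # exact decay between events, then an instantaneous jump
        self.v = self.v * math.exp(-(t - self.t_last) / TAU) + w
        self.t_last = t
        if self.v >= THETA:
            self.v = V_RESET
            return True      # neuron fires
        return False

def simulate(neurons, targets, weights, delay, external):
    """external: list of (time, neuron_index) input spikes."""
    events = list(external)
    heapq.heapify(events)            # priority queue ordered by spike time
    spikes = []
    while events:
        t, i = heapq.heappop(events)
        if neurons[i].receive(t, weights[i]):
            spikes.append((t, i))
            for j in targets[i]:
                heapq.heappush(events, (t + delay, j))
    return spikes

# Neuron 0 gets three external spikes and drives neuron 1.
neurons = [Neuron(), Neuron()]
spikes = simulate(neurons, targets=[[1], []], weights=[0.6, 0.6],
                  delay=1.0, external=[(0.0, 0), (1.0, 0), (2.0, 0)])
```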
3.
Chee-Way Chong, P. Raveendran, R. Mukundan 《Pattern recognition》2003,36(3):731-742
This paper details a comparative analysis of the time taken by existing and proposed methods to compute the Zernike moments, Zpq. The existing methods comprise the Direct, Belkasim's, Prata's, Kintner's and Coefficient methods. We propose a new technique, denoted the q-recursive method, specifically for fast computation of Zernike moments. It uses radial polynomials of fixed order p with a varying index q to compute Zernike moments. Fast computation is achieved because it uses polynomials of higher index q to derive the polynomials of lower index q and it does not use any factorial terms. Individual orders of moments can be calculated independently without employing lower- or higher-order moments. This is especially useful in cases where only selected orders of Zernike moments are needed as pattern features. The performance of the existing and proposed methods is experimentally analyzed by calculating Zernike moments of orders 0 to p and of a specific order p using binary and grayscale images. In both cases, the q-recursive method takes the shortest time to compute Zernike moments.
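For reference, here is the direct, factorial-laden formula for the Zernike radial polynomial R_pq(r) whose cost motivates the q-recursive method, together with two identities the recursive scheme builds on (R_pp(r) = r^p and R_{p,p-2}(r) = p·r^p − (p−1)·r^(p−2)). This is a sketch of the baseline Direct method, not of the paper's full recursion.

```python
from math import factorial

def radial_poly_direct(p, q, r):
    """Zernike radial polynomial R_pq(r), 0 <= q <= p with p - q even."""
    assert (p - q) % 2 == 0
    s = 0.0
    for k in range((p - q) // 2 + 1):
        num = (-1) ** k * factorial(p - k)
        den = (factorial(k) * factorial((p + q) // 2 - k)
               * factorial((p - q) // 2 - k))
        s += num / den * r ** (p - 2 * k)
    return s

# Sanity-check the two closed forms the q-recursive scheme starts from.
r = 0.7
assert abs(radial_poly_direct(4, 4, r) - r**4) < 1e-12
assert abs(radial_poly_direct(4, 2, r) - (4*r**4 - 3*r**2)) < 1e-12
```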
4.
This paper deals with the problem of parameter estimation in the generalized Mallows model (GMM) using both local and global search metaheuristic (MH) algorithms. The task we undertake is to learn the parameters defining the GMM from a dataset of complete rankings/permutations. Several approaches can be found in the literature, some of which are based on greedy search and branch and bound search. The greedy approach has the disadvantage of usually becoming trapped in local optima, while the branch and bound approach, basically A* search, usually comes down to approximate search because of memory requirements, losing in this way its guaranteed optimality. Here, we carry out a comparative study of several MH algorithms (iterated local search (ILS) methods, variable neighborhood search (VNS) methods, genetic algorithms (GAs) and estimation of distribution algorithms (EDAs)) and a tailored A* algorithm to address parameter estimation in GMMs. We use 22 real datasets of different complexity, all but one of which were created by the authors by preprocessing real raw data. We provide a complete analysis of the experiments in terms of accuracy, number of iterations and CPU time requirements.
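A core subproblem in Mallows/GMM estimation is finding the central permutation minimising the total Kendall-tau distance to the observed rankings. The sketch below is a minimal iterated local search (ILS) for that step, with an adjacent-swap neighbourhood and random restarts as the perturbation; the dataset and ILS settings are illustrative and far simpler than the algorithms compared in the paper.

```python
import random

def kendall(p, q):
    """Kendall-tau distance: pairs ordered differently in p and q."""
    pos = {v: i for i, v in enumerate(q)}
    n, d = len(p), 0
    for i in range(n):
        for j in range(i + 1, n):
            if pos[p[i]] > pos[p[j]]:
                d += 1
    return d

def total_dist(p, data):
    return sum(kendall(p, q) for q in data)

def local_search(p, data):
    improved = True
    while improved:
        improved = False
        for i in range(len(p) - 1):          # adjacent-swap neighbourhood
            q = p[:i] + [p[i + 1], p[i]] + p[i + 2:]
            if total_dist(q, data) < total_dist(p, data):
                p, improved = q, True
    return p

def ils(data, restarts=5, seed=0):
    rng = random.Random(seed)
    best = local_search(list(data[0]), data)
    for _ in range(restarts):
        start = list(data[0])
        rng.shuffle(start)                   # perturbation: random restart
        cand = local_search(start, data)
        if total_dist(cand, data) < total_dist(best, data):
            best = cand
    return best

data = [[0, 1, 2, 3]] * 3 + [[1, 0, 2, 3]]
center = ils(data)    # majority ranking should win
```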
5.
Empirical comparison of fast partitioning-based clustering algorithms for large data sets (cited 3 times: 0 self-citations, 3 by others)
Several fast algorithms for clustering very large data sets have been proposed in the literature, including CLARA, CLARANS, GAC-R3, and GAC-RARw. CLARA is a combination of a sampling procedure and the classical PAM algorithm, while CLARANS adopts a serial randomized search strategy to find the optimal set of medoids. GAC-R3 and GAC-RARw exploit genetic search heuristics for solving clustering problems. In this research, we conducted an empirical comparison of these four clustering algorithms over a wide range of data characteristics described by data size, number of clusters, cluster distinctness, cluster asymmetry, and data randomness. According to the experimental results, CLARANS outperforms its counterparts both in clustering quality and execution time when the number of clusters increases, clusters are more closely related, more asymmetric clusters are present, or more random objects exist in the data set. With a specific number of clusters, CLARA can efficiently achieve satisfactory clustering quality when the data size is larger, whereas GAC-R3 and GAC-RARw can achieve satisfactory clustering quality and efficiency when the data size is small, the number of clusters is small, and clusters are more distinct and symmetric.
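The randomized search at the heart of CLARANS can be sketched compactly: from a random set of medoids, repeatedly try random single-medoid swaps, move whenever a swap lowers the total distance, give up a local search after `maxneighbor` failed tries, and restart `numlocal` times. One-dimensional toy data; parameter names follow the usual CLARANS description, not any specific library.

```python
import random

def cost(medoids, points):
    return sum(min(abs(p - m) for m in medoids) for p in points)

def clarans(points, k, numlocal=4, maxneighbor=40, seed=0):
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(numlocal):
        medoids = rng.sample(points, k)
        tries = 0
        while tries < maxneighbor:
            out = rng.choice(medoids)
            inn = rng.choice([p for p in points if p not in medoids])
            cand = [inn if m == out else m for m in medoids]
            if cost(cand, points) < cost(medoids, points):
                medoids, tries = cand, 0      # move to the better neighbour
            else:
                tries += 1
        c = cost(medoids, points)
        if c < best_cost:
            best, best_cost = medoids, c
    return sorted(best), best_cost

points = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]   # two well-separated clusters
medoids, c = clarans(points, k=2)
```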
6.
7.
8.
Mehrdad Aliasgari, Marina Blanton, Fattaneh Bayatbabolghani 《International Journal of Information Security》2017,16(6):577-601
Hidden Markov model (HMM) is a popular statistical tool with a large number of applications in pattern recognition. In some of these applications, such as speaker recognition, the computation involves personal data that can identify individuals and must be protected. We thus treat the problem of designing privacy-preserving techniques for HMM and companion Gaussian mixture model computation suitable for use in speaker recognition and other applications. We provide secure solutions for both two-party and multi-party computation models and both semi-honest and malicious settings. In the two-party setting, the server does not have access in the clear to either the user-based HMM or user input (i.e., current observations) and thus the computation is based on threshold homomorphic encryption, while the multi-party setting uses threshold linear secret sharing as the underlying data protection mechanism. All solutions use floating-point arithmetic, which allows us to achieve high accuracy and provable security guarantees, while maintaining reasonable performance. A substantial part of this work is dedicated to building secure protocols for floating-point operations in the two-party setting, which are of independent interest.
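For orientation, the clear-text computation that such protocols evaluate privately is the HMM forward recursion, which yields the observation likelihood used in recognition scoring. The sketch below is the plain (non-secure) recursion with arbitrary illustrative model numbers.

```python
def forward(pi, A, B, obs):
    """pi: initial probs, A[i][j]: transition probs, B[i][o]: emission probs."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)          # P(observations | model)

pi = [0.6, 0.4]
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.9, 0.1], [0.2, 0.8]]
likelihood = forward(pi, A, B, obs=[0, 1])
```

By hand: alpha_1 = [0.54, 0.08], then alpha_2 = [0.041, 0.168], so the likelihood is 0.209.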
9.
Andrzej Janczak 《International journal of systems science》2013,44(1):21-35
A gradient-based approach to training neural network Wiener models is presented. Calculation of the gradient or approximate gradient for the series-parallel and parallel Wiener models by backpropagation, the sensitivity method (SM) and backpropagation through time (BPTT) is considered in a unified framework. Four different recursive learning algorithms are derived, analysed and compared. For the truncated BPTT, it is shown that the number of unfolding time steps can be determined on the basis of an impulse response function of sensitivity models. Analysis of the computational complexity of these algorithms shows that, contrary to other recurrent neural network models, computation of the gradient in parallel Wiener models with the sensitivity method or backpropagation through time requires only a little more computational burden than backpropagation. A simulated data example and a real data example from a laboratory two-tank system are also included to compare the different methods and to demonstrate their effectiveness and practical feasibility.
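A Wiener model is a linear dynamic block followed by a static nonlinearity. The sketch below takes the simplest series-parallel case (an FIR block followed by tanh), computes the analytic squared-error gradient with respect to the FIR weights by plain backpropagation (the baseline the paper compares SM and BPTT against), and verifies it with finite differences. The model structure and data are illustrative.

```python
import math

def predict(h, u):
    """Wiener model output: tanh of an FIR filter of the input u."""
    n = len(h)
    return [math.tanh(sum(h[k] * u[t - k] for k in range(n)))
            for t in range(n - 1, len(u))]

def loss_and_grad(h, u, y):
    """Squared-error loss and its gradient w.r.t. the FIR weights h."""
    n = len(h)
    yhat = predict(h, u)
    loss, grad = 0.0, [0.0] * n
    for idx, t in enumerate(range(n - 1, len(u))):
        e = yhat[idx] - y[idx]
        loss += e * e
        for k in range(n):
            # chain rule: d tanh(z)/dh_k = (1 - yhat^2) * u[t-k]
            grad[k] += 2 * e * (1 - yhat[idx] ** 2) * u[t - k]
    return loss, grad

u = [0.5, -1.0, 0.3, 0.8, -0.2]
h = [0.4, -0.1]
y = [0.1, 0.0, 0.5, -0.3]
loss, grad = loss_and_grad(h, u, y)

# finite-difference check of the analytic gradient
eps = 1e-6
for k in range(len(h)):
    hp = list(h); hp[k] += eps
    lp, _ = loss_and_grad(hp, u, y)
    assert abs((lp - loss) / eps - grad[k]) < 1e-4
```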
10.
Degradable fault-tolerant systems can be evaluated using rewarded continuous-time Markov chain (CTMC) models. In that context, a useful measure to consider is the distribution of the cumulative reward over a time interval [0,t]. All currently available numerical methods for computing that measure tend to be very expensive when the product of the maximum output rate of the CTMC model and t is large and, in that case, their application is limited to CTMC models of moderate size. In this paper, we develop two methods for computing bounds for the cumulative reward distribution of CTMC models with reward rates associated with states: BT/RT (Bounding Transformation/Regenerative Transformation) and BT/BRT (Bounding Transformation/Bounding Regenerative Transformation). The methods require the selection of a regenerative state, are numerically stable and compute the bounds with well-controlled error. For one class of rewarded CTMC models and a particular, natural selection for the regenerative state, the BT/BRT method allows us to trade off bound tightness with computational cost and will provide bounds at a moderate computational cost in many cases of interest. For a slightly wider class of models and the same natural selection for the regenerative state, the BT/RT method will yield tighter bounds at a higher computational cost. Under additional conditions, the bounds obtained using the less expensive version of BT/BRT and BT/RT appear to be tight either for any value of t or for all but small values of t, depending on the initial probability distribution of the model. Models in these classes that satisfy the additional conditions include both exact and bounding typical failure/repair performability models of fault-tolerant systems with exponential failure and repair time distributions, repair in every state with failed components, and a reward rate structure which is a non-increasing function of the collection of failed components.
We illustrate both the applicability and the performance of the methods using a large CTMC performability example of a fault-tolerant multiprocessor system.
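The standard kernel behind transient CTMC and reward computations of this kind is uniformization. As a hedged sketch, the code below uniformizes a two-state failure/repair CTMC (failure rate a, repair rate b), for which the transient probabilities are known in closed form, so the truncated series can be checked directly. Rates and truncation depth are illustrative.

```python
import math

def transient_probs(Q, p0, t, n_terms=200):
    """Transient state probabilities of a CTMC via uniformization:
    pi(t) = sum_k e^{-lam t} (lam t)^k / k! * p0 P^k,  P = I + Q/lam."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n))
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    term = list(p0)                       # p0 . P^0
    coef = math.exp(-lam * t)
    out = [coef * x for x in term]
    for k in range(1, n_terms):
        term = [sum(term[i] * P[i][j] for i in range(n)) for j in range(n)]
        coef *= lam * t / k
        out = [o + coef * x for o, x in zip(out, term)]
    return out

a, b, t = 0.5, 2.0, 1.3                   # up -> down rate a, down -> up rate b
Q = [[-a, a], [b, -b]]
p_up, p_down = transient_probs(Q, [1.0, 0.0], t)
# closed form: p_up(t) = b/(a+b) + a/(a+b) * exp(-(a+b) t)
```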
11.
Simoní Da Ros, Gabriel Colusso, Thiago A. Weschenfelder, Lisiane de Marsillac Terra, Fernanda de Castilhos, Marcos L. Corazza, Marcio Schwaab 《Applied Soft Computing》2013,13(5):2205-2214
Mathematical models in the biochemical engineering field are usually composed of nonlinear kinetic equations, where the number of parameters that must be estimated from a set of experimental measurements is usually very high. In these cases, the estimation of the model parameters relies on numerical iterative methods for minimization of the objective function. Classical methods for minimization of the objective function, like the Newton method, require a good initial guess for all parameters and differentiation of the objective function and/or model equations with respect to the model parameters. Moreover, the use of stochastic optimization methods for parameter estimation has gained attention, since these methods require neither a good initial guess for all model parameters nor the evaluation of derivatives. In this work, several stochastic optimization methods (Artificial Bee Colony, Differential Evolution, Particle Swarm Optimization and Simulated Annealing) were used to estimate the kinetic parameters of a biochemical model of the alcoholic fermentation of cassava hydrolysate. The results indicated that Differential Evolution provides the best results among the stochastic optimization methods evaluated.
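To illustrate the winning method, here is a minimal DE/rand/1/bin differential evolution applied to a toy kinetic parameter estimation: recovering (a, b) of y(t) = a·exp(−b·t) from synthetic data by least squares. Note that neither an initial guess nor derivatives are needed, only bounds. The objective, bounds and DE settings are illustrative, not the fermentation model from the paper.

```python
import math, random

def objective(p, data):
    a, b = p
    return sum((a * math.exp(-b * t) - y) ** 2 for t, y in data)

def differential_evolution(f, bounds, pop_size=20, gens=150,
                           F=0.7, CR=0.9, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                      for d in range(dim)]
            jrand = rng.randrange(dim)     # binomial crossover
            trial = [mutant[d] if (rng.random() < CR or d == jrand)
                     else pop[i][d] for d in range(dim)]
            c = f(trial)
            if c <= costs[i]:              # greedy selection
                pop[i], costs[i] = trial, c
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

true_a, true_b = 2.0, 0.5
data = [(t, true_a * math.exp(-true_b * t)) for t in [0, 1, 2, 3, 4]]
(est_a, est_b), best_cost = differential_evolution(
    lambda p: objective(p, data), bounds=[(0.0, 5.0), (0.0, 2.0)])
```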
12.
《Automatica》1987,23(4):523-533
In this paper, a general class of nonquadratic convex Nash games is studied, from the points of view of existence, stability and iterative computation of noncooperative equilibria. Conditions for contraction of general nonlinear operators are obtained, which are then used in the stability study of such games. These lead to existence and uniqueness conditions for stable Nash equilibrium solutions, under both global and local analysis. Also, convergence of an algorithm which employs inaccurate search techniques is verified. It is shown in the context of a fish war example that the algorithm given is in some aspects superior to various algorithms found in the literature, and is furthermore more meaningful for real world implementation.
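The simplest instance of the contraction-based iterative schemes analysed here is best-response iteration. As a hedged sketch (not the paper's fish-war game), consider a Cournot duopoly with inverse demand P = A − q1 − q2 and unit cost c: each player's best response (A − c − q_other)/2 is a contraction with factor 1/2, so repeated best responses converge to the unique Nash equilibrium q* = (A − c)/3.

```python
def best_response(q_other, A=10.0, c=1.0):
    """Cournot best response; clipped at zero for feasibility."""
    return max(0.0, (A - c - q_other) / 2.0)

q1 = q2 = 0.0
for _ in range(60):
    q1, q2 = best_response(q2), best_response(q1)   # simultaneous update

# with A=10, c=1 the unique equilibrium is q1 = q2 = (10 - 1)/3 = 3
```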
13.
This paper studies the generalized hop-constrained minimum spanning tree problem (GHMSTP) which has applications in backbone network design subject to quality-of-service constraints that restrict the maximum number of intermediate routers along each communication path. Different possibilities to model the GHMSTP as an integer linear program and strengthening valid inequalities are studied. The obtained formulations are compared theoretically, i.e., by means of their linear programming relaxation. In addition, branch-and-cut approaches based on these formulations are developed and compared in a computational study.
14.
Analog methods for computation of the generalized inverse (cited 2 times: 0 self-citations, 2 by others)
In this paper some properties of the generalized matrix inverse are discussed, leading up to two analog techniques for its computation. Analog procedures are given for the computation of the unique vector x of least Euclidean norm that minimizes the Euclidean norm of the error vector Ax − b.
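A digital stand-in for such an analog computation is the gradient flow dx/dt = −Aᵀ(Ax − b): integrated from x = 0 it converges to the minimum-norm least-squares solution x = A⁺b, because starting at the origin keeps x in the row space of A. The sketch below uses forward-Euler integration; the matrix and step size are illustrative.

```python
def matvec(M, v):
    return [sum(r[j] * v[j] for j in range(len(v))) for r in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def pinv_solve(A, b, eta=0.1, steps=5000):
    """Euler-discretised gradient flow dx/dt = -A^T (A x - b) from x = 0."""
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(steps):
        r = [ax - bi for ax, bi in zip(matvec(A, x), b)]   # residual A x - b
        g = matvec(At, r)
        x = [xi - eta * gi for xi, gi in zip(x, g)]
    return x

# Underdetermined system: x = A^+ b = [1/3, 4/3, 5/3] is the unique
# minimum-norm solution of A x = b.
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
b = [2.0, 3.0]
x = pinv_solve(A, b)
```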
15.
The implementation and evaluation of the performance on the ICL DAP of two algorithms for the parallel computation of eigenvalues and eigenvectors of moderately large real symmetric matrices of order N, where 64 < N ≤ 256, is reported. The first of the algorithms is a modified form of a Parallel Orthogonal Transformation algorithm proposed by Clint et al., which has already been implemented on the DAP for matrices of order N, where N < 65. The second, which has also been implemented on the DAP for matrices of order N, where N < 65, is Jacobi's algorithm, in the modified form proposed by Modi and Pryce. A comparison of the efficiency of the two algorithms for a variety of large matrices is given.
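For context, here is a serial sketch of the classical Jacobi eigenvalue iteration for real symmetric matrices: repeatedly annihilate the largest off-diagonal element with a plane rotation until the matrix is numerically diagonal. The DAP implementations parallelise the rotations; this only shows the underlying iteration, on an illustrative tridiagonal matrix whose eigenvalues are 2 + 2cos(kπ/4).

```python
import math

def jacobi_eigenvalues(M, max_rotations=50):
    """Eigenvalues of a real symmetric matrix by classical Jacobi rotations."""
    n = len(M)
    a = [row[:] for row in M]
    for _ in range(max_rotations):
        # pick the largest off-diagonal element a[p][q]
        p, q = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                   key=lambda ij: abs(a[ij[0]][ij[1]]))
        if abs(a[p][q]) < 1e-14:
            break
        # rotation angle that zeroes a[p][q]: tan(2t) = 2a_pq / (a_qq - a_pp)
        theta = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
        c, s = math.cos(theta), math.sin(theta)
        for k in range(n):                   # A <- A J (columns p, q)
            akp, akq = a[k][p], a[k][q]
            a[k][p] = c * akp - s * akq
            a[k][q] = s * akp + c * akq
        for k in range(n):                   # A <- J^T A (rows p, q)
            apk, aqk = a[p][k], a[q][k]
            a[p][k] = c * apk - s * aqk
            a[q][k] = s * apk + c * aqk
    return sorted(a[i][i] for i in range(n))

eigs = jacobi_eigenvalues([[2.0, 1.0, 0.0],
                           [1.0, 2.0, 1.0],
                           [0.0, 1.0, 2.0]])
```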
16.
17.
S. Niroomandi, I. Alfaro, E. Cueto, F. Chinesta 《Computer methods and programs in biomedicine》2012,105(1):1-12
Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time restriction (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving the non-linear models usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on model reduction techniques together with an efficient non-linear solver. The performance of the technique is illustrated with several examples.
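One common model-reduction building block in this setting is proper orthogonal decomposition (POD). As a hedged, minimal sketch (the method of snapshots, not the paper's specific scheme), the dominant POD mode of a snapshot matrix is extracted by power iteration on the small snapshot Gram matrix; with rank-1 toy data, a single mode reconstructs the snapshots exactly.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dominant_mode(snapshots, iters=200):
    """Leading POD mode via power iteration on the snapshot Gram matrix."""
    m = len(snapshots)
    G = [[dot(snapshots[i], snapshots[j]) for j in range(m)] for i in range(m)]
    v = [1.0] + [0.0] * (m - 1)     # start vector; assumed not orthogonal
    for _ in range(iters):          # to the dominant eigenvector of G
        w = [sum(G[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    phi = [sum(v[i] * s[k] for i, s in enumerate(snapshots))
           for k in range(len(snapshots[0]))]
    norm = dot(phi, phi) ** 0.5
    return [x / norm for x in phi]

base = [1.0, 2.0, 0.0, -1.0]
snapshots = [[c * x for x in base] for c in (0.5, 1.5, -2.0)]  # rank-1 data
phi = dominant_mode(snapshots)

# each snapshot is reconstructed exactly by its projection onto phi
s = snapshots[1]
coeff = dot(phi, s)
recon = [coeff * x for x in phi]
```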
18.
19.
The burstiness of a video source can be characterized by its burstiness curve. The burstiness curve is useful in the optimal allocation of resources to satisfy a desired quality of service for the video stream in a packet network. In this paper, we present deterministic algorithms for exact computation of the burstiness curve of a video source, for both elementary video streams and MPEG-2 Transport Streams. The algorithms exploit the piecewise linearity of the burstiness curve and compute only the points at which the slope of the burstiness curve changes. We also present approximate versions of these algorithms, which save computational effort by considering only a small number of candidate points at which the slope of the burstiness curve may change. The approximate algorithm was able to compute the burstiness curve of a 2-h long elementary video stream in approximately 10 s, as compared to over 6 h for the exact algorithm, with virtually no loss of accuracy in the computation. The efficiency of the proposed algorithms makes them suitable for quality-of-service (QoS) provisioning not only in off-line environments such as in video-on-demand (VoD) servers, but also in real-time applications such as in live TV distribution systems.
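To make the notion concrete: for a fixed service rate r, the burstiness of a frame-size trace is the maximum backlog of a fluid queue drained at r, computable exactly in one pass. The full burstiness curve is piecewise linear in r, which is what the paper's exact algorithms exploit by computing only the slope-change points; the sketch below simply evaluates the curve at a few sample rates on an illustrative trace (sizes in bits per frame interval).

```python
def max_backlog(frames, rate):
    """Burstiness at a given service rate: peak backlog of a fluid queue."""
    backlog, worst = 0.0, 0.0
    for f in frames:
        backlog = max(0.0, backlog + f - rate)   # one frame interval
        worst = max(worst, backlog)
    return worst

frames = [10.0, 2.0, 8.0, 4.0]
curve = [(r, max_backlog(frames, r)) for r in (4.0, 6.0, 8.0, 10.0)]
# at or above the peak rate (10.0) the backlog never builds up
```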
20.
The loss curve of a video source characterizes the loss rate of the video stream generated by the source as a function of the allocated buffer size for a given transmission rate. The loss curve is useful in the optimal allocation of resources when the video stream is transmitted over a packet network, so that the desired tradeoff can be reached among the loss rate, bandwidth and the buffer space to be allocated in the network. We present an algorithm for computation of the entire loss curve of an elementary video stream. In contrast to earlier algorithms which employ statistical approaches, our algorithm is deterministic and computes the exact loss curve of the video stream. The algorithm exploits the piecewise linearity of the loss curve and computes only the points at which the slope of the loss curve changes. We also present an extension of the algorithm to MPEG-2 transport streams. The efficiency of the algorithm is demonstrated by results from several example video streams. For example, the algorithm was able to compute the entire loss curve of a 2-h elementary video stream in approximately 11 s on a Sun Ultra-2 workstation.
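The quantity being computed here can be sketched directly: at a fixed transmission rate, drain a finite fluid buffer and count traffic that overflows it as lost; the resulting loss fraction as a function of the buffer size is the loss curve, again piecewise linear in the buffer size. The trace, rate, and the convention of counting overflow after service are illustrative.

```python
def loss_fraction(frames, rate, buf):
    """Fraction of traffic lost with buffer size `buf` at a given rate."""
    backlog, lost, total = 0.0, 0.0, sum(frames)
    for f in frames:
        backlog = max(0.0, backlog + f - rate)   # one frame interval
        if backlog > buf:
            lost += backlog - buf                # overflow is dropped
            backlog = buf
    return lost / total

frames = [10.0, 2.0, 8.0, 4.0]
losses = [(b, loss_fraction(frames, rate=6.0, buf=b)) for b in (0.0, 2.0, 4.0)]
# a buffer of 4.0 absorbs the whole trace at rate 6.0: zero loss
```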