Similar Literature
10 similar documents found (search time: 172 ms)
1.
A common statistical problem is that of finding the median element in a set of data. This paper presents an efficient randomized high-level parallel algorithm for finding the median of a set of elements distributed across a parallel machine. In fact, our algorithm solves the general selection problem, which requires determining the element of rank k for an arbitrarily given integer k. Our general framework is an SPMD distributed-memory programming model enhanced by a set of communication primitives. We use efficient techniques for distributing and coalescing data, as well as efficient combinations of task and data parallelism. The algorithms have been coded in the message-passing standard MPI, and our experimental results from the IBM SP-2 illustrate the scalability and efficiency of our algorithm and improve upon all related experimental results known to the author. The main contributions of this paper are (1) new techniques for speeding up the performance of certain randomized algorithms, such as selection, that are efficient with high probability, and (2) a new, practical randomized selection algorithm (UltraFast) with significantly improved convergence.
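As a rough illustration of the selection problem described above, the following is a minimal single-machine sketch of randomized rank-k selection by random pivoting. It is not the paper's distributed, MPI-based UltraFast algorithm; the sampling strategy, communication primitives, and convergence improvements of that algorithm are not reproduced here.

```python
import random

def randomized_select(data, k):
    """Return the element of rank k (1-based) using random pivoting.

    Expected O(n) time; a generic single-machine sketch, not the paper's
    distributed MPI algorithm.
    """
    items = list(data)
    assert 1 <= k <= len(items)
    while True:
        pivot = random.choice(items)
        less = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        if k <= len(less):
            items = less
        elif k <= len(less) + len(equal):
            return pivot
        else:
            k -= len(less) + len(equal)
            items = [x for x in items if x > pivot]

if __name__ == "__main__":
    values = [9, 1, 7, 3, 5, 8, 2, 6, 4]
    print(randomized_select(values, 5))  # median -> 5
```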

2.
We propose a method for determining camera parameters for character motion that takes the motion itself into account. The basic idea is to approximately compute the area swept by the character's links when orthogonally projected onto the image plane, which we call the "motion area". Using the motion area, we can determine good fixed camera parameters and camera paths for a given character motion in off-line or real-time camera control. In our experimental results, we demonstrate that our camera path generation algorithms can compute a smooth moving camera path while the camera effectively displays the dynamic features of the character motion. Our methods can easily be used in combination with methods for generating occlusion-free camera paths. We expect that our methods can also be utilized by general camera planning methods as one of the heuristics for measuring the visual quality of scenes that include dynamically moving characters.
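To make the "motion area" idea concrete, here is a hedged sketch that approximates the area swept by projected link segments between consecutive frames using the shoelace formula. The array layout, function names, and orthographic projection setup are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_orthographic(points_3d, cam_right, cam_up):
    """Orthographically project 3D points onto the image plane spanned by the
    camera's right and up unit vectors (hypothetical inputs)."""
    basis = np.stack([cam_right, cam_up], axis=1)   # shape (3, 2)
    return points_3d @ basis                        # shape (N, 2)

def quad_area(p0, p1, q1, q0):
    """Area of the quadrilateral p0-p1-q1-q0 via the shoelace formula."""
    pts = np.array([p0, p1, q1, q0])
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def motion_area(link_endpoints, cam_right, cam_up):
    """Approximate 'motion area': sum over frames and links of the area swept
    by each projected link segment between consecutive frames.

    link_endpoints: array of shape (frames, links, 2, 3) with 3D endpoints.
    """
    total = 0.0
    for f in range(len(link_endpoints) - 1):
        for l in range(link_endpoints.shape[1]):
            a = project_orthographic(link_endpoints[f, l], cam_right, cam_up)
            b = project_orthographic(link_endpoints[f + 1, l], cam_right, cam_up)
            total += quad_area(a[0], a[1], b[1], b[0])
    return total

# Toy example: one link of length 1 whose tip rises along y over three frames.
frames = np.zeros((3, 1, 2, 3))
frames[:, 0, 1, 0] = 1.0                    # tip x-coordinate
frames[:, 0, 1, 1] = [0.0, 0.5, 1.0]        # tip y-coordinate rises over time
right, up = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(motion_area(frames, right, up))       # 0.5
```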

3.
The problem of convergence of least squares (LS) estimates in a stochastic linear regression model with white noise is considered. It is well known that if the parameter estimates are known to converge, the convergence analysis for many adaptive systems can be rendered considerably less arduous. For an important case where the regression vector is a measurable function of the observations and the noise is Gaussian, it has been shown, by using a Bayesian embedding argument, that the LS estimates converge almost surely for almost all true parameters in the parameter space except for a zero-measure set. However, nothing can be said about a particular given system, which is usually the objective. It has long been conjectured that such a "bad" zero-measure set in the parameter space does not actually exist. A conclusive answer to this important question is provided and it is shown that the set can indeed exist. This then shows that to provide conclusive convergence results for stochastic adaptive systems, it is necessary to resort to a sample pathwise analysis instead of the Bayesian embedding approach.
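For reference, the standard form of the model and estimator the abstract refers to can be written as follows (conventional notation, not quoted from the paper):

```latex
% Stochastic linear regression with regression vector \varphi_t and white noise w_{t+1}:
\begin{align}
  y_{t+1} &= \theta^{\top}\varphi_t + w_{t+1}, \\
  \hat{\theta}_t &= \Bigl(\sum_{i=0}^{t}\varphi_i\varphi_i^{\top}\Bigr)^{-1}
                   \sum_{i=0}^{t}\varphi_i\, y_{i+1}.
\end{align}
% The convergence question is whether $\hat{\theta}_t \to \theta$ almost surely for
% every true $\theta$, rather than for almost every $\theta$ under a prior.
```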

4.
This paper presents an approach for reasoning about the effects of sensor error on high-level robot behavior. We consider robot controllers that are synthesized from high-level, temporal logic task specifications, such that the resulting robot behavior is guaranteed to satisfy these specifications when assuming perfect sensors and actuators. We relax the assumption of perfect sensing, and calculate the probability with which the controller satisfies a set of temporal logic specifications. We consider parametric representations, where the satisfaction probability is found as a function of the model parameters, and numerical representations, allowing for the analysis of large examples. We also consider models in which some parts of the environment and sensor have unknown transition probabilities, in which case we can determine upper and lower bounds for the probability. We illustrate our approach with two examples that provide insight into unintuitive effects of sensor error that can inform the specification design process.
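A generic reachability computation of the kind underlying such analyses (the probability of eventually reaching a "specification satisfied" state in a discrete-time Markov chain) can be sketched as follows. The states, transition values, and function name below are illustrative assumptions, not the paper's parametric or bounded analysis.

```python
import numpy as np

def reachability_probability(P, target, absorbing_fail=None, iters=10000, tol=1e-12):
    """Probability of eventually reaching a target state from each state of a
    discrete-time Markov chain with transition matrix P (rows sum to 1)."""
    n = P.shape[0]
    prob = np.zeros(n)
    prob[list(target)] = 1.0
    for _ in range(iters):
        new = P @ prob
        new[list(target)] = 1.0              # target states stay satisfied
        if absorbing_fail:
            new[list(absorbing_fail)] = 0.0  # violation states never recover
        if np.max(np.abs(new - prob)) < tol:
            break
        prob = new
    return prob

# Toy example: state 0 = nominal sensing, state 1 = sensor glitch,
# state 2 = spec satisfied, state 3 = spec violated (both absorbing).
P = np.array([
    [0.90, 0.05, 0.05, 0.00],
    [0.20, 0.50, 0.20, 0.10],
    [0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])
print(reachability_probability(P, target={2}, absorbing_fail={3})[0])
```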

5.
This paper studies compositional reasoning theories for stochastic systems. A specification theory combines notions of specification and implementation with satisfaction and refinement relations, and a set of operators that together support stepwise design. One of the first behavioral specification theories introduced for stochastic systems is that of Interval Markov Chains (IMCs), which are Markov chains whose probability distributions are replaced by conjunctions of intervals. In this paper, we show that IMCs are not closed under conjunction, which gives a formal proof of a conjecture made in several recent works. To address this problem, we suggest working with Constraint Markov Chains (CMCs), another specification theory in which intervals are replaced with general constraints. In contrast to IMCs, CMCs can be shown to enjoy the closure properties of a specification theory. In addition, we propose aggressive abstraction procedures for CMCs. Such abstractions can be used either to combat the state-space explosion problem or to simplify complex constraints. In particular, we show that, under some assumptions, the behavior of any CMC can be abstracted by an IMC. Finally, we propose an algorithm for counterexample generation in case a refinement between two CMCs does not hold. We present a tool that implements our results. Implementing CMCs is a complex process that relies on recent advances in decision procedures for the theory of reals.
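The difference between interval and general constraints can be illustrated with a small hedged sketch: checking whether a concrete distribution satisfies one state's outgoing intervals (IMC) versus an arbitrary predicate over the distribution (CMC). This is only an illustration of the two notions, not the paper's refinement or abstraction algorithms.

```python
def satisfies_imc_row(p, intervals, tol=1e-9):
    """Does a concrete distribution p satisfy one state's outgoing intervals of
    an Interval Markov Chain, with intervals[i] = (low_i, high_i)?"""
    if abs(sum(p) - 1.0) > tol:
        return False
    return all(lo - tol <= pi <= hi + tol for pi, (lo, hi) in zip(p, intervals))

def satisfies_cmc_row(p, constraint, tol=1e-9):
    """A Constraint Markov Chain row replaces intervals with an arbitrary
    predicate over the distribution, e.g. linear or polynomial constraints."""
    return abs(sum(p) - 1.0) <= tol and constraint(p)

# Example: the CMC constraint p[0] + p[1] >= 0.8 and p[0] <= 2 * p[1]
# cannot be expressed as independent per-entry intervals.
print(satisfies_imc_row([0.5, 0.3, 0.2], [(0.4, 0.6), (0.2, 0.4), (0.0, 0.3)]))
print(satisfies_cmc_row([0.5, 0.3, 0.2], lambda p: p[0] + p[1] >= 0.8 and p[0] <= 2 * p[1]))
```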

6.
In this paper, we propose stochastic sketching, a method for global optimization based on a simulation of human behaviour. Rather than modeling the thoughts and strategies of human beings and applying an artificial model to problems, stochastic sketching tries to do things simply, in the human way, without much interpretation. We introduce and discuss in detail the concepts and components essential to stochastic sketching, including the sampling guide, zooming controller, sketching model, precision threshold, and satisfaction probability. Experimental results of stochastic sketching on several test functions and a set of recommended parameter settings are given, as well as preliminary comparisons between stochastic sketching and related evolutionary algorithms, including evolution strategies, evolutionary programming, and genetic algorithms.
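The following is a hedged, generic sample-and-zoom sketch in the spirit of the components named above (sampling guide, zooming controller, precision threshold); the parameter names and behaviour are assumptions for illustration, not the authors' method.

```python
import random

def sample_and_zoom(f, bounds, samples_per_round=50, zoom=0.5,
                    precision=1e-6, max_rounds=100):
    """Minimize f over a box by repeated uniform sampling followed by zooming
    the search box around the best sample found so far."""
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x, best_f = None, float("inf")
    for _ in range(max_rounds):
        for _ in range(samples_per_round):
            x = [random.uniform(l, h) for l, h in zip(lo, hi)]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        widths = [(h - l) * zoom / 2 for l, h in zip(lo, hi)]
        if max(widths) < precision:            # precision threshold reached
            break
        # Zoom: shrink the box around the best point, clamped to the original bounds.
        lo = [max(b[0], xi - w) for b, xi, w in zip(bounds, best_x, widths)]
        hi = [min(b[1], xi + w) for b, xi, w in zip(bounds, best_x, widths)]
    return best_x, best_f

# Example: 2-D sphere test function, minimum at the origin.
print(sample_and_zoom(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)]))
```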

7.
Structural and behavioral parameters of many real networks, such as social networks, are unpredictable, uncertain, and time-varying, and for these reasons deterministic graphs for modeling such networks are too restrictive to solve most real-network problems. It seems that stochastic graphs, in which the weights associated with the vertices are random variables, may be better graph models for real-world networks. Once we use a stochastic graph as the model for a network, every feature of the graph, such as path, spanning tree, clique, dominating set, and cover set, should be treated as a stochastic feature. For example, if a stochastic graph is chosen as the graph model of an online social network and community structure is defined in terms of cliques, the concept of a stochastic clique may be used to study the properties of community structures; if spreading of influence is defined according to the coverage of influential users, the concept of stochastic vertex covering may be used to study the spread of influence. In this article, minimum vertex covering in stochastic graphs is first defined, and then four learning-automata-based algorithms are proposed for solving the minimum vertex-covering problem in stochastic graphs where the probability distribution functions of the weights associated with the vertices of the graph are unknown. It is shown that, through a proper choice of the parameters of the proposed algorithms, one can make the probability of finding a minimum vertex cover in a stochastic graph as close to unity as possible. Experimental results on synthetic stochastic graphs reveal that, at a certain confidence level, the proposed algorithms significantly outperform the standard sampling method in terms of the number of samples that need to be taken from the vertices of the stochastic graph.
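As a simple illustration of treating vertex cover as a stochastic feature, the sketch below samples unknown vertex-weight distributions and averages the cost of a greedy cover; it is a baseline illustration under assumed samplers, not the learning-automata-based algorithms proposed in the article.

```python
import random

def greedy_vertex_cover(edges, weights):
    """Greedy weighted vertex cover: repeatedly pick the vertex with the
    smallest weight-to-degree ratio among uncovered edges (simple baseline)."""
    uncovered = set(edges)
    cover = set()
    while uncovered:
        degree = {}
        for u, v in uncovered:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        best = min(degree, key=lambda v: weights[v] / degree[v])
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover

def estimate_cover_cost(edges, weight_samplers, n_samples=200):
    """Stochastic graph: each vertex weight is a random variable with unknown
    distribution, represented here by a sampler. Estimate the expected cost of
    a greedy cover by averaging over sampled weight realisations."""
    total = 0.0
    for _ in range(n_samples):
        w = {v: sampler() for v, sampler in weight_samplers.items()}
        cover = greedy_vertex_cover(edges, w)
        total += sum(w[v] for v in cover)
    return total / n_samples

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
samplers = {v: (lambda mu=1.0 + 0.5 * v: random.gauss(mu, 0.2)) for v in range(4)}
print(estimate_cover_cost(edges, samplers))
```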

8.
In the process of learning the naive Bayes classifier, estimating probabilities from a given set of training samples is crucial. However, when the training samples are not adequate, probability estimation methods will inevitably suffer from the zero-frequency problem. To avoid this problem, Laplace-estimate and M-estimate are the two main methods used to estimate probabilities. The settings of two important parameters in these methods, m (an integer) and p (a probability), have a direct impact on the experimental results. In this paper, we study the existing probability estimation methods and carry out a parameter cross-test by experimentally analyzing the performance of M-estimate with different settings of the two parameters m and p. These experiments show that the optimal parameter values vary across data sets. Motivated by these results, we propose an estimation model based on self-adaptive differential evolution, and then propose an approach to calculate the optimal m and p values for each conditional probability in order to avoid the zero-frequency problem. We experimentally test our approach in terms of classification accuracy on 36 benchmark data sets from the machine learning repository, and compare it to naive Bayes with Laplace-estimate and with M-estimate under a variety of parameter settings from the literature, as well as the possible optimal settings identified in our experimental analysis. The experimental results show that the estimation model is efficient and that our proposed approach significantly outperforms the traditional probability estimation approaches, especially for large data sets (those with large numbers of instances and attributes).
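For concreteness, the two estimators discussed above can be written in their standard textbook form, with a toy zero-frequency example; this is a generic sketch, not the paper's self-adaptive differential evolution model.

```python
def laplace_estimate(count, total, num_values):
    """Laplace (add-one) estimate: (n_c + 1) / (N + k), where k is the number
    of possible attribute values; avoids zero probabilities."""
    return (count + 1) / (total + num_values)

def m_estimate(count, total, m, p):
    """M-estimate: (n_c + m * p) / (N + m), with prior probability p and
    equivalent sample size m; Laplace is the special case m = k, p = 1 / k."""
    return (count + m * p) / (total + m)

# Example: attribute value never observed with a class (count = 0) among 20
# class samples, for an attribute with 3 possible values.
print(laplace_estimate(0, 20, 3))        # 1 / 23, not 0
print(m_estimate(0, 20, m=3, p=1 / 3))   # same as Laplace in this case
```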

9.
10.

Machine learning algorithms typically rely on optimization subroutines and are well known to provide very effective outcomes for many types of problems. Here, we flip the reliance and ask the reverse question: can machine learning algorithms lead to more effective outcomes for optimization problems? Our goal is to train machine learning methods to automatically improve the performance of optimization and signal processing algorithms. As a proof of concept, we use our approach to improve two popular data processing subroutines in data science: stochastic gradient descent and greedy methods in compressed sensing. We provide experimental results demonstrating that the answer is "yes": machine learning algorithms do lead to more effective outcomes for optimization problems, and we show the future potential of this research direction. In addition to our experimental work, we prove relevant Probably Approximately Correct (PAC) learning theorems for our problems of interest. More precisely, we show that there exists a learning algorithm that, with high probability, will select the algorithm that optimizes the average performance on an input set of problem instances drawn from a given distribution.
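A minimal sketch of the "select the configuration that optimizes average performance on a distribution of instances" idea, applied to SGD step-size selection on random least-squares problems; the problem generator, candidate grid, and function names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_problem(n=100, d=10, noise=0.1):
    """Random least-squares instance, standing in for an instance drawn from
    the abstract's distribution over problem instances."""
    A = rng.normal(size=(n, d))
    x_true = rng.normal(size=d)
    b = A @ x_true + noise * rng.normal(size=n)
    return A, b

def sgd_loss(A, b, step, epochs=5):
    """Final mean-squared loss of plain SGD on least squares with a fixed step size."""
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = 2 * (A[i] @ x - b[i]) * A[i]
            x -= step * grad
    return np.mean((A @ x - b) ** 2)

# "Learn" the step size: pick the candidate with the best average performance
# over sampled training instances, then reuse it on a fresh instance.
candidates = [1e-4, 1e-3, 1e-2, 5e-2]
train = [make_problem() for _ in range(20)]
avg = {s: np.mean([sgd_loss(A, b, s) for A, b in train]) for s in candidates}
best_step = min(avg, key=avg.get)
A_test, b_test = make_problem()
print(best_step, sgd_loss(A_test, b_test, best_step))
```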


