Similar Documents
Found 10 similar documents (search time: 255 ms)
1.
In genetic search algorithms and optimization routines, the representation of the mutation and crossover operators typically defaults to the canonical basis. We show that this choice can strongly influence the effectiveness of the search algorithm. We then pose the question of how to find a basis in which the search algorithm is most effective. The conjugate schema is introduced as a general mathematical construct and is shown to separate a function into lower-dimensional functions whose sum is the original function. On a test suite of 12 functions, using conjugate schema improves the performance of the search algorithm on 10. Finally, a rigorous but abbreviated mathematical derivation is given in the appendices.
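To make the basis-change idea concrete, here is a minimal Python sketch of one-point crossover performed in an alternative orthonormal basis rather than the canonical one. The random rotation stands in for the paper's conjugate-schema basis, which is derived from the function itself; all names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of B form an orthonormal basis of R^n.  A random rotation is
# used purely for illustration; the paper derives a problem-specific
# "conjugate" basis instead.
n = 4
B, _ = np.linalg.qr(rng.normal(size=(n, n)))

def crossover_in_basis(x, y, B):
    """One-point crossover on coordinates expressed in the basis B."""
    u, v = B.T @ x, B.T @ y          # coordinates in the new basis
    k = rng.integers(1, len(u))      # crossover point
    child = np.concatenate([u[:k], v[k:]])
    return B @ child                 # map back to the canonical basis

x, y = rng.normal(size=n), rng.normal(size=n)
print(crossover_in_basis(x, y, B))
```

With B equal to the identity, this reduces to ordinary one-point crossover in the canonical basis.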

2.
What is a sorting function: not a sorting function for a given ordering relation, but a sorting function with nothing given? Formulating four basic properties of sorting algorithms as defining requirements, we arrive at intrinsic notions of sorting and stable sorting: a function is a sorting function if and only if it is an intrinsically parametric permutation function, and it is a stable sorting function if and only if it is an intrinsically stable permutation function. We show that ordering relations can be represented isomorphically as inequality tests, comparators, and stable sorting functions, each with its own intrinsic characterization, which in turn provides a basis for run-time monitoring of their expected I/O behaviors. The isomorphisms are parametrically polymorphically definable, which shows that it suffices to provide any one of the representations, since the others are derivable without compromising data abstraction. Finally, we point out that stable sorting functions, as default representations of ordering relations, have the advantage of permitting linear-time sorting algorithms; inequality tests forfeit this possibility.
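The run-time monitoring idea can be sketched directly: given a purported stable sorting function, check the defining I/O properties on each call, namely that the output is a permutation of the input, that it is ordered, and that ties keep their input order. A minimal Python sketch with hypothetical helper names, assuming the function under test accepts a key argument:

```python
from collections import Counter

def check_sorting_function(sort, xs, key=lambda v: v):
    """Monitor the basic I/O properties of a stable sorting function."""
    tagged = list(enumerate(xs))                      # remember input positions
    out = sort(tagged, key=lambda p: key(p[1]))
    assert Counter(id(p) for p in out) == Counter(id(p) for p in tagged), \
        "output is not a permutation of the input"
    assert all(key(out[i][1]) <= key(out[i + 1][1])
               for i in range(len(out) - 1)), "output is not ordered"
    assert all(out[i][0] < out[i + 1][0]              # stability: ties keep order
               for i in range(len(out) - 1)
               if key(out[i][1]) == key(out[i + 1][1])), "not stable"
    return [v for _, v in out]

print(check_sorting_function(sorted, [3, 1, 2, 1]))   # [1, 1, 2, 3]
```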

3.
The mathematical theory of evidence is a generalization of the Bayesian theory of probability. It is one of the primary tools for knowledge representation and for uncertain and probabilistic reasoning, and it has found many applications. Using this theory to solve a specific problem depends critically on the availability of a mass function (or basic belief assignment). In this paper, we consider the important problem of how to systematically derive mass functions from common multivariate data spaces, and the ensuing problem of how to compute the various forms of belief function efficiently. We also consider how such a systematic approach can be used in practical pattern recognition problems. More specifically, we propose a novel method by which a mass function can be systematically derived from multivariate data, and we present new methods that exploit the algebraic structure of a multivariate data space to compute various belief functions, including the belief, plausibility, and commonality functions, in polynomial time. We further consider the use of commonality as an equality check and develop a plausibility-based classifier. Experiments show that the equality checker and the classifier are comparable to state-of-the-art algorithms.
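The three belief functions mentioned have simple set-theoretic definitions that a short sketch makes explicit. The naive enumeration below is exponential in the size of the frame in general; the paper's contribution is computing these quantities in polynomial time by exploiting the algebraic structure of multivariate data spaces. The frame and masses here are made-up examples.

```python
frame = frozenset({'a', 'b', 'c'})            # frame of discernment
# A mass function (basic belief assignment); masses sum to 1.
m = {frozenset({'a'}): 0.4,
     frozenset({'a', 'b'}): 0.3,
     frame: 0.3}

def belief(A):        # Bel(A) = sum of m(B) over nonempty B with B ⊆ A
    return sum(v for B, v in m.items() if B and B <= A)

def plausibility(A):  # Pl(A) = sum of m(B) over B with B ∩ A ≠ ∅
    return sum(v for B, v in m.items() if B & A)

def commonality(A):   # Q(A) = sum of m(B) over B with B ⊇ A
    return sum(v for B, v in m.items() if A <= B)

A = frozenset({'a', 'b'})
print(belief(A), plausibility(A), commonality(A))   # 0.7 1.0 0.6
```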

4.
Linear combinations of translates of a given basis function have long been used successfully to solve scattered data interpolation and approximation problems. We demonstrate how the classical basis function approach can be transferred to the projective space ℙ^{d−1}. To be precise, we use concepts from harmonic analysis to identify positive definite and strictly positive definite zonal functions on ℙ^{d−1}. These can then be applied to solve problems arising in tomography, since the data given there consists of integrals over lines. Enhancing known reconstruction techniques with a scattered data interpolant in the “space of lines” naturally leads to reconstruction algorithms well suited to limited-angle and limited-range tomography. In the medical setting, algorithms for such incomplete-data problems are desirable, as using them can limit the radiation dosage.
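The mechanics of basis-function (kernel) interpolation of scattered data can be sketched in a few lines. The Gaussian kernel on R^d below is only a stand-in: the paper's zonal kernels are functions of the inner product on ℙ^{d−1}, but the interpolation step, solving one symmetric linear system, is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel(X, Y):
    """Stand-in positive definite kernel (Gaussian on R^d)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2)

X = rng.normal(size=(20, 2))          # scattered data sites
f = np.sin(X[:, 0]) * X[:, 1]         # observed values
c = np.linalg.solve(kernel(X, X), f)  # interpolation coefficients

def interpolant(x):
    return kernel(np.atleast_2d(x), X) @ c

print(interpolant(X[0])[0], f[0])     # the interpolant matches at a data site
```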

5.
Lee and Leung made consistency enforcement of global cost functions in Weighted Constraint Satisfaction Problems (WCSPs) practical. The main idea of their approach lies in deriving polynomial-time algorithms for computing the minimum cost of global cost functions. In this paper, we investigate how soft arc consistency can also be applied to global cost functions with no known efficient minimum-cost computation algorithm. We propose polytime linear projection-safe (PLPS) cost functions, which have a polynomial-size integer linear formulation and maintain this property across projection and extension operations. We observe that the minimum of the linear relaxation gives a good approximation to the minimum of the integer formulation, and we use this as the basis for enforcing relaxed forms of existing soft arc consistencies. Using the linear formulations, we can easily enforce conjunctions of overlapping PLPS cost functions, which give stronger pruning power. We further propose polytime integral linear projection-safe (PILPS) cost functions, which are PLPS cost functions whose linear relaxation is guaranteed to have an integral solution. We prove theorems comparing the consistency strengths of PLPS and PILPS cost functions and their conjunctions. Extensive experiments compare our proposed algorithms against state-of-the-art consistency enforcement algorithms for global cost functions and against integer programming. The empirical results agree with our theoretical predictions and confirm orders-of-magnitude improvements in pruning and runtime.
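The core trick, bounding the minimum of an integer linear formulation by its linear relaxation, is easy to sketch with an off-the-shelf LP solver. The tiny model below is hypothetical and is not the paper's WCSP encoding; for a PILPS cost function the relaxation's optimum would additionally be guaranteed integral, as it happens to be here.

```python
import numpy as np
from scipy.optimize import linprog

# minimize c.x  s.t.  A_eq x = b_eq,  0 <= x <= 1  (integrality relaxed)
c = np.array([3.0, 1.0, 2.0])        # costs of three mutually exclusive choices
A_eq = np.array([[1.0, 1.0, 1.0]])   # exactly one choice is made
b_eq = np.array([1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3)
print(res.fun)   # LP minimum: a lower bound on the integer minimum (here exact)
```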

6.
Our interest is to characterize the spline-like integer-shift-invariant bases capable of reproducing exponential polynomial curves. We prove that any compact-support function that reproduces a subspace of the exponential polynomials can be expressed as the convolution of an exponential B-spline with a compact-support distribution. As a direct consequence of this factorization theorem, we show that the minimal-support basis functions of that subspace are linear combinations of derivatives of exponential B-splines. These minimal-support basis functions form a natural multiscale hierarchy, which we utilize to design fast multiresolution algorithms and subdivision schemes for the representation of closed geometric curves. This makes them attractive from a computational point of view. Finally, we illustrate our scheme by constructing minimal-support bases that reproduce ellipses and higher-order harmonic curves.
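The factorization result rests on the fact that higher-order exponential B-splines are convolutions of first-order ones. A numerical sketch (not the paper's minimal-support construction): with the conjugate parameter pair ±iω, the order-2 exponential B-spline is real and reproduces sin/cos, hence ellipses. All parameters below are illustrative.

```python
import numpy as np

# First-order exponential B-spline with parameter a: e^{a t} on [0, 1).
# Higher orders are convolutions of first-order factors.
w = 2 * np.pi / 8          # e.g. an 8-point closed curve
dt = 1e-3
t = np.arange(0, 1, dt)

b_plus = np.exp(1j * w * t)             # parameter +i*w
b_minus = np.exp(-1j * w * t)           # parameter -i*w
b2 = np.convolve(b_plus, b_minus) * dt  # order-2 exponential B-spline on [0, 2]

print(np.abs(b2.imag).max())            # ~0: the kernel is real
# closed form for comparison on [0, 1]: sin(w*t)/w
print(b2[len(t) // 2].real, np.sin(w * 0.5) / w)
```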

7.
We provide an analytical comparison between discounted and average-reward temporal-difference (TD) learning with linearly parameterized approximations. We first consider the asymptotic behavior of the two algorithms and show that, as the discount factor approaches 1, the value function produced by discounted TD approaches the differential value function generated by average-reward TD. We further argue that if the constant function, which is typically used as one of the basis functions in discounted TD, is appropriately scaled, the transient behaviors of the two algorithms are also similar. Our analysis suggests that the computational advantages of average-reward TD observed in some prior empirical work may have been caused by inappropriate basis function scaling rather than by fundamental differences in problem formulation or algorithm.
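For concreteness, here is a discounted TD(0) sketch with linear features on a made-up three-state Markov reward process, with the constant basis function scaled by (1 − γ) in the spirit of the scaling discussion; all numbers are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny 3-state Markov reward process.
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.3, 0.0, 0.7]])
r = np.array([1.0, 0.0, 2.0])
gamma = 0.99

# Linear features; the first column is the constant basis function,
# scaled by (1 - gamma) -- an illustrative choice.
phi = np.column_stack([np.full(3, 1.0 - gamma), np.eye(3)[:, :2]])

theta = np.zeros(phi.shape[1])
s, alpha = 0, 0.05
for _ in range(100_000):                       # discounted TD(0)
    s2 = rng.choice(3, p=P[s])
    delta = r[s] + gamma * phi[s2] @ theta - phi[s] @ theta
    theta += alpha * delta * phi[s]
    s = s2

print(phi @ theta)   # approximate discounted value function
```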

8.
Multi-dimensional transfer functions are commonly used in rectilinear volume rendering to effectively portray materials, material boundaries, and even subtle variations along boundaries. However, most unstructured-grid rendering algorithms employ only one-dimensional transfer functions. This paper proposes a novel pre-integrated Projected Tetrahedra (PT) rendering technique that applies bivariate transfer functions on unstructured grids. For each type of bivariate transfer function, an analytical form that pre-integrates the contribution of a ray segment in one tetrahedron is derived; it can be precomputed as a lookup table used to compute the color and opacity of a projected tetrahedron on the fly. Further, we show how to approximate the integral using the pre-integration method for faster unstructured-grid rendering. We demonstrate the advantages of our approach with a variety of examples and comparisons with one-dimensional transfer functions.
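The pre-integration idea is easiest to see in the one-dimensional case: tabulate, for every pair of front/back scalar values, the opacity a ray segment accumulates when the scalar varies linearly along it. A minimal sketch with an invented extinction transfer function and fixed segment length (the paper extends the table to bivariate transfer functions):

```python
import numpy as np

N, L = 64, 1.0
s = np.linspace(0.0, 1.0, N)
tau = 5.0 * s ** 2                  # illustrative extinction function tau(s)

# Cumulative integral of tau lets every (s_front, s_back) pair be
# pre-integrated: alpha = 1 - exp(-L * (T(s_b) - T(s_f)) / (s_b - s_f))
T = np.concatenate([[0.0],
                    np.cumsum((tau[1:] + tau[:-1]) / 2) * (s[1] - s[0])])

def preintegrated_alpha(i_front, i_back):
    if i_front == i_back:                       # constant scalar along segment
        return 1.0 - np.exp(-L * tau[i_front])
    avg = (T[i_back] - T[i_front]) / (s[i_back] - s[i_front])
    return 1.0 - np.exp(-L * avg)

table = np.array([[preintegrated_alpha(i, j) for j in range(N)]
                  for i in range(N)])
print(table[10, 50])   # looked up per projected-tetrahedron ray segment
```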

9.
Label embedding (LE) is an important family of multi-label classification algorithms that digest the label information jointly for better performance. Different real-world applications evaluate performance by different cost functions of interest. Current LE algorithms often aim to optimize one specific cost function, but they can suffer from bad performance with respect to other cost functions. In this paper, we resolve the performance issue by proposing a novel cost-sensitive LE algorithm that takes the cost function of interest into account. The proposed algorithm, cost-sensitive label embedding with multidimensional scaling (CLEMS), approximates the cost information with the distances of the embedded vectors by using the classic multidimensional scaling approach for manifold learning. CLEMS is able to deal with both symmetric and asymmetric cost functions, and effectively makes cost-sensitive decisions by nearest-neighbor decoding within the embedded vectors. We derive theoretical results that justify how CLEMS achieves the desired cost-sensitivity. Furthermore, extensive experimental results demonstrate that CLEMS is significantly better than a wide spectrum of existing LE algorithms and state-of-the-art cost-sensitive algorithms across different cost functions.
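The embedding-and-decoding mechanics of such a cost-sensitive scheme can be sketched with classic MDS: embed candidate label vectors so that embedded distances mimic a monotone transform of the costs, then decode predictions by nearest neighbor. The cost matrix below is random and the regression stage from input features to embedded vectors is omitted; this is a sketch of the idea, not CLEMS itself.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(3)

# Hypothetical cost matrix over 4 candidate label vectors: C[i, j] is the
# cost of predicting j when the truth is i (symmetrized for classic MDS).
C = rng.uniform(0, 1, size=(4, 4))
np.fill_diagonal(C, 0.0)
D = np.sqrt((C + C.T) / 2)           # monotone transform to a dissimilarity

mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
Z = mds.fit_transform(D)             # embedded label vectors

def decode(z_pred):
    """Nearest-neighbor decoding in the embedded space."""
    return int(np.argmin(((Z - z_pred) ** 2).sum(axis=1)))

print(decode(Z[2] + 0.01 * rng.normal(size=2)))   # -> 2
```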

10.
It is widely assumed, and observed in experiments, that the use of diversity mechanisms in evolutionary algorithms may have a great impact on their running time. Up to now, there has been no rigorous analysis pointing out how different diversity mechanisms influence runtime behavior. We consider evolutionary algorithms that differ from each other in the way they ensure diversity and point out situations where the right mechanism is crucial for the success of the algorithm. The considered evolutionary algorithms diversify the population either with respect to the search points or with respect to function values. Investigating simple plateau functions, we show that using the “right” diversity strategy makes the difference between an exponential and a polynomial runtime. We then examine how the drawback of the “wrong” diversity mechanism can be compensated for by increasing the population size.
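A (μ+1) EA sketch with the two diversity mechanisms from the abstract, one comparing genotypes (search points) and one comparing fitness values, pluggable as a parameter. The fitness function and all parameters are illustrative (OneMax rather than the paper's plateau functions, where the choice decides between polynomial and exponential runtime).

```python
import numpy as np

rng = np.random.default_rng(4)

def mu_plus_one_ea(f, n=20, mu=5, steps=2000, diversity='genotype'):
    """(mu+1) EA on bitstrings: 'genotype' forbids duplicate search
    points, 'fitness' forbids duplicate function values."""
    pop = [rng.integers(0, 2, n) for _ in range(mu)]
    for _ in range(steps):
        parent = pop[rng.integers(mu)]
        child = parent ^ (rng.random(n) < 1 / n)     # standard bit mutation
        if diversity == 'genotype':
            dup = any((child == x).all() for x in pop)
        else:
            dup = any(f(child) == f(x) for x in pop)
        if not dup:                                  # accept, drop the worst
            pop.append(child)
            pop.sort(key=f)
            pop.pop(0)
    return max(f(x) for x in pop)

onemax = lambda x: int(x.sum())
print(mu_plus_one_ea(onemax, diversity='genotype'),
      mu_plus_one_ea(onemax, diversity='fitness'))
```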
