20 similar documents found; search took 15 ms
1.
2.
In 1994, S.G. Matthews introduced the notion of partial metric space in order to obtain a suitable mathematical tool for program
verification (Ann. N.Y. Acad. Sci. 728:183–197, 1994). He gave an application of this new structure to parallel computing by means of a partial metric version of the celebrated
Banach fixed point theorem (Theor. Comput. Sci. 151:195–205, 1995). Later on, M.P. Schellekens introduced the theory of complexity (quasi-metric) spaces as a part of the development of a
topological foundation for the asymptotic complexity analysis of programs and algorithms (Electron. Notes Theor. Comput. Sci.
1:211–232, 1995). The applicability of this theory to the asymptotic complexity analysis of Divide and Conquer algorithms was also illustrated
by Schellekens. In particular, he gave a new proof, based on the use of the aforenamed Banach fixed point theorem, of the
well-known fact that the Mergesort algorithm has an optimal asymptotic average-case running time. In this paper, motivated
by the utility of partial metrics in Computer Science, we discuss whether the Matthews fixed point theorem is a suitable tool
to analyze the asymptotic complexity of algorithms in the spirit of Schellekens. Specifically, we show that a slight modification
of the well-known Baire partial metric on the set of all words over an alphabet constitutes an appropriate tool to carry out
the asymptotic complexity analysis of algorithms via fixed point methods without the need for assuming the convergence condition
inherent to the definition of the complexity space in the Schellekens framework. Finally, in order to illustrate and to validate
the developed theory we apply our results to analyze the asymptotic complexity of Quicksort, Mergesort and Largesort algorithms.
Concretely, through our new approach we retrieve the well-known facts that the running times of Quicksort (worst-case behaviour), Mergesort and Largesort (average-case behaviour) are in the complexity classes O(n^2), O(n log_2(n)) and O(2(n-1) - log_2(n)), respectively.
3.
Vincent Gardeux Rachid Chelouah Patrick Siarry Fred Glover 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2011,15(11):2275-2285
This paper presents a performance study of a one-dimensional search algorithm for solving general high-dimensional optimization
problems. The proposed approach is a hybrid between a line search algorithm of Glover (The 3-2-3, stratified split and nested
interval line search algorithms. Research report, OptTek Systems, Boulder, CO, 2010) and an improved variant of a global method of Gardeux et al. (Unidimensional search for solving continuous high-dimensional
optimization problems. In: ISDA ’09: Proceedings of the 2009 ninth international conference on intelligent systems design
and applications, IEEE Computer Society, Washington, DC, USA, pp 1096–1101, 2009) that uses line search algorithms as subroutines. The resulting algorithm, called EM323, was tested on 19 scalable benchmark
functions, with a view to observing how optimization techniques for continuous optimization problems respond with increasing
dimension. To this end, we report the algorithm’s performance on the 50, 100, 200, 500 and 1,000-dimension versions of each
function. Computational results are given comparing our method with three leading evolutionary algorithms. Statistical analysis
discloses that our method outperforms the other methods by a significant margin.
4.
Differential Evolution (DE) is known for its simplicity and effectiveness as an evolutionary optimizer, and in recent years many researchers have focused on exploring it further. The objective of this paper is to show the evolutionary and
population dynamics for the empirical testing on 3-Parents Differential Evolution (3PDE) for unconstrained function optimization
(Teng et al. 2007). In this paper, 50 repeated evolutionary runs for each of 20 well-known benchmarks were carried out to test the proposed
algorithms against the original 4-parents DE algorithm. As a result of the observed evolutionary dynamics, 3PDE-SAF performed
the best among the preliminary proposed algorithms that included 3PDE-SACr and 3PDE-SACrF. Subsequently, 3PDE-SAF is chosen
for the self-adaptive population size for testing dynamic population sizing methods using the absolute (3PDE-SAF-Abs) and
relative (3PDE-SAF-Rel) population size encodings. The final result shows that 3PDE-SAF-Rel produced a better performance
and convergence overall compared to all the other proposed algorithms, including the original DE. In terms of population dynamics,
the population size in 3PDE-SAF-Abs exhibited disadvantageously high dynamics that caused less efficient results. On the other
hand, the population size in 3PDE-SAF-Rel was observed to be approximately constant at ten times the number of variables being
optimized, hence giving a better and more stable performance.
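The 3PDE variants above build on the classical DE scheme. As a hedged illustration, a minimal classic DE/rand/1/bin optimizer (our sketch; not the paper's 3-parents or self-adaptive variants, and the function name `de_optimize` is our own) might look like:

```python
import random

def de_optimize(f, bounds, pop_size=30, F=0.5, Cr=0.9, generations=200):
    """Minimal DE/rand/1/bin sketch for minimizing f over box bounds."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct parents, none equal to the target index i
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # force at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < Cr or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one replacement
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

The self-adaptive variants in the paper additionally evolve F, Cr, and (in 3PDE-SAF-Abs/Rel) the population size itself.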
5.
Neutral genotype-phenotype mappings can be observed in natural evolution and are often used in evolutionary computation. In this article, important aspects of such encodings are analyzed. First, it is shown that in the absence of external control, neutrality allows a variation of the search distribution independent of phenotypic changes. In particular, neutrality is necessary for self-adaptation, which is used in a variety of algorithms from all main paradigms of evolutionary computation to increase efficiency. Second, the average number of fitness evaluations needed to find a desirable (e.g., optimally adapted) genotype is derived, depending on the number of desirable genotypes and the cardinality of the genotype space. It turns out that this number increases only marginally when neutrality is added to an encoding, provided that the fraction of desirable genotypes stays constant and that the number of these genotypes is not too small.
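A back-of-envelope version of the second result, under the simplifying assumption of uniform random sampling with replacement (our illustration, not the article's derivation): with a genotype space G of which D ⊆ G is desirable,

```latex
\Pr[\text{hit}] = \frac{|D|}{|G|},
\qquad
\mathbb{E}[\text{evaluations}] = \frac{|G|}{|D|}.
```

A neutral encoding that inflates |G| and |D| by the same factor leaves this ratio unchanged, consistent with the article's conclusion that the cost grows only marginally when the fraction of desirable genotypes stays constant.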
6.
7.
Anne-Muriel Arigon Maryvonne Miquel Anne Tchounikine 《Multimedia Tools and Applications》2007,35(1):91-108
Histogram feature representation is important in many classification applications for characterization of the statistical
distribution of different pattern attributes, such as the color and edge orientation distribution in images. While the construction
of these feature representations is simple, this very simplicity may compromise the classification accuracy in those cases
where the original histogram does not provide adequate discriminative information for making a reliable classification. In
view of this, we propose an optimization approach based on evolutionary computation (Back, Evolutionary algorithms in theory
and practice, Oxford University Press, New York, 1996; Fogel, Evolutionary computation: toward a new philosophy of machine intelligence, 2nd edn. IEEE, Piscataway, NJ 1998) to identify a suitable transformation on the histogram feature representation, such that the resulting classification performance
based on these features is maximally improved while the original simplicity of the representation is retained. To facilitate
this optimization process, we propose a hierarchical classifier structure to demarcate the set of categories in such a way
that the pair of category subsets with the highest level of dissimilarities is identified at each stage for partition. In
this way, the evolutionary search process for the required transformation can be considerably simplified due to the reduced
level of complexities in classification for two widely separated category subsets. The proposed approach is applied to two
problems in multimedia data classification, namely the categorization of 3D computer graphics models and image classification
in the JPEG compressed domain. Experimental results indicate that the evolutionary optimization approach, facilitated by the
hierarchical classification process, is capable of significantly improving the classification performance for both applications
based on the transformed histogram representations.
8.
《Journal of Computer and System Sciences》2007,73(1):123-136
Grover's search algorithm, one of the most popular quantum algorithms, provides a promising approach to NP search problems, but requires a large number of quantum bits (qubits). In this paper, a novel algorithm called quantum cooperative search is proposed to make Grover's search algorithm work on 3-SAT problems with a small number of qubits. The proposed algorithm replaces some qubits with classical bits and finds assignments to these classical bits using traditional 3-SAT algorithms, including evolutionary algorithms and heuristic local search algorithms. In addition, the optimal configuration of the proposed algorithm is suggested by mathematical analysis. The experimental results show that the quantum cooperative search algorithm composed of Grover's search and heuristic local search performs better than pure traditional 3-SAT algorithms in most cases. The mathematical analysis of the appropriate number of qubits is also verified by the experiments.
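One classical heuristic local-search component of the kind such a hybrid could plug in for the classical bits is WalkSAT. A minimal sketch (ours; the paper does not specify this exact procedure, and the function name and clause encoding are assumptions):

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5):
    """Minimal WalkSAT-style local search for 3-SAT.
    clauses: list of tuples of nonzero ints; literal v > 0 means variable v
    is true, v < 0 means variable |v| is false.
    Returns a satisfying assignment {var: bool} or None on failure."""
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign
        clause = random.choice(unsat)
        if random.random() < p:
            var = abs(random.choice(clause))  # random walk step
        else:
            # greedy step: flip the variable leaving fewest clauses unsatisfied
            def broken(v):
                assign[v] = not assign[v]
                count = sum(1 for c in clauses if not any(sat(l) for l in c))
                assign[v] = not assign[v]
                return count
            var = min((abs(l) for l in clause), key=broken)
        assign[var] = not assign[var]
    return None
```

In the hybrid, a routine like this would fix the classical bits while Grover's search handles the remaining qubits.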
9.
Yi Hong Qingsheng Ren Jin Zeng 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2007,11(2):115-122
In Darwinian-type binary-coded genetic algorithms, bits differ in importance and converge in a particular order. Inspired by this phenomenon, we propose an evolutionary algorithm called the Unit Bit Importance Evolutionary Algorithm (UBIEA). Compared with Darwinian-type genetic algorithms, UBIEA completely abandons commonly used genetic operators such as mutation and crossover. Its main idea is to detect bit importance, speed up the convergence of important bits, and maintain the diversity of unimportant bits. Although our theoretical analyses are based on the assumption that all bits are independent and have different salience, UBIEA may be applicable in a larger framework, which can be verified by numerical experiments.
10.
Jianlin Cheng Michael J. Sweredoski Pierre Baldi 《Data mining and knowledge discovery》2006,13(1):1-10
Protein domains are the structural and functional units of proteins. The ability to parse protein chains into different domains
is important for protein classification and for understanding protein structure, function, and evolution. Here we use machine
learning algorithms, in the form of recursive neural networks, to develop a protein domain predictor called DOMpro. DOMpro
predicts protein domains using a combination of evolutionary information in the form of profiles, predicted secondary structure,
and predicted relative solvent accessibility. DOMpro is trained and tested on a curated dataset derived from the CATH database.
DOMpro correctly predicts the number of domains for 69% of the combined dataset of single and multi-domain chains. DOMpro
achieves a sensitivity of 76% and specificity of 85% with respect to the single-domain proteins and sensitivity of 59% and
specificity of 38% with respect to the two-domain proteins. DOMpro also achieved a sensitivity and specificity of 71% and
71% respectively in the Critical Assessment of Fully Automated Structure Prediction 4 (CAFASP-4) (Fisher et al., 1999; Saini and Fischer, 2005) and was ranked among the top ab initio domain predictors. The DOMpro server, software, and dataset are available at http://www.igb.uci.edu/servers/psss.html.
11.
While there exist efficient algorithms to slice sequential programs precisely, there are only two algorithms for precise slicing
of concurrent interprocedural programs with recursive procedures (Krinke in Proc. ESEC/FSE’03, pp. 178–187, 2003; Nanda and Ramesh in ACM Toplas. 28(6):1088–1144, 2006). We present an empirical evaluation of both algorithms for Java. We demonstrate that both algorithms provide the same precision
up to the model of concurrency in use and show that the concurrency model has strong impact on slice precision and computation
costs. Furthermore, we extend both algorithms to support dynamic thread creation both in loops and recursion—a feature that
the original algorithms could not fully handle. The worst case complexity of the algorithms being exponential, we developed
several optimizations and compared these with each other and with algorithms that trade precision for speed. Finally, we show
that one algorithm may produce incorrect slices and present a remedy.
12.
In this paper, we consider multi-objective evolutionary algorithms for the Vertex Cover problem in the context of parameterized complexity. We consider two different measures for the problem. The first measure is a very natural multi-objective one for the use of evolutionary algorithms and takes into account the number of chosen vertices and the number of edges that remain uncovered. The second fitness function is based on a linear programming formulation and proves to give better results. We point out that both approaches lead to a kernelization for the Vertex Cover problem. Based on this, we show that evolutionary algorithms solve the Vertex Cover problem efficiently if the size of a minimum vertex cover is not too large, i.e., the expected runtime is bounded by O(f(OPT) · n^c), where c is a constant and f a function that only depends on OPT. This shows that evolutionary algorithms are randomized fixed-parameter tractable algorithms for the Vertex Cover problem.
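The first, bi-objective measure can be sketched as follows (our illustration; the toy (1+1)-style loop and the names `vc_fitness` and `evolve_cover` are assumptions, not the paper's algorithms):

```python
import random

def vc_fitness(edges, selection):
    """Bi-objective Vertex Cover fitness: (number of chosen vertices,
    number of edges left uncovered by the selection)."""
    chosen = {v for v, bit in enumerate(selection) if bit}
    uncovered = sum(1 for u, v in edges if u not in chosen and v not in chosen)
    return (sum(selection), uncovered)

def evolve_cover(edges, n, steps=2000):
    """Toy (1+1) EA with standard bit mutation: prefer fewer uncovered
    edges first, then a smaller cover."""
    x = [0] * n
    fx = vc_fitness(edges, x)
    for _ in range(steps):
        y = [bit ^ (random.random() < 1.0 / n) for bit in x]  # flip each bit w.p. 1/n
        fy = vc_fitness(edges, y)
        if (fy[1], fy[0]) <= (fx[1], fx[0]):  # lexicographic acceptance
            x, fx = y, fy
    return x, fx
```

On small instances this quickly drives the number of uncovered edges to zero and then shrinks the cover; the paper's analysis bounds the expected time of such processes in terms of OPT.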
13.
Ruben A. Gamboa 《Journal of Automated Reasoning》2009,43(2):139-172
Misra (ACM Trans Program Lang Syst 16(6):1737–1767, 1994) introduced the powerlist data structure, which is well suited to express recursive, data-parallel algorithms. Moreover,
Misra and other researchers have shown how powerlists can be used to prove the correctness of several algorithms. This success
has encouraged some researchers to pursue automated proofs of theorems about powerlists (Kapur 1997; Kapur and Subramaniam 1995, Form Methods Syst Des 13(2):127–158, 1998). In this paper, we show how ACL2 can be used to verify theorems about powerlists. We depart from previous approaches in
two significant ways. First, the powerlists we use are not the regular structures defined by Misra; that is, we do not require
powerlists to be balanced trees. As we will see, this complicates some of the proofs, but on the other hand it allows us to
state theorems that are otherwise beyond the language of powerlists. Second, we wish to prove the correctness of powerlist
algorithms as much as possible within the logic of powerlists. Previous approaches have relied on intermediate lemmas which
are unproven (indeed unstated) within the powerlist logic. However, we believe these lemmas must be formalized if the final
theorems are to be used as a foundation for subsequent work, e.g., in the verification of system libraries. In our experience,
some of these unproven lemmas presented the biggest obstacle to finding an automated proof. We illustrate our approach with
two case studies involving Batcher sorting and prefix sums.
14.
Vladimir Ya. Krakovsky 《Journal of Real-Time Image Processing》2006,1(2):153-161
The moving-window discrete Fourier transform (MWDFT) is a dynamic spectrum analysis in which the next analysis interval differs from the previous one by including the next signal sample and excluding the first one from the previous analysis interval (Dillard in IEEE Trans Inform Theory 13:2–6, 1967, Comput Elect Eng 1:143–152, 1973, USA Patent 4023028, May 10, 1977). Such a spectrum analysis is necessary for time–frequency localization of analyzed signals with given peculiarities (Tolimieri and An in Time–frequency representations. Birkhauser, Basel, 1998). Using the well-known fast Fourier transform (FFT) towards this aim is not effective. Recursive algorithms which use only one complex multiplication for computing one spectrum sample during each analysis interval are more effective. The author improved one algorithm so that it is possible to use only one complex multiplication for computing two, four, and even eight (for complex signals) spectrum samples simultaneously. Problems of realization and application of the MWDFT are also considered in the paper.
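The one-complex-multiplication-per-bin recursive update this family of algorithms relies on is the textbook sliding DFT (our sketch of the classic scheme, not the author's improved variant; the function name is ours):

```python
import cmath

def sliding_dft(signal, N):
    """Sliding (moving-window) DFT: after an O(N^2) start-up on the first
    window, each new sample updates every bin with one complex multiply:
        X_k <- (X_k - x_old + x_new) * exp(2j*pi*k/N)
    Yields the N-point DFT of each length-N window of the signal."""
    # direct DFT of the first window
    X = [sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / N)
             for n in range(N)) for k in range(N)]
    yield list(X)
    w = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]  # per-bin twiddles
    for m in range(N, len(signal)):
        x_old, x_new = signal[m - N], signal[m]
        X = [(X[k] - x_old + x_new) * w[k] for k in range(N)]
        yield list(X)
```

The paper's improvement amortizes that single multiplication over two, four, or eight spectrum samples at once.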
15.
We present a deterministic Logspace procedure, which, given a bipartite planar graph on n vertices, assigns O(log n) bits long weights to its edges so that the minimum weight perfect matching in the graph becomes unique. The Isolation Lemma as described in Mulmuley et al. (Combinatorica 7(1):105–131, 1987) achieves the same for general graphs using randomness, whereas we can do it deterministically when restricted to bipartite
planar graphs. As a consequence, we reduce both decision and construction versions of the perfect matching problem in bipartite
planar graphs to testing whether a matrix is singular, under the promise that its determinant is 0 or 1, thus obtaining a
highly parallel SPL algorithm for both decision and construction versions of the bipartite perfect matching problem. This improves the earlier known bounds of non-uniform SPL by Allender et al. (J. Comput. Syst. Sci. 59(2):164–181, 1999) and NC^2 by Miller and Naor (SIAM J. Comput. 24:1002–1017, 1995), and by Mahajan and Varadarajan (Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing (STOC), pp.
351–357, 2000). It also rekindles the hope of obtaining a deterministic parallel algorithm for constructing a perfect matching in non-bipartite
planar graphs, which has been open for a long time. Further we try to find the lower bound on the number of bits needed for
deterministically isolating a perfect matching. We show that our particular method for isolation will require Ω(log n) bits. Our techniques are elementary.
16.
To solve many-objective optimization problems (MaOPs) by evolutionary algorithms (EAs), maintaining convergence and diversity is essential and difficult. Improved multi-objective optimization evolutionary algorithms (MOEAs), usually based on the genetic algorithm (GA), have been applied to MaOPs; they use the crossover and mutation operators of GAs to generate new solutions. In this paper, a new approach based on decomposition and the MOEA/D framework is proposed: the model and clustering based estimation of distribution algorithm (MCEDA), where MOEA/D denotes the multi-objective evolutionary algorithm based on decomposition. MCEDA is a new estimation of distribution algorithm (EDA) framework, intended to extend the application of EDAs to MaOPs. MCEDA was implemented as two similar algorithms, MCEDA/B (based on a bits model) and MCEDA/RM (based on a regular model), to deal with MaOPs. In MCEDA, the problem is decomposed into several subproblems. For each subproblem, a clustering algorithm divides the population into several subgroups, and on each subgroup an estimation model is created to generate the new population. Two kinds of models are adopted: the newly proposed bits model and the regular model used in RM-MEDA (a regularity model based multi-objective estimation of distribution algorithm). The non-dominated selection operator is applied to improve convergence. The proposed algorithms have been tested on the DTLZ benchmark test suite for evolutionary algorithms. Comparison with several state-of-the-art algorithms indicates that the proposed MCEDA is a competitive and promising approach.
17.
In this paper we describe algorithms for computing the Burrows-Wheeler Transform (bwt) and for building (compressed) indexes in external memory. The innovative feature of our algorithms is that they are lightweight
in the sense that, for an input of size n, they use only n bits of working space on disk while all previous approaches use Θ(nlog n) bits. This is achieved by building the bwt directly without passing through the construction of the Suffix Array/Tree data structure. Moreover, our algorithms access
disk data only via sequential scans, thus they take full advantage of modern disk features that make sequential disk accesses
much faster than random accesses. We also present a scan-based algorithm for inverting the bwt that uses Θ(n) bits of working space, and a lightweight internal-memory algorithm for computing the bwt which is the fastest in the literature when the available working space is o(n) bits. Finally, we prove lower bounds on the complexity of computing and inverting the bwt via sequential scans in terms of the classic product: internal-memory space × number of passes over the disk data, showing
that our algorithms are within an O(log n) factor of the optimal.
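For orientation, the naive in-memory construction that the paper's lightweight external-memory algorithms are designed to avoid can be sketched as follows (purely illustrative rotation sorting, nothing like the paper's scan-based approach; function names are ours):

```python
def bwt(s, sentinel="\0"):
    """Naive BWT: sort all rotations of s + sentinel, output the last column.
    The sentinel is assumed to sort before every character of s."""
    t = s + sentinel
    rotations = sorted(t[i:] + t[:i] for i in range(len(t)))
    return "".join(r[-1] for r in rotations)

def ibwt(last, sentinel="\0"):
    """Invert the BWT by repeatedly prepending the last column and sorting
    (O(n^2 log n) time; fine for illustration only)."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(c + row for c, row in zip(last, table))
    row = next(r for r in table if r.endswith(sentinel))
    return row.rstrip(sentinel)
```

Both directions here need the whole text and all rotations in memory; the paper's contribution is computing and inverting the bwt with only n bits of working space on disk, via sequential scans.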
18.
We present an improved technique for data hiding in polygonal meshes, which is based on the work of Bogomjakov et al. (Comput.
Graph. Forum 27(2):637–642, 2008). Like their method, we use an arrangement of primitives relative to a reference ordering to embed a message. But instead of directly interpreting the index of a primitive in the reference ordering as the encoded/decoded bits, our method slightly modifies the mapping, doubling the chance of encoding an additional bit compared to Bogomjakov et al.'s
(Comput. Graph. Forum 27(2):637–642, 2008). We illustrate the inefficiency in the original mapping of Bogomjakov et al. (Comput. Graph. Forum 27(2):637–642, 2008) with an intuitive representation using a binary tree.
19.
Mutual exclusion is a fundamental distributed coordination problem. Shared-memory mutual exclusion research focuses on local-spin algorithms and uses the remote memory references (RMRs) metric. Attiya, Hendler, and Woelfel (40th STOC, 2008) established an Ω(log N) lower bound on the number of RMRs incurred by processes as they enter and exit the critical section, where N is the number of processes in the system. This matches the upper bound of Yang and Anderson (Distrib. Comput. 9(1):51–60,
1995). The upper and lower bounds apply for algorithms that only use read and write operations. The lower bound of Attiya et al.,
however, only holds for deterministic algorithms. The question of whether randomized mutual exclusion algorithms, using reads
and writes only, can achieve sub-logarithmic expected RMR complexity remained open. We answer this question in the affirmative
by presenting starvation-free randomized mutual exclusion algorithms for the cache coherent (CC) and the distributed shared
memory (DSM) model that have sub-logarithmic expected RMR complexity against the strong adversary. More specifically, each
process incurs an expected number of O(log N / log log N) RMRs per passage through the entry and exit sections, while in the worst case the number of RMRs is O(log N).
20.
Moving point object data can be analyzed through the discovery of patterns in trajectories. We consider the computational
efficiency of detecting four such spatio-temporal patterns, namely flock, leadership, convergence, and encounter, as defined
by Laube et al., Finding REMO—detecting relative motion patterns in geospatial lifelines, 201–214, (2004). These patterns are large enough subgroups of the moving point objects that exhibit similar movement in the sense of direction,
heading for the same location, and/or proximity. By the use of techniques from computational geometry, including approximation
algorithms, we improve the running time bounds of existing algorithms to detect these patterns.