Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Games provide competitive, dynamic environments that make ideal test beds for computational intelligence theories, architectures, and algorithms. Natural evolution can be considered to be a game in which the rewards for an organism that plays a good game of life are the propagation of its genetic material to its successors and its continued survival. In natural evolution, the fitness of an individual is defined with respect to its competitors and collaborators, as well as to the environment. Within the evolutionary computation (EC) literature, this is known as co-evolution and within this paradigm, expert game-playing strategies have been evolved without the need for human expertise.
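To make the notion of relative, co-evolutionary fitness concrete, the sketch below (a hypothetical toy example, not taken from the cited article; the game, operators, and parameters are all invented) scores each mixed rock-paper-scissors strategy only by its average payoff against opponents sampled from the same population, so fitness is defined with respect to competitors rather than a fixed objective.

    import random

    # Toy co-evolution sketch (hypothetical example): a strategy is a mixed
    # strategy over rock/paper/scissors, and fitness is measured only relative
    # to opponents drawn from the same population.
    def play(p, q):
        """Expected payoff of mixed strategy p against q (+1 win, -1 loss)."""
        payoff = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # rows: p's move, cols: q's move
        return sum(p[i] * q[j] * payoff[i][j] for i in range(3) for j in range(3))

    def normalise(v):
        s = sum(v)
        return [x / s for x in v]

    def mutate(p, sigma=0.1):
        return normalise([max(1e-6, x + random.gauss(0, sigma)) for x in p])

    random.seed(0)
    pop = [normalise([random.random() for _ in range(3)]) for _ in range(20)]

    for gen in range(50):
        def fitness(p):
            # Relative fitness: average payoff against sampled competitors.
            opponents = random.sample(pop, 5)
            return sum(play(p, q) for q in opponents) / 5
        parents = sorted(pop, key=fitness, reverse=True)[:10]   # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in range(10)]

    print("example evolved strategy:", [round(x, 2) for x in pop[0]])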

2.
CODE: a unified approach to parallel programming (total citations: 1; self-citations: 0; citations by others: 1)
Browne, J.C.; Azam, M.; Sobek, S. IEEE Software, 1989, 6(4): 10-18
The authors describe CODE (computation-oriented display environment), which can be used to develop modular parallel programs graphically in an environment built around fill-in templates. It also lets programs written in any sequential language be incorporated into parallel programs targeted for any parallel architecture. Broad expressive power was obtained in CODE by including abstractions of all the dependency types that occur in the widely used parallel-computation models and by keeping the form used to specify firing rules general. The CODE programming language is a version of generalized dependency graphs designed to encode the unified parallel-computation model. A simple example is used to illustrate the abstraction level in specifying dependencies and how they are separated from the computation-unit specification. The most important CODE concepts are described by developing a declarative, hierarchical program with complex firing rules and multiple dependency types.
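CODE itself is a graphical notation, but the underlying idea of a computation unit whose firing rule is specified separately from its computation can be sketched in ordinary code. The fragment below is a hypothetical dataflow-style illustration (the class name and the "all inputs present" rule are assumptions, not the CODE language itself).

    from collections import deque

    class Node:
        """A computation unit with input queues and a firing rule (hypothetical sketch)."""
        def __init__(self, name, inputs, compute):
            self.name = name
            self.queues = {i: deque() for i in inputs}  # one queue per input dependency
            self.compute = compute

        def ready(self):
            # Firing rule: fire when every input queue holds at least one value.
            return all(self.queues[i] for i in self.queues)

        def fire(self):
            args = {i: self.queues[i].popleft() for i in self.queues}
            return self.compute(**args)

    # A node that adds its two inputs once both dependencies have delivered data.
    add = Node("add", ["a", "b"], lambda a, b: a + b)
    add.queues["a"].append(3)
    print(add.ready())          # False: dependency "b" not satisfied yet
    add.queues["b"].append(4)
    if add.ready():
        print(add.fire())       # 7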

3.
4.
5.
Two process-algebraic approaches have been developed for comparing two bisimulation-equivalent processes with respect to speed: the approach of Moller/Tofts equips actions with lower time bounds, while that of Lüttgen/Vogler considers upper time bounds instead.

This article sheds new light on both approaches by testifying to their close relationship. We introduce a general, intuitive concept of "faster-than", which is formalised by a notion of amortised faster-than preorder. When closing this preorder under all contexts, exactly the two faster-than preorders investigated by Moller/Tofts and Lüttgen/Vogler arise. For processes incorporating both lower and upper time bounds we also show that the largest precongruence contained in the amortised faster-than preorder is not a proper preorder but a timed bisimulation. In the light of this result we systematically investigate under which circumstances the amortised faster-than preorder degrades to an equivalence.


6.
Evolutionary computation: comments on the history and current state (total citations: 9; self-citations: 0; citations by others: 9)
Evolutionary computation has started to receive significant attention during the last decade, although its origins can be traced back to the late 1950s. This article surveys the history as well as the current state of this rapidly growing field. We describe the purpose, the general structure, and the working principles of different approaches, including genetic algorithms (GA) (with links to genetic programming (GP) and classifier systems (CS)), evolution strategies (ES), and evolutionary programming (EP), by analysing and comparing their most important constituents (i.e., representations, variation operators, reproduction, and selection mechanisms). Finally, we give a brief overview of the manifold of application domains, although this necessarily must remain incomplete.
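The common structure that the survey identifies across GA, ES, and EP (a population, variation operators, and a selection mechanism) can be summarised in a minimal sketch such as the one below; the real-vector representation, Gaussian mutation, and the test objective are illustrative choices, not prescribed by the article.

    import random

    # Minimal evolutionary loop (illustrative): real-vector representation,
    # Gaussian mutation as the variation operator, truncation selection.
    def sphere(x):                      # test objective to be minimised
        return sum(v * v for v in x)

    def mutate(x, sigma=0.1):
        return [v + random.gauss(0, sigma) for v in x]

    random.seed(1)
    dim, mu, lam = 5, 10, 40
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]

    for generation in range(200):
        offspring = [mutate(random.choice(population)) for _ in range(lam)]
        # Selection: keep the mu best of parents plus offspring ((mu+lambda)-style).
        population = sorted(population + offspring, key=sphere)[:mu]

    print("best fitness:", round(sphere(population[0]), 6))

The approaches compared in the survey differ mainly in which representation, variation operators, and selection scheme they plug into a loop of this kind.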

7.
We apply evolutionary algorithms to the approximation of a three-dimensional image of a human face using a triangular mesh. The problem is how to locate a limited number of node points such that the mesh approximates the facial surface as closely as possible. Two evolutionary algorithms are implemented and compared. The first does selection and reproduction in the population of node points in a single triangulation. The second is a genetic algorithm in which a set of different triangulations is regarded as a population. We expect that such evolutionary computation can be used in other engineering applications which share the same problem of surface approximation.
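The core of such an approach is the fitness of a candidate set of node points, i.e., how closely a triangulation through those nodes reproduces the sampled surface. The sketch below is only an illustration of that idea (the toy surface, the sampling, and the error measure are assumptions, not the authors' experimental setup): it triangulates the chosen nodes and measures the squared height error at a dense set of reference samples.

    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    def mesh_error(node_xy, node_z, sample_xy, sample_z):
        """Mean squared error of the triangulated surface at reference samples.

        node_xy : (k, 2) chosen node positions, node_z : (k,) surface heights there
        sample_xy, sample_z : dense reference samples of the true surface
        """
        surf = LinearNDInterpolator(node_xy, node_z)       # Delaunay-based piecewise-linear mesh
        approx = surf(sample_xy)
        approx = np.where(np.isnan(approx), 0.0, approx)   # penalise samples outside the mesh
        return float(np.mean((approx - sample_z) ** 2))

    # Toy "face": a smooth bump sampled on a grid (stand-in for range-scan data).
    rng = np.random.default_rng(0)
    xx, yy = np.meshgrid(np.linspace(-1, 1, 40), np.linspace(-1, 1, 40))
    sample_xy = np.column_stack([xx.ravel(), yy.ravel()])
    sample_z = np.exp(-(xx ** 2 + yy ** 2) * 3).ravel()

    # Fitness of one candidate individual: 30 randomly placed node points.
    node_xy = rng.uniform(-1, 1, size=(30, 2))
    node_z = np.exp(-(node_xy[:, 0] ** 2 + node_xy[:, 1] ** 2) * 3)
    print("mean squared error:", mesh_error(node_xy, node_z, sample_xy, sample_z))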

8.
By introducing a form of reordering for multidimensional data, we propose a unified fast algorithm that jointly employs the one-dimensional W transform and the multidimensional discrete polynomial transform to compute eleven types of multidimensional discrete orthogonal transforms: three types of m-dimensional discrete cosine transforms (m-D DCTs), four types of m-dimensional discrete W transforms (m-D DWTs) (with the m-dimensional Hartley transform as a special case), and four types of generalized discrete Fourier transforms (m-D GDFTs). For real input, the number of multiplications for all eleven types of m-D discrete orthogonal transforms needed by the proposed algorithm is only 1/m times that of the commonly used row-column methods, and for complex input it is further reduced to 1/(2m) times. The number of additions required is also reduced considerably. Furthermore, the proposed algorithm has a simple computational structure and is easy to implement on a computer.
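For orientation, the row-column baseline that the 1/m comparison above refers to exploits the separability of these transforms; for instance, the m-dimensional DCT-II of an N x ... x N array factors as (a textbook statement with normalisation factors omitted, not the proposed fast algorithm itself)

    X(k_1,\dots,k_m) \;=\; \sum_{n_1=0}^{N-1}\cdots\sum_{n_m=0}^{N-1}
    x(n_1,\dots,n_m)\,\prod_{i=1}^{m}\cos\frac{\pi(2n_i+1)k_i}{2N},

so the row-column method evaluates it by applying one-dimensional N-point transforms along each of the m axes in turn; the 1/m (real input) and 1/(2m) (complex input) savings quoted above are relative to this baseline.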

9.

Semantic similarity assessment between concepts is an important task in many language-related applications. In the past, many approaches to assessing the similarity of concepts have been proposed using a single knowledge source. In this paper, some limitations of the existing similarity measures are identified. To tackle these problems, we present an extensive study of the semantic similarity of concepts, from which a unified framework for semantic similarity computation is derived. Based on our framework, we give some generic and flexible approaches to semantic similarity measures resulting from instantiations of the framework. In particular, by introducing multiple knowledge sources we obtain some new similarity measures that existing methods cannot provide. The evaluation based on eight benchmarks, three widely used benchmarks (i.e., the M&C, R&G, and WordSim-353 benchmarks) and five benchmarks developed by ourselves (i.e., the Jiang-1, Jiang-2, Jiang-3, Jiang-4, and Jiang-5 benchmarks), sustains the intuitions with respect to human judgements. Overall, some of the methods proposed in this paper correlate well with human judgements (in both Pearson and Spearman correlation) and constitute effective ways of determining semantic similarity between concepts.
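The benchmark evaluation described above amounts to correlating a measure's scores with human judgements. As a generic illustration (not the paper's framework; the word pairs and gold ratings below are invented), the sketch computes a classic single-knowledge-source measure, Wu-Palmer similarity over WordNet, and reports its Pearson and Spearman correlation with hypothetical human ratings.

    # Requires: pip install nltk scipy, plus nltk.download('wordnet')
    from nltk.corpus import wordnet as wn
    from scipy.stats import pearsonr, spearmanr

    # Hypothetical mini-benchmark: (word1, word2, human similarity rating)
    gold = [("car", "automobile", 3.9), ("coast", "shore", 3.7),
            ("noon", "string", 0.1), ("boy", "lad", 3.8), ("food", "fruit", 2.7)]

    def wup(w1, w2):
        """Best Wu-Palmer similarity over all noun-sense pairs (one knowledge source)."""
        scores = [s1.wup_similarity(s2)
                  for s1 in wn.synsets(w1, pos=wn.NOUN)
                  for s2 in wn.synsets(w2, pos=wn.NOUN)]
        scores = [s for s in scores if s is not None]
        return max(scores) if scores else 0.0

    system = [wup(a, b) for a, b, _ in gold]
    human = [h for _, _, h in gold]
    r, _ = pearsonr(system, human)
    rho, _ = spearmanr(system, human)
    print("Pearson:", round(r, 3), "Spearman:", round(rho, 3))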


10.
We give an overview of evolutionary robotics research at Sussex over the last five years. We explain and justify our distinctive approaches to (artificial) evolution, and to the nature of robot control systems that are evolved. Results are presented from research with evolved controllers for autonomous mobile robots, simulated robots, co-evolved animats, real robots with software controllers, and a real robot with a controller directly evolved in hardware.

11.
The prevalence of creativity in emergent online media language calls for a more effective computational approach to semantic change. Two divergent metaphysical understandings underlie the task: the juxtaposition view of change and the succession view of change. This paper argues that the succession view better reflects the essence of semantic change and proposes a successive framework for automatic semantic change detection. The framework analyzes semantic change both at the word level and at the level of individual senses inside a word, by transforming the task into change-pattern detection over time-series data. At the word level, the framework models a word's semantic change with an S-shaped model and successfully correlates change patterns with classical semantic change categories such as broadening, narrowing, new word coining, metaphorical change, and metonymic change. At the sense level, the framework measures the conventionality of individual senses and distinguishes the categories of temporary word usage, basic sense, novel sense, and disappearing sense, again with an S-shaped model. Experiments at both levels yield an increased precision rate compared with the baseline, supporting the succession view of semantic change.
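The S-shaped model referred to above is typically a logistic curve fitted to how frequently a word or sense is used over time, with the change pattern read off the fitted parameters. The sketch below is illustrative only (the frequency series is invented and the paper's exact model and features are not reproduced): it fits a logistic growth curve to a yearly relative-frequency series with SciPy.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, k, r, t0):
        """S-shaped (logistic) growth: ceiling k, rate r, midpoint t0."""
        return k / (1.0 + np.exp(-r * (t - t0)))

    # Hypothetical yearly relative frequencies of a novel sense of some word.
    years = np.arange(2005, 2020)
    freq = np.array([0.00, 0.00, 0.01, 0.01, 0.02, 0.05, 0.10, 0.18,
                     0.27, 0.33, 0.37, 0.39, 0.40, 0.40, 0.41])

    params, _ = curve_fit(logistic, years, freq, p0=[0.4, 1.0, 2012], maxfev=10000)
    k, r, t0 = params
    print(f"ceiling={k:.2f}, growth rate={r:.2f}, midpoint year={t0:.1f}")
    # A clearly positive growth rate with the midpoint inside the observed window
    # would be read as an ongoing, S-shaped conventionalisation of the sense.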

12.
13.
In many applications, we need to measure similarity between nodes in a large network based on features of their neighborhoods. Although in-network node similarity based on proximity has been well investigated, measuring in-network node similarity based on neighborhoods surprisingly remains a largely untouched problem in the literature. One challenge is that different applications may need different measurements that manifest different meanings of similarity. Furthermore, we often want to trade off the specificity of neighborhood matching against efficiency. In this paper, we investigate the problem in a principled and systematic manner. We develop a unified parametric model and a series of four instance measures. These instance similarity measures not only address a spectrum of various meanings of similarity, but also present a series of trade-offs between computational cost and the strictness of matching between the neighborhoods of the nodes being compared. Through extensive experiments and case studies, we demonstrate the effectiveness of the proposed model and its instances.
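The cheapest end of such a spectrum of neighborhood-based measures, and a useful reference point for the trade-offs mentioned above, is the Jaccard overlap of the two nodes' neighbor sets; it is fast but ignores how well individual neighbors match one another. The sketch below computes this generic baseline (not the paper's parametric model) with NetworkX.

    import networkx as nx

    def neighborhood_jaccard(g, u, v):
        """Jaccard similarity of the (open) neighborhoods of u and v."""
        nu, nv = set(g.neighbors(u)), set(g.neighbors(v))
        union = nu | nv
        return len(nu & nv) / len(union) if union else 0.0

    g = nx.karate_club_graph()               # small built-in test network
    print(neighborhood_jaccard(g, 0, 33))    # two hub nodes with partly shared neighbors
    print(neighborhood_jaccard(g, 0, 1))

Stricter instances would additionally require the matched neighbors themselves to be similar, which is where the higher computational cost of the more specific measures comes from.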

14.
Within the field of e-Learning, a learning path represents a match between a learner's profile and preferences on one side, and the presentation of the learning content and the pedagogical requirements on the other. The Curriculum Sequencing (CS) problem concerns the dynamic generation of a personal optimal learning path for a learner. This problem has gained increased research interest in the last decade, as no single learning path suits every learner in the widely heterogeneous e-Learning environment. Since the problem is NP-hard, heuristics and meta-heuristics, in particular Evolutionary Computation (EC) approaches, are usually used to approximate its solutions. In this paper, a review of recent developments in the application of EC approaches to the CS problem is presented. A classification of these approaches is provided, with emphasis on the tools necessary for facilitating learning-content reusability and automated sequencing.
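To make the formulation concrete, a typical evolutionary encoding treats a learning path as an ordering of learning objects and scores it against the learner profile and pedagogical constraints. The sketch below is a hypothetical toy formulation (the object attributes, penalty weights, and operators are invented for illustration), using a permutation encoding with swap mutation and truncation selection.

    import random

    # Hypothetical learning objects: id -> (difficulty, set of prerequisite ids)
    objects = {0: (1, set()), 1: (2, {0}), 2: (2, {0}), 3: (3, {1, 2}), 4: (4, {3})}
    learner_level = 2   # toy learner profile: preferred difficulty

    def cost(path):
        """Penalise prerequisite violations and difficulty mismatch (lower is better)."""
        seen, penalty = set(), 0.0
        for obj in path:
            difficulty, prereqs = objects[obj]
            penalty += 10 * len(prereqs - seen)          # prerequisite not yet covered
            penalty += abs(difficulty - learner_level)   # mismatch with learner profile
            seen.add(obj)
        return penalty

    def swap_mutate(path):
        p = path[:]
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
        return p

    random.seed(2)
    pop = [random.sample(list(objects), len(objects)) for _ in range(30)]
    for _ in range(200):
        pop = sorted(pop, key=cost)[:10]                          # truncation selection
        pop += [swap_mutate(random.choice(pop[:10])) for _ in range(20)]
    best = min(pop, key=cost)
    print("best path:", best, "cost:", cost(best))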

15.
16.
In this work, we present an evolutionary computation-based solution to the circle packing problem (ECPP). The circle packing problem (CPP) consists of placing a set of circles into a larger containing circle without overlaps, a problem known to be NP-hard. Given the impossibility of solving this problem exactly in an efficient way, traditional and heuristic methods have been proposed to solve it. A naïve representation of chromosomes in a population-based heuristic search leads to high probabilities of violating the problem constraints, i.e., overlapping. To convert solutions that violate constraints into feasible ones, we propose two repair mechanisms. The first considers every circle as an elastic ring, so that overlaps create repulsion forces that move the circles to positions where the overlaps are resolved. The second forms a Delaunay triangulation with the circle centers and repairs the circles one triangle at a time, making sure that repaired triangles are not modified later on. Based on the proposed repair heuristics, we present results on a set of unit-circle instances of the CPP (whose exact optimal solutions are known). These benchmark problems are solved using genetic algorithms, evolutionary strategies, particle swarm optimization, and differential evolution, and the performance of the solutions is compared with the known solutions in terms of packing density. We then perform a series of experiments to determine the performance of ECPP with non-unitary circles. First, we compare ECPP's results to those of a public competition, which stand as the world record for that particular instance of the non-unitary CPP. In a second set of experiments, we control the variance of the circle sizes. In all experiments, ECPP yields satisfactory near-optimal solutions.
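The first repair mechanism can be sketched directly: each overlap produces a repulsion that pushes the two circle centres apart along the line joining them, and every circle is clamped back inside the container, iterating until no overlaps remain. The code below is a simplified illustration of that idea (the step size, iteration limit, and instance are assumptions, not the authors' exact procedure).

    import numpy as np

    def repair(centers, radii, R, iters=500, step=0.5):
        """Push overlapping circles apart and clamp them inside a container of radius R."""
        c = centers.copy()
        for _ in range(iters):
            moved = False
            for i in range(len(c)):
                for j in range(i + 1, len(c)):
                    d = c[j] - c[i]
                    dist = np.linalg.norm(d) + 1e-12
                    overlap = radii[i] + radii[j] - dist
                    if overlap > 0:                      # repulsion along the centre line
                        push = step * overlap * d / dist
                        c[i] -= push / 2
                        c[j] += push / 2
                        moved = True
                # keep circle i inside the containing circle
                norm = np.linalg.norm(c[i])
                if norm > 0 and norm + radii[i] > R:
                    c[i] *= (R - radii[i]) / norm
                    moved = True
            if not moved:
                break
        return c

    rng = np.random.default_rng(3)
    radii = np.ones(10)                       # ten unit circles
    centers = rng.uniform(-2, 2, size=(10, 2))
    repaired = repair(centers, radii, R=4.0)
    print("max pairwise overlap after repair:",
          max(radii[i] + radii[j] - np.linalg.norm(repaired[i] - repaired[j])
              for i in range(10) for j in range(i + 1, 10)))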

17.
An attempt is made to overview an emerging area of research devoted to the analysis and design of control systems under constraints caused by the limited information capacity of communication channels. The problem's prehistory, dating back to the 1960s-1970s, as well as the new approaches that appeared during the last decade, are analyzed. Much attention is paid to various versions of the celebrated data rate theorem. Consideration is given to the problems of control over communication networks and to some results obtained for nonlinear systems. The basic application areas are listed in brief.
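In one of its commonly cited discrete-time forms (quoted here from general knowledge of the area, not from the surveyed paper), the data rate theorem states that a linear plant with open-loop eigenvalues \lambda_i can be stabilised over a feedback channel carrying R bits per sampling interval only if

    R \;>\; \sum_{i:\,|\lambda_i| > 1} \log_2 |\lambda_i| ,

i.e. the channel must supply at least as much information per step as the unstable modes of the plant generate.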

18.
This paper describes a program written for a microcomputer which allows shaft (or beam) calculations to be made for a wide range of geometrical, loading and support conditions. Expressions for generalized support conditions are developed including support flexibility and unloaded support misalignment. The program, occupying 6100 bytes of memory without problem data statements, is shown applied to a simple flexible support problem and to a complex paper-making machine shaft mounted on 21 supports.
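For context, a generalized support of the kind described (flexible and possibly misaligned when unloaded) can be modelled, in the simplest linear formulation, as a spring reaction at the support location; this is an assumed illustrative form, not the paper's actual expressions:

    R_i \;=\; k_i \bigl( y(x_i) - \delta_i \bigr),

where k_i is the support stiffness, y(x_i) the shaft deflection at the i-th support, and \delta_i the unloaded misalignment; a rigid, perfectly aligned support corresponds to the limit k_i \to \infty with \delta_i = 0, which enforces y(x_i) = 0.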

19.
20.
B-Splines, in general, and Non-Uniform Rational B-Splines (NURBS), in particular, have become indispensable modeling primitives in computer graphics and geometric modeling applications. In this paper, a novel high-performance architecture for the computation of uniform, nonuniform, rational, and nonrational B-Spline curves and surfaces is presented. This architecture has been derived through a sequence of steps. First, a systolic architecture for the computation of the basis function values, the basis function evaluation array (the BFEA), is developed. Using the BFEA as its core, an architecture for the computation of NURBS curves is constructed. This architecture is then extended to compute NURBS surfaces. Finally, this architecture is augmented to compute the surface normals, so that the output from this architecture can be directly used for rendering the NURBS surface.
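The basis function values produced by the BFEA are the standard B-spline basis functions N_{i,p}(u), defined over a knot vector u_0 \le u_1 \le \dots by the Cox-de Boor recurrence (a textbook definition included for reference, not taken from the paper):

    N_{i,0}(u) = \begin{cases} 1 & u_i \le u < u_{i+1} \\ 0 & \text{otherwise} \end{cases}, \qquad
    N_{i,p}(u) = \frac{u - u_i}{u_{i+p} - u_i}\, N_{i,p-1}(u) \;+\; \frac{u_{i+p+1} - u}{u_{i+p+1} - u_{i+1}}\, N_{i+1,p-1}(u).

A NURBS curve with control points P_i and weights w_i is then

    C(u) \;=\; \frac{\sum_i w_i\, N_{i,p}(u)\, P_i}{\sum_i w_i\, N_{i,p}(u)},

and setting all w_i = 1 (with a uniform knot vector) recovers the nonrational (uniform) B-spline cases that the architecture also handles.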
