Found 20 similar documents (search time: 62 ms)
1.
R. Huber T. Schell 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2002,6(6):449-455
In genetic algorithms, tournament schemes are often applied as selection operators. Their advantages are simplicity and efficiency. On the other hand, the major deficiencies of tournament selection are the coarse scaling of the selection pressure and poor sampling accuracy. We introduce a new variant of tournament selection which provides an adjustable probability distribution, a fine-tuning facility for the selection pressure, and improved sampling accuracy, at the cost of a minimal increase in complexity and almost no loss of efficiency.
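The idea of a continuously adjustable selection pressure can be illustrated with a probabilistic binary tournament. This is a minimal sketch of the general idea, not the authors' exact operator; the function name and the `p_win` parameter are ours.

```python
import random

def probabilistic_tournament(population, fitness, p_win=0.8, rng=random):
    """Binary tournament with a tunable win probability.

    Sample two individuals uniformly and return the fitter one with
    probability p_win, the less fit one otherwise.  p_win in [0.5, 1.0]
    scales the selection pressure continuously: 0.5 gives uniform random
    selection, 1.0 gives the classic deterministic binary tournament.
    """
    a, b = rng.sample(population, 2)
    better, worse = (a, b) if fitness(a) >= fitness(b) else (b, a)
    return better if rng.random() < p_win else worse

rng = random.Random(0)
pop = list(range(10))                      # fitness = identity, so 9 is best
picks = [probabilistic_tournament(pop, lambda x: x, 0.9, rng)
         for _ in range(10000)]
print(sum(picks) / len(picks))             # well above the uniform mean of 4.5
```

Varying `p_win` moves the expected fitness of the selected parents smoothly between uniform sampling and full binary-tournament pressure.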
2.
Prefetching in Content Distribution Networks via Web Communities Identification and Outsourcing
Antonis Sidiropoulos George Pallis Dimitrios Katsaros Konstantinos Stamos Athena Vakali Yannis Manolopoulos 《World Wide Web》2008,11(1):39-70
Content distribution networks (CDNs) improve scalability and reliability, by replicating content to the “edge” of the Internet.
Apart from the pure networking issues of the CDNs relevant to the establishment of the infrastructure, some very crucial data
management issues must be resolved to exploit the full potential of CDNs to reduce the “last mile” latencies. A very important
issue is the selection of the content to be prefetched to the CDN servers. All the approaches developed so far assume the existence of adequate content popularity statistics to drive the prefetch decisions. Such information, though, is not always available, or it is extremely volatile, making such methods problematic. To address this issue, we develop self-adaptive techniques to select the outsourced content in a CDN infrastructure, which require no a priori knowledge of request statistics. We identify clusters of “correlated” Web pages in a site, called Web site communities, and make these communities the basic outsourcing unit. Through a detailed simulation environment, using both real and synthetic data, we show that the proposed techniques are very robust and effective in reducing the user-perceived latency, performing very close to an infeasible off-line policy which has full knowledge of the content popularity.
3.
Genetic algorithms (GAs) are probabilistic optimization methods based on the biological principle of natural evolution. One
of the important operators in GAs is the selection strategy for obtaining better solutions. Specifically, finding a balance
between selection pressure and diversity is a critical issue in designing an efficient selection strategy. To this end, the recently proposed real world tournament selection (RWTS) method has shown good performance on various benchmark problems. In this paper, we analyze the characteristics of RWTS from the viewpoint of both selection probabilities and stochastic sampling properties, in order to provide a rational explanation for why RWTS performs better. Statistical experiments show that RWTS has a higher selection pressure, with a relatively small loss of diversity, and higher sampling accuracy than conventional tournament selection. Performance tests on a traveling salesman problem further confirm that the comparatively higher pressure and sampling accuracy inherent in RWTS can enhance the performance of the selection strategy.
4.
We discuss the relationship between ID-based key agreement protocols, certificateless encryption and ID-based key encapsulation
mechanisms. In particular we show how in some sense ID-based key agreement is a primitive from which all others can be derived.
In doing so we focus on distinctions between what we term pure ID-based schemes and non-pure schemes, in various security models. We present security models for ID-based key agreement which do not “look natural” when
considered as analogues of normal key agreement schemes, but which look more natural when considered in terms of the models
used in certificateless encryption. We illustrate our models and constructions with two running examples, one pairing based
and one non-pairing based. Our work highlights distinctions between the two approaches to certificateless encryption and adds
to the debate about what is the “correct” security model for certificateless encryption.
5.
6.
Rodolphe Buda 《Computational Economics》2008,31(4):397-408
The aggregation procedure is an important theoretical and empirical topic in economics. It appears in microeconomics and macroeconomics, in panel and cross-sectional data analysis, in data mining, and in input–output and agent-based computational economics modelling. The choice of aggregation algorithm has become important as sample sizes have grown, despite the increasing speed of computers. In this paper we present the “classical” (matrix-algebraic) algorithm for aggregating two-dimensional sample data and compare it to the alternative (“vectorial”) algorithm we developed. We then present some extensions to multidimensional aggregation.
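The two approaches can be sketched as follows. This is an illustration of the general contrast as we read it, not the paper's optimized implementations; the function names and category maps are ours.

```python
def aggregate_matrix(X, row_map, col_map, p, q):
    """'Matrix-algebraic' aggregation: build 0/1 aggregation matrices
    R (p x m) and C (n x q) from the category maps, then compute R @ X @ C
    with plain dense products."""
    m, n = len(X), len(X[0])
    R = [[1 if row_map[j] == i else 0 for j in range(m)] for i in range(p)]
    C = [[1 if col_map[i] == j else 0 for j in range(q)] for i in range(n)]
    RX = [[sum(R[i][k] * X[k][j] for k in range(m)) for j in range(n)]
          for i in range(p)]
    return [[sum(RX[i][k] * C[k][j] for k in range(n)) for j in range(q)]
            for i in range(p)]

def aggregate_vector(X, row_map, col_map, p, q):
    """'Vectorial' aggregation: one pass over the detailed cells,
    accumulating each into its aggregate cell -- O(m*n) work instead of
    the dense matrix products above."""
    A = [[0] * q for _ in range(p)]
    for i, row in enumerate(X):
        for j, x in enumerate(row):
            A[row_map[i]][col_map[j]] += x
    return A

X = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
print(aggregate_matrix(X, [0, 0, 1, 1], [0, 1, 1], 2, 2))
print(aggregate_vector(X, [0, 0, 1, 1], [0, 1, 1], 2, 2))
```

Both produce the same aggregate table; the single-pass version avoids materializing and multiplying the sparse 0/1 matrices, which is the kind of saving that matters as sample sizes grow.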
7.
In recent years, on-demand transport systems (such as demand-bus systems) have attracted attention as a new transport service in Japan. An on-demand vehicle visits pick-up and delivery points door-to-door as requests occur. This service can be regarded as a cooperative (or competitive) profit problem among transport vehicles, so decision-making for the problem is an important factor in the profits of vehicles (i.e., drivers). However, it is difficult to find an optimal solution to the problem because of uncertain risks, e.g., the occurrence probability of requests and the selfishness of rival vehicles. This paper therefore proposes a transport policy for on-demand vehicles to control these uncertain risks. First, we classify the profit of vehicles into “assured profit” and “potential profit”. Second, we propose a “profit policy” and a “selection policy” based on this classification of profits. The selection policy can further be classified into “greed”, “mixed”, “competitive”, and “cooperative”; these selection policies are represented by selection probabilities over the next visit points, used to cooperate or compete with other vehicles. Finally, we report simulation results and analyze the effectiveness of our proposed policies.
8.
Artem Sokolov Darrell Whitley André da Motta Salles Barreto 《Genetic Programming and Evolvable Machines》2007,8(3):221-237
This paper evaluates different forms of rank-based selection that are used with genetic algorithms and genetic programming. Many types of rank-based selection have exactly the same expected value in terms of the sampling rate allocated to each member of the population. However, the variance associated with that sampling rate can vary depending on how selection is implemented. We examine two forms of tournament selection and compare these to linear rank-based selection using an explicit formula. Because selective pressure has a direct impact on population diversity, we also examine the interaction between selective pressure and different mutation strategies.
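For reference, the standard explicit formula for linear ranking (Baker-style) looks like this; it is the textbook form, not necessarily the exact formula used in the paper.

```python
def linear_rank_probs(n, eta_plus=1.8):
    """Selection probabilities for standard linear ranking.

    Rank i runs from 0 (worst) to n-1 (best); the individual of rank i is
    selected with probability (eta_minus + (eta_plus - eta_minus)*i/(n-1)) / n,
    with eta_minus = 2 - eta_plus so that the probabilities sum to 1.
    eta_plus in [1, 2] is the selection pressure: the expected number of
    selections allocated to the best individual.
    """
    eta_minus = 2.0 - eta_plus
    return [(eta_minus + (eta_plus - eta_minus) * i / (n - 1)) / n
            for i in range(n)]

print(linear_rank_probs(5, eta_plus=2.0))   # worst gets 0, best gets 2/n = 0.4
```

Because these probabilities are fixed by rank alone, the expected sampling rates are deterministic; the variance the paper studies comes entirely from how the sampling itself is implemented (e.g., roulette draws versus tournaments).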
9.
High Performance Inverse Preconditioning
George A. Gravvanis 《Archives of Computational Methods in Engineering》2009,16(1):77-108
The derivation of parallel numerical algorithms for solving sparse linear systems on modern computer systems and software
platforms has attracted the attention of many researchers over the years. In this paper we present an overview of the design issues of parallel approximate inverse matrix algorithms, based on an anti-diagonal “wave pattern” approach and a “fish-bone” computational procedure, for computing explicitly various families of exact and approximate inverses for solving sparse linear systems. Parallel preconditioned conjugate gradient-type schemes in conjunction with parallel approximate inverses are presented for the efficient solution of sparse linear systems. Applications of the proposed parallel methods to characteristic sparse linear systems on symmetric multiprocessor and distributed systems are discussed, and the parallel performance of the proposed schemes is given using MPI, OpenMP and Java multithreading.
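The approximate-inverse preconditioning idea can be sketched with a sequential preconditioned CG using the crudest member of the family, a diagonal (Jacobi) approximate inverse. The anti-diagonal "wave pattern" algorithms and the parallelization are not reproduced here; this only shows where an explicit approximate inverse enters the iteration.

```python
def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive definite A.

    The preconditioner is an explicit approximate inverse M_inv applied by a
    plain matrix-vector product (dense lists here; a real solver would use
    sparse storage and parallel matvecs).
    """
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                        # residual for the zero initial guess
    z = matvec(M_inv, r)            # preconditioning step: z = M^{-1} r
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = matvec(M_inv, r)
        rz, rz_old = dot(r, z), rz
        p = [zi + (rz / rz_old) * pi for zi, pi in zip(z, p)]
    return x

# Jacobi preconditioner: the simplest explicit approximate inverse,
# M^{-1} = diag(1 / a_ii).
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
M_inv = [[1.0 / A[i][i] if i == j else 0.0 for j in range(3)] for i in range(3)]
x = pcg(A, b, M_inv)
print(x)    # solution of A x = b
```

Richer approximate inverses (banded, sparsity-pattern based) slot into the same `M_inv` position; the appeal for parallelism is that applying them is a matrix-vector product rather than a triangular solve.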
10.
We describe a method for introducing “partial functions” into ACL2, that is, functions not defined everywhere. The function
“definitions” are actually admitted via the encapsulation principle: the new function symbol is constrained to satisfy the
appropriate equation. This is permitted only when a witness function can be exhibited, establishing that the constraint is
satisfiable. Of particular interest is the observation that every tail recursive definition can be witnessed in ACL2. We describe
a macro that allows the convenient introduction of arbitrary tail recursive functions, and we discuss how such functions can
be used to prove theorems about state machine models without reasoning about “clocks” or counting the number of steps until
termination. Our macro for introducing “partial functions” also permits a variety of other recursive schemes, and we briefly
illustrate some of them.
This revised version was published online in August 2006 with corrections to the Cover Date.
11.
This paper discusses a study on the mechanism of self-organization. A global order is organized by the simple and locally
coordinated actions of autonomous agents using only local information, so complex or globally coordinated actions which use
global communication and high-level strategies are not necessary. The fundamental factors for establishing a global order
using self-organization are a “dissipative structure,” an “autocatalysis mechanism,” and “intentional fluctuations.” If the environment in which the agents operate has a dissipative structure, and the agents have some form of autocatalysis and intentional fluctuation mechanism within themselves, a global order can be formed using only their simple and locally coordinated actions. “The blind-hunger dilemma” is used as an example to simulate the self-organization and coordinated actions
of agents. In this simulation environment, there are many ant-like agents which must get energy. However, there is only one
small energy supply base, so either an efficient method or the coordinated actions of agents is needed. As a result, the agents
using our approach could move and get energy more efficiently than agents using conventional coordination mechanisms involving
global communication and high-level strategies.
Real World Computing Partnership
This work was presented, in part, at the Second International Symposium on Artificial Life and Robotics, Oita, Japan, February
18–20, 1997
12.
Allen Van Gelder Fumiaki Okushi 《Annals of Mathematics and Artificial Intelligence》1999,26(1-4):113-132
This paper describes new “lemma” and “cut” strategies that are efficient to apply in the setting of propositional Model Elimination.
Previous strategies for managing lemmas and C-literals in Model Elimination were oriented toward first-order theorem proving.
The original “cumulative” strategy remembers lemmas forever, and was found to be too inefficient. The previously reported
C-literal and unit-lemma strategies, such as “strong regularity”, forget them unnecessarily soon in the propositional domain.
An intermediate strategy, called “quasi-persistent” lemmas, is introduced. Supplementing this strategy, methods for “eager”
lemmas and two forms of controlled “cut” provide further efficiencies. The techniques have been incorporated into “Modoc”,
which is an implementation of Model Elimination, extended with a new pruning method that is designed to eliminate certain
refutation attempts that cannot succeed. Experimental data show that on random 3CNF formulas at the “hard” ratio of 4.27 clauses
per variable, Modoc is not as effective as recently reported model-searching methods. However, on more structured formulas
from applications, such as circuit-fault detection, it is superior.
This revised version was published online in June 2006 with corrections to the Cover Date.
13.
Rapid scalar value classification and volume clipping for interactive 3D medical image visualization
In many clinical scenarios, medical data visualization and interaction are important to physicians for exploring inner anatomical
structures and extracting meaningful diagnostic information. Real-time high-quality volume rendering, artifact-free clipping,
and rapid scalar value classification are important techniques employed in this process. Unfortunately, in practice, it is
still difficult to achieve an optimal balance. In this paper, we present some strategies to address this issue, which are
based on the calculation of segment-based post color attenuation and dynamic ray–plane intersection (RPI) respectively. When
implemented within our visualization system, the new classification algorithm can deliver real-time performance while avoiding
the “color over-accumulation” artifacts suffered by the commonly used acceleration algorithms that employ pre-integrated classification.
Our new strategy can achieve an optimized balance between image quality and classification speed. Next, the RPI algorithm
is used with an opacity adjustment technique to effectively remove the “striping” artifacts on the clipping plane caused by the nonuniform integration length. Furthermore, we present techniques for multiple transfer function (TF) based anatomical feature enhancement and a “keyhole” based endoscopic view of inner structures. Finally, the algorithms are evaluated subjectively by radiologists and quantitatively compared using image power spectrum analysis.
14.
Benjamin Parent Annemarie Kökösy Dragos Horvath 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2007,11(1):63-79
Novel genetic algorithm (GA)-based strategies, specifically aimed at multimodal optimization problems, have been developed by hybridizing the GA with alternative optimization heuristics, and used to search for a maximal number of minimum energy conformations (geometries) of complex molecules (conformational sampling). Intramolecular energy, the targeted function, describes a very complex nonlinear response hypersurface in the phase space of structural degrees of freedom. These are the torsional angles controlling the relative rotation of fragments connected by covalent bonds. The energy surface of cyclodextrin, a macrocyclic sugar molecule with N = 65 degrees of freedom, served as a model system for testing and tuning the multimodal optimization strategies proposed herein. The success of GAs is known to depend on the particular hypotheses used to simulate Darwinian evolution. Therefore, the conformational sampling GA (CSGA) was designed to allow extensive control over the evolution process by means of tunable parameters, some being classical GA controls (population size, mutation frequency, etc.), while others control the population diversity management tools designed here or the frequencies of calls to the alternative heuristics. They form a large set of operational parameters, and a (genetic) meta-optimization procedure was used to search for parameter configurations maximizing the efficiency of the CSGA process. The specific impact of disabling a given hybridizing heuristic was estimated relative to the default sampling behavior (with all the implemented heuristics on). Optimal sampling performance was obtained with a GA featuring a built-in tabu search mechanism, a “Lamarckian” (gradient-based) optimization tool, and, most notably, a “directed mutations” engine (a torsional angle driving procedure generating chromosomes that radically differ from their parents but have good chances to be “fit”, unlike offspring from spontaneous mutations).
“Biasing” heuristics, which implement more elaborate random-draw distribution laws instead of the ‘flat’ default rule for picking torsional angle values, were at best unconvincing or outright harmful. Naive Bayesian analysis was employed to estimate the impact of the operational parameters on the CSGA's success. The study emphasized the importance of properly tuning the CSGA. The meta-optimization procedure implicitly ensures the management, in the context of an evolving operational parameterization, of the repeated GA runs that are absolutely mandatory for reproducible sampling of such vast phase spaces. Therefore, it should be seen not only as a tuning tool, but as a strategy for actual problem solving, essentially advocating a parallel exploration of problem space and parameter space.
15.
With the concepts of “Cognitive Sense of China” and “Smart Planet” proposed, wireless sensor networking is considered to be one of the most important technologies of the new century. In wireless sensor networks, how to extend battery lifetime is a core problem. In this paper, we address the problem of designing battery-friendly packet transmission policies for wireless sensor networks. Our objective is to maximize the lifetime of the batteries of wireless sensor nodes subject to certain delay constraints. We present three packet transmission schemes and evaluate them with respect to battery performance. The first scheme, based on combining multiple packets, exploits the battery charge recovery effect, which allows some charge to be recovered during long idle periods. The second scheme, based on a modified version of lazy packet scheduling, draws a smoother and lower current and is battery efficient. The final scheme, based on a combination of the two previous schemes, has superior battery performance at the expense of a larger average packet delay. All three schemes are simulated in a wireless network framework with Internet traffic, and the results were validated.
16.
A new analytical method with high-speed processing in the time-frequency domain is presented. In this method, sine and cosine waves with an established frequency and multiple periods are used; we call these waves “cutting-out waves.” We call their frequency the “established frequency,” and we call the number of periods of the cutting-out wave the “number of periods.” The inner product of the cutting-out wave and the signal is calculated, and a signal element with a frequency near the established frequency is detected. We call the unit that detects the signal element an “auditory cell.” There are many auditory cells, each with an established frequency that differs only slightly from its neighbors'. Designing this method amounts to arranging the auditory cells. There are three design parameters: the sampling frequency, the number of periods, and the rate of increase of the established frequencies. In this article, we show how to select these parameters.
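The inner-product detection step of a single "auditory cell" can be sketched as follows. Function and variable names are ours, and the normalization and window rounding are simplifying assumptions; this is effectively one bin of a short-time Fourier transform.

```python
import math

def cutting_out_amplitude(signal, fs, f0, periods):
    """Estimate the amplitude of the signal component near f0.

    Takes the inner product of the signal with "cutting-out" sine and
    cosine waves: 'periods' full cycles at the established frequency f0,
    sampled at fs.  The window length is rounded to whole samples.
    """
    n = min(int(round(periods * fs / f0)), len(signal))
    c = s = 0.0
    for k in range(n):
        phase = 2.0 * math.pi * f0 * k / fs
        c += signal[k] * math.cos(phase)
        s += signal[k] * math.sin(phase)
    # For a pure sinusoid of amplitude A at f0, each quadrature inner
    # product grows like A * n / 2, hence the 2/n normalization.
    return 2.0 * math.hypot(c, s) / n

fs = 8000.0
sig = [math.sin(2 * math.pi * 440.0 * k / fs) for k in range(4000)]
print(cutting_out_amplitude(sig, fs, 440.0, periods=10))   # close to 1.0
print(cutting_out_amplitude(sig, fs, 880.0, periods=10))   # near 0
```

A bank of such cells with slightly increasing established frequencies yields the time-frequency analysis described in the abstract; the number of periods trades frequency selectivity against time resolution.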
17.
Patrick Moratori Sanja Petrovic José Antonio Vázquez-Rodríguez 《Applied Intelligence》2010,32(2):205-215
This paper investigates the problem of inserting new rush orders into a current schedule of a real world job shop floor. Effective
rescheduling methods must achieve reasonable levels of performance, measured according to a certain cost function, while preserving
the stability of the shop floor, i.e. introducing as few changes as possible to the current schedule. This paper proposes
new and effective match-up strategies which modify only a part of the schedule in order to accommodate the arriving jobs.
The proposed strategies are compared with other rescheduling methods such as “right shift” and “insertion in the end”, which
are optimal with respect to stability but poor with respect to performance, and with “total rescheduling” which is optimal
with respect to performance but poor with respect to stability. Our results and statistical analysis reveal that the match-up
strategies are comparable to the “right shift” and “insertion in the end” with respect to stability and as good as “total
rescheduling” with respect to performance.
18.
This article proposes a novel online portfolio selection strategy named “Passive Aggressive Mean Reversion” (PAMR). Unlike
traditional trend following approaches, the proposed approach relies upon the mean reversion relation of financial markets.
Equipped with the online passive-aggressive learning technique from machine learning, the proposed portfolio selection strategy can effectively exploit the mean reversion property of markets. By analyzing PAMR’s update scheme, we find that it nicely trades off portfolio return against volatility risk and reflects the mean reversion trading principle. We also present several variants of the PAMR algorithm, including a mixture algorithm which mixes PAMR and other strategies. We conduct extensive numerical experiments to evaluate the empirical performance of the proposed algorithms on various real datasets. The encouraging results show that in most cases the proposed PAMR strategy outperforms all benchmarks and almost all state-of-the-art portfolio selection strategies under various performance metrics. In addition to its superior performance, PAMR runs extremely fast and thus is very suitable for real-life online trading applications. The experimental testbed, including source codes and data sets, is available at .
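The core passive-aggressive update described in the abstract can be sketched as below. The simplex-projection helper and the toy numbers are ours; treat this as an illustration of the update rule, not a trading-ready implementation.

```python
def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (standard sort-and-threshold procedure)."""
    u = sorted(v, reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (cumsum - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def pamr_update(b, x, eps=0.5):
    """One PAMR step for portfolio b and price-relative vector x.

    If the day's return b.x stays below eps, do nothing (passive);
    otherwise move the portfolio *against* the price move (mean
    reversion) with step size tau = loss / ||x - xbar*1||^2, then
    project back onto the simplex (aggressive).
    """
    n = len(b)
    loss = max(0.0, sum(bi * xi for bi, xi in zip(b, x)) - eps)
    xbar = sum(x) / n
    denom = sum((xi - xbar) ** 2 for xi in x)
    tau = loss / denom if denom > 0 else 0.0
    return project_to_simplex([bi - tau * (xi - xbar) for bi, xi in zip(b, x)])

b = pamr_update([0.5, 0.5], [1.1, 0.9], eps=0.5)
print(b)   # weight shifts toward the asset that fell
```

The mixture variants mentioned in the abstract combine such updates with other experts; the passive branch is what keeps trading costs down in flat markets.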
19.
Calculating the expected loss of diversity of selection schemes
Motoki T 《Evolutionary computation》2002,10(4):397-422
This paper concerns a measure of selective pressure, called "loss of diversity," that denotes the proportion of individuals that go unselected during the selection phase. We probabilistically calculate the expected value and variance of the loss of diversity in tournament selection, truncation selection, linear ranking selection, and exponential ranking selection. From numerical results, we observe that in tournament selection, many more individuals are expected to be lost than Blickle and Thiele's static estimate predicts. We also observe that the tournament and exponential ranking schemes can bring about nearly equivalent selection behaviors, but have different types of control parameters.
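The tournament-selection case can be computed directly, assuming distinct fitness values and tournaments sampled with replacement; this is a sketch of the expectation calculation (the paper also derives variances, which we omit), compared against Blickle and Thiele's static estimate.

```python
def expected_loss_tournament(n, t):
    """Expected loss of diversity for size-t tournament selection with
    replacement on a population of n individuals with distinct fitnesses.

    The individual of rank i (1 = worst ... n = best) wins one tournament
    with probability p_i = (i**t - (i-1)**t) / n**t, so it is never chosen
    in any of the n tournaments with probability (1 - p_i)**n.
    """
    return sum((1.0 - (i**t - (i - 1)**t) / n**t) ** n
               for i in range(1, n + 1)) / n

def blickle_thiele(t):
    """Blickle and Thiele's static estimate of tournament loss of diversity."""
    return t ** (-1.0 / (t - 1)) - t ** (-t / (t - 1.0))

n, t = 100, 2
print(expected_loss_tournament(n, t))   # clearly larger than the estimate
print(blickle_thiele(t))                # 0.25 for t = 2
```

Even for binary tournaments the expected proportion of lost individuals is well above the static 0.25, which is the discrepancy the abstract reports.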
20.
An evaluation of combination strategies for test case selection
Mats Grindal Birgitta Lindström Jeff Offutt Sten F. Andler 《Empirical Software Engineering》2006,11(4):583-611
This paper presents results from a comparative evaluation of five combination strategies. Combination strategies are test case selection methods that combine “interesting” values of the input parameters of a test subject to form test cases. The five strategies evaluated are the All Combinations strategy (AC), the Each Choice strategy (EC), the Base Choice strategy (BC), Orthogonal Arrays (OA), and the algorithm from the Automatic Efficient Test Generator (AETG). AC satisfies n-wise coverage, EC and BC satisfy 1-wise coverage, and OA and AETG satisfy pair-wise coverage. The All Combinations strategy was used as a “gold standard” strategy; it subsumes the others but is usually too expensive for practical use. The others were used in an experiment on five programs seeded with 128 faults. The combination strategies were evaluated with respect to the number of test cases, the number of faults found, failure size, and the number of decisions covered. The strategy that requires the fewest tests, Each Choice, found the smallest number of faults. Although the Base Choice strategy requires fewer test cases than Orthogonal Arrays and AETG, it found as many faults. Analysis also reveals some properties of the combination strategies that appear significant. The two most important results are that the Each Choice strategy is unpredictable in terms of which faults it will reveal, possibly indicating that faults are found by chance, and that the Base Choice and the pair-wise combination strategies to some extent target different types of faults.
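The 1-wise strategies and the gold standard are simple enough to sketch. These are illustrative implementations under our own tie-breaking choices (e.g., padding with the first value), not the exact algorithms evaluated in the paper.

```python
import itertools

def all_combinations(params):
    """All Combinations (AC): full n-wise coverage, the 'gold standard'."""
    return list(itertools.product(*params))

def each_choice(params):
    """Each Choice (EC): every value of every parameter appears in at least
    one test case (1-wise coverage).  Shorter value lists are padded by
    repeating their first value."""
    k = max(len(values) for values in params)
    return [tuple(v[i] if i < len(v) else v[0] for v in params)
            for i in range(k)]

def base_choice(params):
    """Base Choice (BC): start from a base test (here, each parameter's
    first value) and vary one parameter at a time, keeping the rest at
    their base values."""
    base = tuple(values[0] for values in params)
    tests = [base]
    for p, values in enumerate(params):
        for val in values[1:]:
            tests.append(base[:p] + (val,) + base[p + 1:])
    return tests

params = [["fast", "slow"], ["on", "off"], [1, 2, 3]]
print(len(all_combinations(params)),   # 2*2*3 = 12 tests
      len(each_choice(params)),        # 3 tests
      len(base_choice(params)))        # 1 + 4 = 5 tests
```

The counts illustrate the paper's trade-off: EC is the cheapest but samples value combinations essentially by chance, while BC spends a few more tests to keep every variation anchored to a meaningful base case.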