A total of 20 similar documents were found.
1.
To solve a class of Job Shop scheduling problems, a new hybrid algorithm with an adaptive mechanism is proposed. Building on an analysis and comparison of simulated annealing and genetic algorithms, and addressing their common lack of a global guidance mechanism, the algorithm introduces an adaptive global guidance strategy that establishes a feedback mechanism between individuals and the population; the hybrid also combines the respective strengths of the two heuristics. The effectiveness of the algorithm is verified on concrete test instances.
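The abstract gives no pseudocode; as a rough illustration, the sketch below (Python) shows one way a hybrid GA/SA loop with population-to-individual feedback could be organized for a permutation-encoded schedule. The `evaluate` function (e.g., the makespan of a permutation), the feedback rule, and all parameters are assumptions for illustration only, not the authors' algorithm.

```python
import random, math

def hybrid_ga_sa(evaluate, n_ops, pop_size=20, generations=100):
    """Hybrid GA + SA sketch: GA supplies recombination pressure, SA-style
    acceptance refines individuals, and population statistics feed back
    into the annealing temperature (the 'global guidance' idea)."""
    pop = [random.sample(range(n_ops), n_ops) for _ in range(pop_size)]
    best = min(pop, key=evaluate)
    temp = 1.0
    for _ in range(generations):
        # GA step: tournament selection + swap mutation
        children = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            parent = min(a, b, key=evaluate)
            child = parent[:]
            i, j = sorted(random.sample(range(n_ops), 2))
            child[i], child[j] = child[j], child[i]          # swap mutation
            children.append(child)
        # SA step: accept each child against its parent with the Metropolis rule
        new_pop = []
        for parent, child in zip(pop, children):
            delta = evaluate(child) - evaluate(parent)
            if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
        # adaptive feedback: population spread controls the temperature schedule
        fits = [evaluate(s) for s in pop]
        spread = (max(fits) - min(fits)) / (abs(min(fits)) + 1e-9)
        temp *= 0.9 if spread > 0.05 else 1.05   # cool while diverse, reheat when converged
        best = min(pop + [best], key=evaluate)
    return best
```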
2.
Ryosuke Saga Kouki Okamoto Hiroshi Tsuji Kazunori Matsumoto 《Artificial Life and Robotics》2011,16(3):426-429
This article proposes a software simulator that allows the user to evaluate algorithms for recommender systems. The simulator consists of agents, items, a recommender, a controller, and a recorder; it places the agents and allocates the items on a small-world network. An agent plays the role of a user in the recommender system, and the recommender plays the role of the recommendation service being evaluated. The controller handles the simulation flow, in which (1) the recommender recommends items to agents based on the recommendation algorithm, (2) each agent evaluates the items based on the agent's rating algorithm and the attributes of each item and agent, and (3) the recorder collects the rating results and evaluation measures for the recommendations, such as precision and recall. The article discusses the background of the proposal and the architecture of the simulator.
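As a hedged illustration of the architecture described above, the sketch below wires together agents on a Watts-Strogatz small-world network, a placeholder recommender, an agent rating rule, and a recorder that accumulates precision. All algorithms, thresholds, and class names are invented for the example; only the overall controller flow (recommend, rate, record) follows the abstract.

```python
import random
import networkx as nx

class SimpleSimulator:
    """Minimal simulator sketch: a recommender proposes items, agents rate them
    from their own preference vectors (smoothed with small-world neighbours),
    and a recorder accumulates precision."""

    def __init__(self, n_agents=30, n_items=50, n_features=5, seed=0):
        random.seed(seed)
        self.graph = nx.watts_strogatz_graph(n_agents, k=4, p=0.1, seed=seed)
        self.items = [[random.random() for _ in range(n_features)] for _ in range(n_items)]
        self.prefs = [[random.random() for _ in range(n_features)] for _ in range(n_agents)]
        self.hits = self.total = 0                       # recorder state

    def recommend(self, agent, top_n=3):
        """Placeholder recommender (the algorithm under test): random selection."""
        return random.sample(range(len(self.items)), top_n)

    def rate(self, agent, item):
        """Agent rating rule: preference-item dot product, averaged over the
        agent and its small-world neighbours; threshold is arbitrary."""
        group = [agent] + list(self.graph.neighbors(agent))
        score = sum(sum(p * x for p, x in zip(self.prefs[a], self.items[item]))
                    for a in group) / len(group)
        return score > 1.25

    def run(self, rounds=20):
        for _ in range(rounds):                          # controller flow (1)-(3)
            for agent in self.graph.nodes:
                for item in self.recommend(agent):
                    self.total += 1
                    self.hits += self.rate(agent, item)
        return self.hits / self.total                    # precision of recommendations

print(SimpleSimulator().run())
```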
3.
In the process of learning a naive Bayes classifier, estimating probabilities from a given set of training samples is crucial. However, when the training samples are not adequate, probability estimation inevitably suffers from the zero-frequency problem. To avoid this problem, the Laplace-estimate and the M-estimate are the two main methods used to estimate probabilities. The settings of two important parameters in these methods, m (an integer variable) and p (a probability variable), have a direct impact on the experimental results. In this paper, we study the existing probability estimation methods and carry out a parameter cross-test by experimentally analyzing the performance of the M-estimate under different settings of m and p. These experiments show that the optimal parameter values vary across data sets. Motivated by these results, we propose an estimation model based on self-adaptive differential evolution, together with an approach that computes the optimal m and p values for each conditional probability and thereby avoids the zero-frequency problem. We experimentally test our approach in terms of classification accuracy on 36 benchmark machine learning repository data sets, comparing it to naive Bayes with the Laplace-estimate and with the M-estimate under a variety of parameter settings from the literature, as well as the possible optimal settings found in our experimental analysis. The results show that the estimation model is efficient and that our approach significantly outperforms the traditional probability estimation approaches, especially on large data sets (with many instances and attributes).
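For reference, the two baseline estimators mentioned above can be written down directly. The sketch below (Python, illustrative variable names) shows the standard Laplace-estimate and M-estimate of a conditional probability from counts; the paper's contribution, tuning m and p per conditional probability with self-adaptive differential evolution, is not reproduced here.

```python
def laplace_estimate(n_xc, n_c, n_values):
    """Laplace-estimate: add one pseudo-count per attribute value.
    n_xc: count of samples with attribute value x in class c
    n_c:  count of samples in class c
    n_values: number of distinct values of the attribute."""
    return (n_xc + 1) / (n_c + n_values)

def m_estimate(n_xc, n_c, m, p):
    """M-estimate: blend the observed frequency with prior p using weight m."""
    return (n_xc + m * p) / (n_c + m)

# Example: an attribute value never observed in class c (the zero-frequency case)
print(laplace_estimate(0, 50, 3))        # 1/53, never exactly zero
print(m_estimate(0, 50, m=2, p=1 / 3))   # (2/3)/52, never exactly zero
```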
4.
5.
Extracting significant features from high-dimensional, small-sample-size biological data is a challenging problem. Recently, Michał Dramiński proposed the Monte Carlo feature selection (MC) algorithm, which is able to search over large feature spaces and achieves better classification accuracies. However, MC does not use information about feature-rank variations, and the ranks of features are not dynamically updated. Here, we propose a novel feature selection algorithm which integrates ideas from professional tennis player rankings, such as seeded players and dynamic ranking, into the Monte Carlo simulation. Seeded players make the feature selection game more competitive and selective, and dynamic ranking ensures that it is always the currently best players who take part in each competition. The proposed algorithm is tested on 8 biological datasets. The results demonstrate that the proposed method is computationally efficient, stable, and performs favorably in classification.
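A minimal sketch of the tournament idea follows, assuming a user-supplied `score_subset` function that trains a classifier on a feature subset and returns its accuracy; the seeding fraction, round count, and running-mean update rule are illustrative assumptions rather than the authors' exact procedure.

```python
import random
from collections import defaultdict

def tennis_mc_feature_selection(score_subset, n_features, n_rounds=200,
                                subset_size=10, seed_fraction=0.3):
    """Monte Carlo feature selection with 'seeded players' and dynamic ranking.
    score_subset(features) -> classification score for that feature subset."""
    rating = defaultdict(float)   # dynamic ranking, updated after every match
    plays = defaultdict(int)
    for _ in range(n_rounds):
        ranked = sorted(range(n_features), key=lambda f: rating[f], reverse=True)
        n_seed = int(subset_size * seed_fraction)
        seeds = ranked[:n_seed]                              # current top players
        rest = random.sample(ranked[n_seed:], subset_size - n_seed)
        subset = seeds + rest
        score = score_subset(subset)
        for f in subset:                                     # update ranks dynamically
            plays[f] += 1
            rating[f] += (score - rating[f]) / plays[f]      # running mean of match scores
    return sorted(range(n_features), key=lambda f: rating[f], reverse=True)
```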
6.
Model selection plays a key role in the application of the support vector machine (SVM). In this paper, a model selection method based on a small-world strategy is proposed for least squares support vector regression (LS-SVR). Model selection is treated as a single-objective global optimization problem in which a generalization performance measure serves as the fitness function. To obtain better optimization performance, the main idea of depending more heavily on dense local connections in the small-world phenomenon is adopted, and a new small-world optimization algorithm based on tabu search, called tabu-based small-world optimization (TSWO), is proposed by employing tabu search as the local search operator. The hyper-parameters with the best generalization performance can then be chosen as the global optimum thanks to the powerful search ability of TSWO. Experiments on six complex multimodal functions demonstrate that TSWO is better at avoiding premature convergence of the population than the genetic algorithm (GA) and particle swarm optimization (PSO). Moreover, the effectiveness of the leave-one-out bound of LS-SVM on regression problems is tested on a noisy sinc function and on benchmark data sets, and the numerical results show that model selection using TSWO obtains smaller generalization errors than GA and PSO in almost all cases, under the three generalization performance measures adopted.
7.
Feature selection is a dimensionality reduction technique that helps to improve data visualization, simplify learning, and enhance the efficiency of learning algorithms. The existing redundancy-based approach, which relies on relevance and redundancy criteria, does not account for feature complementarity. Complementarity implies information synergy, in which additional class information becomes available due to feature interaction. We propose a novel filter-based approach to feature selection that explicitly characterizes and uses feature complementarity in the search process. Using theories from multi-objective optimization, the proposed heuristic penalizes redundancy and rewards complementarity, thus improving over the redundancy-based approach, which penalizes all feature dependencies. The heuristic uses an adaptive cost function based on the redundancy-complementarity ratio to automatically update the trade-off rule between relevance, redundancy, and complementarity. We show that this adaptive approach outperforms many existing feature selection methods on benchmark datasets.
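One plausible (but not the authors') way to realize such a score is sketched below using interaction information: the synergy of a candidate feature f with an already-selected feature s is measured as I({f,s};y) - I(f;y) - I(s;y), with positive values counted as complementarity and negative values as redundancy, and the redundancy-complementarity ratio adapting the trade-off weight. Discrete features are assumed; `mutual_info_score` is from scikit-learn; `alpha0` and the update rule are illustrative.

```python
from sklearn.metrics import mutual_info_score

def pair(a, b):
    """Encode two discrete columns as a single joint variable."""
    return [f"{x}|{y}" for x, y in zip(a, b)]

def adaptive_score(f, selected, y, alpha0=0.5):
    """Hedged sketch of a relevance / redundancy / complementarity trade-off for
    one candidate feature f (discrete 1-D sequence) against selected features."""
    relevance = mutual_info_score(y, f)
    if not selected:
        return relevance
    redundancy, complementarity = 0.0, 0.0
    for s in selected:
        # interaction gain: extra class information unlocked by pairing f with s
        gain = mutual_info_score(y, pair(f, s)) - relevance - mutual_info_score(y, s)
        if gain >= 0:
            complementarity += gain
        else:
            redundancy += -gain
    # adaptive trade-off: the more redundancy dominates, the harder it is penalized
    ratio = redundancy / (redundancy + complementarity + 1e-12)
    alpha = alpha0 * (1 - ratio) + ratio
    return (relevance
            + (1 - alpha) * complementarity / len(selected)
            - alpha * redundancy / len(selected))
```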
8.
Evan E. Anderson 《Software》1989,19(8):707-717
The proliferation of software packages has created a difficult, complex problem of evaluation and selection for many users. Traditional approaches to the quantification of package performance have relied on compensatory models, such as the linear weighted attribute model, which sums the weighted ratings of software attributes. These approaches define the dimensions of quality too narrowly and therefore omit substantial amounts of information from consideration. This paper presents an alternative methodology, previously used in capital rationing and tournament ranking, that expands the opportunity for objective insight into software quality. In particular, it considers three measures of quality: (1) the frequency with which the attribute ratings of one package exceed those of another; (2) the presence of outliers, where very poor performance on a single attribute may be glossed over by compensatory methods; and (3) the cumulative magnitude by which the attribute ratings of one package exceed those of others. The proposed methodology is applied to the evaluation of the following software types: word processing, database management systems, spreadsheet/financial planning, integrated software, graphics, data communications, and project management.
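A small sketch of the three pairwise measures listed above, assuming each package is described by a vector of attribute ratings on a 1-10 scale; the outlier threshold and the rating values are illustrative.

```python
def pairwise_comparison(a, b, outlier_threshold=3):
    """Compare two packages by attribute rating vectors a and b (same length,
    higher is better).  Returns the three non-compensatory measures described
    above: how often a beats b, whether a has an outlier (very poor rating on
    any single attribute), and the cumulative margin of a's wins over b."""
    wins = sum(1 for x, y in zip(a, b) if x > y)
    outlier = any(x <= outlier_threshold for x in a)
    margin = sum(x - y for x, y in zip(a, b) if x > y)
    return {"win_frequency": wins / len(a),
            "has_outlier": outlier,
            "cumulative_margin": margin}

# Example: a compensatory weighted sum could hide b's single very poor rating
word_proc_a = [7, 8, 6, 9, 7]
word_proc_b = [9, 9, 2, 9, 9]     # strong overall but one outlier attribute
print(pairwise_comparison(word_proc_a, word_proc_b))
print(pairwise_comparison(word_proc_b, word_proc_a))
```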
9.
Masao Kubo Hiroshi Sato Takashi Matsubara Chris Melhuish 《Artificial Life and Robotics》2009,14(2):168-173
In this article, energy trophallaxis, a distributed autonomous energy management methodology inspired by the behavior of social insects and bats, and its advantages are demonstrated through a series of computer simulations addressing the survivability of organized groups of agents in a dynamic, uncertain environment. The uncertainty of the agents' organizational behavior is represented by two Lévy distributions. By carefully controlling energy donation behavior based on these distributions, we can examine the survivability of larger groups than traditional methods can analyze. The results show that even a small degree of friendship throughout the organization improves the group's survivability dramatically.
10.
To address the problem that current Trojans generally lack intelligence and adaptability, an adaptive Trojan model with atomized functions is proposed. Its detailed working mechanism and implementation scheme are given, and algorithms are presented for the key problems of selecting and combining Trojan atoms. Finally, examples show that the model offers strong generality, extensibility, and robustness. With its atomized functions, individualized targets, and system-level intelligence, the model effectively improves a Trojan's immunity to anti-Trojan software and extends its life cycle.
11.
A dynamic, adaptive data transfer model based on multiple replicas is proposed, and the dynamic task allocation algorithm adopted by the model is described in detail. The differences from the traditional single-replica data transfer approach are analyzed, and both methods are then validated and compared in detail using GridFTP. The experimental results show that the multi-replica dynamic adaptive data transfer model achieves better transfer performance.
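The paper's allocation algorithm is not reproduced here; as a hedged sketch, the snippet below shows a generic greedy rule in the same spirit: the next file chunk is always assigned to the replica expected to become free earliest, so faster replicas automatically carry more of the transfer. Chunk size, speeds, and function names are assumptions.

```python
import heapq

def allocate_chunks(replica_speeds, n_chunks, chunk_size_mb=64):
    """Greedy dynamic allocation sketch: the next chunk goes to the replica that
    will become free earliest.  replica_speeds are observed throughputs in MB/s;
    in a real transfer they would be re-measured while the transfer runs."""
    heap = [(0.0, rid) for rid in range(len(replica_speeds))]   # (time free, replica)
    heapq.heapify(heap)
    assignment = {rid: [] for rid in range(len(replica_speeds))}
    for chunk in range(n_chunks):
        free_at, rid = heapq.heappop(heap)
        assignment[rid].append(chunk)
        free_at += chunk_size_mb / replica_speeds[rid]
        heapq.heappush(heap, (free_at, rid))
    makespan = max(t for t, _ in heap)
    return assignment, makespan

# Example: three replicas with different observed throughputs
assignment, finish_time = allocate_chunks([120.0, 40.0, 80.0], n_chunks=32)
print(finish_time, {rid: len(chunks) for rid, chunks in assignment.items()})
```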
12.
P. van Rosmalen P. Sloep L. Kester F. Brouns M. de Croock K. Pannekeet & R. Koper 《Journal of Computer Assisted Learning》2008,24(1):74-86
The introduction of e-learning often leads to an increase in the time staff spend on tutoring. To alleviate the workload of staff tutors, we developed a model for organizing and supporting learner-related interactions in e-learning systems. It makes use of the knowledge and experience of peers and builds on the assumption that (lifelong) learners, when instructed and assisted carefully, should be able to assist each other. The model operates at two levels. At level 1, prospective peer tutors are identified based on a combination of workload and competency indicators. At level 2, the identified prospective peer tutors become the actual tutors; this is achieved by empowering them with tools and guidelines for the task at hand. The article situates the model in networks for lifelong learning. For one kind of interaction, answering content-related questions, we review a set of existing approaches and emerging technologies and describe our model. Finally, we describe and discuss the results of a simulation of a prototype of the model and discuss to what extent it meets our requirements.
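A toy sketch of the level-1 step, assuming each candidate carries a competence indicator and a current tutoring load; the indicator names, weights, and scoring rule are invented for illustration and are not the authors' instrument.

```python
def select_peer_tutors(candidates, n_tutors=3, w_competence=0.7, w_load=0.3):
    """Level-1 sketch: rank prospective peer tutors by a weighted mix of topic
    competence (higher is better) and current tutoring workload (lower is better)."""
    def score(c):
        load_penalty = c["open_questions"] / max(1, c["max_questions"])
        return w_competence * c["competence"] - w_load * load_penalty
    ranked = sorted(candidates, key=score, reverse=True)
    return [c["name"] for c in ranked[:n_tutors]]

learners = [
    {"name": "ann",  "competence": 0.9, "open_questions": 4, "max_questions": 5},
    {"name": "bob",  "competence": 0.7, "open_questions": 0, "max_questions": 5},
    {"name": "carl", "competence": 0.8, "open_questions": 1, "max_questions": 5},
]
print(select_peer_tutors(learners, n_tutors=2))   # ['carl', 'bob'] with these weights
```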
13.
Nembhard D.A. White C.C. III 《IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans》1999,29(5):450-459
We consider the problem of routing a vehicle making multiple intermediate stops, assuming a non-order-preserving, multiattribute reward structure. Sub-paths of optimal paths may not be optimal under such a reward structure, which may arise, for example, when a pick-up and delivery vehicle carrying hazardous materials is routed on the basis of minimizing both cost and risk. We assume that a priori bounds exist on the rewards from the vehicle's current position to each of the intermediate destinations and to the depot through all the intermediate destinations that have yet to be visited; precise calculation of these rewards would require additional computational effort. Two heuristic search algorithms, BU* and DU*, are developed and analyzed. Both algorithms satisfy termination, completeness, and admissibility properties. Results indicate that BU* is guaranteed to perform no worse when given better heuristic information, a guarantee that cannot be made for DU*. Computational requirements are illustrated through examples based on a real network in northeast Ohio.
14.
Mathias M. Adankon Mohamed Cheriet 《Pattern recognition》2007,40(3):953-963
Tuning support vector machine (SVM) hyperparameters is an important step in achieving a high-performance learning machine. It is usually done by minimizing an estimate of the generalization error based on leave-one-out (LOO) bounds, such as the radius-margin bound, or on performance measures such as generalized approximate cross-validation (GACV) and the empirical error. These usual automatic methods for tuning the hyperparameters require an inversion of the Gram-Schmidt matrix or the resolution of an extra quadratic programming problem. For a large data set, these methods add huge amounts of memory and long CPU times to the already significant resources used in SVM training. In this paper, we propose a fast method based on an approximation of the gradient of the empirical error, combined with incremental learning, which reduces the resources required both in processing time and in storage space. We tested our method on several benchmarks, with promising results confirming our approach. Furthermore, it is worth noting that the time saved increases with the size of the data set.
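The paper's analytic gradient approximation and incremental learning scheme are not reproduced here; the sketch below substitutes a plain finite-difference gradient of a held-out error estimate over log(C) and log(gamma), using scikit-learn's SVC, to illustrate the overall tuning loop. The step sizes and the starting point are arbitrary assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def tune_svm_hyperparams(X_tr, y_tr, X_val, y_val, steps=20, lr=0.5, eps=0.1):
    """Hedged sketch: descend an approximate gradient of the validation error
    with respect to log(C) and log(gamma).  The finite-difference gradient is a
    stand-in for the paper's analytic approximation; a smoothed error estimate
    would be used in practice, since raw validation error is piecewise constant."""
    def val_error(log_c, log_gamma):
        clf = SVC(C=np.exp(log_c), gamma=np.exp(log_gamma)).fit(X_tr, y_tr)
        return 1.0 - clf.score(X_val, y_val)
    theta = np.array([0.0, np.log(1.0 / X_tr.shape[1])])   # start at C=1, gamma=1/d
    for _ in range(steps):
        grad = np.zeros(2)
        for i in range(2):
            bump = np.zeros(2)
            bump[i] = eps
            grad[i] = (val_error(*(theta + bump)) - val_error(*(theta - bump))) / (2 * eps)
        theta -= lr * grad                                  # approximate gradient step
    return np.exp(theta)                                    # (C, gamma)
```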
15.
16.
17.
Qi Wu 《Expert systems with applications》2011,38(1):184-192
Particle swarm optimization (PSO) is a population-based swarm intelligence algorithm driven by the simulation of a social-psychological metaphor rather than the survival of the fittest individual. Based on chaotic system theory, this paper proposes a new PSO method that uses chaotic mappings for parameter adaptation of the wavelet v-support vector machine (Wv-SVM). Since chaotic mappings exhibit certainty, ergodicity, and stochastic properties, the proposed PSO introduces chaotic mapping through logistic map sequences, which increases its convergence rate and resulting precision. The simulation results show that the parameter selection of the Wv-SVM model can be solved with high search efficiency and solution accuracy under the proposed PSO method.
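A minimal sketch of the idea, assuming the chaotic logistic map x <- 4x(1-x) replaces the uniform random factors r1, r2 in a standard PSO velocity update; the fitness function below stands in for the Wv-SVM parameter-selection objective, and all coefficients and bounds are illustrative.

```python
import numpy as np

def chaotic_pso(fitness, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """PSO sketch in which the stochastic factors r1, r2 of the velocity update
    are generated by the logistic map instead of a uniform RNG."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    chaos = rng.uniform(0.1, 0.9, (n_particles, dim, 2))    # logistic-map states
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        chaos = 4.0 * chaos * (1.0 - chaos)                  # logistic map iteration
        r1, r2 = chaos[..., 0], chaos[..., 1]
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Example: minimize the sphere function as a stand-in for the Wv-SVM error surface
print(chaotic_pso(lambda x: float(np.sum(x ** 2)), dim=2))
```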
18.
To address the flip ambiguity phenomenon and the localization accuracy problem that arise in node localization of wireless sensor networks with sparse anchor nodes, an adaptive harmony search localization algorithm based on MCB is proposed. By introducing the sampling idea of the MCB algorithm, candidate coordinates of unknown nodes are generated randomly under the constraints of the network topology, and an adaptive harmony memory considering rate and pitch adjusting rate are introduced to improve search capability and localization accuracy. Simulation results show that the algorithm effectively resolves the flip phenomenon and improves localization accuracy, and that it outperforms the comparison algorithms in both localization accuracy and computational cost.
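A rough sketch of the search loop under stated assumptions: candidate coordinates are drawn inside a bounding box standing in for the MCB topology constraint, and the harmony memory considering rate (HMCR) and pitch adjusting rate (PAR) are adapted linearly over the iterations. The fitness function (e.g., squared range error to the anchors), the rates, and the pitch bandwidth are illustrative, not the paper's exact settings.

```python
import random

def adaptive_harmony_localize(fitness, bbox, hms=20, iters=500,
                              hmcr=(0.7, 0.99), par=(0.1, 0.5)):
    """Harmony search sketch for 2-D node localization.  bbox is
    ((xmin, xmax), (ymin, ymax)), an MCB-style box derived from the topology."""
    def sample():
        return [random.uniform(*bbox[0]), random.uniform(*bbox[1])]
    memory = sorted((sample() for _ in range(hms)), key=fitness)
    for t in range(iters):
        frac = t / iters
        cur_hmcr = hmcr[0] + (hmcr[1] - hmcr[0]) * frac      # adaptive HMCR
        cur_par = par[1] - (par[1] - par[0]) * frac          # adaptive PAR
        new = []
        for d in range(2):
            if random.random() < cur_hmcr:
                value = random.choice(memory)[d]             # take from harmony memory
                if random.random() < cur_par:                # pitch adjustment
                    value += random.uniform(-1, 1) * 0.05 * (bbox[d][1] - bbox[d][0])
            else:
                value = random.uniform(*bbox[d])             # MCB-constrained resample
            new.append(min(max(value, bbox[d][0]), bbox[d][1]))
        if fitness(new) < fitness(memory[-1]):               # replace the worst harmony
            memory[-1] = new
            memory.sort(key=fitness)
    return memory[0]
```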
19.
Since the 802.16e standard was released, there have been few authentication pattern schemes and Extensible Authentication Protocol (EAP) selection proposals for manufacturers to choose from in large-scale network systems. This paper focuses on the design, improvement, and optimization of the re-authentication method for the PMP mode of the IEEE 802.16e standard in large-scale network systems, to ensure the security of the keys. We first present an optimized scheme, called EAP_AKAY, based on the EAP-AKA authentication method (Arkko and Haverinen, Extensible Authentication Protocol Method for UMTS Authentication and Key Agreement (EAP-AKA), 2004), and then propose a self-adaptive K selection mechanism for re-authentication load balancing based on EAP_AKAY in large-scale network systems. The mechanism considers the cost of authentication not only at the server end but also at the client end, so the scheme minimizes the total cost and resolves the limitations of current schemes. Furthermore, the K value is re-selected not only when the MS roams to another BS region, but also during its residence time, to adapt to changes in the network environment. The simulation results and related analysis demonstrate that our scheme is effective in terms of the total cost of authentication and master key renewal, and that it provides good security.
20.
This paper deals with the problem of preemptive scheduling in a two-stage flowshop with parallel unrelated machines at the first stage and a single machine at the second stage. At the first stage, jobs use additional resources which are available in limited quantities at any time; the resource requirements are of 0-1 type. The objective is the minimization of makespan. The problem is NP-hard. Heuristic algorithms are proposed which solve to optimality the resource-constrained scheduling problem at the first stage of the flowshop and, at the same time, minimize the makespan in the flowshop by selecting appropriate jobs for simultaneous processing. Several job selection rules are considered. The performance of the proposed heuristic algorithms is analyzed by comparing their solutions with a lower bound on the optimal makespan. An extensive computational experiment shows that the proposed heuristic algorithms are able to produce near-optimal solutions in a short computational time.
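One simple job-selection rule in the spirit described above is sketched below: among jobs whose 0-1 resource requirements do not clash with jobs already chosen, repeatedly pick the one with the longest second-stage processing time so that the single second-stage machine is kept busy. This is an illustrative rule, not one of the paper's.

```python
def select_compatible_jobs(jobs, n_machines):
    """Pick a set of resource-compatible jobs for simultaneous first-stage
    processing, favoring jobs with long second-stage times.  `jobs` is a list
    of dicts with keys 'id', 'p2' (stage-two time) and 'res' (set of required
    resource indices); at most n_machines jobs can run simultaneously."""
    chosen, used = [], set()
    for job in sorted(jobs, key=lambda j: j["p2"], reverse=True):
        if len(chosen) == n_machines:
            break
        if not (job["res"] & used):          # 0-1 resource constraint: no sharing
            chosen.append(job["id"])
            used |= job["res"]
    return chosen

jobs = [
    {"id": "J1", "p2": 9, "res": {0}},
    {"id": "J2", "p2": 7, "res": {0, 1}},
    {"id": "J3", "p2": 5, "res": {2}},
    {"id": "J4", "p2": 3, "res": set()},
]
print(select_compatible_jobs(jobs, n_machines=3))   # ['J1', 'J3', 'J4']
```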