Similar Documents (20 results)
1.
段亚南  何霆  褚滨生 《计算机工程与设计》2004,25(7):1206-1207,1217
To solve a class of Job Shop problems, a new hybrid algorithm with an adaptive mechanism is proposed. Based on an analysis and comparison of the simulated annealing algorithm and the genetic algorithm, and targeting their shared lack of a global guidance mechanism, the algorithm introduces an adaptive global guidance strategy that establishes a feedback mechanism between individuals and the population; the hybrid algorithm also combines the respective strengths of the two heuristics. Its effectiveness is verified on concrete test cases.
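The abstract does not give the algorithm's details; as a rough, hypothetical sketch of a GA/SA hybrid with a best-individual feedback step (the test function, parameters, and guidance rule are illustrative, not from the paper):

```python
import math
import random

random.seed(0)

def hybrid_ga_sa(fitness, pop, generations=200, sigma=0.5, temp=1.0, cooling=0.97):
    """Toy GA/SA hybrid (minimization): a GA-style population with Gaussian
    mutation, SA-style Metropolis acceptance, and a simple 'global guidance'
    step that pulls every individual toward the current best."""
    for _ in range(generations):
        nxt = []
        for x in pop:
            y = x + random.gauss(0, sigma)                  # mutate
            d = fitness(y) - fitness(x)
            # Metropolis rule: keep improvements, sometimes keep worse moves
            if d < 0 or random.random() < math.exp(-d / max(temp, 1e-9)):
                nxt.append(y)
            else:
                nxt.append(x)
        best_now = min(nxt, key=fitness)
        pop = [x + 0.1 * (best_now - x) for x in nxt]       # feedback toward best
        temp *= cooling
    return min(pop, key=fitness)

best = hybrid_ga_sa(lambda x: (x - 3.0) ** 2, [random.uniform(-10, 10) for _ in range(20)])
```

The guidance step stands in for the paper's individual-population feedback mechanism; in a real Job Shop setting the individuals would be operation schedules and the mutation a schedule perturbation.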

2.
High-dimensional indexing is a key technology in content-based retrieval, pattern recognition, and related fields, and its performance directly determines the query speed and accuracy of the whole system; however, the "curse of dimensionality" has long limited retrieval performance in high dimensions. By analyzing the small-world model, a complete hop-by-hop approximation index algorithm is proposed. The algorithm maintains only local neighborhood relations between points in the metric space, and answers range queries and approximate nearest-neighbor queries in high-dimensional space by hopping the query's "focus point" step by step toward the region containing the answers. Experiments show that the method handles high-dimensional vector retrieval effectively without relying on a prior distribution of the indexed data, and that it is well maintainable and extensible.
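A minimal sketch of the hop-by-hop idea described above, assuming a precomputed neighbor graph over the data points (the paper's actual index construction and parameters are not reproduced here):

```python
def greedy_search(graph, points, dist, query, start):
    """Hop-by-hop approximation: repeatedly move to the neighbor closest to
    the query until no neighbor improves (a local minimum of the distance)."""
    current = start
    while True:
        best = min(graph[current], key=lambda n: dist(points[n], query))
        if dist(points[best], query) < dist(points[current], query):
            current = best
        else:
            return current

# Toy 1-D example: points on a line, each linked to its immediate neighbors.
points = [float(i) for i in range(100)]
graph = {i: [j for j in (i - 1, i + 1) if 0 <= j < 100] for i in range(100)}
dist = lambda a, b: abs(a - b)
hit = greedy_search(graph, points, dist, query=42.3, start=0)
```

In a real small-world index the graph would also carry long-range shortcuts so the walk takes O(log n) hops rather than O(n).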

3.
This article proposes the development of a software simulator that allows the user to evaluate algorithms for recommender systems. This simulator consists of agents, items, a recommender, a controller, and a recorder, and it locates the agents and allocates the items based on a small-world network. An agent plays the role of a user in the recommender system, and the recommender also plays a role in the system. The controller handles the simulation flow where (1) the recommender recommends items to agents based on the recommendation algorithm, (2) each agent evaluates the items based on the agents’ rating algorithm and using the attributes of each item and agent, and (3) the recorder obtains the results of the rating and evaluation measurements for the recommendation pertaining to such information as precision and recall. This article considers the background of the proposal and the architecture of the simulator.

4.
An efficient query algorithm over massive high-dimensional data is proposed. The algorithm is based on the small-world network model and represents high-dimensional feature vectors as network nodes. It consists of two parts, a K-Means-based index construction algorithm and a randomized approximation query algorithm, and concrete steps are given for both. Extensive simulation shows that, by appropriately setting parameters such as the number of neighbors per small-world node, the maximum query path length, and the maximum number of iterations, the algorithm can satisfy user queries at different precision levels. The experimental results show that the algorithm retrieves massive high-dimensional data effectively.
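A toy illustration of the two-part scheme, in one dimension: cluster the data with K-Means to build the index, then answer a query by scanning only the cluster whose centroid is nearest (cluster count and data are illustrative; the paper's randomized small-world query is not reproduced):

```python
import random

random.seed(1)

def kmeans(vecs, k, iters=20):
    """Plain 1-D K-Means: returns the centroids and the cluster member lists."""
    centroids = random.sample(vecs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vecs:
            clusters[min(range(k), key=lambda i: abs(v - centroids[i]))].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

def approx_query(centroids, clusters, q):
    """Approximate nearest neighbor: scan only the nearest centroid's cluster."""
    i = min(range(len(centroids)), key=lambda j: abs(q - centroids[j]))
    return min(clusters[i], key=lambda v: abs(v - q))

vecs = [float(x) for x in range(100)]
centroids, clusters = kmeans(vecs, k=5)
nearest = approx_query(centroids, clusters, 37.2)
```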

5.

Evolutionary polynomial regression (EPR) is extensively used in engineering for soil properties modeling. This grey-box technique uses evolutionary computing to produce simple, transparent and well-structured models in the form of polynomial equations that best explain the observed data. A key task is then to determine mathematical structures for modeling physical phenomena and to select the optimal EPR model. This requires an algorithm to search through the model structure space and successfully produce feasible solutions that honor a set of statistical metrics. The complexity of EPR models increases greatly, however, with the number of polynomial terms used to tune these models. In this paper, we propose an alternative EPR for modeling complex soil properties. We implement a dual search-based EPR with self-adaptive offspring creation as model structure search strategy and couple a compromise programming tool to select a model that is preferred statistically relative to models with different polynomial terms. We illustrate our method using real-world data to improve predictions of optimal moisture content and creep index for soils. Our results demonstrate that the models derived using the proposed methodology can predict soil properties with adequate accuracy, physical meaning and lower number of parameters and input variables.


6.
Extracting significant features from high-dimension and small sample size biological data is a challenging problem. Recently, Michał Dramiński proposed the Monte Carlo feature selection (MC) algorithm, which was able to search over large feature spaces and achieved better classification accuracies. However, in MC the information of feature rank variations is not utilized and the ranks of features are not dynamically updated. Here, we propose a novel feature selection algorithm which integrates ideas from professional tennis player ranking, such as seed players and dynamic ranking, into Monte Carlo simulation. Seed players make the feature selection game more competitive and selective. The strategy of dynamic ranking ensures that the current best players always take part in each competition. The proposed algorithm is tested on 8 biological datasets. Results demonstrate that the proposed method is computationally efficient, stable and has favorable performance in classification.

7.
8.
Model selection plays a key role in the application of support vector machine (SVM). In this paper, a method of model selection based on the small-world strategy is proposed for least squares support vector regression (LS-SVR). In this method, the model selection is treated as a single-objective global optimization problem in which a generalization performance measure serves as the fitness function. To get better optimization performance, the main idea of depending more heavily on dense local connections in the small-world phenomenon is considered, and a new small-world optimization algorithm based on tabu search, called the tabu-based small-world optimization (TSWO), is proposed by employing tabu search to construct the local search operator. Therefore, the hyper-parameters with the best generalization performance can be chosen as the global optimum based on the powerful search ability of TSWO. Experiments on six complex multimodal functions are conducted, demonstrating that TSWO performs better in avoiding premature convergence of the population in comparison with the genetic algorithm (GA) and particle swarm optimization (PSO). Moreover, the effectiveness of the leave-one-out bound of LS-SVM on regression problems is tested on the noisy sinc function and benchmark data sets, and the numerical results show that model selection using TSWO almost always obtains smaller generalization errors than using GA and PSO, under the three generalization performance measures adopted.
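The TSWO algorithm itself is not reproduced here; a minimal generic tabu search over a discrete hyper-parameter grid, with a stand-in quadratic objective in place of a real cross-validation error surface, might look like:

```python
def tabu_search(objective, start, neighbors, max_iter=100, tabu_len=10):
    """Minimal tabu search: always move to the best non-tabu neighbor, keep a
    short-term memory of recent points, and track the global best separately."""
    current, best = start, start
    tabu = [start]
    for _ in range(max_iter):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=objective)
        if objective(current) < objective(best):
            best = current
        tabu.append(current)
        tabu = tabu[-tabu_len:]           # bounded tabu list
    return best

# Hypothetical grid of (log10 gamma, log10 C) hyper-parameters; the objective
# stands in for a generalization-error estimate and is minimized at (1, 2).
obj = lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2
nbrs = lambda p: [(p[0] + dx, p[1] + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
best = tabu_search(obj, (-3, -3), nbrs)
```

The tabu memory is what lets the walk leave a local minimum instead of oscillating around it, which is the role tabu search plays inside TSWO's local search operator.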

9.
The traditional mean-shift algorithm is extended with a histogram of oriented gradients and an adaptive selection between it and the color histogram, improving the robustness of mean-shift target tracking in complex scenes. The traditional mean-shift algorithm usually tracks a target with a single fixed color histogram, and tends to fail when the target itself or the tracking scene changes. By analyzing the similarity, in both color and gradient orientation, between the tracked target in the current scene and the target template, and setting a threshold, the currently effective target feature is selected and used, enabling target tracking under complex scene changes. A series of tracking experiments on moving targets in different scenes confirms the reliability of the algorithm.
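A hedged sketch of the adaptive cue selection: compare each candidate histogram to the template with the Bhattacharyya coefficient and use whichever cue still passes a threshold (the threshold, the preference order, and the tiny histograms are illustrative, not the paper's):

```python
import math

def bhattacharyya(p, q):
    """Similarity of two normalized histograms (1.0 means identical)."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def pick_feature(color_sim, grad_sim, threshold=0.6):
    """Keep the color cue while it still matches the template well enough,
    otherwise fall back to the gradient-orientation cue."""
    if color_sim >= threshold:
        return "color"
    if grad_sim >= threshold:
        return "gradient"
    return "both-unreliable"

template = [0.5, 0.3, 0.2]          # color histogram of the target template
candidate = [0.1, 0.3, 0.6]         # histogram measured in the current frame
sim = bhattacharyya(template, candidate)
choice = pick_feature(color_sim=sim, grad_sim=0.9)
```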

10.
In the process of learning a naive Bayes classifier, estimating probabilities from a given set of training samples is crucial. However, when the training samples are not adequate, the probability estimation method will inevitably suffer from the zero-frequency problem. To avoid this problem, the Laplace-estimate and the M-estimate are the two main methods used to estimate probabilities. The setting of the two important parameters m (an integer) and p (a probability) in these methods has a direct impact on the experimental results. In this paper, we study the existing probability estimation methods and carry out a parameter cross-test by experimentally analyzing the performance of the M-estimate with different settings for the two parameters m and p. These experimental results show that the optimal parameter values vary across data sets. Motivated by this analysis, we propose an estimation model based on self-adaptive differential evolution, and then an approach that calculates the optimal m and p values for each conditional probability to avoid the zero-frequency problem. We experimentally test our approach in terms of classification accuracy on the 36 benchmark machine learning repository data sets, and compare it to a naive Bayes with the Laplace-estimate and with the M-estimate under a variety of parameter settings from the literature, as well as the possibly optimal settings found in our experimental analysis. The experimental results show that the estimation model is efficient and that our proposed approach significantly outperforms the traditional probability estimation approaches, especially for large data sets (large numbers of instances and attributes).
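The two standard estimators mentioned above can be written down directly; here n_c is the observed count for a value, N the total count, k the number of distinct attribute values, m the prior weight, and p the prior probability:

```python
def laplace_estimate(n_c, N, k):
    """Laplace (add-one) smoothing: (n_c + 1) / (N + k)."""
    return (n_c + 1) / (N + k)

def m_estimate(n_c, N, m, p):
    """M-estimate: (n_c + m*p) / (N + m); m controls the weight of prior p."""
    return (n_c + m * p) / (N + m)

# With a zero observed count, both estimates stay strictly positive,
# which is exactly how they avoid the zero-frequency problem:
lap = laplace_estimate(0, 10, 3)        # 1/13
mes = m_estimate(0, 10, m=2, p=0.5)     # 1/12
```

The paper's contribution is choosing m and p per conditional probability via differential evolution rather than fixing them globally; the formulas themselves are the standard ones above.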

11.
Feature selection is a dimensionality reduction technique that helps to improve data visualization, simplify learning, and enhance the efficiency of learning algorithms. The existing redundancy-based approach, which relies on relevance and redundancy criteria, does not account for feature complementarity. Complementarity implies information synergy, in which additional class information becomes available due to feature interaction. We propose a novel filter-based approach to feature selection that explicitly characterizes and uses feature complementarity in the search process. Using theories from multi-objective optimization, the proposed heuristic penalizes redundancy and rewards complementarity, thus improving over the redundancy-based approach that penalizes all feature dependencies. Our proposed heuristic uses an adaptive cost function that uses redundancy–complementarity ratio to automatically update the trade-off rule between relevance, redundancy, and complementarity. We show that this adaptive approach outperforms many existing feature selection methods using benchmark datasets.

12.
Evan E. Anderson 《Software》1989,19(8):707-717
The proliferation of software packages has created a difficult, complex problem of evaluation and selection for many users. Traditional approaches to the quantification of package performance have relied on compensatory models, such as the linear weighted attribute model, which sums the weighted ratings of software attributes. These approaches define the dimensions of quality too narrowly and, therefore, omit substantial amounts of information from consideration. This paper presents an alternative methodology, previously used in capital rationing and tournament ranking, that expands the opportunity for objective insight into software quality. In particular, it considers three measures of quality: the frequency with which the attribute ratings of one package exceed those of another; the presence of outliers, where very poor performance may exist on a single attribute and be glossed over by compensatory methods; and the cumulative magnitude of attribute ratings on one package that exceed those on others. The proposed methodology is applied to the evaluation of the following software types: word processing, database management systems, spreadsheet/financial planning, integrated software, graphics, data communications and project management.
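A small sketch of the three non-compensatory measures described: win frequency, outlier presence, and cumulative exceedance between two attribute-rating vectors (the rating scale and outlier threshold are illustrative):

```python
def compare_packages(a, b, outlier_threshold=3):
    """Non-compensatory comparison of two attribute-rating vectors: how often
    a beats b (and vice versa), a's very poor ratings, and a's cumulative
    margin on the attributes where it wins."""
    wins_a = sum(1 for x, y in zip(a, b) if x > y)
    wins_b = sum(1 for x, y in zip(a, b) if y > x)
    outliers_a = sum(1 for x in a if x <= outlier_threshold)
    margin_a = sum(x - y for x, y in zip(a, b) if x > y)
    return wins_a, wins_b, outliers_a, margin_a

# A compensatory weighted sum treats these packages as equal (both total 24),
# but the frequency and outlier measures separate them:
pkg1 = [8, 8, 8]      # consistently good
pkg2 = [10, 12, 2]    # strong overall, but one very poor attribute
result = compare_packages(pkg1, pkg2)
```

This is the sense in which a compensatory model "glosses over" a single bad attribute: pkg2's rating of 2 vanishes into its total, while the outlier count surfaces it.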

13.
In this article, energy trophallaxis, i.e., distributed autonomous energy management methodology inspired by social insects and bat behavior, and its advantages, are shown by a series of computer simulations to address the survivability of organized groups of agents in a dynamic environment with uncertainty. The uncertainty of the agents’ organizational behavior is represented by two Lévy distributions. By carefully controlling energy donation behavior based on these distributions, we can examine the survivability of a larger group that traditional methods cannot analyze. As a result, even a small degree of friendship throughout the organization makes the group’s survivability improve dramatically.

14.
SCP2P: an adaptive P2P model based on node attribute characteristics
李江峰  张晨曦  周兴铭 《计算机应用》2008,28(10):2580-2583
Nodes in a peer-to-peer network have inherent attribute characteristics. In previous work, these attributes were either ignored entirely or considered only in a simple, isolated way. Making combined use of node attributes, an adaptive P2P model based on node attribute characteristics is proposed. In this model, nodes adaptively form clusters according to their attributes, and the model dynamically adjusts and maintains the connections between nodes, as well as node roles and cluster sizes, driven by actual demand.

15.
Feature selection plays an important role in many fields. A feature selection method based on a hybrid adaptive gravitational search algorithm is proposed, which selects a minimal feature subset from the data while maximizing classification accuracy. The algorithm combines two solution-update strategies for search and introduces a population-reduction method, effectively balancing global exploration and local convergence; adaptive control parameters are also proposed to reduce the influence of parameter settings on performance. Experiments on seven real-world datasets show that, in terms of classification accuracy, feature subset size, and running time, the proposed method outperforms the original algorithm and existing related algorithms, exhibits good overall performance, and is an effective feature selection method.

16.
To address the difficulty of setting control parameters and the tendency to stagnate in ant colony algorithms, the ant colony algorithm is improved with cloud model theory: the cloud model serves as a fuzzy membership function, and global pheromone updates are applied only to a subset of better paths, improving the algorithm's exploitation and exploration of paths; at the same time, an adaptive adjustment strategy is realized by controlling the parameters of the cloud membership function. Comparative simulations on the TSP show that the cloud-model-based ant colony algorithm clearly outperforms the ACS and MMAS algorithms.

17.
Automatic model selection for SVM based on a heuristic genetic algorithm
Automatic model selection for the support vector machine (SVM) is key to its practical application. The commonly used leave-one-out (LOO) method based on exhaustive search is cumbersome and inefficient, and to date most algorithms cannot perform model selection automatically and effectively. This paper uses a real-coded heuristic genetic algorithm to perform automatic model selection for SVMs with a Gaussian kernel. After analyzing the influence of the SVM hyper-parameters on performance and two estimates of SVM performance, a suitable fitness function for the genetic algorithm is determined. Simulation results on synthetic and real data demonstrate the feasibility and efficiency of the proposed method.

18.
High-dimensional indexing based on the small-world model handles high-dimensional retrieval effectively, but insertion and deletion algorithms suited to this index structure had not been studied in depth, limiting its applicability. Based on a thorough analysis of the index structure's theoretical model, iterative insertion and deletion algorithms that preserve the small-world property of the index are proposed. The insertion algorithm is modeled as a network growth model whose degree distribution is analyzed with mean-field theory; the clustering coefficient and average path length are measured experimentally. Theoretical analysis and experimental results show that the insertion and deletion algorithms keep the index structure small-world after updates, extending the applicability of this indexing technique.
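A simplified sketch of iterative insertion and deletion on a neighbor-graph index (here the k nearest nodes are found by exhaustive scan for brevity, where a real index would use its own search; the re-linking rule on deletion is only one plausible repair choice, not the paper's):

```python
def insert_node(graph, points, dist, new_point, k=3):
    """Insertion sketch: add the point and link it symmetrically to its k
    nearest existing nodes, preserving the short-range links."""
    new_id = len(points)
    points.append(new_point)
    neighbors = sorted(graph, key=lambda i: dist(points[i], new_point))[:k]
    graph[new_id] = list(neighbors)
    for n in neighbors:
        graph[n].append(new_id)
    return new_id

def delete_node(graph, points, node):
    """Deletion sketch: remove the node and cross-link its former neighbors
    so the surrounding region of the graph stays connected."""
    nbrs = graph.pop(node)
    for n in nbrs:
        graph[n].remove(node)
        for m in nbrs:
            if m != n and m not in graph[n]:
                graph[n].append(m)

points = [0.0, 1.0, 2.0, 10.0]
graph = {0: [1], 1: [0, 2], 2: [1], 3: []}
dist = lambda a, b: abs(a - b)
nid = insert_node(graph, points, dist, 1.5, k=2)
linked = sorted(graph[nid])
delete_node(graph, points, nid)
```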

19.
The Hidden Web is difficult to crawl directly because of its hidden nature, making it a new area of information retrieval research. A method for acquiring Hidden Web information is proposed, and the key techniques for implementing it are discussed. A heuristic query-term selection algorithm is designed to improve crawling efficiency. Experiments confirm the effectiveness of the model and the algorithm.

20.
This paper studies in depth how large-margin methods select features from inter-sample similarity and how information entropy selects features from inter-feature correlation, and proposes a feature selection algorithm that effectively fuses these two kinds of methods. The Relief algorithm is used to obtain an effective feature ranking, which is then divided into several segments. A sampling rate is set for each segment, and symmetric uncertainty is used as a heuristic factor to obtain the feature subset of each local random subspace. All obtained feature subsets together form the final feature selection result. Experimental results show that the method outperforms several commonly used feature selection algorithms.
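A minimal version of the basic Relief weighting used for the initial ranking (binary classes, numeric features; the segmentation and symmetric-uncertainty steps of the fused method are not sketched):

```python
import random

def relief(X, y, n_samples=50, seed=0):
    """Basic Relief: for each sampled instance, reward features on which the
    nearest miss (other class) differs and penalize features on which the
    nearest hit (same class) differs."""
    rng = random.Random(seed)
    n_feat = len(X[0])
    w = [0.0] * n_feat
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    for _ in range(n_samples):
        i = rng.randrange(len(X))
        hits = [j for j in range(len(X)) if j != i and y[j] == y[i]]
        misses = [j for j in range(len(X)) if y[j] != y[i]]
        h = min(hits, key=lambda j: dist(X[i], X[j]))
        m = min(misses, key=lambda j: dist(X[i], X[j]))
        for f in range(n_feat):
            w[f] += abs(X[i][f] - X[m][f]) - abs(X[i][f] - X[h][f])
    return w

# Synthetic data: feature 0 separates the two classes; feature 1 is noise.
rng = random.Random(1)
X = [[cls + rng.gauss(0, 0.1), rng.uniform(0, 1)]
     for cls in (0, 1) for _ in range(20)]
y = [cls for cls in (0, 1) for _ in range(20)]
w = relief(X, y)
```

Ranking features by w and slicing the ranking into segments is the starting point of the fused algorithm described above.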
