Similar Documents
Found 20 similar documents (search time: 140 ms)
1.
To further optimize hard knapsack problems, an economic model based on dynamic expected efficiency is presented on the basis of classical theory, and a new knapsack optimization algorithm is constructed; standalone and comparative simulation experiments were conducted. Experiments show that, on the same class of knapsack problems, the algorithm outperforms the greedy algorithm, backtracking, dynamic programming, and branch-and-bound; compared with the glowworm swarm optimization algorithm, it greatly improves convergence speed (nearly 10x faster) and saves storage space. Finally, a study of 20 knapsack problems verifies the feasibility of the algorithm and determines its range of applicability.
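For reference, two of the classical baselines named above can be sketched as follows. This is a minimal illustration of the standard dynamic-programming and value-density greedy solvers for the 0-1 knapsack problem, not the paper's economic-model algorithm, whose details the abstract does not give:

```python
def knapsack_dp(values, weights, capacity):
    """Classical 0-1 knapsack DP: O(n * capacity) time, exact."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

def knapsack_greedy(values, weights, capacity):
    """Value-density greedy baseline: fast but not always optimal."""
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    total = used = 0
    for v, w in items:
        if used + w <= capacity:
            used += w
            total += v
    return total
```

On the instance below the greedy baseline returns 160 while the exact DP returns 220, which is the kind of gap the comparative experiments measure.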

2.
An optimal coverage algorithm for targets in wireless sensor networks   (Cited: 5 in total; 1 self-citation, 4 by others)
Among algorithms for point-target coverage in wireless sensor networks, the set-partition algorithm is simple but inefficient and applies only to network models in which every sensor node has equal energy. We therefore improve the set-partition algorithm and propose a heuristic greedy optimal-coverage algorithm. The new algorithm applies to network models with normally distributed node energy, adopts a critical-target-first coverage strategy and a node energy-efficiency maximization strategy, prolongs the network coverage lifetime, and improves efficiency. Experiments show the new algorithm extends network lifetime by more than 80% and has better adaptability and stability.
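The heuristic described is a variant of greedy set cover. A minimal sketch of the underlying greedy coverage step, ignoring the energy distribution and critical-target weighting that the abstract does not specify:

```python
def greedy_cover(sensors, targets):
    """Pick sensors until all targets are covered, each time choosing the
    sensor that covers the most still-uncovered targets.
    sensors: dict name -> set of targets that sensor covers."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        best = max(sensors, key=lambda s: len(sensors[s] & uncovered))
        if not sensors[best] & uncovered:
            break  # remaining targets cannot be covered by any sensor
        chosen.append(best)
        uncovered -= sensors[best]
    return chosen, uncovered
```

A weighted variant would divide each sensor's gain by its remaining energy, which is one plausible reading of the energy-efficiency maximization strategy.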

3.
Based on the minimum error correction (MEC) model, a heuristic haplotype reconstruction algorithm, H-MEC, is proposed. The algorithm proceeds along the single nucleotide polymorphism (SNP) sites of the haplotype in order; at each SNP site, the fragments covering that site are partitioned into two sets according to their value at the site, and the fragments in the larger set are used for reconstruction. Experiments were conducted on chromosome-1 haplotypes of 60 individuals from the CEPH samples released by the HapMap project. Results show that, under various parameter settings, H-MEC achieves higher haplotype reconstruction rates than the Fast Hare and DGS algorithms, and it also runs efficiently when reconstructing long haplotypes.
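The per-site partition step can be illustrated with a simplified column-wise majority vote. This sketch assumes aligned, equal-length fragments over {'0','1','-'} ('-' marks an uncovered site) and omits H-MEC's reuse of the larger set's fragments at downstream sites:

```python
def reconstruct_haplotype(fragments):
    """Column-wise majority vote over aligned SNP fragments.
    At each SNP site, fragments covering the site are split into two sets
    by their value there; the larger set fixes the reconstructed allele."""
    n = len(fragments[0])
    haplotype = []
    for j in range(n):
        zeros = sum(1 for f in fragments if f[j] == '0')
        ones = sum(1 for f in fragments if f[j] == '1')
        haplotype.append('0' if zeros >= ones else '1')
    return ''.join(haplotype)
```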

4.
To solve equality constraints in automatic test-case generation, a branch-and-bound search algorithm with an equality-handling strategy is proposed. First, the linear-algebra method for deciding whether a system of linear equations has a solution is introduced into the branch-and-bound test-case generation framework. Then, building on the existing algorithmic model, an integrated equality-handling branch-and-bound search algorithm is proposed to support equality handling for multiple variable types. Finally, equality constraints are handled in three classes (no solution, multiple solutions, and a unique solution), covering all cases of equality-constraint solving. Experimental results show that the algorithm can detect some infeasible paths, greatly reduce test-case generation time, and improve coverage; tests on large projects and a comparison with the open-source constraint solver Choco also demonstrate improved testing efficiency.
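The solvability test mentioned first can be sketched with the Rouche-Capelli criterion: compare rank(A) and rank([A|b]) against the number of unknowns. This illustrates the three-way classification for real-valued linear systems only, not the paper's integrated framework with multiple variable types:

```python
def classify_system(A, b, eps=1e-9):
    """Classify A x = b as having 'none', 'unique', or 'many' solutions
    by comparing rank(A) and rank([A|b]) with the number of unknowns."""
    def rank(M):
        M = [row[:] for row in M]          # work on a copy
        rows, cols = len(M), len(M[0])
        r = 0
        for c in range(cols):
            piv = next((i for i in range(r, rows) if abs(M[i][c]) > eps), None)
            if piv is None:
                continue
            M[r], M[piv] = M[piv], M[r]
            for i in range(rows):
                if i != r and abs(M[i][c]) > eps:
                    f = M[i][c] / M[r][c]
                    for k in range(c, cols):
                        M[i][k] -= f * M[r][k]
            r += 1
        return r

    n = len(A[0])
    ra = rank(A)
    rab = rank([row + [bi] for row, bi in zip(A, b)])
    if ra < rab:
        return 'none'
    return 'unique' if ra == n else 'many'
```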

5.
A branch-and-bound algorithm for a class of circuit wiring problems   (Cited: 1 in total; 0 self-citations, 1 by others)
The branch-and-bound strategy is important and effective for many practical problems. This paper first formulates a class of circuit wiring problems, then gives a branch-and-bound algorithm for the problem and analyzes its complexity. Experimental results verify the effectiveness of the proposed method.

6.
The hierarchical distributed latent Dirichlet allocation (HD-LDA) algorithm is a probabilistic-model-based text classification algorithm that improves on latent Dirichlet allocation (LDA); unlike LDA, which runs only on a single machine, it can run in a distributed framework for parallel processing. Mahout implements HD-LDA on Hadoop, but because the per-node computation is heavy, classifying large data sets still takes too long: although a large text collection is spread across multiple nodes for iterative inference, inference over the documents on a single node remains sequential. This paper therefore proposes combining Hadoop with graphics processing units (GPUs): the inference over each node's text collection is moved onto the GPU, enabling parallel inference over multiple documents within a node, and multiple GPUs working in parallel accelerate HD-LDA. Application results show that this approach achieves a 7x speedup for HD-LDA on large text collections in the distributed framework.

7.
季挺, 张华. Control and Decision (《控制与决策》), 2017, 32(12): 2153-2161
To address the heavy computation and incompletely automatic basis-function construction common in current approximate policy iteration reinforcement learning algorithms, a nonparametric approximate generalized policy iteration reinforcement learning algorithm based on state clustering (NPAGPI-SC) is proposed. The algorithm collects samples via a two-stage random sampling process; computes the initial parameters of the approximator using a trial-and-error process and an estimation method aiming at full sample coverage; adaptively adjusts the approximator during learning using the delta rule and the nearest-neighbor idea; and selects actions with a greedy policy. Simulation results on balancing control of a single inverted pendulum verify the effectiveness and robustness of the proposed algorithm.

8.
In recent years, the haplotype assembly problem has been widely studied and has become one of the hottest areas of computational biology. This paper studies individual haplotype reconstruction and proposes a heuristic algorithm, HAW, based on the weighted minimum letter flip model. HAW first generates a pair of initial haplotypes and then completes reconstruction by repeatedly extending them. Experimental results show that HAW solves the model effectively, achieves higher reconstruction rates than previous algorithms, and runs fast, giving it high practical value.

9.
The knapsack problem with a single continuous variable (KPC) is a natural generalization of the standard 0-1 knapsack problem; in KPC the knapsack capacity is not fixed, which makes the problem harder to solve. To address the limited accuracy of existing differential evolution (DE) algorithms on high-dimensional KPC instances, a Lamarckian evolution-based DE (LEDE) algorithm is proposed: improvements produced by a greedy repair-and-optimization operator are inherited by offspring, accelerating the convergence of DE and improving its accuracy on high-dimensional KPC instances. In addition, a value-based greedy optimization strategy is introduced into the greedy repair operator to optimize the feasible solutions generated by the value-density-based greedy repair strategy, helping the algorithm escape local optima. Experiments on 40 KPC instances show that Lamarckian evolution and the value-based greedy optimization strategy improve the refinement ability of LEDE, and that LEDE outperforms other intelligent optimization algorithms in both best and average solutions.
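A minimal sketch of the two greedy phases named above (density-based repair, then value-based improvement), assuming the plain 0-1 knapsack setting rather than the full KPC model with its continuous capacity variable; this illustrates the strategy, not the paper's exact operator:

```python
def greedy_repair(x, values, weights, capacity):
    """Repair an infeasible 0-1 knapsack solution, then greedily improve it.
    Repair: drop selected items in increasing value density until feasible.
    Improve: add unselected items in decreasing value while they still fit."""
    x = list(x)
    n = len(x)
    load = sum(w for i, w in enumerate(weights) if x[i])
    for i in sorted(range(n), key=lambda i: values[i] / weights[i]):
        if load <= capacity:
            break
        if x[i]:
            x[i] = 0
            load -= weights[i]
    for i in sorted(range(n), key=lambda i: -values[i]):
        if not x[i] and load + weights[i] <= capacity:
            x[i] = 1
            load += weights[i]
    return x
```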

10.
Semi-supervised learning based on one-class classifiers   (Cited: 1 in total; 0 self-citations, 1 by others)
An Ensemble one-class semi-supervised learning algorithm is proposed that combines the strengths of one-class learners and ensemble learning. The algorithm first builds one one-class classifier for each of the two classes in a small amount of labeled data. The two one-class classifiers then jointly label the unlabeled samples, and the newly labeled samples are used to adjust and optimize the two decision boundaries. Finally, the newly labeled data and the original labeled data are combined to train a base classifier, and multiple base classifiers are ensembled to vote on test samples. Experiments on 5 UCI data sets show that the algorithm improves average recognition accuracy by 4.5% over the tri-training algorithm and by 8.9% over a one-class classifier trained on labeled data only. The results indicate that the algorithm is effective for semi-supervised problems.

11.
The knowledge of haplotypes allows researchers to identify the genetic variation affecting phenotypes such as health, disease and response to drugs. However, obtaining haplotype data by experimental methods is both time-consuming and expensive. Haplotype inference (HI) from genotypes is a challenging problem in the genetics domain. There are several models for inferring haplotypes from genotypes; one of them, haplotype inference by pure parsimony (HIPP), aims to minimize the number of distinct haplotypes used. HIPP has been proved NP-hard. In this paper, a novel binary particle swarm optimization (BPSO) is proposed to solve the HIPP problem. The algorithm was tested on a variety of simulated and real data sets and compared with some current methods. The results showed that the proposed method obtains the optimal solutions in most cases, i.e., it is a potentially powerful method for HIPP.
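A single binary-PSO update step can be sketched as below; the inertia and acceleration coefficients (`w`, `c1`, `c2`) are conventional defaults, not values from the paper:

```python
import math
import random

def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One binary-PSO update: velocities stay real-valued, and each bit is
    resampled with probability sigmoid(velocity) (the transfer function)."""
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        vi = (w * vi
              + c1 * rng.random() * (pi - xi)    # pull toward personal best
              + c2 * rng.random() * (gi - xi))   # pull toward global best
        prob = 1.0 / (1.0 + math.exp(-vi))       # sigmoid transfer function
        new_x.append(1 if rng.random() < prob else 0)
        new_v.append(vi)
    return new_x, new_v
```

For HIPP, each bit vector would encode a candidate selection of haplotypes, with a fitness that penalizes genotypes left unexplained.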

12.
A very challenging problem in the genetics domain is to infer haplotypes from genotypes. This process is expected to identify genes affecting health, disease and response to drugs. One of the approaches to haplotype inference aims to minimise the number of different haplotypes used, and is known as haplotype inference by pure parsimony (HIPP). The HIPP problem is computationally difficult, being NP-hard. Recently, a SAT-based method (SHIPs) has been proposed to solve the HIPP problem. This method iteratively considers an increasing number of haplotypes, starting from an initial lower bound. Hence, one important aspect of SHIPs is the lower bounding procedure, which reduces the number of iterations of the basic algorithm, and also indirectly simplifies the resulting SAT model. This paper describes the use of local search to improve existing lower bounding procedures. The new lower bounding procedure is guaranteed to be as tight as the existing procedures. In practice the new procedure is in most cases considerably tighter, allowing significant improvement of performance on challenging problem instances.

13.
In 2004, Hwang and Chen proposed new multi-proxy multi-signature schemes that allow a group of authorized proxy signers to sign messages on behalf of a group of original signers. Later, Lyuu and Wu pointed out that Hwang and Chen's schemes were not secure and proposed a modified scheme, which they claimed to be secure. In this paper we show a new attack on the Lyuu-Wu schemes; moreover, the original Hwang-Chen schemes are also vulnerable to this insider attack. Furthermore, we point out improvements for the Lyuu-Wu and Hwang-Chen schemes based on Wang et al.'s methods [Wang GL, Han XX, Zhu B. On the security of two threshold signature schemes with traceable signers. In: Applied Cryptography and Network Security (ACNS 2003). Lect Notes Comput Sci (LNCS), vol. 2846, Springer-Verlag; 2003. p. 111-222]. These improvements can resist our insider attack.

14.
张倩, 吴璟莉. Computer Science (《计算机科学》), 2017, 44(1): 75-79, 112
Reconstructing the haplotypes of triploid individuals strongly supports research into the genetic characteristics and phenotypic differences of triploid species. For the minimum error correction with genotype information (MEC/GI) model, an enumeration-based triploid haplotype reconstruction algorithm, EHTR, is proposed. The algorithm reconstructs the value of each single nucleotide polymorphism (SNP) site on the three haplotypes in order: for a given site, it enumerates the three possible haplotype value combinations allowed by the genotype and selects the combination with the highest fragment support as the reconstructed value. The total time complexity is O(mn + m log m + cnl). Test data were generated with the CELSIM and MetaSim read simulators, and the reconstruction rates and running times of EHTR, GTIHR, W-GA, and Q-PSO were compared under various settings of fragment coverage, error rate, fragment length, haplotype length, and haplotype Hamming distance. The results show that EHTR achieves higher reconstruction rates in shorter running time under all parameter settings.

15.
In crowd simulation research, the social force model proposed by Helbing is a classic microscopic simulation model that can reproduce common crowd self-organization phenomena. However, the model still suffers from problems such as pedestrian oscillation and pedestrian overlap, so many researchers have enriched and improved it with respect to parameter settings, force ranges, and algorithmic optimization. The improved social force model proposed by Gao et al., which accounts for the relative velocity of pedestrians, remains a basis and important reference for later improvements and simulation experiments. However, Gao et al. demonstrated the advantages of their model with only two experiments, which is insufficient, and ran no further self-organization experiments to show that the improved model retains the original model's ability to reproduce crowd self-organization. This paper therefore conducts verification and evaluation experiments on Gao et al.'s model. The verification experiments reproduce Gao et al.'s two experiments and confirm the model's advantages; the evaluation experiments confirm that the improved model retains the ability to reproduce crowd self-organization, and they also reveal and analyze a pedestrian-overlap problem in Gao et al.'s model.

16.
We revisit from a fairness point of view the problem of online load balancing in the restricted assignment model and the 1-∞ model. We consider both a job-centric and a machine-centric view of fairness, as proposed by Goel et al. (In: Symposium on discrete algorithms, pp. 384-390, 2005). These notions are equivalent to the approximate notion of prefix competitiveness proposed by Kleinberg et al. (In: Proceedings of the 40th annual symposium on foundations of computer science, p. 568, 2001), as well as to the notion of approximate majorization, and they generalize the well-studied notion of max-min fairness. We resolve a question posed by Goel et al. (2005), proving that the greedy strategy is globally O(log m)-fair, where m denotes the number of machines. This result improves upon the analysis of Goel et al. (2005), who showed that the greedy strategy is globally O(log n)-fair, where n is the number of jobs. Typically n ≫ m, so our improvement is significant. Our proof matches the known lower bound for the problem with respect to the measure of global fairness. The improved bound is obtained by analyzing, in a more accurate way, the more general restricted assignment model studied previously in Azar et al. (J. Algorithms 18:221-237, 1995). We provide an alternative bound which is never worse than the bounds of Azar et al. (1995) and is strictly better in many cases. The bound we prove is, in fact, much more general: it bounds the load on any prefix of the most loaded machines. As a corollary of this more general bound we find that the greedy algorithm results in an assignment that is globally O(log m)-balanced. This last result generalizes the earlier result of Goel et al. (2005), who proved that the greedy algorithm yields an assignment that is globally O(log m)-balanced for the 1-∞ model.
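The greedy strategy analyzed here simply sends each arriving job to the least-loaded machine it is allowed on; a minimal sketch of the restricted assignment setting:

```python
def greedy_assign(jobs, m):
    """Online greedy for restricted assignment.
    jobs: list of (load, allowed_machines); each job is placed on the
    currently least-loaded machine among those it is allowed to use."""
    loads = [0] * m
    assignment = []
    for load, allowed in jobs:
        target = min(allowed, key=lambda i: loads[i])
        loads[target] += load
        assignment.append(target)
    return loads, assignment
```

In the 1-∞ model, `allowed` encodes the machines where the job has finite (unit) load; everywhere else its load would be infinite.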

17.
A new, rectilinear-monotone polygonally shaped fault block model, called Minimal-Connected-Component (MCC), was proposed in [D. Wang, A rectilinear-monotone polygonal fault block model for fault-tolerant minimal routing in mesh, IEEE Trans. Comput. 52 (3) (2003) 310-320] for minimal adaptive routing in mesh-connected multiprocessor systems. This model refines the widely used rectangular model by including fewer non-faulty nodes in fault blocks, taking the positions of the source and destination nodes relative to the faulty nodes into consideration when constructing fault blocks. An adaptive routing algorithm was given in Wang (2003) that constructs a minimal "Manhattan" route avoiding all fault blocks, should such a route exist. However, if no minimal route exists, we still need to find a route, preferably as short as possible. In this paper, we propose a heuristic algorithm that takes a greedy approach and can compute a nearly shortest route without much overhead. The significance of this algorithm lies in the fact that routing is performed frequently and messages need to reach their destinations as soon as possible; one would therefore prefer a fast answer about which route to take (and then take it) over spending too much time working out an absolutely shortest route.

18.
In this paper, an automatic system is presented for target recognition using target echo signals of High Resolution Range (HRR) radars. The paper deals especially with the combined feature extraction and classification of measured real target echo signal waveforms acquired with an X-band pulse radar. Past studies in radar target recognition have shown that the learning speed of feedforward neural networks is in general much slower than required, which has been a major disadvantage. There are two key reasons for this status of feedforward neural networks: (1) slow gradient-based learning algorithms are extensively used to train the networks, and (2) all the parameters of the networks are tuned iteratively by such learning algorithms (Feng et al., 2009, Huang and Siew, 2004, Huang and Chen, 2007, Huang and Chen, 2008, Huang et al., 2006, Huang et al., 2010, Huang et al., 2004, Huang et al., 2005, Huang et al., 2012, Huang et al., 2008, Huang and Siew, 2005, Huang et al., 2011, Huang et al., 2006, Huang et al., 2006a, Huang et al., 2006b, Lan et al., 2009, Li et al., 2005, Liang et al., 2006, Liang et al., 2006, Rong et al., 2009, Wang and Huang, 2005, Wang et al., 2011, Yeu et al., 2006, Zhang et al., 2007, Zhu et al., 2005).
To overcome these disadvantages of feedforward neural networks for automatic target recognition, this paper applies a learning algorithm called extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs) (Feng et al., 2009, Huang and Siew, 2004, Huang and Chen, 2007, Huang and Chen, 2008, Huang et al., 2006, Huang et al., 2010, Huang et al., 2004, Huang et al., 2005, Huang et al., 2012, Huang et al., 2008, Huang and Siew, 2005, Huang et al., 2011, Huang et al., 2006, Huang et al., 2006a, Huang et al., 2006b, Lan et al., 2009, Li et al., 2005, Liang et al., 2006, Liang et al., 2006, Rong et al., 2009, Wang and Huang, 2005, Wang et al., 2011, Yeu et al., 2006, Zhang et al., 2007, Zhu et al., 2005), which randomly chooses hidden nodes and analytically determines the output weights of the SLFN. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. Moreover, the Discrete Wavelet Transform (DWT) and wavelet entropy are used for adaptive feature extraction in the time-frequency domain, strengthening the features fed to the ELM. The correct recognition performance of this new system is compared with feedforward neural networks. The experimental results show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks.
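The core of ELM training (a random hidden layer, then output weights from a single least-squares solve) can be sketched as follows. This toy version substitutes a ridge-regularized normal-equation solve for the Moore-Penrose pseudo-inverse; the tanh activation and the `hidden` and `ridge` values are illustrative choices, not the paper's settings:

```python
import math
import random

def solve(A, y):
    """Gaussian elimination with partial pivoting for A x = y."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_train(X, T, hidden, seed=0, ridge=1e-6):
    """ELM: random input weights/biases, hidden activations H = tanh(XW + b),
    then output weights beta from (H^T H + ridge*I) beta = H^T T."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    H = [[math.tanh(sum(wk * xk for wk, xk in zip(W[j], x)) + b[j])
          for j in range(hidden)] for x in X]
    A = [[sum(H[i][p] * H[i][q] for i in range(len(H)))
          + (ridge if p == q else 0.0) for q in range(hidden)]
         for p in range(hidden)]
    y = [sum(H[i][p] * T[i] for i in range(len(H))) for p in range(hidden)]
    return W, b, solve(A, y)

def elm_predict(x, W, b, beta):
    h = [math.tanh(sum(wk * xk for wk, xk in zip(W[j], x)) + b[j])
         for j in range(len(W))]
    return sum(hj * bj for hj, bj in zip(h, beta))
```

Because the only trained parameters come from one linear solve, training is non-iterative, which is the source of the learning-speed advantage the abstract reports.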

19.
The minimum positive influence dominating set (PIDS) problem in social networks is an NP-hard combinatorial optimization problem. Two typical greedy algorithms solve it quickly, but the quality of their greedy solutions leaves room for improvement. The rotated greedy strategy improves greedy-solution quality without increasing the time complexity of the greedy algorithm, and experimental studies have shown that it effectively strengthens greedy algorithms for several NP-hard problems. This paper merges the rotated greedy strategy into the two greedy PIDS algorithms and proposes the corresponding rotated greedy algorithms. Experiments on typical real social network instances show that, compared with the original greedy algorithms, the rotated greedy algorithms obtain solutions of measurably higher quality.
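Under the common PIDS definition that every vertex needs at least half of its neighbors inside the set, the baseline (non-rotated) greedy can be sketched as below; this illustrates the kind of greedy the paper starts from, not its rotated variant:

```python
import math

def greedy_pids(adj):
    """Greedy positive influence dominating set: every vertex v must end up
    with at least ceil(deg(v)/2) neighbors in D. Repeatedly add the vertex
    whose inclusion reduces the total remaining deficit the most."""
    need = {v: math.ceil(len(adj[v]) / 2) for v in adj}
    D = set()
    while any(n > 0 for n in need.values()):
        best = max((u for u in adj if u not in D),
                   key=lambda u: sum(1 for w in adj[u] if need[w] > 0))
        D.add(best)
        for w in adj[best]:
            if need[w] > 0:
                need[w] -= 1
    return D
```

The rotated strategy would rerun this greedy from different vertex orderings and keep the best set found, at the same asymptotic cost per run.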

20.
Wang Guozhan et al. (Computer Engineering, 2010, No. 3 and No. 8) attacked the proxy blind signature scheme of Gu Lize et al. and the multi-proxy blind signature scheme of Lu et al., claiming that the two schemes do not satisfy blindness. Addressing Wang et al.'s traceability attacks on the schemes of Gu et al. and Lu et al., this paper shows that the attacks are invalid and proves by construction that the proxy signer cannot trace a signature from the available information. The analysis demonstrates that Wang et al.'s attacks fail and that the schemes of Gu et al. and Lu et al. still satisfy blindness and remain untraceable.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号