Similar Literature
 20 similar documents found (search time: 46 ms)
1.
Recently, casting planning as propositional satisfiability (SAT) has been shown to be an efficient technique for plan synthesis. This article responds to the recently proposed challenge of developing novel propositional encodings that are based on a combination of different types of plan refinements, and of characterizing the tradeoffs. We refer to these encodings as hybrid encodings. Investigating these encodings is important because it can give insights into which kinds of planning problems can be solved faster with hybrid encodings.
Encodings based on partial-order planning and state-space planning have been reported in previous research. We propose a new type of encoding, called a unifying encoding, that subsumes these two encodings. We also report on several other hybrid encodings. Next, we show how the satisfiability framework can be extended to incremental planning. State-space encoding is attractive because of its smaller size, and causal encoding is attractive because of its greater flexibility in reordering steps. We show that hybrid encodings have a larger size and less flexibility in step reordering and thus do not combine the best of both encodings. We discuss in detail several specific planning scenarios where hybrid encodings are likely to be superior to nonhybrid encodings.

2.
One approach to solving Constraint Satisfaction Problems (CSPs) (and related Constraint Optimization Problems (COPs)) involving integer and Boolean variables is reduction to the propositional satisfiability problem (SAT). A number of encodings for this purpose exist (e.g., direct, log, support, order), as well as specific encodings for some frequently encountered constraints (e.g., cardinality constraints, global constraints). However, no single encoding performs well on all classes of problems, and there is a need for a system that supports multiple encodings. We present a system that translates specifications of finite linear CSP problems into SAT instances using several well-known encodings and their combinations. We also present a methodology for selecting a suitable encoding based on simple syntactic features of the input CSP instance. A thorough evaluation has been performed on large publicly available corpora, and our encoding selection method improves upon the efficiency of existing encodings and the state-of-the-art tools used in the comparison.
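The direct encoding named in this abstract admits a compact sketch (an illustrative sketch, not the authors' system; the function name and clause layout are our own): each CSP variable gets one Boolean per domain value, with at-least-one and at-most-one clauses per variable and conflict clauses for forbidden value pairs.

```python
from itertools import combinations

def direct_encode(domains, nogoods):
    """Direct encoding of a finite binary CSP into CNF clauses.

    domains: dict mapping each variable to its list of values
    nogoods: list of ((var1, val1), (var2, val2)) forbidden pairs
    Returns (num_boolean_vars, clauses) with DIMACS-style integer literals.
    """
    lit, n = {}, 0
    for x, dom in domains.items():
        for v in dom:
            n += 1
            lit[(x, v)] = n  # Boolean "x takes value v"
    clauses = []
    for x, dom in domains.items():
        # at-least-one: x takes some value from its domain
        clauses.append([lit[(x, v)] for v in dom])
        # at-most-one: pairwise mutual exclusion of values
        for v, w in combinations(dom, 2):
            clauses.append([-lit[(x, v)], -lit[(x, w)]])
    # conflict clauses: forbid each disallowed assignment pair
    for (x, v), (y, w) in nogoods:
        clauses.append([-lit[(x, v)], -lit[(y, w)]])
    return n, clauses
```

For two variables with two values each and one forbidden pair, this yields four Booleans and five clauses; the log and order encodings mentioned in the abstract trade clause count against propagation strength differently.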

3.
高冰冰  张长海  吕帅 《计算机科学》2010,37(11):252-256
This paper introduces the conditional planning problem and related solving systems, with an emphasis on logic-based encoding schemes. It analyzes in detail the translation methods based on quantified Boolean formulas (QBF) and presents three different QBF encodings. Finally, it compares the three encodings, weighs the relative merits of translation via propositional formulas versus quantified Boolean formulas, and discusses future research directions and trends for QBF-based planning methods.

4.
We present a unified approach to evaluating the relative expressive power of process calculi. In particular, we identify a small set of criteria (already presented in some form in the literature) that an encoding should satisfy to be considered a valid means for language comparison. We argue that the combination of such criteria is a valid proposal by noting that: (i) several well-known encodings that have appeared in the literature satisfy them; (ii) the notion is not trivial, because some known encodings do not satisfy all the criteria we propose; (iii) several well-known separation results can be formulated in terms of our criteria; and (iv) some widely believed (but never formally proved) separation results can be proved using the criteria we propose. Moreover, the criteria induce general proof techniques for separation results that can be easily instantiated to cover known case studies.

5.
The search behavior of an evolutionary algorithm depends on the interactions between the encoding that represents candidate solutions to the target problem and the operators that act on that encoding. In this paper, we analyze properties such as locality, heritability, population diversity, and search behavior of various decoder-based evolutionary algorithm (EA) frameworks using different encodings, decoders, and genetic operators for spanning-tree-based optimization problems. Although debate continues on how and why EAs work well, many researchers have observed that EAs perform well when the encoding and operators exhibit good locality, heritability, and diversity properties. We analyze these properties of various EA frameworks in two analytical ways, static analysis and dynamic analysis, on different spanning tree problems, and then visualize them. We also show through this analysis that EAs using the Edge Set encoding (ES) and the Edge Window Decoder encoding (EWD) exhibit very good locality and heritability, as well as very good diversity. These properties are put forward as a potential explanation for the recent finding that the two encodings can outperform other recent high-performance encodings on constrained spanning tree problems.

6.
We make a number of contributions to the study of the Quantified Constraint Satisfaction Problem (QCSP). The QCSP is an extension of the constraint satisfaction problem that can be used to model combinatorial problems containing contingency or uncertainty. It allows for universally quantified variables that can model uncertain actions and events, such as the unknown weather for a future party, or an opponent's next move in a game. In this paper we report significant contributions to two very different methods for solving QCSPs. The first approach is to implement special-purpose algorithms for QCSPs; the second is to encode QCSPs as Quantified Boolean Formulas and then use specialized QBF solvers. The discovery of particularly effective encodings influenced the design of more effective algorithms: by analyzing the properties of these encodings, we identify the features in QBF solvers responsible for their efficiency. This enables us to devise analogues of these features in QCSPs and implement them in special-purpose algorithms, yielding an effective special-purpose solver, QCSP-Solve. Experiments show that this solver and a highly optimized QBF encoding are several orders of magnitude more efficient than the initially developed algorithms. A final, but significant, contribution is the identification of flaws in simple methods of generating random QCSP instances, and a means of generating instances that are not known to be flawed.

7.
Representing and reasoning about time-dependent information is a key research issue in many areas of computer science and artificial intelligence. One of the best known and most widely used formalisms for representing interval-based qualitative temporal information is Allen's interval algebra (IA). The fundamental reasoning task in IA is to find a scenario that is consistent with the given information; this problem is in general NP-complete.
In this paper, we investigate how an interval-based representation, or IA network, can be encoded into a propositional formula over Boolean variables and/or predicates in decidable theories. Our task is to discover whether satisfying such a formula can be more efficient than finding a consistent scenario for the original problem. There are two basic approaches to modelling an IA network: one represents the relations between intervals as variables, and the other represents the end-points of each interval as variables. By combining these two approaches with three different Boolean satisfiability (SAT) encoding schemes, we produce six encoding schemes for converting IA to SAT. In addition, we show how IA networks can be formulated as satisfiability modulo theories (SMT) formulae based on the quantifier-free integer difference logic (QF-IDL). These encodings were empirically studied using randomly generated IA problems with sizes ranging from 20 to 100 nodes. A general conclusion we draw from the experimental results is that encoding IA into SAT produces better results than existing approaches. More specifically, we show that the new point-based 1-D support SAT encoding of IA produces consistently better results than the other alternatives considered. Compared with the six SAT encodings, the SMT encoding came fourth, after the point-based and interval-based 1-D support schemes and the point-based direct scheme.
Further, we observe that the phase transition region maps directly from the IA encoding to each SAT or SMT encoding, but, surprisingly, the location of the hard region varies according to the encoding scheme. Our results also show a fixed performance ranking order over the various encoding schemes.
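The point-based view underlying these encodings, where each interval is a pair of endpoints and each Allen basic relation becomes endpoint (in)equalities, can be sketched as a simple concrete-scenario check (a minimal illustration under our own naming, not the paper's encoding; only a subset of the thirteen basic relations is shown):

```python
# Allen basic relations expressed as endpoint (point-based) constraints.
# Each interval is a (start, end) pair with start < end.
ALLEN = {
    "before":   lambda a, b: a[1] < b[0],
    "meets":    lambda a, b: a[1] == b[0],
    "overlaps": lambda a, b: a[0] < b[0] < a[1] < b[1],
    "during":   lambda a, b: b[0] < a[0] and a[1] < b[1],
    "starts":   lambda a, b: a[0] == b[0] and a[1] < b[1],
    "finishes": lambda a, b: a[1] == b[1] and b[0] < a[0],
    "equals":   lambda a, b: a == b,
}

def consistent(assignment, network):
    """Check whether concrete endpoints satisfy an IA scenario.

    assignment: dict mapping interval name -> (start, end)
    network: list of (i, relation_name, j) triples
    """
    return all(ALLEN[r](assignment[i], assignment[j]) for i, r, j in network)
```

In the SAT and QF-IDL encodings studied in the paper, these same endpoint comparisons become clauses or integer difference constraints rather than a check against a concrete assignment.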

8.
Government service hotlines receive a massive volume of citizen requests, and classifying the resulting work orders manually is time-consuming and labor-intensive. Most existing work-order classification methods rely on machine learning or a single neural network model, which struggle to capture contextual semantics and do not extract text features comprehensively. To address this, this paper proposes a government-hotline work-order classification method that combines RoBERTa with additional feature extraction. The method first obtains semantic representation vectors of the work-order text through a semantic encoding layer based on the pre-trained RoBERTa language model; it then extracts local and global features of the text through a feature-extraction layer built from CNN-BiGRU-Self-Attention, processing the global features to highlight the most important semantic features; finally, the fused feature vectors are fed into a classifier to complete the classification. Experimental results show that the proposed method achieves better work-order classification performance than the baseline methods.

9.
杨超  吕帅  刘磊  魏唯  张波  吴俊 《计算机工程》2011,37(9):213-215
Taking actions in planning domains as its subject, this paper studies action-mutex encodings in planning methods. It introduces action-mutex encodings based on the planning graph; direct- and indirect-obstruction encodings of action effects generated by extracting domain-specific information; and long-distance mutex encodings between actions that rely on the domain transition graph. For each class of action-mutex encoding, it explains the construction method and its role in pruning the search space and improving solving efficiency.

10.
A permutation can be encoded in several different ways. This paper discusses relations among several such encodings and how one encoding can be computed from others. The paper gives a short proof of an existing efficient algorithm for encoding a permutation and presents two new efficient algorithms. One of the new algorithms is constructed as the inverse of an existing decoding algorithm, making it the first efficient permutation-encoding algorithm obtained in that way. Received June 1994 / Accepted in revised form December 1998
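As a concrete instance of such an encoding (our own illustrative choice, not necessarily one of the paper's algorithms), the classic inversion-table (Lehmer-code) encoding and its inverse can be written as:

```python
def encode(perm):
    """Inversion-table encoding of a permutation of 0..n-1:
    code[i] counts later entries smaller than perm[i].
    An O(n^2) reference implementation; the efficient algorithms
    discussed in the paper achieve better complexity."""
    return [sum(1 for x in perm[i + 1:] if x < perm[i])
            for i in range(len(perm))]

def decode(code):
    """Inverse of encode: perm[i] is the code[i]-th smallest
    value not yet used."""
    remaining = sorted(range(len(code)))
    return [remaining.pop(c) for c in code]
```

Decoding is literally the inverse construction of encoding, which mirrors the paper's observation that an efficient encoder can be obtained by inverting a decoder.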

11.
J. Li  X. Tang  J. Liu  J. Huang  Y. Wang 《Pattern recognition》2008,41(6):1975-1984
Various microarray experiments are now performed in many laboratories, resulting in the rapid accumulation of microarray data in public repositories. One of the major challenges in analyzing microarray data is how to extract and select efficient features for accurate cancer classification. Here we introduce a new feature extraction and selection method based on information gene pairs that show significant change across different tissue samples. Experimental results on five public microarray data sets demonstrate that the feature subset selected by the proposed method performs well and achieves higher classification accuracy with several classifiers. We perform an extensive experimental comparison of the features selected by the proposed method and those selected by other methods, using different evaluation methods and classifiers. The results confirm that the proposed method performs as well as other methods on the acute lymphoblastic-acute myeloid leukemia, adenocarcinoma, and breast cancer data sets while using fewer information genes, and leads to significant improvements in classification accuracy on the colon and diffuse large B-cell lymphoma cancer data sets.

12.
朱春媚  莫鸿强 《计算机应用》2017,37(7):1972-1976
When exploring the relationship between the periodicity of a fitness function and the radix of an integer encoding, evaluating encoding performance by the number of first-order building blocks does not always hold. To address this problem, the accumulated escape probability (AEP) is proposed as an evaluation metric for the encoding performance of genetic algorithms (GA), and encodings are studied for fitness functions formed as linear combinations of sine basis functions whose frequencies are integer powers of a positive integer m. The paper first gives the general form of this class of fitness functions and the meaning of m-ary integer encoding; it then introduces the definition of the AEP and, based on the characteristics of these functions, a method for computing it; finally, it analyzes and compares the AEP of this class of fitness functions under different integer encodings, showing that they evolve more easily under an m-ary integer encoding. Simulation results show that, with an m-ary integer encoding, both the final optimization results and the rise time of the population's mean fitness are clearly better than with other encodings, demonstrating that the AEP can effectively evaluate encoding performance and again confirming that, for this class of fitness functions, m-ary integer encodings outperform non-m-ary ones.

13.
Representations are formalized as encodings that map the search space to the vertex set of a graph. We define the notion of bit equivalent encodings and show that for such encodings the corresponding Walsh coefficients are also conserved. We focus on Gray codes as particular types of encoding and present a review of properties related to the use of Gray codes. Gray codes are widely used in conjunction with genetic algorithms and bit-climbing algorithms for parameter optimization problems. We present new convergence proofs for a special class of unimodal functions; the proofs show that a steepest ascent bit climber using any reflected Gray code representation reaches the global optimum in a number of steps that is linear with respect to the encoding size. There are in fact many different Gray codes. Shifting is defined as a mechanism for dynamically switching from one Gray code representation to another in order to escape local optima. Theoretical results that substantially improve our understanding of the Gray codes and the shifting mechanism are presented. New proofs also shed light on the number of unique Gray code neighborhoods accessible via shifting and on how neighborhood structure changes during shifting. We show that shifting can improve the performance of both a local search algorithm as well as one of the best genetic algorithms currently available.
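The standard reflected Gray code (a textbook construction, not specific to this paper) converts to and from binary as follows; its defining property is that adjacent integers always differ in exactly one bit:

```python
def to_gray(b):
    """Binary -> reflected Gray code."""
    return b ^ (b >> 1)

def from_gray(g):
    """Reflected Gray code -> binary: XOR-fold the bits back down."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b
```

The single-bit-change property is what gives Gray-coded bit climbers their smooth neighborhood structure; the shifting mechanism studied in the paper exploits the fact that many other Gray codes share this property.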

14.
This paper addresses the computational prediction of protein structural classes. Although progress has been made in this field in recent years, the main drawback of the published prediction methods is the limited scope of their comparison procedures, which in some cases were also improperly performed. Two examples include using protein datasets of varying homology, which has a significant impact on prediction accuracy, and comparing methods in pairs using different datasets. Based on extensive experimental work, the main aim of this paper is to revisit and reevaluate the state of the art in this field. To this end, the paper performs a first-of-its-kind comprehensive and multi-goal study, which includes an investigation of eight prediction algorithms, three protein sequence representations, three datasets with different homologies, and three test procedures. The quality of several previously unused prediction algorithms, a newly proposed sequence representation, and a new-to-the-field testing procedure is evaluated. Several important conclusions and findings are made. First, the logistic regression classifier, which was not previously used, is shown to perform better than other prediction algorithms, and the high quality of previously used support vector machines is confirmed. The results also show that the proposed new sequence representation improves the accuracy of the high-quality prediction algorithms, while it does not improve the results of the lower-quality classifiers. The study shows that the commonly used jackknife test is computationally expensive, and therefore the computationally less demanding 10-fold cross-validation procedure is proposed; the results show no statistically significant difference between the two procedures. The experiments show that sequence homology has a very significant impact on prediction accuracy, i.e., using highly homologous datasets results in higher accuracies.
Thus, the results of several past studies that use homologous datasets should not be perceived as reliable. The best achieved prediction accuracy for low-homology datasets is about 57%, confirming the results reported by Wang and Yuan [How good is the prediction of protein structural class by the component-coupled method? Proteins 2000;38:165-175]. For a highly homologous dataset, instance-based classification is shown to be better than the previously reported results: it achieved 97% prediction accuracy, demonstrating that homology is a major factor that can result in overestimated prediction accuracy.
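The 10-fold cross-validation procedure proposed above can be sketched generically (a minimal index-splitting sketch with hypothetical names, not the authors' code):

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Split range(n) into k disjoint (train, test) index folds.

    Returns a list of k (train_indices, test_indices) pairs; every
    sample appears in exactly one test fold, so each model is
    evaluated once on data it was not trained on."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # random assignment to folds
    folds = [idx[i::k] for i in range(k)]
    return [([j for f in folds[:i] + folds[i + 1:] for j in f], folds[i])
            for i in range(k)]
```

Compared with the jackknife (leave-one-out) test, this trains k models instead of n, which is the computational saving the paper points to.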

15.
Feature selection is an important data preprocessing step in the construction of an effective bankruptcy prediction model. Prediction performance can be affected by the feature selection and classification techniques employed. However, there have been very few studies of bankruptcy prediction that identify the best combination of feature selection and classification techniques. In this study, two types of feature selection methods, filter-based and wrapper-based, are considered, and two types of classification techniques, statistical and machine learning, are employed in the development of the prediction methods. In addition, bagging and boosting ensemble classifiers are also constructed for comparison. The experimental results, based on three related datasets containing different numbers of input features, show that the genetic algorithm as a wrapper-based feature selection method performs better than the filter-based method using information gain. It is also shown that the lowest prediction error rates for the three datasets are obtained by combining the genetic algorithm with the naïve Bayes and support vector machine classifiers, without bagging or boosting.

16.

Modeling law search and retrieval as prediction problems has recently emerged as a predominant approach in law intelligence. Focusing on the law article retrieval task, we present a deep learning framework named LamBERTa, which is designed for civil-law codes and specifically trained on the Italian civil code. To our knowledge, this is the first study proposing an advanced approach to law article prediction for the Italian legal system based on a BERT (Bidirectional Encoder Representations from Transformers) learning framework, which has recently attracted increased attention among deep learning approaches, showing outstanding effectiveness in several natural language processing and learning tasks. We define LamBERTa models by fine-tuning an Italian pre-trained BERT on the Italian civil code or its portions, treating law article retrieval as a classification task. One key aspect of our LamBERTa framework is that we conceived it to address an extreme classification scenario, characterized by a high number of classes, the few-shot learning problem, and the lack of test query benchmarks for Italian legal prediction tasks. To solve these issues, we define different methods for the unsupervised labeling of the law articles, which can in principle be applied to any law article code system. We provide insights into the explainability and interpretability of our LamBERTa models, and we present an extensive experimental analysis over query sets of different types, for single-label as well as multi-label evaluation tasks. Empirical evidence shows the effectiveness of LamBERTa, and its superiority over widely used deep-learning text classifiers and a few-shot learner conceived for an attribute-aware prediction task.


17.
In recent years, with the rapid growth of online lending, total loan volume has kept increasing and the probability of default has risen. In-depth research on loan risk is therefore of real practical value for online lending companies seeking to guard against Internet-finance risk. To address the imbalanced distribution, heavy noise, and high dimensionality of loan data, this paper proposes a loan-risk prediction method based on SMOTE and XGBoost. Feature engineering is used to reduce dimensionality and remove noise; to handle class imbalance, the SMOTE algorithm is applied for oversampling to balance the numbers of positive and negative samples; on this basis, an XGBoost classification model is built and compared with several traditional classification algorithms, and the effectiveness of the predictions is compared across different positive-to-negative sample ratios. Experiments show that, compared with traditional classification models, the XGBoost algorithm performs better in loan-risk prediction, and that increasing the proportion of minority-class samples via SMOTE improves the effectiveness of the predictions.
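The SMOTE step, which synthesizes minority-class samples by interpolating between a minority sample and one of its nearest minority neighbours, can be sketched as follows (a pure-Python illustration with our own names and a brute-force neighbour search; a real pipeline would use a library implementation such as imbalanced-learn's SMOTE):

```python
import random

def smote(minority, k=2, n_new=4, seed=0):
    """Minimal SMOTE sketch: each synthetic point lies on the segment
    between a randomly chosen minority sample and one of its k nearest
    minority neighbours."""
    rng = random.Random(seed)

    def dist2(a, b):
        # squared Euclidean distance between two feature tuples
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest minority neighbours of a (excluding a itself)
        neighbours = sorted((p for p in minority if p is not a),
                            key=lambda p: dist2(a, p))[:k]
        b = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic
```

Because every synthetic point is a convex combination of two minority samples, oversampling fills in the minority region rather than merely duplicating points, which is what lets the downstream XGBoost model see a balanced training set.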

18.
This paper presents a comprehensive review of hybrid and ensemble-based soft computing techniques applied to bankruptcy prediction. A variety of soft computing techniques are being applied to bankruptcy prediction. Our focus is on the techniques themselves, namely how different techniques are combined, rather than on the results obtained. Almost all authors demonstrate that the technique they propose outperforms some other methods chosen for comparison. However, because different authors use different data sets, and because confidence intervals for prediction accuracies are seldom provided, a fair comparison of results obtained by different authors is hardly possible. Simulations covering a large variety of techniques and data sets are needed for a fair comparison. We call a technique hybrid if several soft computing approaches are applied in the analysis and only one predictor is used to make the final prediction. In contrast, in an ensemble-based prediction, the outputs of several predictors are combined.

19.
In real-world classification problems, different types of misclassification errors often have asymmetric costs, demanding cost-sensitive learning methods that attempt to minimize the average misclassification cost rather than the plain error rate. Instance weighting and post hoc threshold adjusting are two major approaches to cost-sensitive classifier learning. This paper compares the effects of these two approaches on several standard, off-the-shelf classification methods. The comparison indicates that the two approaches lead to similar results for some classification methods, such as Naïve Bayes, logistic regression, and backpropagation neural networks, but very different results for other methods, such as decision tree, decision table, and decision rule learners. The findings from this research have important implications for the selection of the cost-sensitive classifier learning approach, as well as for the interpretation of a recently published finding about the relative performance of Naïve Bayes and decision trees.
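The post hoc threshold-adjusting approach can be illustrated for the binary case (a standard sketch assuming calibrated probabilities and zero cost for correct decisions; the function names are ours, not the paper's): instead of predicting positive when p(positive) > 0.5, the decision threshold is moved to reflect the cost ratio.

```python
def optimal_threshold(cost_fp, cost_fn):
    """Decision threshold on p(positive) for a calibrated binary
    classifier under asymmetric costs: predict positive when
    p(positive) exceeds this value (assuming correct decisions
    cost zero)."""
    return cost_fp / (cost_fp + cost_fn)

def expected_cost(probs_labels, threshold, cost_fp, cost_fn):
    """Average misclassification cost of thresholding at `threshold`.

    probs_labels: list of (predicted_probability_of_positive, true_label)
    """
    total = 0.0
    for p, y in probs_labels:
        pred = 1 if p > threshold else 0
        if pred == 1 and y == 0:
            total += cost_fp   # false positive
        elif pred == 0 and y == 1:
            total += cost_fn   # false negative
    return total / len(probs_labels)
```

When false negatives are four times as costly as false positives, the threshold drops to 0.2, so the classifier flags more borderline cases as positive; instance weighting achieves a similar shift by reweighting the training data instead.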

20.
An Algorithm for the Assignment Problem Based on Closed-Loop DNA   (cited 6 times: 0 self-citations, 6 by others)
周康  同小军  许进 《计算机科学》2007,34(12):211-213
This paper presents a closed-loop DNA computing model and its biochemical experiments, and uses the model to design a DNA algorithm for the assignment problem. First, the decision variables are given a two-dimensional DNA encoding that stores both the decision variables and the benefit values; then all feasible solutions of the assignment problem are obtained through targeted termination techniques and deletion experiments; finally, the optimal solution is obtained through electrophoresis and detection experiments. A worked example illustrates the feasibility of the algorithm. Finally, two improvements to the algorithm are discussed that reduce the number of DNA codewords and shorten their length.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号