Similar Literature
20 similar documents found.
1.
Based on the idea of selective redundancy, an automatic test data generation algorithm is proposed. The algorithm first uses branch-function linear approximation and minimization to find all feasible paths in the program and, at the same time, automatically generates a suitable initial test data set for some of those feasible paths. When branch-function linear approximation and minimization fails to produce correct test data, the algorithm supplements test data for the predicates and sub-paths not yet covered by the initial test data set, guided by the principle of keeping the test data set minimal and by selective redundancy. Because the new algorithm combines predicate slicing with DUC expressions, it can decide at the source level whether a sub-path is feasible, which effectively reduces the impact of infeasible paths on its performance. Algorithm analysis and experimental results show that the algorithm effectively reduces the number of test data and improves testing performance.
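As a rough illustration of the branch-function idea referred to above (turning an uncovered branch predicate into a numeric function that is non-positive exactly when the predicate holds, then minimizing it over the inputs), consider the following sketch in Python. The example predicate, the single input variable, and the simple hill-climbing search are assumptions for demonstration only, not the paper's algorithm.

# Minimal sketch: generate an input that takes the true branch of
#   if x*x - 6*x + 8 <= 0:   # true for 2 <= x <= 4 (assumed example predicate)
def branch_function(x):
    # Non-positive exactly when the target predicate is satisfied.
    return x * x - 6 * x + 8

def minimize(f, x0, step=1.0, max_iter=1000):
    # Hill climbing on a single numeric input until the branch function drops to <= 0.
    x = x0
    for _ in range(max_iter):
        if f(x) <= 0:
            return x                  # predicate satisfied: test datum found
        best = min((x - step, x + step), key=f)
        if f(best) < f(x):
            x = best                  # move downhill
        else:
            step /= 2                 # refine the step when no neighbour improves
            if step < 1e-6:
                break
    return None                       # may indicate an infeasible sub-path

print(minimize(branch_function, x0=100.0))   # prints some x in [2, 4], e.g. 4.0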

2.
Sentence Generation Based on Context-Dependent Rule Coverage   Cited by: 1 (self-citations: 0, other citations: 1)
Sentence generation based on rule coverage is the main method for generating sentences from context-free grammars, but it has limitations. The recently proposed context-dependent rule coverage assigns different branch sets according to the internal structure of the grammar and is therefore more precise than plain rule coverage. So far, no sentence generation algorithm based on context-dependent rule coverage has been reported. Building on the rule-coverage sentence generation algorithm, this paper implements a sentence generation algorithm based on context-dependent rule coverage. The algorithm has been implemented on a machine and validated experimentally.

3.
刘龙霞  吴军华 《计算机工程与设计》2011,32(8):2734-2736,2820
To address the large amount of redundant test data produced by the full classification tree method, a method combining a greedy algorithm with the classification tree is proposed. The classification tree method and a classification tree tool are used to generate a test data set automatically; a greedy algorithm then selects from the test data produced by the classification tree, trimming the test data set while still satisfying the given coverage criterion, which lowers testing cost while preserving the effectiveness of the test data. Results from an example application show that, compared with the full classification tree method, this approach greatly reduces the amount of redundant test data and improves testing efficiency.
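The greedy selection step described here amounts to a greedy set-cover pass over the test cases produced by the classification tree: repeatedly keep the case that covers the most still-uncovered items. The sketch below assumes a simple mapping from test case to the coverage items it satisfies; it illustrates the general idea, not the paper's tool.

def greedy_reduce(coverage):
    # coverage: dict mapping test-case id -> set of coverage items it satisfies.
    # Returns a reduced list of test cases that still covers every item.
    remaining = set().union(*coverage.values())
    selected = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gained = coverage[best] & remaining
        if not gained:
            break
        selected.append(best)
        remaining -= gained
    return selected

# Hypothetical classification-tree classes c1..c4 covered by tests t1..t4.
tests = {"t1": {"c1", "c2"}, "t2": {"c2"}, "t3": {"c3", "c4"}, "t4": {"c1", "c4"}}
print(greedy_reduce(tests))    # ['t1', 't3'] already covers all four classes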

4.
吴勋  周顺先  王雷 《计算机工程》2010,36(17):66-68
To achieve full coverage of test cases, an improved algorithm for automatically generating pairwise combinatorial test data is presented. A matrix-based method automatically generates an initial test data set, and the initial set is then supplemented with additional test data using a combination-matching idea. Experimental results show that the algorithm is simple and efficient, produces few test data, and consumes little time.
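The supplementation step can be pictured as follows: enumerate every two-factor value pair, find those not yet covered by the initial suite, and append cases until none remain. The sketch below uses an assumed set of factors and a deliberately naive one-pair-per-case supplement; it illustrates pairwise coverage, not the paper's matrix or combination-matching construction.

from itertools import combinations, product

def uncovered_pairs(factors, suite):
    # factors: list of value lists, one per parameter; suite: list of test tuples.
    needed = set()
    for (i, vi), (j, vj) in combinations(enumerate(factors), 2):
        needed |= {((i, j), (a, b)) for a, b in product(vi, vj)}
    for case in suite:
        for i, j in combinations(range(len(case)), 2):
            needed.discard(((i, j), (case[i], case[j])))
    return needed

def supplement(factors, suite):
    # Append test cases until every two-factor value pair is covered.
    missing = uncovered_pairs(factors, suite)
    while missing:
        (i, j), (a, b) = next(iter(missing))
        case = [values[0] for values in factors]   # fill the other factors arbitrarily
        case[i], case[j] = a, b
        suite.append(tuple(case))
        missing = uncovered_pairs(factors, suite)
    return suite

factors = [["ff", "cr"], ["win", "mac"], ["en", "zh"]]        # assumed example factors
print(supplement(factors, [("ff", "win", "en"), ("cr", "mac", "zh")]))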

5.
张昇  刘春宝 《计算机仿真》2021,38(9):348-352
Because redundancy in the concatenation of the behaviour-segment sequences under test reduces target-path coverage, a software test data augmentation algorithm based on state-space pruning is proposed. Concurrency-independent behaviour segments are classified according to their position within the software test; based on the classification, a state-space pruning algorithm shrinks the state space, after which a test-sequence generation algorithm uses state-node projection to operate on and judge all behaviour segments under test, performing full sequence concatenation over the state space to produce fully covering, redundancy-free test sequences. An adaptive particle swarm optimization algorithm is then applied: initial parameters and an initial population are set and a termination condition is checked; once the augmented test data cover the target paths, the covering test sequence data are fed in to complete the augmentation. Experimental results show that the algorithm augments test data efficiently and with low time cost, with an average running time as low as 0.51 s and a target-path coverage rate reaching 1.0, and that the coverage remains stable in later stages.

6.
Because the existing test data set in regression testing often cannot satisfy the testing requirements of a new software version, a search-based, layered regression test data generation method is proposed. The method consists of a module for obtaining the set of target methods to cover and a module for generating test data. First, the new version of the program is analysed abstractly to extract its method call graph; method coverage information is built from method call traces and existing test data to obtain the target method set, which is then prioritized by computing Bayesian conditional probabilities. An orthogonal population is designed with a Hadamard matrix and combined with the existing test data set to initialize the population, and a memetic algorithm generates test data for the methods in the target set. On four benchmark programs, compared with random generation, a genetic algorithm, and a particle-swarm-based test data generation method, this method improves test data generation efficiency by 95.2%, 78.2%, and 50.5% on average, and improves fault-detection ability by 47.9%, 33.6%, and 18.2% on average. The experimental results show that the method is better suited to regression test data generation.
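One way to read the "orthogonal population designed with a Hadamard matrix" is to seed part of the initial population from the rows of a Sylvester-type Hadamard matrix, mapping the +1 and -1 entries to points in the upper and lower parts of each input variable's range. The sketch below is that reading under stated assumptions (the mapping to the 0.25/0.75 quantile points is arbitrary); the paper's exact construction may differ.

import numpy as np

def hadamard(order):
    # Sylvester construction; order must be a power of two.
    h = np.array([[1]])
    while h.shape[0] < order:
        h = np.block([[h, h], [h, -h]])
    return h

def orthogonal_population(lower, upper, size=8):
    # Map each +1/-1 entry to the upper/lower quarter point of the variable's range.
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    rows = hadamard(size)[:, :len(lower)]
    return lower + (upper - lower) * np.where(rows > 0, 0.75, 0.25)

print(orthogonal_population([0, 0, 0], [10, 20, 30], size=8))   # 8 well-spread seed individuals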

7.
A Combinatorial Test Data Generation Algorithm Based on Improved DPSO   Cited by: 1 (self-citations: 0, other citations: 1)
孙家泽  王曙燕 《计算机工程》2012,38(7):40-41,45
The discrete particle swarm optimization algorithm is improved and a pairwise combinatorial test data generation algorithm is proposed. Each particle represents an entire test data set, the coverage of every factor combination by that set is evaluated as a whole, and particle positions are generated randomly based on how often each factor's discrete values appear in the test data. Case analysis shows that the algorithm is independent of initial values, generates test data effectively, and converges quickly.

8.
Combinatorial Test Data Generation Based on a Solution Space Tree   Cited by: 13 (self-citations: 1, other citations: 12)
Building on the combinatorial coverage testing model, this paper proposes representing all usable test data as a solution space tree, searching paths in the tree with backtracking to generate test data, and then using a greedy algorithm to generate supplementary test data so that the pairwise combination coverage criterion is satisfied. A test data generation tool based on this method has been implemented, and the test data sets it produces have certain distinctive features and advantages over those of similar tools.

9.
潘烁  王曙燕  孙家泽 《计算机应用》2012,32(4):1165-1167
When generating test data sets for combinatorial testing, particle swarm optimization (PSO) suffers from an increasing number of iterations and slower convergence once the amount of data to be tested grows beyond a certain point. To address this, a K-means-clustering-based particle swarm optimization algorithm for combinatorial test data set generation is proposed. The test data are clustered into regions to increase the diversity of the test data set, thereby improving the PSO algorithm and strengthening the influence among particles within each region. Experiments on typical cases show that, while maintaining coverage, the method has certain advantages and distinctive features.
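The clustering step can be sketched with a tiny K-means that splits candidate test data into regions; a separate swarm would then be evolved inside each region. The data, the number of clusters, and the plain squared-Euclidean assignment below are assumptions for illustration, not the paper's setup.

import random

def kmeans(points, k, iters=20):
    # Partition candidate test data (tuples of numbers) into k regions.
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for c, members in enumerate(clusters):
            if members:
                centers[c] = tuple(sum(col) / len(members) for col in zip(*members))
    return clusters

random.seed(0)
data = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(40)]
print([len(region) for region in kmeans(data, k=3)])   # region sizes; one sub-swarm per region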

10.
Automated test data generation tries to find a relatively small data set that satisfies a test adequacy criterion, so as to reduce the cost of software testing and improve its efficiency. When the data set for a test item exceeds its size limit, an elimination procedure removes the test data with smaller differences from the set and keeps the more distinct data, maintaining population diversity. For this problem, an evolutionary algorithm that maintains population diversity is proposed to compute test data sets: the algorithm uses heuristic information to iteratively select a condition/decision statement as a sub-goal and generates data with the evolutionary algorithm to cover that goal. Within this framework, a new evaluation-value computation measures the distance between data and the test item, the difference between test data is measured by a normalized Manhattan distance, and the elimination strategy discards the test data with smaller differences. In experiments on benchmark functions from 14 fundamental computer science algorithms, compared against test data generation methods in the existing literature, the algorithm was shown to effectively improve condition/decision coverage, reduce the number of generated test data, and improve testing performance.
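The elimination step based on the normalized Manhattan distance can be pictured as below: the distinctness of a datum is taken as the distance to its nearest neighbour in the pool, and the least distinct datum is dropped until the cap is met. The example vectors and value ranges are assumed for illustration; the paper's evaluation-value computation is not reproduced.

def normalized_manhattan(a, b, ranges):
    # Normalized Manhattan distance between two test data vectors;
    # ranges[i] is the span (max - min) assumed for input variable i.
    return sum(abs(x - y) / r for x, y, r in zip(a, b, ranges)) / len(a)

def prune_to_limit(pool, limit, ranges):
    # Keep at most `limit` test data, discarding the least distinct datum first.
    pool = list(pool)
    while len(pool) > limit:
        def distinctness(d):
            return min(normalized_manhattan(d, o, ranges) for o in pool if o is not d)
        pool.remove(min(pool, key=distinctness))
    return pool

candidates = [(1, 1), (1, 2), (9, 9), (5, 5), (1, 1.5)]
print(prune_to_limit(candidates, limit=3, ranges=(10, 10)))   # keeps (9, 9), (5, 5) and one clustered point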

11.
Because Chinese query sentences for GIS differ greatly from spatially extended SQL statements, direct conversion is very difficult, so an intermediate language is needed as a bridge. This paper studies how the intermediate language of a GIS Chinese query system is formed, proposes an intermediate language structured as a sentence stack, an entity stack, a query-target stack, a query-condition stack, and a sentence-pattern string, formulates grammar rules for spatial query sentences, and designs an algorithm for converting GIS Chinese query sentences into the intermediate language. Experiments show that the algorithm can convert most query sentences into the intermediate language.

12.
A system to generate and interpret customized visual languages in given application areas is presented. The generation is highly automated. The user presents a set of sample visual sentences to the generator. The generator uses inference grammar techniques to produce a grammar that generalizes the initial set of sample sentences, and exploits general semantic information about the application area to determine the meaning of the visual sentences in the inferred language. The interpreter is modeled on an attribute grammar. A knowledge base, constructed during the generation of the system, is then consulted to construct the meaning of the visual sentence. The architecture of the system and its use in the application environment of visual text editing (inspired by the Heidelberg icon set) enhanced with file management features are reported.

13.
This paper presents a robust parsing approach which is designed to address the issue of syntactic errors in text. The approach is based on the concept of an error grammar which is a grammar of ungrammatical sentences. An error grammar is derived from a conventional grammar on the basis of an analysis of a corpus of observed ill-formed sentences. A robust parsing algorithm is presented which is applied after a conventional bottom–up parsing algorithm has failed. This algorithm combines a rule from the error grammar with rules from the normal grammar to arrive at a parse for an ungrammatical sentence. This algorithm is applied to 50 test sentences, with encouraging results.

14.
This paper describes a method of knowledge representation as a set of text-expressed statements. The method is based on the identification of word-categories/phrases and their semantic relationships within the observed statement. Furthermore, the identification of semantic relationships between words/phrases using wh-questions that clarify the role of the word/phrase in the relationship is described. A conceptual model of a computer system based on this formalization method of text-expressed knowledge is proposed. The text formalization subsystem is described in detail, especially its parts: syntactic analysis of the sentence, sentence formalization, phrase structure grammar, and lexicon. The phrase structure grammar is formed by induction and is used to generate the language of the formalized notation of a sentence. The derivation of the grammar is based on the simple phrase structure grammar that was used for the syntactic analysis of the informal language notation. At its core, the suggested method translates sentences of the informal language into formal-language sentences generated by the derived phrase structure grammar. Current limitations of the method, which also set the path of its further development, are shown. The next concrete steps in the development of the method are also described.

15.
When extracting summaries from Chinese text, the traditional TextRank algorithm considers only the similarity between nodes and ignores other important information in the text. For single Chinese documents, building on existing research, TextRank is used so that it considers the similarity between sentences on the one hand and, on the other, is combined with the overall structural information of the text and the contextual information of sentences, such as the physical position of sentences or paragraphs in the document, feature sentences, core sentences, and other sentences whose weight may deserve a boost, to generate a group of candidate summary sentences. Redundancy processing is then applied to the candidate group to remove highly similar sentences and obtain the final summary. Experiments verify that the algorithm improves the accuracy of the generated summaries, demonstrating its effectiveness.
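A minimal version of the TextRank-plus-redundancy pipeline described above looks like the sketch below: score sentences with TextRank over a word-overlap similarity, then add them to the summary in score order while skipping sentences too similar to ones already chosen. The similarity measure, the redundancy threshold, and the English example sentences are assumptions; the structural and positional weighting from the paper is omitted.

import math

def textrank_summary(sentences, top_n=2, redundancy=0.6, damping=0.85, iters=50):
    def sim(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        if len(wa) < 2 or len(wb) < 2:
            return 0.0
        return len(wa & wb) / (math.log(len(wa)) + math.log(len(wb)))

    n = len(sentences)
    scores = [1.0] * n
    for _ in range(iters):                       # power iteration over the sentence graph
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if i == j:
                    continue
                out = sum(sim(sentences[j], sentences[k]) for k in range(n) if k != j)
                if out > 0:
                    rank += sim(sentences[j], sentences[i]) / out * scores[j]
            new.append(1 - damping + damping * rank)
        scores = new

    summary = []                                 # redundancy filter on the ranked list
    for i in sorted(range(n), key=lambda i: scores[i], reverse=True):
        if all(sim(sentences[i], s) < redundancy for s in summary):
            summary.append(sentences[i])
        if len(summary) == top_n:
            break
    return summary

doc = ["TextRank builds a graph over sentences and ranks them by similarity.",
       "Sentences are ranked by similarity in a graph built by TextRank.",
       "The top ranked sentences form the extractive summary.",
       "Sentence position and cue words can further adjust the weights."]
print(textrank_summary(doc))   # two non-redundant sentences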

16.
An Interactive Algorithm for Acquiring Context-Free Grammars   Cited by: 4 (self-citations: 0, other citations: 4)
董韫美 《计算机学报》1996,19(3):168-173
This paper proposes an interactive learning algorithm for context-free languages. The algorithm is designed specifically for the SAQ system, and the grammars it obtains naturally reflect the internal structure of sentences, making it easy to characterize their meaning (semantics).

17.
We propose a novel approach to embedding sentences into a high-dimensional space. Independent words in the sentence are located at points in the space, and the sentence is represented by a curve along these words. A set of functions that evaluates a sequence of words is designed over this space and is helpful for searching for words that are likely to follow the observed sentences. More generally, our approach makes sentences sequentially depending on the context. We simplify Japanese grammar and subsequently implement it as a grammar that constrains simple sentences to be generated. In this study, we performed experiments in which we created a dictionary containing 2877 different independent words and constructed a semantic space from texts in eight digital archived books, consisting of 8495 independent words and 161 paragraphs in total. It was demonstrated that several meaningful sentences can be generated that are likely to follow untrained input sentences.

18.
This paper presents an algorithm (a parser) for analyzing sentences according to grammatical constraints expressed in the framework of lexicalized tree-adjoining grammar. For the current grammars of English, the algorithm behaves much better and requires much less time than its worst-case complexity would suggest. The main objective of this work is to design a practical parser whose average-case complexity is much superior to its worst case. Most previous methods always incurred the worst-case complexity. The algorithm can be used in two modes. As a recognizer, it outputs whether the input sentence is grammatically correct or not. As a parser, it outputs a detailed analysis of the grammatically correct sentences. As sentences are read from left to right, information about possible continuations of the sentence is computed. In this sense, the algorithm is called a predictive left-to-right parser. This feature reduces the average time required to process a given sentence. In the worst case, the parser requires an amount of time proportional to G²n⁶ for a sentence of n words and a lexicalized tree-adjoining grammar of size G. The worst-case complexity is only reached with pathological (not naturally occurring) grammars and inputs.

19.
A complete analysis of an English sentence includes syntactic, semantic, and pragmatic components. Presupposition belongs to the pragmatic component. How to determine the presuppositions of multiple-clause sentences has been the focus of much work. Projection of clausal presuppositions is one method to determine the presuppositions of multiple-clause sentences. In this paper we present a new approach to the projection problem. Drawing heavily on the theoretical techniques originating with Montague semantics, our system maps sentences of a category-based grammar into a set of expressions of intensional logic: one expression corresponding to the literal interpretation of the sentence and the remaining expressions corresponding to the presuppositions of the sentence. The new approach correctly predicts the presuppositions of a larger range of multiple-clause sentences than previous projection approaches.

20.
Sentence similarity computation plays an important role in every area of natural language processing. Some traditional methods consider only surface information such as word form, sentence length, and word order, ignoring the deeper semantic information of sentences, while other methods that do consider sentence semantics perform poorly in practice. Building on the vector space model, a relation vector model that considers both sentence structure and semantic information is proposed. The model takes into account the collocation relations between the keywords that make up a sentence and the keywords' synonym information; this information reflects the local structural components of the sentence and the associations between them, and therefore better captures the sentence's structure and semantics. With the relation vector model at its core, a sentence similarity computation method based on the model is proposed. The algorithm is also applied to automatic summarization of trending online news, excluding sentences with similar meanings from the summary to avoid redundancy. Experimental results show that, when computing sentence similarity for online news, compared with an algorithm that considers word order and semantics, the relation vector model algorithm not only improves the accuracy of sentence similarity computation but also lowers its time complexity.
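The relation vector model itself is not reproduced here, but the flavour of representing a sentence by its keywords plus the relations between them can be sketched as follows: each sentence becomes a set of keywords and ordered keyword pairs, and similarity is the Jaccard overlap of those feature sets. The keyword list, the use of ordered pairs as a stand-in for collocation relations, and the omission of synonym handling are all simplifying assumptions.

from itertools import combinations

def relation_features(sentence, keywords):
    # A sentence is represented by its keywords plus ordered keyword pairs.
    words = [w for w in sentence.lower().split() if w in keywords]
    return set(words) | set(combinations(words, 2))

def relation_similarity(s1, s2, keywords):
    f1, f2 = relation_features(s1, keywords), relation_features(s2, keywords)
    if not f1 or not f2:
        return 0.0
    return len(f1 & f2) / len(f1 | f2)           # Jaccard overlap of the feature sets

kw = {"court", "ruling", "appeal", "company"}    # hypothetical keyword list
a = "the court issued a ruling against the company"
b = "the company said it would appeal the court ruling"
print(round(relation_similarity(a, b, kw), 3))   # shared keywords and the shared (court, ruling) relation both count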
