Similar Documents
20 similar documents found.
1.
The problem of identifying the conditions under which semantic or behavioural dependences arise between different program statements has interesting applications in various areas such as program understanding, software maintenance, software audits and software testing. We present an extension to the program dependence graph (PDG), called the dependence condition graph (DCG), that enables identifying the conditions for dependence between program points. We show that these conditions are not only correct with respect to the program's semantics, but also more precise than those identified by other known techniques. We also present evidence that the DCG is a practical representation and can be built for large programs, and sketch many different applications of the DCG.
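As a rough illustration of the idea behind such a representation, the following Python sketch (hypothetical names, not the authors' construction) stores, for each dependence edge of a PDG-like graph, the condition under which the dependence arises, so that the conditions between two program points can be queried:

```python
from collections import defaultdict

class DependenceConditionGraph:
    """Toy dependence graph: nodes are program points, edges carry the
    predicate (as a string) that must hold for the dependence to arise."""

    def __init__(self):
        self.edges = defaultdict(list)   # src -> [(dst, kind, condition)]

    def add_dependence(self, src, dst, kind, condition="true"):
        self.edges[src].append((dst, kind, condition))

    def conditions_for(self, src, dst):
        """Collect the conditions attached to all src -> dst dependences."""
        return [cond for d, _, cond in self.edges[src] if d == dst]

# Hypothetical fragment:
#   1: x = read()
#   2: if x > 0:
#   3:     y = x + 1
g = DependenceConditionGraph()
g.add_dependence(1, 3, "data", condition="x > 0")     # def-use of x, guarded by the branch
g.add_dependence(2, 3, "control", condition="x > 0")  # statement 3 runs only on the true branch
print(g.conditions_for(1, 3))   # ['x > 0']
```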

2.
An efficient algorithm to remove redundant dependences in simple loops with constant dependences is presented. Dependences constrain the parallel execution of programs and are typically enforced by synchronization instructions. The synchronization instructions represent a significant part of the overhead in the parallel execution of a program. Some program dependences are redundant because they are covered by other dependences. It is shown that, unlike in single loops, in nested loops a particular dependence may be redundant at some iterations but not at others, so the redundancy of a dependence need not be uniform over the entire iteration space. A sufficient condition for the uniformity of redundancy in a doubly nested loop is developed.
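For intuition, a dependence given by a constant distance vector is covered when it is enforced transitively by other vectors. The following Python sketch (a bounded brute-force check, purely illustrative and not the paper's algorithm) tests whether a vector equals a non-negative integer combination of the others:

```python
from itertools import product

def is_redundant(dep, others, max_mult=4):
    """Illustrative check: a constant dependence vector `dep` is treated as
    redundant if it equals a non-negative integer combination (not all zero)
    of the other dependence vectors, i.e. it is enforced transitively.
    Bounded brute-force search; real compilers use far cheaper tests."""
    n = len(dep)
    for coeffs in product(range(max_mult + 1), repeat=len(others)):
        if sum(coeffs) == 0:
            continue
        combo = [sum(c * v[i] for c, v in zip(coeffs, others)) for i in range(n)]
        if combo == list(dep):
            return True
    return False

# (1, 1) is covered by (1, 0) followed by (0, 1), so synchronizing for it is unnecessary.
print(is_redundant((1, 1), [(1, 0), (0, 1)]))   # True
print(is_redundant((1, 2), [(1, 0), (2, 1)]))   # False
```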

3.
Software defect prediction techniques locate code modules that are likely to contain defects, helping developers focus testing and repair. Traditional defect features are hand-crafted static metrics based on software size, complexity, and language characteristics. However, static metrics cannot directly capture defect information in program context, which limits defect prediction performance. To make full use of the syntactic and semantic information in program context, this paper proposes DP-MHA (Defect Prediction via Mixed Attention Mechanism), a defect prediction method based on a mixed attention mechanism. DP-MHA first extracts an AST-based syntactic-semantic token sequence from each program module and applies word embedding and positional encoding, then uses multi-head self-attention to learn contextual syntactic and semantic information, and finally applies a global attention mechanism to extract the key syntactic-semantic features, which are used to build the defect prediction model and identify potentially defective code modules. To validate DP-MHA, six open-source Apache Java datasets are used, and DP-MHA is compared with the classical RF-based static metric method, the RBM+RF and DBN+RF unsupervised learning methods, and CNN- and RNN-based deep learning methods; the experimental results show that DP-MHA improves the F1 score by 16.6%, 34.3%, 26.4%, 7.1%, and 4.9%, respectively.
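The two attention stages described above can be illustrated with a small NumPy sketch (random, untrained weights; hypothetical shapes; not the DP-MHA implementation): multi-head self-attention over an embedded AST-token sequence, followed by global attention pooling into a single module-level feature vector:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, heads, rng):
    """X: (seq_len, d_model) embeddings of AST tokens (word + positional encoding)."""
    L, d = X.shape
    dh = d // heads
    outs = []
    for _ in range(heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        A = softmax(Q @ K.T / np.sqrt(dh))      # (L, L) attention weights
        outs.append(A @ V)
    return np.concatenate(outs, axis=-1)         # (L, d)

def global_attention_pool(H, rng):
    """Collapse the sequence into one feature vector with a learned query."""
    q = rng.standard_normal(H.shape[1])
    w = softmax(H @ q)                           # (L,) importance of each position
    return w @ H                                 # (d,) module-level feature

rng = np.random.default_rng(0)
tokens = rng.standard_normal((32, 64))           # 32 AST tokens, d_model = 64
features = global_attention_pool(multi_head_self_attention(tokens, heads=4, rng=rng), rng)
print(features.shape)                            # (64,) -> fed to a defect classifier
```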

4.
This paper concerns hybrid systems subject to discrete-event supervisory control. It investigates discrete reachability, where the hybrid system should be moved from a discrete initial state z_init into a given discrete goal state z_goal by a sequence of discrete inputs. The reachability analysis is carried out in three steps. First, a discrete-event model of the hybrid system is set up. Stochastic automata are used to describe all state sequences which may be generated by the hybrid system. Such a model is called complete. Second, methods for the reachability analysis of stochastic automata are elaborated, which concern a weak and a strong condition for reachability. Third, these methods are applied to the hybrid system. It is shown that the reachability of the automaton implies the discrete reachability of the hybrid system, because the model is complete. Therefore, the weak and the strong condition for reachability of the stochastic automaton yield a necessary and a sufficient condition for discrete reachability of the hybrid system.
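The weak reachability condition can be illustrated by a breadth-first search over the support of the stochastic automaton's transition relation, i.e. asking whether some input sequence can possibly lead to z_goal. The Python sketch below uses a hypothetical transition encoding and only illustrates that check, not the paper's analysis:

```python
from collections import deque

def discretely_reachable(transitions, z_init, z_goal):
    """Weak reachability check on the discrete-event model: `transitions` maps a
    state to (input, successor, probability) triples of the stochastic automaton;
    only the support (probability > 0) matters here."""
    seen, frontier = {z_init}, deque([z_init])
    while frontier:
        z = frontier.popleft()
        if z == z_goal:
            return True
        for _inp, z_next, p in transitions.get(z, ()):
            if p > 0 and z_next not in seen:
                seen.add(z_next)
                frontier.append(z_next)
    return z_goal in seen

# Hypothetical three-state model: 'a'/'b' are discrete inputs.
T = {0: [('a', 1, 0.7), ('a', 0, 0.3)], 1: [('b', 2, 1.0)]}
print(discretely_reachable(T, 0, 2))   # True
```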

5.
There are several similar, but not identical, definitions of control dependence in the literature. These definitions are given in terms of control flow graphs which have had extra restrictions imposed (for example, end-reachability). We define two new generalisations of non-termination insensitive and non-termination sensitive control dependence called weak and strong control-closure. These are defined for all finite directed graphs, not just control flow graphs, and hence allow control dependence to be applied to a wider class of program structures than before. We investigate all previous forms of control dependence in the literature and prove that, for the restricted graphs for which each is defined, vertex sets are closed under each if and only if they are either weakly or strongly control-closed. Low polynomial-time algorithms for producing minimal weakly and strongly control-closed sets over generalised control flow graphs are given. This paper is the first to define an underlying semantics for control dependence: we define two relations between graphs called weak and strong projections, and prove that the graph induced by a set of vertices is a weak/strong projection of the original if and only if the set is weakly/strongly control-closed. Thus, all previous forms of control dependence also satisfy our semantics. Weak and strong projections, therefore, precisely capture the essence of control dependence in our generalisations and all the previous, more restricted forms. More fundamentally, these semantics can be thought of as correctness criteria for future definitions of control dependence.

6.
This paper investigates some methods for proving the equivalence of different language specifications that are given in terms of attribute grammars. Different specifications of the same language may be used for different purposes, such as language definition, program verification, or language implementation. The concept of syntactic coverings is extended to the semantic part of attribute grammars. Given two attribute grammars, the paper discusses several propositions that give sufficient conditions for one attribute grammar to be semantically covered by the other one. These tools are used for a comparison of two attribute grammars that specify syntax and semantics of mixed-type expressions. This example shows a trade-off between the complexity of syntactic and semantic specifications. Another example discussed is the equivalence of different attribute grammars for the translation of the while-statement, as used in compilers for top-down and bottom-up syntax analysis. This work was in part supported by the National Research Council of Canada.

7.
PAU is an all-paths chart-based unification parser that uses the same uniform representation for regular syntax, irregular syntax such as idioms, and semantics. PAU's representation has very little redundancy, simplifying the task of adding new semantics and syntax to PAU's knowledge base. PAU uses relations between the syntax and semantics to avoid the proliferation of rules found in semantic grammars. By encoding semantics at the same level of representation as syntax, PAU is able to use semantic constraints early in the parse to eliminate semantically anomalous syntactic interpretations. Examples are given to show how PAU can handle the many eccentricities of different idioms using the same mechanisms as are used to handle regular syntax and semantics. These include the ability of some idioms, but not other idioms of the same syntactic form, to undergo passivization, particle movement, action nominalization, indirect object movement, modification by adjectives, gerundive nominalization, prepositional phrase preposing, and topicalization. PAU's representation is bidirectional and is also used by a companion generator. PAU is designed to be efficient, runs in real time on typical workstations, and is being used in a number of natural language systems.

8.
The decomposition slice graph and concept lattice are two program representations used to abstract the details of code into a higher-level view of the program. The decomposition slice graph partitions the program into computations performed on different variables and shows the dependence relation between computations, holding when a computation needs another computation as a building block. The concept lattice groups program entities which share common attributes and organizes such groupings into a hierarchy of concepts, which are related through generalizations/specializations. This paper investigates the relationship existing between these two program representations. The main result of this paper is a novel program representation, called concept lattice of decomposition slices, which is shown to be an extension of the decomposition slice graph, and is obtained by means of concept analysis, with additional nodes associated with weak interferences between computations, i.e., shared statements which are not decomposition slices. The concept lattice of decomposition slices can be used to support software maintenance by providing relevant information about the computations performed by a program and the related dependences/interferences, as well as by representing a natural data structure on which to conduct impact analysis. Preliminary results on small to medium size code support the applicability of this method at the intraprocedural level or when investigating the dependences among small groups of procedures.
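As a toy illustration of the concept-analysis step, the following Python sketch builds formal concepts from an object-attribute incidence relation (here, hypothetical slices and the statements they contain); shared statements surface as separate concepts, much like the weak interferences mentioned above. This is a naive enumeration, not the paper's construction:

```python
from itertools import combinations

def concepts(objects, attrs, incidence):
    """Naive formal concept analysis: enumerate attribute subsets, close them,
    and keep the distinct (extent, intent) pairs.  Exponential -- fine only for
    tiny examples like this one."""
    def extent(B):                     # objects having every attribute in B
        return frozenset(o for o in objects if all((o, a) in incidence for a in B))
    def intent(A):                     # attributes shared by every object in A
        return frozenset(a for a in attrs if all((o, a) in incidence for o in A))
    found = set()
    for r in range(len(attrs) + 1):
        for B in combinations(attrs, r):
            A = extent(frozenset(B))
            found.add((A, intent(A)))
    return found

# Hypothetical: objects = decomposition slices, attributes = statements they contain.
objs, atts = {"slice_x", "slice_y"}, {"s1", "s2", "s3"}
inc = {("slice_x", "s1"), ("slice_x", "s2"), ("slice_y", "s2"), ("slice_y", "s3")}
for ext, itt in sorted(concepts(objs, atts, inc), key=lambda c: len(c[0])):
    print(sorted(ext), sorted(itt))   # the shared statement s2 shows up as its own concept
```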

9.
Frame Semantic Role Labeling (FSRL) is a semantic analysis task based on the FrameNet annotation scheme. Semantic role labeling usually depends heavily on syntax. Most current semantic role labeling models are based on bidirectional long short-term memory networks (Bi-LSTM), which can capture long-distance dependency information in a sentence but cannot capture its syntactic information well. Therefore, a self-attention mechanism is introduced to capture the syntactic information of each word in the sentence. Experimental results show that the model improves the F1 score on the CFN (Chinese FrameNet) dataset, demonstrating that incorporating self-attention improves the performance of Chinese frame semantic role labeling.

10.
11.
It is extremely difficult to parallelize DOACROSS loops with nonuniform loop-carried dependences. In this paper, we present a static scheduling scheme with an accompanying synchronization strategy that can execute such DOACROSS loops effectively and efficiently. Our approach uses a parallelization technique called Dependence Uniformization, which finds a small set of uniform dependence vectors to cover all possible nonuniform dependences in a DOACROSS loop. It differs from previous schemes in that we demonstrate a better way to select the uniform dependence vectors. When used with the Static Strip Scheduling scheme, the proposed uniform dependence vector set allows us to enforce dependences with more locality, which reduces the requirement for explicit synchronization considerably while retaining most of the parallelism. This paper describes the uniform dependence vector selection strategy and the static strip scheduling scheme. Performance analysis and examples are also presented.

12.
Real-time rule-based expert systems are embedded decision systems that must respond to changes in the environment within stringent timing constraints. Given a program p, the response time analysis problem is to determine the response time of p. This problem consists of determining whether or not the execution of p always terminates in bounded time, and computing the maximal execution time of p. The Equational Logic (EQL) language is a simple language designed for real-time applications. It has been proved by A.K. Mok (1989) that the response time analysis problem is undecidable if the program variables have infinite domains, and is PSPACE-hard in the case where all of the variables have finite domains. However, we have observed that the use of a simple syntactic and semantic check on programs, coupled with other techniques such as state space graph checks, can dramatically reduce the time needed in the analysis. There are sets of syntactic and semantic constraint assertions such that if the set S of rules satisfies any of them, then the execution of S always terminates in bounded time. Each of these sets of syntactic and semantic constraint assertions is called a Special Form. The focus of the paper is on proving the existence of two Special Forms and determining tight response time upper bounds of EQL rule-based programs. For each known Special Form, an algorithm used to calculate the maximal response time of programs satisfying this Special Form is presented. Additionally, to enhance the applicability of the proposed algorithms, we show how the General Analysis Algorithm can be used with these algorithms.

13.
The convex hull algorithm for simple polygons, due to Sklansky, fails in some cases, but its extreme simplicity, compared to the later algorithms, revived an interest in this algorithm. A sufficient condition for its success was given recently by Toussaint and Avis. They have proved that the algorithm works for polygons known as weakly externally visible polygons.

In this paper a new notion called external left visibility is introduced and it is shown that this is a necessary and sufficient condition for the success of Sklansky's algorithm. Moreover, algorithms testing simple polygons for external left visibility and weak external visibility are given.
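For reference, a common stack-based rendering of Sklansky's scan is sketched below in Python (presentations differ in detail; this one assumes the vertices are given in counter-clockwise order and rotated so that the first vertex lies on the hull). As the abstract notes, the scan is only guaranteed to succeed for polygons satisfying the visibility conditions discussed:

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); > 0 means a left (counter-clockwise) turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def sklansky_hull(poly):
    """poly: vertices of a simple polygon in counter-clockwise order, rotated so
    that poly[0] is a hull vertex (e.g. the lexicographically smallest point)."""
    stack = []
    for p in poly:
        while len(stack) >= 2 and cross(stack[-2], stack[-1], p) <= 0:
            stack.pop()              # top of stack makes a non-left turn: discard it
        stack.append(p)
    # close the scan against the starting vertex
    while len(stack) >= 3 and cross(stack[-2], stack[-1], stack[0]) <= 0:
        stack.pop()
    return stack

poly = [(0, 0), (4, 0), (3, 1), (4, 3), (2, 2), (0, 3)]   # simple, CCW, (0,0) on the hull
print(sklansky_hull(poly))   # [(0, 0), (4, 0), (4, 3), (0, 3)]
```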


14.
The aim of this work is to investigate the effects of temporal aggregation and systematic sampling on periodic autoregressive moving average (PARMA) time series. Firstly, it is shown that the class of weak PARMA processes, i.e. with uncorrelated but possibly dependent errors, is closed under a particular class of linear transformations that include both temporal aggregation and systematic sampling. This extends a similar result for autoregressive moving average processes; see [Wei, W.W.S., 2006. Time Series Analysis: Univariate and Multivariate Methods, second ed. Addison-Wesley, New York (Chapter 20)] for a review on the subject. Secondly, the properties of the noise of the transformed process are investigated. A sufficient condition is given under which aggregation and systematic sampling of a strong PARMA process, i.e. with independent errors, give rise in general to a weak PARMA process. Under that condition, the noise of the transformed process is neither strong nor a martingale difference. This result points out that the assumption of a strong PARMA process should not be used without careful consideration when analyzing aggregated time series that naturally occur in many scientific fields. The sufficient condition for non-independent errors is illustrated with the PARMA(1,1) model. A simulation study underlines the practical relevance of our findings and the importance of taking into account the dependence of the errors when fitting a PARMA model to an aggregated time series.
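The two linear transformations considered, temporal aggregation and systematic sampling, can be written as the following NumPy sketch (illustrative notation only, with the aggregation order m chosen arbitrarily):

```python
import numpy as np

def temporal_aggregate(x, m):
    """Non-overlapping sums of m consecutive observations (flow aggregation)."""
    n = (len(x) // m) * m
    return x[:n].reshape(-1, m).sum(axis=1)

def systematic_sample(x, m):
    """Keep every m-th observation (stock sampling)."""
    return x[m - 1::m]

x = np.arange(1, 13, dtype=float)       # 12 monthly observations
print(temporal_aggregate(x, 3))          # quarterly totals:      [ 6. 15. 24. 33.]
print(systematic_sample(x, 3))           # end-of-quarter values: [ 3.  6.  9. 12.]
```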

15.
Quantified Boolean formulae (QBF) allow compact encoding of many decision problems. Their importance motivated the development of fast QBF solvers. Certifying the results of a QBF solver not only ensures correctness, but also enables certain synthesis and verification tasks. To date, the certificate of a true formula can be either a syntactic cube-resolution proof or a semantic Skolem-function model, whereas that of a false formula is only a syntactic clause-resolution proof. The semantic certificate for a false QBF is missing, and the syntactic and semantic certificates are somewhat unrelated. This paper identifies the missing Herbrand-function countermodel for false QBF, and strengthens the connection between syntactic and semantic certificates by showing that, given a true QBF, its Skolem-function model is derivable from its cube-resolution proof of satisfiability as well as from its clause-resolution proof of unsatisfiability under formula negation. Consequently, Skolem-function derivation can be decoupled from special Skolemization-based solvers and computed from standard search-based ones. Experimental results show strong benefits of the new method.

16.
Dependency syntactic and semantic networks were constructed from syntactic and semantic treebanks of six registers, and their global features were compared: number of edges, number of nodes, average node degree, clustering coefficient, average shortest path length, network centralization, diameter, the power-law exponent of the degree distribution, and the coefficient of determination of the power-law fit. Using these global features as variables, the syntactic and semantic networks of the six registers were clustered with different clustering methods. The results show that, although both kinds of network are built on linguistic principles, dependency syntactic networks and dependency semantic networks differ markedly; their parameters do not carry the same meaning, and clustering experiments based on them give different results. Combining some of the main parameters of the semantic networks yields a relatively reasonable clustering but cannot separate written from spoken registers well, whereas combining some of the main parameters of the syntactic networks distinguishes texts of different registers well and yields a more reasonable text clustering.
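A few of the global network indices listed above can be computed with networkx as in the sketch below (a hypothetical toy dependency network; the power-law fit is a crude log-log regression, not necessarily the estimator used in the study):

```python
import networkx as nx
import numpy as np

def global_features(G):
    """Some of the global indices mentioned above, computed on an undirected
    view of a dependency network."""
    degs = np.array([d for _, d in G.degree()])
    feats = {
        "nodes": G.number_of_nodes(),
        "edges": G.number_of_edges(),
        "avg_degree": degs.mean(),
        "clustering": nx.average_clustering(G),
        "avg_shortest_path": nx.average_shortest_path_length(G),
        "diameter": nx.diameter(G),
    }
    # crude power-law fit of the degree distribution on a log-log scale
    vals, counts = np.unique(degs, return_counts=True)
    slope, _ = np.polyfit(np.log(vals), np.log(counts / counts.sum()), 1)
    feats["powerlaw_exponent"] = -slope
    return feats

# Hypothetical tiny dependency network: words are nodes, dependencies are edges.
G = nx.Graph([("saw", "I"), ("saw", "dog"), ("dog", "the"), ("saw", "today")])
print(global_features(G))
```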

17.
To handle the flexible and highly variable ways in which Chinese sentences express meaning, a sentence similarity measure based on semantics and sentiment is proposed, computing similarity at the level of expressed meaning. The method preprocesses sentences with the HIT LTP platform, extracting words, part-of-speech tags, dependency labels, and semantic role labels. The semantic role labeling results are treated as the semantically independent components of a sentence and assigned similarity weight coefficients; dependency relations and lexical relations are combined to compute the similarity between components of the two sentences that share the same label, giving partial similarities, which are then weighted and combined into an overall sentence similarity. In addition, taking sentiment and sentence-pattern factors into account, sentiment and sentence-pattern penalties are applied on top of the overall similarity for sentence pairs that meet the corresponding conditions. Experimental results show that the method effectively extracts the semantically independent components of a sentence, computes similarity at the semantic level, addresses information loss and mismatched sentence constituents, and improves the accuracy and robustness of sentence similarity computation.

18.
We describe a Chinese lexical semantic resource that consists of 11,765 predicates (mostly verbs and their nominalizations) analyzed with coarse-grained senses and semantic roles. We show that distinguishing senses at a coarse-grained level is a necessary part of specifying the semantic roles and describe our strategies for sense determination for purposes of predicate-argument structure specification. The semantic roles are postulated to account for syntactic variations, the different ways in which the semantic roles of a predicate are realized. The immediate purpose for this lexical semantic resource is to support the annotation of the Chinese PropBank, but we believe it can also serve as a stepping stone for higher-level semantic generalizations.

19.
Measuring image similarity is an important task for various multimedia applications. Similarity can be defined at two levels: at the syntactic (lower, context-free) level and at the semantic (higher, contextual) level. As long as one deals with the syntactic level, defining and measuring similarity is a relatively straightforward task, but as soon as one starts dealing with semantic similarity, the task becomes very difficult. We examine the use of simple, readily available syntactic image features combined with other multimodal features to derive a similarity measure that captures the weak semantics of an image. The weak semantics can be seen as an intermediate step between low-level image understanding and full semantic image understanding. We investigate the use of single modalities alone and see how the combination of modalities affects the similarity measures. We also test the measure on a multimedia retrieval task over TV series data, although the motivation is in understanding how different modalities relate to each other.

20.
We build an open-source toolkit which implements deterministic learning to support search and text classification tasks. We extend the mechanism of logical generalization towards syntactic parse trees and attempt to detect weak semantic signals from them. Generalization of syntactic parse trees, as a syntactic similarity measure, is defined as the set of maximum common sub-trees and is performed at the level of paragraphs, sentences, phrases and individual words. We analyze semantic features of such a similarity measure and compare it with the semantics of traditional anti-unification of terms. Nearest-neighbor machine learning is then applied to relate a sentence to a semantic class. Using a syntactic parse tree-based similarity measure instead of the bag-of-words and keyword frequency approach, we expect to detect a weak semantic signal otherwise unobservable. The proposed approach is evaluated in four distinct domains where a lack of semantic information makes classification of sentences rather difficult. We describe a toolkit which is part of the Apache Software Foundation project OpenNLP, designed to aid search engineers in tasks requiring text relevance assessment.
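As a rough sketch of generalization over parse trees, the following Python function (greedy, order-aligned, purely illustrative and much simpler than the toolkit's anti-unification over maximum common sub-trees) keeps only the structure shared by two trees:

```python
def generalize(t1, t2):
    """Greedy sketch: a tree is (label, [children]).  Keep a node when the labels
    match and recurse on children aligned by position; mismatching subtrees are
    dropped.  Real generalization searches for maximum common sub-trees instead."""
    label1, kids1 = t1
    label2, kids2 = t2
    if label1 != label2:
        return None
    kept = []
    for a, b in zip(kids1, kids2):
        g = generalize(a, b)
        if g is not None:
            kept.append(g)
    return (label1, kept)

# Hypothetical simplified parses of "the cat sleeps" and "the dog ran".
s1 = ("S", [("NP", [("DT", []), ("NN", [])]), ("VP", [("VBZ", [])])])
s2 = ("S", [("NP", [("DT", []), ("NN", [])]), ("VP", [("VBD", [])])])
print(generalize(s1, s2))   # ('S', [('NP', [('DT', []), ('NN', [])]), ('VP', [])])
```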
