1.
This paper presents an implementation method for a source-program metrics system. Built on an extensible program object model, it turns information extraction from source programs into information extraction from an abstract object model, which makes the system comparatively simple and straightforward from both the design and the implementation perspective.
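A minimal sketch of the idea, assuming Python's own `ast` module stands in for the abstract object model: metrics are computed by visiting model nodes rather than parsing raw text, so adding a metric means adding a visitor. The particular metrics (function and branch counts) are illustrative choices, not the paper's.

```python
import ast

# Illustrative only: metrics computed against an abstract object model
# (here Python's AST) instead of raw source text.
class MetricVisitor(ast.NodeVisitor):
    def __init__(self):
        self.functions = 0
        self.branches = 0

    def visit_FunctionDef(self, node):
        self.functions += 1
        self.generic_visit(node)

    def visit_If(self, node):
        self.branches += 1
        self.generic_visit(node)

source = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
visitor = MetricVisitor()
visitor.visit(ast.parse(source))
print(visitor.functions, visitor.branches)  # 1 1
```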
2.
Many studies use logistic regression models to investigate the ability of complexity metrics to predict fault-prone classes. However, it is not uncommon to see the inappropriate use of performance indicators such as odds ratio in previous studies. In particular, a recent study by Olague et al. uses the odds ratio associated with a one-unit increase in a metric to compare the relative magnitude of the associations between individual metrics and fault-proneness. In addition, the percentages of concordant, discordant, and tied pairs are used to evaluate the predictive effectiveness of a univariate logistic regression model. Their results suggest that lesser known complexity metrics such as standard deviation method complexity (SDMC) and average method complexity (AMC) are better predictors than the two commonly used metrics: lines of code (LOC) and weighted method McCabe complexity (WMC). In this paper, however, we show that (1) the odds ratio associated with a one-standard-deviation increase, rather than a one-unit increase, in a metric should be used to compare the relative magnitudes of the effects of individual metrics on fault-proneness; otherwise, misleading results may be obtained; and (2) the supposed connection between the percentages of concordant, discordant, and tied pairs and the predictive effectiveness of a univariate logistic regression model is false, as these percentages do not in fact depend on the model. Furthermore, we use the data collected from three versions of Eclipse to re-examine the ability of complexity metrics to predict fault-proneness. Our experimental results reveal that: (1) many metrics exhibit moderate or almost moderate ability in discriminating between fault-prone and not fault-prone classes; (2) LOC and WMC are indeed better fault-proneness predictors than SDMC and AMC; and (3) the explanatory power of complexity metrics beyond LOC is limited.
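A small sketch of the paper's point (1), on synthetic data rather than the Eclipse data: for a fitted logistic coefficient β and metric standard deviation σ, the comparable effect size is exp(β·σ), not exp(β). The metric scales below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
loc = rng.exponential(scale=200.0, size=500)   # e.g. lines of code
wmc = rng.exponential(scale=15.0, size=500)    # e.g. weighted method complexity
X = np.column_stack([loc, wmc])
y = (rng.random(500) < 1 / (1 + np.exp(-(0.004 * loc + 0.05 * wmc - 2)))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
beta = model.coef_[0]

# Odds ratio per one-unit increase: not comparable across metrics with
# different scales -- the pitfall the paper points out.
or_unit = np.exp(beta)
# Odds ratio per one-standard-deviation increase: puts the metrics on a
# comparable footing.
or_sd = np.exp(beta * X.std(axis=0))
print("OR per unit:", or_unit, "OR per SD:", or_sd)
```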
3.
With the rapid growth of the software industry, application systems keep increasing in scale, enterprises attach ever greater importance to software quality, and software companies are steadily investing more in testing. This paper discusses how the McCabe IQ testing tool improves the quality and efficiency of software testing.
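For context, a minimal sketch of the metric that McCabe-style tools are built around: cyclomatic complexity V(G) = E − N + 2P for a control-flow graph with E edges, N nodes, and P connected components. The example graph is hypothetical; this is not McCabe IQ's implementation.

```python
# P = 1 for a single function's control-flow graph.
def cyclomatic_complexity(edges, num_nodes, num_components=1):
    return len(edges) - num_nodes + 2 * num_components

# if/else: entry -> cond -> then/else -> exit
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]
print(cyclomatic_complexity(edges, num_nodes=5))  # 5 - 5 + 2 = 2
```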
4.
The paper presents results on the runtime complexity of two ant colony optimization (ACO) algorithms: ant system, the oldest ACO variant, and GBAS, the first ACO variant for which theoretical convergence results have been established. In both cases, a slight generalization of the well-known OneMax test function has been chosen as the class of test problems under consideration. The techniques used for the runtime analysis of the two algorithms differ: in the case of GBAS, the expected runtime until the optimal solution is reached is studied by a direct bound estimation approach inspired by comparable results for the (1+1) evolutionary algorithm (EA). A runtime bound of order O(m log m), where m is the problem instance size, is obtained. In the case of ant system, the original discrete stochastic process is approximated by a suitable continuous deterministic process. The validity of the approximation is shown by means of a rigorous convergence theorem exploiting a classical result from mathematical learning theory. Using this approximation, it is demonstrated that for the considered OneMax-type problems, a runtime of order O(m log(1/ε)) until reaching an expected relative solution quality of 1 − ε, and a runtime of O(m log m) until reaching the optimal solution with high probability, can be predicted. Our results are the first to show competitiveness in runtime complexity with the (1+1) EA on OneMax for a proper ACO algorithm.
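For readers unfamiliar with the benchmark, here is a sketch of OneMax and of the (1+1) EA baseline the paper compares against (not the paper's ACO algorithms themselves): flip each bit with probability 1/m and keep the offspring if it is no worse; the expected runtime on OneMax is the O(m log m) bound the ACO results match.

```python
import random

# OneMax: maximise the number of ones in a bit string of length m.
def onemax(x):
    return sum(x)

def one_plus_one_ea(m, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(m)]
    steps = 0
    while onemax(x) < m:
        # standard bit mutation: flip each bit independently w.p. 1/m
        y = [b ^ (rng.random() < 1.0 / m) for b in x]
        if onemax(y) >= onemax(x):
            x = y
        steps += 1
    return steps

print(one_plus_one_ea(64))
```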
5.
A materialised faceted taxonomy is an information source where the objects of interest are indexed according to a faceted taxonomy. This paper shows how, from a materialised faceted taxonomy, we can mine an expression of the Compound Term Composition Algebra that specifies exactly those compound terms (conjunctions of terms) that have non-empty interpretation. The mined expressions can be used for encoding in a very compact form (and subsequently reusing) the domain knowledge that is stored in existing materialised faceted taxonomies. A distinctive characteristic of this mining task is that the focus is on minimising the storage space requirements of the mined set of compound terms. This paper formulates the problem of expression mining, gives several algorithms for expression mining, analyses their computational complexity, provides techniques for optimisation, and discusses several novel applications that now become possible.
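A toy sketch of the underlying notion, with a hypothetical index (the object and term names are invented): a compound term has non-empty interpretation iff some object is indexed by all of its terms. The mined algebraic expression encodes exactly this set in compact form; the brute-force enumeration below is for illustration only.

```python
from itertools import combinations

index = {
    "hotel1": {"Crete", "SeaSki", "Winter"},
    "hotel2": {"Crete", "SnowSki"},
    "hotel3": {"Pilio", "SnowSki", "Winter"},
}

def valid_compound_terms(index, max_size=3):
    terms = sorted(set().union(*index.values()))
    valid = []
    for k in range(1, max_size + 1):
        for combo in combinations(terms, k):
            # non-empty interpretation: some object carries all terms
            if any(set(combo) <= obj_terms for obj_terms in index.values()):
                valid.append(combo)
    return valid

print(valid_compound_terms(index))
```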
Yannis Tzitzikas is currently Adjunct Professor in the Computer Science Department at the University of Crete (Greece) and Visiting Researcher in the Information Systems Lab at FORTH-ICS (Greece). Before joining the University of Crete and FORTH-ICS, he was a postdoctoral fellow at the University of Namur (Belgium) and an ERCIM postdoctoral fellow at ISTI-CNR (Pisa, Italy) and at the VTT Technical Research Centre of Finland. He conducted his undergraduate and graduate studies (M.Sc., Ph.D.) in the Computer Science Department at the University of Crete. His research interests fall in the intersection of the following areas: knowledge representation and reasoning, information indexing and retrieval, conceptual modeling, and collaborative distributed applications. His current research revolves around faceted metadata and semantics (theory and applications), the P2P paradigm (focusing on query evaluation algorithms and automatic schema integration techniques), and flexible interaction schemes for information bases. The results of his research have been published in more than 30 papers in refereed international journals and conferences.
Anastasia Analyti earned a B.S. degree in Mathematics from the University of Athens, Greece, and M.S. and Ph.D. degrees in Computer Science from Michigan State University, USA. She worked as a visiting professor at the Department of Computer Science, University of Crete, and at the Department of Electronic and Computer Engineering, Technical University of Crete. Since 1995, she has been a researcher at the Information Systems Laboratory of the Institute of Computer Science, Foundation for Research and Technology-Hellas (FORTH-ICS). Her current interests include the semantic Web, conceptual modelling, faceted metadata and semantics, rules for the semantic Web, biomedical ontologies, contextual organisation of information, contextual web-ontology languages, and information integration and retrieval systems for the Web. She has published over 30 papers in refereed journals and conferences.
6.
We propose a method to quantify the complexity of conditional probability measures by a Hilbert space seminorm of the logarithm of its density. The concept of reproducing kernel Hilbert spaces (RKHSs) is a flexible tool to define such a seminorm by choosing an appropriate kernel. We present several examples with artificial data sets where our kernel-based complexity measure is consistent with our intuitive understanding of complexity of densities. The intention behind the complexity measure is to provide a new approach to inferring causal directions. The idea is that the factorization of the joint probability measure P(effect, cause) into P(effect|cause)P(cause) leads typically to “simpler” and “smoother” terms than the factorization into P(cause|effect)P(effect). Since the conventional constraint-based approach of causal discovery is not able to determine the causal direction between only two variables, our inference principle can in particular be useful when combined with other existing methods. We provide several simple examples with real-world data where the true causal directions indeed lead to simpler (conditional) densities.
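A minimal sketch in the spirit of this construction, not the paper's implementation: fit a function (standing in for a log-density estimate) in a Gaussian-kernel RKHS and report its squared seminorm via the representer theorem, ||f||² = αᵀKα with (K + λI)α = y. The data, bandwidth, and regularisation are assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * bandwidth**2))

def rkhs_seminorm_sq(x, log_density_values, lam=1e-3):
    K = gaussian_kernel(x, x)
    # Representer theorem: f = sum_i alpha_i k(x_i, .) with
    # (K + lam*I) alpha = y, and ||f||_H^2 = alpha^T K alpha.
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), log_density_values)
    return float(alpha @ K @ alpha)

x = np.linspace(-3, 3, 50)
smooth = -0.5 * x**2                   # log of a Gaussian: smooth
wiggly = smooth + 0.5 * np.sin(8 * x)  # perturbed: rougher
# The rougher log-density gets the larger complexity score.
print(rkhs_seminorm_sq(x, smooth), rkhs_seminorm_sq(x, wiggly))
```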
7.
This paper analyses the structural characteristics of the four types of data files generated by FORTRAN programs and discusses the issues to be aware of when reading these files in a Visual C++ environment.
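A sketch of the best-known of these pitfalls: FORTRAN sequential unformatted files wrap each record in length markers that a C++ (or, as here, Python) reader must skip. A 4-byte little-endian marker is assumed; the marker width and endianness are compiler-dependent.

```python
import struct

def read_fortran_records(path):
    records = []
    with open(path, "rb") as f:
        while True:
            head = f.read(4)
            if len(head) < 4:
                break
            (n,) = struct.unpack("<i", head)   # leading record-length marker
            payload = f.read(n)
            f.read(4)                          # trailing marker repeats the length
            records.append(payload)
    return records

# e.g. to unpack a record rec of double-precision reals:
# values = struct.unpack(f"<{len(rec)//8}d", rec)
```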
9.
A number of empirical studies have pointed to a link between software complexity and software maintenance performance. The primary purpose of this paper is to document what is known about this relationship, and to suggest some possible future avenues of research. In particular, a survey of the empirical literature in this area shows two broad areas of study: complexity metrics and comprehension. Much of the complexity metrics research has focused on modularity and structure metrics. The articles surveyed are summarized as to major differences and similarities in a set of detailed tables. The text is used to highlight major findings and differences, and a concluding remarks section provides a series of recommendations for future research.
10.
The online computational burden of linear model predictive control (MPC) can be moved offline by using multi-parametric programming, so-called explicit MPC. The solution to the explicit MPC problem is a piecewise affine (PWA) state feedback function defined over a polyhedral subdivision of the set of feasible states. The online evaluation of such a control law needs to determine the polyhedral region in which the current state lies. This procedure is called point location; its computational complexity is challenging, and determines the minimum possible sampling time of the system. A new flexible algorithm is proposed which enables the designer to trade off between time and storage complexities. Utilizing the concept of hash tables and the associated hash functions, the proposed method solves an aggregated point location problem that overcomes prohibitive complexity growth with the number of polyhedral regions, while the storage–processing trade-off can be optimized via scaling parameters. The flexibility and power of this approach is supported by several numerical examples.
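A sketch of the point location step itself, with hypothetical region data: find the polyhedral region {x : Ax ≤ b} containing the current state and apply that region's affine law u = Fx + g. The linear scan shown is the baseline the paper's hash-table aggregation is designed to beat.

```python
import numpy as np

def point_location(x, regions):
    for A, b, F, g in regions:
        if np.all(A @ x <= b + 1e-9):   # is x inside this polyhedron?
            return F @ x + g            # region's affine control law
    raise ValueError("state outside the feasible set")

# One toy region: the box -1 <= x_i <= 1 with law u = -0.5 * x_1.
A = np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1]])
b = np.ones(4)
F = np.array([[-0.5, 0.0]])
g = np.zeros(1)
print(point_location(np.array([0.2, -0.3]), [(A, b, F, g)]))
```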
11.
In the paper we study new approaches to the problem of list coloring of graphs. In the problem we are given a simple graph G=(V, E) and, for every v∈V, a nonempty set of integers S(v); we ask if there is a coloring c of G such that c(v)∈S(v) for every v∈V. Modern approaches, connected with applications, change the question: we now ask whether S can be changed, using only some elementary transformations, to ensure that such a coloring exists and, if the answer is yes, what the minimal number of changes is. To study the adding, trading, and exchange models of list coloring, we use the following transformations:
• adding of colors (the adding model): select two vertices u, v and a color c∈S(u); add c to S(v), i.e. set S(v):=S(v)∪{c};
• trading of colors (the trading model): select two vertices u, v and a color c∈S(u); move c from S(u) to S(v), i.e. set S(u):=S(u)∖{c} and S(v):=S(v)∪{c};
• exchange of colors (the exchange model): select two vertices u, v and two colors c∈S(u), d∈S(v); exchange c with d, i.e. set S(u):=(S(u)∖{c})∪{d} and S(v):=(S(v)∖{d})∪{c}.
Our study focuses on the computational complexity of the above models and their edge versions. We consider these problems on complete graphs, graphs with bounded cyclicity, and partial k-trees, obtaining in all cases polynomial algorithms or proofs of NP-hardness.
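A brute-force sketch of the underlying feasibility question (exponential in |V|, for illustration only; the graph and lists are hypothetical): the three models above then ask how few list transformations make this check succeed.

```python
from itertools import product

def list_colorable(vertices, edges, S):
    # try every assignment c(v) in S(v); accept if it is a proper coloring
    for choice in product(*(S[v] for v in vertices)):
        c = dict(zip(vertices, choice))
        if all(c[u] != c[v] for u, v in edges):
            return True
    return False

V = ["a", "b", "c"]
E = [("a", "b"), ("b", "c"), ("a", "c")]  # a triangle
S = {"a": {1, 2}, "b": {1, 2}, "c": {1, 2}}
print(list_colorable(V, E, S))  # False: a triangle needs 3 colors
S["c"].add(3)                   # one step of the adding model
print(list_colorable(V, E, S))  # True
```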
13.
This paper analyses in detail how the Google search engine works, introduces the Web Services interface recently released by Google, briefly describes the related Web Services technologies, and concludes with a concrete application example.
14.
Explanation-based learning depends on having an explanation on which to base generalization. Thus, a system with an incomplete or intractable domain theory cannot use this method to learn from every precedent. However, in such cases the system need not resort to purely empirical generalization methods, because it may already know almost everything required to explain the precedent. Learning by failing to explain is a method that uses current knowledge to prune the well-understood portions of complex precedents (and rules) so that what remains may be conjectured as a new rule. This paper describes precedent analysis, partial explanation of a precedent (or rule) to isolate the new technique(s) it embodies, and rule reanalysis, which involves analyzing old rules in terms of new rules to obtain a more general set. The algorithms PA, PA-RR, and PA-RR-GW implement these ideas in the domains of digital circuit design and simplified gear design.
15.
This paper describes how to disassemble AutoCAD drawings inserted into Word and how to analyse, process, and edit the resulting objects, effectively solving the problem of AutoCAD drawings rendering poorly after insertion into Word.
16.
This paper describes the complete process of writing a data acquisition program for an S7-300 series PLC to collect data from a measurement system, smoothing the collected data, and fitting the final motion model of a train by the least-squares method, with emphasis on the identification process of the motion model.
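A sketch of the smoothing-and-fitting stage of such a chain, with made-up data in place of the PLC measurements: a moving average smooths the samples, then a constant-acceleration model s(t) = at² + bt + c is fitted by least squares. The model order and noise level are assumptions.

```python
import numpy as np

t = np.linspace(0, 10, 200)
s_noisy = (0.8 * t**2 + 2.0 * t + 5.0
           + np.random.default_rng(0).normal(0, 2.0, t.size))

window = 5
kernel = np.ones(window) / window
s_smooth = np.convolve(s_noisy, kernel, mode="same")  # moving-average smoothing

coeffs = np.polyfit(t, s_smooth, deg=2)  # least-squares estimates of a, b, c
print("a, b, c ≈", coeffs)
```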
17.
This article introduces Windump, a network monitoring tool that runs on Windows, describes its usage and parameters in detail, and uses examples to illustrate its practical application in network management.
18.
Based on the characteristics of each integral inequality, a suitable integration interval is chosen and an auxiliary function is constructed on that interval using an integral with a variable upper limit. The monotonicity of the auxiliary function is established through the relationship between monotonicity and differentiability, together with a judicious application of the integral mean value theorem or the Lagrange mean value theorem; the definition of monotonicity then yields and proves several integral inequalities.
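A standard worked example of this method (illustrative; not necessarily one of the article's own inequalities):

```latex
% Claim: for $f$ continuous and increasing on $[a,b]$,
%   \int_a^b x f(x)\,dx \;\ge\; \frac{a+b}{2}\int_a^b f(x)\,dx .
% Construct an auxiliary function with a variable upper limit:
\[
F(t)=\int_a^t x f(x)\,dx-\frac{a+t}{2}\int_a^t f(x)\,dx ,\qquad F(a)=0 .
\]
% Differentiate and apply the integral mean value theorem
% ($\int_a^t f = (t-a)f(\xi)$ for some $\xi\in(a,t)$):
\[
F'(t)=\frac{t-a}{2}\,f(t)-\frac{1}{2}\int_a^t f(x)\,dx
     =\frac{t-a}{2}\bigl(f(t)-f(\xi)\bigr)\ge 0 ,
\]
% since $f$ is increasing. Hence $F$ is increasing on $[a,b]$, so
% $F(b)\ge F(a)=0$, which is exactly the claimed inequality.
```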
19.
High-performance processors employ aggressive branch prediction and prefetching techniques to increase performance. Speculative memory references caused by these techniques sometimes bring data into the caches that are not needed by correct execution. This paper proposes the use of the first-level caches as filters that predict the usefulness of speculative memory references. With the proposed technique, speculative memory references bring data only into the first-level caches rather than all levels in the cache hierarchy. The processor monitors the use of the cache blocks in the first-level caches and decides which blocks to keep in the cache hierarchy based on the usefulness of cache blocks. It is shown that a simple implementation of this technique usually outperforms inclusive and exclusive baseline cache hierarchies commonly used by today's processors and results in IPC performance improvements of up to 10% on the SPEC CPU2000 integer benchmarks.
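A toy software model of the described filter, not the paper's simulator: speculative fills go only into L1, each block carries a used flag, and on eviction a speculative block propagates into the rest of the hierarchy only if it was actually referenced. Capacity, replacement policy (FIFO), and addresses are all simplifying assumptions.

```python
class FilteringL1:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = {}   # addr -> {"speculative": bool, "used": bool}
        self.l2 = set()    # stand-in for the rest of the hierarchy

    def fill(self, addr, speculative):
        if len(self.blocks) >= self.capacity:
            self._evict()
        self.blocks[addr] = {"speculative": speculative, "used": False}

    def access(self, addr):
        if addr in self.blocks:
            self.blocks[addr]["used"] = True

    def _evict(self):
        addr, meta = next(iter(self.blocks.items()))  # FIFO for simplicity
        del self.blocks[addr]
        if meta["used"] or not meta["speculative"]:
            self.l2.add(addr)  # useful block: kept in the hierarchy
        # an unused speculative block is dropped and never pollutes L2

cache = FilteringL1(capacity=2)
cache.fill(0x100, speculative=True)   # prefetch, never used
cache.fill(0x200, speculative=True)
cache.access(0x200)                   # this prefetch proved useful
cache.fill(0x300, speculative=False)  # evicts 0x100: unused, dropped
cache.fill(0x400, speculative=False)  # evicts 0x200: used, kept
print(cache.l2)                       # {512}, i.e. only 0x200 reached L2
```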
20.
This paper reports on a failed experiment to use Wiki technology to support student engagement with the subject matter of a third-year undergraduate module. Using qualitative data, the findings reveal that in an educational context, social technologies such as wikis are perceived differently than in ordinary personal use, and this discourages student adoption. A series of insights is then offered to help HE teachers understand the pitfalls of integrating social technologies in educational contexts.