Similar Articles
1.
An experimental evaluation of data dependence analysis techniques
Optimizing compilers rely upon program analysis techniques to detect data dependences between program statements. Data dependence information captures the essential ordering constraints among the statements of a program that must be preserved in order to produce valid optimized and parallel code. Data dependence testing is very important for automatic parallelization, vectorization, and any other code transformation. In this paper, we examine the impact of data dependence analysis in practice. A number of data dependence tests have been proposed in the literature, each making a different trade-off between accuracy and efficiency. We present an experimental evaluation of several data dependence tests, including the Banerjee test, the I-Test, and the Omega test. We compare these tests in terms of data dependence accuracy, compilation efficiency, effectiveness in parallelization, and program execution performance. We analyze the reasons why a data dependence test can be inexact and explain how the examined tests handle such cases. We run various experiments using the Perfect Club Benchmarks and the scientific library Lapack. We present the measured accuracy of each test and the reasons for any approximation. We compare the tests in terms of efficiency and analyze the trade-offs between accuracy and efficiency. We also determine the impact of each data dependence test on the total compilation time. Finally, we measure the number of loops parallelized by each test and compare the execution performance of each benchmark on a multiprocessor. Our results indicate that the Omega test is more accurate, but it is also very inefficient precisely in the cases where the other two tests are inexact. In general, the cost of the Omega test is high, consuming a significant percentage of the total compilation time. Furthermore, the extra accuracy of the Omega test over the Banerjee test and the I-Test does not improve parallelization or program execution performance.
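The abstract names the tests but not their internals. As a purely illustrative sketch (not one of the three tests evaluated here), the classic GCD criterion below shows the general shape of a data dependence test; the Banerjee test, I-Test, and Omega test refine this idea with real-valued bounds, integer intervals, and integer programming, respectively.

```python
from math import gcd

def gcd_test(coeffs, const):
    """Conservative dependence test: the equation
    sum(coeffs[k] * i_k) = const has an integer solution only if
    gcd(coeffs) divides const. False proves independence;
    True means a dependence cannot be ruled out."""
    g = 0
    for a in coeffs:
        g = gcd(g, abs(a))
    if g == 0:                     # all coefficients are zero
        return const == 0
    return const % g == 0

# a[2*i] written while a[2*i + 1] is read: 2*i1 - 2*i2 = 1 has no
# integer solution, so the two references never touch the same element.
print(gcd_test([2, -2], 1))   # False -> independent
print(gcd_test([2, -2], 4))   # True  -> dependence possible
```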

2.
Spatial databases are an important means of managing spatial data and a key component of GIS. This paper introduces the concept and development history of spatial databases, analyzes the factors that affect their performance, and examines application-level performance such as layer loading, feature insertion, and spatial computation. Three groups of experiments on spatial database operations were designed, and the performance of several commonly used spatial databases in managing spatial data was compared and analyzed.
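The three experiment groups are described only at a high level; the sketch below shows a minimal timing harness of the kind such comparisons rest on. The database handles and operation names in the usage comments are hypothetical placeholders, not the paper's actual workloads.

```python
import time

def timed(op):
    """Wall-clock one run of a zero-argument operation."""
    t0 = time.perf_counter()
    op()
    return time.perf_counter() - t0

def benchmark(name, op, repeats=5):
    """Report the best of several runs to damp cache and I/O noise."""
    best = min(timed(op) for _ in range(repeats))
    print(f"{name}: {best * 1000:.1f} ms")

# Hypothetical usage against two systems under test:
# benchmark("load layer", lambda: db_a.load_layer("roads"))
# benchmark("insert features", lambda: db_b.insert_features(rows))
# benchmark("spatial join", lambda: db_a.intersects("roads", "parcels"))
```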

3.
This paper presents a database engine that performs database operations through incremental computation. It describes how partial evaluation is used to implement incremental database queries, and gives the system framework and implementation of the incremental database engine. Performance tests show that this engine can effectively improve the efficiency of database queries.
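As a hedged illustration of incremental computation in general (not of the paper's partial-evaluation technique), the sketch below maintains a materialized count under insertions and deletions instead of re-running the query:

```python
class IncrementalCount:
    """Materialized `COUNT(*) WHERE pred(row)` maintained under updates."""

    def __init__(self, rows, pred):
        self.pred = pred
        self.count = sum(1 for r in rows if pred(r))  # full scan, once

    def insert(self, row):        # O(1) per change, no rescan
        if self.pred(row):
            self.count += 1

    def delete(self, row):
        if self.pred(row):
            self.count -= 1

q = IncrementalCount([{"qty": 3}, {"qty": 9}], lambda r: r["qty"] > 5)
q.insert({"qty": 7})
print(q.count)   # 2
```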

4.
In this paper we present a new algorithm, DBMIN, for managing the buffer pool of a relational database management system. DBMIN is based on a new model of relational query behavior, the query locality set model (QLSM). Like the hot set model, the QLSM has an advantage over stochastic models due to its ability to predict future reference behavior. However, the QLSM avoids the potential problems of the hot set model by separating the modeling of reference behavior from any particular buffer management algorithm. After introducing the QLSM and describing the DBMIN algorithm, we present a methodology for evaluating buffer management algorithms in a multiuser environment. This methodology employs a hybrid model that combines features of both trace-driven and distribution-driven simulation models. Using this model, the performance of the DBMIN algorithm in a multiuser environment is compared with that of the hot set algorithm and four more traditional buffer replacement algorithms.
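A minimal sketch of the core idea, with an invented `LocalitySet` class and LRU assumed: each open file instance owns a small private buffer and evicts only within it, so queries do not steal each other's pages. DBMIN itself derives the set size and the replacement policy from the QLSM reference pattern, which this sketch does not model.

```python
from collections import OrderedDict

class LocalitySet:
    """Buffer frames owned by one open file instance. Eviction happens
    only inside this set, never across queries. LRU is used here;
    DBMIN instead picks LRU, MRU, or FIFO to match the QLSM pattern."""

    def __init__(self, size):
        self.size = size
        self.pages = OrderedDict()          # page id -> frame contents

    def access(self, page):
        if page in self.pages:              # hit: refresh recency
            self.pages.move_to_end(page)
            return "hit"
        if len(self.pages) >= self.size:    # miss: evict within the set
            self.pages.popitem(last=False)
        self.pages[page] = None
        return "miss"

s = LocalitySet(size=2)
print([s.access(p) for p in (1, 2, 1, 3, 1)])
# ['miss', 'miss', 'hit', 'miss', 'hit']
```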

5.
In recent years, a number of research works have been carried out to improve the information retrieval process by exploiting external knowledge, e.g. by employing ontologies. Even though ontologies seem to be a promising technique for improving the retrieval process, hardly any study has evaluated the use of ontologies over a longer time period to model user interests. In this work we introduce an ontology-based video recommender system that exploits implicit relevance feedback to capture users' evolving information needs. The system exploits a generic ontology to organise users' interests. We evaluate the recommendations by performing a user-centred multiple time-series study in which participants were asked to include the system in their daily news-gathering routine. The results of this study suggest that the system can be successfully employed to improve personal information-seeking tasks in the news domain.
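The abstract does not say how implicit feedback updates the ontology-based profile; the sketch below shows one common approach (decayed evidence accumulated per ontology concept from viewing events). The decay constant, weights, and concept names are illustrative assumptions, not the system's actual parameters.

```python
def update_profile(profile, item_concepts, weight=1.0, decay=0.9):
    """Decay all interest scores, then reinforce the ontology concepts
    attached to the item the user just watched."""
    for concept in profile:
        profile[concept] *= decay
    for concept in item_concepts:
        profile[concept] = profile.get(concept, 0.0) + weight
    return profile

profile = {}
update_profile(profile, {"politics", "elections"})   # day 1 viewing
update_profile(profile, {"sports"})                  # day 2 viewing
print(max(profile, key=profile.get))                 # 'sports'
```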

6.
Real-time database systems must maintain consistency while minimizing the number of transactions that miss their deadlines. To satisfy both the consistency and real-time constraints, synchronization protocols need to be integrated with real-time priority scheduling protocols. One reason it is difficult to develop and evaluate database synchronization techniques is that building a system takes a long time, and evaluation is complicated because it involves a large number of system parameters that may change dynamically. This paper describes an environment for investigating distributed real-time database systems. The environment is based on a concurrent programming kernel that supports the creation, blocking, and termination of processes, as well as scheduling and interprocess communication. The contribution of the paper is the introduction of a new approach to system development that utilizes a module library of reusable components to satisfy three major goals: modularity, flexibility, and extensibility. In addition, experiments on real-time concurrency control techniques are presented to illustrate the effectiveness of the environment. This work was supported in part by ONR contract # N00014-88-K-0245, by DOE contract # DE-FG05-88-ER25063, by CIT contract # CIT-INF-90-011, and by IBM Federal Systems Division.

7.
To address the modeling of image databases, a new image data model is proposed. An image object is decomposed into a main table of stable attributes and several sub-tables of variable attributes; at query and inference time, the main table and sub-tables are organized temporarily to carry out the storage, retrieval, and management of image data. The model can consistently model each level of the five-level schema architecture of an image database.
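A hedged sketch of the decomposition, with invented table and attribute names: stable attributes sit in the main table, variable attributes in sub-tables, and a query assembles them temporarily, mirroring the model's query-time organisation.

```python
main_table = {    # stable attributes: exactly one row per image object
    101: {"name": "xray-101", "format": "tiff", "width": 2048},
}
regions = {       # variable attributes: zero or more rows per image
    101: [{"label": "lesion", "bbox": (40, 60, 90, 120)}],
}
annotations = {
    101: [{"author": "dr_li", "text": "follow up in 6 weeks"}],
}

def fetch(image_id):
    """Assemble the main row with its sub-table rows at query time."""
    row = dict(main_table[image_id])
    row["regions"] = regions.get(image_id, [])
    row["annotations"] = annotations.get(image_id, [])
    return row

print(fetch(101)["regions"][0]["label"])   # 'lesion'
```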

8.
The use of capture-recapture to estimate the residual faults in a software artifact has emerged as a promising method. However, the assumptions needed to make the estimates are not completely fulfilled in software development, leading to an underestimation of the residual fault content. Therefore, this paper proposes a method employing a filtering technique with an experience factor to improve the estimate of the residual faults. An experimental study of the capture-recapture method with this correction has been conducted. It is concluded that the correction method improves the capture-recapture estimate of the number of residual defects in the inspected document.
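The abstract leaves the estimator implicit. The classic two-inspector capture-recapture (Lincoln-Petersen) estimate, which corrections of the kind proposed here adjust, is sketched below; the paper's filtering technique and experience factor are not modelled.

```python
def lincoln_petersen(n1, n2, m):
    """Two-inspector capture-recapture: inspector 1 finds n1 faults,
    inspector 2 finds n2, and m faults are found by both.
    Estimated total N = n1 * n2 / m; residual = N - distinct found."""
    if m == 0:
        raise ValueError("no overlap: the estimate is unbounded")
    total = n1 * n2 / m
    found = n1 + n2 - m
    return total, total - found

total, residual = lincoln_petersen(n1=12, n2=10, m=6)
print(round(total), round(residual))   # 20 estimated, 4 residual
```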

9.
While recent research on rule learning has focused largely on finding highly accurate hypotheses, we evaluate the degree to which these hypotheses are also simple, that is, small. To this end, we compare well-known rule learners, such as CN2, RIPPER, PART, FOIL and C5.0 rules, with the benchmark system SL2, which explicitly aims at computing small rule sets with few literals. The results show that it is possible to obtain a similar level of accuracy to state-of-the-art rule learners using much smaller rule sets.

10.
The potential of edge-based complete image segmentation into regions has not received due attention in the literature thus far. The present paper explores this potential by proposing an adaptive grouping algorithm to solve the contour closure problem, which is the key to successful edge-based complete image segmentation. The effectiveness of the proposed algorithm is extensively tested in the range image domain and compared to several region-based segmentation methods within a rigorous comparison framework. On three range image databases of varying quality acquired by different range scanners, the proposed approach is shown to achieve very appealing performance with respect to both segmentation quality and computation time.

11.
Subspace and projected clustering: experimental evaluation and analysis
Subspace and projected clustering have emerged as a possible solution to the challenges associated with clustering high-dimensional data. Numerous subspace and projected clustering techniques have been proposed in the literature, and a comprehensive evaluation of their advantages and disadvantages is urgently needed. In this paper, we systematically evaluate state-of-the-art subspace and projected clustering techniques under a wide range of experimental settings. We discuss the observed performance of the compared techniques, and we make recommendations regarding which types of techniques are suitable for which kinds of problems.

12.
In spite of several decades of software metrics research and practice, there is little understanding of how software metrics relate to one another, nor is there any established methodology for comparing them. We propose a novel experimental technique, based on search-based refactoring, to 'animate' metrics and observe their behaviour in a practical setting. Our aim is to promote metrics to the level of active, opinionated objects that can be compared experimentally to uncover where they conflict, and to better understand the underlying causes of conflict. Our experimental approaches include semi-random refactoring, refactoring for increased metric agreement/disagreement, refactoring to increase/decrease the gap between a pair of metrics, and targeted hypothesis testing. We apply our approach to five popular cohesion metrics using ten real-world Java systems, involving 330,000 lines of code and the application of over 78,000 refactorings. Our results demonstrate that cohesion metrics disagree with each other in a remarkable 55% of cases, that Low-level Similarity-based Class Cohesion (LSCC) is the best representative of the set of metrics we investigate while Sensitive Class Cohesion (SCOM) is the least representative, and we discover several hitherto unknown differences between the examined metrics. We also use our approach to investigate the impact of including inheritance in a cohesion metric definition and find that doing so dramatically changes the metric.
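As a hedged illustration of the paper's core measurement (not of its refactoring machinery): given the change each refactoring induces in two metrics, disagreement can be counted as the fraction of refactorings on which the metrics move in conflicting directions. The delta values below are hypothetical.

```python
def disagreement(deltas_a, deltas_b):
    """Fraction of refactorings on which two metrics move in opposite
    directions (one improves while the other worsens)."""
    conflicts = sum(1 for da, db in zip(deltas_a, deltas_b) if da * db < 0)
    return conflicts / len(deltas_a)

# Hypothetical per-refactoring changes in two cohesion metrics:
print(disagreement([0.10, -0.20, 0.05, -0.10],
                   [-0.30, -0.10, 0.20, 0.40]))   # 0.5
```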

13.
A form model and an expert database system that analyzes instances of the form model to derive a conceptual schema are proposed. The form model describes the properties of form fields such as their origin, hierarchical structure, and cardinality. The expert database design system creates a conceptual schema by incrementally integrating related collections of forms. The rules of the expert system are divided into six phases: form selection, entity identification, attribute attachment, relationship identification, cardinality identification, and integrity constraints. The rules of the first phase use knowledge about the form flow to determine the order in which forms are analyzed. The rules in the other phases are used in conjunction with a designer dialog to identify the entities, relationships, and attributes of a schema that represents the collection of forms.

14.
HOTTest is a model-based test automation technique for software systems based on models of the system described in HaskellDB, an embedded domain-specific language derived from Haskell. HOTTest enforces a systematic abstraction process and exploits system invariants to automatically produce test cases for domain-specific requirements. The use of functional languages for system modeling is a new concept, and hence HOTTest is subject to concerns about usability, like any other new technique. Moreover, the syntax and declarative style of Haskell-based languages make them difficult to learn, and the same concerns apply to HOTTest, which shares Haskell's syntax. In this paper we describe an experiment designed to study the usability of HOTTest and to compare it with an existing model-based test design technique. The results show that HOTTest is more usable than the traditional technique and demonstrate that the test suites produced by HOTTest are more effective and efficient than those generated using the traditional model-based test design technique.

15.
Steve Carr, Philip Sweany. Software, 2003, 33(15): 1419–1445
This paper describes our experiments comparing multiple scalar replacement algorithms to evaluate their effectiveness on entire scientific application benchmarks within the context of a production-level compiler. We investigate at what point aggressive scalar replacement becomes detrimental and which dependence tests are necessary to give scalar replacement enough information to be effective. As many commercial optimizing compilers may include some version of scalar replacement as an optimization, it is important to determine how aggressive these algorithms need to be. Previously, no study had examined 'how much' scalar replacement is sufficient and effective within the context of an existing highly optimizing compiler. Our experiments show that, on whole programs, simple algorithms and simple dependence analysis capture nearly all opportunities for scalar replacement found in scientific application benchmarks. While additional aggressiveness may lead to some performance gain in individual loops, it also leads to performance degradation too often to be worth the risk when considering entire applications. Algorithms restricted to value reuse over at most one loop iteration and to fully redundant array references give the best results. Our experiments further show that scalar replacement is not only an effective optimization but also a feasible one for commercial optimizers, since the simple algorithms are not computationally expensive. Based upon our findings, we conclude that scalar replacement ought to be part of any highly optimizing compiler because of its low cost and significant potential gain. Copyright © 2003 John Wiley & Sons, Ltd.
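The transformation itself is easy to picture; the sketch below (Python is used for uniformity, though scalar replacement targets compiled scientific codes) shows the conservative variant the authors find best, value reuse over at most one loop iteration:

```python
# Before: each iteration issues two array loads, a[i] and a[i - 1].
def before(a, b, n):
    for i in range(1, n):
        b[i] = a[i] + a[i - 1]

# After scalar replacement with reuse across one loop iteration:
# this iteration's a[i] becomes the next iteration's a[i - 1],
# halving the array loads (in a compiled language, prev and cur
# would live in registers instead of memory).
def after(a, b, n):
    prev = a[0]
    for i in range(1, n):
        cur = a[i]
        b[i] = cur + prev
        prev = cur
```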

16.
Five abbreviation schemes (simple truncation, vowel drop, minimum to distinguish, phonics, and user defined) were analysed for learning, encoding and decoding. Forty subjects were each tested on two schemes, using two different 20-word lexicons. Simple truncation was the easiest to learn, based upon a trials-to-criterion experiment. Using a modified tachistoscopic display, simple truncation was the best for encodability. Either vowel drop or phonics was the best scheme for decoding. It appears that information content is important in decoding, but not in encoding.
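Two of the five schemes are mechanical enough to state precisely; hedged sketches of simple truncation and vowel drop follow (keeping the word's first letter in vowel drop is an assumed convention, not necessarily the paper's):

```python
def truncate(word, k=3):
    """Simple truncation: keep the first k characters."""
    return word[:k]

def vowel_drop(word, k=3):
    """Vowel drop: remove vowels after the first letter, then truncate.
    Keeping the first letter is an assumed convention."""
    kept = word[0] + "".join(c for c in word[1:] if c.lower() not in "aeiou")
    return kept[:k]

print(truncate("delete"), vowel_drop("delete"))   # del dlt
```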
