Similar Documents
 20 similar documents found (search time: 640 ms)
1.
TIM COOPER  MICHAEL WISE 《Software》1997,27(5):497-517
Traditional programming environments represent program source code as a set of source files. These files have various ‘dependencies’ on each other, such that a file needs recompilation if it depends on a file which has changed. A ‘build tool’ is used to process these dependencies and bring the application ‘up-to-date’. An example of a build tool is the UNIX ‘make’. This paper examines what happens when the dependencies used by the build tool are expressed between functions (or objects) rather than between files. Qualitative differences arise from the difference in granularity. The result is an effective incremental compilation programming environment, based on the C++ language. It is called ‘Barbados’, and is fully implemented. The environment resembles an interpreter in that changes to source code appear to be immediately reflected in all object code, except that errors are reported early as in compiled systems. Incremental compilation is not a well-used technology, possibly because the ‘fine-grain build’ problem is not well understood. Nevertheless, incremental compilation systems do exist. The advantages of the system described here are that it works with relatively standard compilation technology, it works for the C++ language including the preprocessor, it is an elegant solution and it is more efficient than competing algorithms. © 1997 by John Wiley & Sons, Ltd.
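To make the ‘fine-grain build’ idea concrete, here is a minimal Python sketch of a rebuild check keyed by function rather than by file; the dependency table, timestamps and names are hypothetical illustrations, not Barbados's actual C++ machinery.

```python
# Illustrative only: a rebuild check whose dependencies are expressed between
# functions rather than files. All names and timestamps are made up.

deps = {                      # each function -> functions it depends on
    "main": ["parse", "report"],
    "report": ["format"],
    "parse": [],
    "format": [],
}
edited   = {"main": 5, "parse": 9, "report": 3, "format": 2}   # last-edit time
compiled = {"main": 6, "parse": 6, "report": 6, "format": 6}   # last-compile time

def needs_rebuild(fn, seen=None):
    """Out of date if edited after its last compile, or if anything it
    depends on (transitively) is out of date."""
    seen = seen if seen is not None else set()
    if fn in seen:
        return False
    seen.add(fn)
    if edited[fn] > compiled[fn]:
        return True
    return any(needs_rebuild(d, seen) for d in deps[fn])

print([f for f in deps if needs_rebuild(f)])   # only 'parse' and its user 'main'
```

Because the unit of comparison is a function, editing the body of `parse` does not force `report` or `format` to be recompiled, which is the qualitative difference the paper attributes to the finer granularity.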

2.
Test data adequacy criteria are standards that can be applied to decide if enough testing has been performed. Previous research in software testing has suggested 11 fundamental properties which reasonable criteria should satisfy if the criteria make use of the structure of the program being tested. It is shown that there are several dependencies among the 11 properties, making them questionable as a set of fundamental properties, and that the statements of the properties can be generalized so that they can be appropriately analyzed with respect to criteria that do not necessarily make use of the program's structure. An analysis that shows the relationships among the properties with respect to different classes of criteria which utilize the program structure and the specification in different ways is discussed. It is shown how the properties differ under the two models in order to maintain consistency; the dependencies are largely a result of five very weak existential properties, and by modifying three of the properties these weaknesses can be eliminated. The result is a reduced set of seven properties, each of which is strong from a mathematical perspective.

3.
Application of VBA in the Preparation of Software Documentation (cited: 6, self: 1, other: 6)
Documentation is an important part of software. For assembly-language program development, good explanatory documentation is especially important. Software documentation roughly comprises explanatory documents, flowcharts and source code. Hypertext can establish cross-references between different documents and between different places within a document. This paper describes how to write macro commands in Word using the VBA language to automatically add hyperlinks to software documentation and so enhance the usefulness of the documentation.

4.
To address the problem that existing text classification methods ignore the semantic dependency information between words within a text and therefore require large amounts of training data, TextSGN, a graph-network text classification model based on semantic dependency parsing, is proposed. First, semantic dependency parsing is performed on the text, and the nodes (individual words) and edges (dependency relations) of the semantic dependency graph are represented by word embeddings and one-hot encodings. On this basis, to mine semantic dependency relations quickly, an SGN network block is proposed: it updates the nodes and edges in the graph by defining, at the structural level, how information is passed, so that semantic dependency information is mined quickly and the network converges faster. Classification models were trained and tested on several public datasets. The results show that TextSGN reaches 95.2% accuracy on short-text classification, an improvement of 3.6% over the second-best method.
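As a rough illustration of what updating the nodes and edges of a semantic dependency graph can look like, here is a toy Python/NumPy sketch of one message-passing step over a word-dependency graph; the shapes, weights and update rule are assumptions made for this sketch and are not the actual SGN block.

```python
import numpy as np

# Toy dependency graph for "cats chase mice": nodes are word vectors,
# edges carry the id of the dependency relation (nsubj, obj, ...).
np.random.seed(0)
dim, n_rel = 8, 3
nodes = {w: np.random.randn(dim) for w in ["cats", "chase", "mice"]}
edges = [("chase", "cats", 0), ("chase", "mice", 1)]   # (head, dependent, relation id)

W = np.random.randn(dim + n_rel, dim) * 0.1            # assumed message projection

def message_passing_step(nodes, edges):
    """Each word aggregates (neighbour vector, relation one-hot) messages
    along dependency edges and updates its own representation."""
    updated = {}
    for w, vec in nodes.items():
        msgs = []
        for head, dep, rel in edges:
            if w in (head, dep):
                other = dep if w == head else head
                msgs.append(np.concatenate([nodes[other], np.eye(n_rel)[rel]]))
        agg = np.mean(msgs, axis=0) if msgs else np.zeros(dim + n_rel)
        updated[w] = np.tanh(vec + agg @ W)
    return updated

nodes = message_passing_step(nodes, edges)
doc_vec = np.mean(list(nodes.values()), axis=0)   # pooled input for a downstream classifier
print(doc_vec.shape)                              # (8,)
```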

5.
A formal system for reasoning about functional dependencies (FDs) and subset dependencies (SDs) defined over relational expressions is described. An FD e:X → Y indicates that Y is functionally dependent on X in the relation denoted by expression e; an SD e ⊆ f indicates that the relation denoted by e is a subset of that denoted by f. The system is shown to be sound and complete by resorting to the analytic tableaux method. Applications of the system include the problem of determining if a constraint of a subschema is implied by the constraints of the base schema and the development of database design methodologies similar to normalization.
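The paper's rule system is not reproduced in this listing; purely to illustrate how FDs and SDs over expressions can interact, the following inference is sound for any relational expressions e and f, because every pair of tuples in the relation denoted by e also appears in the relation denoted by f:

```latex
% If e is contained in f, an FD holding on f also holds on e.
\frac{e \subseteq f \qquad f : X \to Y}{e : X \to Y}
```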

6.
This paper suggests a novel compression scheme for small text files. The proposed scheme depends on Boolean minimization of binary data accompanied by the adoption of the Burrows-Wheeler transformation (BWT) algorithm. Compression of small text files must fulfil special requirements since they have little context. The use of Boolean minimization and the Burrows-Wheeler transformation generates better context information for compression with standard algorithms. We tested the suggested scheme on collections of small and medium-sized files. The testing results showed that the proposed scheme improves the compression ratio over other existing methods.
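For orientation, a naive Python sketch of the Burrows-Wheeler transformation stage mentioned above (the Boolean-minimization step and the back-end coder are omitted); this is the textbook rotation-sorting construction, not the paper's implementation.

```python
def bwt(text, eos="\x00"):
    """Naive Burrows-Wheeler transform: sort all rotations of text+EOS
    and take the last column. Quadratic memory; fine for small texts."""
    s = text + eos
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)

def inverse_bwt(last, eos="\x00"):
    """Rebuild the original text by repeatedly prepending the last column
    and re-sorting (the classic, slow inversion)."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    original = next(row for row in table if row.endswith(eos))
    return original.rstrip(eos)

encoded = bwt("banana")
print(encoded)                 # groups identical characters together
print(inverse_bwt(encoded))    # -> 'banana'
```

After the transform, characters that share a context end up adjacent, which is what makes the limited context of a small file easier for a standard compressor to exploit.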

7.
Software developers rely on a build system to compile their source code changes and produce deliverables for testing and deployment. Since the full build of large software systems can take hours, the incremental build is a cornerstone of modern build systems. Incremental builds should only recompile deliverables whose dependencies have been changed by a developer. However, in many organizations, such dependencies are still identified by build rules that are specified and maintained (mostly) manually, typically using technologies like make. Incomplete rules lead to unspecified dependencies that can prevent certain deliverables from being rebuilt, yielding incomplete results, which leave sources and deliverables out-of-sync. In this paper, we present a case study on unspecified dependencies in the make-based build systems of the glib, openldap, linux and qt open source projects. To uncover unspecified dependencies in make-based build systems, we use an approach that combines a conceptual model of the dependencies specified in the build system with a concrete model of the files and processes that are actually exercised during the build. Our approach provides an overview of the dependencies that are used throughout the build system and reveals unspecified dependencies that are not yet expressed in the build system rules. During our analysis, we find that unspecified dependencies are common. We identify 6 common causes in more than 1.2 million unspecified dependencies.
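A small Python sketch of the comparison at the heart of the approach described above, assuming the declared (conceptual) and traced (concrete) dependencies are already available as edge sets; the file names are made up.

```python
# Declared dependencies, e.g. parsed from make rules (conceptual model).
declared = {
    ("app", "main.o"),
    ("main.o", "main.c"),
}

# Dependencies actually exercised during a traced build (concrete model),
# e.g. recovered from the files each compiler process read and wrote.
observed = {
    ("app", "main.o"),
    ("main.o", "main.c"),
    ("main.o", "config.h"),   # read by the compiler but never declared
}

# Unspecified dependencies: used by the build, missing from the build rules.
unspecified = observed - declared
for target, prereq in sorted(unspecified):
    print(f"{target} silently depends on {prereq}")
```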

8.
In the information age, advancing the digitization of Braille bears directly on raising the cultural literacy and living standards of blind people in China. This paper implements a Chinese-to-Braille conversion system based on the tone-marking rules of the national common Braille scheme. It can quickly generate large volumes of digital resources that conform to the national common Braille scheme and meet the need of visually impaired users for barrier-free access to information. The system processes Chinese text according to the common Braille rules and converts it into Braille output that satisfies the tone-marking and abbreviation rules. Test results show that the system handles the tone-marking and abbreviation rules accurately and produces correct digitized Braille conforming to the national common Braille scheme. The coverage of tone-mark omission, the coverage of final abbreviation and the increase in text length are all comparable to the theoretical values of the national common Braille scheme. The system processes long corpus files quickly and runs efficiently, so it has practical value and can be used to promote the national common Braille and the barrier-free digitization of Braille in China.

9.
We study inference systems for the combined class of functional and full hierarchical dependencies in relational databases. Two notions of implication are considered: the original notion in which a dependency is implied by a given set of dependencies and the underlying set of attributes, and the alternative notion in which a dependency is implied by a given set of dependencies alone. The first main result establishes a finite axiomatisation for the original notion of implication which clarifies the role of the complementation rule in the combined setting. In fact, we identify inference systems that are appropriate in the following sense: full hierarchical dependencies can be inferred without use of the complementation rule at all or with a single application of the complementation rule at the final step of the inference only; and functional dependencies can be inferred without any application of the complementation rule. The second main result establishes a finite axiomatisation for the alternative notion of implication. We further show how inferences of full hierarchical dependencies can be simulated by inferences of multivalued dependencies, and vice versa. This enables us to apply both of our main results to the combined class of functional and multivalued dependencies. Furthermore, we establish a novel axiomatisation for the class of non-trivial functional dependencies.
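For readers who want the classical baseline behind the discussion above, these are the standard Armstrong axioms for functional dependencies together with the usual complementation rule, stated here for multivalued dependencies over a schema R; the paper's axiomatisation for full hierarchical dependencies extends beyond these rules and is not reproduced here.

```latex
\begin{align*}
&\text{Reflexivity:}           && Y \subseteq X \;\Rightarrow\; X \to Y\\
&\text{Augmentation:}          && X \to Y \;\Rightarrow\; XZ \to YZ\\
&\text{Transitivity:}          && X \to Y,\ Y \to Z \;\Rightarrow\; X \to Z\\
&\text{Complementation (MVD):} && X \twoheadrightarrow Y \;\Rightarrow\; X \twoheadrightarrow R \setminus (XY)
\end{align*}
```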

10.
In this study, a comprehensive analysis of the lexical dependency and pruning concepts for the text classification problem is presented. Dependencies are included in the feature vector as an extension to the standard bag-of-words approach. The pruning process filters features with low frequencies so that fewer but more informative features remain in the solution vector. The pruning levels for words, dependencies, and dependency combinations for different datasets are analyzed in detail. The main motivation in this work is to make use of dependencies and pruning efficiently in text classification and to achieve more successful results using much smaller feature vector sizes. Three different datasets were used in the experiments and statistically significant improvements for most of the proposed approaches were obtained.
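A minimal Python sketch of the frequency-based pruning described above, assuming features are plain strings (words, dependencies, or their combinations) counted over a corpus; the thresholds and feature spellings are illustrative.

```python
from collections import Counter

# Toy feature counts mixing plain words and word-dependency-pair features.
feature_counts = Counter({
    "price": 40, "good": 35, "nsubj(good,price)": 12,
    "amod(deal,great)": 2, "obj(buy,thing)": 1, "thing": 3,
})

def prune(counts, min_freq):
    """Keep only features whose frequency reaches min_freq; rare features
    are dropped so the final vector is smaller but more informative."""
    return {f: c for f, c in counts.items() if c >= min_freq}

# Different pruning levels can be applied to words vs. dependency features.
words   = prune({f: c for f, c in feature_counts.items() if "(" not in f}, min_freq=3)
dep_fts = prune({f: c for f, c in feature_counts.items() if "(" in f},     min_freq=5)
vocabulary = sorted(words) + sorted(dep_fts)
print(vocabulary)   # the pruned feature set used to build document vectors
```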

11.
12.
CMake is an open-source, cross-platform automated build system. It can generate many kinds of build files, such as Makefiles under Unix/Linux or projects/workspaces for Windows Visual C++. It allows developers to use the native build system of each platform, which is an important feature distinguishing CMake from Automake and other similar systems and a major advantage of CMake. Compared with Automake, it offers better support for Windows. Although CMake is simple to use, it is equally capable of managing large projects.

13.
Generating test data and database states is an important part of database system testing, and the reverse query processing (RQP) algorithm provides one way to generate test data. However, RQP handles only Select statements. To overcome this limitation, it is extended into a reverse manipulation processing (RMP) algorithm that handles all data manipulation statements in SQL. The basic idea of RMP is to turn Delete, Insert, Update and other data manipulation statements into query operations: the conditions that a database instance must satisfy for these statements are expressed as Select statements, and the resulting Select statements are then used as input to the RQP algorithm, which produces a database instance satisfying the conditions. RMP supports reverse processing of the basic SQL statements and thus provides better support for the automatic generation of database test data.
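A toy Python sketch of the statement-to-query rewriting idea described above, restricted to a single table and a plain WHERE clause; the regular expressions and the generated SELECT are illustrative only and fall far short of a full RMP implementation.

```python
import re

def to_select(stmt):
    """Rewrite a DELETE or UPDATE on a single table into the SELECT whose
    result rows a generated database instance must contain, so that the
    original statement has something to operate on (toy version only)."""
    stmt = stmt.strip().rstrip(";")
    m = re.match(r"DELETE\s+FROM\s+(\w+)\s+WHERE\s+(.+)", stmt, re.I)
    if m:
        table, cond = m.groups()
        return f"SELECT * FROM {table} WHERE {cond}"
    m = re.match(r"UPDATE\s+(\w+)\s+SET\s+.+?\s+WHERE\s+(.+)", stmt, re.I)
    if m:
        table, cond = m.groups()
        return f"SELECT * FROM {table} WHERE {cond}"
    raise ValueError("unsupported statement in this sketch")

print(to_select("UPDATE orders SET status = 'shipped' WHERE amount > 100"))
# -> SELECT * FROM orders WHERE amount > 100  (input for an RQP-style generator)
```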

14.
Index Object Identification and Signature File Management (cited: 1, self: 0, other: 1)
This paper proposes a method, based on the principle of object identifiers, timestamp-ordering techniques, and the idea of Chinese-character association, for organizing signature files effectively and automatically building a Chinese-character support environment, and discusses the design and implementation of a text-object indexing system for a text database management system (FIMS).

15.
Based on an analysis of the security design of block ciphers, the weaknesses of the DES block cipher are addressed and a DES variant based on a non-S-box transformation is designed. Random numbers are used to generate the ordering of the S-boxes, and the key and the S-box order are shifted alternately, so that every plaintext block is encrypted with a different key or processed by a different S-box order; any two identical plaintext blocks therefore produce different ciphertexts, with the aim of realizing an unbreakable "one-time-pad" style cryptosystem.
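A small Python sketch of just the "randomly ordered S-boxes, shifted between blocks" idea in the abstract; the S-box contents, shift schedule and everything else here are placeholders for illustration and are neither the paper's cipher nor a secure construction.

```python
import random

rng = random.Random(2024)
# Placeholder 4-bit S-boxes (random permutations of 0..15), standing in for
# real cipher S-boxes; no key schedule or round structure is modelled here.
SBOXES = [rng.sample(range(16), 16) for _ in range(8)]

def substitute(nibbles, order):
    """Pass each 4-bit value through the S-box selected by 'order'."""
    return [SBOXES[order[i]][v] for i, v in enumerate(nibbles)]

def rotate(seq, k=1):
    k %= len(seq)
    return seq[k:] + seq[:k]

def process(blocks, seed):
    order = random.Random(seed).sample(range(8), 8)   # random S-box ordering
    out = []
    for block in blocks:
        out.append(substitute(block, order))
        order = rotate(order)   # shift the ordering before the next block
    return out

block = [1, 2, 3, 4, 5, 6, 7, 8]
# Identical plaintext blocks are processed with different S-box orderings.
print(process([block, block], seed=7))
```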

16.
Distributed-memory systems can incorporate thousands of processors at a reasonable cost. However, with an increasing number of processors in a system, fault detection and fault tolerance become critical issues. By replicating the computation on more than one processor and comparing the results produced by these processors, errors can be detected. During the execution of a program, due to data dependencies, typically not all of the processors in a multiprocessor system are busy at all times. Therefore processor schedules contain idle time slots, and it is the goal of this work to exploit these idle time slots to schedule duplicated computation for the purpose of fault detection. We propose a compiler-assisted approach to fault detection in regular loops on distributed-memory systems. This approach achieves fault detection by duplicating the execution of statement instances. After carefully analyzing the data dependencies of a regular loop, selected instances of loop statements are duplicated in a way that ensures the desired fault coverage. We first present duplication strategies for fault detection and show that these strategies use idle processor times for executing replicated statements, whenever possible. Next, we present loop transformations to implement these fault-detection strategies. Also, a general framework for selecting appropriate loop transformations is developed. Experimental results performed on the CRAY-T3D show that the overhead of adding the fault detection capability is usually less than 25%, and is less than 10% when communication overhead is reduced by grouping messages.
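A toy Python sketch of the idle-slot idea described above: copies of statement instances are placed into other processors' idle slots at the same time step so their results can later be compared. The schedule layout and placement policy are illustrative assumptions, not the paper's compiler transformation.

```python
# schedule[p][t]: statement instance run by processor p at time step t,
# or None for an idle slot left by data dependencies.
schedule = [
    ["s1", "s2", None, "s4"],   # processor 0
    [None, "s3", "s5", None],   # processor 1
]

def add_duplicates(schedule):
    """Place a duplicate of each statement into another processor's idle slot
    at the same time step, when one is free."""
    placed = []                                   # (stmt, primary, backup, t)
    n_proc, n_time = len(schedule), len(schedule[0])
    for t in range(n_time):
        for p in range(n_proc):
            stmt = schedule[p][t]
            if stmt is None or stmt.endswith("'"):
                continue                          # idle, or already a replica
            for q in range(n_proc):
                if q != p and schedule[q][t] is None:
                    schedule[q][t] = stmt + "'"   # replicated instance
                    placed.append((stmt, p, q, t))
                    break
    return placed

pairs = add_duplicates(schedule)
# At run time each (stmt, stmt') pair is compared; a mismatch signals a fault.
print(pairs)
print(schedule)
```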

17.
Geo-visualization Fortran library (cited: 1, self: 0, other: 1)
Geobrowser tools offer easy access to geographical and map images over which geospatial data can be overlaid, a process that provides a powerful new visualization resource for scientists. Many of these tools make use of the well-documented KML/XML data formats, and the challenge for the scientist is to generate KML files from their simulation and analysis programs. Since many of these programs are written in the Fortran language, which does not have native tools to support XML files, we have developed a new library - WKML - that enables KML files to be produced directly and automatically. This paper describes the WKML library, gives a number of different examples to illustrate the breadth of its functionality, and describes in more detail an example of its use for hydrology.
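WKML itself is a Fortran library; purely as an illustration of the kind of file it automates, here is a short Python sketch that writes a minimal KML document with one point placemark (element names follow the public KML 2.2 schema; the helper name and values are made up).

```python
def write_point_kml(path, name, lon, lat):
    """Write a minimal KML document containing a single point placemark."""
    kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>{name}</name>
      <Point>
        <coordinates>{lon},{lat},0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>
"""
    with open(path, "w", encoding="utf-8") as f:
        f.write(kml)

# A geobrowser can then overlay this point on its map imagery.
write_point_kml("gauge.kml", "Stream gauge", -2.35, 54.07)
```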

18.
The paper addresses the problem of generating sentences from logical formulae. It describes a simple and efficient algorithm for generating text which has been developed for use in machine translation, but will have wider application in natural language processing. An important property of the algorithm is that the logical form used to generate a sentence need not be one which could have been produced by parsing the sentence: formal equivalence between logical forms is allowed for. This is necessary for a machine translation system, such as the one envisaged in this paper, which uses single declarative grammars of individual languages, and declarative statements of translation equivalences for transfer. In such a system, it cannot be guaranteed that transfer will produce a logical form in the same order as would have been produced by parsing some target-language sentence, and it is not practicable to define a normal form for the logical forms. The algorithm is demonstrated using a categorial grammar and a simple indexed logic, as this allows a particularly clear and elegant formulation. It is shown that the algorithm can be adapted to phrase-structure grammars, and to more complex semantic representations than that used here.

19.
Research and Implementation of a Chinese Word Segmentation Algorithm for Text Mining (cited: 4, self: 0, other: 4)
Text mining refers to the use of data mining techniques to automatically discover and extract, from text data, knowledge that is implicit in a document collection and independent of the user's information needs. Obtaining Chinese text data relies on Chinese information processing technology, so automatic word segmentation is a fundamental topic in Chinese information processing. For applications that process massive amounts of information, segmentation speed is extremely important and strongly affects the efficiency of the whole system. Several common segmentation methods are analyzed, and a Chinese automatic word segmentation system based on the forward maximum matching method is designed. To improve segmentation accuracy, algorithms for strengthening ambiguity resolution and word optimization are also studied.
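A minimal Python sketch of the forward maximum matching method mentioned above, using a tiny illustrative dictionary; a real system layers the ambiguity-resolution and word-optimization steps on top of this.

```python
def forward_max_match(text, dictionary, max_len=4):
    """Scan left to right; at each position take the longest dictionary word
    that matches, falling back to a single character."""
    words, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in dictionary:
                words.append(candidate)
                i += length
                break
    return words

lexicon = {"文本", "挖掘", "文本挖掘", "中文", "分词", "中文分词", "算法"}
print(forward_max_match("文本挖掘中文分词算法", lexicon))
# -> ['文本挖掘', '中文分词', '算法']
```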

20.
许高建  胡学钢  王庆人 《微机发展》2007,17(12):122-124
