Similar Documents
20 similar documents were found.
1.
Program understanding is an essential part of all software maintenance and enhancement activities. As currently practiced, program understanding consists mainly of code reading. The few automated understanding tools that are actually used in industry provide helpful but relatively shallow information, such as the line numbers on which variable names occur or the calling structure possible among system components. These tools rely on analyses driven by the nature of the programming language used. As such, they are adequate to answer questions concerning implementation details, so‐called what questions. They are severely limited, however, when trying to relate a system to its purpose or requirements, the why questions. Application programs solve real‐world problems. The part of the world with which a particular application is concerned is that application's domain. A model of an application's domain can serve as a supplement to programming‐language‐based analysis methods and tools. A domain model carries knowledge of domain boundaries, terminology, and possible architectures. This knowledge can help an analyst set expectations for program content. Moreover, a domain model can provide information on how domain concepts are related. This article discusses the role of domain knowledge in program understanding. It presents a method by which domain models, together with the results of programming‐language‐based analyses, can be used to answer both what and why questions. Representing the results of domain‐based program understanding is also important, and a variety of representation techniques are discussed. Although domain‐based understanding can be performed manually, automated tool support can guide discovery, reduce effort, improve consistency, and provide a repository of knowledge useful for downstream activities such as documentation, reengineering, and reuse. A tools framework for domain‐based program understanding, a dowser, is presented in which a variety of tools work together to make use of domain information to facilitate understanding. Experience with domain‐based program understanding methods and tools is presented in the form of a collection of case studies. After the case studies are described, our work on domain‐based program understanding is compared with that of other researchers working in this area. The paper concludes with a discussion of the issues raised by domain‐based understanding and directions for future work. This revised version was published online in June 2006 with corrections to the Cover Date.

2.
Development environments based on ActiveX controls and JavaBeans are marketed as ‘visual programming’ platforms; in practice their visual dimension is limited to the design and implementation of an application's graphical user interface (GUI). The availability of sophisticated GUI development environments and visual component development frameworks is now providing viable platforms for implementing visual programming within general‐purpose platforms, i.e. for the specification of non‐GUI program functionality using visual representations. We describe how specially designed reflective components can be used in an industry‐standard visual programming environment to graphically specify sophisticated data transformation pipelines that interact with GUI elements. The components are based on Unix‐style filters repackaged as ActiveX controls. Their visual layout on the development environment canvas is used to specify the connection topology of the resultant pipeline. The process of converting filter‐style programs into visual controls is automated using a domain‐specific language. We demonstrate the approach through the design and the visual implementation of a GUI‐based spell‐checker. Copyright © 2001 John Wiley & Sons, Ltd.

3.
Pilsung Kang 《Software》2018,48(3):385-401
Function call interception (FCI), or method call interception (MCI) in the object‐oriented programming domain, is a technique of intercepting function calls at program runtime. Without directly modifying the original code, FCI makes it possible to undertake certain operations before and/or after the called function, or even to replace the intercepted call. Thanks to this capability, FCI has typically been used to profile programs, where functions of interest are dynamically intercepted by instrumentation code so that execution control is transferred to an external module that performs execution time measurement or logging operations. In addition, FCI allows for manipulating the runtime behavior of program components at the fine‐grained function level, which can be useful in changing an application's original behavior at runtime to meet certain execution requirements such as maintaining performance characteristics for different input problem sets. Because of this same capability, however, some FCI techniques can be used as the basis of security exploits against vulnerable systems. In this paper, we survey a variety of FCI techniques and tools along with their applications to diverse areas in the computing and software domains. We describe static and dynamic FCI techniques at large and discuss the strengths and weaknesses of different implementations in this category. In addition, we also discuss aspect‐oriented programming implementation techniques for intercepting method calls.
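For readers unfamiliar with the mechanism, the following is a minimal sketch of one common MCI technique on the Java platform: a dynamic proxy that runs timing code before and after each intercepted call. It is illustrative background only, not code from the surveyed work; the class and method names (InterceptDemo, timed) are hypothetical.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Hypothetical example: intercept every call made through the List interface and
// run timing code before and after the real method executes.
public class InterceptDemo {
    @SuppressWarnings("unchecked")
    static <T> List<T> timed(List<T> target) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();               // operation before the intercepted call
            Object result = method.invoke(target, args);  // forward to the original method
            long elapsed = System.nanoTime() - start;     // operation after the intercepted call
            System.out.printf("%s took %d ns%n", method.getName(), elapsed);
            return result;
        };
        return (List<T>) Proxy.newProxyInstance(
                InterceptDemo.class.getClassLoader(), new Class<?>[] { List.class }, handler);
    }

    public static void main(String[] args) {
        List<String> words = timed(new ArrayList<>());
        words.add("hello");                               // intercepted: add() is timed and logged
        System.out.println(words.size());                 // intercepted as well
    }
}
```

The same before/after pattern underlies most profiling and logging interceptors; replacing the forwarding call with a different implementation would correspond to the call-replacement capability mentioned above.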

4.
Software practitioners need ways to assess their software, and metrics can provide an automated way to do that, providing valuable feedback with little effort, earlier than the testing phase. Semantic metrics were proposed to quantify aspects of software quality based on the meaning of software's task in the domain. Unlike traditional software metrics, semantic metrics do not rely on code syntax. Instead, semantic metrics are calculated from domain information, using the knowledge base of a program understanding system. Because semantic metrics do not rely on code syntax, they can be calculated before code is fully implemented. This article evaluates the semantic metrics theoretically and empirically. We find that the semantic metrics compare well to existing metrics and show promise as early indicators of software quality.

5.
《Knowledge》2000,13(2-3):71-79
Knowledge is an interesting concept that has attracted the attention of philosophers for thousands of years. In more recent times, researchers have investigated knowledge in a more applied way with the chief aim of bringing knowledge to life in machines. Artificial Intelligence has provided some degree of rigour to the study of knowledge and Expert Systems are able to use knowledge to solve problems and answer questions. Current business, social, political and technological pressures have forced organisations to take greater control of the knowledge asset. Software suppliers and others offering valuable solutions in this area have unfortunately clouded the issue of knowledge. Information and data control are seen as implicit knowledge management tools and many have abandoned the search for explicit knowledge management methods. Knowledge representation schemes help to identify knowledge. They allow for human understanding and machine application and they can support the automated use of knowledge in problem solving. Some of these representation methods also employ spatial techniques that add an extra dimension to human understanding. Knowledge mapping defined in this work uses learning dependency to organise the map and draws on the ideas of what knowledge is and on spatial representation structures. Knowledge maps can support metrics that provide information about the knowledge asset. Knowledge maps create a visible knowledge framework that supports the explicit management of knowledge by organisation managers and directors. Knowledge maps also offer other advantages to the organisation, the individual and to educational institutions.

6.
This article reviews the extensive literature emerging from studies concerned with skill acquisition and the development of knowledge representation in programming. In particular, it focuses upon theories of program comprehension that suggest programming knowledge can be described in terms of stereotypical knowledge structures that can in some way capture programming expertise independently of the programming language used and in isolation from a programmer's specific training experience. An attempt is made to demonstrate why existing views are inappropriate. On the one hand, programs are represented in terms of a variety of formal notations ranging from the quasi‐mathematical to the near textual. It is argued that different languages may lead to different forms of knowledge representation, perhaps emphasizing certain structures at the expense of others or facilitating particular strategies. On the other hand, programmers are typically taught problem‐solving techniques that suggest a strict approach to problem decomposition. Hence, it seems likely that another factor that may mediate the development of knowledge representation, and that has not received significant attention elsewhere, is related to the training experience that programmers typically encounter. In this article, recent empirical studies that have addressed these issues are reviewed, and the implications of these studies for theories of skill acquisition and for knowledge representation are discussed. In conclusion, a more extensive account of knowledge representation in programming is presented that emphasizes training effects and the role played by specific language features in the development of knowledge representation within the programming domain.

7.
In contemporary aspect-oriented languages, pointcuts are usually specified directly in terms of the structure of the source code. The definition of such low-level pointcuts requires aspect developers to have a profound understanding of the entire application's implementation and often leads to complex, fragile and hard-to-maintain pointcut definitions. To resolve these issues, we present an aspect-oriented programming system that features a logic-based pointcut language that is open such that it can be extended with application-specific pointcut predicates. These predicates define an application-specific model that serves as a contract that base program developers provide and aspect developers can depend upon. As a result, pointcuts can be specified in terms of this more high-level model of the application, which confines all intricate implementation details that are otherwise exposed in the pointcut definitions themselves.
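To make the contrast concrete, here is a minimal sketch of a conventional structure-based pointcut in AspectJ's annotation style (plain Java, assuming the AspectJ library is on the classpath); it is not the logic-based pointcut language the paper proposes. The package, class, and naming convention (shop.Order, set*) are hypothetical, and the example shows exactly the kind of coupling to source-code structure that application-specific pointcut predicates are meant to hide.

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

// Illustrative only: a conventional, structure-based pointcut. The selection is purely
// syntactic (package, class, and method-name pattern), so renaming a setter or changing
// the naming convention silently breaks the aspect -- the fragility discussed above.
@Aspect
public class AuditAspect {
    // Matches every call to a setter on the hypothetical shop.Order class.
    @Pointcut("call(void shop.Order.set*(..))")
    public void orderStateChange() {}

    @Before("orderStateChange()")
    public void audit(JoinPoint jp) {
        System.out.println("state change: " + jp.getSignature());  // advice runs before each match
    }
}
```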

8.
This paper presents a case study on the use of human factors and ergonomics to enhance requirement specifications for complex sociotechnical system support tools, by improving the understanding of human performance within the business domain and by indicating high‐value candidate requirements for information technology support. This work uses methods based on cognitive engineering to build representations of the business domain, highlighting workers’ needs and contributing to the improvement of software requirements specifications used in the healthcare domain. As the human factors discipline sits between the human sciences and technology design, we believe that its concepts can be combined with software engineering to improve understanding of how people work, enabling the design of better information technology.

9.
The field of automated reasoning is an outgrowth of the field of automated theorem proving. The difference in the two fields is not so much in the procedures on which they rest, but rather in the way the corresponding programs are used. Here we present a comprehensive treatment of the use of an automated reasoning program to answer certain previously open questions from equivalential calculus. The questions are answered with a uniform method that employs schemata to study the infinite domain of theorems deducible from certain formulas. We include sufficient detail both to permit the work to be duplicated and to enable one to consider other applications of the techniques. Perhaps more important than either the results or the methodology is the demonstration of how an automated reasoning program can be used as an assistant and a colleague. Precise evidence is given of the nature of this assistance.
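As background (general knowledge about the calculus, not taken from the paper): formulas of the equivalential calculus are built from propositional variables and the two-place equivalence connective E, and the rule of inference typically used in automated studies is condensed detachment, sketched below.

```latex
% Condensed detachment (standard formulation, offered here as background):
% from a theorem E(alpha, beta) and a theorem gamma, where gamma and alpha are
% unifiable with most general unifier sigma, one may infer beta instantiated by sigma.
\[
  \frac{\vdash E(\alpha,\beta) \qquad \vdash \gamma \qquad \sigma = \mathrm{mgu}(\alpha,\gamma)}
       {\vdash \beta\sigma}
\]
```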

10.
Fault localization techniques were originally proposed to assist manual debugging, generally by producing a ranked list of suspicious locations. With the increasing popularity of automated program repair, fault localization techniques have been introduced to effectively reduce its search space. Unlike developers, who mainly focus on the rank information, current automated program repair uses the fault localization information in one of two ways: the suspiciousness-first algorithm (SFA), based on suspiciousness accuracy, and the rank-first algorithm (RFA), relying on rank accuracy. Although both usages are widely adopted by current automated program repair and may lead to different repair results, little is known about the impact of the two strategies on automated program repair. In this paper we empirically compare the performance of SFA and RFA in the context of automated program repair. Specifically, we implement the two strategies and six well-studied fault localization techniques in four state-of-the-art automated program repair tools, and then use these tools to perform repair experiments on 60 real-world bugs from Defects4J. Our study presents a number of interesting findings: RFA outperforms SFA in 70.02% of cases when measured by the number of candidate patches generated before a valid patch is found (NCP), while SFA performs better in parallel repair and patch diversity; the performance of SFA can be improved by increasing the suspiciousness accuracy of fault localization techniques; finally, we use SimFix, which deploys SFA, to successfully repair four extra Defects4J bugs that cannot be repaired by the original SimFix using RFA. These observations provide a new perspective for future research on the usage and improvement of fault localization in automated program repair.
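As a concrete illustration of the two usages, the sketch below computes Ochiai suspiciousness (one well-studied spectrum-based formula) for a few made-up statements and then derives both the raw scores a suspiciousness-first strategy would consume and the rank list a rank-first strategy would consume. The coverage numbers and statement names are invented for illustration; this is not code from the study.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration (not the study's code): Ochiai suspiciousness for a few
// invented statements, consumed either as raw scores (SFA) or as a rank list (RFA).
public class OchiaiSketch {
    // ef = failing tests covering the statement, ep = passing tests covering it,
    // totalFailing = total number of failing tests in the suite.
    static double ochiai(int ef, int ep, int totalFailing) {
        double denom = Math.sqrt((double) totalFailing * (ef + ep));
        return denom == 0 ? 0.0 : ef / denom;
    }

    public static void main(String[] args) {
        int totalFailing = 2;
        Map<String, int[]> coverage = new LinkedHashMap<>();   // statement -> {ef, ep}, made up
        coverage.put("Foo.java:12", new int[] {2, 1});
        coverage.put("Foo.java:30", new int[] {1, 5});
        coverage.put("Bar.java:7",  new int[] {2, 4});

        // Suspiciousness-first (SFA): the repair search keeps the scores themselves.
        Map<String, Double> scores = new LinkedHashMap<>();
        coverage.forEach((stmt, c) -> scores.put(stmt, ochiai(c[0], c[1], totalFailing)));
        scores.forEach((stmt, s) -> System.out.printf("SFA  %-12s %.3f%n", stmt, s));

        // Rank-first (RFA): only the ordering survives; gaps between scores are discarded.
        List<String> ranked = new ArrayList<>(scores.keySet());
        ranked.sort((a, b) -> Double.compare(scores.get(b), scores.get(a)));
        for (int i = 0; i < ranked.size(); i++) {
            System.out.printf("RFA  rank %d: %s%n", i + 1, ranked.get(i));
        }
    }
}
```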

11.
Probabilistic weather forecasts are amongst the most popular ways to quantify numerical forecast uncertainties. The analog regression method can quantify uncertainties and express them as probabilities. The method comprises the analysis of errors from a large database of past forecasts generated with a specific numerical model and observational data. Current visualization tools based on this method are essentially automated and provide limited analysis capabilities. In this paper, we propose a novel approach that breaks down the automatic process using the experience and knowledge of the users and creates a new interactive visual workflow. Our approach allows forecasters to study probabilistic forecasts, their inner analogs and observations, their associated spatial errors, and additional statistical information by means of coordinated and linked views. We designed the presented solution following a participatory methodology together with domain experts. Several meteorologists with different backgrounds validated the approach. Two case studies illustrate the capabilities of our solution. It successfully facilitates the analysis of uncertainty and systematic model biases for improved decision‐making and process‐quality measurements.

12.
Object-oriented languages are widely used in software development to help the developer in using dynamic data structures which evolve during program execution. However, the task of program comprehension and performance analysis necessitates an understanding of the data structures used in a program, particularly of which application programming interface (API) objects are used at runtime. The objective of this work is to give a compact view of the complete program code information at a single glance and to provide the user with an interactive environment to explore details of a given program. This work presents a novel interactive visualization tool for collection framework usage in a Java program, based on a hierarchical treemap. A given program is instrumented at execution time and the data are recorded into a log file. The log file is then converted to an extensible markup language (XML)-based tree format that is passed to the visualization component. The visualization provides a global view of the usage of collection API objects at different locations during program execution. We conduct an empirical study to evaluate the impact of the proposed visualization on program comprehension. The experimental group (having the proposed tool support), on average, completes the tasks in 45% less time compared to the control group (not provided with the proposed tool). Results show that the proposed tool enables users to comprehend more information with less effort and time. We have also evaluated the performance of the proposed tool using 20 benchmark software tools. The proposed tool is anticipated to help the developer in understanding Java programs and to assist in program comprehension and maintenance by identifying API usage and its patterns.
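The kind of runtime record such a tool aggregates can be imagined as follows; this is a hand-written, hypothetical stand-in (the real tool instruments programs automatically, and its log and XML formats are its own).

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the runtime record the tool builds automatically:
// note which collection API class is created and where, and count usages per class.
public class CollectionUsageLog {
    static final Map<String, Integer> USES = new LinkedHashMap<>();   // API class -> usage count
    static final List<String> EVENTS = new ArrayList<>();             // individual allocation events

    static <E> List<E> newLoggedList(String site) {
        USES.merge("java.util.ArrayList", 1, Integer::sum);
        EVENTS.add("ArrayList allocated at " + site);                 // site label is illustrative
        return new ArrayList<>();
    }

    public static void main(String[] args) {
        List<Integer> xs = newLoggedList("Demo.main:14");
        xs.add(1);
        EVENTS.forEach(System.out::println);
        USES.forEach((api, n) -> System.out.println(api + " used " + n + " time(s)"));
    }
}
```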

13.
As software systems become increasingly massive, the advantages of automated transformation tools are clearly evident. These tools allow the machine to both reason about and manipulate high-level source code. They enable off-loading of mundane and laborious programming tasks from the human developer to the machine, thereby reducing cost and development time frames. Although there has been much work in software transformation, there still exist many hurdles in realizing this technology in a commercial domain. From our own experience, there are two significant problems that must be addressed before transformation technology can be usefully applied in a commercial setting. These are: (1) Avoiding disruption of the style (i.e., layout and commenting) of source code and the introduction of any undesired modifications that can occur as a side effect of the transformation process. (2) Correct automated handling of C preprocessing and the presentation of a semantically correct view of the program during transformation. Many existing automated transformation tools require source to be manually modified so that preprocessing constructs can be parsed. The real semantics of the program remain obscured, resulting in the need for complicated analysis during transformation. Many systems also resort to pretty printing to generate transformed programs, which inherently disrupts coding style. In this paper we describe our own C/C++ transformation system, Proteus, that addresses both these issues. It has been tested on millions of lines of commercial C/C++ code and has been shown to meet the stringent criteria laid out by Lucent’s own software developers.

14.
The use of graphical user interfaces in a computerized work environment is often considered to substantially improve the work situation. The outcome can, however, often be the opposite. Inappropriate use of windowing techniques, scrolling, and colors can result in tedious and confusing interaction with the computer. Today's standards and style guides define basic design principles but are insufficient for design of interfaces to end‐user applications. Here detailed domain knowledge is indeed essential. A domain‐specific style guide (DSSG) is an extension of today's standard with domain‐specific primitives, interface elements, and forms, together with domain‐specific guidelines. Careful dedicated analysis of information utilization in a domain is the development basis for a DSSG. The development is performed with an object‐oriented approach to facilitate the reuse of interface components and to support consistency and structure. Using a DSSG, the development of applications can be performed with a simplified information analysis. Therefore a more effective design process is possible, one in which end users can participate in the design using their own familiar domain‐related terminology. Time and costs for the development process can be drastically reduced if domain‐specific style guides, design guidelines, and development tools are used.

15.
This paper presents Programming Adaptive Testing (PAT), a Web‐based adaptive testing system for assessing students' programming knowledge. PAT was used in two high school programming classes by 73 students. The question bank of PAT is composed of 443 questions, each classified into one of three difficulty levels. In PAT, the difficulty levels are aligned with the lower levels of Bloom's taxonomy, and students are examined in the cognitive domain. This means that PAT has been designed according to pedagogical theories in order to be appropriate for the needs of the course ‘Application Development in a Programming Environment’. If a student answers a question correctly, a harder question is presented, otherwise an easier one. Easy questions examine the student's knowledge, while difficult questions examine the student's skills in applying prior knowledge to new problems. Each student answers a personalized test composed of 30 questions. PAT classifies a student into one of three programming skill levels and can predict the corresponding classification of students in the Greek National Exams. Furthermore, it can be helpful to both students and teachers: a student could discover his or her programming shortcomings, and a teacher could objectively assess his or her students as well as discover the subjects that need to be repeated.
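The adaptive selection rule described above (harder question after a correct answer, easier after a wrong one) can be sketched as follows; the types and the three-level question bank are hypothetical stand-ins, not PAT's actual implementation.

```java
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical sketch of the adaptive rule, not PAT's implementation: a correct answer
// moves the student up one difficulty level, a wrong answer moves them down one.
public class AdaptiveTestSketch {
    record Question(String text, int level) {}           // level: 0 = easy, 1 = medium, 2 = hard

    static List<Question> runTest(Map<Integer, Deque<Question>> bank,   // one deque per level
                                  Predicate<Question> answeredCorrectly) {
        List<Question> asked = new ArrayList<>();
        int level = 1;                                    // start at the medium level
        while (asked.size() < 30 && !bank.get(level).isEmpty()) {   // a test has 30 questions
            Question q = bank.get(level).pollFirst();     // take the next question at this level
            asked.add(q);
            level = answeredCorrectly.test(q)
                    ? Math.min(2, level + 1)              // harder after a correct answer
                    : Math.max(0, level - 1);             // easier after a wrong answer
        }
        return asked;
    }
}
```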

16.
Although database design tools have been developed that attempt to automate (or semiautomate) the design process, these tools do not have the capability to capture common sense knowledge about business applications and store it in a context-specific manner. As a result, they rely on the user to provide a great deal of "trivial" details and do not function as well as a human designer who usually has some general knowledge of how an application might work based on his or her common sense knowledge of the real world. Common sense knowledge could be used by a database design system to validate and improve the quality of an existing design or even generate new designs. This requires that context-specific information about different database design applications be stored and generalized into information about specific application domains (e.g., pharmacy, daycare, hospital, university, manufacturing). Such information should be stored at the appropriate level of generality in a hierarchically structured knowledge base so that it can be inherited by the subdomains below. For this to occur, two types of learning must take place. First, knowledge about a particular application domain that is acquired from specific applications within that domain is generalized into a domain node (e.g., entities, relationships, and attributes from various hospital applications are generalized to a hospital node). This is referred to as within domain learning. Second, the information common to two (or more) related application domain nodes is generalized to a higher-level node; for example, knowledge from the car rental and video rental domains may be generalized to a rental node. This is called across domain learning. This paper presents a methodology for learning across different application domains based on a distance measure. The parameters used in this methodology were refined by testing on a set of representative cases; empirical testing provided further validation.

17.
Jean Bovet  Terence Parr 《Software》2008,38(12):1305-1332
Programmers tend to avoid using language tools, resorting to ad hoc methods, because tools can be hard to use, their parsing strategies can be difficult to understand and debug, and their generated parsers can be opaque black‐boxes. In particular, there are two very common difficulties encountered by grammar developers: understanding why a grammar fragment results in a parser non‐determinism and determining why a generated parser incorrectly interprets an input sentence. This paper describes ANTLRWorks, a complete development environment for ANTLR grammars that attempts to resolve these difficulties and, in general, make grammar development more accessible to the average programmer. The main components are a grammar editor with refactoring and navigation features, a grammar interpreter, and a domain‐specific grammar debugger. ANTLRWorks' primary contributions are a parser non‐determinism visualizer based on syntax diagrams and a time‐traveling debugger that pays special attention to parser decision‐making by visualizing lookahead usage and speculative parsing during backtracking. Copyright © 2008 John Wiley & Sons, Ltd.

18.
GOP is a graph‐oriented programming model which aims at providing high‐level abstractions for configuring and programming cooperative parallel processes. With GOP, the programmer can configure the logical structure of a parallel/distributed program by constructing a logical graph to represent the communication and synchronization between the local programs in a distributed processing environment. This paper describes a visual programming environment, called VisualGOP, for the design, coding, and execution of GOP programs. VisualGOP applies visual techniques to provide the programmer with automated and intelligent assistance throughout the program design and construction process. It provides a graphical interface with support for interactive graph drawing and editing, visual programming functions and automation facilities for program mapping and execution. VisualGOP is a generic programming environment independent of programming languages and platforms. GOP programs constructed under VisualGOP can run in heterogeneous parallel/distributed systems. Copyright © 2005 John Wiley & Sons, Ltd.

19.
Parallel Programming Models and Languages　　Total citations: 17 (self-citations: 0; citations by others: 17)
安虹  陈国良 《软件学报》2002,13(1):118-124
Parallel computing technology has been developing for more than 20 years. Even today, high-performance parallel computing still lacks effective parallel programming methods and tools, which makes it difficult to write parallel programs, understand their behavior, and debug and optimize their performance. Starting from an analysis of why parallel programming is difficult, this paper points out the many problems with the parallel programming methods supported by current high-performance parallel computer systems, surveys the state of research on parallel programming models and languages, gives evaluation criteria for parallel programming models, raises the challenging open problems facing this research area, and identifies some possible directions for future development.

20.
Context: The number of students enrolled in standard and online university programming courses is rapidly increasing. This calls for automated evaluation of students' assignments. Objective: We aim to develop methods and tools for objective and reliable automated grading that can also provide substantial and comprehensible feedback. Our approach targets introductory programming courses, which have a number of specific features and goals. The benefits are twofold: reducing the workload for teachers, and providing helpful feedback to students in the process of learning. Method: For sophisticated automated evaluation of students' programs, our grading framework combines the results of three approaches: (i) testing, (ii) software verification, and (iii) control flow graph similarity measurement. We present our tools for software verification and control flow graph similarity measurement, which are publicly available and open source. The tools are based on an intermediate code representation, so they could be applied to a number of programming languages. Results: Empirical evaluation of the proposed grading framework is performed on a corpus of programs written by university students in the programming language C within an introductory programming course. Results of the evaluation show that the synergy of the proposed approaches improves the quality and precision of automated grading and that automatically generated grades are highly correlated with instructor-assigned grades. Also, the results show that our approach can be trained to adapt to a teacher's grading style. Conclusions: In this paper we integrate several techniques for the evaluation of students' assignments. The obtained results suggest that the presented tools can find real-world applications in automated grading.
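One way to picture the combination step is the hedged sketch below: a weighted blend of the three evidence sources the abstract lists, with weights standing in for the quantities that would be trained to match an instructor's grading style. The weights and the 0-100 scale are assumptions for illustration, not the authors' actual model.

```java
// Hypothetical sketch of combining the three evidence sources into a single grade;
// the weights and the 0-100 scale are assumptions, not the authors' framework.
public class GradeSketch {
    static long grade(double testsPassedRatio,       // fraction of automated tests passed (0..1)
                      double verificationPenalty,    // 0..1, e.g. severity of verification findings
                      double cfgSimilarity) {        // 0..1, CFG similarity to a model solution
        double wTests = 0.6, wVerify = 0.2, wCfg = 0.2;   // assumed weights; trainable in principle
        double score = wTests * testsPassedRatio
                     + wVerify * (1.0 - verificationPenalty)
                     + wCfg * cfgSimilarity;
        return Math.round(score * 100.0);            // scale to a 0-100 grade
    }

    public static void main(String[] args) {
        // e.g. 80% of tests passed, minor verification findings, fairly similar control flow
        System.out.println(grade(0.8, 0.1, 0.7));    // prints 80
    }
}
```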
