Similar Documents
A total of 20 similar documents were found.
1.
2.
3.
Zeller A. Computer, 2001, 34(11): 26-31
Although software engineers have enjoyed tremendous productivity increases as more of their tasks have become automated, debugging remains as labor-intensive and painful as it was 50 years ago. An engineer or programmer must still set up hypotheses to use in identifying and correcting a failure's root cause. The author describes a new algorithm that promises to relieve programmers of the hit-or-miss approach to debugging. Delta Debugging uses the results of automated testing to systematically narrow the set of failure-inducing circumstances. Programmers supply a test function for each bug, which can be coded in any imperative language. The test function checks a set of changes to determine whether the failure is present or the outcome is unresolved, then feeds that information to the Delta Debugging code. As we discover more about the structure of these circumstances and the resulting causality chain, we come closer to passing much of the boredom and monotony of debugging on to machines. Debugging can be just as disciplined, systematic, and quantifiable as any other area of software engineering, which means that we should eventually be able to automate at least part of it. Ultimately, debugging may become as automated as testing: not only detecting failures, but also revealing how they came to be.
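The abstract above describes the core loop: a programmer-supplied test function classifies a set of changes as failing, passing, or unresolved, and Delta Debugging narrows the failure-inducing set. Below is a minimal, hedged sketch of a ddmin-style narrowing loop in Python; the function names and the simple PASS/FAIL return convention are illustrative, not Zeller's original implementation.

```python
# Minimal sketch of a ddmin-style narrowing loop (illustrative, not the
# original implementation). The caller supplies `test`, which takes a subset
# of changes and returns "PASS", "FAIL", or "UNRESOLVED".
def ddmin(changes, test, granularity=2):
    while len(changes) >= 2:
        # Split the current change set into chunks of roughly equal size.
        size = max(1, len(changes) // granularity)
        chunks = [changes[i:i + size] for i in range(0, len(changes), size)]
        reduced = False
        for chunk in chunks:
            complement = [c for c in changes if c not in chunk]
            if test(complement) == "FAIL":
                # The failure persists without this chunk: discard it.
                changes = complement
                granularity = max(granularity - 1, 2)
                reduced = True
                break
        if not reduced:
            if granularity >= len(changes):
                break                     # cannot split any further
            granularity = min(granularity * 2, len(changes))
    return changes                        # a small failure-inducing change set

# Hypothetical usage: changes 3 and 7 together trigger the failure.
failing = {3, 7}
result = ddmin(list(range(10)),
               lambda cs: "FAIL" if failing.issubset(cs) else "PASS")
print(result)   # e.g. [3, 7]
```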

4.
During system development, developers often must correct wrong behavior in the software, an activity colloquially called program debugging. Debugging is a complex activity, especially in real-time embedded systems, because such systems interact with the physical world and make heavy use of interrupts for timing and for driving I/O devices. Debugging interrupts is difficult because they cause non-linear control flow in programs that is hard to reproduce in software. Record/replay mechanisms have proven useful for debugging embedded systems because they provide a means to recreate control flows offline, where they can be debugged. In this work, we present the data tracing part of a record/replay mechanism that is specifically targeted at recording interrupt behavior. To tune our tracing mechanism, we use the observed principle of return address clustering and a formal model for quantitative reasoning about the tracing mechanism. Considering the leanness of the approach, the presented heuristic and mechanisms show surprisingly good results: with higher fingerprint widths, an 800 percent speedup of the selector function and a 300 percent reduction in duplicates for non-optimal selector functions. Using an equal portion for the fingerprint and for the return address led to the best results in our experiments.
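The abstract mentions a selector function, fingerprint widths, and return address clustering, but not the concrete encoding. The sketch below is a hedged illustration of how a trace record might split its bits between a fingerprint and the low, discriminating bits of a return address; the bit layout and names are assumptions, not the paper's scheme.

```python
# Hedged sketch: building a trace record from an interrupt's return address.
# The equal split between fingerprint bits and return-address bits mirrors the
# abstract's "equal portion" remark, but the exact layout is an assumption.
FINGERPRINT_WIDTH = 16          # total bits available in a trace record

def make_trace_record(return_address: int, sequence_counter: int,
                      width: int = FINGERPRINT_WIDTH) -> int:
    half = width // 2
    # Return addresses cluster: high bits are shared, low bits discriminate,
    # so keep only the low `half` bits of the address.
    addr_part = return_address & ((1 << half) - 1)
    # The remaining bits hold a rolling counter to disambiguate duplicates.
    seq_part = (sequence_counter & ((1 << half) - 1)) << half
    return seq_part | addr_part

# Example: two interrupts returning to nearby addresses get distinct records.
print(hex(make_trace_record(0x0800_1A2C, 1)))
print(hex(make_trace_record(0x0800_1A30, 2)))
```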

5.
Frameworks are reusable software composed of concrete and abstract classes that implement the functionality of a domain. Applications reuse frameworks to enhance quality and development efficiency. However, frameworks are hard to learn and reuse: application developers must understand the complex class hierarchy of the framework to instantiate it properly. In this paper, we present an approach to build a Domain-Specific Modeling Language (DSML) for a framework and use it to facilitate framework reuse during application development. The DSML of a framework is built by identifying the features of the framework and the information required to instantiate them. Application generators transform models created with the DSML into application code, hiding framework complexities. We illustrate the use of our approach with a framework for the domain of business resource transactions and report an experiment that evaluated the efficiency gains obtained with our approach.
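To make the generator idea concrete, here is a hedged sketch: a toy feature model (a plain dict standing in for a DSML instance) and a template-based generator that emits the framework-instantiation boilerplate. The feature names, templates, and framework API are hypothetical, not the paper's actual DSML.

```python
# Hedged sketch of a DSML-to-code generator: the "model" lists which framework
# features an application needs, and the generator emits instantiation code.
MODEL = {
    "application": "InventoryApp",
    "features": {
        "persistence": {"entity": "Product"},
        "reporting":   {"format": "pdf"},
    },
}

# One template per supported feature (hypothetical framework API).
TEMPLATES = {
    "persistence": "        self.repository = Repository(entity={entity!r})",
    "reporting":   "        self.reporter = ReportEngine(format={format!r})",
}

def generate(model: dict) -> str:
    lines = [f"class {model['application']}(FrameworkApplication):",
             "    def __init__(self):",
             "        super().__init__()"]
    for feature, params in model["features"].items():
        lines.append(TEMPLATES[feature].format(**params))
    return "\n".join(lines)

print(generate(MODEL))   # prints the generated application skeleton
```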

6.
Developers commonly use a web search engine such as Google to locate online resources and improve their productivity. A better understanding of what developers search for could help us understand their behaviors and the problems that they meet during the software development process. Unfortunately, we have a limited understanding of what developers frequently search for and of the search tasks that they often find challenging. To address this gap, we collected search queries from 60 developers and surveyed 235 software engineers from more than 21 countries across five continents. In particular, we asked our survey participants to rate the frequency and difficulty of 34 search tasks grouped along seven dimensions: general search, debugging and bug fixing, programming, third-party code reuse, tools, database, and testing. We find that searching for explanations of unknown terminologies, explanations of exceptions/error messages (e.g., HTTP 404), reusable code snippets, solutions to common programming bugs, and suitable third-party libraries/services are the most frequent search tasks that developers perform, while searching for solutions to performance bugs, solutions to multi-threading bugs, public datasets to test newly developed algorithms or systems, reusable code snippets, best industrial practices, database optimization solutions, solutions to security bugs, and solutions to software configuration bugs are the search tasks that developers find most difficult. Our study sheds light on why practitioners often perform some of these tasks and why they find some of them challenging. We also discuss the implications of our findings for future research in several areas, e.g., code search engines, domain-specific search engines, and automated generation and refinement of search queries.

7.
Design and Implementation of a Visual Data Structures Class Library   Cited by: 4 (self-citations: 0, other citations: 4)
苏莹, 吴伟民. 《微机发展》, 2006, 16(5): 61-64
The visual data structures class library (JVDSCL, Visual Data Structures Class Library in Java) developed by our studio introduces visualization techniques into a data structures class library, making data structures visible. The paper describes the method for extending data structure classes with visualization support and presents the basic framework of the layout algorithms that visualize the various data structures. JVDSCL can be applied in program debugging and software development to improve software visibility, reusability, and development efficiency.
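As a hedged illustration of the idea of a visual data-structure class, the sketch below wraps an ordinary stack and redraws it after every mutating operation through a pluggable rendering callback. JVDSCL itself is a Java library with graphical layout algorithms; this Python, text-based sketch is only illustrative.

```python
# Hedged sketch: wrap a plain data structure and redraw it after every
# mutating operation. A real visual library would run a graphical layout
# algorithm; here the "renderer" is just print.
class VisualStack:
    def __init__(self, render=print):
        self._items = []
        self._render = render        # pluggable drawing callback

    def push(self, value):
        self._items.append(value)
        self._draw("push")

    def pop(self):
        value = self._items.pop()
        self._draw("pop")
        return value

    def _draw(self, operation):
        # Show the stack top-to-bottom after each operation.
        self._render(f"after {operation}: top -> {list(reversed(self._items))}")

s = VisualStack()
s.push(1); s.push(2); s.pop()
```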

8.
Building application domain models is a time-consuming activity in software engineering. In small teams, it is an activity that involves almost all participants, including developers and domain experts. In our approach, we support the knowledge engineering activity by reusing the tagging that team participants perform when they search the Web for information about the application's domain. Team participants collaborate implicitly when they tag, because their individually created tags are collected and form a folksonomy. This folksonomy reflects their knowledge about the domain and is the basis for eliciting domain model elements in the knowledge acquisition and conceptualization tasks in a consensual way. Experiments provide evidence that our approach helps team participants build richer domain models than they would without our software tool. The tool allows the reuse of simple annotations while users learn about the application's domain.
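A hedged sketch of how a folksonomy could seed domain model elements: tags collected from several participants are counted, and tags used frequently and by more than one person become candidate concepts. The thresholds and the consensus rule below are illustrative assumptions, not the paper's elicitation procedure.

```python
# Hedged sketch: elicit candidate domain concepts from a folksonomy by
# counting tags per participant. Thresholds are illustrative.
from collections import Counter

taggings = {
    "alice": ["invoice", "customer", "payment", "invoice"],
    "bob":   ["customer", "payment", "shipping"],
    "carol": ["invoice", "payment", "discount"],
}

def candidate_concepts(taggings, min_users=2, min_total=2):
    totals = Counter(tag for tags in taggings.values() for tag in tags)
    users = Counter(tag for tags in taggings.values() for tag in set(tags))
    # A tag becomes a candidate concept only if enough people used it.
    return sorted(tag for tag in totals
                  if users[tag] >= min_users and totals[tag] >= min_total)

print(candidate_concepts(taggings))   # ['customer', 'invoice', 'payment']
```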

9.
Much of software developers' time is spent understanding unfamiliar code. To better understand how developers gain this understanding and how software development environments might be involved, a study was performed in which developers were given an unfamiliar program and asked to work on two debugging tasks and three enhancement tasks for 70 minutes. The study found that developers interleaved three activities. They began by searching for relevant code both manually and using search tools; however, they based their searches on limited and misrepresentative cues in the code, environment, and executing program, often leading to failed searches. When developers found relevant code, they followed its incoming and outgoing dependencies, often returning to it and navigating its other dependencies; while doing so, however, Eclipse's navigational tools caused significant overhead. Developers collected code and other information that they believed would be necessary to edit, duplicate, or otherwise refer to later by encoding it in the interactive state of Eclipse's package explorer, file tabs, and scroll bars. However, developers lost track of relevant code as these interfaces were used for other tasks and were forced to find it again. These issues caused developers to spend, on average, 35 percent of their time performing the mechanics of navigation within and between source files. These observations suggest a new model of program understanding grounded in theories of information foraging, and they suggest ideas for tools that help developers seek, relate, and collect information in a more effective and explicit manner.

10.
Our current understanding of how programmers perform feature location during software maintenance is based on controlled studies or interviews, which are inherently limited in size, scope, and realism. Replicating controlled studies in the field can both explore the findings of these studies in wider contexts and study new factors that have not previously been encountered in the laboratory setting. In this paper, we report on a field study of how software developers perform feature location within source code during their daily development activities. Our study is based on two complementary field data sets: one that reflects the complete IDE activity of 67 professional developers over approximately one month, and another that reflects usage of an IR-based code search tool by nearly 600 developers. Analyzing these data, we report results on how often developers use which type of code search tool, on the types of queries and retrieval strategies used by developers, and on patterns of developer feature location behavior following code search. The results of the study suggest (1) a need to help developers devise better code search queries; (2) a lack of adoption of niche code search tools; (3) a need for code search tools to handle both lookup and exploratory queries; and (4) a need for better integration between code search, structured navigation, and debugging tools in feature location tasks.

11.
Interactive Fault Localization Using Test Information   Cited by: 2 (self-citations: 0, other citations: 2)
Debugging is a time-consuming task in software development. Although various automated approaches have been proposed, they are not effective enough. In manual debugging, on the other hand, developers have difficulty choosing breakpoints. To address these problems and help developers locate faults effectively, we propose an interactive fault-localization framework that combines the benefits of automated approaches and manual debugging. Until the fault is found, this framework continuously recommends checking points based on statements' suspiciousness, which is calculated from the execution information of test cases and the feedback given by the developer at earlier checking points. We first propose a naive approach, an initial implementation of this framework. With this naive approach, as with manual debugging, a developer's wrong estimation of whether the faulty statement is executed before the checking point (breakpoint) may cause the debugging process to fail. We therefore propose a second, robust approach based on this framework, which handles cases where developers make mistakes during the fault-localization process. We performed two experimental studies, and the results show that the two interactive approaches are quite effective compared with existing fault-localization approaches. Moreover, the robust approach can help developers find faults even when they make wrong estimations at some checking points.
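The abstract describes ranking statements by suspiciousness computed from test executions and updating recommendations with developer feedback. Below is a minimal hedged sketch of that flow; the Tarantula-like scoring formula and the rule of simply excluding statements the developer has cleared are illustrative assumptions, not necessarily the paper's exact method.

```python
# Hedged sketch: spectrum-based suspiciousness from passing/failing coverage,
# plus a feedback step that excludes statements the developer judged correct.
def suspiciousness(failed_cov, passed_cov, total_failed, total_passed):
    scores = {}
    for stmt in set(failed_cov) | set(passed_cov):
        fail_ratio = failed_cov.get(stmt, 0) / total_failed if total_failed else 0.0
        pass_ratio = passed_cov.get(stmt, 0) / total_passed if total_passed else 0.0
        denom = fail_ratio + pass_ratio
        scores[stmt] = fail_ratio / denom if denom else 0.0   # Tarantula-like
    return scores

def next_checking_point(scores, cleared):
    # Developer feedback: statements already judged correct are excluded.
    remaining = {s: v for s, v in scores.items() if s not in cleared}
    return max(remaining, key=remaining.get) if remaining else None

scores = suspiciousness({"s3": 2, "s5": 2}, {"s3": 8, "s5": 1},
                        total_failed=2, total_passed=10)
print(next_checking_point(scores, cleared=set()))     # first recommendation: "s5"
print(next_checking_point(scores, cleared={"s5"}))    # after feedback: "s3"
```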

12.
It is well recognized that traceability links between software artifacts provide crucial support for comprehension, efficient development, and effective management of a software system. However, automated traceability systems to date have faced two major open research challenges: how to extract traceability links with both high precision and high recall, and how to efficiently visualize links for complex systems despite scalability and visual clutter issues. To overcome these two challenges, we designed and developed a traceability system, DCTracVis. This system employs an approach that combines three supporting techniques, regular expressions, key phrases, and clustering, with information retrieval (IR) models to improve the performance of automated traceability recovery between documents and source code. This combination takes advantage of the strengths of the three techniques to ameliorate the limitations of IR models. Our experimental results show that our approach improves the performance of IR models, increases the precision of retrieved links, and recovers more correct links than IR alone. After retrieving high-quality traceability links, DCTracVis then uses a new approach that combines treemap and hierarchical tree techniques to reduce visual clutter and to allow visualization of the global structure of traces together with a detailed overview of each trace, while remaining highly scalable and interactive. Usability evaluation results show that our approach can effectively and efficiently help software developers comprehend, browse, and maintain large numbers of links.
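For orientation, the sketch below shows only the IR core that such recovery builds on: scoring candidate document-to-code pairs by TF-IDF cosine similarity. The regex, key-phrase, and clustering enhancements described in the abstract are not reproduced, and the corpus is illustrative.

```python
# Hedged sketch of IR-based traceability recovery: score document/code pairs
# by TF-IDF cosine similarity. Corpus and scores are illustrative.
import math
from collections import Counter

def tfidf_vectors(docs):
    df = Counter(term for doc in docs.values() for term in set(doc.split()))
    n = len(docs)
    return {name: {t: c * math.log(n / df[t])
                   for t, c in Counter(doc.split()).items()}
            for name, doc in docs.items()}

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = {
    "REQ-12": "export report as pdf document",
    "ReportExporter.java": "export report pdf writer stream",
    "LoginService.java": "authenticate user password session",
}
vecs = tfidf_vectors(docs)
for code in ("ReportExporter.java", "LoginService.java"):
    # Higher similarity suggests a candidate traceability link.
    print(code, round(cosine(vecs["REQ-12"], vecs[code]), 3))
```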

13.
Research on Automatic Generation of Federate Framework Code   Cited by: 3 (self-citations: 0, other citations: 3)
Developers of federate software face the problem of learning and using the RTI library; programming against its many low-level interfaces often shifts the federation developer's attention from the federation problem domain to RTI technical details. Designing and implementing a generator for federate framework code can therefore greatly reduce the difficulty of federate software development and speed it up. By analyzing the program flow and software composition of a federate, this paper abstracts federates using an object-oriented method, designs a set of base classes that capture federate characteristics, and, based on these designs, implements a method that automatically generates federate software framework code from the HLA object models (FOM/SOM). The generated code provides an abstraction layer between the RTI and the actual simulation entity models, so developers need not consider the information exchange between the federate and the RTI and are responsible only for implementing the simulation functionality of the entity models, thus achieving code reuse at the federate level.
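As a hedged illustration of generating framework code from an object model, the sketch below reads object classes and attributes from a FOM/SOM-like structure (a plain dict standing in for parsed OMT XML) and emits a wrapper class per object class. The base-class and method names are hypothetical, not the paper's design.

```python
# Hedged sketch: emit federate-side wrapper classes from a FOM/SOM-like model.
# The FOM content, base class, and RTI call names are illustrative only.
FOM = {
    "Aircraft": ["position", "velocity", "callsign"],
    "RadarTrack": ["trackId", "quality"],
}

def generate_federate_classes(fom: dict) -> str:
    out = []
    for object_class, attributes in fom.items():
        out.append(f"class {object_class}(HlaObjectBase):")
        out.append(f"    ATTRIBUTES = {attributes!r}")
        out.append("    def publish(self, rti):")
        out.append(f"        rti.publish_object_class({object_class!r}, self.ATTRIBUTES)")
        out.append("")
    return "\n".join(out)

print(generate_federate_classes(FOM))   # prints the generated skeleton code
```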

14.
In this paper, we draft a CSP-based language for our distributed software development system. A program model hierarchy, classified into three characteristic layers: a specification layer, a monitoring/debugging layer, and a code layer, is introduced to help users use appropriate modeling methods throughout the program development cycle. A hierarchical visualization interface has also been designed and implemented; it contains our three-level visualization modeling framework with three different models, Petri nets, IPC Dataflow, and Event Code View, derived from the extended CSP. We can monitor a distributed application by using any coordinated combination of the constructs of these models as a favorite view in a systematic manner. The transformation rules among the constructs of these visual models are also discussed.

15.
Modelling relationships between entities in real-world systems with a simple graph is a standard approach. However, reality is better captured as several interdependent subsystems (or layers). Recently, the concept of a multilayer network model has emerged from the field of complex systems. This model can be applied to a wide range of real-world data sets. Examples of multilayer networks can be found in the domains of life sciences, sociology, digital humanities, and more. Within the domain of graph visualization, there are many systems that visualize data sets having many characteristics of multilayer graphs. This report provides a state of the art and a structured analysis of contemporary multilayer network visualization, not only for researchers in visualization, but also for those who aim to visualize multilayer networks in the domain of complex systems, as well as those developing systems across application domains. We have explored the visualization literature to survey visualization techniques suitable for multilayer graph visualization, as well as tools, tasks, and analytic techniques from within application domains. This report also identifies the outstanding challenges for multilayer graph visualization and suggests future research directions for addressing them.
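To ground the terminology, here is a hedged sketch of a minimal multilayer-network data structure, distinguishing intra-layer edges from inter-layer couplings. The representation is illustrative and not a standard taken from the survey.

```python
# Hedged sketch of a minimal multilayer network: nodes may appear in several
# layers; edges connect (layer, node) pairs, within or across layers.
from collections import defaultdict

class MultilayerNetwork:
    def __init__(self):
        self.layers = defaultdict(set)          # layer -> set of nodes
        self.edges = []                         # ((layer, node), (layer, node))

    def add_node(self, layer, node):
        self.layers[layer].add(node)

    def add_edge(self, layer_u, u, layer_v, v):
        self.edges.append(((layer_u, u), (layer_v, v)))

    def inter_layer_edges(self):
        # Coupling edges connect the same or different nodes across layers.
        return [e for e in self.edges if e[0][0] != e[1][0]]

net = MultilayerNetwork()
net.add_node("social", "alice"); net.add_node("email", "alice")
net.add_edge("social", "alice", "email", "alice")   # coupling edge
print(len(net.inter_layer_edges()))                  # 1
```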

16.
17.
Writing functional code and error-handling code is already problematic because of the context switches that must occur inside the heads of software developers as they progress in a coding task. These distracting context switches lead to coding errors because developers stop thinking about one type of code and start thinking about another. Having to consider a third type of code further increases the complexity of the code development process. In this article, we look at today's security coding environment and suggest some research thrusts to advance programmers' capabilities to address modern security programming challenges.

18.
A visualization tool (CTViz) for charge transport processes in 3-D hybrid materials (nanocomposites) was developed, inspired by the need for a graphical application to assist in code debugging and data presentation for an existing in-house code. As the simulation code grew, troubleshooting problems became increasingly difficult without an effective way to visualize 3-D samples and the charge transport within them. CTViz is able to produce publication- and presentation-quality visuals of the simulation box, as well as static and animated visuals of the paths of individual carriers through the sample. CTViz was designed to provide a high degree of flexibility in the visualization of the data. A feature that characterizes this tool is the use of shade and transparency levels to highlight important details in the morphology or in the transport paths by hiding or dimming elements of little relevance to the current view. This is fundamental for the visualization of 3-D systems with complex structures. The code presented here provides these required capabilities, but has gone beyond the original design and could be used as is, or easily adapted, for the visualization of other particulate transport where transport occurs on discrete paths.

19.
Code cloning is one of the active research areas in the software engineering community. Researchers have conducted numerous empirical studies on code cloning and reported that 7% to 23% of the code in a typical software system has been cloned. However, there has been less awareness of code clones in dynamically-typed languages, and most studies are limited to statically-typed languages such as Java, C, and C++. In addition, most previous studies did not consider different application domains such as standalone projects or web applications. As a result, very little is known about clones in dynamically-typed languages, such as JavaScript, in different application domains. In this paper, we report a large-scale clone detection experiment in a dynamically-typed programming language, JavaScript, for different application domains: web pages and standalone projects. Our experimental results showed that, unlike JavaScript standalone projects, JavaScript web applications have 95% inter-file clones and 91-97% widely scattered clones. We observed that web application developers created clones intentionally, and such clones may not be as risky as claimed in previous studies. Understanding the risks of cloning in web applications requires further study, as cloning may be due to either good or bad intentions. We also identified unique development practices, such as including browser-dependent or device-specific code in code clones of JavaScript web applications. This indicates that features of programming languages and technologies affect how developers duplicate code.
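As a hedged illustration of what basic clone detection does, the sketch below normalizes two code fragments (stripping comments, abstracting identifiers and keywords, dropping whitespace) and compares fingerprints; identical fingerprints flag a clone-pair candidate. Real clone detectors are far more sophisticated; this only shows the underlying idea.

```python
# Hedged sketch of normalization-and-hashing clone detection: fragments with
# identical normalized fingerprints are reported as clone candidates.
import hashlib
import re

def normalize(fragment: str) -> str:
    fragment = re.sub(r"//.*", "", fragment)                # strip line comments
    fragment = re.sub(r"\b[A-Za-z_]\w*\b", "ID", fragment)  # abstract identifiers/keywords
    return re.sub(r"\s+", "", fragment)                     # drop whitespace

def fingerprint(fragment: str) -> str:
    return hashlib.sha1(normalize(fragment).encode()).hexdigest()

a = "function total(xs) { return xs.reduce((s, x) => s + x, 0); }"
b = "function sum(values) { return values.reduce((a, v) => a + v, 0); }"
print(fingerprint(a) == fingerprint(b))   # True: a renamed (Type-2-style) clone pair
```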

20.
To address the problem of errors in spreadsheets, we have investigated spreadsheet authors' mental models in the hope of finding cognition-based principles for spreadsheet visualization and debugging tools. To this end, we have conducted three empirical studies. The first study explored the nature of the mental models of spreadsheet authors during explaining and debugging tasks. It was found that several mental models about spreadsheets are activated in spreadsheet authors' minds. In particular, when explaining a spreadsheet, the real-world and domain mental models are prominent and the spreadsheet model is suppressed; however, when locating and fixing an error, one must constantly switch back and forth between the domain model and the spreadsheet model, which requires frequent use of the mapping between problem domain concepts and their spreadsheet model counterparts. The second study examined the effects of replacing traditional spreadsheet formulas with problem domain narratives in the context of a debugging task. Domain narratives were found to be easy to learn, and they helped participants locate more errors in spreadsheets. Furthermore, domain narratives also increased the use of the domain mental model and appeared to improve the mapping between the domain and spreadsheet models. The third study investigated the effects of allowing spreadsheet authors to fix errors by editing domain narratives, thus relieving them from the use of traditional low-level cell references. This scenario was found to promote spreadsheet authors' use of even more of their domain mental model, in a manner that completely overshadowed the use of their spreadsheet mental model. Thus, from a mental model perspective, it is possible to devise a new spreadsheet paradigm that uses domain narratives in place of traditional spreadsheet formulas, automatically presenting spreadsheet content so that it prompts spreadsheet authors to think in a manner that closely corresponds to their mental models of the application domain.
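A hedged sketch of the domain-narrative idea: map low-level cell references to domain names so a formula can be read, and potentially edited, in domain terms and then translated back. The cell mapping and formula below are illustrative, not the study's materials.

```python
# Hedged sketch: translate between cell-reference formulas and domain
# narratives via a name mapping. Mapping and formula are illustrative.
CELL_NAMES = {"B2": "unit_price", "C2": "quantity", "D1": "tax_rate"}

def to_narrative(formula: str, names: dict) -> str:
    for cell, name in names.items():
        formula = formula.replace(cell, name)
    return formula

def to_cells(narrative: str, names: dict) -> str:
    for cell, name in names.items():
        narrative = narrative.replace(name, cell)
    return narrative

f = "=B2*C2*(1+D1)"
n = to_narrative(f, CELL_NAMES)     # "=unit_price*quantity*(1+tax_rate)"
print(n)
print(to_cells(n, CELL_NAMES))      # round-trips back to "=B2*C2*(1+D1)"
```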

