Similar Documents
 A total of 20 similar documents were found (search time: 46 ms).
1.
Comprehending and debugging computer programs are inherently difficult tasks. The current approach to building program execution and debugging environments is to use exclusively visual stimuli on programming languages whose syntax and semantics have often been designed without empirical guidance. We present an alternative: Sodbeans, an open-source integrated development environment designed to output carefully chosen spoken auditory cues to supplement empirically evaluated visual stimuli. Although Sodbeans was originally designed for blind programmers, earlier work suggested that it may benefit sighted programmers as well. We evaluate Sodbeans in two experiments. First, we report on a formal debugging experiment comparing (1) a visual debugger, (2) an auditory debugger, and (3) a multimedia debugger, which includes both visual and auditory stimuli. The results from this study indicate that while auditory debuggers on their own are significantly less effective for sighted users when compared with visual and multimedia debuggers, multimedia debuggers might benefit sighted programmers under certain circumstances. Specifically, we found that while multimedia debuggers do not provide instant usability, once programmers have some practice, their performance in answering comprehension questions improves. Second, we created and evaluated a pilot survey analyzing individual elements in a custom programming language (called HOP) to garner empirical metrics on their comprehensibility. Results showed that some of the most widely used syntax and semantics choices in commercial programming languages are extraordinarily unintuitive for novices. For example, at an aggregate level, the word "for", as in a for loop, was rated reliably worse than "repeat" by more than 673% by novices. After completing our studies, we implemented the HOP programming language and integrated it into Sodbeans.
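The abstract does not reproduce HOP's actual syntax, so the following is only a hypothetical Python sketch of the for-versus-repeat wording contrast it reports: a conventional for header alongside an invented repeat helper whose name matches the word novices rated as clearer.

```python
# Hypothetical illustration of the "for" vs. "repeat" wording contrast
# discussed in the abstract; HOP's real syntax is not shown here.

def repeat(times):
    """Yield loop indices under a name novices reportedly find clearer."""
    return range(times)

# Conventional header, the wording novices rated poorly:
for i in range(5):
    print("conventional for loop, iteration", i)

# The same iteration phrased with the novice-preferred word "repeat":
for i in repeat(5):
    print("repeat-style loop, iteration", i)
```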

2.

Comprehension of computer programs involves identifying important program parts and inferring relationships between them. The ability to comprehend a computer program is a skill that begins its development in the novice programmer and reaches maturity in the expert programmer. This research examined the beginning of this process, that of comprehension of computer programs by novice programmers. The mental representations of the program text that novices form, which indicate the comprehension strategies being used, were examined. In the first study, 80 novice programmers were tested on their comprehension of short program segments. The results suggested that novices form detailed, concrete mental representations of the program text, supporting work that has previously been done with novice comprehension. Their mental representations were primarily procedural in nature, with little or no modeling using real‐world referents. In a second study, the upper and lower quartile comprehenders from Study 1 were tested on their comprehension of a longer program. Results supported the conclusions from Study 1 in that the novices tended towards detailed representations of the program text with little real‐world reference. However, the comprehension strategies used by high comprehenders differed substantially from those used by low comprehenders. Results indicated that the more advanced novices were using more abstract concepts in their representations, although their abstractions were detailed in nature.

3.
Prior empirical studies of programming have shown that novice programmers tend to program by exploration, relying on frequent compilation and execution of their code in order to make progress. One way visual and end-user programming environments have attempted to facilitate this exploratory programming process is through their support of “live” editing models, in which immediate visual feedback on a program's execution is provided automatically at edit time. Notice that the notion of “liveness” actually encompasses two distinct dimensions: (a) the amount of time a programmer must wait between editing a program and receiving visual feedback (feedback delay); and (b) whether such feedback is provided automatically, or whether the programmer must explicitly request it (feedback self-selection). While a few prior empirical studies of “live” editing do exist, none has specifically evaluated the impact of these dimensions of “live” editing within the context of the imperative programming paradigm commonly taught in first-semester computer science courses. As a preliminary step toward that end, we conducted an experimental study that investigated the impact of feedback self-selection on novice imperative programming. Our within-subjects design compared the impact of three different levels of feedback self-selection on syntactic and semantic correctness: (a) no visual feedback at all (the No Feedback treatment); (b) visual feedback, in the form of a visualization of the program's execution state, provided on request when a “run” button is hit (the Self-Select treatment); and (c) visual feedback, in the form of a visualization of the program's execution state, updated on every keystroke (the Automatic treatment). Participants in the Automatic and Self-Select treatments produced programs that had significantly fewer syntactic and semantic errors than those of the No Feedback treatment; however, no significant differences were found between the Automatic and Self-Select treatments. These results suggest that, at least in the case of novice imperative programming environments, the benefits of delivering a continuously updated visual representation of a program's execution may fail to justify the substantial costs of implementing such feedback. We recommend that programming environment designers instead direct their efforts toward carefully considering when programmers will be ready to take advantage of the feedback that is coming toward them, along with what content will be of most benefit to them.
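As a rough illustration of the feedback self-selection dimension (not the study's actual environment; all names here are invented), the sketch below contrasts the Self-Select treatment, where the execution-state visualization refreshes only when a run button is pressed, with the Automatic treatment, where it refreshes on every keystroke.

```python
# Minimal sketch of the two feedback self-selection levels described in the
# abstract; the class and method names are illustrative, not the study's tool.

class LiveEditor:
    def __init__(self, automatic_feedback: bool):
        self.automatic_feedback = automatic_feedback  # Automatic vs. Self-Select
        self.source = ""

    def on_keystroke(self, text: str) -> None:
        self.source = text
        if self.automatic_feedback:          # Automatic treatment:
            self.show_execution_state()      # refresh visualization on every edit

    def on_run_button(self) -> None:         # Self-Select treatment:
        self.show_execution_state()          # feedback only when requested

    def show_execution_state(self) -> None:
        # Stand-in for visualizing the program's execution state.
        print("visualizing execution of:", repr(self.source))

editor = LiveEditor(automatic_feedback=False)  # Self-Select configuration
editor.on_keystroke("x = 1")                   # no feedback yet
editor.on_run_button()                         # feedback on explicit request
```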

4.
Ergonomics, 2012, 55(8): 1264–1279
This study teased apart the effects of comprehensibility and complexity on older adults' comprehension of warning symbols by manipulating the relevance of additional information in further refining the meaning of the symbol. Symbols were systematically altered such that increased visual complexity (in the form of contextual cues) resulted in increased comprehensibility. One hundred older adults, aged 50–71 years, were tested on their comprehension of these symbols before and after training. High comprehensibility–complexity symbols were found to be better understood than low- or medium-comprehensibility–complexity symbols and the effectiveness of the contextual cues varied as a function of training. Therefore, the nature of additional detail determines whether increased complexity is detrimental or beneficial to older adults' comprehension – if the additional details provide ‘cues to knowledge’, older adults' comprehension improves as a result of the increased complexity. However, some cues may require training in order to be effective.

Practitioner Summary: Research suggests that older adults have greater difficulty in understanding more complex symbols. However, we found that when the complexity of symbols was increased through the addition of contextual cues, older adults' comprehension actually improved. Contextual cues aid older adults in making the connection between the symbol and its referent.

5.
Specification mining takes execution traces as input and extracts likely program invariants, which can be used for comprehension, verification, and evolution related tasks. In this work we integrate scenario-based specification mining, which uses a data-mining algorithm to suggest ordering constraints in the form of live sequence charts, an inter-object, visual, modal, scenario-based specification language, with mining of value-based invariants, which detects likely invariants holding at specific program points. The key to the integration is a technique we call scenario-based slicing, running on top of the mining algorithms to distinguish the scenario-specific invariants from the general ones. The resulting suggested specifications are rich, consisting of modal scenarios annotated with scenario-specific value-based invariants, referring to event parameters and participating object properties. We have implemented the mining algorithm and the visual presentation of the mined scenarios within a standard development environment. An evaluation of our work over a number of case studies shows promising results in extracting expressive specifications from real programs, which could not be extracted previously. The more expressive the mined specifications, the higher their potential to support program comprehension and testing.
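The value-based side of the mining can be pictured with a minimal, Daikon-style sketch; this is an illustration of the general idea, not the authors' algorithm, and the variable names and trace data are invented.

```python
# Rough illustration of value-based invariant detection over execution traces,
# in the spirit of the mining described above (not the authors' algorithm).

from typing import Dict, List

def mine_invariants(samples: List[Dict[str, int]]) -> List[str]:
    """Given variable snapshots observed at one program point,
    report simple likely invariants (constant value, observed range)."""
    invariants = []
    for var in samples[0].keys():
        values = [s[var] for s in samples]
        lo, hi = min(values), max(values)
        if lo == hi:
            invariants.append(f"{var} == {lo}")
        else:
            invariants.append(f"{lo} <= {var} <= {hi}")
    return invariants

# Snapshots taken each time a (hypothetical) monitored event fires:
trace = [{"size": 0, "capacity": 8},
         {"size": 3, "capacity": 8},
         {"size": 7, "capacity": 8}]
print(mine_invariants(trace))   # ['0 <= size <= 7', 'capacity == 8']
```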

6.
With the increasing performance demands in real-time systems, it becomes increasingly important to provide feedback to programmers and software development tools on the performance-relevant code parts of a real-time program. So far, this information has been limited to an estimation of the worst-case execution time (WCET) and its associated worst-case execution path (WCEP) only. However, both the WCET and the WCEP provide only partial information. Only code parts that are on one of the WCEPs are indicated to the programmer; no information is provided for the remaining code parts. To give a comprehensive view covering the entire code base, tools in the spirit of program profiling are required. This work proposes an efficient approach to compute worst-case timing information for all code parts of a program using a complementary metric, called criticality. Every statement of a program is assigned a criticality value, expressing how critical the code is with respect to the global WCET. This gives valuable information about how close the worst execution path passing through a specific program part is to the global WCEP. We formally define the criticality metric and investigate some of its properties with respect to dominance in control-flow graphs. Exploiting some of those properties, we propose an algorithm that reduces the overhead of computing the metric to cover complete programs. We also investigate ways to efficiently find only those code parts whose criticality is above a given threshold. Experiments using well-established real-time benchmark programs show an interesting distribution of the criticality values, revealing considerable amounts of highly critical as well as uncritical code. The metric thus provides ideal information to programmers and software development tools to optimize the worst-case execution time of these programs.
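Assuming the usual formulation of the metric, in which the criticality of a program part is the length of the longest execution path passing through it divided by the global WCET, a minimal sketch over a small acyclic control-flow graph might look as follows (graph, timings, and function names are all illustrative).

```python
# Minimal sketch of the criticality metric described above, assuming
# criticality(n) = (longest path through n) / (global WCET).
# The CFG is assumed acyclic, with per-node execution times.

from functools import lru_cache

nodes = {"entry": 1, "a": 5, "b": 2, "exit": 1}          # execution times
succ = {"entry": ["a", "b"], "a": ["exit"], "b": ["exit"], "exit": []}
pred = {"entry": [], "a": ["entry"], "b": ["entry"], "exit": ["a", "b"]}

@lru_cache(maxsize=None)
def longest_to_exit(n):           # includes n's own execution time
    return nodes[n] + max((longest_to_exit(s) for s in succ[n]), default=0)

@lru_cache(maxsize=None)
def longest_from_entry(n):        # includes n's own execution time
    return nodes[n] + max((longest_from_entry(p) for p in pred[n]), default=0)

wcet = longest_to_exit("entry")   # global WCET = longest entry-to-exit path
for n in nodes:
    through = longest_from_entry(n) + longest_to_exit(n) - nodes[n]
    print(n, "criticality =", through / wcet)
# Nodes on the WCEP get criticality 1.0; off-path nodes get values below 1.0.
```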

7.

To study cognitive processes, such as computer program comprehension, researchers often use verbal protocols to collect a process trace. However, the difficulty of collecting and analysing verbal protocol data can discourage even the most resolute researcher. Therefore, alternatives to verbal protocols, such as responses to comprehension questions, are undeniably attractive. Unfortunately, there is little methodological research to justify the use of most alternative methods. The current study compares the use of verbal protocol data with responses to comprehension questions as measures of comprehension process. According to results from the protocol analysis data, programmers used significantly different comprehension processes to understand computer programs in two phases of an experiment. If previous research was correct, then programmers' responses to different types of comprehension questions should reflect the differences in comprehension process. Unfortunately, comprehension process was not reflected in the responses to the questions. Hence, this research confirms that process tracing methods, such as verbal protocols, are a more appropriate method by which to investigate program comprehension processes.

8.
Recently, the first two in a series of planned comprehension experiments were performed to measure the effect of the control structure diagram (CSD) on program comprehensibility. Upper- and lower-division computer science and software engineering students were asked to respond to questions regarding the structure and execution of one source code module of a public domain graphics library. The time taken for each response and the correctness of each response were recorded. Statistical analysis of the data collected from these two experiments revealed that the CSD was highly significant in enhancing the subjects' performance in this program comprehension task. The results of these initial experiments promise to shed light on fundamental questions regarding the effect of software visualizations on program comprehensibility.

9.
This paper presents one experiment to explain why and under which circumstances visual programming languages would be easier to understand than textual programming languages. Towards this goal we bring together research from the psychology of programming and image processing. According to current theories of imagery processing, imagery facilitates quicker access to semantic information. Thus, visual programming languages should allow for quicker construction of a mental representation based on the data flow relationships of a program than procedural languages do. To test this hypothesis, the mental models of C and spreadsheet programmers were assessed in different program comprehension situations. The results showed that spreadsheet programmers developed data-flow-based mental representations in all situations, while C programmers seemed to access first a control-flow-based and then a data-flow-based mental representation. These results could help to expand theories of mental models from the psychology of programming to account for the effect of imagery.
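To make the control-flow versus data-flow distinction concrete (this is an invented illustration, not the study's materials), the same small computation is shown below first in an imperative, statement-by-statement style and then in an expression style closer to spreadsheet formulas.

```python
# Illustration (not the study's materials) of the contrast between a
# control-flow reading and a data-flow reading of the same computation.

prices = [4.0, 12.5, 7.25]
tax_rate = 0.08

# Control-flow style, close to how a C programmer steps through statements:
total = 0.0
for p in prices:
    total = total + p                            # state updated step by step
total_with_tax = total * (1 + tax_rate)

# Data-flow style, closer to spreadsheet cells defined by formulas:
subtotal = sum(prices)                           # like "=SUM(A1:A3)"
total_with_tax_df = subtotal * (1 + tax_rate)    # like "=B1*(1+B2)"

assert abs(total_with_tax - total_with_tax_df) < 1e-9
```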

10.
The construction of large software systems is always achieved through assembly of independently written components — program modules. For these software components to work together, they must share a common set of data types and principles for representing structured data such as arrays of values and files. This common set of tools for creating and operating on data objects is provided by the infrastructure of the computer system: the hardware, operating system and runtime code. Because the nature and properties of these tools are crucial for correct operation of software components and their inter-operation, it is essential to have a precise specification that may be used for verifying correctness of application software on one hand, and to verify correctness of system behavior on the other. We call such a specification a program execution model (PXM). It is evident that the properties of the PXM implemented by a computer system can have serious impact on the ability of application programmers to practice modular software construction. This paper discusses the concept of program execution models and presents a set of principles that a PXM must satisfy to provide a sound basis for modular software construction. Because parallel program execution on computer systems with many processing units is an essential part of contemporary computing environments, the expression of parallelism and modular software construction using components involving parallel operations is included in this treatment. The conclusion is that it is possible to build computer systems that implement a PXM within which any parallel program may be used, unmodified, as a component for building more substantial parallel programs.

11.
Computers & Chemistry, 1989, 13(3): 277–290
Friendly software is defined as computer programs that can be run interactively and profitably without incurring fatal errors that terminate execution. We discuss a modular library and a simple program structure providing for input validation, help, status, overview and change commands in a standard FORTRAN environment. As an illustration we show how program WONPSE [Computers Chem. 13, 201 (1989)] was easily transformed from a conventional program into a friendly tool for frontier research and student training on N-electron symmetry-eigenfunctions.
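The "friendly" structure described, validated input plus help, status and change commands, can be sketched as a simple interactive command loop; the Python below is purely illustrative and is unrelated to the original FORTRAN library or to WONPSE's real options.

```python
# Sketch of a "friendly" interactive structure: validated input plus
# help/status/change commands. Illustrative only; not the FORTRAN library.

def read_positive_int(prompt):
    """Keep asking until the user supplies valid input instead of crashing."""
    while True:
        raw = input(prompt)
        if raw.isdigit() and int(raw) > 0:
            return int(raw)
        print("Please enter a positive integer.")

settings = {"electrons": 2}   # hypothetical program parameters

while True:
    command = input("command (help/status/change/run/quit): ").strip().lower()
    if command == "help":
        print("help: this text; status: show settings; change: edit a value;")
        print("run: execute the calculation; quit: exit cleanly")
    elif command == "status":
        print(settings)
    elif command == "change":
        settings["electrons"] = read_positive_int("number of electrons: ")
    elif command == "run":
        print("running with", settings)
    elif command == "quit":
        break
    else:
        print("Unknown command; type 'help'.")
```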

12.
Program comprehension research can be characterized by both the theories that provide rich explanations about how programmers understand software, as well as the tools that are used to assist in comprehension tasks. In this paper, I review some of the key cognitive theories of program comprehension that have emerged over the past thirty years. Using these theories as a canvas, I then explore how tools that are commonly used today have evolved to support program comprehension. Specifically, I discuss how the theories and tools are related and reflect on the research methods that were used to construct the theories and evaluate the tools. The reviewed theories and tools are distinguished according to human characteristics, program characteristics, and the context for the various comprehension tasks. Finally, I predict how these characteristics will change in the future and speculate on how a number of important research directions could lead to improvements in program comprehension tool development and research methods. Dr. Margaret-Anne Storey is an associate professor of computer science at the University of Victoria, a Visiting Scientist at the IBM Centre for Advanced Studies in Toronto and a Canada Research Chair in Human Computer Interaction for Software Engineering. Her research passion is to understand how technology can help people explore, understand and share complex information and knowledge. She applies and evaluates techniques from knowledge engineering and visual interface design to applications such as reverse engineering of legacy software, medical ontology development, digital image management and learning in web-based environments. She is also an educator and enjoys the challenges of teaching programming to novice programmers.

13.

An experiment was conducted to examine skill differences in the control strategy for computer program comprehension. A computer program along with its hierarchy of program plans was provided to 10 intermediate and 10 novice computer programmers. Each program plan is known as a program segment to the subjects. A random list of plan goals was also provided to the subjects. The subjects were asked to match each program segment with its goal while they were comprehending the program. Several measures of the subjects' performance and control strategy were collected and analysed. The results indicated the use of an overall top-down strategy by both intermediates and novices for program comprehension. Novices' control strategies involved more opportunistic elements than experts' in the overall top-down process of program comprehension. Those differences in the control strategy between intermediates and novices result in better performance by intermediates than by novices.

14.
This paper investigates the interplay between high level debugging strategies and low level tactics in the context of a multi-representation software development environment (SDE). It investigates three questions. 1. How do programmers integrate debugging strategies and tactics when working with SDEs? 2. What is the relationship between verbal ability, level of graphical literacy and debugging (task) performance? 3. How do modality and perspective influence debugging strategy and deployment of tactics? The paper extends the work of Katz and Anderson [1988. Debugging: an analysis of bug location strategies. Human-Computer Interaction 3, 359–399] and others in terms of identifying high level debugging strategies, in this case when working with SDEs. It also describes how programmers of different backgrounds and degrees of experience make differential use of the multiple sources of information typically available in a software debugging environment. Individual difference measures considered among the participants were their programming experience and their knowledge of external representation formalisms. The debugging environment enabled the participants, computer science students, to view the execution of a program in steps and provided them with concurrently displayed, adjacent, multiple and linked programming representations. These representations comprised the program code, two visualisations of the program and its output. The two visualisations of the program were available in either a largely textual format or a largely graphical format so as to track interactions between experience and low level mode-specific tactics, for example. The results suggest that (i) additionally to deploying debugging strategies similar to those reported in the literature, participants also employed a strategy specific to SDEs, following execution, (ii) verbal ability was not correlated with debugging performance, (iii) knowledge of external representation formalisms was as important as programming experience to succeed in the debugging task, and (iv) participants with greater experience of both programming and external representation formalisms, unlike the less experienced, were able to modify their debugging strategies and tactics effectively when working under different format conditions (i.e. when working with either largely graphical or largely textual visualisations) in order to maintain their high debugging accuracy level.

15.
GOP is a graph‐oriented programming model which aims at providing high‐level abstractions for configuring and programming cooperative parallel processes. With GOP, the programmer can configure the logical structure of a parallel/distributed program by constructing a logical graph to represent the communication and synchronization between the local programs in a distributed processing environment. This paper describes a visual programming environment, called VisualGOP, for the design, coding, and execution of GOP programs. VisualGOP applies visual techniques to provide the programmer with automated and intelligent assistance throughout the program design and construction process. It provides a graphical interface with support for interactive graph drawing and editing, visual programming functions and automation facilities for program mapping and execution. VisualGOP is a generic programming environment independent of programming languages and platforms. GOP programs constructed under VisualGOP can run in heterogeneous parallel/distributed systems. Copyright © 2005 John Wiley & Sons, Ltd.
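GOP's central abstraction, a logical graph whose nodes are local programs and whose edges are communication links, can be pictured with the small sketch below; the class and method names are invented for illustration and are not VisualGOP's actual API.

```python
# Hedged sketch of GOP's central abstraction: a logical graph in which nodes
# stand for local programs and edges for their communication links.
# All names here are illustrative, not VisualGOP's API.

class LogicalGraph:
    def __init__(self):
        self.nodes = {}            # node name -> local program (a callable)
        self.edges = []            # (sender, receiver) communication links

    def add_node(self, name, local_program):
        self.nodes[name] = local_program

    def connect(self, sender, receiver):
        self.edges.append((sender, receiver))

    def neighbours(self, name):
        return [dst for src, dst in self.edges if src == name]

graph = LogicalGraph()
graph.add_node("producer", lambda: "data")
graph.add_node("consumer", lambda msg: print("received", msg))
graph.connect("producer", "consumer")

# A trivial "execution": the producer sends along its outgoing edges.
for target in graph.neighbours("producer"):
    graph.nodes[target](graph.nodes["producer"]())
```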

16.
Although automobile navigation systems have conventionally utilised visual and auditory modalities to deliver information, recent advancement in tactile technology has introduced the possibility of integrating tactile actuators into navigation systems. To empirically test the effect of tactile navigational cues, we conducted simulated driving experiments in which participants (N = 96) were exposed to four sets of information modalities (visual, visual + auditory, visual + tactile, and visual + auditory + tactile). The results indicate that multisensory systems including tactile cues allow participants to respond faster to unexpected road events and impart greater satisfaction with the overall driving experience. The implications and limitations of these findings are discussed.

17.
This research examined program design methodologies which claim to improve the design process by providing strategies to programmers for structuring solutions to computer problems. In this experiment, professional programmers were provided with the specifications for each of three non-trivial problems and asked to produce pseudo-code for each specification according to the principles of a particular design methodology. The measures collected were the time to design and code, percent complete, and complexity, as measured by several metrics. These data were used to develop profiles of the solutions produced by different methodologies and to develop comparisons among the various methodologies. These differences are discussed in light of their impact on the comprehensibility, reliability, and maintainability of the programs produced.

18.
Programmers have always been curious about what their programs are doing while they are executing, especially when the behavior is not what they are expecting. Since program execution is intricate and involved, visualization has long been used to provide the programmer with appropriate insights into program execution. This paper looks at the evolution of on-line visual representations of executing programs, showing how they have moved from concrete representations of relatively small programs to abstract representations of larger systems. Based on this examination, we describe the challenges implicit in future execution visualizations and methodologies that can meet these challenges.

19.
Design patterns are recognized in the software engineering community as useful solutions to recurring design problems that improve the quality of programs. They are more and more used by developers in the design and implementation of their programs. Therefore, the visualization of the design patterns used in a program could be useful to efficiently understand how it works. Currently, a common representation to visualize design patterns is the UML collaboration notation. Previous work noticed some limitations in the UML representation and proposed new representations to tackle these limitations. However, none of these pieces of work conducted empirical studies to compare their new representations with the UML representation. We designed and conducted an empirical study to collect data on the performance of developers on basic tasks related to design pattern comprehension (i.e., identifying composition, role, participation) to evaluate the impact of three visual representations and to compare them with the UML one. We used eye-trackers to measure the developers’ effort during the execution of the study. Collected data and their analyses show that stereotype-enhanced UML diagrams are more efficient for identifying composition and role than the UML collaboration notation. The UML representation and the pattern-enhanced class diagrams are more efficient for locating the classes participating in a design pattern (i.e., identifying participation).

20.
Visualcode is a visual notation that uses coloured expressions and graphical environments to describe the execution of Scheme programs. RainbowScheme is a program visualization system which is designed to produce visualcode representations of step-by-step execution of Scheme programs. This article presents a new approach of teaching recursion using visualcode and RainbowScheme. Experimental evaluation indicates that viewing RainbowScheme-produced visual traces and requiring students to use visualcode to generate visual evaluation steps of recursive programs can enhance the learners' ability to evaluate recursive programs as well as to solve recursive programming problems.
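RainbowScheme traces Scheme programs; purely as an illustration of the underlying teaching idea, the Python sketch below instruments a recursive function so that each evaluation step is printed as calls expand and their values flow back up.

```python
# Illustration of the teaching idea behind step-by-step evaluation traces:
# a recursive function instrumented to show each step. (RainbowScheme itself
# traces Scheme programs; this Python sketch only mirrors the concept.)

def factorial(n, depth=0):
    indent = "  " * depth
    print(f"{indent}(factorial {n})")              # entering a recursive call
    result = 1 if n <= 1 else n * factorial(n - 1, depth + 1)
    print(f"{indent}(factorial {n}) => {result}")  # value flowing back up
    return result

factorial(4)
```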
