Similar documents
Found 20 similar documents (search time: 31 ms)
1.
Thirty advisory interactions between computer system 'help desk' consultants and their clients were transcribed and analysed as part of a project to determine the behavioural requirements for intelligent on-line help facilities. An interesting property of these interactions is that the advice was frequently modified in response to verification requests: questions (often syntactically implicit) which contain presuppositional statements that are partial answers to the asserted query. Designs for intelligent help facilities might exploit this finding by supporting the verification strategy and attempting to extract and use the presupposed statements in these questions to generate advice.

2.
Explanation is an important capability for usable intelligent systems, including intelligent agents and cognitive models embedded within simulations and other decision support systems. Explanation facilities help users understand how and why an intelligent system possesses a given structure and set of behaviors. Prior research has resulted in a number of approaches to provide explanation capabilities and identified some significant challenges. We describe designs that can be reused to create intelligent agents capable of explaining themselves. The designs include ways to provide ontological, mechanistic, and operational explanations. These designs inscribe lessons learned from prior research and provide guidance for incorporating explanation facilities into intelligent systems. The designs are derived from both prior research on explanation tool design and from the empirical study reported here on the questions users ask when working with an intelligent system. We demonstrate the use of these designs through examples implemented using the Herbal high-level cognitive modeling language. These designs can help build better agents—they support creating more usable and more affordable intelligent agents by encapsulating prior knowledge about how to generate explanations in concise representations that can be instantiated or adapted by agent developers.

3.
Practical applications of constraint programming
Mark Wallace, Constraints, 1996, 1(1-2): 139-168
Constraint programming offers facilities for problem modelling, constraint propagation and search. This paper discusses the resulting benefits for practical applications which exploit these facilities. The modelling facilities are particularly exploited in applications to verification, both of circuits and of real-time control systems. The propagation facilities are exploited in applications involving user feedback and graphical interfaces. The search facilities are exploited in applications such as scheduling and resource allocation, which involve combinatorial problems. The paper surveys applications under each of these three headings.
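The three facilities the paper surveys can be illustrated with a toy finite-domain solver; this sketch is not from the paper, and the variables, domains and constraints are invented for the example.

```python
# Toy finite-domain solver: modelling (variables, domains, constraints),
# propagation (a simple forward check that rejects a partial assignment as
# soon as a fully bound constraint fails), and backtracking search.

def solve(domains, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        trial = {**assignment, var: value}
        # Propagation step: check every constraint whose scope is fully bound.
        if all(pred(trial) for scope, pred in constraints
               if set(scope) <= set(trial)):
            found = solve(domains, constraints, trial)
            if found is not None:
                return found
    return None

# Model: x + y = 10 and x < y, with x, y drawn from 0..9.
constraints = [
    (("x", "y"), lambda a: a["x"] + a["y"] == 10),
    (("x", "y"), lambda a: a["x"] < a["y"]),
]
print(solve({"x": range(10), "y": range(10)}, constraints))  # {'x': 1, 'y': 9}
```

Real constraint programming systems interleave much stronger propagation (domain pruning) with search, but the division of labour among the three facilities is the same.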

4.
Help for users of Information Processing Systems (IPSs) is typically based upon the presentation of pre-stored texts written by the system designers for predictable situations. Though advances in user interface technology have eased the process of requesting advice, current on-line help facilities remain tied to a back-end of canned answers, spooled onto users' screens to describe queried facilities.

This paper argues that the combination of a user's knowledge of an application and the particular states which a system can assume requires different answers for the large number of possible situations. Thus, a marriage of techniques from the fields of text generation and Intelligent Help Systems research is needed to construct responses dynamically. Furthermore, it is claimed that the help texts should attempt not only to address the immediate needs of the user, but also to facilitate learning of the system by incorporating a variety of educational techniques to specialise answers in given contexts.

A computational scheme for help text generation based on schemas of rhetorical predicates is presented. Using knowledge of application programs and their users, it is possible to provide a variety of answers in response to a number of questions. The approach uses object-oriented techniques to combine different information from a variety of sources in a flexible manner, yielding responses which are appropriate to the state of the IPS and to the user's level of knowledge.

Modifications to the scheme which resulted from its evaluation in the EUROHELP project are described, together with ongoing collaborative work and further research developments.

Colin Tattersall is a Research Fellow at the Computer Based Learning Unit. He completed his B.Sc. in Computational Studies at Leeds University in 1986, then joined the CBL Unit on an ESRC studentship linked to ESPRIT project P280 EUROHELP. Within the project, his work related to knowledge representation and text generation for intelligent help systems, leading to a Ph.D. in mid-1990 entitled Question-answering and explanation in online help systems: a knowledge-based approach. The work was followed by a year-long fellowship, funded by ICL, to investigate the commercial viability of advanced help system architectures. This paper reflects results and experience gained from both research and development of intelligent help systems.

5.
Context: Formal methods, and particularly formal verification, are becoming more feasible to use in the engineering of large, highly dependable software-based systems, but so far have had little rigorous empirical study. Their artefacts and activities are different to those of conventional software engineering, and the nature and drivers of productivity for formal methods are not yet understood.

Objective: To develop a research agenda for the empirical study of productivity in software projects using formal methods, and in particular formal verification. To this end we aim to identify research questions about productivity in formal methods, and to survey existing literature on these questions to establish their face validity. We further aim to identify metrics and data sources relevant to these questions.

Method: We define a space of GQM goals as an investigative framework, focusing on productivity from the perspective of managers of projects using formal methods. We then derive questions for these goals using Easterbrook et al.'s (2008) taxonomy of research questions. To establish face validity, we document the literature to date that reflects on these questions and then explore possible metrics related to them. Extensive use is made of literature concerning the L4.verified project completed within NICTA, as it is one of the few projects to achieve code-level formal verification for a large-scale, industrially deployed software system.

Results: We identify more than thirty research questions on the topic in need of investigation. These questions arise not just out of the new type of project context, but also because of the different artefacts and activities in formal methods projects. Prior literature supports the need for research on the questions in our catalogue, but as yet provides little evidence about them. Metrics are identified that would be needed to investigate the questions. Thus, although at the highest level concepts such as size, effort and rework are common to all software projects, in the case of formal methods, measurement at the micro level for these concepts will exhibit significant differences.

Conclusions: Empirical software engineering for formal methods is a large, open research field. For the empirical software engineering community our paper provides a view into the entities and research questions in this domain. For the formal methods community we identify some of the benefits that empirical studies could bring to the effective management of large formal methods projects, and list some basic metrics and data sources that could support such studies. Understanding productivity is important in its own right for efficient software engineering practice, but can also support future research on the cost-effectiveness of formal methods and on the emerging field of Proof Engineering.

6.
Abstract

We describe the Smalltalk Gurus, components of the MoleHill intelligent tutoring system for Smalltalk programming. The Gurus offer help on plans for achieving goals in the Smalltalk environment, as well as remediation for students' incorrect and less-than-optimal plans. The Gurus' assistance is provided via the multimodal media of animation and voice-over audio. MoleHill employs multiple Gurus to deliver advice and instruction concerning disparate information domains, thus facilitating learners' cognitive organization and assimilation of new knowledge and information. We have labelled the approach instantiated by the Smalltalk Gurus the guru instructional model, one which is generally applicable to computer-based advisory systems.

7.
Earlier work suggests that program transformations can simplify program verification. A given program containing complex language features is transformed into a semantically equivalent program containing only simpler language features. The transformed program is proven using a set of proof rules for only the simpler features. That approach was illustrated by transforming a given program that may contain multiple-level escape statements within nested loops into an equivalent program that contains no escape statements. This paper gives additional transformations, which map a given program that may contain multiple-level escape statements to a semantically equivalent program (TP) that contains only single-level escape statements. The proof of TP uses proof rules for single-level escape statements, or the earlier transformations further map TP to a program with no escape statements, whose proof uses proof rules for loops without escape statements. This paper also discusses escape statements where the number of levels is determined at run-time. Copyright © 2002 John Wiley & Sons, Ltd.
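The flavour of such a transformation can be sketched in Python, which offers only single-level break and therefore forces exactly this rewrite by hand; the function and data below are invented for illustration and are not the paper's transformations.

```python
# A hypothetical two-level escape, "for row: for x: if x == target: break 2",
# rewritten into single-level escapes driven by a flag variable. Each break
# exits exactly one loop; the flag carries the escape across loop boundaries.

def find_pair(matrix, target):
    found = None
    escape_outer = False
    for i, row in enumerate(matrix):
        for j, x in enumerate(row):
            if x == target:
                found = (i, j)
                escape_outer = True
                break          # single-level escape from the inner loop
        if escape_outer:
            break              # single-level escape from the outer loop
    return found

print(find_pair([[1, 2], [3, 4]], 3))  # (1, 0)
```

The semantic equivalence of the two forms is what makes it possible to prove the flag-based version with proof rules that only cover single-level escapes.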

8.
In this paper we investigate how formal software verification systems can be improved by utilising parallel assignment in weakest precondition computations. We begin with an introduction to modern software verification systems. Specifically, we review the method in which software abstractions are built using counterexample-guided abstraction refinement (CEGAR). The classical NP-complete parallel assignment problem is first posed, and then an additional restriction is added to create a special case in which the problem is tractable with an O(n²) algorithm. The parallel assignment problem is then discussed in the context of weakest precondition computations. In this special situation, where statements can be assumed to execute truly concurrently, we show that any sequence of simple assignment statements without function calls can be transformed into an equivalent parallel assignment block. Results of compressing assignment statements into a parallel form with this algorithm are presented for a wide variety of software applications. The proposed algorithms were implemented in the ComFoRT reasoning framework [J. Ivers and N. Sharygina. Overview of ComFoRT: A model checking reasoning framework. Technical Report CMU/SEI-2004-TN-018, Carnegie Mellon Software Engineering Institute, 2004] and used to measure the improvement in the verification of real software systems. The improvement in verification time proved to be significant for many classes of software.
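The payoff of the compression can be sketched concretely; the following is an illustrative Python model of the idea, not the paper's algorithm or the ComFoRT implementation. A sequential chain of simple assignments becomes one parallel block, and the weakest precondition of that block is a single simultaneous substitution into the postcondition.

```python
def run_sequential(env):
    env = dict(env)
    env["t"] = env["x"]   # t := x
    env["x"] = env["y"]   # x := y
    env["y"] = env["t"]   # y := t
    return env

def run_parallel(env):
    # The same chain compressed to one parallel block  t, x, y := x, y, x :
    # every right-hand side is evaluated against the initial state before
    # any variable is written.
    return {**env, "t": env["x"], "x": env["y"], "y": env["x"]}

def wp_parallel(post, substitution):
    # wp(x1,...,xn := e1,...,en, Q) = Q[e1/x1, ..., en/en] -- one
    # simultaneous substitution, no matter how long the original
    # sequential chain was.
    return lambda env: post({**env, **{v: e(env) for v, e in substitution.items()}})

state = {"x": 1, "y": 2, "t": 0}
assert run_sequential(state) == run_parallel(state)

# Postcondition x < y; the wp of the swap  x, y := y, x  is y < x.
post = lambda e: e["x"] < e["y"]
pre = wp_parallel(post, {"x": lambda e: e["y"], "y": lambda e: e["x"]})
print(pre({"x": 5, "y": 3}))  # True: after the swap, x = 3 < y = 5
```

One substitution replacing a chain of per-statement wp steps is where the verification-time saving comes from.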

9.
Research has shown that computer games and other virtual environments can support significant learning gains because they allow young people to explore complex concepts in simulated form. However, in complex problem-solving domains, complex thinking is learned not only by taking action, but also with the aid of mentors who provide guidance in the form of questions, instructions, advice, feedback and encouragement. In this study, we examine one context of such mentoring to understand the impact of replacing face-to-face interactions between mentors and students with virtual, chat-based interactions. We use pre- and post-measures of learning and a post-measure of engagement, as well as epistemic network analysis (ENA), a novel quantitative method, to examine student and mentor discourse. Our results suggest that mentoring via online chat can be as effective as mentoring face-to-face in appropriately structured contexts, and that ENA may be a useful tool for assessing student and mentor discourse in the context of learning interactions.

10.
In this paper, we propose an approach to the construction of an intelligent system that handles various domain information provided on the Internet. The intelligent system adopts statistical decision-making as its reasoning framework and automatically constructs probabilistic knowledge, required for its decision-making, from Web pages. This construction of probabilistic knowledge is carried out using a probability interpretation idea that transforms statements in Web pages into constraints on the subjective probabilities of a person who describes the statements. In this paper, we particularly focus on describing the basic idea of our approach and on discussing difficulties in our approach, including our perspective.

Kazunori Fujimoto: He received a bachelor's degree from the Department of Electrical Engineering, Doshisha University, Japan, in 1989, and a master's degree from the Division of Applied Systems Science, Kyoto University, Japan, in 1992. He then joined NTT Electrical Communications Laboratories, Tokyo, Japan, and has been engaged in research on Artificial Intelligence. He is currently interested in probabilistic reasoning, knowledge acquisition, and especially in quantitative approaches to research in human cognition and behavior. Mr. Fujimoto is a member of the Decision Analysis Society, The Behaviormetric Society of Japan, the Japanese Society for Artificial Intelligence, the Information Processing Society of Japan, and the Japanese Society for Fuzzy Theory and Systems.

Kazumitsu Matsuzawa: He received B.S. and M.S. degrees in electronic engineering from Tokyo Institute of Technology, Tokyo, Japan, in 1975 and 1977. He then joined NTT Electrical Communications Laboratories, Tokyo, Japan, and has been engaged in research on computer architecture and the design of LSI. He is currently concerned with AI technology. Mr. Matsuzawa is a member of The Institute of Electronics, Information and Communication Engineers, the Information Processing Society of Japan, the Japanese Society for Artificial Intelligence, and the Japanese Society for Fuzzy Theory and Systems.

11.
ABSTRACT

New technologies, data, and algorithms impact nearly every aspect of daily life. Unfortunately, many of these algorithms operate like black boxes and cannot explain their results even to their programmers, let alone to end-users. As more and more tasks are delegated to such intelligent systems and the nature of user interactions with them becomes increasingly complex, it is important to understand the amount of trust that a user is willing to place in such systems. However, attempts at quantifying trust have either been limited in their scope or not empirically thorough. To address this, we build on prior work which empirically modelled trust in user-technology interactions and describe the development and evolution of a human-computer trust scale. We present the results of two studies (N = 118 and N = 183) which were undertaken to assess the reliability and validity of the proposed scale. Our study contributes to the literature by (a) developing a multi-dimensional scale to assess user trust in HCI and (b) being the first study to use the concept of design fiction and future scenarios to study trust.

12.

In the experiment presented in this paper the Elaboration Likelihood Model (ELM), a social psychological theory of persuasion, was applied to explain why users sometimes agree with the incorrect advice of an expert system. Subjects who always agreed with the expert system's incorrect advice (n = 36) experienced less mental effort, scored lower on recall questions, and evaluated the cases as being easier than subjects who disagreed once or more with the expert system (n = 35). These results show that subjects who agreed with the expert system hardly studied the advice but just trusted the expert system. This is in agreement with the ELM. The experiment also covers an investigation into the factors that moderate user agreement. The results have serious implications for the use of expert systems.

13.
In this paper, we emphasize the importance of efficient debugging in formal verification and present capabilities that we have developed in order to aid debugging in Intel’s Formal Verification Environment. We have given the name “Counter-Example Wizard” to the bundle of capabilities that we have developed to address the needs of the verification engineer in the context of counter-example diagnosis and rectification. The novel features of the Counter-Example Wizard are the multi-value counter-example annotation, constraint-based debugging, and multiple counter-example generation mechanisms. Our experience with the verification of real-life Intel designs shows that these capabilities complement one another and can help the verification engineer diagnose and fix a reported failure. We use real-life verification cases to illustrate how our system solution can significantly reduce the time spent in the loop of model checking, specification, and design modification. Published online: 21 February 2003

14.
DNA computing has been a hot research topic in recent years. Formalization and verification using theories from Computer Science (π-calculus, bioambients, κ-calculus, etc.) attract attention because they can help prove and predict, to a certain degree, various kinds of biological processes. Combining these two aspects, formal methods can be used to verify algorithms in DNA computing, including basic arithmetic operations if they are to be included in a DNA chip. In this paper, we first introduce a newly-designed algorithm for solving binary addition with DNA, which contributes to a unit in a DNA computer processor, and then formalize the algorithm in κ-calculus (a formal method well suited to describing protein interactions) to demonstrate its correctness, with an illustrative example. Finally, some discussion on the described model is given, along with a few possible directions for future improvement.

15.
The importance that an understanding of time plays in many problem-solving situations requires that intelligent programs be equipped with extensive temporal knowledge. This paper discusses one route to that goal, namely the construction of a time specialist, a program knowledgeable about time in general which can be used by a higher-level program to deal with the temporal aspects of its problem-solving. Some examples are given of such a use of a time specialist. The principal issues addressed in this paper are how the time specialist organizes statements involving temporal references, checks them for consistency, and uses them in answering questions.
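A minimal sketch of what such a time specialist might do, assuming a simple before/after representation; the class and its API are invented for illustration and are not the paper's program.

```python
# A toy "time specialist": it stores before/after statements, rejects
# inconsistent ones, and answers ordering questions by transitive closure.

class TimeSpecialist:
    def __init__(self):
        self.after = {}   # event -> set of events asserted to come later

    def _later(self, event):
        # Everything reachable via "before" edges, i.e. known to be later.
        seen, stack = set(), [event]
        while stack:
            for nxt in self.after.get(stack.pop(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    def assert_before(self, a, b):
        # Consistency check: reject "a before b" if b already precedes a.
        if a in self._later(b):
            raise ValueError(f"inconsistent: {b} already precedes {a}")
        self.after.setdefault(a, set()).add(b)

    def is_before(self, a, b):
        return b in self._later(a)

ts = TimeSpecialist()
ts.assert_before("lunch", "meeting")
ts.assert_before("meeting", "dinner")
print(ts.is_before("lunch", "dinner"))  # True, by transitivity
```

A real time specialist would also handle metric constraints (durations, dates) and fuzzier relations, but the organize/check/answer division mirrors the one described in the abstract.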

16.
Academic research has produced many model-based specification and analysis techniques; however, most organisations continue to document requirements as textual statements. To help bridge this gap between academic research and requirements practice, this paper reports an extension to the RESCUE process in which patterns for generating requirements statements from i* system models were manually applied to i* models developed for a complex air traffic control system. The paper reports the results of this application and describes them with examples, the benefits of the approach to the project, and ongoing research to implement these patterns in the REDEPEND modelling tool to make requirements engineers more productive. We review similar work on requirements modelling and expression, and compare our work to it to demonstrate the proposed advance in the state of the art. Finally, the paper discusses future uses of requirements generation from model patterns in RESCUE.

17.
Context: The automated identification of code fragments characterized by common design flaws (or “code smells”) that can be handled through refactoring fosters refactoring activities, especially in large code bases where multiple developers are engaged without a detailed view of the whole system. Automated refactoring to design patterns enables significant contributions to design quality even from developers with little experience in the use of the required patterns.

Objective: This work targets the automated identification of refactoring opportunities to the Strategy design pattern and the elimination through polymorphism of the respective “code smells” that are related to extensive use of complex conditional statements.

Method: An algorithm is introduced for the automated identification of refactoring opportunities to the Strategy design pattern. Suggested refactorings comprise conditional statements that are characterized by analogies to the Strategy design pattern, in terms of the purpose and selection mode of strategies. Moreover, this work specifies the procedure for refactoring the identified conditional statements to Strategy. For special cases of these statements, a technique is proposed for total replacement of conditional logic with method calls on appropriate concrete Strategy instances. The identification algorithm and the refactoring procedure are implemented and integrated in the JDeodorant Eclipse plug-in. The method is evaluated on a set of Java projects, in terms of the quality of the suggested refactorings and run-time efficiency. The relevance of the identified refactoring opportunities is verified by expert software engineers.

Results: The identification algorithm recalled, from the projects used during evaluation, many of the refactoring candidates that were identified by the expert software engineers. Its execution time on projects of varying size confirmed the run-time efficiency of the method.

Conclusion: The proposed method for automated refactoring to Strategy contributes to the simplification of conditional statements. Moreover, it enhances system extensibility through the Strategy design pattern.
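The core of the refactoring can be shown as a before/after pair; the names are invented for illustration, not JDeodorant's generated code. The conditional that selects a behaviour becomes a lookup of a concrete Strategy whose method replaces the branch bodies.

```python
# Before: the "code smell" -- conditional logic selecting a behaviour.
def shipping_cost_conditional(method, weight):
    if method == "ground":
        return 5.0 + 0.5 * weight
    elif method == "air":
        return 10.0 + 1.5 * weight
    raise ValueError(method)

# After refactoring to Strategy: each branch becomes a concrete strategy,
# and the conditional is replaced by a lookup plus a polymorphic call.
class Ground:
    def cost(self, weight):
        return 5.0 + 0.5 * weight

class Air:
    def cost(self, weight):
        return 10.0 + 1.5 * weight

STRATEGIES = {"ground": Ground(), "air": Air()}

def shipping_cost_strategy(method, weight):
    return STRATEGIES[method].cost(weight)

print(shipping_cost_strategy("air", 10))  # 25.0, same as the conditional form
```

Adding a new shipping method now means registering a new strategy class rather than editing (and re-verifying) every conditional, which is the extensibility gain the conclusion refers to.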

18.
Context: A considerable portion of today's software systems is deployed in the embedded control domain. Embedded control software deals with controlling a physical system, and as such, models of physical characteristics become part of the embedded control software.

Objective: Due to the evolution of system properties and increasing complexity, faults can be left undetected in these models of physical characteristics. Therefore, their accuracy must be verified at runtime. Traditional runtime verification techniques that are based on states/events in software execution are inadequate in this case: the behavior suggested by models of physical characteristics cannot be mapped to behavioral properties of software. Moreover, implementation in a general-purpose programming language makes these models hard to locate and verify. Therefore, this paper proposes a novel approach to runtime verification of models of physical characteristics in embedded control software.

Method: The development of an approach for runtime verification of models of physical characteristics, and the application of the approach to two industrial case studies from the printing systems domain.

Results: This paper presents a novel approach to specify models of physical characteristics using a domain-specific language, to define monitors that detect inconsistencies by exploiting redundancy in these models, and to realize these monitors using an aspect-oriented approach. We complement runtime verification with static analysis to verify the composition of domain-specific models with the control software written in a general-purpose language.

Conclusions: The presented approach enables runtime verification of implemented models of physical characteristics to detect inconsistencies in these models, as well as broken hardware components and wear and tear of hardware in the physical system. The application of declarative aspect-oriented techniques to realize runtime verification monitors increases modularity and provides the ability to statically verify this realization. The complementary static and runtime verification techniques increase the reliability of embedded control software.
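The redundancy-based monitoring idea can be sketched as follows, with two invented models of the same physical quantity; this is plain Python for illustration, not the paper's domain-specific language or its aspect-oriented realization.

```python
# Two redundant models of belt speed: one derived from motor rpm, one from
# encoder counts. A runtime monitor compares them and flags inconsistency.

def speed_from_rpm(rpm, pulley_circumference_m=0.2):
    return rpm / 60.0 * pulley_circumference_m      # m/s

def speed_from_encoder(counts_per_s, counts_per_metre=500.0):
    return counts_per_s / counts_per_metre          # m/s

def make_monitor(model_a, model_b, tolerance):
    """Return a check that flags an inconsistency (model fault, broken
    sensor, or mechanical wear) when the redundant models disagree."""
    def consistent(a_inputs, b_inputs):
        return abs(model_a(*a_inputs) - model_b(*b_inputs)) <= tolerance
    return consistent

check = make_monitor(speed_from_rpm, speed_from_encoder, tolerance=0.05)
print(check((300,), (500,)))   # True: both models give 1.0 m/s
print(check((300,), (800,)))   # False: models disagree, so something is wrong
```

The point of the redundancy is that neither model has to be trusted individually: a disagreement beyond tolerance signals a fault somewhere in the model/hardware chain, which is exactly what state/event-based runtime verification cannot express.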

19.
20.
Ergonomics, 2012, 55(12): 1499-1514
Abstract

Although it is recognised that face-to-face interactions are important for sharing interests and (new) knowledge, it remains unknown how and where students and university employees interact in academic buildings. Therefore, the aim of this study is to analyse the location choice for face-to-face interactions in an academic building, including several personal- and interaction characteristics. An Experience Sampling Method (ESM) was used to collect data on 643 face-to-face interactions during two weeks in the Flux building at Eindhoven University of Technology, the Netherlands. In general, students more often interacted in meeting rooms than teaching staff, and support staff interacted less in eat/drink areas and the hallways than other users. Unexpectedly, some of the lectures took place outside of traditional project-/lecture space. Real estate managers of university campuses could use these results to create better interactive work environments that stimulate face-to-face interactions among employees and students of different departments.

Practitioner Summary: Based on longitudinal data on FTF interactions among students and employees in an academic building, results showed that FTF interaction characteristics, compared to personal characteristics, are most important for explaining the location choice of interactions. These insights could help to design academic work environments that optimise the support of interactions.

Abbreviations: ABO: activity-based office; ANOVA: analyses of variance; ESM: experience sampling method; FTF: face-to-face; HR: human resources; MMNL: mixed multinomial logit model; NewWoW: new ways of working


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)