1.
Conversation is an essential component of social behavior, one of the primary means by which humans express intentions, beliefs, emotions, attitudes, and personality. The development of systems that support natural conversational interaction has therefore been a long-term research goal. In natural conversation, humans adapt to one another across many levels of utterance production, via processes variously described as linguistic style matching, entrainment, alignment, audience design, and accommodation. A number of recent studies strongly suggest that dialogue systems that adapted to the user in a similar way would be more effective. A major research challenge in this area, however, is the ability to dynamically generate user-adaptive utterance variations. As part of a personality-based user adaptation framework, this article describes PERSONAGE, a highly parameterizable generator whose large number of parameters supports adaptation to a user's linguistic style. We show how results from psycholinguistic studies documenting the linguistic reflexes of personality can be applied systematically to develop models that control PERSONAGE's parameters and produce utterances matching particular personality profiles. When we evaluate these outputs with human judges, the results indicate that humans perceive the personality of system utterances in the way the system intended.
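To make the parameter-based adaptation concrete, here is a minimal Python sketch of how trait scores might be mapped to generation parameters. The parameter names, weights, and restaurant-recommendation example are illustrative assumptions only; PERSONAGE's actual parameter set is far larger and is grounded in the psycholinguistic findings the authors cite.

```python
# Illustrative sketch of a trait-to-parameter mapping for a PERSONAGE-style
# generator. Parameter names and weights are hypothetical; the real system
# exposes many more parameters derived from psycholinguistic studies.

def personality_to_params(extraversion: float) -> dict:
    """Map an extraversion score in [0, 1] to generation parameters."""
    return {
        "verbosity": extraversion,          # extraverts produce more content
        "exclamation_rate": 0.5 * extraversion,
        "hedge_rate": 1.0 - extraversion,   # introverts hedge more often
    }

def generate(recommendation: str, params: dict) -> str:
    """Render a recommendation utterance under the given parameters."""
    utterance = f"{recommendation} is a good choice"
    if params["hedge_rate"] > 0.5:
        utterance = "I guess " + utterance   # hedged, tentative phrasing
    utterance += "!" if params["exclamation_rate"] > 0.3 else "."
    if params["verbosity"] > 0.7:
        utterance += " It has great food and friendly service."
    return utterance

print(generate("Chanpen Thai", personality_to_params(0.9)))  # extravert style
print(generate("Chanpen Thai", personality_to_params(0.1)))  # introvert style
```

Running the sketch produces an enthusiastic, verbose utterance for the high-extraversion profile and a short, hedged one for the low-extraversion profile, mirroring the kind of contrast the human judges were asked to perceive.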
2.
Natural language is a powerful medium for interacting with users, and sophisticated computer systems using natural language are becoming more prevalent. Just as human speakers show an essential, inbuilt responsiveness to their hearers, computer systems must tailor their utterances to users. Recognizing this, researchers devised user models and strategies for exploiting them, so that systems could produce the best answer for a particular user. Because these efforts were largely devoted to investigating how a user model could be exploited to produce better responses, systems employing them typically assumed that a detailed and correct model of the user was available a priori, and that the information needed to generate appropriate responses was included in that model. In practice, however, the completeness and accuracy of a user model cannot be guaranteed. Unless systems can compensate for incorrect or incomplete user models, the impracticality of building user models will prevent much of the work on tailoring from being successfully applied in real systems. In this paper, we argue that one way for a system to compensate for an unreliable user model is to react to feedback from users about the suitability of the texts it produces. We also discuss how such a capability can alleviate some of the burden now placed on user modeling. Finally, we present a text generation system that employs whatever information is available in its user model in an attempt to produce satisfactory texts, but is also capable of responding to the user's follow-up questions about the texts it produces.

Dr. Johanna D. Moore holds interdisciplinary appointments as an Assistant Professor of Computer Science and as a Research Scientist at the Learning Research and Development Center at the University of Pittsburgh. Her research interests include natural language generation, discourse, expert system explanation, human-computer interaction, user modeling, intelligent tutoring systems, and knowledge representation. She received her MS and PhD in Computer Science, and her BS in Mathematics and Computer Science, from the University of California at Los Angeles. She is a member of the Cognitive Science Society, ACL, AAAI, ACM, IEEE, and Phi Beta Kappa. Readers can reach Dr. Moore at the Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260.

Dr. Cecile Paris is the project leader of the Explainable Expert System project at USC's Information Sciences Institute. She received her PhD and MS in Computer Science from Columbia University (New York) and her bachelor's degree from the University of California at Berkeley. Her research interests include natural language generation and user modeling, discourse, expert system explanation, human-computer interaction, intelligent tutoring systems, machine learning, and knowledge acquisition. At Columbia University, she developed a natural language generation system capable of producing multi-sentential texts tailored to the user's level of expertise about the domain. At ISI, she has been involved in designing a flexible explanation facility that supports dialogue for an expert system shell. Dr. Paris is a member of the Association for Computational Linguistics (ACL), the American Association for Artificial Intelligence (AAAI), the Cognitive Science Society, ACM, IEEE, and Phi Kappa Phi. Readers can reach Dr. Paris at USC/ISI, 4676 Admiralty Way, Marina Del Rey, California, 90292.
3.
A BASIC program to assist the instruction of steady-state enzyme kinetics has been developed for the IBM PC microcomputer. Its purpose is to simulate laboratory experiments, minimizing the time students need to obtain kinetic data from which they deduce kinetic mechanisms and determine kinetic constants of enzyme-catalyzed reactions. The program randomly selects a kinetic scheme from various sequential, ping-pong, and iso reaction sequences, together with values for the kinetic constants. The scheme and constants are hidden from the student, who knows only the stoichiometry of the catalyzed reaction, which can have two or three substrates and products. The student is prompted to enter concentrations of substrates and products; several different concentrations for each substrate and product can be entered in a single experiment. The program then calculates, displays, and (if desired) prints the corresponding initial steady-state velocities. The student can perform as many experiments as needed to determine the kinetic mechanism and calculate values for the kinetic constants.
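The published program is in BASIC; as a hedged illustration of the kind of velocity calculation such a simulator performs, here is a Python sketch using the standard initial-velocity rate law for a sequential bi-bi mechanism in the absence of products. The constant ranges and concentration grids are made-up examples, not values from the program.

```python
# Sketch of a two-substrate kinetics simulator: hidden constants are drawn
# at random, and initial velocities are computed for student-chosen
# substrate concentrations (sequential bi-bi rate law, no products).

import random

def sequential_bibi_velocity(a, b, vmax, ka, kb, kia):
    """Initial steady-state velocity for a sequential bi-bi mechanism."""
    return vmax * a * b / (kia * kb + kb * a + ka * b + a * b)

# Hidden "unknown" constants the student must recover from the data.
constants = {"vmax": random.uniform(50, 150),
             "ka": random.uniform(0.1, 2.0),
             "kb": random.uniform(0.1, 2.0),
             "kia": random.uniform(0.1, 2.0)}

for a in (0.5, 1.0, 2.0):          # substrate A concentrations (mM)
    for b in (0.5, 1.0, 2.0):      # substrate B concentrations (mM)
        v = sequential_bibi_velocity(a, b, **constants)
        print(f"[A]={a:4.1f} mM  [B]={b:4.1f} mM  v={v:6.2f}")
```

From tables like this (e.g., double-reciprocal plots at fixed co-substrate levels), the student can distinguish sequential from ping-pong patterns and estimate the constants.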
4.
Software and Systems Modeling - Software systems start to include other types of interfaces beyond the “traditional” Graphical-User Interfaces (GUIs). In particular, Conversational User...
6.
We tackle the problem of new users or documents in collaborative filtering. Generalization over users, by grouping them into user groups, is beneficial when a rating must be predicted for a relatively new document having only a few observed ratings. Analogously, generalization over documents improves predictions for new users. We show that when either users or documents, or both, are new, two-way generalization becomes necessary. We demonstrate the benefits of grouping users, grouping documents, and two-way grouping, with artificial data and in two case studies with real data. We introduce a probabilistic latent grouping model for predicting the relevance of a document to a user; the model assumes a latent group structure for both users and items. We compare the model against a state-of-the-art method, the User Rating Profile model, in which only the users have a latent group structure. We compute the posterior of both models by Gibbs sampling. The Two-Way Model predicts relevance more accurately when the target consists of both new documents and new users, because generalization over documents becomes beneficial for new documents while generalization over users is needed for new users.
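A minimal sketch of the two-way grouping idea follows, assuming the group assignments are already known (the paper infers them with Gibbs sampling; that machinery is omitted here). A rating for a (user, item) pair is predicted from the block of ratings shared by the user's group and the item's group, which is why new users and new documents can borrow statistics from their groups.

```python
# Two-way grouping prediction: the rating of (user u, item i) is estimated
# by the mean of the observed ratings in the (user-group, item-group) block.

import numpy as np

def predict(ratings, user_group, item_group, u, i):
    """Predict the rating of user u for item i from the group-block mean.

    ratings: 2-D array with np.nan for unobserved entries.
    user_group, item_group: arrays mapping users/items to group ids.
    """
    rows = user_group == user_group[u]
    cols = item_group == item_group[i]
    block = ratings[np.ix_(rows, cols)]
    return np.nanmean(block)      # ignore unobserved (nan) cells

ratings = np.array([[5, 4, np.nan],
                    [np.nan, 5, 1],
                    [1, np.nan, 5]], dtype=float)
user_group = np.array([0, 0, 1])
item_group = np.array([0, 0, 1])
print(predict(ratings, user_group, item_group, u=1, i=0))  # borrows from user 0
```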
7.
There are many different ways of building software applications and of tackling the problems of understanding the system to be built, designing that system, and finally implementing the design. One approach is to use formal methods: a process that uses some formal language to specify the behaviour of the intended system; techniques such as theorem proving or model-checking to ensure the specification is valid (i.e., it meets the requirements and has been shown, perhaps by proof or other means of inspection, to have the properties the client requires); and a refinement process to transform the specification into an implementation. Conversely, the approach we take may be less structured and rely on informal techniques. The design stage may involve jotting down ideas on paper, brainstorming with users, and so on. We may use prototyping to transform these ideas into working software and have users test the implementation to find problems. Formal methods have been shown to be beneficial in describing the functionality of systems, what we may call application logic, and underlying system behaviour. Informal techniques, in turn, have been shown to be useful in the design of the user interface. Given that both styles of development benefit different parts of the system, we would like to use both approaches in one integrated software development process. Their differences, however, make this a challenging objective. In this paper we describe models and techniques which allow us to incorporate informal design artefacts into a formal software development process.
8.
The evaluation of the usability and learnability of a computer system may be performed with predictive models during the design phase, on the executable code, or by observing the user in action. In the latter case, data collected in vivo must be processed, and the goal is to provide software support for this difficult and time-consuming task. The paper presents an early analysis of, and experience relating to, the automatic evaluation of multimodal user interfaces. To this end, a generic Wizard of Oz platform has been designed to allow the observation and automatic recording of subjects' behavior while they interact with a multimodal interface. It is then shown how recorded data can be analyzed to detect behavioral patterns, and how deviations of such patterns from a data-flow-oriented task model can be exploited by a software usability critic.
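As a hypothetical sketch of the kind of check such a usability critic could run, the Python snippet below compares a recorded interaction trace against the action order required by a task model and reports deviations. The event names and the comparison rules are invented for illustration, not the paper's actual platform.

```python
# Compare an observed event trace to the step order required by a task
# model; report repeated, out-of-order, and missing steps.

EXPECTED = ["open_form", "fill_fields", "confirm"]  # task-model order

def deviations(trace, expected=EXPECTED):
    """Return a list of deviations of the trace from the expected order."""
    issues, pos = [], 0
    for event in trace:
        if pos < len(expected) and event == expected[pos]:
            pos += 1                                  # step taken in order
        elif event in expected[:pos]:
            issues.append(f"repeated step: {event}")
        elif event in expected[pos + 1:]:
            issues.append(f"out-of-order step: {event}")
    issues += [f"missing step: {s}" for s in expected[pos:]]
    return issues

print(deviations(["open_form", "confirm", "fill_fields"]))
# ['out-of-order step: confirm', 'missing step: confirm']
```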
11.
An IBM-compatible microcomputer program for teaching purposes is described which simulates a sedimentation velocity determination of a protein in an analytical ultracentrifuge using schlieren optics. The program operates in speeded-up time and simulates the major procedures needed to operate such an instrument. The position of the sedimenting boundary can be observed at any time during the run, and up to six 'photographs' can be recorded for subsequent analysis. The sedimentation coefficient, diffusion coefficient, and molecular weight can be calculated from a dot-matrix printout. Ten representative proteins are stored within the program, but provision exists for user-supplied data.
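For concreteness, here is a worked example of the sedimentation coefficient calculation such analysis supports, using the standard relation s = ln(r2/r1) / (ω²(t2 − t1)) for boundary positions read off two "photographs". The rotor speed and boundary positions below are illustrative numbers, not data from the program.

```python
# Sedimentation coefficient from two boundary positions r1, r2
# (cm from the rotor axis) at times t1, t2 (s).

import math

rpm = 60000.0
omega = 2 * math.pi * rpm / 60           # angular velocity, rad/s
r1, t1 = 6.20, 0.0                        # boundary at first photograph
r2, t2 = 6.55, 1800.0                     # boundary 30 minutes later

s = math.log(r2 / r1) / (omega**2 * (t2 - t1))
print(f"s = {s:.2e} s = {s / 1e-13:.1f} S")  # 1 svedberg = 1e-13 s
```

With these numbers the result is about 7.7 S, a realistic value for a mid-sized protein.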
12.
Formal verification methods require that a model of the system to analyze, for instance in the form of a network of automata, be built first. Every evolution of this formal model must represent a real evolution of the modeled system; if the model contains spurious evolutions, meaningless states that do not correspond to physically possible states can be reached, and the verification results are then not trustworthy. This paper focuses on the construction of a formal model of a closed-loop system that can be represented as a Discrete Event System (DES) and in which all evolutions and states are meaningful with respect to the real system behavior. A closed-loop system is composed of a physical system to control, named the plant, and a controller. A modular approach to building the plant model is presented in the first part of the paper; to prevent meaningless evolutions and states in this model, a solution based on the concept of urgent edges is proposed and exemplified. The paper then addresses the construction of the formal model of the closed-loop system; it is shown that restricting the evolutions of this model to only the meaningful ones can be achieved easily by introducing variables that represent the modification of the inputs of the logic controller and the stability condition of the control specification.
13.
The continuous development of the Linked Data Web depends on the advancement of the underlying extraction mechanisms. This is of particular interest for the scientific publishing domain, where most data sets are currently created manually. In this article, we present a Machine Learning pipeline that enables the automatic extraction of heading metadata (e.g., title and authors) from scientific publications. The experimental evaluation shows that our solution handles any type of publication format well and improves the average extraction performance of the state of the art by around 4%, in addition to showing increased versatility. Finally, we propose a flexible Linked Data-driven mechanism for both refining and linking the automatically extracted metadata.
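One step such a pipeline could take is classifying the text lines of a paper's first page as title, author, or other, from simple layout and text features. The sketch below is a hedged illustration of that idea; the features, toy training data, and choice of classifier are assumptions, not the paper's actual pipeline.

```python
# Classify first-page lines into heading-metadata categories using
# position and capitalization features and a small random forest.

from sklearn.ensemble import RandomForestClassifier

def features(line, position):
    """Per-line features: position on page, length, capitalization ratio."""
    words = line.split()
    return [position,
            len(words),
            sum(w[:1].isupper() for w in words) / max(len(words), 1)]

train_lines = [("Linked Data Extraction from Publications", 0, "title"),
               ("Jane Doe and John Smith", 1, "author"),
               ("Abstract. The continuous development ...", 2, "other")]
X = [features(text, pos) for text, pos, _ in train_lines]
y = [label for _, _, label in train_lines]

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
print(clf.predict([features("A Study of Heading Metadata", 0)]))
```

A real system would of course train on thousands of labeled lines and add font-size and positional features from the PDF layout.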
14.
The production of moulded micro components, including design, manufacture, and quality control, is a highly integrated process along which many experts, machines, tools, and other resources must be deployed effectively. This process chain has never been written down in detail or as a whole in a form from which new products could be developed; all developments hitherto have been carried out partly intuitively and partly on the basis of area-specific sub-process chains. To enable efficient and effective planning, control, and user support during future tool-based micro product engineering processes, the authors propose a corresponding reference process model. For this purpose, implicit process knowledge must be retrieved from experts' minds so that area-specific sub-process chains can be taken into account. The Integrated Product Engineering Model (iPeM) will be used to implement the reference process model.
15.
This paper addresses the key issue of providing flexible multimedia presentation with user participation and suggests synchronization models that can specify user participation during a presentation. We study models such as the Petri-net-based hypertext model and the object composition Petri nets (OCPN). We suggest a dynamic timed Petri net structure that can model pre-emptions and modifications to the temporal characteristics of the net. This structure can be adopted by the OCPN to facilitate modeling of multimedia synchronization characteristics with dynamic user participation. We show that the suggested enhancements to dynamic timed Petri nets satisfy all the properties of Petri net theory, and we use them to model typical scenarios in a multimedia presentation with user inputs.
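As a toy sketch of the dynamic behaviour at issue here (not the OCPN formalism itself), each place below models a playing medium with a remaining duration, and a user input pre-empts it by cutting that duration short; the media names and durations are invented.

```python
# Toy model of pre-emption in a timed presentation: user input forces a
# medium's remaining play time to zero while the others continue.

class Place:
    def __init__(self, name, duration):
        self.name, self.remaining = name, duration

    def preempt(self):
        """User input: terminate this medium now."""
        self.remaining = 0

def advance(places, dt):
    """Advance presentation time by dt; report media that have finished."""
    done = []
    for p in places:
        p.remaining = max(0, p.remaining - dt)
        if p.remaining == 0:
            done.append(p.name)
    return done

video, audio = Place("video", 30), Place("audio", 30)
print(advance([video, audio], 10))   # [] -- both still playing
audio.preempt()                      # user mutes the audio track
print(advance([video, audio], 10))   # ['audio'] finished early
```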
16.
This paper describes a practical method for evaluating the usability of human-computer interfaces. The paper specifies the requirements of such a method and then outlines our work in developing a method to meet this specification. The method is based on the conduct of realistic tasks with an interactive system and the subsequent systematic elicitation of end-users' and designers' reactions to the interface using a criterion-based evaluation checklist. Two practical examples are used to illustrate development of the method: (a) evaluation of a prototype production scheduling system, and (b) comparative assessment of the usability of three prototype user interfaces to a public-access database. The paper discusses some issues raised by the method and considers how it can be further developed.
17.
This article examines the user-model development approaches supported by Aspen Plus and briefly outlines the development steps, to help users become familiar with them. A review of the four development approaches that Aspen Plus provides shows that Fortran user models and Excel user models are easy to develop but support only simple functionality, making them suitable only for models with simple functions, while user-model development based on dedicated modeling tools requires the support of those tools and has a limited range of application. User models based on CAPE-OPEN COM technology, by contrast, can be developed in a powerful integrated development environment, with flexible wizards guiding the user through model development, and the resulting models can be used in any simulation software that supports the CAPE-OPEN interfaces, making this the most promising approach to user-model development.
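Whatever the development route (Fortran, Excel, or CAPE-OPEN), the core of a user model is a calculation that maps inlet streams to outlet streams. The following Python sketch is a hedged stand-in for that core; the stream representation and the mixer logic are simplified assumptions, not Aspen Plus data structures or the CAPE-OPEN interface.

```python
# Simplified "user model" calculation: a mixer that combines inlet
# streams by summing each component's molar flow.

def mixer(streams):
    """Combine inlet streams: total each component's molar flow."""
    outlet = {}
    for stream in streams:
        for component, flow in stream.items():
            outlet[component] = outlet.get(component, 0.0) + flow
    return outlet

feed1 = {"water": 10.0, "ethanol": 2.0}   # kmol/h
feed2 = {"water": 5.0, "methanol": 1.0}
print(mixer([feed1, feed2]))
# {'water': 15.0, 'ethanol': 2.0, 'methanol': 1.0}
```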
18.
Most Web search engines use the content of Web documents and their link structures to assess the relevance of a document to the user's query. With the growth of the information available on the Web, it becomes difficult for such search engines to satisfy an information need expressed by only a few keywords. Personalized information retrieval is a promising way to address this problem: the user profile captures the user's general interests and is then integrated into a personalized document ranking model. In this paper, we present a personalized search approach that involves a graph-based representation of the user profile. The user profile refers to the user's interests in a specific search session, defined as a sequence of related queries. It is built by means of score propagation, which activates a set of semantically related concepts of a reference ontology, namely the ODP. The user profile is maintained across related search activities using a graph-based merging strategy. To detect related search activities, we define a session boundary recognition mechanism based on the Kendall rank correlation measure, which tracks changes in the dominant concepts held by the user profile relative to a newly submitted query. Personalization is performed by re-ranking the search results of related queries using the user profile. Our experimental evaluation, carried out on the HARD 2003 TREC collection, shows that our Kendall-based session boundary recognition mechanism yields significantly better precision than non-rank-based measures such as the cosine and WebJaccard similarity measures. Moreover, the results show that graph-based search personalization is effective in improving search accuracy.
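A hedged sketch of the session-boundary idea: compare the ranking of the dominant profile concepts before and after a new query with Kendall's tau, and start a new session when the correlation drops below a threshold. The threshold, concept names, and scores below are illustrative, not the paper's values.

```python
# Session boundary detection via Kendall rank correlation between the
# concept rankings of the current profile and the updated profile.

from scipy.stats import kendalltau

THRESHOLD = 0.3  # illustrative cut-off

def new_session(profile_scores, updated_scores, concepts):
    """True if the concept ranking changed enough to signal a new session."""
    before = [profile_scores[c] for c in concepts]
    after = [updated_scores[c] for c in concepts]
    tau, _ = kendalltau(before, after)
    return tau < THRESHOLD

concepts = ["sports", "tennis", "health", "finance"]
profile = {"sports": 0.9, "tennis": 0.7, "health": 0.2, "finance": 0.1}
after_q = {"sports": 0.2, "tennis": 0.1, "health": 0.3, "finance": 0.9}
print(new_session(profile, after_q, concepts))  # True: topic shift detected
```

Because Kendall's tau depends only on the concept ranking, not on the raw scores, it is less sensitive to score rescaling than the cosine or WebJaccard measures the paper compares against.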
19.
Two user authentication schemes for multi-server environments have been proposed, by Tsai and by Wang et al., respectively. However, both schemes have flaws, and this paper therefore proposes a new scheme that remedies these drawbacks. The proposed scheme has the following benefits: (1) it complies with all the requirements for multi-server environments; (2) it can withstand all currently well-known attacks; (3) it is equipped with a more secure key agreement procedure; and (4) it is quite efficient in terms of computation and transmission cost. In addition, analysis and comparisons show that the proposed scheme outperforms the other related schemes in various aspects.
20.
Moving between devices is commonplace, but not for people with disabilities or those who require specific accessibility options. Setting up assistive technologies, or finding the settings that overcome a particular barrier, can be a demanding task for people without technical skills. Context-sensitive adaptive user interfaces are advancing, yet migrating access features from one device to another is rarely addressed. In this paper, we describe the knowledge-based component of the Global Public Inclusive Infrastructure that infers how a device is best configured at the operating system layer, the application layer, and the web layer to meet the requirements of a user, including possible special needs or disabilities. A mechanism to detect and resolve conflicting accessibility policies, as well as to recommend preference substitutes, is a main requirement here, as elaborated in this paper. Since the proposed system emulates the decision-making of accessibility experts, we validated the automatically deduced configurations against manual configurations by ten accessibility experts. The assessment shows that the average matching score of the developed system is high; the proposed system can thus be considered capable of making precise decisions towards personalizing user interfaces based on user needs and preferences.
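To illustrate the conflict detection and substitution step, here is a small Python sketch. The settings, the conflict pair, and the substitution rule are invented examples, not GPII's actual preference ontology or policy engine.

```python
# Detect conflicting accessibility preferences and substitute a
# compatible alternative for the lower-priority setting.

# Pairs of settings that must not be active at the same time.
CONFLICTS = {("screen_reader", "full_screen_magnifier")}

# Preferred substitute when the later conflicting setting is dropped.
SUBSTITUTES = {"full_screen_magnifier": "lens_magnifier"}

def resolve(preferences):
    """Keep earlier settings on conflict; substitute the later one if possible."""
    active = []
    for setting in preferences:
        clash = next((a for a in active
                      if (a, setting) in CONFLICTS or (setting, a) in CONFLICTS),
                     None)
        if clash is None:
            active.append(setting)
        elif setting in SUBSTITUTES:
            active.append(SUBSTITUTES[setting])
    return active

print(resolve(["screen_reader", "full_screen_magnifier", "high_contrast"]))
# ['screen_reader', 'lens_magnifier', 'high_contrast']
```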