Similar documents: 20 results found.
1.
In this age of (near-)adequate computing power, the power and usability of the user interface are as key to an application's success as its functionality. Most of the code in modern desktop productivity applications resides in the user interface. But despite its centrality, the user interface field is currently in a rut: the WIMP (Windows, Icons, Menus, Point-and-Click GUI based on keyboard and mouse) has evolved little since it was pioneered by Xerox PARC in the early '70s. Computer and display form factors will change dramatically in the near future and new kinds of interaction devices will soon become available. Desktop environments will be enriched not only with PDAs such as the Newton and Palm Pilot, but also with wearable computers and large-screen displays produced by new projection technology, including office-based immersive virtual reality environments. On the input side, we will finally have speech-recognition and force-feedback devices. Thus we can look forward to user interfaces that are dramatically more powerful and better matched to human sensory capabilities than those dependent solely on keyboard and mouse. 3D interaction widgets controlled by mice or other interaction devices with three or more degrees of freedom are a natural evolution from their two-dimensional WIMP counterparts and can decrease the cognitive distance between widget and task for many tasks that are intrinsically 3D, such as scientific visualization and MCAD. More radical post-WIMP UIs are needed for immersive virtual reality where keyboard and mouse are absent. Immersive VR provides good driving applications for developing post-WIMP UIs based on multimodal interaction that involve more of our senses by combining the use of gesture, speech, and haptics.

2.
Graphical user interfaces (GUIs) have long been the dominant platform for human-computer interaction (HCI). GUI-style interaction makes computers easier to use, especially for office-automation applications. However, as the ways people use computers change and the variety and volume of computing tasks grow rapidly, GUIs can no longer provide all the forms of interaction needed to satisfy users. To accommodate a wider range of situations, a more natural, intuitive, adaptive, and readily accepted style of interface is needed. Perceptual user interfaces (PUIs) have therefore become a new focus in human-computer interaction; their chief aim is to make human-computer interaction more like interaction between people, and between people and the world. This paper introduces the flourishing PUI field and then briefly describes three working systems based on vision techniques.

3.
Writing applications that are easily moved to various computer platforms with different graphical user interfaces (GUIs) is a complex task. Yet this concept is important for the creator of commercial software, as it is not likely to be clear for many years whether one or two GUIs will survive and become industry ‘standards’ or whether the growth in GUIs will continue because of new developments in human-computer interfaces. Providing a user interface abstraction that maps into all toolkits seems to be an appropriate way to proceed, but is fraught with difficulty. For example, different GUIs present a different look-and-feel that often causes system-specific information to be embedded in an application. This paper surveys the problems inherent in designing a user interface abstraction, and describes the experiences gained from a specific implementation called CIRL/PIWI.
  • CIRL and PIWI are acronyms for Co-ordinate-Independent Resource Language and Presentation-Independent Windowed Interface.
  • The user interface abstraction contains a knowledge base that allows many components of the user interface to be defined independently of look-and-feel, thereby increasing the portability of an application.

    4.
    This toolkit is a comprehensive set of tools for creating customized graphical user interfaces (GUIs). It draws on the concept of computing portals, which are here seen as interfaces to application-specific computing services for user communities. While it was originally designed for use in computational grids, it can be used in client/server environments as well. Compared to other GUI generators, it is more versatile and more portable. It can be employed in many different application domains and on different target platforms. With it, application experts (rather than computer scientists) are able to create their own individually tailored GUIs.

    5.
    The advent of mobile devices and the wireless Internet is having a profound impact on the way people communicate, as well as on the user interaction paradigms used to access information that was traditionally accessible only through visual interfaces. Applications for mobile devices entail the integration of various data sources optimized for delivery to limited hardware resources and intermittently connected devices through wireless networks. Although telephone interfaces arise as one of the most prominent pervasive applications, they present interaction challenges such as the augmentation of speech recognition through natural language (NL) understanding and high-quality text-to-speech conversion. This article presents an experience in building an automated assistant that is natural to use and could become an alternative to a human assistant. The Mobile Assistant (MA) can read e-mail messages, book appointments, take phone messages, and provide access to personal-organizer information. Key components are a conversational interface, enterprise integration, and notifications tailored to user preferences. The focus of the research has been on supporting the pressing communication needs of mobile workers and overcoming technological hurdles such as achieving high-accuracy speech recognition in noisy environments, NL understanding, and optimal message presentation on a variety of devices and modalities. The article outlines findings from two broad field trials and lessons learned regarding the support of mobile workers with pervasive computing devices and emerging technologies.

    6.

    Most of today's virtual environments are populated with some kind of autonomous life-like agents. Such agents follow a preprogrammed sequence of behaviors that excludes the user as a participating entity in the virtual society. In order to make inhabited virtual reality an attractive place for information exchange and social interaction, we need to equip the autonomous agents with some perception and interpretation skills. In this paper we present one skill: human action recognition. In contrast to human-computer interfaces that focus on speech or hand gestures, we propose a full-body integration of the user. We present a model of human actions along with a real-time recognition system. To cover the bilateral aspect in human-computer interfaces, we also discuss some action-response issues. In particular, we describe a motion-management library that solves animation continuity and mixing problems. Finally, we illustrate our system with two examples and discuss what we have learned.

    7.
    A tangible goal for 3D modeling
    As we progress into applications that incorporate interactive life-like 3D computer graphics, the mouse falls short as a user interface device, and it becomes obvious that 3D computer graphics could achieve much more with a more intuitive user interface mechanism. Haptic interfaces, or force feedback devices, promise to increase the quality of human-computer interaction by accommodating our sense of touch. The article discusses the application of touch feedback systems to 3D modeling. Achieving a high level of interactivity requires novel rendering techniques such as volume-based rendering algorithms.
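Force feedback of this kind is often built on a penalty method. The sketch below is a generic baseline, not the article's algorithm; the sphere geometry and stiffness value are illustrative assumptions. When the haptic probe penetrates a rigid sphere, it is pushed back out along the surface normal with a Hooke's-law force proportional to penetration depth:

```python
import math

def penalty_force(probe, center, radius, stiffness=800.0):
    # Penalty-based haptic rendering (assumed baseline, not the article's
    # method): F = stiffness * penetration_depth * outward_normal.
    ox, oy, oz = (probe[i] - center[i] for i in range(3))
    dist = math.sqrt(ox * ox + oy * oy + oz * oz)
    depth = radius - dist
    if depth <= 0.0 or dist == 0.0:
        return (0.0, 0.0, 0.0)             # probe outside the surface: no force
    n = (ox / dist, oy / dist, oz / dist)  # outward surface normal
    return tuple(stiffness * depth * c for c in n)

# Probe 1 mm inside a 5 cm sphere: a 0.8 N restoring force along +x.
f = penalty_force((0.049, 0.0, 0.0), (0.0, 0.0, 0.0), 0.05)
```

In a real haptic loop this computation runs at around 1 kHz, which is why simple closed-form penalty models remain popular.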

    8.
    Matth.  T 《计算机学报》2000,23(12):1235-1244
    1 Introduction
    The interface between people and computers has progressed over the years from the early days of switches and LEDs to punched cards, interactive command-line interfaces, and the direct-manipulation style of graphical user interfaces. The “desktop metaphor” of graphical user interfaces, a.k.a. WIMP interfaces (for Windows, Icons, Menus, and Pointing devices), has been the standard interface between people and computers for many years. Of course, software and technology for human-compu…

    9.
    Today's computer–human interfaces are typically designed with the assumption that they are going to be used by an able-bodied person, who is using a typical set of input and output devices, who has typical perceptual and cognitive abilities, and who is sitting in a stable, warm environment. Any deviation from these assumptions may drastically hamper the person's effectiveness—not because of any inherent barrier to interaction, but because of a mismatch between the person's effective abilities and the assumptions underlying the interface design. We argue that automatic personalized interface generation is a feasible and scalable solution to this challenge. We present our Supple system, which can automatically generate interfaces adapted to a person's devices, tasks, preferences, and abilities. In this paper we formally define interface generation as an optimization problem and demonstrate that, despite a large solution space (of up to 10^17 possible interfaces), the problem is computationally feasible. In fact, for a particular class of cost functions, Supple produces exact solutions in under a second for most cases, and in a little over a minute in the worst case encountered, thus enabling run-time generation of user interfaces. We further show how several different design criteria can be expressed in the cost function, enabling different kinds of personalization. We also demonstrate how this approach enables extensive user- and system-initiated run-time adaptations to the interfaces after they have been generated. Supple is not intended to replace human user interface designers—instead, it offers alternative user interfaces for those people whose devices, tasks, preferences, and abilities are not sufficiently addressed by the hand-crafted designs. Indeed, the results of our study show that, compared to manufacturers' defaults, interfaces automatically generated by Supple significantly improve speed, accuracy and satisfaction of people with motor impairments.
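The optimization view of interface generation described in this abstract can be illustrated with a toy sketch. The widget catalog, pixel sizes, and cost values below are invented for illustration and are not Supple's actual model: pick one rendition per UI element so as to minimize total manipulation cost under a screen-space budget:

```python
from itertools import product

# Hypothetical catalog: element -> [(widget, pixels, manipulation cost)].
WIDGETS = {
    "volume":  [("slider", 120, 1.0), ("spin_box", 60, 2.5)],
    "channel": [("radio_buttons", 100, 1.2), ("combo_box", 60, 2.0)],
    "power":   [("toggle", 40, 1.0)],
}

def generate_ui(budget):
    # Exhaustively search widget assignments; keep the cheapest one that
    # fits the screen-space budget (Supple uses branch-and-bound instead).
    best = None
    for combo in product(*WIDGETS.values()):
        size = sum(w[1] for w in combo)
        cost = sum(w[2] for w in combo)
        if size <= budget and (best is None or cost < best[0]):
            best = (cost, dict(zip(WIDGETS, (w[0] for w in combo))))
    return best  # (total cost, element -> widget), or None if infeasible

# A roomy display gets the cheap-to-use widgets; a cramped one falls back
# to compact renditions with higher manipulation cost.
print(generate_ui(260))
print(generate_ui(180))
```

The same search structure accommodates different cost functions, which is the hook for the personalization criteria the abstract mentions.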

    10.
    Current environments for metacomputing generally have tools for managing the resources of a metacomputer but often lack adequate tools for designing, writing, and executing programs. Building an application for a metacomputer typically involves writing source codes on a local node, transferring and compiling codes on every node, and starting their execution. Without such tools, the application development phases can come up against considerable difficulties. In order to alleviate these problems, some graphical user interfaces (GUIs) based on PVM, such as XPVM, Parallel Application Development Environment (PADE) and Wide Area Metacomputing Manager (WAMM) have been implemented. These GUIs integrate a programming environment which facilitates the user in performing the application development phases and the application execution.

    This paper outlines the general requirements for designing GUIs for metacomputing management, and compares WAMM, a graphical user interface, with some related works.


    11.
    Multi-touch, which has been heralded as a revolution in human–computer interaction, provides features such as gestural interaction, tangible interfaces, pen-based computing, and interface customization—features embraced by an increasingly tech-savvy public. However, multi-touch platforms have not been adopted as “everyday” computer interaction devices that support important text entry intensive applications such as word processing and spreadsheets. In this paper, we present two studies that begin to explore user performance and experience with entering text using a multi-touch input. The first study establishes a benchmark for text entry performance on a multi-touch platform across input modes that compare uppercase-only to mixed-case, single-touch to multi-touch and copy to memorization tasks. The second study includes mouse style interaction for formatting rich text to simulate a word processing task using multi-touch input. As expected, our results show that users do not perform as well in terms of text entry efficiency and speed using a multi-touch interface as with a traditional keyboard. Not as expected was the result that degradation in performance was significantly less for memorization versus copy tasks, and consequently willingness to use multi-touch was substantially higher (50% versus 26%) in the former case. Our results, which include preferred input styles of participants, also provide a baseline for further research to explore techniques for improving text entry performance on multi-touch systems.
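Text-entry benchmarks of this kind typically report entry speed and error rate. The sketch below shows the conventional metrics (assumed here as typical measures for such studies; the article does not give its exact formulas): words per minute using the standard five-characters-per-word convention, and an error rate based on the minimum string distance between presented and transcribed text:

```python
def wpm(transcribed, seconds):
    # Words per minute, with one "word" defined as five characters.
    return (len(transcribed) / 5.0) * (60.0 / seconds)

def msd_error_rate(presented, transcribed):
    # Minimum string distance (Levenshtein), normalized by the longer
    # string, computed with a rolling single-row dynamic program.
    m, n = len(presented), len(transcribed)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = min(d[j] + 1,        # deletion
                      d[j - 1] + 1,    # insertion
                      prev + (presented[i - 1] != transcribed[j - 1]))
            prev, d[j] = d[j], cur
    return d[n] / max(m, n, 1)

# 25 characters in 30 s -> 10 WPM; one wrong character in ten -> 0.1.
```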

    12.
    Jacob  R.J.K. 《Computer》1993,26(7):65-66
    As computers are becoming more powerful, the critical bottleneck in their use is often in the user interface, not in the computer processing. Research in human-computer interaction that seeks to increase the communication bandwidth between the user and the machine by using input from the user's eye movements is discussed. The speed potential, processing stages, interaction techniques, and problems associated with these eye-gaze interfaces are described.

    13.
    喻纯  史元春 《软件学报》2012,23(9):2522-2532
    Improving the input efficiency of graphical user interfaces is an important research topic in human-computer interaction. Prior work includes pointing-enhancement techniques, which change how the cursor is controlled or displayed, and adaptive-interface techniques, which change the position and layout of on-screen widgets; both have drawbacks. By analyzing interface operations, this paper proposes an evaluation model for GUI input efficiency and, on that basis, an interaction-efficiency optimization technique: the adaptive cursor. It adaptively and selectively applies pointing enhancement to the widgets a user is likely to visit, enabling fast access. This approach both avoids the extra cognitive cost that earlier adaptive-interface techniques impose by frequently rearranging widgets, and overcomes the limitation that pointing-enhancement techniques apply only to sparse widget layouts. To assess its usability, the adaptive cursor was implemented in Visual Studio, which has a dense widget layout. Experimental results show that the adaptive cursor shortens target-acquisition time by 27.7%, significantly improving the input efficiency of graphical user interfaces.
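The idea of selectively enhancing likely targets can be sketched as follows. The frequency-based predictor, radii, and expansion factor below are illustrative assumptions, not the paper's actual model: only the most frequently clicked widgets get an enlarged activation area, so the cursor acquires them earlier, while all other widgets keep normal hit regions and the layout never changes:

```python
def activation_radius(widget, click_counts, base=10.0, boost=2.0, top_k=2):
    # Expand the activation area only for the k most frequently clicked
    # widgets (a stand-in for the paper's visit-prediction model).
    likely = sorted(click_counts, key=click_counts.get, reverse=True)[:top_k]
    return base * boost if widget in likely else base

def hit_widget(cursor, widgets, click_counts):
    # Return the nearest widget whose (possibly expanded) activation
    # area contains the cursor, or None if no widget is hit.
    best, best_d = None, float("inf")
    for name, (x, y) in widgets.items():
        d = ((cursor[0] - x) ** 2 + (cursor[1] - y) ** 2) ** 0.5
        if d <= activation_radius(name, click_counts) and d < best_d:
            best, best_d = name, d
    return best

widgets = {"save": (100, 10), "debug": (130, 10), "help": (400, 10)}
history = {"save": 42, "debug": 37, "help": 1}
# A click 12 px from "save" lands outside a normal 10 px region but
# inside the expanded one, so the frequently used widget is acquired.
```

Because only activation areas change, not positions, this keeps working in dense layouts such as an IDE toolbar.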

    14.
    Key technologies and future trends of multimodal user interfaces
    User-interface research aims to provide users with an efficient means of human-computer communication. In recent years, with the rapid development of computer hardware and software and the sudden rise of the Internet, the traditional graphical user interface faces new challenges. The emergence of multimodal human-computer interfaces and intelligent network interfaces has prompted the search for a new generation of user-interface paradigms to solve the particular problems of human-computer interaction. This paper surveys the key issues in multimodal user-interface research, introduces the main approaches to them, points out their shortcomings, offers some observations and proposals, and, in light of current computing technology, discusses and looks ahead to next-generation user interfaces.

    15.
    16.
    As computer interfaces can display more life-like qualities such as speech output and personable characters or agents, it becomes important to understand and assess users’ interaction behavior within a social interaction framework rather than only a narrower machine interaction one. We studied how the appearance of a life-like interface agent influenced people’s interaction with it, using a social interaction framework of making and keeping promises to cooperate. Participants played a social dilemma game with a human confederate via real-time video conferencing or with one of three interface agents: a person-like interface agent, a dog-like interface agent, or a cartoon dog interface agent. Technology improvements from a previous version of the human-like interface led to increased cooperation with it; participants made and kept promises to cooperate with the person-like interface agent as much as with the confederate. Dog owners also made and kept promises to dog-like interface agents. General evaluations of likability and appealingness of the interface agent did not lead people to cooperate with it. Our findings demonstrate the importance of placing user interface studies within a social interaction framework as interfaces become more social.

    17.
    As computers become increasingly powerful and complex, software designers are employing anthropomorphism to enhance the usability of computer interfaces (i.e., “user-centered” design). The potential for implementing a social mode of interface behavior, however, can only be realized through understanding the role anthropomorphism plays in modifying the behavior and perceptions of users. The present study compares human-like versus machine-like interactional styles of computer interfaces, testing hypotheses that evaluative feedback conveyed through a human-like interface will have greater impact on individuals' self-appraisals. College students received experimentally manipulated positive or negative computerized feedback in response to their performance on a purported “psychic ability” task. In general, computer feedback had considerable impact upon reflected appraisals (participants' perceptions of the computer's evaluations of their performance and ability) as well as upon their self-appraisals of performance and ability. Reflected appraisals were more influenced by computer feedback than were self-appraisals. Human-like and machine-like interface styles did not have significantly different effects.

    18.
    The considerable and significant progress achieved in the design and development of new interaction devices between man and machine has enabled the emergence of various powerful and efficient input and/or output devices. Each of these new devices brings specific interaction modes. With the emergence of these devices, new interaction techniques and modes arise and new interaction capabilities are offered. New user interfaces need to be designed or former ones need to evolve. The design of so-called plastic user interfaces contributes to handling such evolutions. The key requirement for the design of such a user interface is that the newly obtained user interface shall be adapted to the application and have, at least, the same behavior as the previous (adapted) one. This paper proposes to address the problem of user interface evolution due to the introduction of new interaction devices and/or new interaction modes. More precisely, we are interested in the study of the design process of a user interface resulting from the evolution of a former user interface due to the introduction of new devices and/or new interaction capabilities. We consider that interface behaviors are described by labelled transition systems, and comparison between user interfaces is handled by an extended definition of the bisimulation relationship to compare user interface behaviors when interaction modes are replaced by new ones.

    19.
    In this paper we review some problems with traditional approaches for acquiring and representing knowledge in the context of developing user interfaces. Methodological implications for knowledge engineering and for human-computer interaction are studied. It turns out that in order to achieve the goal of developing human-oriented (in contrast to technology-oriented) human-computer interfaces, developers have to develop sound knowledge of the structure and the representational dynamics of the cognitive system which is interacting with the computer. We show that in a first step it is necessary to study and investigate the different levels and forms of representation that are involved in the interaction processes between computers and human cognitive systems. Only if designers have achieved some understanding of these representational mechanisms can user interfaces enabling individual experiences and skill development be designed. In this paper we review mechanisms and processes for knowledge representation on a conceptual, epistemological, and methodological level, and sketch some ways out of the identified dilemmas for cognitive modeling in the domain of human-computer interaction.

    20.
    Gaining a better understanding of human–computer interaction in multiple-goal environments, such as driving, is critical as people increasingly use information technology to accomplish multiple tasks simultaneously. Extensive research shows that decision biases can be utilized as effective cues to guide user interaction in single-goal environments. This article is a first step toward understanding the effect of decision biases in multiple-goal environments. The study analyzed data from a field experiment in which drivers’ decisions about parking lots in a single-goal environment were compared with their decisions in a multiple-goal environment when exposed to the default option bias. The article shows that the default option bias is effective in multiple-goal environments. The results have important implications for the design of human–computer interaction in multiple-goal environments.
