Similar Documents
20 similar documents found.
1.
In the user interfaces of modern systems, users get the impression of directly interacting with application objects. In 3D-based user interfaces, novel input devices, such as hand and force input devices, are being introduced; they aim to provide natural ways of interacting. A hand input device allows the recognition of static poses and dynamic gestures performed by a user's hand. This paper describes the use of a hand input device for interacting with a 3D graphical application. A dynamic gesture language, which allows users to teach the system hand gestures, is presented, together with a user interface that integrates the recognition of these gestures and provides feedback for them. Particular attention has been paid to implementing a tool for the easy specification of dynamic gestures and to strategies for providing graphical feedback on users' interactions. To demonstrate that the introduced 3D user interface features, and the way the system presents graphical feedback, are not restricted to a hand input device, a force input device has also been integrated into the user interface.
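
As an illustration of user-taught dynamic gesture recognition of the kind this abstract describes, the following minimal Python sketch (not the paper's actual recognizer; all names and numbers are illustrative assumptions) resamples a recorded 2D hand trajectory, normalizes it, and compares it against taught templates by mean point-to-point distance.

```python
import math

def resample(points, n=32):
    """Resample a 2D trajectory to n points evenly spaced along its arc length."""
    cum = [0.0]
    for a, b in zip(points, points[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1] or 1e-9
    out, j = [], 0
    for k in range(n):
        target = total * k / (n - 1)
        while j < len(cum) - 2 and cum[j + 1] < target:
            j += 1
        seg = cum[j + 1] - cum[j] or 1e-9
        t = (target - cum[j]) / seg
        x = points[j][0] + t * (points[j + 1][0] - points[j][0])
        y = points[j][1] + t * (points[j + 1][1] - points[j][1])
        out.append((x, y))
    return out

def normalize(points):
    """Center the trajectory on its centroid and scale it to unit size."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    shifted = [(x - cx, y - cy) for x, y in points]
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1e-9
    return [(x / scale, y / scale) for x, y in shifted]

def recognize(stroke, templates, n=32):
    """Return the taught template with the smallest mean point-to-point distance."""
    probe = normalize(resample(stroke, n))
    best_name, best_score = None, float("inf")
    for name, template in templates.items():
        ref = normalize(resample(template, n))
        score = sum(math.dist(a, b) for a, b in zip(probe, ref)) / n
        if score < best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Example: a user "teaches" two gestures, then performs a noisy rightward stroke.
templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "swipe_up":    [(0, 0), (0, 1), (0, 2), (0, 3)],
}
print(recognize([(0, 0), (1.1, 0.1), (2.0, -0.1), (2.9, 0.0)], templates))
```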

2.
The context of mobility raises many issues for geospatial applications providing location-based services. Mobile device limitations, such as a small user interface footprint and pen input whilst in motion, result in information overload on such devices and in interfaces that are difficult to navigate and interact with. This has become a major issue as mobile GIS applications are now being used by a wide group of users, including novice users such as tourists, for whom it is essential to provide easy-to-use applications. Despite this, comparatively little research has been conducted to address the mobility problem. We are particularly concerned with the limited interaction techniques available to users of mobile GIS, which play a primary role in the complexity of using such an application whilst mobile. As such, our research focuses on multimodal interfaces as a means to present users with a wider choice of modalities for interacting with mobile GIS applications. Multimodal interaction is particularly advantageous in a mobile context, enabling users of location-based applications to choose the mode of input that best suits their current task and location. The focus of this article is a comprehensive user study that demonstrates the benefits of multimodal interfaces for mobile geospatial applications.

3.
The development of IP telephony in recent years has been substantial. The improvement in voice quality and the integration of voice and data, especially the interaction with multimedia, have made 3G communication more promising. The value-added services of telephony techniques alleviate dependence on the phone and provide a universal platform for multimodal telephony applications. For example, web-based applications with VoiceXML have been developed to simplify human–machine interaction, because VoiceXML takes advantage of speech-enabled services and makes telephone access to the web a reality. However, it is not cost-efficient to build voice-only, stand-alone web applications; it is more reasonable to retrofit voice interfaces so that they are compatible and collaborate with existing HTML- or XML-based web applications. Therefore, this paper argues that web services should support multiple access modalities, so that users can perceive and interact with a site through visual and speech responses simultaneously. Under this principle, our research develops a prototype multimodal VoIP system with an integrated web-based Mandarin dialog system, which adopts automatic speech recognition (ASR), text-to-speech (TTS), a VoiceXML browser, and VoIP technologies to create a user-friendly graphical user interface (GUI) and voice user interface (VUI). Users can use a traditional telephone, a cellular phone, or even a VoIP connection via a personal computer to interact with the VoiceXML server. At the same time, users can browse the web and access the same content with a common HTML- or XML-based browser. The proposed system shows excellent performance and can easily be incorporated into a voice ordering service for wider accessibility.

5.
Web-based solutions and interfaces should be easy to use, more intuitive, and should adapt to the natural and cognitive information-processing and presentation capabilities of humans. Today, human-controlled multimodal systems with multimodal interfaces are possible. They allow for a more natural and more advanced exchange of information between man and machine. The fusion of web-based solutions with natural modalities is therefore an effective solution for users who would like to access services and web content in a more natural way. This article presents a novel multimodal web platform (MWP) that enables flexible migration from traditionally closed and purpose-oriented multimodal systems to the wider scope offered by web applications. The MWP helps to overcome problems of interoperability, compatibility, and integration that usually accompany migrations from standard (task-oriented) applications to web-based solutions and multiservice networks, thus enabling the enrichment of general web-based user interfaces with several advanced natural modalities for communicating and exchanging information. The MWP is a system in which all modules are embedded within a generic network-based architecture. When using it, the fusion of user front ends with new modalities requires as little intervention in the code of the web application as possible. The fusion is implemented within the user front ends and leaves the web-application code and its functionalities intact.

6.
7.
This paper presents a new approach to making current and future television universally accessible. The proposed approach provides a means of universal accessibility both for remotely operating the TV set and for interacting with online services delivered through the TV. The proposal is based on the ISO/IEC 24752 "Universal Remote Console" (URC) standard, which defines an abstract user interface layer called the "user interface socket" and allows the development of pluggable (plug-in) user interfaces for any type of user and any control device. The proposed approach lays the foundation for the development of advanced user interfaces that can be interacted with through various modalities. Different prototypes have been developed based on this approach and tested with end users. The user tests have shown this approach to be a viable option for the proposed scenarios. Based on the experience gathered with the prototypes, recommendations and implementation options are suggested for commercial adoption.

8.
Multimodal interfaces have attracted more and more attention. Most research focuses on each interaction mode independently and then fuses information at the application level. Recently, several frameworks and models have been proposed to support the design and development of multimodal interfaces. However, it remains challenging to provide automatic modality adaptation in multimodal interfaces. Existing approaches use rule-based specifications to define the adaptation of input/output modalities, but rule-based specifications suffer from problems of completeness and coherence. Distinct from previous work, this paper presents a novel approach that quantifies the user's preference for each modality and treats adaptation as an optimization problem that searches for a set of input/output modalities matching the user's preferences. Our approach applies a cross-layer design, which considers adaptation from the perspectives of the interaction context, available system resources, and QoS requirements. Furthermore, our approach supports human-centric adaptation: a user can report a preference for a modality so that the selected modalities fit the user's personal needs. An optimal solution and a heuristic algorithm have been developed to automatically select an appropriate set of modality combinations in a specific situation. We have designed a framework based on the heuristic algorithm and existing ontologies, and applied the framework in a utility evaluation using a within-subject experiment. Fifty participants were invited to go through three scenarios and compare automatically selected modalities with randomly selected modalities. The results show that users perceived the automatically selected modalities as appropriate and satisfactory.
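
To illustrate the general idea of treating modality adaptation as an optimization problem, here is a minimal Python sketch. It is purely illustrative: the modality names, preference scores, resource costs, and budget are invented assumptions, and this is not the paper's algorithm. It exhaustively selects the allowed modality combination with the highest total user preference under a simple CPU budget.

```python
from itertools import combinations

# Hypothetical modality profiles (user preference score, CPU cost) -- invented numbers.
MODALITIES = {
    "speech_in":  {"preference": 0.8, "cpu": 0.5},
    "touch_in":   {"preference": 0.6, "cpu": 0.1},
    "gesture_in": {"preference": 0.4, "cpu": 0.7},
    "audio_out":  {"preference": 0.7, "cpu": 0.3},
    "visual_out": {"preference": 0.9, "cpu": 0.4},
}

def select_modalities(context_allowed, cpu_budget, min_required=2):
    """Exhaustively pick the allowed modality set with the highest total preference
    that stays within the CPU budget (feasible for a handful of modalities)."""
    best_set, best_score = None, -1.0
    names = [m for m in MODALITIES if m in context_allowed]
    for k in range(min_required, len(names) + 1):
        for combo in combinations(names, k):
            cost = sum(MODALITIES[m]["cpu"] for m in combo)
            score = sum(MODALITIES[m]["preference"] for m in combo)
            if cost <= cpu_budget and score > best_score:
                best_set, best_score = set(combo), score
    return best_set, best_score

# Example: in a noisy environment the context layer might rule out speech input.
allowed = {"touch_in", "gesture_in", "audio_out", "visual_out"}
print(select_modalities(allowed, cpu_budget=1.0))
```

A heuristic (e.g. greedy by preference-per-cost) would replace the exhaustive search when the number of candidate modalities grows.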

9.
In recent years, with the rapid development of embedded systems, research on embedded technology has become a hot topic; in the aerospace field in particular, embedded technology is applied extensively. With the development of crewed spaceflight, providing a complete set of graphical human-machine display interfaces for instrument equipment makes operation and monitoring more convenient for astronauts. This paper introduces a display support system implemented with FPGA technology, on top of which secondary developers can build user interfaces, which can greatly improve development efficiency.

10.
Richard Hesketh, Software, 1991, 21(11): 1165-1187

11.
C. Baber, T. Hoyes, N.A. Stanton, Displays, 1993, 14(4): 207-215
While packages using graphical user interfaces appear to have gained a sizeable share of the computer software market, there is surprisingly little research into whether they are superior to more traditional, character-based interface designs; research conducted so far often provides conflicting evidence. The purpose of this paper is to investigate the qualitative and quantitative differences between graphical and character-based user interfaces, using a range of methods and software packages.

12.
IMMIView is an interactive system that relies on multiple modalities and multi-user interaction to support collaborative design review. It was designed to offer natural interaction in visualization setups such as large-scale displays, head-mounted displays, or TabletPC computers. To support architectural design, the system provides content creation and manipulation, 3D scene navigation, and annotations. Users can interact with the system using laser pointers, speech commands, body gestures, and mobile devices. In this paper, we describe how the system was designed to meet architectural user requirements. In particular, the system takes advantage of multiple modalities to provide natural interaction for design review. We also propose a new graphical user interface adapted to architectural user tasks such as navigation and annotation. The interface relies on a novel stroke-based interaction supported by simple laser pointers as input devices for large-scale displays. Furthermore, input devices such as speech and body tracking allow IMMIView to support multiple users, and allow each user to select different modalities according to their preference and each modality's adequacy for the task. We present a multimodal fusion system developed to support multimodal commands in a collaborative, co-located environment, i.e. with two or more users interacting at the same time on the same system. The multimodal fusion system listens to inputs from all the IMMIView modules in order to model user actions and issue commands; the multiple modalities are fused by a simple rule-based sub-module developed in IMMIView and presented in this paper. A user evaluation of IMMIView is presented. The results show that users feel comfortable with the system and suggest that they prefer the multimodal approach to more conventional interactions, such as mouse and menus, for the architectural tasks presented.
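
A rule-based fusion step of the kind mentioned in this abstract can be sketched very simply. The Python example below is an illustrative assumption, not IMMIView's actual fusion module: it pairs a "create annotation" speech command with the closest laser-pointer position observed within a short time window, and lets unmatched events fall through as unimodal commands.

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str    # e.g. "speech", "laser"
    payload: dict
    timestamp: float

def fuse(events, window=1.5):
    """Toy rule: a 'create annotation' speech intent arriving within `window` seconds
    of a laser-pointer position is fused into a single placed-annotation action."""
    actions = []
    lasers = [e for e in events if e.modality == "laser"]
    for e in events:
        if e.modality == "speech" and e.payload.get("intent") == "create_annotation":
            nearby = [l for l in lasers if abs(l.timestamp - e.timestamp) <= window]
            if nearby:
                anchor = min(nearby, key=lambda l: abs(l.timestamp - e.timestamp))
                actions.append({"action": "annotate",
                                "position": anchor.payload["position"],
                                "text": e.payload.get("text", "")})
                continue
        actions.append({"action": "unimodal", "event": e})
    return actions

# Example: a laser fixation followed shortly by a spoken annotation command.
events = [
    Event("laser",  {"position": (120, 340)}, timestamp=10.2),
    Event("speech", {"intent": "create_annotation", "text": "check this column"},
          timestamp=10.9),
]
print(fuse(events))
```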

13.
One of the challenges that Ambient Intelligence (AmI) faces is providing a usable interaction concept to its users, especially those with a weak technical background. In this paper, we describe a new approach to integrating the interactive services provided by an AmI environment with the television set, one of the most widely used interaction clients in the home environment. The approach supports the integration of different TV set configurations, guaranteeing the possibility of developing universally accessible solutions. An implementation of this approach has been carried out as a multimodal, multi-purpose natural human-computer interface for elderly people, by creating adapted graphical user interfaces and navigation menus together with multimodal interaction (a simplified TV remote control and voice interaction). This user interface can also be adapted to other user groups. We have tested a prototype that adapts the videoconference and information services with a group of 83 users. The results from the user tests show that the group found the prototype both satisfactory and efficient to use.

14.
P. Sukaviriya, Knowledge, 1993, 6(4): 220-229
Research on adaptive interfaces in the past has lacked support from user interface tools which allow interfaces to be easily created and modified. Also, current user interface tools provide no support for user models which can collect task-oriented information about users. Developing an adaptive interface requires a user model and an adaptation strategy. It also, however, requires a user interface which can be adapted. The latter task is often time-consuming, especially in relation to more sophisticated user interfaces.

The paper presents a user interface design environment, UIDE, which has a different software infrastructure. Designers use high-level specifications to create a model of an application and links from the application to various interface components. The model is the heart of all design-time and run-time support in UIDE, including automatic dialog sequencing and help generation. UIDE provides automatic support for collecting task-oriented information about users by using the high-level specifications in its application model as a basic construct for a user model. Some examples of adaptive interfaces and adaptive help are presented that use the information collected in UIDE.


15.
Numerous engineering application systems have been developed over the past twenty years, and many of these applications will continue to be used for many years to come. Examples of such applications include CAD systems, finite-element analysis packages and inspection systems. Because many of these applications were developed before graphical workstations became available, they often have simple command-line user interfaces. Thus, there is a need for a graphical user interface management system (UIMS) that can be used to build point-and-click style interfaces for these existing engineering applications. In this paper we describe such a UIMS, and discuss its implementation using an object-oriented database tool. This UIMS allows users to create and modify user interfaces by editing graphical representations of the interfaces, thus eliminating the need to write code to build or modify an interface. The UIMS is implemented using an object-oriented database tool to take advantage of the data manipulation and storage management capabilities it provides. This approach reduces both the quantity and complexity of the code needed to implement the UIMS. It also allowed the UIMS to be implemented in a minimal amount of time.

16.
Emerging input modalities could facilitate more efficient user interaction with mobile devices. An end-user customization tool based on user-defined context-action rules lets users specify personal, multimodal interaction with smart phones and external appliances. The tool's input modalities include sensor-based, user-trainable free-form gestures; pointing with radio-frequency tags; and implicit inputs based on, for example, sensors, the Bluetooth environment, and phone platform events. The tool enables user-defined functionality through a blackboard-based context framework enhanced to manage rule-based application control. Test results on a prototype implemented on a smart phone with real context sources show that rule-based customization helps end users efficiently customize their smart phones and use novel input modalities.
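
The context-action rule mechanism described in this abstract can be illustrated with a small sketch. The Python example below is hypothetical (the rule contents, context keys, and action names are invented, and this is not the tool's actual rule engine): each rule fires its actions when all of its context conditions are satisfied on a flat context blackboard.

```python
# A hypothetical rule reads: WHEN all context conditions hold THEN run the actions.
RULES = [
    {
        "when": {"location": "home", "bluetooth.tv": "present", "gesture": "circle"},
        "then": ["tv.power_toggle"],
    },
    {
        "when": {"location": "office", "profile": "meeting"},
        "then": ["phone.silence", "calls.divert_voicemail"],
    },
]

def matching_actions(context, rules=RULES):
    """Return the actions of every rule whose conditions are all satisfied
    by the current context (a flat key -> value blackboard)."""
    fired = []
    for rule in rules:
        if all(context.get(key) == value for key, value in rule["when"].items()):
            fired.extend(rule["then"])
    return fired

# Example: the context framework reports the user is at home, the TV tag is in
# range, and a trained free-form "circle" gesture was just recognized.
context = {"location": "home", "bluetooth.tv": "present", "gesture": "circle"}
print(matching_actions(context))   # -> ['tv.power_toggle']
```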

17.
18.
This paper proposes a novel tabletop display system for natural communication and flexible information sharing. The proposed system is specifically designed to integrate two-dimensional (2D) and three-dimensional (3D) user interfaces by using a multi-user stereoscopic display, IllusionHole. The system takes awareness into consideration and provides both 2D and 3D information and user interfaces. On the display, a number of standard Windows desktop environments are provided as personal workspaces, as well as a shared workspace with a dedicated graphical user interface. In the personal workspaces, users can simultaneously access existing applications and data, and exchange information between personal and shared workspaces. In this way, the proposed system can seamlessly integrate personal, shared, 2D, and 3D workspaces with conventional user interfaces and effectively support communication and information sharing. To demonstrate the capabilities of the proposed display system, a modeling application was implemented, and a preliminary experiment confirmed the effectiveness of the system.

19.
1 Introduction. Research on multimodal user interfaces, which has emerged in recent years both in China and abroad, emphasizes that (1) multiple channels work in a parallel, cooperative way, and (2) sufficiency replaces precision, so imprecise input information is usually allowed. Allowing imprecise input into the human-computer interaction process is one of the distinguishing features of multimodal user interfaces; it means that unnecessary precision can be eliminated, greatly reducing the user's cognitive load. However, the multimodal integration algorithms proposed so far, both in China and abroad, essentially follow the traditional precise approach in their processing techniques and algorithm implementation, so the methods do not fit the problem well. This paper proposes using fuzzy mathematics to solve target reference resolution, a key problem in multimodal user interface integration, so that imprecise information can be integrated more effectively.
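
As a toy illustration of fuzzy target-reference resolution of this kind (not the paper's actual algorithm; the object data, membership functions, and parameters are invented assumptions), the Python sketch below scores candidate objects by combining a fuzzy "near the pointed location" membership with a fuzzy match against spoken attributes, and picks the best candidate for an imprecise reference such as "the red one".

```python
import math

def proximity_membership(obj_pos, pointed_pos, spread=80.0):
    """Fuzzy membership for 'near the pointed location': 1 at the point,
    decaying smoothly (Gaussian-shaped) with distance."""
    d = math.dist(obj_pos, pointed_pos)
    return math.exp(-(d / spread) ** 2)

def attribute_membership(obj, spoken_attrs):
    """Fraction of spoken attributes (e.g. 'red', 'cup') that the object satisfies."""
    if not spoken_attrs:
        return 1.0
    hits = sum(1 for a in spoken_attrs if a in obj["attributes"])
    return hits / len(spoken_attrs)

def resolve_reference(objects, pointed_pos, spoken_attrs):
    """Combine the two memberships with min (fuzzy AND) and return the
    best-scoring candidate for the imprecise reference."""
    scored = [(min(proximity_membership(o["pos"], pointed_pos),
                   attribute_membership(o, spoken_attrs)), o) for o in objects]
    return max(scored, key=lambda s: s[0])

objects = [
    {"name": "red_cup",   "pos": (100, 120), "attributes": {"red", "cup"}},
    {"name": "blue_cup",  "pos": (110, 130), "attributes": {"blue", "cup"}},
    {"name": "red_plate", "pos": (400, 300), "attributes": {"red", "plate"}},
]
# The user points roughly between the two cups and says "the red one".
print(resolve_reference(objects, pointed_pos=(105, 125), spoken_attrs={"red"}))
```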

20.
In this paper, we describe a user study evaluating the usability of an augmented reality (AR) multimodal interface (MMI). We have developed an AR MMI that combines free-hand gesture and speech input in a natural way using a multimodal fusion architecture. We describe the system architecture and present a study exploring the usability of the AR MMI compared with speech-only and 3D-hand-gesture-only interaction conditions. The interface was used in an AR application for selecting 3D virtual objects and changing their shape and color. For each interface condition, we measured task completion time, the number of user and system errors, and user satisfaction. We found that the MMI was more usable than the gesture-only interface condition, and users felt that the MMI was more satisfying to use than the speech-only interface condition; however, it was neither more effective nor more efficient than the speech-only interface. We discuss the implications of this research for designing AR MMIs and outline directions for future work. The findings could also be used to help develop MMIs for a wider range of AR applications, for example AR navigation tasks, mobile AR interfaces, or AR game applications.
