Similar Articles
1.
The considerable progress achieved in the design and development of new interaction devices between humans and machines has enabled the emergence of various powerful and efficient input and/or output devices, each bringing specific interaction modes. With the emergence of these devices, new interaction techniques and modes arise and new interaction capabilities are offered. New user interfaces need to be designed, or former ones need to evolve; the design of so-called plastic user interfaces contributes to handling such evolutions. The key requirement for the design of such a user interface is that the newly obtained user interface be adapted to the application and exhibit, at least, the same behavior as the previous (adapted) one. This paper addresses the problem of user interface evolution due to the introduction of new interaction devices and/or new interaction modes. More precisely, we are interested in the design process of a user interface resulting from the evolution of a former user interface after new devices and/or new interaction capabilities are introduced. We consider interface behaviors described by labelled transition systems, and comparison between user interfaces is handled by an extended definition of the bisimulation relation, used to compare user interface behaviors when interaction modes are replaced by new ones.
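The bisimulation-based comparison described above can be illustrated with a small sketch: a naive greatest-fixpoint check for strong bisimulation between two labelled transition systems. This is not the paper's extended relation; the dict-based LTS encoding and all state and label names below are assumptions for illustration.

```python
def bisimilar(lts1, lts2, s1, s2):
    """Naive greatest-fixpoint check for strong bisimulation between two
    labelled transition systems, each given as a dict mapping a state to
    a list of (label, successor) pairs."""
    rel = {(p, q) for p in lts1 for q in lts2}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            # p's moves must be matched by q, and vice versa, staying in rel.
            forward = all(any(a == b and (p2, q2) in rel
                              for b, q2 in lts2[q]) for a, p2 in lts1[p])
            backward = all(any(a == b and (p2, q2) in rel
                               for a, p2 in lts1[p]) for b, q2 in lts2[q])
            if not (forward and backward):
                rel.discard((p, q))
                changed = True
    return (s1, s2) in rel
```

Starting from the full relation and pruning unmatched pairs always terminates, since the relation only shrinks; the surviving pairs form the largest bisimulation, so two interfaces with the same labelled behavior are accepted even when their state names differ.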

2.
Pen-based user interfaces, a kind of Post-WIMP interface, rely on touch technology, dispense with the physical keyboard and mouse, and have changed human-computer interaction to some extent. While sketch-drawing and recognition software keeps emerging, mature design and development tools for pen-based interfaces have been lacking. Based on the PGIS interaction paradigm and a scenario-design method, we developed SDT, a scene design tool supporting mixed graphic and sketch input built on pen-based interaction primitives. First, following the high-cohesion, low-coupling principle from software engineering, a "separation-fusion" design method is proposed, from which the overall system architecture is derived. Second, the key techniques are introduced from three aspects: formal interface description, pen-based interaction primitives and single characters, and mixed input. Third, a complete example demonstrates the tool in detail and supports the system's usability and feasibility. Finally, two evaluation experiments verify the tool's advancement and effectiveness.

3.
An Interaction-Centered Post-WIMP Interface Model
With the development of hardware devices and software technologies, extensive research on novel interaction techniques based on Post-WIMP interfaces has been carried out at home and abroad. Given the variety of interaction devices and usage environments, a key problem in constructing Post-WIMP interfaces is selecting appropriate interaction components or techniques according to context during interface design, and combining and evaluating them effectively. Separating interface design from application semantics lets designers flexibly substitute interaction techniques. By analyzing the interaction process and the design levels of Post-WIMP interfaces, an interaction-centered, layered Post-WIMP interface model is established that separates the individual levels of interaction. On top of this model, a Post-WIMP interface generation tool is described. With this tool, designers can conveniently introduce new interaction techniques during design and apply them flexibly in the final software system, enabling rapid and effective interface prototyping and iterative evaluation. Application examples show that the Post-WIMP model and the generation tool help designers settle on a design scheme and evaluate it as a whole.

4.
5.
Today's computer–human interfaces are typically designed with the assumption that they are going to be used by an able-bodied person who is using a typical set of input and output devices, who has typical perceptual and cognitive abilities, and who is sitting in a stable, warm environment. Any deviation from these assumptions may drastically hamper the person's effectiveness—not because of any inherent barrier to interaction, but because of a mismatch between the person's effective abilities and the assumptions underlying the interface design. We argue that automatic personalized interface generation is a feasible and scalable solution to this challenge. We present our Supple system, which can automatically generate interfaces adapted to a person's devices, tasks, preferences, and abilities. In this paper we formally define interface generation as an optimization problem and demonstrate that, despite a large solution space (of up to 10^17 possible interfaces), the problem is computationally feasible. In fact, for a particular class of cost functions, Supple produces exact solutions in under a second for most cases, and in a little over a minute in the worst case encountered, thus enabling run-time generation of user interfaces. We further show how several different design criteria can be expressed in the cost function, enabling different kinds of personalization. We also demonstrate how this approach enables extensive user- and system-initiated run-time adaptations to the interfaces after they have been generated. Supple is not intended to replace human user interface designers—instead, it offers alternative user interfaces for those people whose devices, tasks, preferences, and abilities are not sufficiently addressed by the hand-crafted designs. Indeed, the results of our study show that, compared to manufacturers' defaults, interfaces automatically generated by Supple significantly improve speed, accuracy and satisfaction of people with motor impairments.
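The idea of casting interface generation as discrete optimization can be caricatured in a few lines. All widget names, widths, and cost weights below are invented for illustration; Supple's actual cost functions and its branch-and-bound search are far richer than this exhaustive sketch.

```python
from itertools import product

# Candidate concrete widgets per abstract element: (name, width_px, effort_cost).
# These numbers are made up for the example.
CANDIDATES = {
    "volume":  [("slider", 120, 1.0), ("spinbox", 60, 2.5)],
    "channel": [("list", 100, 1.2), ("dropdown", 70, 1.8)],
    "power":   [("toggle", 40, 0.5), ("two_buttons", 80, 0.9)],
}

def generate_interface(max_width):
    """Exhaustively pick one widget per element, minimizing summed effort
    cost subject to a total-width budget. Returns (assignment, cost)."""
    elements = list(CANDIDATES)
    best, best_cost = None, float("inf")
    for combo in product(*(CANDIDATES[e] for e in elements)):
        width = sum(w for _, w, _ in combo)
        cost = sum(c for _, _, c in combo)
        if width <= max_width and cost < best_cost:
            best = {e: w[0] for e, w in zip(elements, combo)}
            best_cost = cost
    return best, best_cost
```

Shrinking the width budget forces the optimizer onto more compact but higher-effort widgets (e.g. a spinbox instead of a slider), which is exactly the device-adaptation behavior described above.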

6.
《Computer Networks》1999,31(11-16):1695-1708
Today's Internet appliances feature user interface technologies almost unknown a few years ago: touch screens, styli, handwriting and voice recognition, speech synthesis, tiny screens, and more. This richness creates problems. First, different appliances use different languages: WML for cell phones; SpeechML, JSML, and VoxML for voice-enabled devices such as phones; HTML and XUL for desktop computers; and so on. Thus, developers must maintain multiple source code families to deploy interfaces to one information system on multiple appliances. Second, user interfaces differ dramatically in complexity (e.g., PC versus cell phone interfaces). Thus, developers must also manage interface content. Third, developers risk writing appliance-specific interfaces for an appliance that might not be on the market tomorrow. A solution is to build interfaces with a single, universal language free of assumptions about appliances and interface technology. This paper introduces such a language, the User Interface Markup Language (UIML), an XML-compliant language. UIML insulates the interface designer from the peculiarities of different appliances through style sheets. A measure of the power of UIML is that it can replace hand-coding of Java AWT or Swing user interfaces.
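UIML's separation of a generic interface description from appliance-specific style sheets can be sketched in miniature. The element classes, style strings, and appliance names below are invented for illustration and are not actual UIML vocabulary; a real UIML document would express the same split in XML.

```python
# A device-independent description: abstract parts with classes and content.
UI = [
    {"part": "greeting", "class": "label", "content": "Welcome"},
    {"part": "confirm",  "class": "trigger", "content": "OK"},
]

# Per-appliance "style sheets" mapping abstract classes to concrete markup.
STYLES = {
    "html": {"label": "<p>{content}</p>",
             "trigger": "<button>{content}</button>"},
    "wml":  {"label": "<p>{content}</p>",
             "trigger": '<do type="accept"><go href="#"/></do>'},
}

def render(ui, appliance):
    """Render one abstract description to a concrete appliance language."""
    sheet = STYLES[appliance]
    return "".join(sheet[p["class"]].format(**p) for p in ui)
```

The single `UI` description renders to either target, so adding a new appliance means writing one new style sheet rather than a new source-code family.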

7.
A method called “SymbolDesign” is proposed that can be used to design user-centered interfaces for pen-based input devices. It can also extend the functionality of pointer input devices, such as the traditional computer mouse or the Camera Mouse, a camera-based computer interface. Users can create their own interfaces by choosing single-stroke movement patterns that are convenient to draw with the selected input device, and by mapping them to a desired set of commands. A pattern could be the trace of a moving finger detected with the Camera Mouse or a symbol drawn with an optical pen. The core of the SymbolDesign system is a dynamically created classifier, in the current implementation an artificial neural network. The architecture of the neural network automatically adjusts according to the complexity of the classification task. In experiments, subjects used the SymbolDesign method to design and test the interfaces they created, for example, to browse the web. The experiments demonstrated good recognition accuracy and responsiveness of the user interfaces. The method provided an easily designed and easily used computer input mechanism for people without physical limitations, and, with some modifications, has the potential to become a computer access tool for people with severe paralysis.
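As a much-simplified stand-in for SymbolDesign's adaptive neural network, the sketch below classifies a single-stroke pattern by nearest-neighbour distance to resampled, normalized templates. The resampling count, the distance metric, and the example templates are all assumptions for illustration, not the paper's classifier.

```python
import math

def resample(stroke, n=16):
    """Resample a polyline (list of (x, y) points) to n points spaced
    evenly along its arc length."""
    pts = [tuple(map(float, p)) for p in stroke]
    seg = [math.dist(a, b) for a, b in zip(pts, pts[1:])]
    total = sum(seg) or 1.0
    step = total / (n - 1)
    out, travelled, i = [pts[0]], 0.0, 0
    while len(out) < n and i < len(seg):
        target = step * len(out)
        if seg[i] > 0 and travelled + seg[i] >= target - 1e-9:
            t = (target - travelled) / seg[i]
            (ax, ay), (bx, by) = pts[i], pts[i + 1]
            out.append((ax + t * (bx - ax), ay + t * (by - ay)))
        else:
            travelled += seg[i]
            i += 1
    while len(out) < n:          # numerical safety: pad with the endpoint
        out.append(pts[-1])
    return out

def normalize(pts):
    """Translate to the centroid and scale so the largest extent is 1."""
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    shifted = [(x - cx, y - cy) for x, y in pts]
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]

def classify(stroke, templates, n=16):
    """Return the template name whose resampled, normalized shape is
    closest (summed point-wise distance) to the input stroke."""
    q = normalize(resample(stroke, n))
    def dist_to(t):
        r = normalize(resample(t, n))
        return sum(math.dist(a, b) for a, b in zip(q, r))
    return min(templates, key=lambda name: dist_to(templates[name]))
```

Because strokes are normalized for position and scale, a user can draw a template symbol anywhere on screen at any size; the neural network in the actual system additionally adapts its architecture to the number and difficulty of the symbols.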

8.
Ari Jaaksi 《Software》1995,25(11):1203-1221
This paper presents an object-oriented approach for the specification of graphical user interfaces. Specification starts with the analysis of the end user's operations. The user interface is then designed on the basis of this analysis. Operation analysis is followed by structure and component specification, which presents the dialogue structure of the application and the contents of each dialogue. Visualization produces the final screen layouts, and task specification documents the usage of the user interface for the purpose of creating user guides. The method presented in this paper makes it easier for a designer to take the end user's needs into account. Still, it does not automatically guarantee good-quality user interfaces. The top-down nature of the method allows the designer to concentrate on the most important aspects of the user interface and split the design procedure into manageable pieces. Also, the visibility of the process allows the designer to communicate with other people while specifying the user interface. This paper connects the method with the object-oriented specification of entire applications. It briefly explains the connections with object-oriented analysis and design, and demonstrates how to implement the specified user interface in an object-oriented fashion. The approach presented in this paper is being applied in the development of a large network management system with about two million lines of C++ code running in the X11 environment. Still, the method does not require the specification to be implemented with any specific windowing system. The only requirement is that the user interface is based on graphical elements, such as dialogues, push-buttons and text fields.

9.
In user interfaces of modern systems, users get the impression of directly interacting with application objects. In 3D-based user interfaces, novel input devices, like hand and force input devices, are being introduced. They aim at providing natural ways of interaction. The use of a hand input device allows the recognition of static poses and dynamic gestures performed by a user's hand. This paper describes the use of a hand input device for interacting with a 3D graphical application. A dynamic gesture language, which allows users to teach the system hand gestures, is presented. Furthermore, a user interface integrating the recognition of these gestures and providing feedback for them is introduced. Particular attention has been paid to implementing a tool for easy specification of dynamic gestures, and to strategies for providing graphical feedback to users' interactions. To demonstrate that the introduced 3D user interface features, and the way the system presents graphical feedback, are not restricted to a hand input device, a force input device has also been integrated into the user interface.

10.
As the use of mobile touch devices continues to increase, distinctive user experiences can be provided through direct manipulation. The characteristics of touch interfaces should therefore be considered with regard to their controllability. This study provides a design approach for touch-based user interfaces: a derivation procedure for the touchable area is proposed as a design guideline based on input behavior. To this end, two empirical tests were conducted on a smartphone interface. Fifty-five participants were asked to perform a series of input tasks on a screen. As a result, a touchable area with a desirable hit rate of 90% could be derived depending on the icon design. To improve the applicability of the touchable area, user error was analyzed based on an omission-commission classification. Among designs targeting hit rates of 90, 95, and 99%, the 95% design proved most suitable. This study contributes practical implications for user interaction design with finger-based controls. Relevance to industry: This research describes a distinctive design approach that guarantees the desired touch accuracy for effective use of mobile touch devices. The results will therefore encourage interface designers to take into account the input behavior of fingers from a user-centered perspective.
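The relationship between target size and hit rate can be sketched analytically if one assumes touch points scatter around the target centre as an isotropic bivariate normal distribution. This is purely a modelling assumption for illustration; the study above derived its touchable areas empirically from observed input behavior.

```python
import math

def touch_radius(sigma, hit_rate):
    """Radius of a circular target achieving the given hit rate when touch
    points follow an isotropic bivariate normal with std sigma per axis.
    Uses P(r <= R) = 1 - exp(-R^2 / (2 * sigma^2)), solved for R."""
    return sigma * math.sqrt(2.0 * math.log(1.0 / (1.0 - hit_rate)))
```

Under this model, moving the design target from a 90% to a 99% hit rate costs a noticeably larger touchable area, which mirrors the study's trade-off between hit rate and screen real estate.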

11.
Control centric approach in designing scrolling and zooming user interfaces
The dynamic systems approach to the design of continuous interaction interfaces allows the designer to use simulations and analytical tools to analyse the behaviour and stability of the controlled system, both alone and when coupled with a manual control model of user behaviour. This approach also helps designers calibrate and tune the parameters of the system before the actual implementation and in response to user feedback. In this work we provide a dynamic systems interpretation of the coupling of internal states involved in speed-dependent automatic zooming, and test our implementation in a text browser on a Pocket PC instrumented with a tilt sensor. We illustrate simulated and experimental results of the use of the proposed coupled navigation and zooming interface using tilt and touch screen input.
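The core coupling in speed-dependent automatic zooming (zoom out as scrolling accelerates, so the on-screen flow stays below a perceptual limit) can be sketched as follows. The constants and the first-order smoothing are illustrative, not the calibrated dynamics from the paper.

```python
def sdaz_zoom(doc_speed, v_screen_max=100.0, z_min=0.2):
    """Target magnification: keep on-screen speed (doc_speed * zoom) at or
    below v_screen_max pixels/second, never zooming out past z_min."""
    if doc_speed <= v_screen_max:
        return 1.0
    return max(z_min, v_screen_max / doc_speed)

def step(zoom, doc_speed, dt=0.02, tau=0.15):
    """One first-order lag step toward the target zoom, the kind of smooth
    transition a dynamic-systems treatment makes easy to simulate and tune."""
    return zoom + (sdaz_zoom(doc_speed) - zoom) * (dt / tau)
```

Simulating `step` before implementation lets the designer tune `tau` (how quickly the view zooms out) and `v_screen_max` (the tolerated on-screen flow) against a model of user control, as the abstract describes.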

12.
A task that can be decomposed into subtasks with different technological demands may be a challenge, since it requires multiple interactive environments as well as transitions between them. Some of these transitions may involve changes in hardware devices and interface paradigms at the same time. Some previous works have proposed various setups for hybrid user interfaces, but none of them focused on the design of transition interactions. Our work emphasizes the importance of interaction continuity as a guideline in the design and evaluation of transitional interfaces within a hybrid user interface (HUI). Finally, an exploratory study demonstrates how this design aspect is perceived by users during transitions in an HUI composed of three interactive environments.

13.
An Architectural Model for an Automatic User Interface Construction Tool
This paper presents an architectural model for an automatic user interface construction tool, focusing on the model's three-view idea. The first view is a direct-manipulation environment, also called the visual interface editor, in which the interface designer directly creates the desired interface and uses the mouse and other input devices to define its runtime behavior. The second view is an editable interface description language, called IDL. The third view is a support library of interaction-technique classes together with a set of runtime support tools; it forms the working platform for the other two views and is designed with object-oriented techniques. Finally, the relationships among the three views are discussed, and an application example is given to validate the model.

14.
User interface management systems (UIMS) are a class of software development tools. GI is a graphical, interactive UIMS designed and implemented according to the characteristics of user interfaces for VLSI/CAD software. Based on a simplified event-response model and aiming primarily at ease of use, it automatically generates the corresponding application user interface from the user's specification. It supports menu, keyboard, and window input, and serves applications in an asynchronous, parallel manner. It not only offers good interactive performance but also preserves the independence and integrity of the application.

15.
Haptic technologies are often used to improve access to the structural content of graphical user interfaces, thereby augmenting the interaction process for blind users. While haptic design guidelines offer valuable assistance when developing non-visual interfaces, the recommendations presented are often tailored to the feedback produced via one particular haptic input/output device. A blind user is therefore restricted to interacting with a device which may be unfamiliar to him or her, rather than selecting from the range of commercially available products. This paper reviews devices available on the first- and second-hand markets, and describes an exploratory study undertaken with 12 blindfolded sighted participants to determine the effectiveness of three devices for non-visual web interaction. The force-feedback devices chosen for the study ranged in the number of translations and rotations that the user was able to perform when interacting with them. Results indicated that the Novint Falcon could be used to target items faster in the first task presented, compared with the other devices. However, participants agreed that the force-feedback mouse was most comfortable to use when interacting with the interface. Findings highlight the benefits which low-cost haptic input/output devices can offer to the non-visual browsing process, and the changes which may need to be made to accommodate their deficiencies. The study has also highlighted the need for web designers to integrate appropriate haptic feedback on their web sites to cater for the strengths and weaknesses of various devices, in order to provide universally accessible sites and online applications.

16.
17.
User interfaces of current 3D and virtual reality environments require highly interactive input/output (I/O) techniques and appropriate input devices, providing users with natural and intuitive ways of interacting. This paper presents an interaction model, some techniques, and some ways of using novel input devices for 3D user interfaces. The interaction model is based on a tool‐object syntax, where the interaction structure syntactically simulates an action sequence typical of a human's everyday life: One picks up a tool and then uses it on an object. Instead of using a conventional mouse, actions are input through two novel input devices, a hand‐ and a force‐input device. The devices can be used simultaneously or in sequence, and the information they convey can be processed in a combined or in an independent way by the system. The use of a hand‐input device allows the recognition of static poses and dynamic gestures performed by a user's hand. Hand gestures are used for selecting, or acting as, tools and for manipulating graphical objects. A system for teaching and recognizing dynamic gestures, and for providing graphical feedback for them, is described.

18.
IMMIView is an interactive system that relies on multiple modalities and multi-user interaction to support collaborative design review. It was designed to offer natural interaction in visualization setups such as large-scale displays, head-mounted displays or TabletPC computers. To support architectural design, our system provides content creation and manipulation, 3D scene navigation and annotations. Users can interact with the system using laser pointers, speech commands, body gestures and mobile devices. In this paper, we describe how we designed a system to meet architectural user requirements. In particular, our system takes advantage of multiple modalities to provide natural interaction for design review. We also propose a new graphical user interface adapted to architectural user tasks, such as navigation or annotation. The interface relies on a novel stroke-based interaction supported by simple laser pointers as input devices for large-scale displays. Furthermore, input modalities such as speech and body tracking allow IMMIView to support multiple users, and allow each user to select different modalities according to their preference and the adequacy of a modality for the task at hand. We present a multi-modal fusion system developed to support multi-modal commands in a collaborative, co-located environment, i.e. with two or more users interacting at the same time on the same system. The multi-modal fusion system listens to inputs from all the IMMIView modules in order to model user actions and issue commands. The multiple modalities are fused by a simple rule-based sub-module developed in IMMIView and presented in this paper. A user evaluation of IMMIView is presented. The results show that users feel comfortable with the system and suggest that users prefer the multi-modal approach to more conventional interactions, such as mouse and menus, for the architectural tasks presented.

19.
In this article, we present a practical approach to analyzing mobile usage environments. We propose a framework for analyzing the restrictions that characteristics of different environments pose on the user's capabilities. These restrictions along with current user interfaces form the cost of interaction in a certain environment. Our framework aims to illustrate that cost and what causes it. The framework presents a way to map features of the environment to the effects they cause on the resources of the user and in some cases on the mobile device. This information can be used for guiding the design of adaptive and/or multimodal user interfaces or devices optimized for certain usage environments. An example of using the framework is presented along with some major findings and three examples of applying them in user interface design.

20.
Designing user interfaces which can cope with unconventional control properties is challenging, and conventional interface design techniques are of little help. This paper examines how interactions can be designed to explicitly take into account the uncertainty and dynamics of control inputs. In particular, the asymmetry of feedback and control channels is highlighted as a key design constraint, which is especially obvious in current non-invasive brain–computer interfaces (BCIs). Brain–computer interfaces are systems capable of decoding neural activity in real time, thereby allowing a computer application to be directly controlled by thought. BCIs, however, have totally different signal properties than most conventional interaction devices. Bandwidth is very limited and there are comparatively long and unpredictable delays. Such interfaces cannot simply be treated as unwieldy mice. In this respect they are an example of a growing field of sensor-based interfaces which have unorthodox control properties. As a concrete example, we present the text entry application “Hex-O-Spell”, controlled via motor-imagery based electroencephalography (EEG). The system utilizes the high visual display bandwidth to help compensate for the limited control signals, where the timing of the state changes encodes most of the information. We present results showing the comparatively high performance of this interface, with entry rates exceeding seven characters per minute.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号