Similar Literature
 20 similar records found
1.
Using a mobile device together with a large shared screen supports collaborative tasks and potentially prevents interference among users. In order to evaluate the usability of inter-device interaction, this paper compared two fundamental inter-device interaction styles: one-handed and two-handed interaction. The one-handed interaction style uses only one hand to select an object from a large display device, while the two-handed interaction style requires the cooperation of both hands to make a selection. A framework was developed to implement these two interaction styles. Based on the framework, a pretest-posttest, repeated-measures study was conducted to compare their differences. All participants went through eight tasks, differentiated by both the selection order (sequential or random) and the density level (sparse or dense layout), using both interaction styles. During the study, both the completion time and the error rate for each task with each interaction style were recorded. In addition, the IBM Post-Study System Usability Questionnaire (PSSUQ) was used to evaluate subjective satisfaction with each interaction style. The overall PSSUQ scores indicate that both interaction styles receive positive feedback with high user satisfaction. The study also revealed that the one-handed interaction took less time to complete tasks (i.e., was more efficient) than the two-handed interaction, while the two-handed interaction style had a lower error rate than the one-handed interaction, especially in a dense layout.

2.
The RELATE interaction model is designed to support spontaneous interaction of mobile users with devices and services in their environment. The model is based on spatial references that capture the spatial relationship of a user's device with other co-located devices. Spatial references are obtained by relative position sensing and integrated in the mobile user interface to spatially visualize the arrangement of discovered devices and to provide direct access for interaction across devices. In this paper we discuss two prototype systems demonstrating the utility of the model in collaborative and mobile settings, and present a study on the usability of spatial list and map representations for device selection.
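To make the notion of a spatial reference concrete, here is a minimal sketch (the data layout, field names, and example values are assumptions for illustration only, not the RELATE API): each discovered device carries a relative bearing and distance, and a spatial list view can simply order devices left to right by bearing.

from dataclasses import dataclass

@dataclass
class SpatialReference:
    device: str         # discovered peer device
    bearing_deg: float  # direction relative to the user's device, 0 = straight ahead
    distance_m: float   # relative distance obtained from position sensing

def list_order(refs):
    """Order devices left-to-right by bearing, as a spatial list representation might."""
    return sorted(refs, key=lambda r: r.bearing_deg)

refs = [
    SpatialReference("projector", 10.0, 2.5),
    SpatialReference("laptop-anna", -35.0, 1.0),
    SpatialReference("printer", 80.0, 4.2),
]
print([r.device for r in list_order(refs)])   # ['laptop-anna', 'projector', 'printer']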

3.
Storage bins: mobile storage for collaborative tabletop displays   (total citations: 1; self-citations: 0; by others: 1)
The ability to store resource items anywhere in the workspace and move them around can be critical for coordinating task and group interactions at a table. However, existing casual storage techniques for digital workspaces only provide access to stored items at the periphery of the workspace, potentially compromising collaborative interactions at a digital tabletop display. To facilitate this storage behavior in a digital tabletop workspace, we developed Storage Bins, a mobile storage mechanism that combines the space-preserving features of existing peripheral storage mechanisms with the capability to relocate stored items in the workspace. A user study explores the utility of Storage Bins in tabletop display collaboration.

4.
Distributable user interfaces enable users to distribute user interface interaction objects (i.e., panels, buttons, input fields, checkboxes, etc.) across different displays, using a set of distribution primitives to manipulate them in real time. This work shows how this kind of user interface facilitates computer-supported collaborative learning in modern classrooms. These classrooms provide teachers and students with display ecosystems consisting of stationary displays, such as smart projectors and smart TVs, as well as mobile displays owned by teachers and students, such as smartphones, tablets, and laptops. The distribution of user interface interaction objects enables teachers to modify, in real time, the interaction objects that are available to students in order to control and promote collaboration and participation among them during learning activities. We propose developing this type of application using an extension of the CAMELEON reference framework that supports the definition of UI distribution models. The Essay exercise is presented as a case study in which teachers control collaboration among students by distributing user interface interaction objects.
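To illustrate the idea of a distribution primitive, the following minimal sketch (the class and function names are invented for this example and are not taken from the paper or from CAMELEON) models interaction objects that can be moved between displays at run time:

from dataclasses import dataclass, field

@dataclass
class Display:
    name: str
    objects: list = field(default_factory=list)

@dataclass
class InteractionObject:
    label: str

def distribute(obj: InteractionObject, source: Display, target: Display) -> None:
    """Hypothetical distribution primitive: move a UI interaction object between displays."""
    source.objects.remove(obj)
    target.objects.append(obj)

projector = Display("classroom projector")
tablet = Display("student tablet")
answer_field = InteractionObject("essay answer field")
projector.objects.append(answer_field)

# The teacher pushes the answer field from the shared projector to a student's tablet.
distribute(answer_field, projector, tablet)
print([o.label for o in tablet.objects])   # ['essay answer field']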

5.
In this paper, we present a believable interaction mechanism for manipulating multiple objects in a ubiquitous/augmented virtual environment. A believable interaction in a multimodal framework is defined as a persistent and consistent process that matches contextual experience and common sense about its feedback. We present a tabletop interface as a quasi-tangible framework to provide believable processes. An enhanced tabletop interface is designed to support a multimodal environment. As an exemplar task, we applied the concept to fast access and manipulation of distant objects. A set of enhanced mechanisms is presented for remote manipulation, including inertial widgets, a transformable tabletop, and proxies. The proposed method is evaluated in terms of both performance and user acceptability in comparison with previous approaches. The proposed technique uses intuitive hand gestures and provides a higher level of believability. It can also support other types of accessing techniques such as browsing and manipulation. Copyright © 2007 John Wiley & Sons, Ltd.

6.
周明骏  徐礼爽  田丰  戴国忠 《软件学报》2008,19(10):2780-2788
Pen-based user interfaces are an important class of Post-WIMP (window, icon, menu, pointer) interfaces that provide users with a natural style of interaction. However, most current pen-based user interface toolkits are oriented toward single-user tasks and do not support collaborative application scenarios well. Based on an analysis of the characteristics of pen interaction and the functional requirements of collaborative environments, a toolkit named CoPen Toolkit was designed and implemented to support the development of collaborative pen-based user interfaces. It provides a flexible architecture and extensible components, supporting ink description, event handling, network collaboration, and related functions. Several prototype systems were built on top of CoPen Toolkit; practical experience shows that it supports the development of collaborative pen-based user interfaces well.

7.
As more interactive surfaces enter public life, casual interactions from passersby are bound to increase. Most of these users can be expected to carry a mobile phone or PDA, which nowadays offers significant computing capabilities of its own. This offers new possibilities for interaction between these users' private displays and large public ones. In this paper, we present a system that supports such casual interactions. We first explore a method to track mobile phones placed on a horizontal interactive surface by examining the shadows they cast on the surface. This approach detects the presence of a mobile device, as opposed to any other opaque object, through the signal strength emitted by the built-in Bluetooth transceiver, without requiring any modifications to the devices' software or hardware. We then investigate interaction between a Sudoku game running in parallel on the public display and on mobile devices carried by passing users. Mobile users can join a running game by placing their devices on a designated area; the only requirement is that the device is in discoverable Bluetooth mode. After a specific device has been recognized, client software is sent to the device, which then enables the user to interact with the running game. Finally, we report the results of a study we conducted to determine the effectiveness and intrusiveness of interactions between users at the tabletop and users with mobile devices.
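The pairing logic described above can be sketched roughly as follows (an illustrative assumption, not the authors' implementation: detect_shadow_blobs() and bluetooth_rssi_scan() are hypothetical stand-ins for the camera pipeline and a Bluetooth inquiry, and the RSSI threshold is invented):

def detect_shadow_blobs():
    """Hypothetical stand-in for the vision pipeline: ids of opaque objects
    currently lying on the surface (canned sample data here)."""
    return {"blob-1"}

def bluetooth_rssi_scan():
    """Hypothetical stand-in for a Bluetooth inquiry: discoverable device
    addresses mapped to a signal-strength estimate (canned sample data)."""
    return {"AA:BB:CC:DD:EE:FF": -42, "11:22:33:44:55:66": -78}

RSSI_NEAR_SURFACE = -50   # assumed threshold: a strong signal suggests the device sits on the table

def associate_new_blobs(known_blobs, known_devices):
    """When a new opaque blob appears, look for a newly seen, strong Bluetooth
    device and treat the pair as 'a phone was placed on the surface'."""
    blobs = detect_shadow_blobs()
    rssi = bluetooth_rssi_scan()
    new_blobs = blobs - known_blobs
    candidates = {addr for addr, strength in rssi.items()
                  if strength >= RSSI_NEAR_SURFACE and addr not in known_devices}
    return list(zip(sorted(new_blobs), sorted(candidates)))

print(associate_new_blobs(set(), set()))   # e.g. [('blob-1', 'AA:BB:CC:DD:EE:FF')]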

8.
This paper proposes a novel tabletop display system for natural communication and flexible information sharing. The proposed system is specifically designed to integrate two-dimensional (2D) and three-dimensional (3D) user interfaces by using a multi-user stereoscopic display, IllusionHole. The proposed system takes awareness into consideration and provides both 2D and 3D information and user interfaces. On the display, a number of standard Windows desktop environments are provided as personal workspaces, as well as a shared workspace with a dedicated graphical user interface. In the personal workspaces, users can simultaneously access existing applications and data, and exchange information between personal and shared workspaces. In this way, the proposed system can seamlessly integrate personal, shared, 2D, and 3D workspaces with conventional user interfaces and effectively support communication and information sharing. To demonstrate the capabilities of the proposed display system, a modeling application was implemented. A preliminary experiment confirmed the effectiveness of this system. Copyright © 2006 John Wiley & Sons, Ltd.

9.
In this paper, we reflect on the design, development, and deployment of G-nome Surfer, a multi-touch tabletop user interface for collaborative exploration of genomic data. G-nome Surfer lowers the threshold for using advanced bioinformatics tools, reduces the mental workload associated with manipulating genomic information, and fosters effective collaboration. We describe our two-year-long effort, from design strategy through iterations of design, development, and evaluation. This paper presents four main contributions: (1) a set of design requirements for supporting collaborative exploration in data-intensive domains; (2) the design, implementation, and validation of a multi-touch tabletop interface for collaborative exploration; (3) a methodology for evaluating the strengths and limitations of tabletop interaction for collaborative exploration; and (4) empirical evidence for the feasibility and value of integrating tabletop interaction in college-level education.

10.
With the world population ageing, helping seniors adapt to life with technology is an important issue. Many technology applications require users to process a great deal of information, which makes the user interface an important bridge in human-computer interaction, and the inconveniences caused by ageing raise further cognitive and operational issues with such products. This study proposes a user interface design based on natural interaction to increase seniors' intention to use technology. In the proposed approach, a Kinect sensor is used to capture depth information about seniors' movements so that the system's user interface can be operated intuitively by gesture. In the system framework, morphological operations are first applied to identify hand features from the depth values obtained from the sensor; gesture recognition is used to interpret the user's operating behavior and drive the interactive actions, and collision detection is applied to confirm that an operation has taken effect. In addition, an interpretive structural model (ISM) is used to decompose the design elements of the interactive interface and to derive targets and directions for solving the design problem, and the concept of affordance guides the development of the graphical user interface so that intuitive operation and usability can be achieved. Finally, based on the proposed methodology, an intuitive user interface for digital devices was implemented in the Java programming language to verify its feasibility for seniors. The proposed method can also be widely applied to developing user interfaces for various products.
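As a rough illustration of this kind of pipeline (a sketch only: the synthetic depth frame, thresholds, and button geometry are invented for the example, and OpenCV is used here in place of whatever implementation the authors chose), depth thresholding plus a morphological opening isolates the hand, and a simple hit test plays the role of collision detection:

import numpy as np
import cv2

# Synthetic depth frame (millimetres): far background with a nearby "hand" region.
depth = np.full((240, 320), 2000, dtype=np.uint16)
depth[80:140, 150:220] = 700        # assumed hand at roughly 0.7 m

# 1) Threshold depth to keep only objects close to the sensor.
near_mask = ((depth > 400) & (depth < 900)).astype(np.uint8) * 255

# 2) Morphological opening removes small depth-noise speckles.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
clean = cv2.morphologyEx(near_mask, cv2.MORPH_OPEN, kernel)

# 3) Take the largest blob as the hand and compute its centroid.
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)
m = cv2.moments(hand)
cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

# 4) "Collision detection": does the hand centroid fall inside an assumed on-screen button?
bx, by, bw, bh = 140, 70, 100, 80
hit = bx <= cx < bx + bw and by <= cy < by + bh
print(f"hand centroid at ({cx}, {cy}), button pressed: {hit}")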

11.
This article proposes a 3-dimensional (3D) vision-based ambient user interface as an interaction metaphor that exploits a user's personal space and dynamic gestures. In human-computer interaction, to provide natural interactions with a system, a user interface should not be a bulky or complicated device. In this regard, the proposed ambient user interface utilizes an invisible personal space to remove cumbersome devices, where the invisible personal space is virtually augmented through 3D vision techniques. For natural interactions with the user's dynamic gestures, the user of interest is extracted from the image sequences by the proposed user segmentation method. This method can retrieve 3D information from the segmented user image through 3D vision techniques and a multiview camera. With the retrieved 3D information about the user, a set of 3D boxes (SpaceSensor) can be constructed and augmented around the user; the user can then interact with the system by touching the augmented SpaceSensor. In tracking the user's dynamic gestures, the computational complexity of SpaceSensor is lower than that of conventional 2-dimensional vision-based gesture tracking techniques, because only the touched positions of SpaceSensor are tracked. According to the experimental results, the proposed ambient user interface can be applied to various systems that require real-time dynamic user gestures for their interactions, both in real and virtual environments.
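The touch test behind such a set of boxes can be sketched as a point-in-box check (an illustrative assumption, not the authors' code; the box coordinates and hand points below are invented):

import numpy as np

# SpaceSensor-style cells: axis-aligned 3D boxes given as (min_corner, max_corner) in metres,
# placed around the user (e.g. one cell to the right, one to the left).
boxes = {
    "right": (np.array([0.4, 0.0, 0.2]), np.array([0.7, 0.5, 0.6])),
    "left":  (np.array([-0.7, 0.0, 0.2]), np.array([-0.4, 0.5, 0.6])),
}

def touched_cells(points, boxes):
    """Return the names of boxes that contain at least one 3D point
    (e.g. hand points reconstructed from a multiview camera)."""
    touched = []
    for name, (lo, hi) in boxes.items():
        inside = np.all((points >= lo) & (points <= hi), axis=1)
        if inside.any():
            touched.append(name)
    return touched

hand_points = np.array([[0.55, 0.3, 0.4], [0.1, 0.2, 0.3]])   # assumed reconstructed hand points
print(touched_cells(hand_points, boxes))                       # ['right']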

12.
This article presents the design aspects and development processes involved in transforming a general-purpose mobile robotic platform into a semi-autonomous agricultural robot sprayer, focusing on user interfaces for teleoperation. The hardware and software modules that must be installed on the system are described, with particular emphasis on human-robot interaction, and details of the technology are given with a focus on user interface aspects. Two laboratory experiments and two field studies evaluating the usability of the user interface provide evidence for the increased usability of the prototype robotic system. Specifically, a laboratory study empirically evaluated the type of target-selection input device: the mouse and the digital pen outperformed the Wiimote in terms of usability. A field experiment evaluated the effect of three design factors: (a) type of screen output, (b) number of views, and (c) type of robot control input device. Results showed that participants were significantly more effective, but less efficient, when they had multiple views than when they had a single view. The PC keyboard was also found to significantly outperform the PS3 gamepad in terms of interaction efficiency and perceived usability. Heuristic evaluations of different user interfaces were also performed using research-based HRI heuristics. Finally, a study of participants' overall user experience found that the system was evaluated positively on the User Experience Questionnaire scales.

13.
This paper discusses a particular issue in the context of disappearing computing, namely, user mobility. Mobile users may carry with them a variety of wireless gadgets while being immersed in a physical environment encompassing numerous computing devices. In such a situation, it is most likely that the number and type of devices will vary dynamically during interactions. The Voyager development framework supports the implementation of ambient dialogues, i.e., dynamically distributed user interfaces, which exploit, on the fly, the wireless devices available at a given point in time. This paper describes the Voyager implementation, focusing on: the device discovery and registry architecture, the device-embedded software implementation, the ambient dialogue style and corresponding software toolkit development, and a method for dynamic interface adaptation that ensures dialogue state persistence. Additionally, this paper presents two ambient dialogue applications developed using Voyager, namely, a game and a navigator.

14.
刘扬  郑逢斌 《计算机工程》2008,34(18):251-253
This paper proposes a layered model and interaction framework for intelligent user interfaces based on a multi-agent system (MAS). A mutual-information-based "chemical bond" model is adopted to address multimodal fusion and multimedia presentation, a context-aware "chameleon" mechanism is used to realize a ubiquitous access strategy for mobile users, and an economic "market" model together with Web service technology is used to implement P2P collaborative grid computing; a chemical abstract machine definition is given for simulating the system's dynamic interaction. Application examples show that the model provides a natural and efficient interaction style and reduces the gap between humans and computers.
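Since the "chemical bond" model is described as being based on mutual information, the underlying quantity can be sketched as follows (a generic illustration of the formula only, not the paper's fusion algorithm; the two event streams are invented):

import numpy as np

def mutual_information(xs, ys):
    """I(X;Y) = sum over (x, y) of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    mi = 0.0
    for x in np.unique(xs):
        for y in np.unique(ys):
            pxy = np.mean((xs == x) & (ys == y))
            if pxy > 0:
                px, py = np.mean(xs == x), np.mean(ys == y)
                mi += pxy * np.log2(pxy / (px * py))
    return mi

# Invented example: co-occurring events from a pen channel and a speech channel;
# a high value suggests the events from the two channels belong to the same intent.
pen    = ["tap", "tap", "circle", "circle", "tap", "circle"]
speech = ["open", "open", "delete", "delete", "open", "delete"]
print(mutual_information(pen, speech))   # 1.0 bit for this perfectly aligned toy data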

15.
Although multi-touch applications and user interfaces have become increasingly common in the last few years, there is no agreed-upon multi-touch user interface language yet. In order to gain a deeper understanding of the design of multi-touch user interfaces, this paper presents semiotic analysis of multi-touch applications as an approach to studying the way users use and understand multi-touch interfaces. In a case study, user tests of a multi-touch tabletop application platform called MuTable are analysed with the Communicability Evaluation Method to assess to what extent users understand the intended messages (e.g., cues about interaction and functionality) that the MuTable platform communicates. The semiotic analysis of this case study shows that, although multi-touch interfaces can facilitate user exploration, the lack of well-known standards in multi-touch interface design and in the use of gestures makes the user interface difficult to use and interpret. This conclusion points to the importance of the elusive balance between letting users explore multi-touch systems on their own, on the one hand, and guiding users by explaining how to use and interpret the user interface, on the other.

16.
People routinely carry mobile devices in their daily lives and obtain a variety of information from the Internet in many different situations. In searching for information (content) with a mobile device, a user's activity (e.g., moving or stationary) and context (e.g., commuting in the morning or going downtown in the evening) often change, and such changes can affect the user's degree of concentration on the mobile device's display and his or her information needs. Therefore, a search system should provide the user with an amount of information suitable for the current activity and a type of information suitable for the current context. In this study, we present the design and implementation of a content search system that considers a mobile user's activity and context, with the goal of reducing the user's operation load during content search. The proposed system switches between two kinds of content search systems according to the user's activity: a location-based content search system is activated when the user is stationary (e.g., standing or sitting), while a menu-based content search system is activated when the user is moving (e.g., walking). Both systems present information according to the user's context. The location-based system presents detailed information via menus and a map according to location-based categories. The menu-based system presents only a few options to enable users to get content easily. Through user experiments, we confirmed that participants could obtain desired information more easily with this system than with a commercial search system.
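The switching behaviour described above amounts to a small dispatcher; a minimal sketch (the function names and example contexts are assumptions for illustration, not the authors' implementation) could look like this:

def location_based_search(context):
    """Stationary user: present detailed, location-based categories via menus and a map."""
    return f"map + detailed menu for '{context}' near the current location"

def menu_based_search(context):
    """Moving user: present only a few options so content can be reached with little attention."""
    return f"short menu with a few top picks for '{context}'"

def search(activity, context):
    # Activity (moving vs. stationary) selects the search mode;
    # context (e.g. "morning commute", "downtown evening") tailors the presented content.
    if activity == "stationary":
        return location_based_search(context)
    return menu_based_search(context)

print(search("stationary", "downtown evening"))
print(search("moving", "morning commute"))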

17.
Digital tabletop environments offer huge potential for application scenarios in which multiple users interact simultaneously or aim to solve collaborative tasks. So far, research in this field has focused on touch and tangible interaction, which only takes place on the tabletop's surface. First approaches aim at involving the space above the surface, e.g., by employing freehand gestures; however, these are either limited to specific scenarios or employ obtrusive tracking solutions. In this paper, we propose an approach to unobtrusively segment and detect interaction above a digital surface using a depth-sensing camera. To achieve this, we adapt a previously presented approach that segments arms in depth data from a front-view setup to a top-view setup, facilitating the detection of hand positions. Moreover, we propose a novel algorithm to merge segments and compare it with the original segmentation algorithm. Since the algorithm involves a large number of parameters, estimating the optimal configuration is necessary. To accomplish this, we describe a low-effort approach to estimate the parameter configuration based on simulated annealing. An evaluation of our system for detecting hands shows that a repositioning precision of approximately 1 cm is achieved. This accuracy is sufficient to reliably realize interaction metaphors above a surface.
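The parameter-estimation step can be illustrated with a generic simulated-annealing loop (a sketch under assumptions: the cost function below is a toy stand-in for the segmentation error being minimized, and the schedule constants are invented):

import math
import random

def segmentation_error(params):
    """Hypothetical stand-in for 'how badly the segmentation performs with these parameters'.
    Here a smooth toy function with a known minimum at (3, -1, 0.5)."""
    target = (3.0, -1.0, 0.5)
    return sum((p - t) ** 2 for p, t in zip(params, target))

def simulated_annealing(initial, steps=5000, t0=1.0, cooling=0.999, step_size=0.2):
    random.seed(0)
    current, current_cost = list(initial), segmentation_error(initial)
    best, best_cost = list(current), current_cost
    temperature = t0
    for _ in range(steps):
        # Propose a random neighbour of the current parameter configuration.
        candidate = [p + random.uniform(-step_size, step_size) for p in current]
        cost = segmentation_error(candidate)
        # Always accept improvements; accept worse configurations with a temperature-dependent probability.
        if cost < current_cost or random.random() < math.exp((current_cost - cost) / temperature):
            current, current_cost = candidate, cost
            if cost < best_cost:
                best, best_cost = candidate, cost
        temperature *= cooling
    return best, best_cost

params, err = simulated_annealing([0.0, 0.0, 0.0])
print(params, err)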

18.
In this paper we present TangiWheel, a collection manipulation widget for tabletop displays. Our implementation is flexible, allowing either multi-touch or tangible interaction, or even a hybrid scheme, to better suit user choice and convenience. Different TangiWheel aspects and features are compared with other existing widgets for collection manipulation. The study reveals that TangiWheel is the first proposal to support a hybrid input modality with large resemblance levels between touch and tangible interaction styles. Several experiments were conducted to evaluate the techniques used in each input scheme for a better understanding of tangible surface interfaces in complex tasks performed by a single user (e.g., involving a typical master-slave exploration pattern). The results show that tangibles perform significantly better than fingers, despite dealing with a greater number of interactions, in situations that require a large number of acquisitions and basic manipulation tasks such as establishing location and orientation. However, when users have to perform multiple exploration and selection operations that do not require previous basic manipulation tasks, for instance when collections are fixed in the interface layout, touch input is significantly better in terms of required time and number of actions. Finally, when a more elastic collection layout or more complex additional insertion or displacement operations are needed, the hybrid and tangible approaches clearly outperform finger-based interactions.

19.
20.
A Post-WIMP user interface model for personal information management   (total citations: 1; self-citations: 0; by others: 1)
陈明炫  任磊  田丰  邓昌智  戴国忠 《软件学报》2011,22(5):1082-1096
This paper proposes a Post-WIMP interface model for personal information management (PIM), called the Post-WIMP PIM interface model (PWPIM). A formal description of PWPIM is first given, modeling personal information management from five aspects that cover user characteristics, domain objects, the interaction process, and so on; on this basis, a modeling method for PWPIM is presented; finally, PWPIM is applied to a PIM system based on a tangible interface. The application example shows that PWPIM can effectively describe Post-WIMP interfaces for PIM, can satisfy PIM's requirements for mass users, diverse information, and natural interaction, and can provide theoretical support for the design, development, and evaluation of Post-WIMP interfaces for PIM.
