Similar Documents
20 similar documents found (search time: 31 ms)
1.
Usability Evaluation of 3D User Interfaces on Mobile Devices   (Cited: 3 total, 0 self-citations, 3 by others)
This paper presents a usability evaluation of 3D user interfaces on mobile devices, examining them comprehensively from three perspectives: system performance, task performance, and user preference. Object selection is a fundamental interaction task in 3D user interfaces. Combining experimental data analysis with pointing models from 2D environments, we propose and validate a new general performance model for 3D user interfaces. User surveys and questionnaire analysis further show that 3D user interfaces enjoy high user preference on mobile devices.
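The 2D pointing model the abstract refers to is in the Fitts' law family. The exact 3D model proposed in the paper is not given here, so the sketch below shows the classic Shannon formulation plus a purely hypothetical depth term, with coefficients that would be fitted from experimental data:

```python
import math

def fitts_mt_2d(a, b, distance, width):
    """Classic Shannon-form Fitts' law: MT = a + b * log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

def fitts_mt_3d(a, b, c, distance, width, depth):
    """Hypothetical 3D extension adding a depth term with its own
    coefficient c. This is an illustration only, NOT the model
    proposed in the cited paper; a, b, c would be fitted by
    regression on measured movement times."""
    return (a
            + b * math.log2(distance / width + 1)
            + c * math.log2(depth / width + 1))
```

In both forms, the logarithmic term is the index of difficulty: movement time grows with target distance and shrinks with target width.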

2.
A Pen-Based 3D Interaction Framework for Desktop Environments   (Cited: 1 total, 1 self-citation, 0 by others)
Combining pen input with 3D interaction is a new approach to studying 3D interaction paradigms. This paper proposes a pen-based 3D interaction framework for desktop virtual environments, with two core components: interaction primitives and interaction task construction. A two-level lexical-to-syntactic encapsulation mechanism first generates high-level events and interaction primitives; interaction context, user corrections, and gesture recognition are then combined to integrate basic interaction tasks into complex ones. The gesture interaction, constraint awareness, and hybrid interaction techniques embedded in the framework effectively reduce the cognitive load caused by task decomposition and mode switching, and improve the naturalness of interaction.

3.
Along with improvements in eye-tracking technology, a growing range of research fields has used movements of the eye relative to the head to understand user behavior. Most current research focuses on the perception of single two-dimensional images using fixed or head-mounted eye-tracking devices. This article proposes a method for applying eye tracking to the analysis of interaction between users and objects in 3D navigational space. It aims to understand the visual stimulation produced by 3D objects and users' spatial navigational reactions while receiving that stimulation, and introduces the concept of a 3D object attention heat map. It also proposes constructing a computational visual attention model for 3D objects with different geometric features by applying the method of feature curves. The VR results of this study can also provide assistance in the coming immersive world. The study seeks to extend eye tracking from the mainstream 2D field to 3D spaces and points toward a deeper understanding of the relationship between humans and artificial products or natural objects. It could also serve an important role in human-computer interaction, product usability, assistive devices for individuals with cognitive degeneration, and even visual recognition of daily human behavior.

4.
5.
The use of large displays is becoming increasingly prevalent, but development of the usability of three-dimensional (3D) interaction with large displays is still at an early stage. One way to improve the usability of 3D interaction is to develop an appropriate control–display (CD) gain function. Nevertheless, unlike in desktop environments, the effects of the relationship between control space and display space in 3D interaction have not been investigated. Moreover, 3D interaction with large displays is natural and intuitive, similar to how we work in the physical world, so a CD gain function that considers human behavior might improve the usability of interaction with large displays. A first experiment was conducted to identify the characteristics of users' natural hand motion and their perception of targets in distal pointing. Thirty people participated, and the characteristics of users' natural hand movements and the 3D coordinates of their pointing positions were derived. These characteristics informed the development of motion–display (MD) gain, a new position-to-position CD mapping. MD gain was then experimentally verified against laser pointing, currently the best existing CD mapping technique, with 30 participants. MD gain was superior to the existing pointing technique in terms of both performance and subjective satisfaction, and it can also be personalized for further improvement. This is an initial attempt to reflect natural human pointing gestures in a distal pointing technique, and the developed technique (MD gain) was experimentally shown to be superior to existing techniques. This improvement matters because even a marginal gain in the performance of pointing, a fundamental and frequent task, can have a large effect on users' productivity.
These results can be used as a resource for understanding the characteristics of users' natural hand movements, and MD gain can be directly applied to situations in which distal pointing is needed, such as interacting with smart TVs or wall displays. Furthermore, the concept of mapping natural human behavior in motor space to an object in visual space can be applied to any interactive system.
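The core idea of a position-to-position CD mapping is that an absolute hand position in the user's motor space maps directly to a cursor position on the display, rather than hand velocity driving cursor velocity. The published MD gain function is not reproduced here; the sketch below only illustrates the general scheme, with a per-user calibrated motor range standing in for the measured natural-movement characteristics:

```python
def md_gain_map(hand_pos, motor_range, display_range):
    """Position-to-position mapping sketch: normalize the hand position
    within the user's comfortable motor range, then scale to display
    coordinates. `motor_range` (a calibration box) and the linear
    scaling are illustrative assumptions, not the published MD gain."""
    hx, hy = hand_pos
    mx0, my0, mx1, my1 = motor_range        # comfortable arm-sweep box
    dw, dh = display_range                  # display width/height in px
    nx = (hx - mx0) / (mx1 - mx0)
    ny = (hy - my0) / (my1 - my0)
    nx = min(max(nx, 0.0), 1.0)             # clamp to display edges
    ny = min(max(ny, 0.0), 1.0)
    return (nx * dw, ny * dh)
```

Personalization then amounts to re-measuring `motor_range` per user, so that a comfortable sweep of the arm always spans the full display.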

6.
The articles in this special issue present recent developments in research on 3D user interfaces. Topics covered include reality- and imagination-based interaction, pointing techniques, analysis of rapid aimed movements, temporal-data visualizations, and navigation of augmented CAD models.

7.
This paper reports on the utility of eye gaze, voice, and manual response in the design of a multimodal user interface. A device- and application-independent user interface model (VisualMan) for 3D object selection and manipulation was developed and validated in a prototype interface based on a 3D cube manipulation task. The multimodal inputs are integrated in the prototype interface based on the priority of modalities and the interaction context. The implications of the model for virtual reality interfaces are discussed, and a virtual environment using the multimodal user interface model is proposed.

8.
This pilot study explores the use of combining multiple data sources (subjective, physical, physiological, and eye tracking) in understanding user cost and behavior. Specifically, we show the efficacy of such objective measurements as heart rate variability (HRV), and pupillary response in evaluating user cost in game environments, along with subjective techniques, and investigate eye and hand behavior at various levels of user cost. In addition, a method for evaluating task performance at the micro-level is developed by combining eye and hand data. Four findings indicate the great potential value of combining multiple data sources to evaluate interaction: first, spectral analysis of HRV in the low frequency band shows significant sensitivity to changes in user cost, modulated by game difficulty—the result is consistent with subjective ratings, but pupillary response fails to accord with user cost in this game environment; second, eye saccades seem to be more sensitive to user cost changes than eye fixation number and duration, or scanpath length; third, a composite index based on eye and hand movements is developed, and it shows more sensitivity to user cost changes than a single eye or hand measurement; finally, timeline analysis of the ratio of eye fixations to mouse clicks demonstrates task performance changes and learning effects over time. We conclude that combining multiple data sources has a valuable role in human–computer interaction (HCI) evaluation and design.
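The low-frequency (LF) HRV measure mentioned in the first finding is conventionally the spectral power of the RR-interval series in the 0.04–0.15 Hz band. The sketch below shows a minimal version of that computation; real HRV pipelines use more careful detrending and Welch averaging, and the band limits and 4 Hz resampling rate here are conventional defaults, not parameters from the cited study:

```python
import numpy as np

def lf_band_power(rr_ms, fs=4.0, band=(0.04, 0.15)):
    """Band power of an RR-interval series (ms) via a plain periodogram:
    interpolate the tachogram onto an evenly sampled grid, remove the
    mean, FFT, and sum spectral power over the requested band."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                   # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)      # even resampling grid
    tach = np.interp(grid, t, rr)                # interpolated tachogram
    tach = tach - tach.mean()
    psd = np.abs(np.fft.rfft(tach)) ** 2 / (fs * len(tach))
    freqs = np.fft.rfftfreq(len(tach), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    df = freqs[1] - freqs[0]                     # bin width for integration
    return float(np.sum(psd[mask]) * df)
```

Calling it with `band=(0.15, 0.4)` gives the high-frequency (HF) counterpart, so the same function supports LF/HF-ratio style analyses.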

9.
Part modelling in a CAD environment requires a bi-manual 3D input interface to fully exploit its potential. In this research we provide extensive user tests on bi-manual modelling using different devices to control a 3D model's rotation. Our results suggest that a simple trackball is effective when the user's task is mostly limited to rotation control (i.e. when modelling parts in a CAD environment); in our tests, performance was even better than that achieved with a specifically designed device. Since rotating a CAD part often involves flipping the controlled object, we introduce a nonlinear transfer function that combines the precision of a zero-order control mode with the ability to recognise fast movements. This new modality shows a significant improvement in user performance and is a strong candidate for integration in next-generation CAD interfaces.
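A nonlinear transfer function of the kind described typically keeps rotation gain near 1:1 (zero-order, position-controlled) at low input speeds for precision, and boosts the gain when a fast movement is detected so a quick wrist flick can flip the part. The paper's actual function and constants are not given here; this is a generic sketch with illustrative numbers:

```python
def rotation_gain(speed_deg_s, base_gain=1.0, boost=3.0, threshold=90.0):
    """Speed-dependent gain for mapping device rotation to model rotation.
    Below `threshold` deg/s the mapping stays at `base_gain` (near 1:1)
    for precision; above it the gain ramps linearly toward `boost` so a
    fast flick flips the object. All constants are illustrative, not
    taken from the cited paper."""
    if speed_deg_s <= threshold:
        return base_gain
    excess = (speed_deg_s - threshold) / threshold   # how far past threshold
    return min(base_gain + excess * (boost - base_gain), boost)
```

The applied rotation per frame would then be `rotation_gain(speed) * device_delta`, preserving fine control at low speed while shortening large reorientations.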

10.
Interaction between a personal service robot and a human user is contingent on awareness of the posture and facial expression of users in the home environment. In this work, we propose algorithms to robustly and efficiently track the head, facial gestures, and upper body movements of a user. The face processing module consists of 3D head pose estimation, modeling of nonrigid facial deformations, and expression recognition. It can thus detect and track the face and classify expressions under various poses, which is key for human–robot interaction. For body pose tracking, we develop an efficient algorithm based on bottom-up techniques to search a tree-structured 2D articulated body model and identify multiple pose candidates to represent the current body configuration. We validate these face and body modules in various experiments with different datasets, and the experimental results are reported. The implementation of both modules runs in real time, which meets the requirements of real-world human–robot interaction tasks. These two modules have been ported onto a real robot platform by the Electronics and Telecommunications Research Institute.

11.
In projection-based Virtual Reality (VR) systems, typically only one head-tracked user views stereo images rendered from the correct view position. For other users, who are presented a distorted image moving with the first user's head motion, it is difficult to correctly view and interact with 3D objects in the virtual environment. In close-range VR systems, such as the Virtual Workbench, distortion effects are especially large because objects are within close range and users are relatively far apart. On these systems, multi-user collaboration proves to be difficult. In this paper, we analyze the problem and describe a novel, easy-to-implement method to prevent and reduce image distortion and its negative effects on close-range interaction task performance. First, our method combines a shared camera model and view distortion compensation. It minimizes the overall distortion for each user, while important user-personal objects such as interaction cursors, rays, and controls remain distortion-free. Second, our method retains co-location for interaction techniques to make interaction more consistent. We performed a user experiment on our Virtual Workbench to analyze user performance under distorted view conditions with and without our method. Our findings demonstrate the negative impact of view distortion on task performance and the positive effect our method introduces. This indicates that our method can enhance the multi-user collaboration experience on close-range, projection-based VR systems.

12.
13.
We propose a new approach to 3D layout problems based on the integration of constraint programming and virtual reality interaction techniques. Our method uses an open-source constraint solver integrated into a popular 3D game engine. We designed multimodal interaction techniques for the system based on gesture and voice input, and conducted a user study with an interactive room-furniture layout task to compare and evaluate the mono- and multimodal interaction techniques. Analysis of both objective and subjective data showed that voice command provided the best performance and was most preferred by participants; there was, however, no significant difference between voice-only and multimodal (voice and gesture) input. Our approach opens the way to multidisciplinary theoretical work and promotes the development of high-level VR applications.

14.
This paper describes the system design and user evaluation of Co-Star, an immersive design system for cable harness design. The system used a stereoscopic head-mounted graphical display, user motion tracking, and a hand-gesture-controlled interface to enable cable harnesses to be designed through direct 3D user interaction with a product model. To determine how such a system interface would be used by a designer, and to obtain user feedback on its main features, a practical user evaluation was undertaken in which ten participants each completed three cable harness design tasks. All user interactions with the system were recorded in a time-stamped log file during each task, and each task was followed by a questionnaire (5-point scale) and an interview session with each participant. The recorded interaction data for the third task were analysed using functional decomposition techniques and used to construct a single activity profile for the task, based on the mean results from the participant group. The goal was to identify, in general terms, the relative distribution of user activity among specific purposes during practical system operation; in this task, navigation accounted for 41% of all user activity, design for 27%, system operation for 23%, and reading task instructions for 9%. The scored questionnaire data collected immediately after each task were used to rank the major features of the system according to user opinion, further enriched by interview comments from the user group about the same features.
The combination of quantitative performance analysis and subjective user opinion data obtained during a practical design exercise enabled an in-depth evaluation of the system, leading to a much greater understanding of many of the key user and interface requirements that should be considered during the development of immersive interfaces and systems for practical engineering applications.

15.
李娟妮, 华庆一, 吴昊, 陈锐, 苏荟, 周筠. 《软件学报》 (Journal of Software), 2018, 29(12): 3692-3715
To accommodate the diversity of users, devices, contexts of use, and development platforms in ubiquitous computing environments, model-based approaches have been applied to user interface development, attempting to describe the interface at an abstract level and adapt it to different platforms through model transformation. However, limitations of the task models adopted in current model-based user interface development (MBUID) make it difficult for the generated interfaces to meet users' usability requirements in dynamic environments. This paper proposes a task-model-based user interface development framework aimed at modeling and generating effective, efficient, and satisfying user interfaces. On the usability side, to accurately describe user tasks in ubiquitous computing environments, a perceptual-control-theory-based task analysis (PCTBTA) method is proposed, which introduces context-of-use information into the task analysis process and reflects the content of interaction at a high level of abstraction, providing a task space for usability design. On the technical side, support is provided for transforming PCTBTA task models into interface models. Finally, a case study demonstrates the feasibility of the proposed method, and comparisons with other methods in terms of usability and performance show its effectiveness.

16.
The equilibrium of complex systems often depends on a set of constraints, so credible virtual reality modeling of these systems must respect those constraints, in particular for 3D interactions. In this paper, we propose a generic framework for designing assistance for 3D user interaction in constraint-based virtual environments that associates constraints, interaction tasks, and assistance tools such as virtual fixtures (VFs). This framework is applied to design assistance tools for molecular biology analysis. Evaluation shows that VFs designed using our framework improve the effectiveness of the manipulation task.

17.
There has been a recent commercialization of 3D stereoscopic displays intended for virtual reality environments. However, there is a lack of extensive research into user interfaces for 3D applications on stereoscopic displays. This study focused on three representative interaction techniques (ray-casting, keypad, and hand-motion) using a head-mounted display and a 3D CAVE. In addition, compatibility with 3D menus was investigated based on performance and subjective assessment. Nine 3D menus were designed for the experiment, combining three 2D metaphors (pop-up, pull-down, and stack menus) with three structural layouts (list, cubic, and circular menus). The most suitable combination for a 3D user interface on a stereoscopic display was the ray-casting technique with the stack menu, which provided users with good performance and subjective response. It was also found that the cubic menu was less effective than the other menus across all three interaction techniques.
Relevance to industry: This research describes a distinctive evaluation method and recommendations that ensure suitability for interactive 3D environments. The results will therefore encourage practitioners and researchers who are new to the area of 3D interface design.
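Ray-casting selection, the best-performing technique above, works by shooting a ray from the pointing device and testing it against proxy volumes around menu items. The study's implementation is not described here; the sketch below is the standard quadratic ray-sphere intersection test often used for such hit-testing, with spherical proxies as an assumed simplification:

```python
def ray_hits_sphere(origin, direction, center, radius):
    """Does a pointing ray hit a spherical proxy around a 3D menu item?
    Standard ray-sphere test; `direction` is assumed to be a unit
    vector, so the quadratic's leading coefficient is 1. A generic
    sketch of the technique, not the cited study's implementation."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = center
    lx, ly, lz = ox - cx, oy - cy, oz - cz      # origin relative to center
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * c                      # discriminant
    if disc < 0.0:
        return False                            # ray line misses sphere
    root = disc ** 0.5
    t_near = (-b - root) / 2.0
    t_far = (-b + root) / 2.0
    return t_near >= 0.0 or t_far >= 0.0        # hit must be in front
```

In a menu, each item gets a proxy sphere and the first item whose `t_near` is smallest among hits is highlighted as the selection target.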

18.
19.
This paper presents the design and performance of a body-machine interface (BoMI) system in which a user controls a robotic 3D virtual wheelchair with signals derived from his or her shoulder and elbow movements. BoMI promotes the perspective that system users should no longer be mere operators of the engineered design but an embedded part of the functional design. The system provides real-time control of robotic devices based on user-specific dynamic body response signatures captured by a high-density 52-channel sensor shirt. It not only gives access to the user's body signals but also translates these signals from the user's body to the virtual reality device-control space. We explored the efficiency of this BoMI system in a semi-cylindrical 3D virtual reality system. Experimental studies demonstrate how this transformation of human body signals with multiple degrees of freedom controls a robotic wheelchair navigation task in a 3D virtual reality environment. We also show how machine learning can enhance the interface to adapt to the degrees of freedom of the human body by correcting errors made by the user.

20.
This paper presents a framework for automatic simulated accessibility and ergonomy testing of virtual prototypes of products using virtual user models. The proposed virtual user modeling framework describes virtual humans focusing on the elderly and people with disabilities. Geometric, kinematic, physical, behavioral and cognitive aspects of the user affected by possible disabilities are examined, in order to create virtual user models able to represent people with various functional limitations. Hierarchical task and interaction models are introduced, in order to describe the user’s capabilities at multiple levels of abstraction. The use of alternative ways of a user task’s execution, exploiting different modalities and assistive devices, is supported by the proposed task analysis. Experimental results on the accessibility and ergonomy evaluation of different workplace designs for the use of a telephone and a stapler show how the proposed framework can be put into practice and demonstrate its significant potential.
