Similar Literature
Retrieved 20 similar documents (search time: 31 ms)
1.
Accurate performance models are important for interaction technique, application, and hardware design. The limited screen size of mobile devices and use of touch interaction require unique considerations, especially when interacting with large amounts of information. This paper considers the performance impact of target visibility on mobile smartphone applications that provide on- and offscreen content with the commonly used direct-touch interactions and four cursor-based interaction methods for precise selection. Three existing and 12 novel performance models are experimentally validated. Fitts' Law, which was not designed for modelling selection of offscreen targets, did not predict interaction times for mobile interaction methods as accurately as is commonly observed with desktop interaction with onscreen targets. Target visibility was found to greatly impact interaction times (particularly for direct-touch interaction). The presented models that incorporate variables related to target visibility greatly improve predicted interaction times. The use and merits of the top models are discussed, emphasizing the importance and implications of accepted user-interface design guidelines.
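The baseline against which the paper's 12 novel models are compared is Fitts' Law. As a minimal sketch of its standard Shannon formulation (the regression constants below are illustrative placeholders, not values from the paper):

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.15):
    """Predicted selection time (s) under the Shannon formulation of
    Fitts' Law: MT = a + b * log2(D/W + 1).
    a and b are constants fitted per device and user; the defaults here
    are illustrative placeholders only."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A farther or smaller target has a higher index of difficulty,
# hence a longer predicted selection time.
print(fitts_movement_time(120, 10) > fitts_movement_time(60, 10))  # True
```

The paper's finding is that this onscreen-target model underpredicts poorly once targets scroll offscreen, motivating the extra visibility terms.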

2.
A user task is often distributed across devices, e.g., a student listening to a lecture in a classroom while watching slides on a projected screen and making notes on her laptop, and sometimes checking Twitter for comments on her smartphone. In scenarios like this, users move between heterogeneous devices and have to deal with task resumption overhead from both physical and mental perspectives. To address this problem, we created Smooth Gaze, a framework for recording the user’s work state and resuming it seamlessly across devices by leveraging implicit gaze input. In particular, we propose two novel and intuitive techniques, smart watching and smart posting, for detecting which display and target region the user is looking at, and transferring and integrating content across devices respectively. In addition, we designed and implemented a cross-device reading system SmoothReading that captures content from secondary devices and generates annotations based on eye tracking, to be displayed on the primary device. We conducted a study that showed that the system supported information seeking and task resumption, and improved users’ overall reading experience.

3.
Head-mounted displays (HMDs) are increasingly available to users after the launch of new-generation consumer devices. Moreover, mobile HMDs such as Samsung Gear VR and Google Daydream View allow users to experience VR through a smartphone, without requiring connection to a PC. Commercial applications for mobile HMDs exploit different techniques to perform menu selection tasks. This paper contrasts the two most used techniques, i.e., dwell-based and touchpad-based selection, which had not been experimentally compared before. We consider different versions of a menu pointing and selection task in which participants interacted with a Samsung Gear VR. Results show that participants were slower with the dwell-based technique than with the touchpad-based technique. However, the dwell-based technique led to fewer errors and was perceived as more usable, more comfortable and less fatiguing than the touchpad-based technique. We also evaluated two different active areas for the selection, discussing the results.

4.
Two-Handed Interaction Techniques in Desktop Virtual Reality Environments   (total citations: 1; self-citations: 1; citations by others: 1)
Targeting desktop virtual environments, and based on an analysis of the characteristics of common interaction devices, this work proposes widely applicable device combinations for two-handed interaction. According to the requirements of common VR interaction tasks, and taking device characteristics into account, it proposes a strategy that assigns different subtasks to the left-hand and right-hand devices. By combining different one-handed interaction techniques, it presents three two-handed interaction techniques suited to typical desktop device combinations, including a two-handed technique based on an auxiliary plane. A toolkit of two-handed interaction techniques was developed and validated in applications.

5.
Ergonomics, 2012, 55(5): 818-831
Touch screens are now ubiquitous, as seen on public kiosks, industrial control panels and personal mobile devices. Numerical typing is a frequent task performed on touch screens, but it is prone to human error and slow responses. This study aims to identify innate differences between touch screens and standard physical keypads in the context of numerical typing by eliminating confounding factors. The effects of precise visual feedback and of the urgency of numerical typing were also investigated. The results showed that touch screens were as accurate as physical keypads, but responses were indeed executed more slowly on touch screens, as indicated by both pre-motor reaction time and reaction time. Providing precise visual feedback caused more errors, and no interaction between device and urgency was found for reaction time. To improve the usability of touch screens, designers should focus on reducing response complexity and be cautious about the use of visual feedback.

Practitioner Summary: The study revealed that slower responses on touch screens involved more complex human cognition to formulate motor responses. Attention should be given to designing precise visual feedback appropriately so that distractions or visual resource competitions can be avoided to improve human performance on touch screens.

6.
The emergence of small handheld devices such as tablets and smartphones, often with touch sensitive surfaces as their only input modality, has spurred a growing interest in the subject of gestures for human–computer interaction (HCI). It has been proven before that eye movements can be consciously controlled by humans to the extent of performing sequences of predefined movement patterns, or “gaze gestures” that can be used for HCI purposes in desktop computers. Gaze gestures can be tracked noninvasively using a video-based eye-tracking system. We propose here that gaze gestures can also be an effective input paradigm to interact with handheld electronic devices. We show through a pilot user study how gaze gestures can be used to interact with a smartphone, how they are easily assimilated by potential users, and how the Needleman-Wunsch algorithm can effectively discriminate intentional gaze gestures from otherwise typical gaze activity performed during standard interaction with a small smartphone screen. Hence, reliable gaze–smartphone interaction is possible with accuracy rates, depending on the modality of gaze gestures being used (with or without dwell), higher than 80 to 90%, negligible false positive rates, and completion speeds lower than 1 to 1.5 s per gesture. These encouraging results and the low-cost eye-tracking equipment used suggest the possibilities of this new HCI modality for the field of interaction with small-screen handheld devices.
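The Needleman-Wunsch algorithm cited above is a standard global sequence alignment method. A minimal sketch of how it could score observed gaze activity (encoded here, as an assumption, as a string of stroke directions) against a gesture template; the encoding and thresholding are illustrative, not the paper's:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score between two token sequences via
    dynamic programming (Needleman-Wunsch)."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap          # aligning a prefix of `a` against gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap          # aligning a prefix of `b` against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

# Directions of successive gaze strokes: R(ight), D(own), L(eft), U(p).
template = "RDLU"
print(needleman_wunsch(template, "RDLU"))  # perfect match: 4
print(needleman_wunsch(template, "RDU"))   # one stroke missing: 2
```

An intentional gesture would then be one whose best template alignment score, normalized by template length, clears a tuned threshold, while unstructured scanning scores low against every template.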

7.
A variety of studies have been conducted to improve methods of selecting a tiny virtual target on small touch screen interfaces of handheld devices such as mobile phones and PDAs. These studies, however, focused on a specific selection method, and did not consider various layouts resulting from different target sizes and densities on the screen. This study proposes a Two-Mode Target Selection (TMTS) method that automatically detects the target layout and changes to an appropriate mode using the concept of an activation area. The usability of TMTS was compared experimentally to those of other methods. TMTS changed to the appropriate mode successfully for a given target layout and showed the shortest task completion time and the fewest touch inputs. TMTS was also rated by the users as the easiest to use and the most preferred. TMTS could significantly increase the ease, accuracy, and efficiency of target selection, and thus enhance user satisfaction when the users select targets on small touch screen devices.

Relevance to Industry

The results of this study can be used to develop fast and accurate target selection methods for handheld devices with touch screen interfaces, especially when users use their thumbs to activate the desired target.
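The study does not spell out TMTS's switching rule in detail; as a hypothetical sketch of the activation-area idea (function names and the radius value are illustrative assumptions, not from the paper):

```python
import math

def tmts_select(targets, touch, activation_radius=10.0):
    """Hypothetical TMTS-style dispatch: if at most one target falls inside
    the activation area around the touch point, select directly; otherwise
    switch to a disambiguation mode over the crowded candidates.
    `targets` and `touch` are (x, y) points in the same units as the radius."""
    inside = [t for t in targets if math.dist(t, touch) <= activation_radius]
    if len(inside) <= 1:
        return ("direct", inside[0] if inside else None)
    return ("disambiguate", inside)

print(tmts_select([(0, 0), (50, 50)], (2, 1)))        # sparse layout -> direct
print(tmts_select([(1, 1), (3, 2), (4, 4)], (2, 2)))  # dense layout -> disambiguate
```

The point of the design is that sparse layouts pay no extra interaction cost, while dense layouts trade one extra step for accuracy.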

8.
Interactive horizontal surfaces provide large semi-public or public displays for colocated collaboration. In many cases, users want to show, discuss, and copy personal information or media, which are typically stored on their mobile phones, on such a surface. This paper presents three novel direct interaction techniques (Select&Place2Share, Select&Touch2Share, and Shield&Share) that allow users to select in private which information they want to share on the surface. All techniques are based on physical contact between mobile phone and surface. Users touch the surface with their phone or place it on the surface to determine the location for information or media to be shared. We compared these three techniques with the most frequently reported approach that immediately shows all media files on the table after placing the phone on a shared surface. The results of our user study show that such privacy-preserving techniques are considered as crucial in this context and highlight in particular the advantages of Select&Place2Share and Select&Touch2Share in terms of user preferences, task load, and task completion time.

9.
With the advent of the portable projector (also embeddable in a smart phone), projection-based augmented reality (AR) will be an attractive form of AR as the augmentation is made directly in the real space (instead of on the video screen). Several interaction methods for such “Procam”-based projection AR systems have been developed, but their comparative usability has not been studied in depth. In this paper, we compared the usability of four representative interaction methods, as applied to the menu selection task, for handheld projection-based AR. In particular, we explored the possibility of using just one hand for enhanced convenience and mobility. As such, the four menu selection methods chosen for the study were formed by combinations of two types of cursor control (projector cursor vs. on-device touch screen) and two types of object selection (explicit click vs. crossing), all feasible with only one hand. Other considerations included the need for maintaining the stability of the handheld projector and effectively taking advantage of the smart phone as the interaction device. Experimental results showed that the menu selection task was the most efficient, usable and preferred when the projector cursor was used with the crossing widget. Furthermore, task performance did not differ statistically among the dominant hand, the non-dominant hand, and even both hands.

10.
The objective of this study was to investigate the touch area that can be comfortably reached by the thumb during one-handed smartphone interaction. To achieve the research objective, we introduced the concept of natural thumb position when designing a tapping task and conducted a user experiment. The independent variables were the target distance and direction from the natural thumb position, and the three dependent variables were the task performance, information throughput, and touch accuracy. The results showed that participants performed the task comfortably in the diagonal direction between the upper right and the lower left side of the screen. The task performance deteriorated as the target distance increased, especially at 45 mm or more. The touch accuracy was measured using X- and Y-coordinate data. Participants touched to the left of the target center, except near the proximal area of the hand. They also touched points above the center of the target in the upper screen area and points below the center of the target in the lower screen area. The findings of this study provide insights for designing a smartphone touch interface that takes into account the comfortable touch areas of one-handed interaction.

11.
This paper proposes various interaction methods for smartphones that allow users to call an intended contact without touching the smartphone screen. Existing applications allow users to answer phone calls without touching the screen—by shaking the phone, for example—but do not allow users to make phone calls. The proposed interaction allows users to select and call an intended contact by utilizing the iPhone’s accelerometer. The interaction also involves video camera scanning for commands to switch between group-selection and individual-selection modes to facilitate the selection of the call candidate. Furthermore, the proposed interaction maintains transparency by displaying the camera’s video stream on the smartphone screen. In order to evaluate the efficacy of the interaction, an application using the interaction was developed, and two simple experiments were conducted, in which participants were asked to make phone calls using the application. The success rate was 98%, and user satisfaction with the proposed interaction was approximately 90%. Therefore, the results showed that the proposed interaction could be an effective solution to allow users to make phone calls in situations where they cannot physically touch the iPhone screen. Furthermore, this solution could be used in many fields that need interactions with users in mobile applications.

12.
This study aimed to determine the most appropriate touch-based interaction technique for I2Vote, an image-based audience response system for radiology education in which users need to accurately mark a target on a medical image. Four plausible techniques were identified: land-on, take-off, zoom-pointing, and shift. The techniques were implemented in such a way that they could be used on any modern device. An empirical study was performed in which users marked a target on an image using all four techniques on either a smartphone or a tablet. The techniques were compared in terms of accuracy, efficiency, ease of use, intuitiveness, and compatibility with the different devices. The results showed that shift was the most accurate technique, but it was hampered by its high complexity and low intuitiveness. Land-on was the fastest technique but also the least accurate. Take-off and zoom-pointing provided the best trade-off between accuracy, efficiency, ease of use, and intuitiveness. We therefore conclude that both take-off and zoom-pointing are viable interaction techniques for I2Vote.

13.
This study investigated the relationships between thumb muscle activity and thumb operating tasks on a smartphone touch screen with one-hand posture. Six muscles in the right thumb and forearm were targeted in this study, namely adductor pollicis, flexor pollicis brevis, abductor pollicis brevis (APB), abductor pollicis longus, first dorsal interosseous (FDI) and extensor digitorum. The performance measures showed that the thumb developed fatigue rapidly when tapping on smaller buttons (diameter: 9 mm compared with 3 mm), and moved more slowly in flexion–extension than in adduction–abduction orientation. Meanwhile, the electromyography and perceived exertion values of FDI significantly increased in small button and flexion–extension tasks, while those of APB were greater in the adduction–abduction task. This study reveals that muscle effort among thumb muscles on a touch screen smartphone varies according to the task, and suggests that the use of small touch buttons should be minimised for better thumb performance.

14.
An evaluation approach for correspondence-driven domains is suggested and implemented. Touch screen and trackball controls were evaluated as interaction devices for large-area displays in the cockpits of highly agile jet aircraft. To account for the context conditions of selected use cases, informatory quality and the difficulty of situational demands were analysed and manipulated experimentally in dual-task scenarios, which were completed by experienced pilots. Results indicate a clear performance advantage of touch compared to trackball interaction, accompanied by less workload. Informatory dimensions induce different performance and workload ratings. Cognitive demands interfere the least with aiming performance, followed by visual and motor demands. Task and device influences are interdependent. Motor components of an additional task interfere especially with trackball control actions. Workload operates as a buffer: when difficulty increases, performance decrements are lower than workload increments. It is argued that this cognitive manipulation of informatory context is advisable for correspondence-driven domains, where context is expected to influence human–machine interaction. Transfer to automotive display evaluation appears to be straightforward.

15.
The usability of three-dimensional (3D) interaction techniques depends upon both the interface software and the physical devices used. However, little research has addressed the issue of mapping 3D input devices to interaction techniques and applications. This is especially crucial in the field of Virtual Environments (VEs), where there exists a wide range of potential 3D input devices. In this paper, we discuss the use of Pinch Gloves™ – gloves that report contact between two or more fingers – as input devices for VE systems. We begin with an analysis of the advantages and disadvantages of the gloves as a 3D input device. Next, we present a broad overview of three novel interaction techniques we have developed using the gloves, including a menu system, a text input technique, and a two-handed navigation technique. All three of these techniques have been evaluated for both usability and task performance. Finally, we speculate on further uses for the gloves.

16.
Objective: Touch input suffers from the “fat finger” problem, target occlusion, and limb fatigue, all of which reduce input precision. This work explores concrete strategies that exploit the intrinsic capabilities of mobile touch devices to address the difficulty of selecting small targets and the low precision of touch input, and compares those strategies. Methods: Leveraging the tilt and motion-acceleration sensing supported by smartphones and other mobile touch devices such as tablets, we empirically examined the performance, characteristics, and applicable scenarios of four target selection techniques: direct touch, pan-and-magnify, tilt, and attraction. Results: A target selection experiment compared the four techniques. The mean selection time, error rate, and subjective rating were 86.06 ms, 62.28%, and 1.95 for direct touch; 1327.99 ms, 6.93%, and 3.87 for pan-and-magnify; 1666.11 ms, 7.63%, and 3.46 for tilt; and 1260.34 ms, 6.38%, and 3.74 for attraction. Conclusion: The three improved techniques exhibited better target selection capability than direct touch.

17.

Head-operated computer accessibility tools (CATs) are useful solutions for those with complete head control; but for people with only reduced head control, computer access becomes very challenging, since these users depend on a single head-gesture, such as a head nod or a head tilt, to interact with a computer. Any new interaction technique based on a single head-gesture will clearly play an important role in developing better CATs that enhance users’ self-sufficiency and quality of life. Therefore, we propose two novel interaction techniques in this study, namely HeadCam and HeadGyro. In a nutshell, both interaction techniques are based on our software switch approach and can serve like traditional switches by recognizing head movements via a standard camera or a smartphone’s gyroscope sensor and translating them into virtual switch presses. A usability study with 36 participants (18 motor-impaired, 18 able-bodied) was also conducted to collect both objective and subjective evaluation data. While the HeadGyro software switch exhibited slightly higher performance than HeadCam on each objective evaluation metric, HeadCam was rated better in the subjective evaluation. All participants agreed that the proposed interaction techniques are promising solutions for the computer access task.


18.
Searching for an item in an ordered list is a frequently recurring task while using computers. The search can be carried out in several ways. In this paper, we present a new, efficient technique to find an alphanumeric item in a sorted list. This technique, called BinScroll, is based on the well-known binary search algorithm. BinScroll can be used with a minimum of four buttons, making it ideal for keyboardless mobile use. It can also be implemented with a minimum of one line of text, making it suitable for devices with limited screen space or text-only displays. Our evaluation showed that after 15 minutes of training, a novice user is able to locate any item from a list of 10,000 movie names in 14 seconds on average, and an expert user with a few hours of learning can find any item in about seven seconds. This makes it one of the most efficient selection techniques where long lists are concerned.
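BinScroll's efficiency follows directly from binary search: a 10,000-item list needs at most 14 halving steps, since 2^13 < 10,001 ≤ 2^14. A minimal sketch of that bound (BinScroll's actual four-button mapping is not reproduced here):

```python
def binary_search_steps(items, target):
    """Count the halving steps needed to land on `target` in a sorted list,
    a lower bound on the up/down decisions a BinScroll user makes."""
    lo, hi = 0, len(items) - 1
    steps = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        steps += 1
        if items[mid] == target:
            return steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    raise ValueError("target not in list")

movies = list(range(10_000))  # stand-in for 10,000 sorted movie names
print(max(binary_search_steps(movies, t) for t in movies))  # 14
```

So the reported 14-second novice average works out to roughly one decision per second in the worst case.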

19.
This study examines text input performance on a smartwatch using tap and trace input methods on a standard QWERTY keyboard (Swype™). Participants were either experts or novices to the tracing technique on their smartphone. No users had experience typing on a smartwatch. Participants were able to type at speeds comparable to, or exceeding, those reported in the literature for smartphones and small screen devices. Both novices and tracing experts typed faster overall using the trace input method than the tap method and experts typed the fastest using trace (37 WPM). Word error rates were also comparable to those reported for smartphone text input. These results suggest that using a standard QWERTY keyboard that allows both tap and trace for text input is a viable option on a smartwatch.
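Text-entry speeds like the 37 WPM above conventionally count one "word" as five characters of transcribed text, including spaces. A minimal sketch of the metric:

```python
def words_per_minute(transcribed_text, seconds):
    """Standard text-entry WPM: (characters / 5) 'words' per elapsed minute."""
    return (len(transcribed_text) / 5) / (seconds / 60.0)

# 25 characters entered in 10 s -> 5 'words' in 1/6 minute -> 30 WPM.
print(words_per_minute("the quick brown fox jumps", 10))  # 30.0
```

This five-characters-per-word convention is what makes WPM figures comparable across the smartphone and smartwatch studies cited.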

20.
A large body of HCI research focuses on devices and techniques to interact with applications in more natural ways, such as gestures or direct pointing with fingers or hands. In particular, recent years have seen a growing interest in laser pointer-style (LPS) interaction, which allows users to point directly at the screen from a distance through a device handled like a common laser pointer. Several LPS techniques have been evaluated in the literature, usually focusing on users' performance and subjective ratings, but not on the effects of these techniques on the musculoskeletal system. One cannot rule out that “natural” interaction techniques, although found attractive by users, require movements that might increase the likelihood of musculoskeletal disorders (MSDs) relative to a traditional keyboard and mouse. Our study investigates the physiological effects of an LPS interaction technique (based on the Wii Remote) compared to a mouse and keyboard setup, used in a sitting and a standing posture. The task (object arrangement) is representative of user actions repeatedly carried out with 3D applications. The obtained results show that the LPS interaction caused more muscle exertion than mouse and keyboard. Posture also played a significant role. The results highlight the importance of extending current studies of novel interaction techniques with thorough electromyographic (EMG) analyses.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号