Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
As full-touch screens are implemented in more smartphones, the controllability of touch icons needs to be considered. Previous research focused on recommendations for absolute key size. However, the area of tactual input on a touch interface is not precisely equal to the icon size. This study aims to determine a suitable touchable area to improve touch accuracy. In addition, the effects of layout (3 × 4, 4 × 5, 5 × 6, and 6 × 8) and icon ratio (0.5, 0.7, and 0.9) were investigated. To achieve these goals, 40 participants performed a set of serial tasks on a smartphone. Results revealed that layout and icon ratio had statistically significant effects on the user responses: input offset, hit rate, task completion time, and preference. The 3 × 4 and 4 × 5 layouts showed better performance, and the icon ratio of 0.9 was preferred most. Furthermore, the hit rate (proportion of correct input) of a touchable area was estimated through a bivariate normal distribution of input offset. The hit rate varies with the size of the touchable area, which is a rectangle that yields a specific hit rate. A derivation procedure for the touchable area was proposed to guarantee a desired hit rate. Meanwhile, touches in the central region followed a vertical pattern and showed better performance, and users had more difficulty approaching the edge of the frame. The results of this study could be used in the design of touch interfaces for mobile devices.
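The hit-rate estimation from the input-offset distribution can be sketched as follows, assuming the two offset axes are independent (zero correlation); the standard deviations in millimetres are illustrative, not the study's fitted parameters:

```python
import math

def normal_cdf(x, mu, sigma):
    """1D normal CDF expressed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def hit_rate(width, height, mu=(0.0, 0.0), sigma=(1.5, 2.0)):
    """Probability that a touch offset lands inside a width x height
    rectangle centred on the icon, modelling the offset as a bivariate
    normal with zero correlation (product of two 1D probabilities)."""
    px = normal_cdf(width / 2, mu[0], sigma[0]) - normal_cdf(-width / 2, mu[0], sigma[0])
    py = normal_cdf(height / 2, mu[1], sigma[1]) - normal_cdf(-height / 2, mu[1], sigma[1])
    return px * py

def touchable_size(target_rate, sigma=(1.5, 2.0), step=0.1):
    """Smallest square touchable area (side length) reaching the
    desired hit rate, found by simple linear search."""
    side = step
    while hit_rate(side, side, (0.0, 0.0), sigma) < target_rate:
        side += step
    return side
```

A higher required hit rate forces a larger touchable rectangle, which is the trade-off the derivation procedure in the paper formalizes.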

2.
When creating highly interactive, direct-manipulation interfaces, one of the most difficult design and implementation tasks is handling the mouse and other input devices. Peridot, a new user interface management system, addresses this problem by allowing the designer of the user interface to demonstrate how the input device should be handled by giving an example of the interface in action. The designer uses sample values for parameters, and the system automatically infers the general operation and creates the code. After an interaction is specified, it can be executed immediately, which supports rapid prototyping, since it is very easy to design, implement, and modify mouse-based interfaces. Peridot also supports such additional input devices as touch tablets, as well as multiple input devices operating in parallel (for example, one in each hand) in a natural, easy-to-specify manner. All interaction techniques are implemented using active values, which are like variables except that the objects that depend on active values are updated immediately whenever they change. Active values are a straightforward and efficient mechanism for implementing dynamic interactions.
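Active values are essentially observable variables. A minimal Python sketch of the idea (Peridot itself was not implemented in Python; the names here are illustrative):

```python
class ActiveValue:
    """A variable whose dependents are notified immediately on change."""
    def __init__(self, value):
        self._value = value
        self._observers = []

    def depend(self, callback):
        """Register a dependent and bring it up to date right away."""
        self._observers.append(callback)
        callback(self._value)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        if new_value != self._value:
            self._value = new_value
            for callback in self._observers:
                callback(new_value)

# Example: an input device's x-coordinate drives a label's text.
mouse_x = ActiveValue(0)
label = []
mouse_x.depend(lambda x: label.append(f"x={x}"))
mouse_x.value = 42  # the dependent label updates immediately
```

Objects that graphically depend on an active value never poll it; the update is pushed the moment the value changes, which is what makes the mechanism efficient for dynamic interactions.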

3.
The fast aging of many western and eastern societies and their increasing reliance on information technology create a compelling need to reconsider older users' interactions with computers. Changes in perceptual and motor skill abilities that often accompany the aging process have important implications for the design of information input devices. This paper summarises the results of two comparative studies on information input with 90 subjects aged between 20 and 75 years. In the first study, three input devices – mouse, touch screen and eye-gaze control – were analysed concerning efficiency, effectiveness and subjective task difficulty with respect to the age group of the computer user. In the second study, an age-differentiated analysis of hybrid user interfaces for input confirmation was conducted combining eye-gaze control with additional input devices. Input confirmation was done with the space bar of a PC keyboard, speech input or a foot pedal. The results of the first study show that regardless of participants' age group, the best performance in terms of short execution time results from touch screen information input. This effect is even more pronounced for the elderly. Regarding the hybrid interfaces, the lowest mean execution time, error rate and task difficulty were found for the combination of eye-gaze control with the space bar. In conclusion, we recommend using direct input devices, particularly a touch screen, for the elderly. For user groups with severe motor impairments, we suggest eye-gaze information input.

4.
How we interface and interact with computing, communications, and entertainment devices is going through revolutionary changes, with natural user inputs based on touch, gesture, and voice replacing or augmenting the use of traditional user interfaces based on the keyboard, mouse, trackballs, joysticks, and so forth. In particular, mobile devices have rapidly transitioned to adopt touch screen technologies as the primary means of user interface during recent years, enabling fun new interactive applications and experiences for the users. We have dedicated this special section of the Journal of the Society for Information Display to present an overview of recent developments and trends in the emerging field of interactive display technologies.

5.
The considerable progress achieved in the design and development of new interaction devices between man and machine has enabled the emergence of various powerful and efficient input and/or output devices. Each of these new devices brings specific interaction modes. With the emergence of these devices, new interaction techniques and modes arise and new interaction capabilities are offered. New user interfaces need to be designed, or former ones need to evolve. The design of so-called plastic user interfaces contributes to handling such evolutions. The key requirement for the design of such a user interface is that the new user interface be adapted to the application and have, at least, the same behavior as the previous (adapted) one. This paper addresses the problem of user interface evolution due to the introduction of new interaction devices and/or new interaction modes. More precisely, we are interested in the design process of a user interface resulting from the evolution of a former user interface due to the introduction of new devices and/or new interaction capabilities. We consider that interface behaviors are described by labelled transition systems, and comparison between user interfaces is handled by an extended definition of the bisimulation relationship to compare user interface behaviors when interaction modes are replaced by new ones.
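Comparing two interface behaviors given as labelled transition systems can be done with a naive bisimulation check by partition refinement. This sketch implements plain strong bisimulation, not the paper's extended definition:

```python
def bisimilar(states, transitions, s0, t0):
    """Decide whether s0 and t0 are strongly bisimilar in a labelled
    transition system given as {(state, label): set_of_successors}.
    Refines a partition of the states until it stabilizes; two states
    are bisimilar iff they end up in the same block."""
    labels = {lbl for (_, lbl) in transitions}
    blocks = [set(states)]
    changed = True
    while changed:
        changed = False

        def signature(s):
            # For each label, which blocks are reachable from s.
            return tuple(
                frozenset(i for i, b in enumerate(blocks)
                          if transitions.get((s, lbl), set()) & b)
                for lbl in sorted(labels)
            )

        new_blocks = []
        for block in blocks:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            new_blocks.extend(groups.values())
        if len(new_blocks) != len(blocks):  # refinement only splits blocks
            changed = True
        blocks = new_blocks
    return any(s0 in b and t0 in b for b in blocks)
```

For example, two button interfaces with matching press/release cycles are bisimilar, while one that cannot be released after a press is not; a plastic user interface replacing a device would be validated by such an equivalence over its behavior.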

6.
Objective: Touch input suffers from the "fat finger" problem, target occlusion, and limb fatigue, all of which reduce input accuracy. This paper explores concrete strategies that exploit the inherent capabilities of mobile touch devices to address the difficulty of selecting small targets and the low accuracy of touch input, and compares these strategies. Methods: Drawing on the tilt and motion-acceleration sensing supported by mobile touch devices such as phones and tablets, we empirically examined the performance, characteristics, and applicable scenarios of four target selection techniques: direct touch, pan-and-magnify, tilt, and attraction. Results: In a target selection experiment comparing the four techniques, the mean target selection times, error rates, and subjective ratings of direct touch, pan-and-magnify, tilt, and attraction were (86.06 ms, 62.28%, 1.95), (1,327.99 ms, 6.93%, 3.87), (1,666.11 ms, 7.63%, 3.46), and (1,260.34 ms, 6.38%, 3.74), respectively. Conclusion: The three improved techniques showed better target selection capability than direct touch.

7.
Navigating vast information spaces through mobile interfaces has become a common activity in older adults' everyday lives. Studies suggest that interface metaphors can facilitate users' mental model development and information processing when using mobile technologies. However, we know little about how metaphors affect older adults' mobile navigation behavior, and which user characteristics matter during this perceptual and cognitive process. To investigate this, a card interface with a 3D metaphor and a list interface without 3D metaphors were compared among twenty-two participants performing four navigation tasks. User characteristics including demographic factors, technology experience, and user capabilities were examined, and the participants' navigation performance and subjective evaluations were measured as the dependent variables. From the results, we recommend the list interface without 3D metaphors as the more beneficial choice for older adults: it yielded better navigation performance, although the differences are not statistically significant. Moreover, navigation performance using the card interface with a 3D metaphor was significantly associated with participants' perceptual speed, so this interface may be more sensitive to capability declines. Valuable insights into older adults' mobile navigation performance and preferences are discussed, and important implications for the design of mobile navigation user interfaces are proposed based on the results.

Relevance to Industry

The experimental results propose a more beneficial way to present contents on a mobile user interface for older adults and provide valuable insights to help designers and industry understand older adults' usage and perceptions of 3D metaphors when navigating with mobile interfaces.

8.
Computer users with motor impairments find it difficult and, in many cases, impossible to access PC functionality through the physical keyboard-and-mouse interface. Studies show that even able-bodied users experience similar difficulties when interacting with mobile devices, due to the reduced size and usability of the input interfaces. Advances in speech recognition have made it possible to design speech interfaces for alphanumeric data entry and indirect manipulation (cursor control). Although several related commercial applications exist, such systems do not provide a complete solution for arbitrary keyboard and mouse access, such as the access needed for, say, typing, compiling, and executing a C++ program. We carried out a usability study to support the development of a speech user interface for arbitrary keyboard access and mouse control. The study showed that speech interaction with an ideal listening keyboard is better for users with motor impairments than handstick, in terms of task completion time (37% better), typing rate (74% better), and error rates (63% better). We believe that these results apply to both permanent and task-induced motor impairments. In particular, a follow-up experiment showed that handstick approximates conventional modes of alphanumeric input available on mobile devices (e.g., PDAs, cellular phones, and personal organizers). These modes of input include miniaturized keyboards, stylus soft keyboards, cellular phone numberpads, and handwriting recognition software. This result suggests that a listening keyboard would be an effective mode for alphanumeric input on future mobile devices. This study contributed to the development of SUITEKeys, a speech user interface for arbitrary keyboard and mouse access, available for MS platforms as freeware.

9.
Emotion is a key aspect of user experience. To design a user interface for positive emotional experience, the affective quality of the user interface needs to be carefully considered. A major factor of affective quality in today's user interfaces for digital media is interactivity, in which motion feedback plays a significant role. This role of motion feedback is particularly evident in touchscreen user interfaces, which have been adopted rapidly in mobile devices. This paper presents two empirical studies performed to increase our understanding of motion feedback in terms of affective quality in mobile touchscreen user interfaces. In the first study, the relationships between three general motion properties and a selected set of affective qualities are examined. The results of this study provide a guideline for the design of motion feedback in existing mobile touchscreen user interfaces. The second study explores a new dimension of interactivity, the Weight factor of Laban's Effort system. To experiment with the Weight factor in a mobile touchscreen user interface, a pressure-sensitive prototype was developed to recognize the amount of force applied by the user's finger action. With this prototype, the effects of implementing pressure requirements on four different types of user interfaces were examined. Results show that implementing the Weight factor can significantly influence the affective quality and complement the physical feel of a user interface. The issues to consider for effective implementation are also discussed.

10.
Over the past few decades, users have found it clumsy to input Chinese on mobile devices, partly because the keyboard/keypad layout was originally designed for inputting Latin alphabets. To improve this user experience, we propose Stroke++, a novel Chinese input method for touch screen mobile devices. More specifically, Stroke++ provides an efficient keypad layout, a friendly user interface, and intelligent character/phrase candidate generation algorithms. Stroke++ splits a Chinese character into multiple radicals. By leveraging the hieroglyphic properties of Chinese characters, our method requires users to input only a subset of the radicals to identify the target character, making it much faster and easier to input Chinese on mobile phones. Our user study results show that Stroke++ outperforms most major Chinese input methods on mobile devices, including Stroke, Pinyin and Hand Writing Recognition (HWR), in terms of input efficiency and usability. Moreover, we also demonstrate that Stroke++ offers a low entry barrier for Chinese-input novices.
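The radical-subset lookup at the core of such a method can be sketched as follows; the decomposition table is a tiny hypothetical stand-in for the full dictionary an input method would ship:

```python
# Hypothetical radical decompositions for a few characters.
DECOMPOSITIONS = {
    "好": ["女", "子"],
    "明": ["日", "月"],
    "林": ["木", "木"],
    "休": ["亻", "木"],
}

def candidates(entered_radicals):
    """Return characters whose radical list contains all entered
    radicals (with multiplicity); a subset suffices to narrow down
    the target character."""
    result = []
    for char, radicals in DECOMPOSITIONS.items():
        pool = list(radicals)
        ok = True
        for r in entered_radicals:
            if r in pool:
                pool.remove(r)  # consume one occurrence
            else:
                ok = False
                break
        if ok:
            result.append(char)
    return result
```

Entering just 木 already narrows the candidates to 林 and 休, illustrating why a partial radical sequence can be enough to identify the target character.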

11.
Several studies have been carried out on augmented reality (AR)-based environments that deal with user interfaces for manipulating and interacting with virtual objects, aimed at improving immersive feeling and natural interaction. Most of these studies have utilized AR paddles or AR cubes for interactions. However, these interactions overly constrain the users in their ability to directly manipulate AR objects and are limited in providing a natural feel in the user interface. This paper presents a novel approach to natural and intuitive interactions through a directly hand-touchable interface in various AR-based user experiences. It combines markerless augmented reality with a depth camera to effectively detect multiple hand touches in an AR space. Furthermore, to simplify hand touch recognition, the point cloud generated by Kinect is analyzed and filtered. The proposed approach can easily trigger AR interactions, allows users to experience more intuitive and natural sensations, and provides greater control efficiency in diverse AR environments. Furthermore, it can easily solve the occlusion problem of the hand and arm region inherent in conventional AR approaches through analysis of the extracted point cloud. We present the effectiveness and advantages of the proposed approach by demonstrating several implementation results, such as an interactive AR car design and a touchable AR pamphlet. We also present an analysis of a usability study comparing the proposed approach with other well-known AR interactions.

12.
This article identifies, catalogues, and discusses factors that are responsible for causing visual impairment of either a pathological or situational nature for touch and gesture input on smart mobile devices. Because the vast majority of interactions with touchscreen devices are highly visual in nature, any factor that prevents a clear, direct view of the mobile device’s screen can have potential negative implications on the effectiveness and efficiency of the interaction. This work presents the first overview of such factors, which are grouped in a catalogue of users, devices, and environments. The elements of the catalogue (e.g., psychological factors that relate to the user, or the social acceptability of mobile device use in public that relates to the social environment) are discussed in the context of current eye pathology classification from medicine and the recent literature in human–computer interaction on mobile touch and gesture input for people with visual impairments, for which a state-of-the-art survey is conducted. The goal of this work is to help systematize research on visual impairments and mobile touchscreen interaction by providing a catalogue-based view of the main causes of visual impairments affecting touch and gesture input on smart mobile devices.

13.
《Ergonomics》2012,55(4):590-611
Modern interfaces within the aircraft cockpit integrate many flight management system (FMS) functions into a single system. The success of a user's interaction with an interface depends upon the optimisation between the input device, tasks and environment within which the system is used. In this study, four input devices were evaluated using a range of Human Factors methods, in order to assess aspects of usability including task interaction times, error rates, workload, subjective usability and physical discomfort. The performance of the four input devices was compared using a holistic approach and the findings showed that no single input device produced consistently high performance scores across all of the variables evaluated. The touch screen produced the highest number of ‘best’ scores; however, discomfort ratings for this device were high, suggesting that it is not an ideal solution as both physical and cognitive aspects of performance must be accounted for in design.

Practitioner summary: This study evaluated four input devices for control of a screen-based flight management system. A holistic approach was used to evaluate both cognitive and physical performance. Performance varied across the dependent variables and between the devices; however, the touch screen produced the largest number of ‘best’ scores.

14.
When services providing real-time information are accessible from mobile devices, functionality is often restricted and no adaptation of the user interface to the mobile device is attempted. Mobile access to real-time information requires designs for multi-device access and automated facilities for the adaptation of user interfaces. We present TapBroker, a push update service that provides mobile and stationary access to information on autonomous agents trading stocks. TapBroker is developed for the Ubiquitous Interactor system and is accessible from Java Swing user interfaces and Web user interfaces on desktop computers, and from a Java AWT user interface on mobile phones. New user interfaces can easily be added without changes in the service logic.

15.
Beyond WIMP     
The WIMP (windows, icons, menus, point-and-click devices) graphical user interface (GUI) is not an ideal interface. Expert users find pure WIMP GUIs frustratingly slow and thus use keyboard shortcuts. In addition, WIMP GUIs don't scale well: GUI bloat accompanies feature bloat. The most serious limitation, however, is that WIMP GUIs were designed for keyboard-plus-mouse desktop computing environments, which take advantage only of vision and primitive touch. Our goal should be to design user interfaces that match our human perceptual, cognitive, manipulative, and social abilities. We want to interact as naturally with computers and intelligent devices as we communicate and collaborate with each other and as we manipulate our physical environments. Indeed, computers are increasingly being used to facilitate human communication, collaboration, and social interaction. Therefore, we should increasingly focus on human-human interaction, not just human-computer interaction. This change in focus reflects not only changes in the mode of computer use but also the increasingly invisible nature of the computer in our environments. New environments must serve as forcing functions that give us far higher expectations for user interfaces than we have had previously. Furthermore, the potential of future interfaces for handicapped people is phenomenal. Thus, despite the technology challenges, the greatest challenge lies in our understanding of human capabilities and how to incorporate that understanding into new design tools, methodologies, and user interfaces.

16.
In this article, we present a practical approach to analyzing mobile usage environments. We propose a framework for analyzing the restrictions that characteristics of different environments pose on the user's capabilities. These restrictions along with current user interfaces form the cost of interaction in a certain environment. Our framework aims to illustrate that cost and what causes it. The framework presents a way to map features of the environment to the effects they cause on the resources of the user and in some cases on the mobile device. This information can be used for guiding the design of adaptive and/or multimodal user interfaces or devices optimized for certain usage environments. An example of using the framework is presented along with some major findings and three examples of applying them in user interface design.

17.
The context of mobility raises many issues for geospatial applications providing location-based services. Mobile device limitations, such as small user interface footprint and pen input whilst in motion, result in information overload on such devices and interfaces which are difficult to navigate and interact with. This has become a major issue as mobile GIS applications are now being used by a wide group of users, including novice users such as tourists, for whom it is essential to provide easy-to-use applications. Despite this, comparatively little research has been conducted to address the mobility problem. We are particularly concerned with the limited interaction techniques available to users of mobile GIS which play a primary role in contributing to the complexity of using such an application whilst mobile. As such, our research focuses on multimodal interfaces as a means to present users with a wider choice of modalities for interacting with mobile GIS applications. Multimodal interaction is particularly advantageous in a mobile context, enabling users of location-based applications to choose the mode of input that best suits their current task and location. The focus of this article concerns a comprehensive user study which demonstrates the benefits of multimodal interfaces for mobile geospatial applications.

18.
Various interfaces have been suggested for mobile devices, including touch gesture and embedded sensor‐based interfaces. However, if a user's task requires a thorough look, these interfaces hinder sight of view in display and thus can be inappropriate for the tasks. This problem is more important especially in the mobile computer aided design (CAD) context, which performs visually demanding tasks on the limited screen. To address this point, this study suggests a mobile interface utilizing the rear camera of the device and describes the function mapping for CAD. The suggested interface detects finger attachment and movement direction. The number of colors in camera vision and local binary pattern are used as features, and a support vector machine (SVM) is used for feature classification. A prototype application is designed for validation, and appropriate SVM models are selected through benchmarking tests. The validation results show that the suggested interface can perform with high accuracy and low computational resources.
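The local binary pattern feature mentioned above can be computed per pixel as in this sketch (a basic 8-neighbour LBP; the paper's exact feature extraction and SVM setup are not reproduced here):

```python
def lbp_code(image, x, y):
    """8-neighbour local binary pattern code for pixel (x, y) of a 2D
    grayscale image (list of rows), reading neighbours clockwise from
    the top-left; each neighbour >= centre contributes one bit."""
    center = image[y][x]
    neighbours = [
        image[y - 1][x - 1], image[y - 1][x], image[y - 1][x + 1],
        image[y][x + 1], image[y + 1][x + 1], image[y + 1][x],
        image[y + 1][x - 1], image[y][x - 1],
    ]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << bit
    return code
```

A histogram of these codes over a camera frame gives a cheap, illumination-robust texture descriptor, which is one reason LBP pairs well with an SVM on resource-limited mobile hardware.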

19.
Pointing devices, essential input tools for the graphical user interface (GUI) of desktop computers, require precise motor control and dexterity to use. Haptic force-feedback devices provide the human operator with tactile cues, adding the sense of touch to existing visual and auditory interfaces. However, the performance enhancements, comfort, and possible musculoskeletal loading of using a force-feedback device in an office environment are unknown. Hypothesizing that the time to perform a task and the self-reported pain and discomfort of the task improve with the addition of force feedback, 26 people ranging in age from 22 to 44 years performed a point-and-click task 540 times with and without an attractive force field surrounding the desired target. The point-and-click movements were approximately 25% faster with the addition of force feedback (paired t-tests, p < 0.001). Perceived user discomfort and pain, as measured through a questionnaire, were also smaller with the addition of force feedback (p < 0.001). However, this difference decreased as additional distracting force fields were added to the task environment, simulating a more realistic work situation. These results suggest that for a given task, use of a force-feedback device improves performance, and potentially reduces musculoskeletal loading during mouse use. Actual or potential applications of this research include human-computer interface design, specifically that of the pointing device extensively used for the graphical user interface.
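An attractive force field around a target of the kind used in this experiment can be modelled as a simple spring-like pull toward the target centre; the radius and gain below are illustrative assumptions, not the study's parameters:

```python
import math

def attractive_force(cursor, target, radius=60.0, gain=0.5):
    """Force vector (in device units) pulling the cursor toward the
    target centre once it enters the circular force field; magnitude
    grows linearly as the cursor nears the target."""
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > radius:
        return (0.0, 0.0)               # outside the field: no force
    magnitude = gain * (radius - dist)  # stronger nearer the target
    return (magnitude * dx / dist, magnitude * dy / dist)
```

Distracting fields, as in the study's more realistic condition, would simply be additional calls to this function for non-target widgets, with the resulting vectors summed.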

20.
A variety of studies have been conducted to improve methods of selecting a tiny virtual target on small touch screen interfaces of handheld devices such as mobile phones and PDAs. These studies, however, focused on a specific selection method, and did not consider various layouts resulting from different target sizes and densities on the screen. This study proposes a Two-Mode Target Selection (TMTS) method that automatically detects the target layout and changes to an appropriate mode using the concept of an activation area. The usability of TMTS was compared experimentally to those of other methods. TMTS changed to the appropriate mode successfully for a given target layout and showed the shortest task completion time and the fewest touch inputs. TMTS was also rated by the users as the easiest to use and the most preferred. TMTS could significantly increase the ease, accuracy, and efficiency of target selection, and thus enhance user satisfaction when the users select targets on small touch screen devices.

Relevance to Industry

The results of this study can be used to develop fast and accurate target selection methods in handheld devices with touch screen interfaces especially when the users use their thumb to activate the desired target.
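A layout-dependent mode switch in the spirit of TMTS might look like the sketch below; the abstract does not specify the activation-area heuristic, so the thresholds here are assumptions (9 mm is a common thumb-target recommendation, used purely for illustration):

```python
def select_mode(target_width_mm, spacing_mm, min_direct_mm=9.0,
                min_spacing_mm=2.0):
    """Pick a selection mode from the detected target layout: direct
    touch when targets are comfortably finger-sized and well separated,
    otherwise a magnified mode for small, dense layouts."""
    if target_width_mm >= min_direct_mm and spacing_mm >= min_spacing_mm:
        return "direct"
    return "magnify"
```

The point of the two-mode design is that the user never chooses explicitly: the layout detection picks the cheap direct mode whenever it is accurate enough, and falls back to the heavier mode only for tiny, dense targets.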
