Similar Literature
 20 similar documents found (search time: 46 ms)
1.
Neck pain is a significant health problem due to its high incidence rates and economic costs. The use of touch-screen mobile devices is becoming pervasive in modern society and may further aggravate this already prevalent health problem. However, our current understanding of cervical spine biomechanics during the operation of touch-screen mobile devices is very limited. This study evaluated neck extensor muscle activity and cervical spine kinematics during the operation of a touch-screen tablet and a smart phone. Three variables, DEVICE, LOCATION, and TASK, were treated as independent variables. NASA TLX ratings revealed that "Gaming" was the least difficult task and "Typing" the most difficult. Participants maintained significantly deeper neck flexion when operating a smart phone (44.7°), when the device was set on a table (46.4°), and while performing a "Typing" task (45.6°). Lower levels of neck muscle activity were observed while performing a "Reading" task and while holding the device in hand. Neck muscle activity was also lower when using a smart phone than a tablet, although this difference was not statistically significant.
Relevance to industry: The study demonstrated that users maintain deep neck flexion when using touch-screen mobile devices. In recent years, mobile smart devices have grown increasingly popular in various occupational environments. The findings may be useful in implementing human-centered task designs to reduce neck injury risks among mobile device users.

2.
3.
The objective of this study is to determine whether ruggedized handheld scanning devices used for industrial purposes should adopt one of the most prominent features of commercial smart devices: data entry via touchscreen rather than a physical keypad. Due to harsh environments, physical keys have been the preferred means of input for rugged handhelds. Advances in touchscreen technology, along with the technology expectations brought about by the workforce demographic shift, are driving a notable move toward touch-only input for rugged equipment. The hypotheses predicted a difference in usability by worker generation, so 20 Gamers (Millennials) and 20 Baby Boomers performed manual data entry on two ruggedized handhelds: one with physical keys and one touchscreen-only. Overall, participants took 19% less time on the touchscreen than on physical keys. Gamers were 31% faster than Boomers on physically keyed devices and 28% faster on touchscreen-only devices. There was no significant difference in entry errors for either device or age group; however, an 83% increase in permanent errors by Gamers on the touchscreen was noted. Transitioning to a rugged device with touch-only input is recommended for industry, as it could increase work productivity. This study presents timely insight into a new tool option for industrial workers.
Relevance to industry: This research describes the paradigm shift in the ruggedized handheld device market from physical keys to touchscreen-only input and identifies the real-time productivity savings and error risks that different generations of workers in the industrial workforce can expect.

4.
刘杰, 黄进, 田丰, 胡伟平, 戴国忠, 王宏安. Journal of Software (《软件学报》), 2017, 28(8): 2080-2095
This paper analyzes the current state of touch interaction techniques on mobile handheld and wearable devices and the problems they face. Based on the temporal and spatial continuity of interaction movements, it proposes a hybrid gesture input method that combines the contact-surface trajectory and the in-air trajectory of a touch action, possessing the characteristics and advantages of both mid-air gestures and touch gestures. Building on the concept of a continuous interaction space, hybrid gestures, mid-air gestures, and surface touch gestures are unified in a layered processing model of the continuous interaction space comprising an in-air layer, a surface layer, and a hybrid layer. Unified data definitions and a data-conversion workflow are given, a general-purpose gesture recognition framework is constructed, and the trajectory segmentation and gesture classification methods are described. Finally, an example application was designed, and experiments verified the usability of hybrid gestures and the feasibility of the layered processing model. The experiments show that hybrid gesture input combines the advantages of surface touch input and mid-air gesture input, offering good spatial freedom while maintaining recognition efficiency.

5.
As the use of mobile touch devices continues to increase, distinctive user experiences can be provided through direct manipulation. The characteristics of touch interfaces should therefore be considered with respect to controllability. This study aims to provide a design approach for touch-based user interfaces. A derivation procedure for the touchable area is proposed as a design guideline based on input behavior. To this end, two empirical tests were conducted on a smart phone interface. Fifty-five participants were asked to perform a series of input tasks on a screen. As a result, a touchable area with a desired hit rate of 90% could be derived depending on the icon design. To improve the applicability of the touchable area, user error was analyzed using an omission-commission classification. Among target hit rates of 90%, 95%, and 99%, the 95% design proved most suitable. This study contributes practical implications for user interaction design with finger-based controls.
Relevance to industry: This research describes a distinctive design approach that guarantees the desired touch accuracy for effective use of mobile touch devices. The results should encourage interface designers to take the input behavior of fingers into account from a user-centered perspective.
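The abstract does not reproduce the paper's derivation procedure, but the core idea, a touchable area sized so that a desired fraction of observed touches lands inside it, can be sketched as a radial quantile over touch points. The function name, the Gaussian touch-noise model, and all numbers below are illustrative assumptions, not the study's actual method.

```python
import numpy as np

def touchable_radius(touch_points, target_center, hit_rate=0.90):
    """Radius around the target centre that contains `hit_rate` of the
    observed touch points -- a simple radial-quantile sketch, not the
    paper's actual derivation procedure."""
    offsets = np.linalg.norm(touch_points - target_center, axis=1)
    return float(np.quantile(offsets, hit_rate))

# Simulated touch points scattered around an icon at (100, 200) px with a
# 4 px standard deviation (an assumed touch-noise model).
rng = np.random.default_rng(0)
center = np.array([100.0, 200.0])
touches = rng.normal(loc=center, scale=4.0, size=(500, 2))

r90 = touchable_radius(touches, center, 0.90)  # area for a 90% hit rate
r95 = touchable_radius(touches, center, 0.95)  # area for a 95% hit rate
```

A higher target hit rate necessarily yields a larger touchable area, which is why the study's comparison of 90%, 95%, and 99% designs trades screen real estate against accuracy.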

6.
Ergonomics, 2012, 55(5): 818-831
Touch screens are now ubiquitous, appearing on public kiosks, industrial control panels, and personal mobile devices. Numerical typing is a frequent task performed on touch screens, but it is subject to human error and slow responses. This study aims to identify innate differences between touch screens and standard physical keypads in the context of numerical typing by eliminating confounding factors. The effects of precise visual feedback and of the urgency of numerical typing were also investigated. The results showed that touch screens were as accurate as physical keypads, but responses on touch screens were indeed executed more slowly, as indicated by both pre-motor reaction time and reaction time. Providing precise visual feedback caused more errors, and no interaction between device and urgency was found for reaction time. To improve the usability of touch screens, designers should focus on reducing response complexity and be cautious about the use of visual feedback.

Practitioner Summary: The study revealed that the slower responses on touch screens involve more complex human cognition in formulating motor responses. Attention should be given to designing precise visual feedback appropriately, so that distractions and competition for visual resources can be avoided to improve human performance on touch screens.

7.
In this work, we propose a new mode of interaction using hand gestures captured by the back camera of a mobile device. Using a simple, intuitive two-finger picking gesture, the user can perform mouse-click and drag-and-drop operations from the back of the device, providing unobstructed, touch-free interaction. This method allows users to operate a tablet while maintaining full awareness of the display, which is especially suitable for working environments such as machine shops, garages, kitchens, gyms, or construction sites, where people's hands may be dirty, wet, or gloved. The speed, accuracy, and error rate of this interaction were evaluated and compared with typical touch interaction in an empirical study. The results show that although this method is not, in general, as efficient and accurate as direct touch, participants still considered it an effective and intuitive way to interact with mobile devices in environments where direct touch is impractical.

8.
Auditory interfaces can outperform visual interfaces when a primary task, such as driving, competes for the attention of a user controlling a device such as a radio. In emerging interfaces enabled by camera tracking, auditory displays may also provide viable alternatives to visual displays. This paper presents a user study of interoperable auditory and visual menus, in which the control gestures remain the same in both domains. The tested control methods included a novel free-hand gesture interaction with camera-based tracking and touch-screen interaction with a tablet. The participants' task was to select numbers from a visual or an auditory menu in a circular layout and a numeric keypad layout. The results show that, even with the participants' full attention on the task, the performance and accuracy of the auditory interface were the same as, or even slightly better than, the visual interface when controlled with free-hand gestures. The auditory menu was measured to be slower in touch-screen interaction, but the questionnaire revealed that over half of the participants felt the circular auditory menu was faster than the visual menu. Furthermore, touch-screen interaction with the numeric layout and visual and auditory feedback was measured to be fastest, the touch screen with the circular menu second fastest, and the free-hand gesture interface slowest. The results suggest that auditory menus can provide a fast and desirable interface for controlling devices with free-hand gestures.

9.
We have developed a gesture input system that provides a common interaction technique across mobile, wearable, and ubiquitous computing devices of diverse form factors. In this paper, we combine our gestural input technique with speech output and test whether the absence of a visual display impairs usability in this kind of multimodal interaction. This is particularly relevant to mobile, wearable, and ubiquitous systems, where visual displays may be restricted or unavailable. We conducted the evaluation using a prototype system combining gesture input and speech output to provide information to patients in a hospital Accident and Emergency Department. One group of participants was instructed to access various services using gestural input, with the services delivered by automated speech output. Throughout their tasks, these participants could see a visual display on which a GUI presented the available services and their corresponding gestures. Another group performed the same tasks without this visual display. We predicted that the participants without the visual display would make more incorrect gestures and take longer to perform correct gestures than those with it. We found no significant difference in the number of incorrect gestures made, and the participants with the visual display in fact took longer than those without it. This suggests that, for a small set of semantically distinct services with memorable and distinct gestures, the absence of a GUI visual display does not impair the usability of a system with gesture input and speech output.

10.
Touchscreen human–machine interfaces (HMIs) are commonly employed as the primary control interface and touch-point of vehicles. However, there has been very little theoretical work to model the demand associated with such devices in the automotive domain. Instead, touchscreen HMIs intended for deployment within vehicles tend to undergo time-consuming and expensive empirical testing and user trials, typically requiring fully functioning prototypes, test rigs, and extensive experimental protocols. While such testing is invaluable and must remain within the normal design/development cycle, there are clear benefits, both fiscal and practical, to the theoretical modeling of human performance. We describe the development of a preliminary model of human performance that makes a priori predictions of the visual demand (total glance time, number of glances, and mean glance duration) elicited by in-vehicle touchscreen HMI designs when used concurrently with driving. The model incorporates information-theoretic components based on Hick–Hyman Law decision/search time and Fitts' Law pointing time, and considers the anticipation afforded by structuring and by repeated exposure to an interface. Encouraging validation results, obtained by applying the model to a real-world prototype touchscreen HMI, suggest that it may provide an effective design and evaluation tool, capable of making valuable predictions regarding the limits of visual demand/performance associated with in-vehicle HMIs much earlier in the design cycle than traditional design evaluation techniques. Further validation work is required to explore the behavior associated with more complex tasks requiring multiple screen interactions, as well as other HMI design elements and interaction techniques. Results are discussed in the context of facilitating the design of in-vehicle touchscreen HMIs that minimize visual demand.
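The model's information-theoretic components can be illustrated with a minimal sketch: a Hick–Hyman term for decision/search time plus a Fitts' Law term for pointing time. The coefficients and the example menu below are arbitrary placeholders, not the fitted constants or validation data from the paper.

```python
import math

# Hypothetical coefficients (seconds and seconds/bit); the paper's fitted
# constants are not given in the abstract, so these are placeholders.
A_HH, B_HH = 0.2, 0.15   # Hick-Hyman intercept and slope
A_F, B_F = 0.1, 0.12     # Fitts' Law intercept and slope

def hick_hyman_time(n_choices: int) -> float:
    """Decision/search time grows with the information content log2(n + 1)."""
    return A_HH + B_HH * math.log2(n_choices + 1)

def fitts_time(distance: float, width: float) -> float:
    """Pointing time grows with the index of difficulty log2(D/W + 1)."""
    return A_F + B_F * math.log2(distance / width + 1)

def predicted_step_time(n_choices: int, distance: float, width: float) -> float:
    """One interaction step: search/decide among n targets, then point to one."""
    return hick_hyman_time(n_choices) + fitts_time(distance, width)

# An 8-item menu whose chosen target is 120 px away and 15 px wide:
t = predicted_step_time(8, 120.0, 15.0)
```

Because both terms are logarithmic, the sketch reproduces the model's qualitative behavior: doubling the number of menu items or halving target width adds a fixed increment of predicted demand rather than doubling it.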

11.
User input on television (TV) typically requires a mediator device such as a handheld remote control. While this is a well-established interaction paradigm, a handheld device has serious drawbacks: it can easily be misplaced due to its mobility, and in the case of a touch-screen interface it also requires additional visual attention. Emerging interaction paradigms such as 3D mid-air gestures using novel depth sensors (e.g. Microsoft Kinect) aim to overcome these limitations but are known to be tiring. In this article, we propose to leverage the palm as an interactive surface for TV remote control. We present three user studies that form the basis of our four contributions: we (1) qualitatively explore the conceptual design space of the proposed imaginary palm-based remote control in an explorative study, (2) quantitatively investigate the effectiveness and accuracy of such an interface in a controlled experiment, (3) assess user acceptance in a controlled laboratory evaluation comparing the PalmRC concept with the two most typical existing input modalities, a conventional remote control and a touch-based remote-control interface on a smart phone, in terms of user experience, task load, and overall preference, and (4) contribute PalmRC, an eyes-free, palm-surface-based TV remote control. Our results show that the palm has the potential to be leveraged for device-less, eyes-free TV remote interaction without any third-party mediator device.

12.
Smartphones are developing rapidly and have greatly improved people's quality of life. To use smartphone resources more effectively and improve the user experience, this paper proposes a smartphone gesture recognition system based on ultrasound (the AGRS system). The system uses the device's built-in speaker to emit a 20 kHz ultrasonic signal and its microphone to receive the reflected signal. AGRS uses the gyroscope to help determine the phone's current orientation and applies a false-alarm rate to reduce gesture misrecognition. The system exploits the Doppler effect of sound waves to extract features, processes the acoustic signal with an FFT algorithm, and finally applies a suitable classifier to recognize gestures. Experimental results show that the AGRS gesture recognition rate exceeds 95%.
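The Doppler-based feature extraction described here can be sketched as follows: emit a 20 kHz pilot tone, window the received signal, and locate the FFT peak near the pilot frequency; the frequency shift maps to radial hand velocity via Δf ≈ 2·v·f0/c. The sample rate, window length, and search band below are assumed values for illustration, not the actual AGRS parameters.

```python
import numpy as np

FS = 48_000       # assumed microphone sample rate (Hz)
F0 = 20_000       # emitted ultrasonic pilot tone (Hz)
C = 343.0         # speed of sound in air (m/s)

def doppler_shift(received, fs=FS, f0=F0):
    """Estimate the Doppler shift of the reflected tone via an FFT peak search."""
    windowed = received * np.hanning(len(received))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(received), d=1.0 / fs)
    # Search only near the pilot tone, where the echo should appear.
    band = (freqs > f0 - 500) & (freqs < f0 + 500)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak - f0

def radial_velocity(shift_hz, f0=F0, c=C):
    """A reflector moving at speed v shifts the echo by about 2*v*f0/c."""
    return shift_hz * c / (2 * f0)

# Synthetic echo from a hand moving toward the phone at 0.5 m/s:
t = np.arange(4096) / FS
echo = np.sin(2 * np.pi * (F0 + 2 * 0.5 * F0 / C) * t)
shift = doppler_shift(echo)
```

The estimate is quantized to the FFT bin width (FS / 4096 ≈ 11.7 Hz here), which is one reason a real system would feed a richer spectral feature vector to a classifier rather than a single peak frequency.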

13.
Recent developments in technology have had noteworthy positive impacts on human-computer interaction (HCI). It is now possible to interact with computers using voice commands, touchscreens, eye movements, hand gestures, etc. This paper compiles some of the innovative HCI advances in various areas, e.g., specialised input/output devices, virtual or augmented reality, and wearable technology. It also identifies some future research directions.

14.
How we interface and interact with computing, communications, and entertainment devices is going through revolutionary changes, with natural user inputs based on touch, gesture, and voice replacing or augmenting traditional user interfaces based on the keyboard, mouse, trackball, joystick, and so forth. In particular, mobile devices have in recent years rapidly adopted touch-screen technologies as the primary means of user interface, enabling fun new interactive applications and experiences for users. We have dedicated this special section of the Journal of the Society for Information Display to presenting an overview of recent developments and trends in the emerging field of interactive display technologies.

15.
Recent advances in computing devices push researchers to envision new interaction modalities that go beyond traditional mouse and keyboard input. Typical examples are large displays, for which researchers hope to create more "natural" means of interaction by using human gestures and body movements as input. In this article, we reflect on this goal of designing gestures that people can easily understand and use, and on how designers of gestural interaction can capitalize on 30 years of research on visual languages to achieve it. Concretely, we argue that gestures can be regarded as "visual expressions to convey meaning" and thus constitute a visual language. Based on what visual language research has taught us, we then explain why the design of a generic gesture set or language that spans many applications and devices is likely to fail. We also discuss why we recommend gestural manipulations that let users directly manipulate on-screen objects instead of issuing commands with symbolic gestures whose meaning varies among users, contexts, and cultures.

16.
Ergonomics, 2012, 55(4): 590-611
Modern interfaces within the aircraft cockpit integrate many flight management system (FMS) functions into a single system. The success of a user's interaction with an interface depends upon the optimisation among the input device, the tasks, and the environment within which the system is used. In this study, four input devices were evaluated using a range of Human Factors methods in order to assess aspects of usability including task interaction times, error rates, workload, subjective usability, and physical discomfort. The performance of the four input devices was compared using a holistic approach, and the findings showed that no single input device produced consistently high performance scores across all of the variables evaluated. The touch screen produced the highest number of 'best' scores; however, discomfort ratings for this device were high, suggesting that it is not an ideal solution, as both physical and cognitive aspects of performance must be accounted for in design.

Practitioner summary: This study evaluated four input devices for control of a screen-based flight management system. A holistic approach was used to evaluate both cognitive and physical performance. Performance varied across the dependent variables and between the devices; however, the touch screen produced the largest number of 'best' scores.

17.
Wearable projector and camera (PROCAM) interfaces, which provide a natural, intuitive, and spatial experience, have been studied for many years. However, existing research on hand input for such systems has focused on stable settings such as sitting or standing, which does not fully satisfy the interaction requirements of complex real-life situations, especially when people are moving. Moreover, more and more mobile phone users use their phones while walking. As a mobile computing device, a wearable PROCAM system should allow for the fact that mobility can influence usability and user experience. This paper proposes a wearable PROCAM system with which the user can interact by inputting finger gestures, such as the hover gesture and the pinch gesture, on projected surfaces. A lab-based evaluation was conducted that compared the two gestures (pinch and hover) in three situations (sitting, standing, and walking) to find out: (1) how, and to what degree, does mobility influence different gesture inputs, and are there significant differences between gesture inputs in different settings? (2) What causes these differences? (3) What do people think about the configuration of such systems, and to what extent does manual focus impact such interactions? From both qualitative and quantitative points of view, the main findings imply that mobility impacts gesture interaction to varying degrees. The pinch gesture was less affected than the hover gesture in mobile settings. Both gestures were impacted more while walking than while sitting or standing by all four negative factors (lack of coordination, jittering hand effect, tired forearms, and extra attention paid). Manual focus influenced mobile projection interaction. Based on the findings, implications for the design of a mobile projection interface with gestures are discussed.

18.
Ergonomics, 2012, 55(8): 733-744
An experimental study was conducted to evaluate the physical risk factors associated with touchscreen use in a desktop personal computer (PC) setting. Subjective ratings of visual/body discomfort, shoulder and neck muscle activity, elbow movement, and user-preferred workstation positions were quantified for 24 participants during a standardised computer task performed with a standard keyboard and mouse (traditional setting), with a touchscreen and the standard keyboard (mixed-use condition), and with the touchscreen only. Touchscreen use was associated with significant increases in subjective discomfort in the shoulder, neck, and fingers, in myoelectric activity of the shoulder and neck muscles, and in the percentage of task duration during which the arms were held in the air. Participants placed the touchscreen closer and lower when using touch interfaces than in the traditional setting. The results suggest that users would need more frequent breaks and proper armrests to reduce the physical risks associated with touchscreen use in desktop PC settings.

Statement of Relevance: In this study, the subjective discomfort, work posture, and muscle activity of touchscreen desktop PC users were quantitatively evaluated. The findings can be used to understand the potential risks of using a touchscreen desktop PC and to suggest design recommendations for computer workstations with touchscreens.

19.
Objective: Touch input suffers from the "fat finger" problem, target occlusion, and limb fatigue, which reduce its precision. This paper explores strategies that exploit the inherent capabilities of mobile touch devices to address the difficulty of selecting small targets and the low precision of touch input, and compares those strategies. Method: Drawing on the tilt and motion-acceleration sensing supported by mobile touch devices such as phones and tablets, four target selection techniques, direct touch, pan-and-zoom magnification, tilt-based selection, and attraction-based selection, were empirically examined for performance, characteristics, and applicable scenarios. Results: In a target selection experiment comparing the four techniques, the mean selection time, error rate, and subjective rating were (86.06 ms, 62.28%, 1.95) for direct touch, (1327.99 ms, 6.93%, 3.87) for pan-and-zoom magnification, (1666.11 ms, 7.63%, 3.46) for tilt-based selection, and (1260.34 ms, 6.38%, 3.74) for attraction-based selection. Conclusion: The three improved techniques exhibited better target selection capability than direct touch.

20.
Reading e-books on touch-based mobile devices such as smartphones and tablet personal computers (PCs) is increasing. We conducted a comparative study of the usability of e-books on smartphones and tablet PCs, which are typical touch-based mobile devices. An experiment was carried out to examine the effects of graphic metaphor and gesture interaction. The study evaluated reading speed, readability, similarity, and satisfaction for 16 combinations of e-book interfaces (two metaphor levels × four display-size and screen-mode combinations × two gesture levels). Overall, performance and subjective ratings were better on tablet PCs, with larger fonts on a larger screen, than on smartphones with smaller fonts on smaller screens. In smartphone landscape mode, the page-turning effect hindered reading speed; in contrast, readability, similarity, and satisfaction were higher when the page-turning effect was provided. Reading was faster when a flicking interaction was provided on tablet PCs. In terms of readability, portrait mode was better on smartphones, and tablet PC portrait mode was the most satisfactory overall.
