Similar Documents
 20 similar documents retrieved (search time: 359 ms)
1.
Neck pain is a significant health problem due to its high incidence rates and economic costs. The use of touch screen mobile devices is becoming pervasive in modern society and may further contribute to this already prevalent health problem. However, our current understanding of cervical spine biomechanics during the operation of touch screen mobile devices is very limited. This study evaluated neck extensor muscle activities and the kinematics of the cervical spine during the operation of a touch screen tablet and a smart phone. Three variables, DEVICE, LOCATION and TASK, were treated as the independent variables. NASA TLX revealed that "Gaming" was the least difficult task and "Typing" was the most difficult task. Participants maintained significantly deeper neck flexion when operating a smart phone (44.7°), with the mobile devices set on a table (46.4°), and while performing a "Typing" task (45.6°). Lower levels of neck muscle activity were observed while performing a "Reading" task and while holding the mobile devices in the hand. Lower levels of neck muscle activity were also observed when using a smart phone vs. a tablet; however, this difference was not statistically significant.

Relevance to industry: The current study demonstrated that users maintain deep neck flexion when using touch screen mobile devices. In recent years, mobile smart devices have become increasingly popular in various occupational environments. The findings of this study may be useful in implementing human-centered task designs to reduce neck injury risks among mobile device users.

2.
Because the 3D interaction techniques of immersive environments are poorly suited to operating 2D interfaces, flow-field data management tasks that rely on 2D list interfaces become complicated and inefficient. To organize and manage flow-field data efficiently in an immersive virtual environment and to enhance the user's understanding of the spatial information of the flow field, a multi-view combined-interaction method for managing flow-field data blocks in immersive flow visualization is proposed. The method constructs a small 3D view that provides an overview of the scene and completes the management operations on multi-block flow-field data through several combined multi-view interaction modes, such as "main-view interaction with small-view assistance" and "small-view interaction with main-view feedback". Finally, a gesture-based immersive flow visualization system was built, several interaction tasks were defined, and the multi-view method was compared with a traditional interaction method in terms of learning time, task completion time, and user feedback. The experimental results show that, compared with the traditional interaction method, the multi-view method significantly improves the efficiency of data management tasks.

3.
This study proposes a new approach to developing a user behavior model that explains how a user finds the optimal way to use a product. This is achieved by considering user concerns, task significances, affordances, and emotional responses as the interaction components, and by exploring behavior sequences toward a goal when a product is used for the first time. Tasks in the same group at each level of the user concern structure are therefore in a competing relationship for progressing to a higher-level task. The task tree, with its significances and affordance probabilities, can then be analyzed. The order of a user's exploratory behavior sequences can be determined by comparing the expected significances, which are obtained from a modified subjective expected utility theory. A user's emotional responses for the tasks that make up a behavior sequence can be calculated with a modified decision affect theory. Here, the emotional response refers to a user's internal reaction to the degree to which a product's affordance features meet his or her mental model in use. The average emotional response for a behavior sequence can serve as the user's decision factor for the optimal use method when using a product with a goal. The design problems of a product can also be checked from the users' point of view, and the emotional losses and changes caused by usage failures can be discussed. For illustrative purposes, the proposed model is applied to a numerical example with some assumptions.
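As a loosely hedged illustration only (the abstract does not spell out the paper's "modified" formulas, so the notation below shows only the standard forms they start from, with illustrative symbols): a behavior sequence's expected significance can be written as an aggregation of each task's affordance probability and significance, and standard decision affect theory adds a disappointment/elation term comparing the obtained outcome with the foregone one.

$$\mathrm{ES}(b) = \sum_{i \in b} p_i \, s_i, \qquad R_A = J\big[\, u_A + d\,(u_A - u_B)\,(1 - p_A) \,\big]$$

Here p_i and s_i are a task's affordance probability and significance, u_A and u_B are the utilities of the obtained and foregone outcomes, p_A is the subjective probability of A, and J and d are the judgment and disappointment functions of standard decision affect theory; the paper's modifications to both formulas are not reproduced here.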

4.
Amid the increasing popularity of smartphones fitted with touch screens, many studies have been conducted on touch screen input methods. In contrast to standardized stylus pens, the size and anthropometric features of the fingers can differ vastly from person to person, and these features significantly impact the touch screen's usability for an individual. In this paper, the relationship between the anthropometric dimensions of the thumb (i.e., length and breadth) and touch task performance was analyzed. The results show that subjects with relatively longer fingers required more time on average to complete the task. However, no correlation was evident between thumb length and the number of errors, or between thumb breadth and either task completion time or error frequency. The results of this study are expected to be useful in the development of mobile interfaces that consider the specific features of the user's thumb.

5.
The potential benefits of thumb-based touch interaction have not been fully exploited because of its usability problems and performance deterioration. Despite these well-known problems, mobile phone users often prefer the thumb-based input method in their daily context of use. Without understanding input performance under realistic variability, design solutions may not address the problems adequately. This research evaluates the performance of one-handed thumb-based input against cradled finger-based input for a large number of users and varying task conditions. By investigating performance under a range of user and task variability, common patterns can be identified to help infer realistic performance in context of use. For this experiment, 259 participants were recruited, balanced on gender and age. They performed a user test of moving an icon on a mobile touch screen. Overall, one-handed thumb input showed a 30% reduction in throughput compared to cradled finger-based input, with significant reductions in speed and accuracy; the reduced throughput is attributed to inaccuracy rather than speed. In addition, the partial effects of touch position, dragging direction, and target size were investigated and quantified. In conclusion, throughput remained constant only for finger-based input under a limited set of task conditions once realistic variability was introduced, and the high variance of throughput for thumb-based input led to poor conformity to Fitts's law. The findings have implications for designing thumb-based touch interfaces that offset the performance reduction and for characterizing performance measures for the thumb-based input method.
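For reference, the throughput and Fitts's law quantities the abstract refers to are conventionally defined as follows (the standard ISO 9241-9-style formulation; whether the paper uses exactly this variant, e.g. the effective-width correction, is not stated in the abstract):

$$ID = \log_2\!\left(\frac{D}{W} + 1\right), \qquad MT = a + b \cdot ID, \qquad TP = \frac{ID_e}{MT}$$

where D is the movement distance, W the target width, MT the movement time, and ID_e the effective index of difficulty computed with the effective target width W_e = 4.133\,\sigma of the observed endpoint scatter.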

6.
As the full-touch screen is implemented in more smart phones, the controllability of touch icons needs to be considered. Previous research focused on recommendations for absolute key size; however, the area actually touched on a touch interface is not precisely equal to the icon size. This study aims to determine a suitable touchable area to improve touch accuracy. In addition, the effects of layout (3 × 4, 4 × 5, 5 × 6, and 6 × 8) and icon ratio (0.5, 0.7, and 0.9) were investigated. To achieve these goals, 40 participants performed a set of serial tasks on a smart phone. Results revealed that layout and icon ratio had statistically significant effects on the user responses: input offset, hit rate, task completion time, and preference. The 3 × 4 and 4 × 5 layouts showed better performance, and the icon ratio of 0.9 was preferred. Furthermore, the hit rate (proportion of correct input) of a touchable area was estimated through the bivariate normal distribution of the input offset. The hit rate varies with the size of the touchable area, the rectangle that yields a specific hit rate, and a derivation procedure for the touchable area was proposed to guarantee a desired hit rate. Meanwhile, touch locations in the central region followed a vertical touch pattern and showed better performance, while users felt more difficulty when approaching the edge of the frame. The results of this study could be used in the design of touch interfaces for mobile devices.
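A minimal sketch of the hit-rate idea described above: fit a bivariate normal to observed input offsets and integrate it over a candidate rectangular touchable area. The function and variable names, the rectangle bounds, and the simulated data are illustrative assumptions, not taken from the paper; the paper's derivation procedure for the touchable area would essentially invert this step by searching rectangle sizes until a target hit rate is reached.

```python
# Hedged sketch: estimate the hit rate of a rectangular touchable area from
# observed input offsets, assuming the offsets follow a bivariate normal
# distribution (as in the abstract). All names and bounds are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

def hit_rate(offsets_xy, half_width, half_height):
    """Probability that a touch lands inside a (2*half_width x 2*half_height)
    rectangle centred on the icon, under a fitted bivariate normal."""
    mean = offsets_xy.mean(axis=0)
    cov = np.cov(offsets_xy, rowvar=False)
    dist = multivariate_normal(mean=mean, cov=cov)
    x_lo, x_hi = -half_width, half_width
    y_lo, y_hi = -half_height, half_height
    # Rectangle probability by inclusion-exclusion over the corner CDFs.
    return (dist.cdf([x_hi, y_hi]) - dist.cdf([x_lo, y_hi])
            - dist.cdf([x_hi, y_lo]) + dist.cdf([x_lo, y_lo]))

# Example with simulated offsets (mm) showing a slight downward touch bias.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal([0.3, -1.1], [[2.0, 0.4], [0.4, 3.0]], size=500)
print(f"estimated hit rate: {hit_rate(samples, 4.0, 4.0):.3f}")
```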

7.
An intelligent adaptable system, aware of a user's experienced cognitive load, may help improve performance in complex, time-critical situations by dynamically deploying more appropriate output strategies to reduce cognitive load. However, measuring a user's cognitive load robustly and in real time is not a trivial task. Many research studies have attempted to assess users' cognitive load using different measurements, but these are often unsuitable for deployment in real-life applications due to their high intrusiveness. Relatively novel linguistic behavioral features are proposed as potential indices of a user's cognitive load. These features may be collected implicitly and non-intrusively, supporting real-time assessment of users' cognitive load and thereby allowing adaptive usability evaluation and interaction. Results from a laboratory experiment show significantly different linguistic patterns under different task complexities and cognitive load levels. Implications of the research for adaptive interaction are also discussed, that is, how the cognitive load measurement-based approach could be used for user interface evaluation and interaction design improvement.
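A loosely hedged sketch of what "implicitly collectable linguistic features" might look like in practice; the specific feature set below (speech rate, filler ratio, sentence and word length) is an illustrative assumption, since the abstract does not list the paper's actual features.

```python
# Hedged sketch: simple linguistic features of the kind the abstract proposes
# as cognitive-load indices, computed from a transcript without any sensors.
# The feature set is illustrative, not the paper's actual feature set.
import re

FILLERS = {"um", "uh", "er", "hmm", "like"}

def linguistic_features(transcript: str, duration_s: float) -> dict:
    words = re.findall(r"[A-Za-z']+", transcript.lower())
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    n_words = max(len(words), 1)
    return {
        "speech_rate_wpm": 60.0 * n_words / duration_s,
        "filler_ratio": sum(w in FILLERS for w in words) / n_words,
        "mean_sentence_len": n_words / max(len(sentences), 1),
        "mean_word_len": sum(map(len, words)) / n_words,
    }

print(linguistic_features("Um, I think we, uh, need to reroute the convoy now.", 6.0))
```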

8.
The high mobility of smart watches can easily impair interaction performance, and many applications are squeezed into an extremely tiny screen, which causes disorientation. Therefore, this study examines the extent of performance impairment caused by user movements and proposes navigation aids to alleviate the impairment. An experiment was conducted with 28 college students to investigate the influence of user movements and navigation aids on users' performance and subjective feedback. The results indicate that the performance of using smart watches while walking is comparable to that while sitting. However, using smart watches while running reduces the success rate of operations, perceived ease of use, perceived usefulness, and flow experience, and it increases subjective cognitive workload. To improve the user experience, the effectiveness of providing navigation aids for smart watches is confirmed: using static navigation aids while sitting and walking, and animated navigation aids while moving, can significantly improve users' perceived ease of use and perceived usefulness and decrease cognitive workload. Based on these results, guidelines are proposed for tailoring the interface design of smart watches to user movements through navigation aids.

9.
This paper describes the concepts, design, implementation, and performance evaluation of a 3D-based user interface for accessing IoT-based Smart Environments (IoT-SE). The generic interaction model of the described work addresses major challenges of human-IoT-SE interaction, such as the cognitive overload associated with manual device selection in complex IoT-SE, loss of user control, a missing system image, and over-automation. To address these challenges, we propose a 3D-based mobile interface for mixed-initiative interaction in IoT-SE. The 3D visualization and 3D UI, the central feature of the system, create a logical link between physical devices and their virtual representation on the end user's mobile devices. The user can thus easily identify a device within the environment based on its position, orientation, and form, and access the identified device through the 3D interface for direct manipulation within the scene, which overcomes the problem of manual device selection. In addition, the 3D visualization provides a system image for the IoT-SE, which supports users in understanding the ambience and the things going on in it. Furthermore, the mobile interface allows users to control how much, and in what way, the IoT-SE automates the environment: users can stop or postpone system-triggered automatic actions if they do not want them, and they can remove a rule permanently, thereby deleting smart behaviors of their IoT-SE. This helps to overcome the automation challenges. In this paper, we present the design, implementation, and evaluation of the proposed interaction system. We chose smart meeting rooms as the context for prototyping and evaluating our interaction concepts; however, the presented concepts and methods are generic and could be adapted to similar environments such as smart homes. We conducted a subjective usability evaluation (ISO-Norm 9241/110) with 16 users. All in all, the study results indicate that the proposed 3D user interface achieved a high score on the ISO-Norm scale.

10.
The aim of this study is to analyze and comparatively evaluate the usability of touch screen mobile applications through cognitive modeling and end-user usability testing methodologies. The study investigates the accuracy of the estimates produced by a cognitive model for touch screen mobile phone interfaces. A mobile wallet application was chosen as the mobile software, and the CogTool modeling tool was used as the cognitive modeling method. Eight tasks were determined, and user tests were conducted in a usability laboratory with 10 participants. The tasks were compared on the basis of step time and total task completion time. This study reveals that CogTool's estimates approximate actual user performance on touch screen mobile phone application interfaces. However, if there are special cases in the tasks, such as users being very accustomed to the steps or to the decision-making involved in the tasks, the "Think Operation" in CogTool should be modified.

11.
As the world population ages, helping seniors adapt to life with technology is an important issue. Technology should become part of life rather than a source of resistance, yet many technology applications require users to process a great deal of information, which makes the user interface an important bridge in human-computer interaction. The inconveniences caused by ageing give rise to related cognitive and operational issues in product use. This study proposes a user interface design based on natural interaction to increase seniors' intention to use such systems. In the proposed approach, a Kinect sensor is used to capture depth information about seniors' movements, so that the user interface of the system can be operated intuitively by gesture. Within the system framework, morphological processing is first applied to identify hand features from the depth values obtained from the sensor; gestures are then recognized to interpret the users' operating behavior and carry out the interactive action, and collision detection is applied to confirm the effectiveness of each operation. In addition, an interpretive structural model (ISM) is used to decompose and realize each design element of the interactive interface and to propose solutions for the targets and directions of the design problem. The concept of affordance is also applied to the development of the graphical user interface proposed in this study, so that a design supporting intuitive operation and usability can be obtained. Finally, based on the proposed methodology, an intuitive user interface for digital devices was implemented in the Java programming language to verify the feasibility of the user interface for seniors. The proposed method can also be widely used to develop user interfaces for various products.
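A minimal sketch of the collision-detection step mentioned above: an axis-aligned bounding-box test between the tracked hand point and an on-screen element, used to confirm that a recognized gesture actually operates the element. The types, names, and the "press" gesture label are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: AABB collision detection confirming that a recognized gesture
# is effective, i.e. the hand point overlaps the targeted UI element.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def confirm_operation(hand_xy, button: Rect, gesture: str) -> bool:
    """An operation counts as effective only if the recognized gesture is a
    'press' and the tracked hand point collides with the on-screen element."""
    return gesture == "press" and button.contains(*hand_xy)

ok_button = Rect(x=200, y=150, w=120, h=80)
print(confirm_operation((250, 180), ok_button, "press"))   # True
print(confirm_operation((50, 40), ok_button, "press"))     # False
```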

12.
Existing spatiotemporal-aware representation learning frameworks cannot provide a unified solution to the three questions, "When", "Where", and "What", that arise in real-world scenarios with strong spatiotemporal semantics. Moreover, existing approaches to modelling time and space have certain shortcomings and cannot achieve optimal performance in complex real-world scenarios. To address these problems, this paper proposes a unified user representation framework, GTRL (geography and time aware representation learning), which jointly models users' historical behavior trajectories along both the temporal and the spatial dimensions. For temporal modelling, GTRL adopts a functional time encoding together with a continuous-time, context-aware graph attention network to flexibly capture high-order structured temporal information on dynamic user behavior graphs. For spatial modelling, GTRL uses hierarchical geographic encoding and a deep historical trajectory modelling module to efficiently characterize users' location preferences. GTRL is trained with a unified joint optimization scheme over three tasks: interaction prediction, interaction time prediction, and interaction location prediction. Finally, extensive experiments on public and industrial datasets verify GTRL's advantages over academic baseline models and its effectiveness in real business scenarios.
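A loosely hedged sketch of what a functional (continuous) time encoding of the kind GTRL's temporal modelling builds on can look like: an elapsed time is mapped to a vector of sinusoidal features so that attention can condition on how long ago an interaction happened. The fixed log-spaced frequencies below are purely illustrative; GTRL's actual encoder and its learned parameters are not specified in the abstract.

```python
# Hedged sketch: map elapsed times to d-dimensional sinusoidal features,
# a common form of functional time encoding for continuous-time graph models.
import numpy as np

def time_encode(delta_t: np.ndarray, dim: int = 16) -> np.ndarray:
    """Map elapsed times (shape [n]) to [n, dim] sinusoidal features."""
    freqs = 1.0 / (10.0 ** np.linspace(0, 6, dim))   # log-spaced frequencies
    phases = np.outer(delta_t, freqs)
    return np.cos(phases)

elapsed = np.array([30.0, 3_600.0, 86_400.0])   # 30 s, 1 h, 1 day
print(time_encode(elapsed, dim=8).shape)         # (3, 8)
```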

13.
The usability of the user interface is a key aspect of the success of many industrial products. This assumption has led to the introduction of numerous design methodologies aimed at evaluating the user-friendliness of industrial products. Most of these methodologies follow the participatory design approach, involving the user in the design process. Virtual Reality is a valid tool to support Participatory Design because it facilitates collaboration between designers and users.

The present study aims to evaluate the feasibility and efficacy of an innovative Participatory Design approach in which Virtual Reality plays a 'double role': a tool to evaluate the usability of the virtual product interface, and a communication channel that allows users to be directly involved in the design process as co-designers.

To achieve these goals, we conducted three experiments. The purpose of the first experiment is to determine the influence of the virtual interface on the usability evaluation by comparing "user-real product" interaction and "user-virtual product" interaction. Subsequently, we tested the effectiveness of our approach with two experiments involving users (directly or through their participation in a focus group) in the redesign of a product user interface. The experiments were conducted with two types of consumer appliances: a microwave oven and a washing machine.

14.
Interacting with Computers, 2006, 18(5): 1084-1100
This study aims to establish a model-based approach to user interface design that simultaneously considers the system's information hierarchy, users' task procedure knowledge, and the system interfaces. The approach is based on a framework that contains multiple interaction models to express both system elements and users' knowledge. The framework evaluates the system interface through the interaction between the user's knowledge of the interface, the task procedure, and the information structure perceived by the user in the system. The interface is evaluated by its contribution to the users' task performance and system navigation. These three factors were defined as design factors that affect users' task performance. Through a crosscheck process over the models, the relation between information, interface, and task procedure is calculated into a combined difficulty index (CDI) that expresses the difficulty of a system interface that users would experience while using the system. A user test was conducted to validate the CDI. The difficulties of the interface of a mobile healthcare system were predicted with the CDI, and the predictions were compared with the experimental results, where the users' performance showed consistency with the prediction.

15.
This paper presents the design and control of a novel assistive robotic walker that we call "JAIST active robotic walker (JARoW)". JARoW is developed to provide potential users with sufficient ambulatory capability in an efficient, cost-effective way. Specifically, our focus is placed on how to allow easier maneuverability by creating a natural interface between the user and JARoW. For this purpose, we develop a rotating infrared sensor to detect the user's lower limb movement. The implementation details of the JARoW control algorithms based on the sensor measurements are explained, and the effectiveness of the proposed algorithms is verified through experiments. Our results confirmed that JARoW can autonomously adjust its motion direction and velocity according to the user's walking behavior without requiring any additional user effort.
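A loosely hedged sketch of the kind of control loop the abstract describes: the walker derives a velocity and heading command from the detected position of the user's lower limbs relative to a reference point in the walker frame. The proportional-control form, gains, and coordinate conventions are illustrative assumptions; JARoW's actual control algorithm is not given in the abstract.

```python
# Hedged sketch: map the detected leg-center offset (meters, walker frame)
# to a (linear m/s, angular rad/s) command for the walker.
def velocity_command(leg_center_xy, ref_xy=(0.0, 0.35), k_lin=1.2, k_ang=2.0):
    dx = leg_center_xy[0] - ref_xy[0]   # lateral offset -> turn toward the user
    dy = leg_center_xy[1] - ref_xy[1]   # forward offset -> speed up or slow down
    return (k_lin * dy, -k_ang * dx)

print(velocity_command((0.05, 0.50)))   # user stepped forward and slightly right
```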

16.
Does the delivery platform for a health behavior game contribute to its effectiveness? With the growing popularity of interactive video games that combine physical exercise with gameplay, known as "exergames," there has been burgeoning interest in their impact on users' exercise attitudes and behavioral outcomes. This study examines how the level of user interface embodiment, the degree to which the user's body interacts with the game, affects the user's experience, game behavior, and intention for behavior change. We conducted a between-participants experiment in which participants (N = 119) played an exergame under one of three levels of user interface embodiment (low, medium, and high). Our results revealed a significant positive main effect of user interface embodiment on user experience (i.e., the sense of being in the game, "presence," and enjoyment); level of energy expenditure (change in heart rate); and intention to further engage in exergame-play exercise, but not necessarily to increase exercise in the physical world. A further analysis revealed the mediating roles of user experience in the association between user interface embodiment and intention to repeat exergaming, and a potential link between heart rate change and level of presence in the game. We conclude that the type of interface is a key variable in this health communication environment, affecting user experience, behavior, and some intention for behavior change.

17.
Flexible user interfaces that can be customized to meet the needs of the task at hand are particularly important for telecollaboration. This article presents the design and implementation of a user interface for DISCIPLE, a platform-independent telecollaboration framework. DISCIPLE supports sharing of Java components that are imported into the shared workspace at run-time and can be interconnected into more complex components. As a result, run-time interconnection of various components allows user tailoring of the human-computer interface. A software architecture for customization of both group-level and application-level interfaces is presented, with interface components that are loadable on demand. The architecture integrates the sensory modalities of speech, sight, and touch. Instead of imposing one "right" solution onto users, the framework lets users tailor the user interface that best suits their needs. Finally, laboratory experience with DISCIPLE, tested on a variety of applications, is discussed along with future research directions.

18.
It is a well-known fact that users vary in their preferences and needs. Therefore, it is crucial to provide customisation or personalisation for users in the usage conditions that best match their preferences. Given the current limitations in adopting perceptual processing into user interface personalisation, we introduce the possibility of inferring interface design preferences from the user's eye-movement behaviour. We first captured users' preferences for graphic design elements using an eye tracker, and then related these preferences to regions of interest to build a prediction model for interface customisation. The prediction models built from eye-movement behaviour showed high potential for predicting users' interface design preferences based on the parallel relation between their fixations and saccadic movements. This mechanism provides a novel way of customising user interface design and opens the door for new research in the areas of human-computer interaction and decision-making.
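A loosely hedged sketch of how a preference-prediction model of this kind could be set up: per-region eye-movement features (fixation count, fixation duration, saccade amplitude) feed a simple classifier that predicts whether the user prefers that design element. The feature set, the logistic-regression model, and the synthetic data are illustrative assumptions; the abstract does not specify the paper's actual model.

```python
# Hedged sketch: predict a user's preference for an interface region from
# eye-movement features using a simple classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 200
# Columns: fixation count, mean fixation duration (ms), mean saccade amplitude (deg)
X = np.column_stack([
    rng.poisson(6, n), rng.normal(250, 60, n), rng.normal(4.0, 1.2, n)
])
# Synthetic labels: more/longer fixations on a region -> more likely preferred.
y = (0.4 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 1, n) > 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```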

19.
Computers & Education, 2010, 54(4): 1029-1039
Throughout their lives, people are faced with various learning situations, for example when they learn how to use new software, services, or information systems. However, research in the field of Interactive Learning Environments shows that learners needing assistance do not systematically seek or use help, even when it is available. The aim of the present study is to explore the role of some factors from research on Interactive Learning Environments in another situation: using a new technology not as a means of acquiring knowledge but to carry out a specific task. Firstly, we present the three factors included in this study: (1) the role of the content of assistance, namely operative vs. function-oriented help; (2) the role of the user's prior knowledge; and (3) the role of the trigger of assistance, i.e. help provided at the user's request vs. help provided by the system. In the latter case, it is necessary to detect the user's difficulties; on the basis of research on problem-solving, we list behavioral criteria expressing the user's difficulties. We then present two experiments that use "real" technologies developed by a large company and tested by "real" users. The results showed that (1) even when participants had reached an impasse, most of them never sought assistance, (2) operative assistance that was automatically provided by the system was effective for novice users, and (3) function-oriented help that was automatically provided by the system was effective for expert users. Assistance can support deadlock awareness and can also focus on deadlock solving by guiding the task. Assistance must be adapted to the prior knowledge, progress, and goals of learners to improve learning.

20.
Multiple monitors are commonly used in the workplace nowadays. This study compares user productivity and windows management style (WMS) on single- and dual-monitor workstations for engineering tasks of three complexity levels. Four productivity measures were compared: task time, cursor movement, the number of window switches, and the number of mouse clicks. The results showed that the dual-monitor setting resulted in significantly fewer window switches and mouse clicks, and most users preferred the dual-monitor setting. To understand how users manage multiple windows in completing their tasks, a new WMS categorization with two styles, toggler and resizer, is proposed, and user behavior was categorized into one of these two styles. More users adopted the "toggler" style, but as the task complexity level increased, some "toggler" style users switched to the "resizer" style.

