Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
We present an easy interaction technique for accessing location-based contextual data shown on a head-worn wearable computer display. Our technique, called Context Compass, is based on a regular compass metaphor. Each object belonging to the user's current context is visualised on a linear compass shown on the screen. The object directly in front of the user is shown in the middle of the compass and can be activated. Whenever the user turns his or her head, the objects on the screen move accordingly. Therefore, an object can be selected by simply turning one's head towards it. Context Compass consumes a minimal amount of screen space, making it ideal for use with see-through head-worn displays. An initial pilot study, applying a newly developed usability method customised especially for Context Compass, revealed that Context Compass can be learned virtually immediately. Further, the method itself proved successful in evaluating techniques such as Context Compass.
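The selection mechanism lends itself to a compact implementation: object bearings are mapped to x-positions on the compass strip relative to the current head yaw, and the object nearest the centre becomes selectable. The following Python sketch illustrates this mapping; the strip width, field of view, and selection half-angle are illustrative assumptions, not values from the paper.

    STRIP_WIDTH = 640     # compass strip width in pixels (assumed)
    FOV_DEG = 60.0        # angular range shown on the strip (assumed)
    SELECT_DEG = 5.0      # half-angle of the central selection zone (assumed)

    def wrap(angle):
        """Normalize an angle to the range [-180, 180)."""
        return (angle + 180.0) % 360.0 - 180.0

    def strip_position(object_bearing, head_bearing):
        """Pixel x-position of an object on the compass strip,
        or None if it lies outside the visible angular range."""
        offset = wrap(object_bearing - head_bearing)
        if abs(offset) > FOV_DEG / 2:
            return None
        return STRIP_WIDTH / 2 + offset / (FOV_DEG / 2) * (STRIP_WIDTH / 2)

    def selectable(object_bearing, head_bearing):
        """An object is selectable when the user faces it directly."""
        return abs(wrap(object_bearing - head_bearing)) <= SELECT_DEG

    # Turning the head from 0 to 45 degrees brings a north-east object
    # (bearing 45) from off-strip to the centre, where it can be activated.
    for head in (0.0, 30.0, 45.0):
        print(head, strip_position(45.0, head), selectable(45.0, head))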

2.
We have designed, implemented, and evaluated a map application for wearable computer users. Our application, called WalkMap, is targeted at a walking user in an urban environment, offering both navigational aids and contextual information. WalkMap uses augmented reality techniques to display a map of the surrounding area on the user's head-worn display. WalkMap is built through software development, user interface design and evaluation, and existing knowledge of how humans use maps and navigate. The key design driver in our approach is intuitiveness of use. In this paper, we present the design and implementation process of our application, considering human-map interfaces, technical implementation, and human-computer interfaces. We identify some of the key issues in these areas and present how they have been solved. We also present some usability evaluation results.

3.
An investigation of communicative modalities in relation to mobile device interaction while walking is presented. A user evaluation compared three communicative modality conditions: Auditory, Visual, and Mixed (a redundant audio-visual modality). Findings showed that redundant audio-visual modalities are as good as (but no better than) the visual modality, and both are superior to the auditory modality. The findings also showed that walking speeds are unaffected by communicative modality.

Shape drawing tasks were performed on a touch screen using each modality, and a robust, novel error calculation algorithm was developed to assess the drawing error between the user input and the desired shapes. Drawing error was significantly higher in the Auditory condition, but drawing speed was unaffected by the communicative modality.

The evaluation finds that the visual modality should be leveraged as the primary communicative modality for mobile, map-based interfaces. The drawing error algorithm can be applied to any domain that requires determining precise matches to known information when drawing.
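The paper's error calculation algorithm is not reproduced here, but a common baseline for scoring drawing error is the symmetric mean nearest-point distance between the drawn stroke and the target shape, both represented as point sequences. The Python sketch below shows that baseline; it is an assumed stand-in, not the authors' algorithm.

    import math

    def nearest_dist(p, pts):
        """Distance from point p to its nearest neighbour in pts."""
        return min(math.dist(p, q) for q in pts)

    def drawing_error(drawn, target):
        """Symmetric mean nearest-point distance between two point lists."""
        d1 = sum(nearest_dist(p, target) for p in drawn) / len(drawn)
        d2 = sum(nearest_dist(q, drawn) for q in target) / len(target)
        return (d1 + d2) / 2

    # A square traced slightly off its corners scores a small, non-zero error.
    target = [(0, 0), (1, 0), (1, 1), (0, 1)]
    drawn = [(0.1, 0.0), (1.0, 0.1), (0.9, 1.0), (0.0, 0.9)]
    print(round(drawing_error(drawn, target), 3))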

4.
This paper proposes the "Substitute Interface", which utilizes the flat surfaces of objects around us as part of an ad hoc mobile device. The substitute interface is established by a combination of wearable devices: a head-mounted display with a camera and a ring-type microphone. The camera recognizes which object the user intends to employ. When the user picks up and taps the object, such as a notebook, a virtual display is overlaid on the object, and the user can operate the ad hoc mobile device as if the object were part of the device. Display size can be changed easily by selecting a larger object. The user's pointing/selection action is recognized by the combination of the camera and the ring-type microphone. We first investigate usage scenes of tablet devices and create a prototype that can operate as a tablet device. Experiments on the prototype confirm that the proposed interface functions as intended.

5.
Wearable projector and camera (PROCAM) interfaces, which provide a natural, intuitive, and spatial experience, have been studied for many years. However, existing research on hand input for such systems has focused on stable settings such as sitting or standing, which does not fully satisfy interaction requirements in complex real-life situations, especially when people are moving. Moreover, more and more mobile phone users use their phones while walking. As a mobile computing device, the wearable PROCAM system should allow for the fact that mobility can influence usability and user experience. This paper proposes a wearable PROCAM system with which the user can interact through finger gestures, such as the hover gesture and the pinch gesture, on projected surfaces. A lab-based evaluation was organized, which compared the two gestures (pinch and hover) in three situations (sitting, standing, and walking) to find out: (1) How, and to what degree, does mobility influence different gesture inputs? Are there significant differences between gesture inputs in different settings? (2) What causes these differences? (3) What do people think about the configuration of such systems, and to what extent does manual focus affect such interactions? From qualitative and quantitative points of view, the main findings imply that mobility affects gesture interactions to varying degrees. The pinch gesture is less affected than the hover gesture in mobile settings. Both gestures were affected more in the walking state than in the sitting and standing states by all four negative factors (lack of coordination, jittering hand effect, tired forearms, and extra attention paid). Manual focus influenced mobile projection interaction. Based on the findings, implications are discussed for the design of a mobile projection interface with gestures.

6.
Searching for an item in an ordered list is a frequently recurring task when using computers. The search can be carried out in several ways. In this paper, we present a new, efficient technique for finding an alphanumeric item in a sorted list. This technique, called BinScroll, is based on the well-known binary search algorithm. BinScroll can be used with a minimum of four buttons, making it ideal for keyboardless mobile use. It can also be implemented with a minimum of one line of text, making it suitable for devices with limited screen space or text-only displays. Our evaluation showed that after 15 minutes of training, a novice user is able to locate any item from a list of 10,000 movie names in 14 seconds on average, and an expert user with a few hours of learning can find any item in about seven seconds. This makes it one of the most efficient selection techniques for long lists.
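Since BinScroll is built on binary search, the interaction reduces to repeatedly telling the device whether the target lies before or after the currently shown item. The Python sketch below simulates this loop; the four button names are assumptions, but the narrowing logic is standard binary search, which also explains the reported speed: ceil(log2(10000)) = 14 decisions suffice for a 10,000-item list.

    # Four logical buttons are assumed: 'b' (target is before the shown
    # item), 'a' (after), 's' (select the shown item), 'c' (cancel).

    def binscroll(items, answer):
        """Simulate narrowing a sorted list to `answer`; count the steps."""
        lo, hi, steps = 0, len(items) - 1, 0
        while lo <= hi:
            mid = (lo + hi) // 2
            steps += 1
            if items[mid] == answer:       # user presses 's'
                return items[mid], steps
            elif answer < items[mid]:      # user presses 'b'
                hi = mid - 1
            else:                          # user presses 'a'
                lo = mid + 1
        return None, steps                 # user would press 'c'

    # Any of 10,000 sorted names is reachable in at most 14 decisions.
    names = [f"movie_{i:05d}" for i in range(10000)]
    print(binscroll(names, "movie_07342"))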

7.
Aura 3D textures     
This paper presents a new technique, called aura 3D textures, for generating solid textures from input examples. Our method is fully automatic and requires no user interaction. Given an input texture sample, our method first creates its aura matrix representations and then generates a solid texture by sampling the aura matrices of the input sample, constrained in multiple view directions. Once the solid texture is generated, any given object can be textured with it. We evaluate the results of our method through extensive user studies. Based on the evaluation results using human subjects, we conclude that our algorithm can generate faithful results for both stochastic and structural textures, with an average success rate of 76.4 percent. Our experimental results also show that the new method outperforms Wei and Levoy's method and is comparable to that proposed by Jagnow et al. (2004).

8.
He, Yong; Li, Nan; Wang, Chao; Xia, Lin-qing; Yong, Xu; Wu, Xin-yu. Journal of Zhejiang University: Science C (English Edition), 2019, 20(3): 318-329.

Today, exoskeletons are widely applied to provide walking assistance for patients with lower limb motor incapacity. Most existing exoskeletons are under-actuated, resulting in a series of problems during walking, e.g., interference and unnatural gait. In this study, we propose a novel intelligent autonomous lower extremity exoskeleton (Auto-LEE), aiming to improve the user experience of wearable walking aids and extend their application range. Unlike traditional exoskeletons, Auto-LEE has 10 degrees of freedom, and all joints are actuated independently by direct current motors, which allows the robot to maintain balance while aiding walking without extra support. The new exoskeleton is designed and developed with a modular structure concept, and multi-modal human-robot interfaces are considered in the control system. To validate the ability of self-balancing bipedal walking, three general algorithms for generating walking patterns are investigated, and a preliminary experiment is conducted.


9.
Part modelling in a CAD environment requires a bi-manual 3D input interface to fully exploit its potential. In this research we provide extensive user tests on bi-manual modelling using different devices to control the 3D model's rotation. Our results suggest that a simple trackball device is effective when the user's task is mostly limited to rotation control (i.e., when modelling parts in a CAD environment). In our tests, performance was even better than with a specifically designed device. Since the task of rotating a CAD part often requires flipping the controlled object, we introduce a nonlinear transfer function which combines the precision of a zero-order control mode with the ability to recognise fast movements. This new modality yields a significant improvement in user performance and is a candidate for integration in next-generation CAD interfaces.
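A speed-dependent gain is one plausible way to realise such a transfer function: near-unit gain for slow, precise rotations (preserving the zero-order feel) and amplified gain for fast flicks (enabling quick flips). The sketch below uses a sigmoid for the speed-to-gain curve; the constants and the curve shape are illustrative assumptions, not the function from the paper.

    import math

    def rotation_gain(speed, g_min=1.0, g_max=4.0, v0=120.0, k=0.05):
        """Gain as a smooth function of input angular speed (deg/s)."""
        return g_min + (g_max - g_min) / (1.0 + math.exp(-k * (speed - v0)))

    def apply_rotation(delta_deg, dt):
        """Scale an input rotation increment by the speed-dependent gain."""
        speed = abs(delta_deg) / dt
        return delta_deg * rotation_gain(speed)

    # Slow input (~30 deg/s) passes through almost unchanged; a quick
    # flick (~300 deg/s) is amplified enough to flip the part.
    print(apply_rotation(0.5, 1 / 60))   # slow: gain close to 1
    print(apply_rotation(5.0, 1 / 60))   # fast: gain close to 4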

10.
Attention, memory, and wearable interfaces
The paper considers the limits of human attention and how wearable interfaces should be developed to complement, not interfere with, normal human capabilities. Most interfaces on desktop computers do not have this problem; desktop interface designers can assume that the user is concentrating solely on the digital task. However, a major advantage of a wearable is that users can take it anywhere and use it anytime.

11.
Faced with massive multi-dimensional data in science, engineering, and business, users urgently need effective visualization tools to understand such data during knowledge discovery, information cognition, and decision making. Traditional visualization methods based on dimensionality-reducing projections suffer from high computational complexity and cannot convey dimension-distribution information. To address this, we propose RPES, a multi-dimensional data visualization method based on regular 2k-gons. RPES constructs a regular 2k-gon coordinate system as a low-dimensional "reference frame" for the multi-dimensional data space, and reduces dimensionality with an optimization method whose criterion is minimizing the difference between each multi-dimensional object's coordinates in the regular 2k-gon coordinate system and in the original multi-dimensional space; the results are plotted as a point cloud in the low-dimensional visual space. Experiments show that the dimensionality-reduction algorithm of RPES is efficient, easy to implement, and suited to large, high-dimensional datasets; the resulting visualizations are not only easy to understand but also effectively convey dimension-distribution information, helping users uncover implicit knowledge and supporting decision making based on multi-dimensional data.
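A hedged Python sketch of the RPES idea follows: the 2k vertices of a regular polygon serve as low-dimensional anchors for the 2k data dimensions, and each object's 2D position is found by minimizing a least-squares stress between its distances to the anchors and its normalized attribute values. The objective below is illustrative, not the paper's exact formulation.

    import numpy as np
    from scipy.optimize import minimize

    def polygon_anchors(k):
        """Vertices of a regular 2k-gon on the unit circle."""
        ang = 2 * np.pi * np.arange(2 * k) / (2 * k)
        return np.stack([np.cos(ang), np.sin(ang)], axis=1)

    def embed(x, anchors):
        """2D position for one normalized 2k-dim object (values in [0, 1]);
        a small attribute value pulls the point toward that anchor."""
        def stress(p):
            d = np.linalg.norm(anchors - p, axis=1)
            return np.sum((d - x) ** 2)
        return minimize(stress, x0=anchors.T @ (1 - x) / len(x)).x

    k = 2                               # 4 dimensions -> a square of anchors
    anchors = polygon_anchors(k)
    rng = np.random.default_rng(0)
    data = rng.random((5, 2 * k))       # toy normalized dataset
    cloud = np.array([embed(x, anchors) for x in data])
    print(cloud.round(3))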

12.
Conducting cognitive assessment tests throughout normal daily life offers new opportunities to detect changes in cognitive efficiency early. Such tests would allow identification of early symptoms of cognitive impairment, monitoring of the progress of disease processes related to cognitive efficiency, and reduction of the risk of cognitive overload. Reaction time tests are known as simple and sensitive tests for detecting variation in cognitive efficiency. A drawback of existing reaction time tests is that they require the full attention of the test person, which prohibits measuring cognitive efficiency during daily routine tasks. In this contribution we present the design, implementation, and empirical evaluation of two wearable reaction time tests that can be operated throughout everyday life. We designed and implemented wearable watch-like devices which combine the generation of haptic stimuli with the recognition of hand gestures as the subject's response. To evaluate the wearable interface, we conducted a user study with 20 subjects to investigate to what extent we can measure changes in the length and variability of users' reaction times with the wearable interfaces, in comparison to well-accepted, traditional desktop-based tests. Based on the statistical results, we conclude that the presented wearable reaction time tests are suitable for measuring factors that influence the length and variability of reaction times.

13.
In this article, a novel technique for user authentication and verification using gait as an unobtrusive biometric pattern is proposed. The method is based on a two-stage pipeline. First, a general activity recognition classifier is personalized for a specific user using a small sample of her/his walking pattern. As a result, the system becomes much more selective with respect to the new walking pattern. A second stage verifies whether the user is authorized or not. This stage is defined as a one-class classification problem. To solve this problem, a four-layer architecture is built around the geometric concept of the convex hull. This architecture improves robustness to outliers, models non-convex shapes, and takes temporal coherence information into account. Two different scenarios are used for validation, with two different wearable systems. First, a custom high-performance wearable system is built and used in a free environment. A second dataset is acquired from an Android-based commercial device in a 'wild' scenario with rough terrain, adversarial conditions, crowded places, and obstacles. Results on both systems and datasets are very promising, reducing verification error rates by an order of magnitude with respect to state-of-the-art technologies.
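The core geometric test of the verification stage can be sketched compactly: enrolment feature vectors span a convex hull, and a new gait sample is accepted if it falls inside (here with a small tolerance). The paper's four-layer architecture is richer than this; the Python sketch below shows only the basic hull membership check.

    import numpy as np
    from scipy.spatial import ConvexHull

    def fit_hull(enrolment_features):
        """Build the convex hull of the authorized user's feature vectors."""
        return ConvexHull(enrolment_features)

    def accept(hull, sample, tol=0.05):
        """True if `sample` lies inside (or within `tol` of) the hull.
        Hull facets satisfy A @ x + b <= 0 for interior points."""
        A, b = hull.equations[:, :-1], hull.equations[:, -1]
        return bool(np.all(A @ sample + b <= tol))

    rng = np.random.default_rng(1)
    enrolled = rng.normal(0.0, 1.0, size=(60, 3))   # toy 3D gait features
    hull = fit_hull(enrolled)
    print(accept(hull, np.zeros(3)))                # genuine-like: True
    print(accept(hull, np.full(3, 10.0)))           # outlier: False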

14.
Wearable augmented reality (AR) smart glasses have been utilized in various applications such as training, maintenance, and collaboration. However, most previous research on wearable AR technology did not effectively support situation-aware task assistance because of AR marker-based static visualization and registration. In this study, a smart and user-centric task assistance method is proposed, which combines deep learning-based object detection and instance segmentation with wearable AR technology to provide more effective visual guidance with less cognitive load. In particular, instance segmentation using Mask R-CNN and markerless AR are combined to overlay the 3D spatial mapping of an actual object onto its surrounding real environment. In addition, 3D spatial information with instance segmentation is used to provide 3D task guidance and navigation, which helps the user more easily identify and understand physical objects while moving around in the physical environment. Furthermore, 2.5D or 3D replicas support 3D annotation and collaboration between different workers without predefined 3D models. Therefore, the user can perform more realistic manufacturing tasks in dynamic environments. To verify the usability and usefulness of the proposed method, we performed quantitative and qualitative analyses in two user studies: 1) matching a virtual object to a real object in a real environment, and 2) performing a realistic task, namely the maintenance and inspection of a 3D printer. We also implemented several viable applications supporting task assistance using the proposed deep learning-based approach in wearable AR.

15.
Due to the rapid proliferation of both user-generated and broadcast content, interfaces for searching and browsing visual media have become increasingly important. This paper presents a novel, intuitive interactive interface for browsing large-scale image and video collections. It visualises the underlying structure of the dataset through the size and spatial relations of the displayed images. To achieve this, images or video key-frames are initially clustered using an unsupervised graph-based clustering algorithm. By selecting images that are hierarchically laid out on the screen, the user can intuitively navigate through the collection or search for specific content. Extensive experimental results, based on user evaluations of photo search, browsing, and selection as well as interactive video search, demonstrate good usability of the presented system and an improvement over standard methods for interacting with large-scale image and video collections.

16.
A video playing on a television screen emits a characteristic flickering, which serves as an identification feature for the video. This paper presents a method for video recognition by sampling the ambient light sensor of a smartphone or wearable computer. The evaluation shows that, given a set of known videos, a recognition rate of up to 100% is possible by sampling a sequence of 15 to 120 s in length. Our method works even if the device has no direct line of sight to the television screen, since ambient light reflected from walls is sufficient. A major factor influencing recognition is the number of video cuts that change the light emitted by the screen.
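The matching step can be sketched as signature correlation: a brightness sequence sampled from the ambient light sensor is compared against stored per-video light signatures by maximum normalized cross-correlation over all time shifts. The Python sketch below is an assumed simplification of the paper's method.

    import numpy as np

    def ncc_max(sample, signature):
        """Maximum normalized cross-correlation over all alignments."""
        s = (sample - sample.mean()) / (sample.std() + 1e-9)
        g = (signature - signature.mean()) / (signature.std() + 1e-9)
        c = np.correlate(g, s, mode="valid") / len(s)
        return float(c.max())

    def recognize(sample, signatures):
        """Return the best-matching video id and its score."""
        scores = {vid: ncc_max(sample, sig) for vid, sig in signatures.items()}
        return max(scores.items(), key=lambda kv: kv[1])

    rng = np.random.default_rng(2)
    sigs = {f"video_{i}": rng.normal(size=600) for i in range(3)}  # 60 s @ 10 Hz
    probe = sigs["video_1"][100:400] + rng.normal(0, 0.3, 300)     # noisy excerpt
    print(recognize(probe, sigs))   # -> ('video_1', score near 1.0)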

17.
(For Part I, see ibid., p. 84-8, June 1989.) The author continues an exploration of the kinds of legal rights that might be asserted over the non-device aspects of user interfaces and screen displays. He further discusses the difficulties and problems that protection of user interfaces and screen displays might cause. He examines good screen design, human factors analysis, conventional techniques and standards, the rapid rate of change, and keystrokes and interfaces.

18.
Immersive spaces such as 4-sided displays with stereo viewing and high-quality tracking provide a very engaging and realistic virtual experience. However, walking is inherently limited by the restricted physical space, due both to the screens (limited translation) and to the missing back screen (limited rotation). In this paper, we propose three novel locomotion techniques with three concurrent goals: keep the user safe from reaching the translational and rotational boundaries; increase the amount of real walking; and provide a more enjoyable and ecological interaction paradigm than traditional controller-based approaches. We notably introduce the "Virtual Companion", which uses a small bird to guide the user through virtual environments larger than the physical space. We evaluate the three new techniques in a user study with travel-to-target and path-following tasks. The study provides insight into the relative strengths of each new technique for the three aforementioned goals. Specifically, if speed and accuracy are paramount, traditional controller interfaces augmented with our novel warning techniques may be more appropriate; if physical walking is more important, two of our paradigms (extended Magic Barrier Tape and Constrained Wand) should be preferred; finally, fun and ecological criteria favour the Virtual Companion.

19.
In contrast to 2D scatterplots, existing 3D variants have the advantage of showing one additional data dimension, but they suffer from inadequate spatial and shape perception and are therefore not well suited to displaying structures in the underlying data. We improve shape perception by applying a new illumination technique to the point-cloud representation of 3D scatterplots. Points are classified as locally linear, planar, or volumetric structures, according to the eigenvalues of the inverse-distance-weighted covariance matrix at each data element. Based on this classification, different lighting models are applied: codimension-2 illumination, surface illumination, and emissive volumetric illumination. Our technique lends itself to efficient GPU point rendering and can be combined with existing methods such as semi-transparent rendering, halos, and depth- or attribute-based color coding. The user can interactively navigate the dataset and manipulate the classification and other visualization parameters. We demonstrate our visualization technique on examples of multi-dimensional data and of generic point-cloud data.
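The classification step is stated concretely enough to sketch: for each point, build an inverse-distance-weighted covariance of its neighbourhood and compare the sorted eigenvalues. In the Python sketch below, the eigenvalue-ratio thresholds are illustrative assumptions.

    import numpy as np

    def classify_point(p, cloud, eps=1e-9):
        """Label `p` as 'linear', 'planar', or 'volumetric' from the
        eigenvalues of the inverse-distance-weighted covariance."""
        d = np.linalg.norm(cloud - p, axis=1)
        w = 1.0 / (d + eps)
        mu = (w[:, None] * cloud).sum(0) / w.sum()
        diff = cloud - mu
        cov = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(0) / w.sum()
        l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))  # ascending -> l1 largest
        if l2 / l1 < 0.25:
            return "linear"        # one dominant eigenvalue
        if l3 / l2 < 0.25:
            return "planar"        # two dominant eigenvalues
        return "volumetric"        # all three eigenvalues comparable

    # A noisy line segment is classified as a locally linear structure.
    rng = np.random.default_rng(3)
    line = np.c_[np.linspace(0, 1, 200), np.zeros(200), np.zeros(200)]
    print(classify_point(line[100], line + rng.normal(0, 0.005, line.shape)))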

20.
A variety of studies have been conducted to improve methods of selecting a tiny virtual target on the small touch screen interfaces of handheld devices such as mobile phones and PDAs. These studies, however, focused on a specific selection method and did not consider the various layouts resulting from different target sizes and densities on the screen. This study proposes a Two-Mode Target Selection (TMTS) method that automatically detects the target layout and switches to an appropriate mode using the concept of an activation area. The usability of TMTS was compared experimentally to those of other methods. TMTS switched to the appropriate mode successfully for a given target layout and showed the shortest task completion time and the fewest touch inputs. TMTS was also rated by the users as the easiest to use and the most preferred. TMTS can significantly increase the ease, accuracy, and efficiency of target selection, and thus enhance user satisfaction when selecting targets on small touch screen devices.

Relevance to Industry

The results of this study can be used to develop fast and accurate target selection methods in handheld devices with touch screen interfaces, especially when users use their thumbs to activate the desired target.
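The activation-area concept can be sketched as a density test around the touch point: few candidate targets allow direct selection, while many candidates trigger a second, disambiguating mode. In the Python sketch below, the radius, the threshold, and the choice of magnification as the second mode are illustrative assumptions rather than the paper's exact rules.

    import math

    ACTIVATION_RADIUS = 40   # px around the touch point (assumed)
    DENSE_THRESHOLD = 2      # >= this many candidates -> disambiguate (assumed)

    def candidates(touch, targets, radius=ACTIVATION_RADIUS):
        """Targets whose centres fall inside the activation area."""
        return [t for t in targets if math.dist(touch, t["pos"]) <= radius]

    def select(touch, targets):
        """Return ('direct', target) or ('magnify', candidate_list)."""
        hits = candidates(touch, targets)
        if len(hits) < DENSE_THRESHOLD:
            return ("direct", hits[0] if hits else None)
        return ("magnify", hits)   # second mode: let the user pick precisely

    targets = [{"id": i, "pos": (20 * i, 100)} for i in range(10)]
    print(select((100, 100), targets))   # dense layout -> magnify mode
    print(select((400, 100), targets))   # empty region -> direct, no target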
