Similar documents
20 similar documents found.
1.
Hybrid tracking for outdoor augmented reality applications (total citations: 3; self-citations: 0; citations by others: 3)
We've developed a fully mobile, wearable AR system that combines a vision-based tracker (primarily software algorithms) that uses natural landmarks, with an inertial tracker (custom hardware and firmware) based on silicon micromachined accelerometers and gyroscopes. Unlike other vision-based and hybrid systems, both components recover the full 6 DOF pose. Fusing the two tracking subsystems gives us the benefits of both technologies, while the sensors' complementary nature helps overcome sensor-specific deficiencies. Our system is tailored to affordable, lightweight, energy-efficient mobile AR applications for urban environments, especially the historic centers of European cities.
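The abstract describes fusing a drift-free but low-rate vision tracker with high-rate but drifting inertial sensors. Purely as an illustration (not the authors' filter), here is a minimal Python sketch of a per-axis complementary filter; the Euler-angle state, the `alpha` weight, and the NaN convention for missing vision fixes are assumptions, and a real 6 DOF system would fuse quaternions and positions as well.

```python
import numpy as np

def complementary_fuse(gyro_rates, vision_angles, dt, alpha=0.98):
    """Fuse gyro-integrated orientation with absolute vision estimates.

    gyro_rates    : (N, 3) angular rates [rad/s] from the inertial tracker
    vision_angles : (N, 3) absolute roll/pitch/yaw [rad] from the vision tracker
                    (NaN rows mark frames where no landmark was matched;
                     the first row is assumed to be a valid fix)
    dt            : sample period [s]
    alpha         : weight of the high-rate inertial path (smooth but drift-prone)
    """
    fused = np.zeros_like(vision_angles)
    est = vision_angles[0].copy()          # initialise from the first vision fix
    for k in range(len(gyro_rates)):
        est = est + gyro_rates[k] * dt     # dead-reckon with the gyros
        if not np.isnan(vision_angles[k]).any():
            # pull the estimate toward the drift-free vision measurement
            est = alpha * est + (1.0 - alpha) * vision_angles[k]
        fused[k] = est
    return fused

# toy usage: constant 0.1 rad/s yaw rate, vision fixes every 10th sample
N, dt = 100, 0.01
gyro = np.tile([0.0, 0.0, 0.1], (N, 1)) + np.random.normal(0, 0.01, (N, 3))
truth = np.cumsum(np.tile([0.0, 0.0, 0.1 * dt], (N, 1)), axis=0)
vision = np.full((N, 3), np.nan)
vision[::10] = truth[::10] + np.random.normal(0, 0.005, (N // 10, 3))
vision[0] = truth[0]
print(complementary_fuse(gyro, vision, dt)[-1])   # final fused roll/pitch/yaw
```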

2.
3.
4.
At the Wearable Computer Lab at the University of South Australia, we have been performing research into outdoor augmented reality (AR) systems for the last seven years. During this time the technology has vastly improved, resulting in more accurate systems that have better quality output. While tracking and registration are important issues in the area of AR, it's also important that we have suitable user interfaces that let people effectively view information and control the computer to get the desired output. Therefore, the Tinmith project explores the problem of interacting with a mobile AR system outdoors and the types of possible applications.

5.
This paper presents a novel software framework called AR-Room for fast prototyping of a variety of augmented reality applications. AR-Room consists of a set of deployable components for core augmented reality technologies, modules for hardware abstraction, and an authoring toolkit for rapid content design. With AR-Room, application developers only need to describe their content scenarios together with a configuration of software components. A content scenario is represented by a set of event-action pairs. The four major procedures in an augmented reality application are an image analyzer, an interaction handler, a rendering engine, and an image synthesizer. According to the provided scenarios, the designated components cooperatively provide real-time analysis and synthesis of input video frames. Several augmented reality applications have been implemented on AR-Room to show how the framework can be used efficiently for fast prototyping.
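As a sketch of the event-action idea described above (not the actual AR-Room API, whose component names and interfaces are not given here), the following Python snippet shows how a content scenario could be wired as a registry of event-action pairs that the image analyzer and interaction handler feed into; all class, event, and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Scenario:
    """A content scenario represented as a set of event-action pairs."""
    rules: Dict[str, List[Callable[[dict], None]]] = field(default_factory=dict)

    def on(self, event: str, action: Callable[[dict], None]) -> None:
        self.rules.setdefault(event, []).append(action)

    def dispatch(self, event: str, payload: dict) -> None:
        for action in self.rules.get(event, []):
            action(payload)

# hypothetical wiring: actions stand in for the rendering engine / image synthesizer
scenario = Scenario()
scenario.on("marker_detected", lambda p: print("render model at", p["pose"]))
scenario.on("user_click",      lambda p: print("handle interaction at", p["xy"]))

# a frame loop would emit events from the image analyzer and interaction handler
scenario.dispatch("marker_detected", {"pose": (0.1, 0.2, 1.5)})
scenario.dispatch("user_click", {"xy": (320, 240)})
```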

6.
Extended overview techniques for outdoor augmented reality (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, we explore techniques that aim to improve site understanding for outdoor augmented reality (AR) applications. While the first-person perspective in AR is a direct way of filtering and zooming in on a portion of the data set, it severely narrows the overview of the situation, particularly over large areas. We present two interactive techniques to overcome this problem: multi-view AR and the variable perspective view. We describe in detail the conceptual, visualization, and interaction aspects of these techniques and their evaluation through a comparative user study. The results we have obtained strengthen the validity of our approach and the applicability of our methods to a large range of application domains.
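One plausible way (assumed here, not taken from the paper) to realize a variable perspective view is to interpolate the rendering viewpoint between the user's first-person camera and an elevated overview camera; the sketch below illustrates this with made-up heights and blend parameters.

```python
import numpy as np

def variable_perspective_eye(user_pos, heading, t, overhead_height=80.0, ahead=40.0):
    """Interpolate the rendering viewpoint between the user's first-person view
    (t = 0) and an overhead overview of the site (t = 1).

    user_pos : (x, y, z) world position of the user
    heading  : yaw angle [rad] the user is facing
    t        : blend factor in [0, 1]
    """
    user_pos = np.asarray(user_pos, dtype=float)
    forward = np.array([np.cos(heading), np.sin(heading), 0.0])
    eye_fp   = user_pos + np.array([0.0, 0.0, 1.7])              # eye height
    eye_top  = user_pos + np.array([0.0, 0.0, overhead_height])  # bird's-eye
    look_fp  = eye_fp + forward * ahead                          # look ahead
    look_top = user_pos + forward * (ahead * 0.5)                # look down at the site
    eye  = (1.0 - t) * eye_fp + t * eye_top
    look = (1.0 - t) * look_fp + t * look_top
    return eye, look

eye, look = variable_perspective_eye((10.0, 5.0, 0.0), np.deg2rad(30), t=0.5)
print(eye, look)
```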

7.
Orientation tracking for outdoor augmented reality registration (total citations: 5; self-citations: 0; citations by others: 5)
Our work stems from a program focused on developing tracking technologies for wide-area augmented realities in unprepared outdoor environments. Other participants in the Defense Advanced Research Projects Agency (DARPA) funded Geospatial Registration of Information for Dismounted Soldiers (GRIDS) program included the University of North Carolina at Chapel Hill and Raytheon. We describe a hybrid orientation tracking system combining inertial sensors and computer vision. We exploit the complementary nature of these two sensing technologies to compensate for their respective weaknesses. Our multiple-sensor fusion is novel in augmented reality tracking systems, and the results demonstrate its utility.

8.
The use of augmented reality (AR) techniques can revolutionize the way people interact with unfamiliar environments. By tracking the user's position and orientation, complicated spatial information can be registered against the real world. My colleagues and I are researching the problem of developing mobile AR systems to be worn by individual users operating in large, complicated environments such as cities. However, an urban environment is extremely complicated. It is populated by large numbers of buildings, each of which can have numerous facts stored about it. Therefore, it is easy for a user to experience information overload. This problem is illustrated. To minimize problems of information overload, we have begun to develop algorithms for information filtering. These tools automatically restrict the amount of information displayed.
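To make the filtering idea concrete, here is a hedged sketch of a distance- and priority-based filter; the scoring formula, thresholds, and `Annotation` fields are illustrative assumptions, not the authors' algorithm.

```python
from dataclasses import dataclass
from math import hypot
from typing import List

@dataclass
class Annotation:
    name: str
    x: float          # world position [m]
    y: float
    priority: float   # 0 (unimportant) .. 1 (critical)

def filter_annotations(items: List[Annotation], user_xy, max_items=5,
                       max_range=300.0) -> List[Annotation]:
    """Keep only nearby, high-priority annotations to limit display clutter."""
    ux, uy = user_xy
    def score(a: Annotation) -> float:
        dist = hypot(a.x - ux, a.y - uy)
        if dist > max_range:
            return -1.0                      # outside the region of interest
        return a.priority * (1.0 - dist / max_range)
    ranked = sorted((a for a in items if score(a) >= 0), key=score, reverse=True)
    return ranked[:max_items]

buildings = [Annotation("Library", 50, 20, 0.9), Annotation("Cafe", 400, 0, 0.8),
             Annotation("Lab A", 10, 5, 0.4)]
print([a.name for a in filter_annotations(buildings, (0.0, 0.0), max_items=2)])
```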

9.
Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications being developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking, and low-latency rendering techniques. A prototype low-power and low-latency renderer using an off-the-shelf 3D card is discussed.

10.
11.
This paper presents an experimental analysis of mobile phones for augmented reality marker tracking, a core task that any CAR application must include. The results show that the most time-consuming stage is marker detection, followed by image acquisition. Moreover, the rendering stage is decoupled on some devices, depending on the operating system used. This decoupling avoids low refresh rates, facilitating collaborative work. However, the use of multicore devices does not significantly improve the performance provided by CAR applications. Finally, the results show that unless poor network bandwidth makes the network the system bottleneck, the performance of CAR applications based on mobile phones will be limited by the detection stage. These results can be used as the basis for an efficient design of CAR systems and applications based on mobile phones.
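A simple way to reproduce this kind of stage-level analysis is to time each pipeline stage per frame; the harness below is a sketch with stand-in stages (a real measurement would wrap the device's camera capture, marker detector, and renderer).

```python
import time
import statistics

def profile_pipeline(acquire, detect, render, n_frames=100):
    """Measure the mean per-frame cost (ms) of each pipeline stage."""
    stages = {"acquisition": [], "detection": [], "rendering": []}
    for _ in range(n_frames):
        t0 = time.perf_counter(); frame = acquire()
        t1 = time.perf_counter(); markers = detect(frame)
        t2 = time.perf_counter(); render(frame, markers)
        t3 = time.perf_counter()
        stages["acquisition"].append(t1 - t0)
        stages["detection"].append(t2 - t1)
        stages["rendering"].append(t3 - t2)
    return {name: statistics.mean(ts) * 1e3 for name, ts in stages.items()}

# stand-in stages that just burn a fixed amount of time
means = profile_pipeline(lambda: time.sleep(0.002),
                         lambda f: time.sleep(0.008) or [],
                         lambda f, m: time.sleep(0.004),
                         n_frames=20)
print({k: round(v, 1) for k, v in means.items()})  # detection dominates, as in the paper
```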

12.
We describe an experimental mobile augmented reality system (MARS) testbed that employs different user interfaces to allow outdoor and indoor users to access and manage information that is spatially registered with the real world. Outdoor users can experience spatialized multimedia presentations that are presented on a head-tracked, see-through, head-worn display used in conjunction with a hand-held pen-based computer. Indoor users can get an overview of the outdoor scene and communicate with outdoor users through a desktop user interface or a head- and hand-tracked immersive augmented reality user interface.

13.
This paper addresses the challenging issue of markerless tracking for augmented reality. It proposes real-time camera localization in a partially known environment, i.e. one for which a geometric 3D model of a static object in the scene is available. We propose to exploit this geometric model to improve the localization of keyframe-based SLAM by constraining the local bundle adjustment process with this additional information. We demonstrate the advantages of this solution, called constrained SLAM, on both synthetic and real data and present very convincing augmentation of 3D objects in real time. Using this tracker, we also propose an interactive augmented reality system for training applications. This system, based on an optical see-through head-mounted display, allows the user's field of view to be augmented with virtual information accurately co-registered with the real world. To fully exploit the potential of this hands-free device, the system combines the tracker module with a simple vision-based user-interaction module to provide overlaid information in response to user requests.
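A plausible form of the constrained local bundle adjustment objective suggested by the abstract (the exact formulation and the weight λ are assumptions, not quoted from the paper) is:

```latex
\min_{\{R_j, t_j\},\, \{X_i\}}
\sum_{i \in \mathcal{S}} \sum_{j \in \mathcal{V}(i)}
  \big\| x_{ij} - \pi\!\big(K\,[R_j \mid t_j]\,X_i\big) \big\|^2
\;+\;
\lambda \sum_{k \in \mathcal{M}} \sum_{j \in \mathcal{V}(k)}
  \big\| x_{kj} - \pi\!\big(K\,[R_j \mid t_j]\,X_k^{\mathrm{model}}\big) \big\|^2
```

Here the first sum is the usual reprojection error over reconstructed SLAM landmarks in \(\mathcal{S}\), while the second constrains the keyframe poses \((R_j, t_j)\) with observations of points \(X_k^{\mathrm{model}}\) lying on the known 3D object model.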

14.
A common goal of outdoor augmented reality (AR) is the presentation of annotations that are registered to anchor points in the real world. We present an enhanced approach for registering and tracking such anchor points, which is suitable for current generation mobile phones and can also successfully deal with the wide variety of viewing conditions encountered in real life outdoor use. The approach is based on on-the-fly generation of panoramic images by sweeping the camera over the scene. The panoramas are then used for stable orientation tracking, while the user is performing only rotational movements. This basic approach is improved by several new techniques for the re-detection and tracking of anchor points. For the re-detection, specifically after temporal variations, we first compute a panoramic image with extended dynamic range, which can better represent varying illumination conditions. The panorama is then searched for known anchor points, while orientation tracking continues uninterrupted. We then use information from an internal orientation sensor to prime an active search scheme for the anchor points, which improves matching results. Finally, global consistency is enhanced by statistical estimation of a global rotation that minimizes the overall position error of anchor points when transforming them from the source panorama in which they were created to the current view represented by a new panorama. Once the anchor points are re-detected, we track the user's movement using a novel 3-degree-of-freedom orientation tracking approach that combines vision tracking with the absolute orientation from inertial and magnetic sensors. We tested our system using an AR campus guide as an example application and provide detailed results for our approach using an off-the-shelf smartphone. Results show that the re-detection rate is improved by a factor of 2 compared to previous work and reaches almost 90% for a wide variety of test cases while still keeping the ability to run at interactive frame rates.
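As an illustration of how an orientation prior from the inertial sensor can prime an active search in the panorama (a sketch under an assumed cylindrical mapping, panorama dimensions, and error bounds, not the paper's implementation):

```python
import numpy as np

def yaw_pitch_to_panorama_px(yaw, pitch, pano_w=2048, pano_h=512,
                             fov_v=np.deg2rad(90)):
    """Map a viewing direction (yaw, pitch in radians) to pixel coordinates in a
    cylindrical panorama, e.g. to centre the active search window for a known
    anchor point once the orientation sensor gives a rough prior."""
    u = (yaw % (2 * np.pi)) / (2 * np.pi) * pano_w          # full 360 deg horizontally
    v = (0.5 - pitch / fov_v) * pano_h                      # limited vertical FOV
    return u, np.clip(v, 0, pano_h - 1)

def search_window(anchor_yaw, anchor_pitch, sensor_yaw_err=np.deg2rad(10),
                  pano_w=2048, pano_h=512):
    """Rectangle of panorama pixels to search, sized by the expected sensor error."""
    u, v = yaw_pitch_to_panorama_px(anchor_yaw, anchor_pitch, pano_w, pano_h)
    half_w = sensor_yaw_err / (2 * np.pi) * pano_w
    half_h = 0.15 * pano_h
    return (u - half_w, v - half_h, u + half_w, v + half_h)

print(search_window(np.deg2rad(45), np.deg2rad(5)))
```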

15.
A framework for hardware/software codesign (total citations: 1; self-citations: 0; citations by others: 1)
Kumar, S., Aylor, J.H., Johnson, B.W., Wulf, W.A. Computer, 1993, 26(12): 39-45
It is argued that a hardware/software codesign methodology should support the following capabilities: integration of the hardware and software design processes; exploration of hardware/software tradeoffs and evaluation of hardware/software alternatives; and model continuity. A codesign methodology that supports many of these capabilities is outlined. The methodology is iterative in nature and serves to guide codesign exploration with the uninterpreted/interpreted modeling approach. It integrates performance (uninterpreted) models and functional (interpreted) models in a common simulation environment.

16.
The goal of this research is to explore new interaction metaphors for augmented reality on mobile phones, i.e. applications where users look at the live image of the device’s video camera and 3D virtual objects enrich the scene that they see. Common interaction concepts for such applications are often limited to pure 2D pointing and clicking on the device’s touch screen. Such an interaction with virtual objects is not only restrictive but also difficult, for example, due to the small form factor. In this article, we investigate the potential of finger tracking for gesture-based interaction. We present two experiments evaluating canonical operations such as translation, rotation, and scaling of virtual objects with respect to performance (time and accuracy) and engagement (subjective user feedback). Our results indicate a high entertainment value, but low accuracy if objects are manipulated in midair, suggesting great possibilities for leisure applications but limited usage for serious tasks.
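For the canonical translate/rotate/scale operations, a two-finger similarity transform is a common starting point; the sketch below derives it from two tracked fingertip positions in consecutive frames (the midpoint/angle/ratio formulation is an assumption for illustration, not taken from the paper).

```python
import numpy as np

def two_finger_transform(p0, p1, q0, q1):
    """Derive a 2D similarity transform (translate / rotate / scale) from two
    tracked fingertips moving from (p0, p1) to (q0, q1) between frames."""
    p0, p1, q0, q1 = map(lambda a: np.asarray(a, dtype=float), (p0, p1, q0, q1))
    v_before, v_after = p1 - p0, q1 - q0
    scale = np.linalg.norm(v_after) / max(np.linalg.norm(v_before), 1e-9)
    angle = np.arctan2(v_after[1], v_after[0]) - np.arctan2(v_before[1], v_before[0])
    translation = (q0 + q1) / 2 - (p0 + p1) / 2     # motion of the finger midpoint
    return translation, angle, scale

t, a, s = two_finger_transform((100, 100), (200, 100), (110, 110), (110, 310))
print(t, np.degrees(a), s)   # translation ~(-40, 110), rotation 90 deg, scale 2.0
```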

17.
Augmented reality (AR) technology consists of adding computer-generated information (2D/3D) to a real video sequence in such a manner that the real and virtual objects appear to coexist in the same world. To achieve a realistic illusion, the real and virtual objects must be properly aligned with respect to each other, which requires a robust real-time tracking strategy, one of the bottlenecks of AR applications. In this paper, we describe the limitations and advantages of different optical tracking technologies, and we present our customized implementation of both recursive tracking and tracking-by-detection approaches. The second approach requires the implementation of a classifier, and we propose the use of a Random Forest classifier. We evaluated both approaches in the context of an AR application for design review. Some conclusions regarding the performance of each approach are given.
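To illustrate the tracking-by-detection idea with a Random Forest, here is a sketch using scikit-learn on synthetic descriptors; the descriptor dimensionality, forest parameters, and confidence threshold are assumptions, and the paper's own classifier and features may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: for each of 20 model keypoints we have 50 descriptors
# extracted from warped views of the reference object (here just random vectors).
n_keypoints, views, dim = 20, 50, 32
centers = rng.normal(scale=3.0, size=(n_keypoints, dim))
X = rng.normal(size=(n_keypoints * views, dim)) + np.repeat(centers, views, axis=0)
y = np.repeat(np.arange(n_keypoints), views)

clf = RandomForestClassifier(n_estimators=50, max_depth=12, random_state=0)
clf.fit(X, y)

# At run time, each descriptor detected in the live frame is classified; low-
# confidence matches are discarded before pose estimation (e.g. PnP + RANSAC).
query = X[::views] + rng.normal(scale=0.1, size=(n_keypoints, dim))
proba = clf.predict_proba(query)
matches = [(i, int(np.argmax(p))) for i, p in enumerate(proba) if p.max() > 0.6]
print(f"{len(matches)} / {n_keypoints} keypoints confidently re-detected")
```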

18.
Based on a complete analysis of the overall architecture and component modules of an augmented reality system, a software platform for an augmented reality system was developed in the Visual Studio 2003 .NET environment using three software development kits: OpenCV, Coin3D, and ARToolKit. The paper mainly describes the implementation principles and methods of the main functional modules, including construction of the virtual and real environments, 3D registration, marker detection, image processing, and video fusion, and finally presents the results of running the system in practice.
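The platform above was built on ARToolKit under Visual Studio 2003; purely as an analogous sketch (not that system's code), the loop below shows marker detection, 3D registration via pose estimation, and overlay drawing using OpenCV's ArUco module (opencv-contrib, API of OpenCV 4.7 and later is assumed). The camera intrinsics and marker size are placeholders.

```python
import cv2
import numpy as np

# Placeholder intrinsics; a real system would use calibrated values.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist = np.zeros(5)
marker_len = 0.08  # marker side length in metres
obj_pts = marker_len / 2 * np.array(
    [[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]], dtype=np.float32)

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)           # live camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)        # marker detection
    if ids is not None:
        for c in corners:
            # 3D registration: recover the marker pose for the renderer
            _, rvec, tvec = cv2.solvePnP(obj_pts,
                                         c.reshape(-1, 2).astype(np.float32),
                                         K, dist)
            cv2.drawFrameAxes(frame, K, dist, rvec, tvec, marker_len)
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("AR overlay", frame)                        # video fusion stand-in
    if cv2.waitKey(1) == 27:        # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```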

19.
This paper addresses robust and ultrafast pose tracking on mobile devices, such as smartphones and small drones. Existing methods, relying on either vision analysis or inertial sensing, are either too computationally heavy to achieve real-time performance on a mobile platform, or not sufficiently robust to address the unique challenges of mobile scenarios, including rapid camera motions and the long exposure times of mobile cameras. This paper presents a novel hybrid tracking system which uses on-device inertial sensors to greatly accelerate the visual feature tracking process and improve its robustness. In particular, our system adaptively resizes each video frame based on inertial sensor data and applies a highly efficient binary feature matching method to track the object pose in each resized frame with little accuracy degradation. This tracking result is revised periodically by a model-based feature tracking method (Hare et al. 2012) to reduce accumulated errors. Furthermore, an inertial tracking method and a solution for fusing its results with the feature tracking results are employed to further improve robustness and efficiency. We first evaluate our hybrid system using a dataset consisting of 16 video clips with synchronized inertial sensing data and then assess its performance in a mobile augmented reality application. Experimental results demonstrate our method's superior performance to a state-of-the-art feature tracking method (Hare et al. 2012), a direct tracking method (Engel et al. 2014), and the Vuforia SDK (Ibañez and Figueras 2013), and show that it can run at more than 40 Hz on a standard smartphone. We will release the source code with the publication of this paper.
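A minimal sketch of the two ingredients named in the abstract, inertial-driven frame resizing plus binary (ORB/Hamming) feature matching, using OpenCV; the scale schedule and thresholds are assumptions, and this is not the authors' released implementation.

```python
import cv2
import numpy as np

def adaptive_scale(gyro_rate_mag, lo=0.5, hi=3.0):
    """Pick a downscale factor from the gyro magnitude [rad/s]: fast rotation
    means more motion blur and less usable detail, so track on a smaller frame."""
    t = np.clip((gyro_rate_mag - lo) / (hi - lo), 0.0, 1.0)
    return 1.0 - 0.5 * t          # 1.0 (full size) .. 0.5 (half size)

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # binary descriptors

def track(prev_frame, cur_frame, gyro_rate_mag):
    s = adaptive_scale(gyro_rate_mag)
    small_prev = cv2.resize(prev_frame, None, fx=s, fy=s)
    small_cur = cv2.resize(cur_frame, None, fx=s, fy=s)
    kp1, des1 = orb.detectAndCompute(small_prev, None)
    kp2, des2 = orb.detectAndCompute(small_cur, None)
    if des1 is None or des2 is None:
        return []
    matches = matcher.match(des1, des2)
    # rescale matched keypoints back to full resolution for pose estimation
    return [((kp1[m.queryIdx].pt[0] / s, kp1[m.queryIdx].pt[1] / s),
             (kp2[m.trainIdx].pt[0] / s, kp2[m.trainIdx].pt[1] / s)) for m in matches]

# toy usage with a synthetic shifted image
img = np.zeros((480, 640), np.uint8)
cv2.putText(img, "AR", (200, 250), cv2.FONT_HERSHEY_SIMPLEX, 5, 255, 10)
shifted = np.roll(img, 12, axis=1)
print(len(track(img, shifted, gyro_rate_mag=2.0)), "correspondences")
```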

20.
In augmented reality, one of the key tasks in achieving convincing visual consistency between virtual objects and video scenes is to maintain coherent illumination along the whole sequence. As outdoor illumination depends largely on the weather, the lighting conditions may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight via a sparse set of planar feature points extracted from each frame. To address inevitable feature misalignments, a set of constraints is introduced to select the most reliable points. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated using an optimization process. We validate our technique on a set of real-life videos and show that the results obtained with our estimates are visually coherent along the video sequences.
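Assuming a simple shading model at the planar feature points (an illustration, not the paper's exact formulation), the relative sunlight and skylight intensities can be recovered per frame by linear least squares, as in this sketch:

```python
import numpy as np

def estimate_sun_sky(albedo, cos_sun, observed):
    """Least-squares estimate of the relative sunlight / skylight intensities.

    Assumed shading model (illustrative only):
        I_i ~ albedo_i * (s_sun * max(0, n_i . l) + s_sky)
    where n_i is the plane normal of feature point i and l the sun direction.
    """
    cos_sun = np.maximum(cos_sun, 0.0)
    A = np.column_stack([albedo * cos_sun, albedo])   # unknowns: [s_sun, s_sky]
    coeffs, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return coeffs  # (s_sun, s_sky)

# synthetic check: 200 planar feature points rendered with s_sun = 0.8, s_sky = 0.3
rng = np.random.default_rng(1)
albedo = rng.uniform(0.2, 0.9, 200)
cos_sun = rng.uniform(-0.2, 1.0, 200)
observed = albedo * (0.8 * np.maximum(cos_sun, 0) + 0.3) + rng.normal(0, 0.01, 200)
print(estimate_sun_sky(albedo, cos_sun, observed))   # approximately [0.8, 0.3]
```

Temporal coherence, as described in the abstract, would then smooth these per-frame estimates across the sequence.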
