Similar Literature
20 similar records found (search time: 252 ms)
1.
This article addresses the problem of creating interactive mixed reality applications in which virtual objects interact with images of real-world scenarios. This is relevant for games and for architectural or space-planning applications that interact with visual elements in the images, such as walls, floors and empty spaces. These scenarios are intended to be captured by users with regular cameras or taken from existing photographs. Introducing virtual objects into photographs presents several challenges, such as pose estimation and the creation of a visually correct interaction between virtual objects and the boundaries of the scene. The two main research questions addressed in this article are whether it is feasible to create interactive augmented reality (AR) applications in which virtual objects interact with a real-world scenario using high-level features detected in the image, and whether untrained users are capable and motivated enough to perform the AR initialization steps. The proposed system detects the scene automatically from an image, with additional features obtained from basic annotations made by the user. This operation is kept deliberately simple to accommodate the needs of non-expert users. The system analyzes one or more photos captured by the user and detects high-level features such as vanishing points, the floor and the scene orientation. Using these features, it is possible to create mixed and augmented reality applications in which the user interactively introduces virtual objects that blend with the picture in real time and respond to the physical environment. To validate the solution, several system tests are described and compared using available external image datasets.
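Vanishing-point detection of the kind this abstract relies on can be illustrated with a small sketch: line segments that are parallel in the scene (e.g. floor edges) share a common intersection in the image, recoverable as a least-squares solution in homogeneous coordinates. The segment data below is illustrative, not from the paper.

```python
import numpy as np

def vanishing_point(segments):
    """Estimate the common intersection (vanishing point) of 2D line
    segments via least squares on homogeneous line coordinates."""
    lines = []
    for (x1, y1), (x2, y2) in segments:
        # Homogeneous line through the two endpoints: l = p1 x p2
        l = np.cross([x1, y1, 1.0], [x2, y2, 1.0])
        lines.append(l / np.linalg.norm(l[:2]))  # normalize the direction part
    A = np.array(lines)
    # v minimizes ||A v|| subject to ||v|| = 1 -> smallest right singular vector
    _, _, vt = np.linalg.svd(A)
    v = vt[-1]
    return v[:2] / v[2]  # back to inhomogeneous image coordinates

# Two "floor line" segments whose supporting lines meet at (5, 5)
segs = [((0.0, 0.0), (1.0, 1.0)), ((10.0, 0.0), (9.0, 1.0))]
print(vanishing_point(segs))  # -> approximately [5. 5.]
```

With more than two segments the same SVD solve returns the point that best agrees with all of them, which is what makes it usable on noisy detections.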

2.
The widespread use of smartphones with GPS and orientation sensors opens up new possibilities for location-based annotations in outdoor environments. However, a completely different approach is required indoors. In this study, we introduce IMAF, a novel indoor modeling and annotation framework for mobile phones. The framework produces a 3D room model in situ from five selections made by the user, without prior knowledge of actual geometric distances or any additional apparatus. Using the framework, non-experts can easily capture room dimensions and annotate locations and objects within the room, linking virtual information to the real space represented by an approximated box. To register the 3D room model to the real space, a hybrid method combining visual tracking and device sensors obtains accurate orientation tracking results while still achieving interactive frame rates for real-time applications on a mobile phone. Once the created room model is registered to the real space, user-generated annotations can be attached and viewed in AR and VR modes. Finally, the framework supports object-based space-to-space registration for viewing and creating annotations from views other than the one that generated them. The performance of the proposed framework is demonstrated through the achieved model accuracy, modeling time, stability of visual tracking and annotation satisfaction. In the last section, we present two example applications built on IMAF.

3.
Advanced interaction techniques in virtual environments
Fundamental to much virtual reality work, in addition to high-level 3D graphical and multimedia scenes, is research on advanced methods of interaction. The visitor of such virtual worlds must be able to act and behave intuitively, as in everyday situations, and to receive natural, expected behaviour as feedback from the objects in the environment, so that he/she has the feeling of interacting directly with the application. In this paper we present several techniques to enrich the naturalness of, and enhance user involvement in, the virtual environment. We show how the user is enabled to grab objects without using any specific and elaborate hand gesture, which is more intuitive and closer to the way humans are accustomed to acting. We also introduce a technique that makes it possible for the user to follow objects without any force-feedback interaction device: the user can surround an object or “walk” with the virtual hand on the object's surface to find the best position to grab it.

4.
Wearable augmented reality (AR) smart glasses have been utilized in various applications such as training, maintenance, and collaboration. However, most previous research on wearable AR technology did not effectively support situation-aware task assistance because of AR marker-based static visualization and registration. In this study, a smart and user-centric task assistance method is proposed, which combines deep learning-based object detection and instance segmentation with wearable AR technology to provide more effective visual guidance with less cognitive load. In particular, instance segmentation using Mask R-CNN and markerless AR are combined to overlay the 3D spatial mapping of an actual object onto its surrounding real environment. In addition, 3D spatial information with instance segmentation is used to provide 3D task guidance and navigation, which helps the user more easily identify and understand physical objects while moving around in the physical environment. Furthermore, 2.5D or 3D replicas support 3D annotation and collaboration between different workers without predefined 3D models. Therefore, the user can perform more realistic manufacturing tasks in dynamic environments. To verify the usability and usefulness of the proposed method, we performed quantitative and qualitative analyses in two user studies: 1) matching a virtual object to a real object in a real environment, and 2) performing a realistic task, namely the maintenance and inspection of a 3D printer. We also implemented several viable applications supporting task assistance using the proposed deep learning-based method in wearable AR.

5.

The inspection of prefabricated buildings involves different stages and tasks, such as the collection of measurements, the visual inspection of components and the written annotation of defects. Traditionally, inspectors have documented the process, the kinds of defects and the proposed correction measures on paper, hindering collaboration with other experts (either simultaneously or asynchronously) and the collection of other types of annotations (e.g. images, 3D elements). In this paper, we present an AR tool designed to aid inspectors during this process. The tool has many benefits: it allows a collaborative inspection to be performed simultaneously, the taking of multi-type and geolocated annotations, their monitoring and editing, and in situ augmented visualizations. The quantitative and qualitative user evaluation carried out with our tool in a real environment (including usability and satisfaction evaluations) shows the relevance that such a technology might bring to the field and proves that our tool is usable and fulfils most of the inspectors’ expectations.


6.
A vision-based 3D registration algorithm for augmented reality
To improve the accuracy and efficiency of 3D registration in augmented reality systems, a computer-vision-based 3D registration algorithm is proposed and applied in an AR system using an optical see-through head-mounted display. The algorithm has the following features: (1) a simple and practical framework, generally requiring only four coplanar markers to achieve 3D registration; (2) a large working range, extending even to outdoor AR systems; (3) a linear numerical solution with small error, meeting the high-precision registration requirements of AR systems. Moreover, the image processing required by the algorithm is lightweight: a typical setup comprises one color CCD camera and several markers of different colors. Since the corresponding image positions of the markers are easily obtained and no image pairs need to be computed, the method runs in real time and can serve as the basis for real-time registration in AR systems on common graphics workstations and PCs.

7.
The availability of powerful consumer-level smart devices and off-the-shelf software frameworks has tremendously popularized augmented reality (AR) applications. However, since the built-in cameras typically have a rather limited field of view, it is usually preferable to position AR tools built upon these devices at a distance when large objects need to be tracked for augmentation. This arrangement makes it difficult or even impossible to physically interact with the augmented object. One solution is to adopt a third-person perspective (TPP), in which the smart device shows in real time the object to be interacted with, the AR information and the user herself, all captured by a remote camera. Through mental transformation between the user-centric coordinate space and the coordinate system of the remote camera, the user can directly interact with objects in the real world. To evaluate user performance under this cognitively demanding situation, we developed such an experimental TPP AR system and conducted experiments that required subjects to make markings on a whiteboard according to virtual marks displayed by the AR system. The same markings were also made manually with a ruler. We measured the precision of the markings as well as the time to accomplish the task. Our results show that although the AR approach was on average around half a centimeter less precise than the manual measurement, it was approximately three times as fast as the manual counterpart. Additionally, we found that subjects could quickly adapt to the mental transformation between the two coordinate systems.
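The mental transformation this study probes has a simple geometric core: a point expressed in the remote camera's frame must be re-expressed in the user's (world) frame before acting on it. A minimal sketch, with an illustrative camera pose not taken from the paper:

```python
import numpy as np

def camera_to_world(p_cam, R, t):
    """Map a point from remote-camera coordinates into the world frame
    the user acts in: p_world = R @ p_cam + t."""
    return R @ p_cam + t

# Illustrative pose: camera yawed 90 degrees and placed 2 m along world x.
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([2.0, 0.0, 0.0])

mark_cam = np.array([1.0, 0.0, 0.0])    # virtual mark 1 m along the camera's x-axis
print(camera_to_world(mark_cam, R, t))  # -> [2. 1. 0.]
```

The experiment asks subjects to perform exactly this rotation-plus-translation in their heads, which is why adaptation time is an interesting measure.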

8.
A combined 2D/3D approach is presented that allows for robust tracking of moving people and recognition of their actions. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. Low-level features are often insufficient for detection, segmentation, and tracking of non-rigid moving objects. Therefore, an improved mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. A novel extended Kalman filter formulation is used to estimate the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and an error measure that are exploited in the higher-level stages of action recognition. Conversely, the higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. Heading-guided recognition (HGR) is proposed as an efficient method for adaptive classification of activity. The HGR approach is demonstrated using “motion history images” that are recognized via a mixture-of-Gaussians classifier. The system is tested on recognizing various dynamic human outdoor activities: running, walking, roller blading, and cycling. In addition, experiments with real and synthetic data sets are used to evaluate the stability of the trajectory estimator with respect to noise.
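The recursive trajectory estimation can be sketched with a simplified linear Kalman filter (a constant-velocity stand-in for the paper's EKF formulation); the innovation computed in the update step is the kind of error measure the higher-level action recognition stage can exploit. All noise levels are illustrative:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise (illustrative)
Rm = np.array([[0.25]])                 # measurement noise (illustrative)

x = np.zeros(2)                         # state: [position, velocity]
P = np.eye(2)                           # state covariance

for z in [1.0, 2.0, 3.0, 4.0]:          # roughly linear motion, one axis
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update; a large innovation y signals occlusion or a new action,
    # which is the feedback the higher-level stages use.
    y = z - H @ x
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

print(x)  # estimated [position, velocity] after the four measurements
```

In the paper the same predict/update cycle runs over 3D trajectories up to scale, but the structure of the recursion is the same.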

9.
This paper proposes an augmented reality content authoring system that enables ordinary users without programming skills to easily apply interactive features to virtual objects on a marker via gestures. The purpose of this system is to simplify the use of augmented reality (AR) technology for ordinary users, especially parents and preschool children who are unfamiliar with it. The system provides an immersive AR environment with a head-mounted display and recognizes users’ gestures via an RGB-D camera. Users can freely create the AR content they will use, without any special programming ability, simply by connecting virtual objects stored in a database to the system. Once the marker is recognized via the RGB-D camera worn by the user, he/she can apply various interactive features to the marker-based AR content using simple gestures: virtual objects can be enlarged, shrunk, rotated, and moved with hand gestures. In addition to gesture interaction, the proposed system also allows tangible interaction using markers. The AR content that the user edits is stored in a database and retrieved whenever the markers are recognized. The results of comparative experiments indicate that the proposed system is easier to use and yields higher interaction satisfaction than AR environments such as fixed-monitor setups and touch-based interaction on mobile screens.

10.
Augmented reality (AR) constitutes a very powerful three-dimensional user interface for many “hands-on” application scenarios. To fully exploit the AR paradigm, the computer must not only augment the real world, but also accept feedback from it. In this paper, we present an optical approach for collecting such feedback by analyzing video sequences to track users and the objects they work with. Our system can be set up in any room after quickly placing a few known optical targets in the scene. We present two demonstration scenarios to illustrate the overall concept and potential of our approach and then discuss the research issues involved.

11.
Many civil engineering tasks require access to geospatial data in the field and the referencing of stored information to the real-world situation. Augmented reality (AR), which interactively overlays 3D graphical content directly over a view of the world, can be a useful tool not only to visualize but also to create, edit and update geospatial data representing real-world artifacts. We present research results on a next-generation field information system for companies relying on geospatial data, providing mobile workforces with capabilities for on-site inspection and planning, data capture and as-built surveying. To achieve this aim, we used mobile AR technology for on-site surveying of the geometric and semantic attributes of geospatial 3D models on the user’s handheld device. The interactive 3D visualizations automatically generated from production databases provide immediate visual feedback for many tasks and lead to a round-trip workflow in which planned data are used as a basis for as-built surveying through manipulation of the planned data. Classically, surveying of geospatial objects is a typical scenario performed by utility companies on a daily basis. We demonstrate a mobile AR system that is capable of these operations and present first field trials with expert end users from utility companies. Our initial results show that the workflows of planning and surveying geospatial objects benefit from our AR approach.

12.
Computer numerical control (CNC) simulation systems based on 3D graphics have been well researched and developed for NC tool path verification and optimization. Although widely used in the manufacturing industries, these CNC simulation systems are usually software-centric rather than machine tool-centric. The user has to adjust between the 3D graphic environment and the real machining environment. Augmented reality (AR) is a technology that supplements the real world with virtual information, where virtual information is augmented onto real objects. This paper builds on previous work integrating AR technology with a CNC machining environment using tracking and registration methodologies, with an emphasis on in situ simulation. Specifically configured for a 3-axis CNC machine, a multi-regional computation scheme is proposed to render a cutting simulation between a real cutter and a virtual workpiece, which can be conducted in situ to provide the machinist with a familiar and comprehensive environment. A hybrid tracking method and an NC code-adaptive cutter registration method are proposed and validated with experimental results. The experiments conducted show that this in situ simulation system can enhance the operator’s understanding and inspection of the machining process, as the simulations are performed on real machines. The potential application of the proposed system is in training and machining simulation before performing actual machining operations.

13.
This paper presents a geospatial collision detection technique consisting of two methods: Find Object Distance (FOD) and Find Reflection Angle (FRA). We show how the technique, using a computer vision system, detects a computer-generated virtual object and a real object manipulated by a human user, and how the virtual object can be reflected on a real floor after being struck by a real object. In the technique, the FOD method detects the real and virtual objects, and the FRA method predicts the next moving direction of virtual objects. We demonstrate the two methods by implementing a floor-based Augmented Reality (AR) game, Ting Ting, which is played by bouncing fire-shaped virtual objects projected on a floor using bamboo-shaped real objects. The results reveal that the FOD and FRA methods enable smooth interaction between a real object manipulated by a human user and a virtual object controlled by a computer. The proposed technique is expected to be used in various AR applications as a low-cost interactive collision detection engine, such as in educational materials, interactive content including games, and entertainment equipment. Keywords: augmented reality, collision detection, computer vision, game, human-computer interaction, image processing, interfaces.
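At its core, the FRA step reduces to reflecting the incoming motion direction about the contact surface normal; a minimal sketch (the vectors below are illustrative, not from the game):

```python
import numpy as np

def reflect(direction, normal):
    """Next travel direction after a collision, as in an FRA-style step:
    reflect the incoming direction about the contact surface normal,
    r = d - 2 (d . n) n with n normalized."""
    n = normal / np.linalg.norm(normal)
    return direction - 2.0 * np.dot(direction, n) * n

# A virtual "fireball" moving down-right hits a horizontal bamboo edge.
incoming = np.array([1.0, -1.0])
surface_normal = np.array([0.0, 1.0])
print(reflect(incoming, surface_normal))  # bounces up-right: [1. 1.]
```

FOD would supply the contact point and normal from the vision system; the reflection itself is then a single vector operation per collision, which is what keeps the engine cheap.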

14.
The text-searching paradigm still prevails even when users are looking for image data, for example on the Internet. Searching for images mostly means searching on the basis of annotations that have been made manually. When annotations are left empty, which is usually the case, searches are performed on image file names, which may lead to surprising retrieval results. The graphical search paradigm, searching image data by querying graphically with either an image or a sketch, currently does not seem to be the preferred method, partly because of the complexity of designing the query. In this paper we present our PictureFinder system, which currently supports “full image retrieval” in analogy to full-text retrieval. PictureFinder allows graphical queries for the image the user has in mind, by sketching colored and/or textured regions or by whole images (query by example). By adjusting the search tolerances for each region and image feature (i.e. hue, saturation, lightness, texture pattern and coverage), the user can tune the query either to find images matching the sketch or images that differ from the specified colors and/or textures to a certain degree. To compare colors, we propose a color distance measure that takes into account the fact that different colors spread differently in the color space, and that the position of a region in an image may be important. Furthermore, we show our query-by-example approach. Based on the example image chosen by the user, a graphical query is generated automatically and presented to the user. One major advantage of this approach is the possibility of changing and adjusting a query by example in the same way as a query sketched by the user. By deleting unimportant regions and adjusting the tolerances of the remaining regions, the user may focus on the image details that are important to him.
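A toy version of such a hue-aware color distance can be sketched as follows; the HLS component weights and the circular hue metric are illustrative assumptions, not the exact measure proposed in the paper:

```python
import colorsys

def hue_distance(h1, h2):
    """Circular distance between two hues in [0, 1): red at 0.99 and
    red at 0.01 are close, not far apart."""
    d = abs(h1 - h2)
    return min(d, 1.0 - d)

def color_distance(rgb1, rgb2, w_h=1.0, w_s=0.5, w_l=0.5):
    """Weighted HLS distance; separate weights per component let different
    color properties 'spread' differently, as the abstract motivates.
    The weight values here are illustrative."""
    h1, l1, s1 = colorsys.rgb_to_hls(*rgb1)
    h2, l2, s2 = colorsys.rgb_to_hls(*rgb2)
    return w_h * hue_distance(h1, h2) + w_s * abs(s1 - s2) + w_l * abs(l1 - l2)

# Red vs. a darker red should score closer than red vs. green.
red, dark_red, green = (1.0, 0.0, 0.0), (0.8, 0.0, 0.0), (0.0, 1.0, 0.0)
print(color_distance(red, dark_red) < color_distance(red, green))  # True
```

Per-region search tolerances of the kind PictureFinder exposes would then simply be thresholds applied to this distance for each sketched region.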

15.
We describe an augmented reality (AR) system that allows multiple participants to interact with 2D and 3D data using tangible user interfaces. The system features face-to-face communication, collaborative viewing and manipulation of 3D models, and seamless access to 2D desktop applications within the shared 3D space. All virtual content, including 3D models and 2D desktop windows, is attached to tracked physical objects in order to leverage the efficiencies of natural two-handed manipulation. The presence of 2D desktop space within 3D facilitates data exchange between the two realms, enables control of 3D information by 2D applications, and generally increases productivity by providing access to familiar tools. We present a general concept for a collaborative tangible AR system, including a comprehensive set of interaction techniques, a distributed hardware setup, and a component-based software architecture that can be flexibly configured using XML. We show the validity of our concept with an implementation of an application scenario from the automotive industry.

16.
Augmented reality (AR) technologies are just beginning to be used as interfaces in CAD tools, allowing the user to perceive 3D models over a real environment. The influence of AR on the conceptualization of products whose configuration, shape and dimensions depend mainly on the context remains unexplored. We aimed to show that modelling in AR environments allows the context to be used in real time as an information input, making the iterative design process more efficient. To that end, we developed a tool called AIR-MODELLING, in which the designer creates virtual conceptual products by hand gestures while interacting directly with the real scenario. We conducted a test comparing designers’ performance using AIR-MODELLING and a traditional CAD system, and obtained an average reduction of 44% in modeling time in 76% of the cases. We found that modelling in AR environments, using the hands as the interface, allows the designer to quickly and efficiently conceptualize potential solutions using the spatial restrictions of the context as a real-time information input. Additionally, modelling at natural scale, directly over the real scene, keeps the designer from dwelling on dimensional details and allows him/her to focus on the product itself and its relation to the environment.

17.
We describe a system designed to facilitate efficient communication of information relating to the physical world using augmented reality (AR). We bring together a range of technologies to create a system capable of operating in real time, over wide areas, and for both indoor and outdoor operations. The central concept is to integrate localised mapping and tracking based on real-time visual SLAM with global positioning from both GPS and indoor ultra-wide band (UWB) technology. The former allows accurate and repeatable creation and visualisation of AR annotations within local metric maps, whilst the latter provides a coarse global representation of the topology of the maps. We call this a ‘Topometric System’. The key elements are: robust and efficient vision-based tracking and mapping using a Kalman filter framework; rapid and reliable vision-based relocalisation of users within local maps; user interaction mechanisms for effective annotation insertion; and an integrated framework for managing and fusing mapping and positioning data. We present the results of experiments conducted over a wide area, with indoor and outdoor operations, which demonstrate the successful creation and visualisation of large numbers of AR annotations over a range of different locations.

18.
Toward spontaneous interaction with the Perceptive Workbench
Until now, we have interacted with computers mostly by using wire-based devices. Typically, the wires limit the distance of movement and inhibit freedom of orientation. In addition, most interactions are indirect: the user moves a device as an analog for the action created in the display space. We envision an untethered interface that accepts gestures directly and can accept any objects we choose as interactors. We discuss methods for producing more seamless interaction between the physical and virtual environments through the Perceptive Workbench. We applied the system to an augmented reality game and a terrain navigation system. The Perceptive Workbench can reconstruct 3D virtual representations of previously unseen real-world objects placed on its surface. In addition, the Perceptive Workbench identifies and tracks such objects as they are manipulated on the desk's surface and allows the user to interact with the augmented environment through 2D and 3D gestures.

19.
We have previously developed a mixed reality (MR) painting system with which a user could take a physical object in the real world and apply virtual paint to it. However, this system could not provide the sensation of painting on virtual objects in MR space. Therefore, we subsequently proposed and developed mechanisms that simulated the effect of touch and movement when a brush device was used to paint on a virtual canvas. In this paper, we use visual and haptic feedback to provide the sensation of painting on virtual three-dimensional objects using a new brush device called the MAI Painting Brush++. We evaluate and confirm its effectiveness through several user studies.

20.
In Civil Infrastructure System (CIS) applications, the requirement of blending synthetic and physical objects distinguishes Augmented Reality (AR) from other visualization technologies in three aspects: (1) it reinforces the connections between people and objects, and promotes engineers’ appreciation of their working context; (2) it allows engineers to perform field tasks with awareness of both the physical and synthetic environment; and (3) it offsets the significant cost of 3D model engineering by including the real-world background. This paper reviews critical problems in AR and investigates technical approaches to address the fundamental challenges that prevent the technology from being usefully deployed in CIS applications, such as the alignment of virtual objects with the real environment continuously across time and space; the blending of virtual entities with their real background faithfully to create a sustained illusion of co-existence; and the integration of these methods into a scalable and extensible AR computing framework that is openly accessible to the teaching and research community. The research findings have been evaluated in several challenging CIS applications where the potential for a significant economic and social impact is high. Examples of validation test beds implemented include an AR visual excavator-utility collision avoidance system that enables workers to “see” buried utilities hidden under the ground surface, thus helping prevent accidental utility strikes; an AR post-disaster reconnaissance framework that enables building inspectors to rapidly evaluate and quantify structural damage sustained by buildings in seismic events such as earthquakes or blasts; and a tabletop collaborative AR visualization framework that allows multiple users to observe and interact with visual simulations of engineering processes.
