Similar documents
Found 20 similar documents.
1.
The cumbersome nature of wired interfaces often limits the range of application of virtual environments. In this paper, we discuss the design and implementation of a novel system, called ALIVE, which allows unencumbered full-body interaction between a human participant and a rich graphical world inhabited by autonomous agents. Based on results obtained with thousands of users, the paper argues that this kind of system can provide more complex and very different experiences than traditional virtual reality systems. The ALIVE system significantly broadens the range of potential applications of virtual reality systems; in particular, the paper discusses novel applications in the areas of training and teaching, entertainment, and digital assistants or interface agents. We give an overview of the methods used in the implementation of the existing ALIVE systems.

2.
Published online: 15 March 2002

4.
Research on segmentation of stereoscopic video objects and their 3D reconstruction (cited by: 1)
Gao Tao, Application Research of Computers (《计算机应用研究》), 2011, 28(3): 1162-1164
To analyze stereoscopic video objects more effectively, this paper proposes a stereoscopic video object segmentation algorithm based on the discrete redundant wavelet transform. First, feature points are extracted with the discrete redundant wavelet transform and combined with a DT-mesh-based disparity estimation method to obtain a reliable disparity field; the disparity information is then used to segment static objects in the stereoscopic video. Moving objects in the sequence are segmented by using the discrete redundant wavelet transform to extract motion regions. Experimental results show that the algorithm segments overlapping multiple video objects well, can segment static and moving objects simultaneously, and achieves good accuracy and robustness. The segmented stereoscopic video objects are then reconstructed in 3D with the help of depth information, yielding good 3D results.

6.
In this paper, we address the analysis of 3D shape and shape change in non-rigid biological objects imaged via a stereo light microscope. We propose an integrated approach for the reconstruction of 3D structure and the motion analysis for images in which only a few informative features are available. The key components of this framework are: 1) image registration using a correlation-based approach, 2) region-of-interest extraction using motion-based segmentation, and 3) stereo and motion analysis using a cooperative spatial and temporal matching process. We describe these three stages of processing and illustrate the efficacy of the proposed approach using real images of a live frog's ventricle. The reconstructed dynamic 3D structure of the ventricle is demonstrated in our experimental results, and it agrees qualitatively with the observed images of the ventricle.
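The first stage above, correlation-based image registration, can be illustrated with a small sketch. This is a generic normalized cross-correlation search, not the authors' implementation; all function and variable names are my own assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def register(img, templ):
    """Exhaustive correlation-based registration: slide `templ`
    over `img` and return the (row, col) offset maximizing NCC."""
    H, W = img.shape
    h, w = templ.shape
    best, best_off = -2.0, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            score = ncc(img[i:i + h, j:j + w], templ)
            if score > best:
                best, best_off = score, (i, j)
    return best_off
```

In a pipeline like the one described, such an offset would align consecutive microscope frames before the motion-based segmentation stage.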

8.
Geometric fusion for a hand-held 3D sensor (cited by: 2)
This article presents a geometric fusion algorithm developed for the reconstruction of 3D surface models from hand-held sensor data. Hand-held systems allow full 3D movement of the sensor to capture the shape of complex objects. Techniques previously developed for reconstruction from conventional 2.5D range image data cannot be applied to hand-held sensor data. A geometric fusion algorithm is introduced to integrate the measured 3D points from a hand-held sensor into a single continuous surface. The new geometric fusion algorithm is based on the normal-volume representation of a triangle, which enables incremental transformation of an arbitrary mesh into an implicit volumetric field function. This system is demonstrated for reconstruction of surface models from both hand-held sensor data and conventional 2.5D range images. Received: 30 August 1999 / Accepted: 21 January 2000

9.
Despite tremendous progress in 3D modelling technology, most sites in traditional industries do not have a computer model of their facilities at their disposal. In these industries, 2D technical drawings are typically the most commonly used documents. In many cases, a database of fully calibrated and oriented photogrammetric images of parts of the plant is also available. These images are often used for metric measurement and 3D as-built modelling. For planning revamps and maintenance, it is necessary to use industrial drawings as well as images and 3D models represented in a common “world” coordinate system. This paper proposes a method for full integration of technical drawings, calibrated images and as-built 3D models. A new algorithm is developed in order to use only a few correspondences between points on a technical drawing and multiple images to estimate a metric planar transformation between the drawing and the world coordinate system. The paper describes the mathematical relationship between this transformation and the set of homographies needed for merging the technical drawing with all the calibrated images. The method is implemented and fully integrated into industrial software that we developed for 3D as-built reconstruction. We present examples of a real application, in which the method is successfully applied to create an augmented reality representation of a waste water plant. Accepted: 13 August 2001
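The merging step rests on plane-to-image homographies. As a hedged illustration (the paper's own algorithm exploits calibrated cameras and fewer correspondences), here is the standard direct linear transform (DLT) for estimating a homography from point correspondences; all names are my own.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: estimate the 3x3 homography H with
    dst ~ H @ [x, y, 1] from at least 4 point correspondences.
    Each correspondence contributes two homogeneous linear equations;
    the solution is the null vector of the stacked system."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The singular vector for the smallest singular value solves Ah = 0.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With noise-free correspondences the estimate is exact; in practice one would normalize coordinates and use many correspondences in a least-squares sense.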

11.
A model-based approach to reconstruction of 3D human arm motion from a monocular image sequence taken under orthographic projection is presented. The reconstruction is divided into two stages. First, a 2D shape model is used to track the arm silhouettes, and second-order curves are used to model the arm based on an iteratively reweighted least-squares method. As a result, 2D stick figures are extracted. In the second stage, the stick figures are backprojected into the scene. 3D postures are reconstructed using the constraints of a 3D kinematic model of the human arm. The motion of the arm is then derived as a transition between the arm postures. Applications of these results are foreseen in the analysis of human motion patterns. Received: 26 January 1996 / Accepted: 17 July 1997
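The curve-fitting stage above uses iteratively reweighted least squares (IRLS). A minimal sketch fitting a second-order (quadratic) curve with Huber weights follows; the robust scale estimate and all names are my own assumptions, not details from the paper.

```python
import numpy as np

def irls_quadratic(x, y, iters=30, c=1.345):
    """Fit y ~ a*x^2 + b*x + d by iteratively reweighted least
    squares with Huber weights, robust to outlying points."""
    A = np.stack([x ** 2, x, np.ones_like(x)], axis=1)
    w = np.ones_like(y)
    for _ in range(iters):
        Aw = A * w[:, None]
        # Weighted normal equations: (A^T W A) coef = A^T W y
        coef = np.linalg.solve(A.T @ Aw, Aw.T @ y)
        r = y - A @ coef
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(r) / (c * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)       # Huber weights
    return coef
```

Each pass refits with weights that shrink the influence of large residuals, so a few spurious silhouette points barely perturb the curve.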

12.
In this paper, we show how to calibrate a camera and to recover the geometry and the photometry (textures) of objects from a single image. The aim of this work is to enable walkthroughs and augmented reality in a 3D model reconstructed from a single image. The calibration step does not need any calibration target and makes only four assumptions: (1) the single image contains at least two vanishing points; (2) the length (in 3D space) of one line segment in the image is known (for determining the translation vector); (3) the principal point is the center of the image; and (4) the aspect ratio is fixed by the user. Each vanishing point is determined from a set of parallel lines. These vanishing points help determine a 3D world coordinate system R_o. After the focal length has been computed, the rotation matrix and the translation vector are evaluated in turn to describe the rigid motion between R_o and the camera coordinate system R_c. Next, the reconstruction step consists of placing, rotating, scaling, and translating a rectangular 3D box so that it best fits the potential objects within the scene as seen through the single image. Each face of the rectangular box is assigned a texture, which may contain holes due to invisible parts of certain objects. We show how the textures are extracted and how these holes are located and filled. Our method has been applied to various real images (pictures scanned from books, photographs) and synthetic images.
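The vanishing-point computation above has a compact form in homogeneous coordinates: the line through two points and the intersection of two lines are both cross products, so two image lines that are projections of parallel 3D lines intersect at the vanishing point. A minimal two-line sketch (my own; a real implementation would intersect many lines robustly):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(l1, l2):
    """Intersection of two homogeneous lines, dehomogenized."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]
```

For example, the images of two parallel 3D lines passing through (0, 0)-(2, 1) and (0, 10)-(50, 30) meet at the vanishing point (100, 50).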

13.
Conventional tracking methods encounter difficulties as the number of objects, clutter, and sensors increase, because of the requirement for data association. Statistical tracking, based on the concept of network tomography, is an alternative that avoids data association. It estimates the number of trips made from one region to another in a scene based on interregion boundary traffic counts accumulated over time. It is not necessary to track an object through a scene to determine when an object crosses a boundary. This paper describes statistical tracking and presents an evaluation based on the estimation of pedestrian and vehicular traffic intensities at an intersection over a period of 1 month. We compare the results with those from a multiple-hypothesis tracker and manually counted ground-truth estimates. Received: 30 August 2001 / Accepted: 28 May 2002 Correspondence to: J.E. Boyd

14.
We present a new approach to the tracking of very non-rigid patterns of motion, such as water flowing down a stream. The algorithm is based on a “disturbance map”, which is obtained by linearly subtracting the temporal average of the previous frames from the new frame. Every local motion creates a disturbance having the form of a wave, with a “head” at the present position of the motion and a historical “tail” that indicates the previous locations of that motion. These disturbances serve as loci of attraction for “tracking particles” that are scattered throughout the image. The algorithm is very fast and can be performed in real time. We provide excellent tracking results on various complex sequences, using both stabilized and moving cameras, showing a busy ant column, waterfalls, rapids and flowing streams, shoppers in a mall, and cars in a traffic intersection. Received: 24 June 1997 / Accepted: 30 July 1998
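The disturbance map itself is simple to sketch: subtract a temporal average of the previous frames from each new frame. The exponential running-average update below is an assumption of mine, as is every name; the abstract only specifies a linear subtraction of the temporal average.

```python
import numpy as np

def disturbance_maps(frames, alpha=0.5):
    """For each new frame, subtract the running temporal average of
    the previous frames. Motion leaves a positive 'head' at its new
    position and a negative historical 'tail' at old positions."""
    avg = frames[0].astype(float)
    maps = []
    for f in frames[1:]:
        maps.append(f.astype(float) - avg)     # the disturbance map
        avg = (1 - alpha) * avg + alpha * f    # update temporal average
    return maps
```

On a sequence where a single bright pixel moves one column per frame, the first map is +1 at the pixel's new position (the head) and -1 at its previous one (the tail).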

15.
This paper presents a novel 3D-model-based computer-vision method for tracking the full six degree-of-freedom (dof) pose (position and orientation) of a rigid body, in real-time. The methodology has been targeted for autonomous navigation tasks, such as interception of or rendezvous with mobile targets. Tracking an object’s complete six-dof pose makes the proposed algorithm useful even when targets are not restricted to planar motion (e.g., flying or rough-terrain navigation). Tracking is achieved via a combination of textured model projection and optical flow. The main contribution of our work is the novel combination of optical flow with z-buffer depth information that is produced during model projection. This allows us to achieve six-dof tracking with a single camera. A localized illumination normalization filter has also been developed in order to improve robustness to shading. Real-time operation is achieved using GPU-based filters and a new data-reduction algorithm based on colour-gradient redundancy, which was developed within the framework of our project. Colour-gradient redundancy is an important property of colour images, namely, that the gradients of all colour channels are generally aligned. Exploiting this property provides a threefold increase in speed. A processing rate of approximately 80 to 100 fps has been obtained in our work when utilizing synthetic and real target-motion sequences. Sub-pixel accuracies were obtained in tests performed under different lighting conditions.
Correspondence to: Beno Benhabib
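The colour-gradient-redundancy reduction can be sketched as keeping, at each pixel, only the gradient of the channel with the largest magnitude, since the three channel gradients are generally aligned. This is my own minimal reading of the idea, not the authors' GPU implementation; all names are assumptions.

```python
import numpy as np

def reduced_gradient(img):
    """Collapse the three per-channel gradients of an HxWx3 image
    into one gradient field by keeping, per pixel, the channel
    whose gradient magnitude is largest (~threefold data reduction)."""
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))  # HxWx3 each
    mag = np.hypot(gx, gy)                                # per-channel magnitude
    best = mag.argmax(axis=2)                             # strongest channel
    ii, jj = np.indices(best.shape)
    return gx[ii, jj, best], gy[ii, jj, best]
```

When the channels are scaled copies of one another (the aligned-gradient case), the result is exactly the gradient of the strongest channel.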

16.
3D object detection is a critical part of environmental perception systems and one of the most fundamental tasks in understanding the 3D visual world, benefiting a series of downstream real-world applications. RGB-D images include object texture and semantic information, as well as depth information describing spatial geometry. Recently, numerous 3D object detection models for RGB-D images have been proposed with excellent performance, but summaries of this area are still absent. To stimulate future research, this paper provides a detailed analysis of current developments in 3D object detection methods for RGB-D images. It covers three major parts: background on 3D object detection, RGB-D data details, and comparative results of state-of-the-art methods on several publicly available datasets, with an emphasis on contributions, design ideas, and limitations, as well as insightful observations and inspiring future research directions.

17.
Published online: 23 April 2002

18.
This paper presents a new multi-pass hierarchical stereo-matching approach for generation of digital terrain models (DTMs) from two overlapping aerial images. Our method consists of multiple passes which compute stereo matches with a coarse-to-fine and sparse-to-dense paradigm. An image pyramid is generated and used in the hierarchical stereo matching. Within each pass, the DTM is refined by using the image pyramid from the coarse to the fine level. At the coarsest level of the first pass, a global stereo-matching technique, the intra-/inter-scanline matching method, is used to generate a good initial DTM for the subsequent stereo matching. Thereafter, hierarchical block matching is applied to image locations where features are detected to refine the DTM incrementally. In the first pass, only the feature points near salient edge segments are considered in block matching. In the second pass, all the feature points are considered, and the DTM obtained from the first pass is used as the initial condition for local searching. For the passes after the second pass, 3D interactive manual editing can be incorporated into the automatic DTM refinement process whenever necessary. Experimental results have shown that our method can successfully provide accurate DTMs from aerial images. The success of our approach and system has also been demonstrated with a flight simulation software package. Received: 4 November 1996 / Accepted: 20 October 1997
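The coarse-to-fine block matching above can be sketched generically: build a pyramid by 2x2 averaging, match blocks at the coarse level, then refine the (doubled) disparity with a small search at the finer level. This is a plain SAD block matcher of my own, not the authors' intra-/inter-scanline system; all names are assumptions.

```python
import numpy as np

def downsample(img):
    """One pyramid level: halve resolution by 2x2 averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    im = img[:h, :w]
    return (im[0::2, 0::2] + im[1::2, 0::2]
            + im[0::2, 1::2] + im[1::2, 1::2]) / 4.0

def match_block(left, right, y, x, d0, radius, bs=3):
    """SAD block match at (y, x): search disparities d0 +- radius
    around a prior estimate d0 (e.g., doubled from the coarser level)."""
    h = bs // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1]
    best, best_d = None, d0
    for d in range(d0 - radius, d0 + radius + 1):
        if x - d - h < 0 or x - d + h + 1 > right.shape[1]:
            continue
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
        sad = np.abs(ref - cand).sum()
        if best is None or sad < best:
            best, best_d = sad, d
    return best_d

# Coarse-to-fine use: d_fine = match_block(L, R, y, x, 2 * d_coarse, small_radius)
```

At the finer level the search radius can stay small because the coarse estimate is already close, which is what makes the hierarchy cheap.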

20.
A common need in machine vision is to compute the 3-D rigid body transformation that aligns two sets of points for which correspondence is known. A comparative analysis is presented here of four popular and efficient algorithms, each of which computes the translational and rotational components of the transform in closed form, as the solution to a least squares formulation of the problem. They differ in terms of the transformation representation used and the mathematical derivation of the solution, using respectively singular value decomposition or eigensystem computation based on the standard representation, and the eigensystem analysis of matrices derived from unit and dual quaternion forms of the transform. This comparison presents both qualitative and quantitative results of several experiments designed to determine (1) the accuracy and robustness of each algorithm in the presence of different levels of noise, (2) the stability with respect to degenerate data sets, and (3) relative computation time of each approach under different conditions. The results indicate that under “ideal” data conditions (no noise) certain distinctions in accuracy and stability can be seen. But for “typical, real-world” noise levels, there is no difference in the robustness of the final solutions (contrary to certain previously published results). Efficiency, in terms of execution time, is found to be highly dependent on the computer system setup.
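Of the four closed-form algorithms compared, the SVD-based one is the easiest to sketch: center both point sets, take the SVD of the cross-covariance matrix, and read off the rotation and translation (the Kabsch/Umeyama construction). A minimal version, with names of my own choosing:

```python
import numpy as np

def rigid_align(P, Q):
    """Closed-form least-squares rigid transform (R, t) such that
    R @ P[i] + t best matches Q[i], via SVD of the cross-covariance."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

The determinant guard is what keeps the solution a proper rotation on degenerate or noisy data, one of the stability concerns the comparison examines.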
