Similar Documents
A total of 20 similar documents were retrieved.
1.
2.
3.
4.
In active vision systems, multiple agents usually need to process targets of interest in the same scene cooperatively, in order to improve the system's ability to analyze those targets intelligently. Solving the correspondence problem for the targets of interest on the basis of multi-view geometric relations is the foundation of such cooperation. On the one hand, active vision systems generally operate under wide-baseline conditions, which complicates the description of the correspondence problem; on the other hand, an active vision system observes a target from the best viewpoint, so the cameras must adjust their poses in real time, and the resulting changes in the inter-view geometry further increase the difficulty of solving the correspondence problem. This paper uses affine-invariant geometric features to establish the multi-view geometric relations under wide-baseline conditions and, because frequent use of geometric features cannot meet the real-time requirements of an active vision system, proposes a method for rapidly updating the multi-view geometric relations, achieving robust labeling of corresponding targets of interest under multi-view geometric constraints. Experimental results show that the method solves the complex correspondence problem for targets of interest in wide-baseline active vision systems and meets the real-time requirement.
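As a hedged illustration of the first step only, the sketch below establishes a wide-baseline two-view geometric relation from local features with OpenCV; SIFT stands in for the paper's affine-invariant features, and the function, threshold choices, and parameters are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: two-view epipolar geometry from local features,
# with SIFT standing in for affine-invariant features.
import cv2
import numpy as np

def estimate_epipolar_geometry(img_left, img_right, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    # Ratio-test matching to cope with the ambiguity of wide-baseline views.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in raw if len(p) == 2 and p[0].distance < ratio * p[1].distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC-estimated fundamental matrix links the two views geometrically.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    inliers = mask.ravel() == 1
    return F, pts1[inliers], pts2[inliers]
```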

5.
6.
The aim of the work reported here is to build a useful toolset for 3D model-based vision on an SIMD parallel machine, the AMT DAP. Included in the toolset are facilities for model specification, manipulation and rendering using a ray-tracing approach as well as model recognition and validation using a geometrical-matching approach. In particular, an SIMD parallel version of a ray tracer and an SIMD parallel version of a bottom-up geometrical matcher are described. The ray tracer can render constructive solid geometry models and incorporates spatial subdivision of the scene. The matcher uses edge primitives recovered from scenes to match to model edges using local constraints and deals with spurious data using bin assignments. The overall toolset is illustrated by its use in closed-form testing and refinement, where the models, camera geometry and frame-to-frame motion in an image sequence generated by the ray tracer are known, but are checked and validated using geometrical matching, recognition and localisation.

7.
An approach to the recovery of trajectories of objects in a dynamic scene from stereo images is proposed. The approach is based on the use of a point representation of objects, visual odometry, and a set of algorithms that produce point models of objects and calculate their trajectories using matched 3D point clouds. Results of numerical experiments for synthetic scenes are discussed.
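A minimal sketch of one ingredient the abstract mentions, computing an object's motion from matched 3D point clouds: the closed-form SVD (Kabsch) rigid alignment below recovers a rotation and translation between two frames. It is a generic stand-in, not the paper's specific algorithm.

```python
# Minimal sketch: rigid motion of an object between two frames from matched
# 3D points, via the SVD/Kabsch method.
import numpy as np

def rigid_motion(points_prev, points_curr):
    """points_prev, points_curr: (N, 3) arrays of matched 3D points."""
    c_prev = points_prev.mean(axis=0)
    c_curr = points_curr.mean(axis=0)
    H = (points_prev - c_prev).T @ (points_curr - c_curr)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_curr - R @ c_prev
    return R, t                        # one step of the object's trajectory
```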

8.
3-D surface description from binocular stereo
A stereo vision system that attempts to achieve robustness with respect to scene characteristics, from textured outdoor scenes to environments composed of highly regular man-made objects, is presented. It integrates area-based and feature-based primitives. The area-based processing provides a dense disparity map, and the feature-based processing provides an accurate location of discontinuities. An area-based cross correlation, an ordering constraint, and a weak surface smoothness assumption are used to produce an initial disparity map. This disparity map is only a blurred version of the true one because of the smoothing introduced by the cross correlation. The problem can be reduced by introducing edge information. The disparity map is smoothed and the unsupported points removed. This method gives an active role to edgels parallel to the epipolar lines, whereas they are discarded in most feature-based systems. Very good results have been obtained on complex scenes in different domains.
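The area-based stage can be illustrated with a deliberately simple sketch: dense disparities from normalized cross-correlation over a fixed window along image rows (assumed rectified, so epipolar lines are rows). The window size, search range, and brute-force loops are illustrative only; the ordering constraint and edge-based refinement from the abstract are not shown.

```python
# Illustrative sketch of the area-based stage only: dense disparity via
# normalized cross-correlation block matching along rectified image rows.
import numpy as np

def ncc_disparity(left, right, max_disp=32, win=5):
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y-half:y+half+1, x-half:x+half+1].astype(np.float64)
            ref = (ref - ref.mean()) / (ref.std() + 1e-9)
            best, best_d = -np.inf, 0
            for d in range(max_disp):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.float64)
                cand = (cand - cand.mean()) / (cand.std() + 1e-9)
                score = (ref * cand).mean()     # normalized cross-correlation
                if score > best:
                    best, best_d = score, d
            disp[y, x] = best_d
    return disp
```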

9.
Multi-object detection and tracking by stereo vision
This paper presents a new stereo vision-based model for multi-object detection and tracking in surveillance systems. Unlike most existing monocular camera-based systems, a stereo vision system is constructed in our model to overcome the problems of illumination variation, shadow interference, and object occlusion. In each frame, a sparse set of feature points are identified in the camera coordinate system, and then projected to the 2D ground plane. A kernel-based clustering algorithm is proposed to group the projected points according to their height values and locations on the plane. By producing clusters, the number, position, and orientation of objects in the surveillance scene can be determined for online multi-object detection and tracking. Experiments on both indoor and outdoor applications with complex scenes show the advantages of the proposed system.
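As a rough sketch of the grouping step, the code below clusters triangulated feature points after projecting them onto the ground plane, using scikit-learn's MeanShift as a stand-in for the paper's kernel-based clustering; the coordinate conventions and bandwidth are assumptions.

```python
# Hypothetical sketch: project 3D feature points to the ground plane and
# cluster them (MeanShift stands in for the kernel-based clustering).
import numpy as np
from sklearn.cluster import MeanShift

def cluster_objects(points_3d, bandwidth=0.5):
    """points_3d: (N, 3) points in camera coordinates; ground plane = X-Z."""
    ground_xz = points_3d[:, [0, 2]]          # location on the ground plane
    height = points_3d[:, 1:2]                # height above the plane
    features = np.hstack([ground_xz, height])
    labels = MeanShift(bandwidth=bandwidth).fit_predict(features)
    centers = [features[labels == k, :2].mean(axis=0) for k in np.unique(labels)]
    return labels, centers                    # one cluster per detected object
```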

10.
We present an interactive algorithm to compute sound propagation paths for transmission, specular reflection and edge diffraction in complex scenes. Our formulation uses an adaptive frustum representation that is automatically sub-divided to accurately compute intersections with the scene primitives. We describe a simple and fast algorithm to approximate the visible surface for each frustum and generate new frusta based on specular reflection and edge diffraction. Our approach is applicable to all triangulated models and we demonstrate its performance on architectural and outdoor models with tens or hundreds of thousands of triangles and moving objects. In practice, our algorithm can perform geometric sound propagation in complex scenes at 4-20 frames per second on a multi-core PC.
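A highly simplified sketch of the adaptive-frustum idea, under the assumptions that a frustum is approximated by its four corner rays and that `trace_ray` is a hypothetical helper returning the id of the nearest surface: when the corners disagree on the visible surface, the frustum is split into four sub-frusta and recursed. This is an illustration of adaptive subdivision in general, not the paper's exact algorithm.

```python
# Sketch of adaptive frustum subdivision; trace_ray(origin, dir) is an
# assumed helper that returns an identifier of the nearest hit surface.
import numpy as np

def split(corner_dirs):
    """corner_dirs: (4, 3) unit directions ordered tl, tr, br, bl."""
    tl, tr, br, bl = corner_dirs
    n = lambda v: v / np.linalg.norm(v)
    t, r, b, l, c = n(tl+tr), n(tr+br), n(br+bl), n(bl+tl), n(tl+tr+br+bl)
    return [np.array([tl, t, c, l]), np.array([t, tr, r, c]),
            np.array([c, r, br, b]), np.array([l, c, b, bl])]

def trace_frustum(origin, corner_dirs, trace_ray, depth=0, max_depth=4):
    hits = [trace_ray(origin, d) for d in corner_dirs]
    if len(set(hits)) == 1 or depth >= max_depth:
        return [(corner_dirs, hits[0])]       # frustum treated as one surface
    out = []
    for sub in split(corner_dirs):            # corners disagree: subdivide
        out += trace_frustum(origin, sub, trace_ray, depth + 1, max_depth)
    return out
```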

11.
Taking projective geometry and the principles of binocular stereo vision as its theoretical basis, this work studies 3D reconstruction techniques for mobile robots and builds a fairly accurate model of the scene in the region of interest that the robot traverses while roaming. A fast modeling method for the robot is designed, and the iterative closest point (ICP) algorithm is used to fuse multiple local 3D scene models. In addition, grid-projection theory is applied to update the global 3D scene model. The 3D scene reconstructed with the grid model is rich in environmental information and describes the model accurately, and can be applied in the field of mobile robot navigation.
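A minimal point-to-point ICP sketch, illustrating how successive local 3D scene models could be registered before fusion into the global grid model; it assumes roughly pre-aligned clouds and is a generic stand-in, not the paper's implementation.

```python
# Minimal ICP sketch: nearest-neighbor correspondences plus an SVD-based
# rigid update, iterated until convergence (fixed iteration count here).
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """source, target: (N, 3) and (M, 3) point clouds. Returns (R, t)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)              # closest-point correspondences
        matched = target[idx]
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (matched - cm))
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:         # guard against reflections
            Vt[-1, :] *= -1
            R_step = Vt.T @ U.T
        t_step = cm - R_step @ cs
        src = src @ R_step.T + t_step         # apply the incremental transform
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```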

12.
In an industrial context, most manufactured objects are designed using CAD (Computer-Aided Design) software. For visualization, data exchange or manufacturing applications, the geometric model has to be discretized into a 3D mesh composed of a finite number of vertices and edges. However, the initial model may sometimes be lost or unavailable. In other cases, the 3D discrete representation may be modified, e.g. after numerical simulation, and no longer corresponds to the initial model. A retro-engineering method is then required to reconstruct a 3D continuous representation from the discrete one. In this paper, we present an automatic and comprehensive retro-engineering process dedicated mainly to 3D meshes obtained initially by mechanical object discretization. First, several improvements in automatic detection of geometric primitives from a 3D mesh are presented. Then a new formalism is introduced to define the topology of the object and compute the intersections between primitives. The proposed method is validated on 3D industrial meshes.
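As a hedged sketch of one ingredient, automatic primitive detection, the code below fits a single plane to mesh vertices with RANSAC; the thresholds are illustrative, and the paper's method also covers other primitive types and the topology/intersection formalism, which are not shown.

```python
# Sketch: RANSAC plane detection on mesh vertices (one primitive type only).
import numpy as np

def ransac_plane(vertices, iters=500, tol=1e-3, seed=0):
    """vertices: (N, 3). Returns (unit normal n, offset d, inlier mask) for n.x = d."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(iters):
        p0, p1, p2 = vertices[rng.choice(len(vertices), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:          # degenerate sample, skip
            continue
        n = n / np.linalg.norm(n)
        d = n @ p0
        inliers = np.abs(vertices @ n - d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers
```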

13.
Flexible point-based rendering on mobile devices
We have seen the growing deployment of ubiquitous computing devices and the proliferation of complex virtual environments. As demand for detailed and high-quality geometric models increases, typical scene size (often including scanned 3D objects) easily reaches millions of geometric primitives. Traditionally, vertices and polygons (faces) represent 3D objects. These representations, coupled with the traditional rendering pipeline, don't adequately support display of complex scenes on different types of platforms with heterogeneous rendering capabilities. To accommodate these constraints, we use a packed hierarchical point-based representation for rendering. Point-based rendering offers a simple-to-use level-of-detail mechanism in which we can adapt the number of points rendered to the underlying object's screen size. Our work strives for flexible rendering - that is, rendering only the interior hierarchy nodes as representatives of the subtree. In particular, we avoid traversal of the entire hierarchy and reconstruction of model attributes (such as normals and color information) for interior nodes because both operations can be prohibitively expensive. Flexible rendering also lets us traverse the hierarchy in a specific order, resulting in a fast, one-pass shadow-mapping algorithm.
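A conceptual sketch of the level-of-detail selection described above (all names and parameters are hypothetical): traverse the point hierarchy and stop at interior nodes whose projected size falls below a pixel threshold, so only a cut through the hierarchy is rendered rather than every leaf.

```python
# Sketch: selecting a cut through a point hierarchy by projected screen size.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PointNode:
    center: tuple                         # representative position (x, y, z)
    radius: float                         # bounding-sphere radius of the subtree
    children: List["PointNode"] = field(default_factory=list)

def select_cut(node, distance_fn, focal_px, max_px=2.0, out=None):
    """Collect nodes to splat: stop descending once a node covers <= max_px pixels."""
    if out is None:
        out = []
    size_px = focal_px * (2.0 * node.radius) / max(distance_fn(node.center), 1e-6)
    if size_px <= max_px or not node.children:
        out.append(node)                  # render this node as a single splat
    else:
        for child in node.children:
            select_cut(child, distance_fn, focal_px, max_px, out)
    return out
```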

14.
The proposed architecture aims to recover 3-D shape information from gray-level images of a scene, to build a geometric representation of the scene in terms of geometric primitives, and to reason about the scene. The novelty of the architecture lies in the integration of different approaches: symbolic reasoning techniques typical of knowledge representation in artificial intelligence, algorithmic capabilities typical of artificial vision schemes, and analogue techniques typical of artificial neural networks. Experimental results obtained by means of an implemented version of the proposed architecture acting on real scene images are reported to illustrate the system capabilities. © 1996 John Wiley & Sons, Inc.

15.
An output-sensitive visibility algorithm is one whose runtime is proportional to the number of visible graphic primitives in a scene model—not to the total number of primitives, which can be much greater. The known practical output-sensitive visibility algorithms are suitable only for static scenes, because they include a heavy preprocessing stage that constructs a spatial data structure which relies on the model objects' positions. Any changes to the scene geometry might cause significant modifications to this data structure. We show how these algorithms may be adapted to dynamic scenes. Two main ideas are used: first, update the spatial data structure to reflect the dynamic objects' current positions, and make this update efficient by restricting it to a small part of the data structure. Second, use temporal bounding volumes (TBVs) to avoid having to consider every dynamic object in each frame. The combination of these techniques yields efficient, output-sensitive visibility algorithms for scenes with multiple dynamic objects. The performance of our methods is shown to be significantly better than that of previous output-sensitive algorithms intended for static scenes. TBVs can be adapted to applications where no prior knowledge of the objects' trajectories is available, such as virtual reality (VR) and simulations. Furthermore, they save updates of the scene model itself, not just of the auxiliary data structure used by the visibility algorithm. They can therefore be used to greatly reduce the communications overhead in client-server VR systems, as well as in general distributed virtual environments.
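A simplified sketch of a temporal bounding volume (TBV), under the assumption that only a per-object maximum speed is known: the object's bounding box is inflated by the farthest distance it can travel during a chosen validity interval, so the object need not be reconsidered until that interval expires (or, in the full algorithm, until the TBV becomes visible). Names and parameters are illustrative.

```python
# Sketch: constructing and expiring a speed-bound temporal bounding volume.
import numpy as np

def make_tbv(aabb_min, aabb_max, max_speed, valid_frames, dt=1/30):
    """Returns (tbv_min, tbv_max, valid_frames) for an object's current AABB."""
    slack = max_speed * valid_frames * dt            # farthest possible travel
    return (np.asarray(aabb_min) - slack,
            np.asarray(aabb_max) + slack,
            valid_frames)

def tbv_expired(frame, created_frame, valid_frames):
    # The object is only reconsidered once its TBV validity runs out.
    return frame - created_frame >= valid_frames
```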

16.
Light field display (LFD) is considered a promising technology for reconstructing the distribution of light rays of a real 3D scene; it approximates the original light field of the displayed objects with all the depth cues of human vision, including binocular disparity, motion parallax, color cues, and correct occlusion relationships. Currently, computer-generated content is widely used in LFD systems, so rich 3D content can be provided. This paper first introduces applications of light field technologies in display systems. Virtual stereo content rendering techniques and their application scenarios are then surveyed thoroughly, and their pros and cons are pointed out. Moreover, the coding and correction algorithms used in virtual stereo content rendering are reviewed according to the different characteristics of light field systems. The discussion shows that many problems remain in existing rendering techniques for LFD; new rendering algorithms are needed to solve the real-time light-field rendering problem for large-scale virtual scenes.

17.
18.
Generation of stereo image pairs
Obtaining a stereo image pair of the same scene is a key problem in binocular stereo imaging. A method is proposed for generating stereo image pairs when the 3D scene has already been built. Based on the principles of binocular stereo vision, the method uses camera objects in 3DS MAX to apply coordinate transformations and perspective projection transformations to the objects in the scene, generating the left-eye and right-eye views respectively. Experimental results show that the positional relationship between the two target cameras and the 3D model, as well as the baseline length, are important factors affecting the stereo effect; by changing the positions of the target cameras relative to the 3D model, stereo image pairs with positive or negative parallax can be generated, and when the ratio parameter of AB to CO is 0.05, the generated stereo image pair gives a better stereo effect.
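The underlying idea can be sketched independently of 3DS MAX (the code below is a hedged illustration, not the paper's setup): the left and right views are produced by shifting the camera by half the baseline along its horizontal axis, re-aiming it at the same target, and applying the same perspective projection to the scene points.

```python
# Sketch: left/right view generation for a stereo pair of a known 3D scene.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    z = eye - target; z /= np.linalg.norm(z)     # camera looks down -z
    x = np.cross(up, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z])                      # world -> camera rotation
    return R, -R @ eye

def project(points, eye, target, focal, baseline, side):
    """side = -1 for the left view, +1 for the right view. points: (N, 3)."""
    R, _ = look_at(eye, target)
    offset_eye = eye + side * (baseline / 2.0) * R[0]   # shift along camera x-axis
    R, t = look_at(offset_eye, target)           # target cameras converge on the scene
    cam = points @ R.T + t                       # points in camera coordinates
    return focal * cam[:, :2] / -cam[:, 2:3]     # perspective divide
```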

19.
3D video billboard clouds reconstruct and represent a dynamic three-dimensional scene using displacement-mapped billboards. They consist of geometric proxy planes augmented with detailed displacement maps and combine the generality of geometry-based 3D video with the regularization properties of image-based 3D video. 3D video billboards are an image-based representation placed in the disparity space of the acquisition cameras and thus provide a regular sampling of the scene with a uniform error model. We propose a general geometry filtering framework which generates time-coherent models and removes reconstruction and quantization noise as well as calibration errors. This replaces the complex and time-consuming sub-pixel matching process in stereo reconstruction with a bilateral filter. Rendering is performed using a GPU-accelerated algorithm which generates consistent view-dependent geometry and textures for each individual frame. In addition, we present a semi-automatic approach for modeling dynamic three-dimensional scenes with a set of multiple 3D video billboard clouds.
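A small sketch of the kind of bilateral filtering mentioned above: smoothing a per-pixel disparity/displacement map while preserving depth discontinuities, using spatial and range Gaussian weights. The parameters and the brute-force implementation are illustrative assumptions, not the paper's filter.

```python
# Sketch: bilateral filtering of a float disparity/displacement map.
import numpy as np

def bilateral_filter(depth, radius=3, sigma_s=2.0, sigma_r=0.05):
    """depth: (H, W) float array. Returns an edge-preserving smoothed copy."""
    h, w = depth.shape
    out = np.zeros_like(depth)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = depth[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-((patch - depth[y, x]) ** 2) / (2 * sigma_r ** 2))
            weight = w_s * w_r                 # spatial * range weights
            out[y, x] = (weight * patch).sum() / weight.sum()
    return out
```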

20.