Found 20 similar documents; search took 10 ms
1.
Research on a fast 3D reconstruction algorithm for micro-surfaces based on image fusion*  Cited by: 1 (self-citations: 0, others: 1)
Based on an analysis of the diversity and correlation characteristics of electron-probe images, this paper proposes first applying a lifting wavelet for fast image fusion to increase the information content of the images, then extracting elevation data according to the texture similarity of the micro-surface images, and finally reconstructing the 3D scene quickly via vertex arrays. Experiments show that the method is information-rich, simple to operate, and produces realistic scenes that readily support interaction, giving it good practical value.
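The fast-fusion step in the abstract above can be illustrated with a one-level Haar lifting scheme and a max-absolute-detail fusion rule (a minimal 1-D sketch; the paper's actual lifting wavelet, fusion rule, and all function names here are assumptions, not taken from the paper):

```python
# One-level Haar lifting transform: split into even/odd samples,
# predict the odds from the evens (detail), then update the evens
# (approximation). Assumes an even-length signal.
def haar_lift(signal):
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def haar_unlift(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

def fuse(sig_a, sig_b):
    """Fuse two signals in the lifting domain: average the
    approximations, keep the larger-magnitude detail coefficient
    (a common rule for preserving edges)."""
    a_lo, a_hi = haar_lift(sig_a)
    b_lo, b_hi = haar_lift(sig_b)
    lo = [(x + y) / 2 for x, y in zip(a_lo, b_lo)]
    hi = [x if abs(x) >= abs(y) else y for x, y in zip(a_hi, b_hi)]
    return haar_unlift(lo, hi)
```

Lifting schemes like this are attractive for "fast" fusion because they run in place with only additions and shifts, which matches the abstract's emphasis on speed.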
2.
BRAD3D, a low-cost hardware platform for the development of real-time 3D graphics software, is presented. The BRAD3D configuration is derived from a generalization of 3D image synthesis. Three basic processes have been identified: the geometric process, dealing with the measurements of the scene; the topologic process, extracting visible information from the polygonal structure; and the scan-conversion process, producing pixel values on a frame buffer. BRAD3D is implemented as a three-stage pipeline and accommodates depth-list and scan-line hidden-surface-removal algorithms. Each stage of the pipeline can be implemented using different hardware solutions. A microprocessor-based solution is presented as a general prototyping approach.
3.
A 3D geometric model reconstruction method based on empirical mode decomposition (EMD) and information fusion is proposed, consisting of three steps. First, the signal is spherically parameterized, mapped to a plane, and sampled on a uniform regular grid. Second, the planar signal is decomposed by neighborhood-limited EMD and by wavelet decomposition, and the resulting intrinsic mode layers and wavelet coefficients are fused. Finally, the irregular original mapped signal is recovered from the layer signals and inverse-mapped back to the 3D geometric model signal. In addition, an SNR-based performance metric for 3D geometric signal fusion is proposed to validate the reconstructed models. Experimental results show that the method fuses 3D geometric signals effectively, and that reconstruction based on neighborhood-limited EMD outperforms wavelet decomposition.
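The SNR-based evaluation index mentioned above can be illustrated in its standard form (a generic definition over 1-D signals; the paper's exact formulation over 3-D geometric signals is not given in the abstract):

```python
import math

def snr_db(reference, reconstructed):
    """Signal-to-noise ratio in decibels: power of the reference
    signal over the power of the residual between reference and
    reconstruction."""
    signal_power = sum(x * x for x in reference)
    noise_power = sum((x - y) ** 2 for x, y in zip(reference, reconstructed))
    if noise_power == 0:
        return float("inf")  # perfect reconstruction
    return 10 * math.log10(signal_power / noise_power)
```

A higher value indicates a reconstruction closer to the reference, which is how such an index would rank the EMD-based fusion against the wavelet-based one.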
4.
The Journal of Supercomputing - This paper proposes a method to reconstruct three-dimensional (3D) objects using real-time fusion and analysis of multiple sensor data. This paper attempts to create...
5.
Objective: Vision-based 3D scene reconstruction has been widely applied in robot navigation, aerial map building, augmented reality, and other fields. However, large camera motion breaks traditional reconstruction methods that rely on narrow-baseline constraints. Method: For wide-baseline settings, a 3D scene reconstruction algorithm that fuses high-level semantic priors is proposed. Building on a Markov random field (MRF) model, it combines multiple superpixel features (appearance, collinearity, coplanarity, and depth) to infer the 3D position and orientation of each superpixel across views, yielding an initial wide-baseline 3D reconstruction. At the same time, high-level semantic priors are applied recursively to merge superpixels of similar depth, progressively refining the scene depth and the 3D model. Result: Experiments show that the method achieves more stable and accurate depth estimation and 3D scene reconstruction than traditional methods in a variety of wide-baseline environments, especially under severe camera motion. Conclusion: This paper shows how, under wide-baseline conditions, multiple image features can be combined with triangulation-based geometric features to build an accurate 3D scene model. The method jointly infers the 3D positions and orientations of superpixels across views with an MRF model, guides the reconstruction with high-level semantic priors, and uses a recursive framework to refine scene depth progressively. Experiments confirm that, across different wide-baseline environments, the method obtains 3D scene models closer to the ground truth than traditional methods.
6.
7.
To address the heavy interaction and frequent distortion in current 3D reconstruction systems, a simple and easy-to-operate system is designed based on the principle of reconstructing objects from motion. First, the cameras are calibrated and stereo point matches are established; 3D coordinates are then obtained by two-view reconstruction, i.e., triangulation. Next, exploiting the continuity of motion, the 3D information from multiple viewpoints is combined with the 2D local attributes of image feature points for optimization. Finally, triangulation (meshing) and texture rendering yield the true shape of the object. Compared with traditional 3D reconstruction systems, this system performs reconstruction with very little user interaction and produces complete, low-distortion 3D models.
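The two-view triangulation step above (matched points to 3D coordinates) can be sketched with the midpoint method for two camera rays (a minimal pure-Python sketch; the paper's calibration and matching pipeline is not reproduced, and the midpoint method is only one of several triangulation variants):

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def add_scaled(p, t, d): return [x + t * y for x, y in zip(p, d)]

def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint triangulation: each matched pixel back-projects to a
    ray p + t*d from its camera center; the 3D point is taken halfway
    between the closest points on the two rays."""
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = sub(p2, p1)
    denom = a * c - b * b  # zero iff the rays are parallel
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel")
    # Closed-form solution of the 2x2 normal equations for the
    # ray parameters minimizing the inter-ray distance.
    t1 = (c * dot(w, d1) - b * dot(w, d2)) / denom
    t2 = (b * dot(w, d1) - a * dot(w, d2)) / denom
    q1 = add_scaled(p1, t1, d1)
    q2 = add_scaled(p2, t2, d2)
    return [(x + y) / 2 for x, y in zip(q1, q2)]
```

With noise-free rays the two closest points coincide, so the midpoint is the exact intersection; with noisy matches it gives a least-distance compromise between the two rays.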
8.
A fast and effective algorithm for 3D solid reconstruction  Cited by: 1 (self-citations: 2, others: 1)
Starting from the array representation of graphs as its basic representation, the method stores vertex, edge, face, and face-loop information in arrays and carries out 3D reconstruction from three orthographic views step by step from the array elements. Practice shows that the method fully exploits the ordered, corresponding, and direct nature of arrays, greatly improving the efficiency of 3D reconstruction while shrinking the huge search space and reducing the time complexity of traditional methods.
9.
Automatic reconstruction of 3D objects from 2D orthographic views has been a major research issue in CAD/CAM. In this paper, two accelerating techniques to improve the efficiency of reconstruction are presented. First, some pseudo elements are removed using depth and topology information as soon as the wire-frame is constructed, which reduces the search space. Second, the proposed algorithm does not establish all possible surfaces in the process of generating 3D faces. The surfaces and edge loops are generated by using the relationship between the boundaries of 3D faces and their projections. This avoids the growth in combinatorial complexity of previous methods that have to check all possible pairs of 3D candidate edges.
10.
Dalong Jiang, Yuxiao Hu, Lei Zhang, Hongjiang Zhang 《Pattern recognition》2005,38(6):787-798
Face recognition with variant pose, illumination and expression (PIE) is a challenging problem. In this paper, we propose an analysis-by-synthesis framework for face recognition with variant PIE. First, an efficient two-dimensional (2D)-to-three-dimensional (3D) integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination. Then, realistic virtual faces with different PIE are synthesized based on the personalized 3D face to characterize the face subspace. Finally, face recognition is conducted based on these representative virtual faces. Compared with other related work, this framework has the following advantages: (1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; (2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex PIE; and (3) compared with other 3D reconstruction approaches, our proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. Extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition under changing PIE.
11.
Discretized distance maps have been used in robotics for path planning and efficient collision detection in static environments [1]. However, they have been used at the finest level of resolution, thereby making them memory intensive. In this article, we propose an octree-based hierarchical representation for discretized distance maps, called Octree Distance Maps (ODM), and show its use in efficient collision detection. To the best of our knowledge, ours is the first work to consider the use of hierarchical distance maps for collision detection. The ODM representation achieves an advantageous compromise between array-based distance maps and ordinary octrees. Compared to the former, ODM requires a fraction of the memory at the expense of somewhat slower collision detection. Compared to the latter, ODM requires slightly more memory but provides a significant improvement in collision detection. ODM is similar to the quadtree distance transforms used in image representation [2] but differs significantly in various aspects of distance representation and its use in collision detection, since the main motivation behind ODM is efficient collision detection instead of image representation. We then present algorithms for (1) creating an ODM from an octree, and (2) efficient collision detection based on an ODM. Extensive experiments are then presented and compared with octree-based collision detection. Our experimental results quantify the advantageous compromise achieved by the ODM representation. © 1997 John Wiley & Sons, Inc.
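The underlying distance-map collision test that the article builds on can be sketched on a flat (non-hierarchical) 2-D grid: a query is collision-free when the stored clearance at the robot's cell exceeds its bounding radius (a toy sketch; the hierarchical ODM/octree layering is not reproduced, and the brute-force map construction is for clarity, not speed):

```python
import math

def brute_force_distance_map(width, height, obstacles):
    """Precompute, for every cell, the Euclidean distance to the
    nearest obstacle cell (brute force: O(cells * obstacles))."""
    grid = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            grid[y][x] = min(
                math.hypot(x - ox, y - oy) for ox, oy in obstacles
            )
    return grid

def collision_free(dist_map, x, y, radius):
    """A disc of the given radius centered on cell (x, y) is
    collision-free iff the stored clearance exceeds the radius,
    so the query is a single lookup and comparison."""
    return dist_map[y][x] > radius
```

The memory-versus-speed trade-off the article studies comes from how this map is stored: a full array makes every query one lookup, while a hierarchy stores coarse clearances for large empty regions.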
12.
Zhan Song 《Pattern recognition》2010,43(10):3560-3571
Structured light-based sensing (SLS) requires the illumination to be coded either spatially or temporally in the projected pattern. However, while the former demands uniquely coded spatial windows whose size grows with the reconstruction resolution, thereby demanding increasing smoothness of the imaged scene, the latter demands multiple image captures. This article presents how the illumination of a very simple pattern plus a single image capture can also achieve 3D reconstruction. The illumination and imaging setting has the configuration of a typical SLS system, comprising a projector and a camera. The difference is that the illumination is little more than a checkerboard-like pattern - a non-structured pattern in the language of SLS - that does not provide direct correspondence between the camera's image plane and the projector's display panel. The system works from the image progressively, first constructing the orientation map of the target object from the observed grid-lines, then inferring the depth map with a few interpolation-based techniques. The system trades a little of the accuracy of traditional SLS for simplicity of operation. Compared to temporally coded SLS, it requires only one image capture to operate; compared with spatially coded SLS, it requires no spatial windows and in turn a lesser degree of smoothness of the object surface; compared with methods like shape from shading and photometric stereo, owing to its use of artificial illumination it is less affected by the surface reflectance of the target and by the ambient lighting condition.
13.
Qin Wang, Qiong‐Hua Wang, Sheng‐Xue Gu, Chun‐Ling Liu 《Journal of the Society for Information Display》2016,24(3):198-203
If the discrepancy between accommodation and convergence caused by a stereoscopic display exceeds the fusion range of human eyes, viewers will see ghosting images, which leads to the loss of correct depth information and can even cause severe visual fatigue. In this paper, an experiment to investigate the binocular fusion range is conducted for a polarized 3D display. Two experimental trials examine two aspects of the fusion range: outward depth and inward depth. 3D modeling software is used to generate the test stereoscopic image pairs, which vary in depth by adjusting the separation between the virtual cameras. The angular parallax corresponding to the limit of the fusion range is obtained by determining the critical point at which ghosting appears. The experimental results show a deviation between the theoretical fusion range calculated by formula and the experimental one: −0.223° to 0.275° is the critical fusion range within which the polarized 3D display avoids ghosting images.
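The angular parallax quantity measured in this experiment can be computed from the on-screen parallax and the viewing distance with the standard stereoscopy relation (the trial's actual display geometry is not given in the abstract, so the parameter names and example numbers below are illustrative only):

```python
import math

def angular_parallax_deg(screen_parallax_mm, viewing_distance_mm):
    """Angular parallax (degrees) subtended at the eye by the
    on-screen separation between the left- and right-eye image
    points: 2 * atan(p / (2 * L))."""
    return math.degrees(
        2 * math.atan(screen_parallax_mm / (2 * viewing_distance_mm))
    )
```

Positive and negative screen parallax correspond to the inward and outward depth directions that the two trials probe, which is why the reported fusion range spans a negative-to-positive interval of angles.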
14.
Li Zongmin, Wu Zijian, Kuang Zhenzhong, Chen Kai, Gan Yongzhou, Fan Jianping 《Multimedia Tools and Applications》2014,72(2):1731-1749
Multimedia Tools and Applications - Many existing 3D model retrieval methods use KNN (k-nearest neighbor) search for similarity search, but it is inefficient in high-dimensional spaces. In this paper,...
15.
Geometric fusion for a hand-held 3D sensor  Cited by: 2 (self-citations: 0, others: 2)
This article presents a geometric fusion algorithm developed for the reconstruction of 3D surface models from hand-held sensor data. Hand-held systems allow full 3D movement of the sensor to capture the shape of complex objects. Techniques previously developed for reconstruction from conventional 2.5D range image data cannot be applied to hand-held sensor data. A geometric fusion algorithm is introduced to integrate the measured 3D points from a hand-held sensor into a single continuous surface. The new geometric fusion algorithm is based on the normal-volume representation of a triangle, which enables incremental transformation of an arbitrary mesh into an implicit volumetric field function. The system is demonstrated for reconstruction of surface models from both hand-held sensor data and conventional 2.5D range images.
Received: 30 August 1999 / Accepted: 21 January 2000
16.
A. A. Zolotukhin, I. V. Safonov, K. A. Kryzhanovskii 《Pattern Recognition and Image Analysis》2013,23(1):168-174
An algorithm for the reconstruction of the 3D shape of the surface of a micro-object from a stereo pair of images obtained on a raster (scanning) electron microscope (REM) is considered. A model of image formation in the REM is presented. The SIFT algorithm is used to find corresponding points, which are then used to determine the relative position of the object in the stereo pair by the RANSAC method. A set of points is created in 3D space and later interpolated to reconstruct the 3D surface.
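The RANSAC stage used above for outlier-robust estimation can be illustrated with a generic hypothesize-and-verify loop; for brevity, this sketch fits a toy 2-D line model rather than the stereo-pair relative pose estimated in the paper (an illustrative substitution, with all names and thresholds assumed):

```python
import random

def ransac_line(points, iters=200, threshold=0.5, seed=0):
    """Generic RANSAC: repeatedly fit a model to a minimal sample
    of the data and keep the hypothesis with the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal sample
        if x1 == x2:
            continue  # vertical sample: skip this hypothesis
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        # Verify: count points within the residual threshold.
        inliers = [(x, y) for x, y in points
                   if abs(y - (slope * x + intercept)) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (slope, intercept), inliers
    return best_model, best_inliers
```

In the paper's setting, the minimal sample would be a set of SIFT correspondences and the model the relative pose between the two REM views, but the sample-fit-verify structure is the same.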
17.
Stratified 3D reconstruction, a layer-by-layer reconstruction upgraded from projective to affine and finally to metric, is a well-known 3D reconstruction method in computer vision. It is also a key supporting technology for well-known applications such as street view, Smart3D, and oblique photogrammetry. Generally speaking, the existing computer vision methods in the literature can be roughly classified into either geometry-based approaches for spatial vision or learning-based approaches for object vision. Although deep learning has demonstrated tremendous success in object vision in recent years, learning 3D scene reconstruction from multiple images is still rare, if not nonexistent, apart from work on depth learning from single images. This study explores the feasibility of learning stratified 3D reconstruction from putative point correspondences across images, and assesses whether it can be as robust to matching outliers as traditional geometry-based methods are. A special parsimonious neural network is designed for the learning. Our results show that it is indeed possible to learn a stratified 3D reconstruction from noisy image point correspondences, and the learnt reconstructions appear satisfactory, although they are not yet on a par with the state of the art in the structure-from-motion community, largely because the method lacks an explicit robust outlier detector such as random sample consensus (RANSAC). To the best of our knowledge, this study is the first attempt in the literature to learn 3D scene reconstruction from multiple images. Our results also show that how to integrate an outlier detector, implicitly or explicitly, into learning methods is a key problem to solve before learning methods can recover 3D scene structures comparable to those of the current geometry-based state of the art. Otherwise, any significant advancement in learning 3D structures from multiple images seems difficult, if not impossible. We even speculate that deep learning might be, by nature, unsuitable for learning 3D structure from multiple images or, more generally, for solving spatial vision problems.
18.
Polarization imaging can retrieve objects' 3D shapes with fine textures but limited accuracy, whereas binocular stereo vision provides coarse but accurate depths. To take full advantage of these two complementary techniques, we investigate a novel 3D reconstruction method based on the fusion of polarization imaging and binocular stereo vision for high-quality 3D reconstruction. We first generate the polarization surface by correcting the azimuth-angle errors on the basis of the registered binocular depth, which resolves the azimuthal ambiguity in polarization imaging. Then we propose a joint 3D reconstruction model for depth fusion, comprising a data-fitting term and a robust low-rank matrix-factorization constraint. The former transfers textures from the polarization surface to the fused depth by assuming a linear relationship between them, whereas the latter uses the low-frequency part of the binocular depth to improve the accuracy of the fused depth while accounting for missing entries and outliers. To solve the optimization problem in the proposed model, we adopt an efficient solution based on the alternating direction method of multipliers. Extensive experiments demonstrate the efficiency of the proposed method in comparison with state-of-the-art methods and exhibit its wide application prospects in 3D reconstruction.
19.
Objective: Textures generated by current hand-held 3D scanners have insufficient resolution, and some regions suffer from highlights, shadows, and shading variation. A texture reconstruction method based on multiple real photographs is therefore proposed. Method: First, the photographs are registered to the geometric model using feature matching. Second, according to the size of the reconstructed texture, a specially encoded position texture establishes a direct and precise correspondence from photo pixels to texture pixels. Then, from multiple photographs taken with the flash as the light source, simultaneous equations are set up through the position texture and solved for the diffuse component. Finally, an improved blending-weight fusion method fuses the solved diffuse components into the texture. Result: Intrinsic textures were reconstructed for three test models with this method. Compared with textures generated by the 3D scanner or taken directly from the photographs, the method is simple and convenient to use and yields highly clear intrinsic texture images free of highlights and shading effects. Conclusion: Experimental results show that the reconstructed textures are clearly superior to the original ones in resolution, color fidelity, and consistency, and that the method is highly accurate and robust, meeting the needs of high-quality texture reconstruction.
20.
UAV flight path planning for image-based 3D reconstruction  Cited by: 1 (self-citations: 1, others: 1)
With the development of UAV technology, 3D reconstruction from UAV image sequences is attracting growing attention. To reconstruct a complete 3D model of the task area while reducing UAV flight power consumption, a UAV flight path planning algorithm for image-based 3D reconstruction is proposed. For a convex polygonal task area, under image-overlap and temporal-continuity requirements, scan lines are planned with a raster method, and the best scan direction is chosen to minimize the number of turns. Comparative experiments on the Gazebo simulation platform verify that the UAV consumes less power when flying the path planned by this algorithm, and that the captured image sequences can reconstruct a 3D model of the task area.
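The raster-method scan-line planning described above can be sketched for an axis-aligned rectangular area: parallel strips spaced by the camera footprint width times (1 − side overlap), with alternating sweep direction so the UAV turns only at strip ends (a minimal sketch; the paper's convex-polygon handling and best-scan-direction selection are omitted, and the parameter names are assumptions):

```python
def lawnmower_waypoints(x_min, x_max, y_min, y_max,
                        footprint_w, side_overlap):
    """Boustrophedon (raster) coverage of a rectangle: the strip
    spacing shrinks as the required side overlap grows, and each
    strip reverses direction to minimize turning."""
    spacing = footprint_w * (1 - side_overlap)
    waypoints, y, forward = [], y_min, True
    while y <= y_max:
        xs = (x_min, x_max) if forward else (x_max, x_min)
        waypoints.append((xs[0], y))  # strip entry point
        waypoints.append((xs[1], y))  # strip exit point
        forward = not forward
        y += spacing
    return waypoints
```

Choosing the sweep direction parallel to the area's longest extent reduces the number of strips and hence turns, which is the intuition behind the best-scan-direction step in the abstract.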