20 similar documents found (search time: 78 ms)
1.
This paper discusses the current state and development of IBR (image-based rendering) and, building on an analysis of IBR methods, proposes an improved modeling-and-rendering approach: modeling and rendering of complex scenes based on 3D structure and images. That is, 3D structure is incorporated by using full three-dimensional solids for the scene as a whole, while IBR is applied at certain structurally complex parts, so as to achieve realistic rendering.
2.
Objective: In traditional incremental structure-from-motion algorithms, the selection of the initial image pair lacks robustness, the incremental solving process is inefficient, the bundle adjustment strategy contains redundant computation, and considerable error remains after model refinement. To address these problems, a new incremental structure-from-motion algorithm (SFM-Y) is proposed for 3D reconstruction from image sequences. Method: First, an improved adaptive outlier-filtering method strengthens the robustness of initial image pair selection and yields the pair used for the initial reconstruction. Second, the point cloud model is enriched by incremental iterative reconstruction, and an improved EPnP (efficient perspective-n-point) solver raises the efficiency and accuracy of the incremental camera-registration step. Finally, an optimized bundle adjustment strategy refines the model, suppressing drift and correcting the reprojection error. Results: Datasets of different sizes were used to compare the proposed method against traditional methods for a more comprehensive analysis of algorithm performance. The experiments show that SFM-Y improves both computational efficiency and result quality over traditional incremental structure-from-motion algorithms; the performance comparison indicates an improvement of roughly 10% in computational efficiency and reconstruction accuracy. Conclusion: The proposed incremental structure-from-motion algorithm achieves efficient and accurate 3D reconstruction from image sequences and outperforms traditional methods, with high computational efficiency, robust initial reconstruction, and good model quality.
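A minimal sketch of the incremental camera-registration step using OpenCV's EPnP solver inside RANSAC; the intrinsics and correspondences below are synthetic placeholders, not the paper's data or its improved solver.

```python
import numpy as np
import cv2

# Sketch of incremental registration: given 3D points already in the model and
# their 2D observations in a new image, estimate the new camera pose with EPnP
# inside RANSAC. All values here are illustrative.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                                   # assume no lens distortion

rng = np.random.default_rng(0)
object_points = rng.uniform(-2, 2, (50, 3)) + [0.0, 0.0, 8.0]   # known 3D points
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.3, 0.1, 1.0])
proj, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, dist)
image_points = proj.reshape(-1, 2)                   # their 2D observations

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, dist, flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)                       # pose of the newly added camera
    print("recovered translation:", tvec.ravel(), "| inliers:", len(inliers))
```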
3.
To recover occluded points effectively, a rank-1-based occluded-point recovery method is proposed under the assumption of an orthographic camera model. The method exploits the fact that all image points form a rank-1 matrix; this property is used to construct a projection matrix, from which estimates of the occluded points are computed. The estimates then replace the occluded points in the images, and after several iterations the true image positions of the occluded points are obtained. Simulated and real experiments show that the method is robust, converges well, and has small error.
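A small sketch of the rank-1 idea, not the authors' exact construction: missing (occluded) entries of a synthetic measurement matrix are filled by repeatedly projecting onto the best rank-1 approximation while keeping the observed entries fixed.

```python
import numpy as np

# Iterative low-rank completion: project the measurement matrix onto its best
# rank-1 approximation (via SVD), overwrite only the missing (occluded) entries
# with the projected values, and repeat until convergence. Synthetic data.
rng = np.random.default_rng(0)
W_true = np.outer(rng.standard_normal(20), rng.standard_normal(12))  # exactly rank 1
mask = rng.random(W_true.shape) > 0.2          # True where the point is visible
W = np.where(mask, W_true, 0.0)                # occluded entries start at 0

for _ in range(100):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W_rank1 = s[0] * np.outer(U[:, 0], Vt[0])  # rank-1 projection
    W_new = np.where(mask, W, W_rank1)         # keep observed, fill occluded
    if np.linalg.norm(W_new - W) < 1e-9:
        W = W_new
        break
    W = W_new

print("max error on occluded entries:", np.abs((W - W_true)[~mask]).max())
```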
4.
5.
A restoration model for motion-blurred images is proposed. After a Fourier transform, the spectrum of a motion-blurred image exhibits zeros in the frequency domain that can be used for parameter estimation; a Hough transform gives an initial estimate of the image's point spread function, and once the parameters of the point spread function have been estimated, a neural network is used for restoration. The model can restore uniform-velocity motion blur at arbitrary angles. The method is simple to apply and converges under global search, and experiments show that it is an effective approach to restoring motion-blurred images.
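A hedged sketch of the PSF-parameter estimation step: the log spectrum of a uniformly motion-blurred image shows dark parallel stripes (spectral zeros), and a Hough transform on the thresholded spectrum estimates their orientation. The synthetic blur, threshold, and vote count below are illustrative choices, not the paper's settings.

```python
import numpy as np
import cv2

# The normal direction of the spectral stripes coincides with the blur direction
# (mod 180 degrees), so the Hough angle of the strongest stripe estimates the blur angle.

# Synthesize a test image with horizontal motion blur (placeholder for a real photo).
rng = np.random.default_rng(1)
sharp = rng.random((256, 256))
kernel = np.zeros((15, 15))
kernel[7, :] = 1.0 / 15.0
blurred = cv2.filter2D(sharp, -1, kernel)

log_spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(blurred))))
norm = cv2.normalize(log_spec, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Keep the darkest spectral bands and look for straight lines through them.
_, zeros = cv2.threshold(norm, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
lines = cv2.HoughLines(zeros, 1, np.pi / 180, 150)

if lines is not None:
    theta = lines[0][0][1]                 # normal angle of the strongest stripe
    print("estimated blur angle (deg):", np.degrees(theta) % 180.0)
```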
6.
Image correction is a pervasive problem in image processing. Distortions introduced by the aberrations, geometric distortion, and limited bandwidth of the imaging system, by camera pose and the nonlinearity of the scanning device, and by motion blur, radiometric distortion, and noise all hinder subsequent processing, so image correction is needed to restore a distorted image to the appearance of the original. Spherical panoramas with a 360° wide-angle field of view captured by special lenses can bring more of the scene into view, but...
7.
8.
3D-2D medical image registration based on approximate projection invariance
This work studies the 3D-2D registration problem between MRA and DSA images of the same patient and proposes a registration algorithm based on approximate projection invariance: under the same viewing direction, the projection of the patient's 3D MRA vessel skeleton coincides with the 2D DSA skeleton. A cost function between the two is defined, and Newton iteration yields the optimal viewing parameters. Experiments on clinically acquired MRA and DSA data gave fairly satisfactory results.
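A rough sketch of the registration idea under stated assumptions: orthographic projection instead of the clinical imaging geometry, a nearest-neighbour skeleton distance as the cost, and a quasi-Newton (BFGS) search standing in for the paper's Newton iteration; all data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

# Project the 3D vessel skeleton under candidate view angles and score overlap
# with the 2D skeleton by the mean squared nearest-neighbour distance.
rng = np.random.default_rng(0)
skeleton_3d = rng.standard_normal((300, 3))                     # stand-in MRA skeleton
true_R = Rotation.from_euler("xyz", [0.2, -0.1, 0.3])
skeleton_2d = true_R.apply(skeleton_3d)[:, :2]                  # stand-in DSA skeleton

tree = cKDTree(skeleton_2d)

def cost(angles):
    proj = Rotation.from_euler("xyz", angles).apply(skeleton_3d)[:, :2]
    d, _ = tree.query(proj)
    return np.mean(d ** 2)

res = minimize(cost, x0=np.zeros(3), method="BFGS")             # quasi-Newton search
print("estimated view angles:", res.x, "cost:", res.fun)
```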
9.
10.
To address the weak training constraints and low prediction accuracy of monocular 3D object detection networks, a monocular 3D detection network based on perspective projection is proposed through improvements to the network structure, the construction of perspective-projection constraints, and the optimization of the loss function. First, starting from the perspective-projection model, the transformations among the world, the camera, and the object are used to build a model that solves for the 3D bounding box from vanishing points (VP). Second, using spatial geometric relations and prior size information, this model is simplified into a constraint among orientation angle, object dimensions, and the 3D bounding box. Finally, exploiting the fact that the size constraint is unimodal and easy to regress, a learned orientation-dimension loss function is proposed, improving the learning efficiency and prediction accuracy of the network. During training, to compensate for the lack of constraints on the 3D center in monocular 3D detection networks, a training strategy that jointly constrains orientation, dimensions, and the 3D center is proposed, based on the spatial geometric relation between the 3D bounding box and the 2D box. Experiments on the KITTI and SUN-RGBD datasets show that the algorithm yields more accurate detections and is more effective for 3D object detection than other methods.
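A sketch of the perspective-projection constraint the training strategy relies on: the eight corners of a 3D box defined by yaw, dimensions, and center are projected with the intrinsics K, and their projection must fit the 2D box. K and the pose values are illustrative, not KITTI calibration.

```python
import numpy as np

# Build the 8 corners of a 3D box from (yaw, dimensions, center) in camera
# coordinates and project them with the intrinsics; the tightest enclosing
# rectangle of the projected corners is the 2D box used as a constraint.
K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])

def project_box(center, dims, yaw):
    h, w, l = dims
    # corners in the object frame (x right, y down, z forward; box sits on y = 0)
    x = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * l / 2
    y = np.array([0, 0, 0, 0, -1, -1, -1, -1]) * h
    z = np.array([1, -1, -1, 1, 1, -1, -1, 1]) * w / 2
    R = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                  [0, 1, 0],
                  [-np.sin(yaw), 0, np.cos(yaw)]])
    corners = R @ np.vstack([x, y, z]) + np.asarray(center).reshape(3, 1)
    uv = K @ corners
    return (uv[:2] / uv[2]).T                     # 8 x 2 pixel coordinates

corners_2d = project_box(center=[2.0, 1.6, 15.0], dims=[1.5, 1.6, 3.9], yaw=0.3)
xmin, ymin = corners_2d.min(axis=0)
xmax, ymax = corners_2d.max(axis=0)
print("tightest enclosing 2D box:", xmin, ymin, xmax, ymax)
```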
11.
Preprocessing of projection-drawing data for 3D solid reconstruction
This paper describes algorithms for processing projection-drawing data during 3D solid reconstruction. When the drawing lies on a single layer, a view-segmentation algorithm inserts auxiliary lines parallel to the two coordinate axes and moves them until they separate the three views. For three-view projection data referenced to a common coordinate system, conversion formulas from the device coordinate system to each view coordinate system are derived and a conversion algorithm is designed: it finds the origin shared by the three view coordinate systems and, using the view-segmentation result, transforms every graphic element into the new coordinate system. A hidden-point extraction algorithm first determines the positional relations among primitives within a view and then computes their common points, which are used later during wireframe search. The point and line data produced by these algorithms are stored in point and line linked lists, respectively.
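A toy sketch, under assumed offsets, of the device-to-view coordinate conversion: each point is translated by its view's origin (derived from the shared origin found by the algorithm), and the y-axis is flipped because device coordinates typically grow downward.

```python
import numpy as np

# Convert points from device coordinates to a per-view coordinate system.
# The shared origin and per-view offsets are hypothetical values standing in for
# the results of view segmentation.
shared_origin = np.array([400.0, 300.0])
view_offsets = {
    "front": np.array([0.0, 0.0]),
    "top":   np.array([0.0, 180.0]),
    "side":  np.array([220.0, 0.0]),
}

def device_to_view(points, view):
    origin = shared_origin + view_offsets[view]
    p = np.asarray(points, dtype=float) - origin
    p[:, 1] = -p[:, 1]                             # flip y: device y grows downward
    return p

front_points = device_to_view([[410.0, 250.0], [455.0, 250.0]], "front")
print(front_points)
```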
12.
A reconstruction algorithm for obliquely truncated quadric surfaces based on projection-feature recognition
Truncating quadric surfaces with planes is a common design technique for mechanical parts. Based on an analysis of the projection properties of obliquely truncated quadrics in three-view drawings, an algorithm for recognizing and reconstructing this class of surfaces is proposed. First, the projection traces of the truncated surface are located in the three views according to the surface's projection properties; guided by these traces, the surface type is tentatively determined, and a depth-first search completes the recognition of the obliquely truncated quadric. The geometric parameters of the untruncated surface are then computed from the 2D information of the projected edges, and the truncation curve is obtained by intersecting the untruncated surface with the cutting plane. Finally, the topology of the truncated surface is constructed and the 3D surface is generated. The algorithm can recognize and reconstruct obliquely truncated quadrics in arbitrary spatial positions, extending the coverage of reconstruction algorithms. It has been incorporated into a prototype solid-reconstruction system, and experimental results confirm its effectiveness.
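A small sketch of the "intersect the untruncated surface with the cutting plane" step for the simplest quadric, a circular cylinder cut by an oblique plane; the radius and plane coefficients are illustrative.

```python
import numpy as np

# Cylinder x^2 + y^2 = r^2 cut by the oblique plane z = a*x + b*y + c:
# substituting the cylinder parametrization gives the truncation curve
# (an ellipse in 3D) in closed form.
r, a, b, c = 2.0, 0.5, -0.3, 4.0

t = np.linspace(0.0, 2.0 * np.pi, 200)
x = r * np.cos(t)
y = r * np.sin(t)
z = a * x + b * y + c                 # points satisfy both the cylinder and the plane

truncation_curve = np.column_stack([x, y, z])
print(truncation_curve[:3])
```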
13.
During the reconstruction of 3D solids from 2D views, dashed lines in the 2D views can give rise to false edges and false faces in the reconstruction. To address this, rules are proposed to recognize and remove these false elements at an early stage of reconstruction. For a completely visible face, once the boundaries of its outer loop and inner loops have been found, every edge whose projection inside that face is a dashed line is removed.
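A minimal sketch of the rule, simplified to the outer loop only: a dashed edge whose projection lies inside the visible face's boundary is treated as a false edge and removed. The loop and edges are illustrative 2D data.

```python
import numpy as np
from matplotlib.path import Path

# Discard dashed edges whose projections fall inside the boundary of a
# completely visible face. Inner loops are omitted here for brevity.
outer_loop = Path([(0, 0), (10, 0), (10, 6), (0, 6)])      # face boundary in the view

dashed_edges = [((2, 2), (5, 2)),     # inside the face -> false edge, remove
                ((12, 1), (15, 1))]   # outside the face -> keep

def is_false_edge(edge, loop):
    midpoint = np.mean(edge, axis=0)
    return loop.contains_point(midpoint)

kept = [e for e in dashed_edges if not is_false_edge(e, outer_loop)]
print("edges kept:", kept)
```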
14.
Reconstructing solids containing toroidal surfaces from three views
Pipes and surface blends in mechanical parts are usually formed by toroidal surfaces. This work analyzes the projection properties of the torus in three-view drawings, defines the distinctive conic features appearing in its projections along different directions as its primary and secondary features, and, aided by the semantic information of engineering-drawing annotations, proposes a torus reconstruction algorithm based on surface-feature recognition. The algorithm first recognizes the torus features in the three views, then computes the surface parameters from the features, and finally constructs the topology. It is validated on examples containing pipes and surface blends.
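A tiny sketch of the "compute surface parameters from the features" step for a torus with a vertical axis, where the outer and inner silhouette circles in the top view determine the major and tube radii; the measured radii are illustrative.

```python
# For a torus with a vertical axis, the top-view silhouette is an annulus whose
# outer and inner radii are R + r and R - r, so the two measured radii determine
# the major radius R and the tube radius r.
outer_radius = 35.0      # radius of the outer silhouette circle in the top view
inner_radius = 15.0      # radius of the inner silhouette circle in the top view

R = (outer_radius + inner_radius) / 2.0   # major (center-circle) radius
r = (outer_radius - inner_radius) / 2.0   # minor (tube) radius
print("torus parameters: R =", R, ", r =", r)
```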
15.
This paper is about multi-view modeling of a rigid scene. We merge the traditional approaches of reconstructing image-extractable features and of modeling via user-provided geometry. We use features to obtain a first guess for structure and motion, fit geometric primitives, correct the structure so that reconstructed features lie exactly on geometric primitives and optimize both structure and motion in a bundle adjustment manner while enforcing the underlying constraints. We specialize this general scheme to the point features and the plane geometric primitives. The underlying geometric relationships are described by multi-coplanarity constraints. We propose a minimal parameterization of the structure enforcing these constraints and use it to devise the corresponding maximum likelihood estimator. The recovered primitives are then textured from the input images. The result is an accurate and photorealistic model.

Experimental results using simulated data confirm that the accuracy of the model using the constrained methods is of clearly superior quality compared to that of traditional methods and that our approach performs better than existing ones, for various scene configurations. In addition, we observe that the method still performs better in a number of configurations when the observed surfaces are not exactly planar. We also validate our method using real images.
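A brief sketch of enforcing a coplanarity constraint in the spirit described above (not the paper's minimal parameterization): a plane is fitted to noisy reconstructed points by SVD and the points are snapped onto it before constrained optimization.

```python
import numpy as np

# Fit a plane to reconstructed 3D points (total least squares via SVD) and
# project the points onto it so the structure exactly satisfies the plane
# primitive. Synthetic noisy points stand in for a real reconstruction.
rng = np.random.default_rng(0)
plane_pts = np.column_stack([rng.uniform(-1, 1, 100),
                             rng.uniform(-1, 1, 100),
                             np.zeros(100)])
noisy = plane_pts + 0.01 * rng.standard_normal(plane_pts.shape)

centroid = noisy.mean(axis=0)
_, _, Vt = np.linalg.svd(noisy - centroid)
normal = Vt[-1]                                   # least-variance direction = plane normal

snapped = noisy - np.outer((noisy - centroid) @ normal, normal)
print("max residual after snapping:", np.abs((snapped - centroid) @ normal).max())
```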
16.
17.
3D motion and structure reconstruction based on line optical flow fields
Using the motion correspondences between lines, the concept and definition of pixel optical flow are carried over to straight lines, giving the notion of line optical flow, and a system of linear equations for the motion parameters of an object in space is established: from the flow fields of 21 lines across three images, the 12 motion parameters of the object and the coordinates of the space lines can be recovered. In practice, however, obtaining the flow fields of these 21 lines is difficult, so the paper further proposes solving a system of nonlinear equations that requires the flow of only 6 lines. The 12 motion parameters are then recovered step by step, and from them, together with the line coordinates in a consistent image coordinate system, the coordinates of the space lines are computed, achieving reconstruction of the 3D scene.
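A generic, heavily hedged sketch of the linear stage: each line's flow is assumed to contribute one linear equation in the 12 motion parameters, and the stacked system from 21 lines is solved by least squares. The coefficient rows are random placeholders, since the paper's exact per-line equation is not reproduced here.

```python
import numpy as np

# Stack one (placeholder) linear equation per observed line flow and solve the
# overdetermined system for the 12 motion parameters by least squares.
rng = np.random.default_rng(0)
num_lines = 21
A = rng.standard_normal((num_lines, 12))      # placeholder coefficient rows
p_true = rng.standard_normal(12)              # the 12 motion parameters
b = A @ p_true

p_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("max parameter error:", np.abs(p_est - p_true).max())
```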
18.
A method for reconstructing 3D human motion from a monocular video stream is proposed and implemented. After a personalized human skeleton model and the 2D coordinates of the body joints in every frame of the video are obtained interactively, the method optimizes over single frames and over consecutive frames and solves iteratively for the optimal scale factor of each frame; the 3D coordinates of the joints are then recovered by back-calculation, reconstructing the 3D motion sequence. Experiments on video containing complex and rapidly changing human motion show that the method is simple and effective and is applicable to real video sources such as sports and film footage.
19.
Ameesh Makadia, Christopher Geyer, Kostas Daniilidis. International Journal of Computer Vision, 2007, 75(3): 311-327
We present a novel approach for the estimation of 3D-motion directly from two images using the Radon transform. The feasibility
of any camera motion is computed by integrating over all feature pairs that satisfy the epipolar constraint. This integration
is equivalent to taking the inner product of a similarity function on feature pairs with a Dirac function embedding the epipolar
constraint. The maxima in this five dimensional motion space will correspond to compatible rigid motions. The main novelty
is in the realization that the Radon transform is a filtering operator: If we assume that the similarity and Dirac functions
are defined on spheres and the epipolar constraint is a group action of rotations on spheres, then the Radon transform is
a correlation integral. We propose a new algorithm to compute this integral from the spherical Fourier transform of the similarity
and Dirac functions. Generating the similarity function now becomes a preprocessing step which reduces the complexity of the
Radon computation by a factor equal to the number of feature pairs processed. The strength of the algorithm is in avoiding
a commitment to correspondences, thus being robust to erroneous feature detection, outliers, and multiple motions.
The authors are grateful for support through the following grants: NSF-IIS-0083209, NSF-IIS-0121293, NSF-EIA-0324977, NSF-CNS-0423891,
NSF-IIS-0431070, and ARO/MURI DAAD19-02-1-0383.
The author is grateful for the generous support of the ARO MURI program (DAAD-19-02-1-0383) while at U. C. Berkeley.
20.
Tomáš Brodský, Cornelia Fermüller, Yiannis Aloimonos. International Journal of Computer Vision, 2000, 37(3): 231-258
The classic approach to structure from motion entails a clear separation between motion estimation and structure estimation and between two-dimensional (2D) and three-dimensional (3D) information. For the recovery of the rigid transformation between different views only 2D image measurements are used. To have available enough information, most existing techniques are based on the intermediate computation of optical flow which, however, poses a problem at the locations of depth discontinuities. If we knew where depth discontinuities were, we could (using a multitude of approaches based on smoothness constraints) accurately estimate flow values for image patches corresponding to smooth scene patches; but to know the discontinuities requires solving the structure from motion problem first. This paper introduces a novel approach to structure from motion which addresses the processes of smoothing, 3D motion and structure estimation in a synergistic manner. It provides an algorithm for estimating the transformation between two views obtained by either a calibrated or uncalibrated camera. The results of the estimation are then utilized to perform a reconstruction of the scene from a short sequence of images.

The technique is based on constraints on image derivatives which involve the 3D motion and shape of the scene, leading to a geometric and statistical estimation problem. The interaction between 3D motion and shape allows us to estimate the 3D motion while at the same time segmenting the scene. If we use a wrong 3D motion estimate to compute depth, we obtain a distorted version of the depth function. The distortion, however, is such that the worse the motion estimate, the more likely we are to obtain depth estimates that vary locally more than the correct ones. Since local variability of depth is due either to the existence of a discontinuity or to a wrong 3D motion estimate, being able to differentiate between these two cases provides the correct motion, which yields the least varying estimated depth as well as the image locations of scene discontinuities. We analyze the new constraints, show their relationship to the minimization of the epipolar constraint, and present experimental results using real image sequences that indicate the robustness of the method.
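A hedged sketch of the depth-variability criterion on synthetic data: for a candidate 3D motion, the rotational flow is subtracted, inverse depth is estimated from the remaining translational component, and the candidate is scored by the mean local variance of that inverse depth; a wrong translation should score higher. The flow model, scene, and candidate motions are illustrative.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Differential flow model in normalized coordinates: flow = (1/Z) * (a, b) + rotational part,
# with a = x*tz - f*tx and b = y*tz - f*ty. Inverse depth is recovered per pixel for a
# candidate motion and scored by its average local variance.
f = 1.0

def rotational_flow(x, y, w):
    u = (x * y / f) * w[0] - (f + x**2 / f) * w[1] + y * w[2]
    v = (f + y**2 / f) * w[0] - (x * y / f) * w[1] - x * w[2]
    return u, v

def translational_direction(x, y, t):
    return x * t[2] - f * t[0], y * t[2] - f * t[1]

def inverse_depth(u, v, x, y, t, w):
    u_rot, v_rot = rotational_flow(x, y, w)
    a, b = translational_direction(x, y, t)
    return ((u - u_rot) * a + (v - v_rot) * b) / (a**2 + b**2 + 1e-12)

def variability(inv_depth, k=5):
    patches = sliding_window_view(inv_depth, (k, k))
    return patches.var(axis=(-1, -2)).mean()        # mean local variance

# Synthetic smooth scene and flow generated from a known ("true") motion.
ys, xs = np.mgrid[-0.5:0.5:64j, -0.5:0.5:64j]
Z = 5.0 + xs
t_true, w_true = np.array([0.1, 0.0, 1.0]), np.array([0.0, 0.01, 0.0])
a, b = translational_direction(xs, ys, t_true)
u_rot, v_rot = rotational_flow(xs, ys, w_true)
u, v = a / Z + u_rot, b / Z + v_rot

for t_cand in (t_true, np.array([0.5, 0.2, 1.0])):  # true vs. wrong translation
    print(t_cand, "-> local depth variability:",
          variability(inverse_depth(u, v, xs, ys, t_cand, w_true)))
```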