Similar Documents
20 similar documents found; search time: 140 ms
1.
Application of a BP Neural Network to Predicting the Viscosity of Aqueous Glycerol Solutions (Cited: 2; self: 1; others: 1)
Two-dimensional data tables are widely used in scientific and engineering computation. When entries are missing, traditional interpolation between two columns produces large errors because of nonlinear relationships among the data. Taking viscosity prediction of aqueous glycerol solutions as an example, an interpolation method based on artificial neural networks is proposed, and the network model, its parameters, and the training sample set are selected through experiments and optimization. Experiments show that when a BP neural network predicts the viscosity of aqueous glycerol solutions at 25 °C, the network converges quickly and predicts accurately, outperforming traditional nearest-neighbor and bilinear interpolation and meeting the needs of general scientific and engineering computation. The proposed interpolation method generalizes to two-dimensional data tables under similar conditions.
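The abstract does not give the network architecture or training data; as a rough sketch of the idea only (a one-hidden-layer BP network fitted to a sparse table by back-propagation, then queried between the tabulated points), with the curve, layer size, and learning rate all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse table: glycerol fraction -> viscosity, rescaled to [0, 1]
# as a stand-in for the real nonlinear viscosity curve.
x = np.linspace(0.0, 1.0, 11).reshape(-1, 1)
y = np.exp(3.0 * x) / np.exp(3.0)

# One hidden layer with tanh activation, trained by plain back-propagation.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, y0 = forward(x)
loss0 = np.mean((y0 - y) ** 2)

lr = 0.3
for _ in range(3000):
    h, out = forward(x)
    err = out - y                        # gradient of MSE w.r.t. the output
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)     # back-propagate through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, y1 = forward(x)
loss1 = np.mean((y1 - y) ** 2)
# The trained network can now be evaluated between tabulated points,
# e.g. forward(np.array([[0.55]])) interpolates a missing entry.
```

The network interpolates the whole table at once, which is how it sidesteps the two-column restriction of classical interpolation the abstract criticizes.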

2.
To address the difficulty existing two-dimensional position-prediction algorithms have in reflecting the effect of terrain on prediction accuracy, a position-prediction method combining IRWQS (Incremental Repetition Weighing Queue Strategy) with fuzzy features is proposed. First, three-dimensional position coordinates obtained from the BeiDou satellite navigation system are extracted, converted, and stored in a database, whose chained operations then perform online incremental repetition-weighted queue scanning. Second, a fuzzy feature-matching algorithm selects the optimal position coordinates and yields a fairly accurate next position point and movement trend. Experimental results show that, compared with the MMTS and UCMBS algorithms, the proposed algorithm improves prediction accuracy by about 9% and 25% on average, respectively.

3.
Extended Mean Value Coordinates for Real-Time Mesh Editing (Cited: 1; self: 0; others: 1)
To overcome the difficulty of cloning elongated and complex meshes, an interactive mesh-editing method based on extended mean value coordinates is proposed. The user first selects a region of interest on the source mesh; vertical-projection parameterization maps it to a two-dimensional region, and the 2D topology of the mapped mesh vertices is stored as an image element placed at the corresponding location on the target mesh. The 3D information is then recovered from the pasted image element using extended mean value coordinate parameters and an outer boundary ring. Finally, the pasted mesh region is deformed to blend smoothly with the target mesh. Mesh copying is accelerated on the GPU. Experimental results show that the method can copy and replace arbitrary irregular mesh regions between 3D models in real time.

4.
Binocular stereo vision generally recovers the 3D coordinates of a point by optimization, and different objective functions lead to different optimal solutions. A new method for computing 3D coordinates is derived under an alternative objective function. Compared with traditional least squares and normalized least squares, the new method computes the coordinates via singular value decomposition (SVD), avoiding matrix inversion. Experimental analysis shows that the 3D-coordinate errors of the new method are close to those of traditional least squares and normalized least squares, verifying its feasibility and correctness.
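The abstract does not spell out the linear system, so as a generic sketch of the key step only (solving an overdetermined system Ax ≈ b through the SVD instead of forming and inverting AᵀA), with made-up data standing in for the triangulation design matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 3))   # stand-in for the triangulation design matrix
b = rng.normal(size=6)        # stand-in measurement vector

# Normal-equations route: requires explicitly inverting A^T A.
x_inv = np.linalg.inv(A.T @ A) @ (A.T @ b)

# SVD route: x = V diag(1/s) U^T b, no explicit matrix inversion.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)
```

Both routes give the same least-squares solution on well-conditioned data; the SVD route is the numerically safer one the abstract advocates.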

5.
Principles and Implementation of a New Driving Simulator (Cited: 2; self: 0; others: 2)
The driving simulator described in this paper uses a simplified vehicle kinematics model to compute the vehicle's speed, orientation, velocity direction, and current coordinates; transforms the 3D scene imagery onto a 2D display device; and simulates the cockpit and vehicle motion with a two-level distributed computer system.
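The paper's kinematic model is not given in the abstract; a minimal sketch of one such simplified planar model (heading integrated from yaw rate, position from speed and heading; the function name, step size, and inputs are all assumptions):

```python
import math

def step(x, y, heading, speed, yaw_rate, dt):
    """Advance a simplified planar vehicle model by one time step (Euler)."""
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Drive straight east at 10 m/s for 2 s in 0.01 s steps.
x = y = heading = 0.0
for _ in range(200):
    x, y, heading = step(x, y, heading, speed=10.0, yaw_rate=0.0, dt=0.01)
```

The same state (position plus heading) is what the simulator would feed to its view transformation each frame.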

6.
To improve the accuracy and realism of capsule-endoscope observation, a method for 3D reconstruction of the gastrointestinal tract from capsule-endoscope image sequences is proposed. First, the SIFT algorithm extracts as many corresponding feature points as possible from two consecutive images, and the 2D coordinates of each feature point on the image plane are computed. The eight-point algorithm then estimates the rotation matrix and translation vector of the capsule's motion, from which the relative and world 3D coordinates of each feature point are computed. Finally, Delaunay triangulation meshes the 3D points to complete the reconstruction of the scene. Experiments show a depth error below 1 mm when the camera is within 100 mm of the measured point, and a relative error within 3% at distances up to 250 mm, demonstrating the feasibility of the algorithm.
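The abstract names the eight-point algorithm without details; a bare-bones sketch of its core (estimating the epipolar matrix as the null vector of the correspondence matrix, on noise-free synthetic data; the scene, motion, and point count here are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic scene: random 3D points in front of two poses of a unit camera.
X = rng.uniform(-1, 1, (12, 3)) + np.array([0.0, 0.0, 4.0])
angle = 0.1
R = np.array([[np.cos(angle), 0, np.sin(angle)],
              [0, 1, 0],
              [-np.sin(angle), 0, np.cos(angle)]])
t = np.array([0.5, 0.1, 0.0])

x1 = X[:, :2] / X[:, 2:]                 # view 1, normalized coordinates
X2 = X @ R.T + t
x2 = X2[:, :2] / X2[:, 2:]               # view 2

# Eight-point algorithm: each correspondence gives one row of A with A f = 0,
# f being the row-major stacking of the 3x3 epipolar matrix F.
u1, v1 = x1[:, 0], x1[:, 1]
u2, v2 = x2[:, 0], x2[:, 1]
A = np.stack([u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1,
              np.ones(len(X))], axis=1)
_, _, Vt = np.linalg.svd(A)
F = Vt[-1].reshape(3, 3)                 # null vector of A, unit norm

# Epipolar residuals x2^T F x1 vanish for noise-free correspondences.
h1 = np.column_stack([x1, np.ones(len(X))])
h2 = np.column_stack([x2, np.ones(len(X))])
residuals = np.einsum('ij,jk,ik->i', h2, F, h1)
```

The rotation and translation the abstract mentions are then decomposed from this matrix; that step (and the rank-2 enforcement needed with noisy data) is omitted here.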

7.
张乐珊, 陈戈, 韩勇, 张涛. 《计算机应用》 (Journal of Computer Applications), 2010, 30(8): 2070-2072
By extending the traditional 2D box-counting algorithm to 3D space, a box-counting method in three dimensions is proposed. The fractal dimension of a city is computed with both the 3D and the 2D box-counting algorithms; comparison of the results shows that urban spatial structure is also fractal in the third dimension, proving that representing the 3D fractal dimension with a 2D algorithm, or simply as the 2D dimension plus one, as in traditional urban fractal analysis, is inaccurate. A correct method for computing urban fractal dimension is then given.
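As a toy illustration of 3D box counting (run here on a solid cube of sample points, whose dimension should come out close to 3; the point set and scales are invented, not the paper's urban data):

```python
import numpy as np

# A solid cube sampled on a regular grid: its box-counting dimension is 3.
g = (np.arange(32) + 0.5) / 32.0
pts = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)

def box_count(points, n):
    """Number of occupied boxes in an n x n x n covering of the unit cube."""
    idx = np.floor(points * n).astype(int)
    return len({tuple(i) for i in idx})

sizes = np.array([2, 4, 8])              # boxes per side at each scale
counts = np.array([box_count(pts, n) for n in sizes])

# Slope of log N versus log(boxes per side) estimates the box-counting dimension.
dim = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
```

Replacing `pts` with a 3D city model's occupied voxels gives the kind of measurement the paper compares against the 2D estimate.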

8.
Three-dimensional human animation has broad application prospects and practical value. A method for generating 3D human animation driven by 2D video is proposed. A dynamic-frame keyframe-extraction algorithm builds a set of 2D keyframes from the video; a 2D human skeleton model is constructed from the keyframes; and the depth coordinates of the joint feature points are computed using the pinhole imaging principle and the Pythagorean theorem, yielding 3D animation data. Experimental results show that the generated 3D human animation is lifelike and inexpensive, improves the real-time performance of motion generation, and can be applied to virtual reality, computer games, and 3D video game production.

9.
This paper introduces the Fourier split-step method for the two-dimensional parabolic equation and proposes a quasi-3D approach based on the 2D method, decomposing a 3D shape into two 2D figures. Radio-wave propagation over a cone is first solved in 2D for the front view (a triangle) and the top view (a circle); the results are validated against reference [5] and an HFSS model, respectively. The approach covers a wider range of problems than purely 2D analysis while remaining simpler than solving the full 3D formulation directly.

10.
An automatic system for measuring the volume and mass of coal piles is proposed. It combines image-processing-based volume measurement with BP-neural-network-based mass estimation. A laser scanner is mounted on the reclaimer, and controlling the reclaimer's travel, slewing, and pitching lets the scanning plane cover the entire coal pile. The relative coordinates acquired by the scanner are resolved into absolute coordinates in the stockyard, producing a 3D point cloud of the pile. The point cloud is linearly mapped to a 2D grayscale image, image-processing algorithms extract pile information at the image level, and finally a BP neural network estimates the pile's mass.
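The point-cloud-to-image mapping is described only as "linear"; one plausible sketch (heights binned onto a grid, keeping the highest point per cell, then scaled to 0-255; the grid size, yard extent, and pile shape are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical pile point cloud: (x, y) in a 10 m x 10 m yard, z the height
# of a Gaussian-shaped mound.
xy = rng.uniform(0, 10, (5000, 2))
z = 3.0 * np.exp(-((xy[:, 0] - 5) ** 2 + (xy[:, 1] - 5) ** 2) / 8.0)

H, W = 64, 64
img = np.zeros((H, W))
ix = np.minimum((xy[:, 0] / 10 * W).astype(int), W - 1)
iy = np.minimum((xy[:, 1] / 10 * H).astype(int), H - 1)
np.maximum.at(img, (iy, ix), z)          # keep the highest point per cell

# Linear map of height to an 8-bit gray level.
gray = np.round(img / img.max() * 255).astype(np.uint8)
```

The resulting `gray` image is the kind of intermediate on which ordinary 2D image-processing algorithms, and ultimately the volume integration, can operate.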

11.
In recent years, the convergence of computer vision and computer graphics has put forth a new field of research that focuses on the reconstruction of real-world scenes from video streams. To make immersive 3D video a reality, the whole pipeline spanning from scene acquisition over 3D video reconstruction to real-time rendering needs to be researched. In this paper, we describe the latest advancements of our system to record, reconstruct and render free-viewpoint videos of human actors. We apply a silhouette-based non-intrusive motion capture algorithm making use of a 3D human body model to estimate the actor's parameters of motion from multi-view video streams. A renderer plays back the acquired motion sequence in real-time from any arbitrary perspective. Photo-realistic physical appearance of the moving actor is obtained by generating time-varying multi-view textures from video. This work shows how the motion capture sub-system can be enhanced by incorporating texture information from the input video streams into the tracking process. 3D motion fields are reconstructed from optical flow and used in combination with silhouette matching to estimate pose parameters. We demonstrate that a high visual quality can be achieved with the proposed approach and validate the enhancements brought by the motion field step.

12.
Observability of 3D Motion (Cited: 2; self: 2; others: 0)
This paper examines the inherent difficulties in observing 3D rigid motion from image sequences. It does so without considering a particular estimator. Instead, it presents a statistical analysis of all the possible computational models which can be used for estimating 3D motion from an image sequence. These computational models are classified according to the mathematical constraints that they employ and the characteristics of the imaging sensor (restricted field of view and full field of view). Regarding the mathematical constraints, there exist two principles relating a sequence of images taken by a moving camera. One is the epipolar constraint, applied to motion fields, and the other the positive depth constraint, applied to normal flow fields. 3D motion estimation amounts to optimizing these constraints over the image. A statistical modeling of these constraints leads to functions which are studied with regard to their topographic structure, specifically as regards the errors in the 3D motion parameters at the places representing the minima of the functions. For conventional video cameras possessing a restricted field of view, the analysis shows that for algorithms in both classes which estimate all motion parameters simultaneously, the obtained solution has an error such that the projections of the translational and rotational errors on the image plane are perpendicular to each other. Furthermore, the estimated projection of the translation on the image lies on a line through the origin and the projection of the real translation. The situation is different for a camera with a full (360 degree) field of view (achieved by a panoramic sensor or by a system of conventional cameras). In this case, at the locations of the minima of the above two functions, either the translational or the rotational error becomes zero, while in the case of a restricted field of view both errors are non-zero. 
Although some ambiguities still remain in the full field of view case, the implication is that visual navigation tasks, such as visual servoing, involving 3D motion estimation are easier to solve by employing panoramic vision. Also, the analysis makes it possible to compare properties of algorithms that first estimate the translation and on the basis of the translational result estimate the rotation, algorithms that do the opposite, and algorithms that estimate all motion parameters simultaneously, thus providing a sound framework for the observability of 3D motion. Finally, the introduced framework points to new avenues for studying the stability of image-based servoing schemes.

13.
A method is discussed for 3D scene reconstruction with a pinhole camera moving along its optical axis. Based on the characteristics of axial camera motion, the method finds the scale factor between images, thereby solving feature matching under axial motion; Sturm's camera self-calibration method yields the camera's intrinsic and extrinsic parameters, achieving 3D reconstruction of the scene under motion along the optical axis.

14.
A New Method for Detecting Moving Objects in Image Sequences of Dynamic Scenes (Cited: 1; self: 0; others: 1)
When detecting moving objects in image sequences of dynamic scenes, eliminating the global inter-frame motion caused by camera movement, so that the static background and the moving objects can be segmented, is a difficult problem that must be solved. For dynamic scenes with complex backgrounds, a new method is presented for discriminating the background and detecting moving objects based on recovering the 3D positions of scene reference points. First, a layered motion model for image sequences and a motion-segmentation method based on it are introduced. Then the estimated projection matrices are used to compute the 3D positions of the reference points of each motion layer; from how the recovered 3D position of the same scene element varies across frames, the motion layers corresponding to the static background and to moving objects are distinguished, thereby segmenting the two. Finally, a detailed algorithm for detecting moving objects in dynamic-scene image sequences is given. Experimental results show that the new algorithm handles sequences with multiple sets of global inter-frame motion parameters well and substantially improves the effectiveness and robustness of moving-object tracking.

15.
3D model animation is important in digital design and applications and is attracting growing research attention, but faithfully reproducing ethnic dance performances through 3D digitization is highly challenging. This paper digitizes dance for display by capturing dance movements with motion-capture technology. Specifically, motion-capture equipment records human motion data; character modeling, skeleton binding, skinning, and weight adjustment are performed in Maya; the 3D model is combined with the motion-capture data in MotionBuilder; and a virtual-human performance of the real dance is produced. A virtual stage for ethnic dance performance is built, with dances of 13 ethnic groups as digitized content, to promote the application of motion-capture-driven dance performance.

16.
Developable surfaces have been extensively studied in computer graphics because they are involved in a large body of applications. This type of surfaces has also been used in computer vision and document processing in the context of three‐dimensional (3D) reconstruction for book digitization and augmented reality. Indeed, the shape of a smoothly deformed piece of paper can be very well modeled by a developable surface. Most of the existing developable surface parameterizations do not handle boundaries or are driven by overly large parameter sets. These two characteristics become issues in the context of developable surface reconstruction from real observations. Our main contribution is a generative model of bounded developable surfaces that solves these two issues. Our model is governed by intuitive parameters whose number depends on the actual deformation and including the “flat shape boundary”. A vast majority of the existing image‐based paper 3D reconstruction methods either require a tightly controlled environment or restricts the set of possible deformations. We propose an algorithm for reconstructing our model's parameters from a general smooth 3D surface interpolating a sparse cloud of 3D points. The latter is assumed to be reconstructed from images of a static piece of paper or any other developable surface. Our 3D reconstruction method is well adapted to the use of keypoint matches over multiple images. In this context, the initial 3D point cloud is reconstructed by structure‐from‐motion for which mature and reliable algorithms now exist and the thin‐plate spline is used as a general smooth surface model. After initialization, our model's parameters are refined with model‐based bundle adjustment. We experimentally validated our model and 3D reconstruction algorithm for shape capture and augmented reality on seven real datasets. The first six datasets consist of multiple images or videos and a sparse set of 3D points obtained by structure‐from‐motion. 
The last dataset is a dense 3D point cloud acquired by structured light. Our implementation has been made publicly available on the authors' web home pages. Copyright © 2012 John Wiley & Sons, Ltd.

17.
In this paper, we aim to reconstruct the 3D motion parameters of a human body model from the known 2D positions of a reduced set of joints in the image plane. Towards this end, an action-specific motion model is trained from a database of real motion-captured performances, and used within a particle filtering framework as a priori knowledge on human motion. First, our dynamic model guides the particles according to similar situations previously learnt. Then, the state space is constrained so only feasible human postures are accepted as valid solutions at each time step. As a result, we are able to track the 3D configuration of the full human body from several cycles of walking motion sequences using only the 2D positions of a very reduced set of joints from lateral or frontal viewpoints.

18.
The classic approach to structure from motion entails a clear separation between motion estimation and structure estimation and between two-dimensional (2D) and three-dimensional (3D) information. For the recovery of the rigid transformation between different views only 2D image measurements are used. To have available enough information, most existing techniques are based on the intermediate computation of optical flow which, however, poses a problem at the locations of depth discontinuities. If we knew where depth discontinuities were, we could (using a multitude of approaches based on smoothness constraints) accurately estimate flow values for image patches corresponding to smooth scene patches; but to know the discontinuities requires solving the structure from motion problem first. This paper introduces a novel approach to structure from motion which addresses the processes of smoothing, 3D motion and structure estimation in a synergistic manner. It provides an algorithm for estimating the transformation between two views obtained by either a calibrated or uncalibrated camera. The results of the estimation are then utilized to perform a reconstruction of the scene from a short sequence of images.The technique is based on constraints on image derivatives which involve the 3D motion and shape of the scene, leading to a geometric and statistical estimation problem. The interaction between 3D motion and shape allows us to estimate the 3D motion while at the same time segmenting the scene. If we use a wrong 3D motion estimate to compute depth, we obtain a distorted version of the depth function. The distortion, however, is such that the worse the motion estimate, the more likely we are to obtain depth estimates that vary locally more than the correct ones. 
Since local variability of depth is due either to the existence of a discontinuity or to a wrong 3D motion estimate, being able to differentiate between these two cases provides the correct motion, which yields the least varying estimated depth as well as the image locations of scene discontinuities. We analyze the new constraints, show their relationship to the minimization of the epipolar constraint, and present experimental results using real image sequences that indicate the robustness of the method.

19.
A 3D motion-capture method is proposed for flexible objects bearing point features. First, two calibrated high-speed cameras record video of the moving object, and the images are stereo-rectified. The DOG (Difference of Gaussians) algorithm then locates the point features and extracts feature-point extrema; matching pairs are searched within a window of fixed extent to match feature points between the left and right images; 3D reconstruction is performed by triangulation; and finally a search strategy matches features over the time sequence, achieving 3D motion capture of the dynamic flexible object and computing its spatial coordinates, velocity, and acceleration. Experimental results show higher accuracy than capturing the flexible object by matching feature points with the SIFT algorithm.
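The triangulation step for a rectified stereo pair is standard; a minimal sketch (depth from disparity, Z = f·B/d, with the focal length, baseline, and match coordinates all made up for illustration):

```python
def triangulate(xl, yl, xr, f, baseline):
    """3D position of a rectified stereo match.

    xl, yl: pixel coordinates in the left image (principal point at 0, 0);
    xr: x coordinate of the match in the right image;
    f: focal length in pixels; baseline: camera separation in metres.
    """
    disparity = xl - xr
    Z = f * baseline / disparity         # depth from similar triangles
    X = xl * Z / f                       # back-project through the pinhole
    Y = yl * Z / f
    return X, Y, Z

# A marker 2 m away seen by cameras 0.1 m apart with f = 800 px:
# the true disparity is f * B / Z = 800 * 0.1 / 2.0 = 40 px.
X, Y, Z = triangulate(xl=120.0, yl=40.0, xr=80.0, f=800.0, baseline=0.1)
```

Repeating this per matched feature and per frame, then differencing positions over time, gives the velocity and acceleration estimates the abstract mentions.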

20.
3D Motion and Structure Reconstruction from Line Optical Flow (Cited: 2; self: 0; others: 2)
Using motion correspondences between straight lines, the concept and definition of pixel optical flow are extended to lines, and the notion of line optical flow is introduced. A system of linear equations for the motion parameters of an object in space is established: from the optical flow of 21 lines over three images, the object's 12 motion parameters and the coordinates of the space lines can be recovered. In practice, however, finding the optical flow of 21 lines is difficult, so this paper proposes solving a system of nonlinear equations instead, which requires the flow of only 6 lines to recover the 12 motion parameters step by step. The coordinates of the space lines are then obtained from the recovered parameters and consistent line coordinates in the image coordinate system, achieving 3D scene reconstruction.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号