Similar Literature: 20 similar records found
1.
This paper first describes the principles of visual psychology and then studies in detail a predictive texture rendering algorithm based on those principles. The algorithm can generate virtual scenes that are consistent with human visual perception.

2.
Development of a binocular vision calibration program based on OpenCV   (Cited by: 1; self-citations: 0; citations by others: 1)
This paper analyzes the principle of camera calibration based on a 2D target and the calibration of the cameras in a binocular stereo vision system, gives the detailed processing flow of a camera calibration algorithm built on the open-source vision library OpenCV, and implements a complete camera calibration program that can be ported to embedded systems.
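As an illustration of the kind of OpenCV-based calibration flow the abstract refers to, the Python sketch below calibrates a single camera from chessboard images; the board size, image paths, and parameters are assumptions for the example, not details taken from the paper.

```python
# Illustrative sketch of 2D-target (chessboard) camera calibration with OpenCV.
# Board size and image paths are assumptions for the example.
import glob
import cv2
import numpy as np

board_size = (9, 6)                      # inner corners of the assumed chessboard
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/left_*.png"):   # hypothetical calibration image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics and distortion for one camera of the stereo rig.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
```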

3.
Virtual 3D space is a digitized 3D space of the real world, while human stereoscopic visual space is the 3D construct that the human visual system forms of the real or a virtual world. Traditionally the human eye observes the real world directly, which establishes a geometric correspondence between stereoscopic visual space and the real world. Does a corresponding geometric relation also exist between stereoscopic visual space and virtual 3D space? Taking human stereoscopic vision and virtual 3D scenes as the objects of study, this paper discusses the geometric model of human stereoscopic vision and the representation of the visual 3D model according to the principle of binocular parallax. By analyzing the intrinsic geometric relations among 3D points in virtual space, screen parallax, and retinal parallax, a geometric mapping between virtual space and visual space is established using matrix algebra. This mapping shows a one-to-one correspondence between the visual 3D model and the virtual 3D model, and reflects the measurability of the human visual system in virtual space. The novelty of this work is a complete representation of the visual 3D model, which goes beyond the traditional qualitative perception of stereoscopic display and provides a basis for quantitative analysis. The work offers theoretical reference value for stereoscopic experience, virtual interaction, and stereoscopic measurement in virtual space.
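For reference, the standard parallel-axis binocular parallax relation linking depth, baseline, and disparity is sketched below; this is the textbook form, not necessarily the exact mapping derived in the cited paper.

```latex
% Standard parallel-axis binocular parallax relation (illustrative): a scene
% point at depth Z, viewed by two eyes/cameras with baseline B and viewing
% (focal) distance f, projects with horizontal disparity d = x_l - x_r, so
\[
  d \;=\; x_l - x_r \;=\; \frac{f\,B}{Z}
  \qquad\Longleftrightarrow\qquad
  Z \;=\; \frac{f\,B}{d}.
\]
```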

4.
The SIFT feature matching traditionally used to automatically initialize the camera pose is computationally heavy and rather complex. Building on a virtual visual servoing (VVS) tracking algorithm, this paper applies the scale-invariant feature transform (SIFT) to extract feature vectors, performs feature matching with a KD-tree-based nearest-neighbor search, and finally computes the initial camera pose with a linear pose algorithm, reducing the time complexity and improving the efficiency of the algorithm.
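A minimal sketch of SIFT extraction and KD-tree (FLANN) nearest-neighbor matching in OpenCV is given below; the image inputs and ratio-test threshold are assumptions, and the linear pose computation step is not shown.

```python
# Illustrative SIFT + KD-tree (FLANN) matching sketch with OpenCV; image paths
# are assumptions, and SIFT availability depends on the OpenCV build.
import cv2

img1 = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)      # hypothetical inputs
img2 = cv2.imread("keyframe.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN with a KD-tree index performs approximate nearest-neighbor search
# on the SIFT descriptor vectors.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps only distinctive matches.
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(len(good), "good matches")
```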

5.
孙敏  顾耀林 《计算机应用》2008,28(7):1675-1677
Automatically setting the initial camera parameters has become a hot topic in tracking for augmented reality. Building on a virtual visual servoing (VVS) tracking algorithm, this paper applies Randomized Tree classification, treating feature point matching as a classification problem, and then computes the initial camera pose with a linear pose algorithm. Experiments verify the speed, accuracy, and effectiveness of the method.

6.
A ranging system based on parallel binocular stereo vision   (Cited by: 1; self-citations: 0; citations by others: 1)
刘盼  王金海 《计算机应用》2012,32(Z2):162-164
On the theoretical basis of computer vision, an experimental platform for a parallel binocular stereo vision ranging system was built on a PC. The traditional calibration method was improved upon previous work to calibrate the binocular cameras, and stereo matching was implemented with the scale-invariant feature transform (SIFT), a popular but challenging feature matching algorithm, to recover the depth information of objects. Experiments and calculations verify the correctness of the theoretical analysis and the practical feasibility of the method.
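As a worked illustration of depth recovery for a parallel (rectified) rig, the snippet below applies the standard Z = fB/d relation to one matched point pair; the numbers are invented for the example and are not taken from the cited platform.

```python
# Minimal depth recovery for a parallel (rectified) stereo rig; the values are
# example assumptions, not parameters from the cited system.
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Z = f * B / d for a rectified stereo pair (d in pixels, Z in metres)."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / d

# Example: matched SIFT keypoints at x=412 px (left) and x=380 px (right),
# focal length 800 px, baseline 0.12 m  ->  Z = 800 * 0.12 / 32 = 3.0 m.
print(depth_from_disparity(412, 380, focal_px=800, baseline_m=0.12))
```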

7.
This paper proposes a face recognition algorithm based on stereo vision and describes its system implementation. On top of traditional 2D face recognition, two cameras are arranged according to the principle of human binocular vision to extract 3D information of the face as an additional discriminative cue, improving accuracy and robustness. For the video frames from the left and right cameras, point correspondences between the two images are found using correlation coefficients and geometric constraints, which yields the positional difference of the projections of the same 3D point in the two camera images and thereby encodes depth. This depth data, unavailable in 2D images, forms a discriminative basis that differs from traditional face recognition. The paper also presents the system architecture and the final experimental results; tests in complex environments show that 3D face recognition offers better robustness and a higher recognition rate.
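A generic OpenCV block-matching sketch for dense disparity is shown below as a stand-in for the correlation-based correspondence step the abstract describes; it is not the authors' implementation, and the image pair is hypothetical.

```python
# Dense disparity as a rough stand-in for correlation-based correspondence
# (OpenCV block matching, not the authors' method).
import cv2

left = cv2.imread("face_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("face_right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point output
# Larger disparity = closer surface; the disparity map encodes the face's depth.
```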

8.
张霞  葛芦生 《微机发展》2007,17(3):44-47
In applied vision measurement systems, calibration of the vision sensor, i.e., the CCD camera, is a necessary step. Most current calibration algorithms require the image center point to be given in advance; this paper discusses how to obtain this parameter. Several commonly used methods for calibrating the image center are introduced, with emphasis on their principles and implementation procedures, and they are compared comprehensively in terms of the complexity of the underlying principle, the precision required of the experimental apparatus, and the difficulty of operation. The corresponding experimental results and conclusions are given. For vision measurement systems and applications with different camera calibration accuracy requirements, the most suitable method for determining the image center can be chosen according to the experimental conditions and requirements.

9.
Research on a monocular vision recognition algorithm based on the fusion of SIFT and Hu features   (Cited by: 1; self-citations: 0; citations by others: 1)
This paper studies the application of machine vision to 3D object recognition and localization. A camera calibration interface was built with Visual C++ to calibrate the camera quickly. A fusion algorithm combining SIFT features and Hu invariant moments is proposed, merging local and global features: the global features provide coarse matching and localization from the overall information of the object's image, while the local features allow more accurate matching within the global result. The algorithm is robust to scaling, rotation, and translation. Experimental results show that the vision algorithm effectively solves the 3D object matching problem and improves the recognition speed and efficiency of the system, meeting the goal of object recognition.
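The Hu invariant moments used as the global descriptor can be computed directly with OpenCV, as in the short sketch below; the input image and threshold are assumptions for illustration.

```python
# Illustrative Hu-moment computation with OpenCV (the global descriptor the
# abstract fuses with SIFT); image path and threshold are assumptions.
import cv2

img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

moments = cv2.moments(binary)
hu = cv2.HuMoments(moments).flatten()   # 7 invariants: robust to scale/rotation/translation
print(hu)
```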

10.
A moving-target tracking system based on a binocular vision model   (Cited by: 2; self-citations: 0; citations by others: 2)
To enable a robot to track a target autonomously and intelligently, a binocular vision model is built on bionic principles and a four-degree-of-freedom humanoid eye-neck tracking system is designed. Through the coordination of a host computer and a lower-level controller, together with two cameras and four DC motors, a binocular moving-target tracking system is realized. The binocular cameras acquire images of the target; the PC extracts target features and, after a series of processing steps, computes the target's range of motion. The motion data are then sent to the lower-level controller, which smoothly regulates the direction and speed of the DC motors with a PID control algorithm, achieving good tracking performance for moving targets.
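A minimal PID sketch of the kind of motor control loop the abstract mentions is shown below; the gains, time step, and error signal are assumptions for illustration only, not values from the described system.

```python
# Minimal PID controller sketch for motor speed/direction commands; gains and
# the motor interface are assumptions.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pan_pid = PID(kp=0.8, ki=0.05, kd=0.1)
# error = horizontal offset (pixels) of the target from the image centre
command = pan_pid.update(error=42.0, dt=0.033)   # motor velocity command
```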

11.
This paper explores a robust region-based general framework for discriminating between background and foreground objects within a complex video sequence. The proposed framework works under difficult conditions such as dynamic background and a nominally moving camera. The originality of this work lies essentially in our use of the semantic information provided by the regions while simultaneously identifying novel objects (foreground) and non-novel ones (background). The information of background regions is exploited to make moving object detection more efficient, and vice versa. In fact, an initial panoramic background is modeled using region-based mosaicing in order to be sufficiently robust to noise from lighting effects and shadowing by foreground objects. After the elimination of the camera movement using motion compensation, the resulting panoramic image should essentially contain the background and the ghost-like traces of the moving objects. Then, by comparing the panoramic image of the background with the individual frames, a simple median-based background subtraction permits a rough identification of foreground objects. Joint background-foreground validation, based on region segmentation, is then used for a further examination of individual foreground pixels, intended to eliminate false positives and to localize shadow effects. Thus, we first obtain a foreground mask from a slow-adapting algorithm, and then validate foreground pixels (moving visual objects + shadows) by a simple moving object model built using both background and foreground regions. Tests carried out on various well-known challenging real videos (across a variety of domains) clearly show the robustness of the suggested solution. This solution, which is relatively computationally inexpensive, can be used under difficult conditions such as dynamic background, a nominally moving camera, and shadows. In addition to the visual evaluation, spatial evaluation statistics, given hand-labeled ground truth, have been used as a performance measure of moving visual object detection.
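A minimal median-background subtraction sketch in the spirit of the rough foreground identification step described above is given below; the video source, number of sampled frames, and threshold are assumptions, and the mosaicing and region-based validation steps are not shown.

```python
# Minimal median-background subtraction sketch; frame source and threshold
# are assumptions for the example.
import cv2
import numpy as np

cap = cv2.VideoCapture("scene.mp4")                  # hypothetical video
frames = []
for _ in range(50):                                   # sample frames for the model
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

background = np.median(np.stack(frames), axis=0).astype(np.uint8)

# Rough foreground mask: pixels far from the median background.
diff = cv2.absdiff(frames[-1], background)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
```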

12.
We propose an on-line algorithm to segment foreground from background in videos captured by a moving camera. In our algorithm, temporal model propagation and spatial model composition are combined to generate foreground and background models, and likelihood maps are computed based on the models. After that, an energy minimization technique is applied to the likelihood maps for segmentation. In the temporal step, block-wise models are transferred from the previous frame using motion information, and pixel-wise foreground/background likelihoods and labels in the current frame are estimated using the models. In the spatial step, another set of block-wise foreground/background models is constructed based on the models and labels given by the temporal step, and the corresponding per-pixel likelihoods are also generated. A graph-cut algorithm performs segmentation based on the foreground/background likelihood maps, and the segmentation result is employed to update the motion of each segment in a block; the temporal model propagation and spatial model composition steps are re-evaluated based on the updated motions, which implements the iterative procedure. We tested our framework on various challenging videos involving large camera and object motions, significant background changes, and clutter.

13.
Effective and efficient background subtraction is important to a number of computer vision tasks. We introduce several new techniques to address key challenges of background modeling with a Gaussian mixture model (GMM) for moving object detection in video acquired by a static camera. The novel features of our proposed model are that it automatically learns the dynamics of a scene and adapts its parameters accordingly, suppresses ghosts in the foreground mask using a SURF feature matching algorithm, and introduces a new spatio-temporal filter to further refine the foreground detection results. Detection of abrupt illumination changes in the scene is handled by a model-shifting scheme that reuses already learned models, and the spatio-temporal history of foreground blobs is used to detect and handle paused objects. The proposed model is rigorously tested and compared with several previous models and shows significant performance improvements.
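For context, the snippet below shows OpenCV's standard mixture-of-Gaussians background subtractor; it illustrates the generic GMM approach only and does not include the paper's SURF-based ghost suppression or spatio-temporal filtering. The video path is an assumption.

```python
# Standard OpenCV GMM background subtractor, shown as a generic illustration
# of the mixture-of-Gaussians approach (not the authors' extended model).
import cv2

cap = cv2.VideoCapture("traffic.mp4")               # hypothetical static-camera video
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                          detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog2.apply(frame)        # 255 = foreground, 127 = shadow, 0 = background
```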

14.
Falls have been reported as the leading cause of injury-related visits to emergency departments and the primary cause of accidental deaths in the elderly. Thus, the development of robust home surveillance systems is of great importance. In this article, such a system is presented that addresses the fall detection problem through visual cues. The proposed methodology utilizes a fast, real-time background subtraction algorithm, based on motion information in the scene and pixel intensity, capable of operating properly under dynamically changing visual conditions, to detect the foreground object. At the same time, it exploits 3D-space measurements, obtained through automatic camera calibration, to increase the robustness of the fall detection algorithm, which is based on a semi-supervised learning approach. The system uses a single monocular camera and is characterized by minimal computational cost and memory requirements, which make it suitable for real-time, large-scale implementations.

15.
Moving object detection in video sequences is an important research topic in computer vision. Background subtraction is an effective approach, but camera jitter severely disturbs background extraction, which may distort the models used by traditional model-based image processing methods. This paper proposes a data-driven background image update control algorithm for foreground extraction under camera jitter. First, Harris feature detection is used for background compensation to eliminate the disturbance caused by jitter. Then, a model-free adaptive control method is used to build a single-input single-output control system that represents the background image and updates it in real time. Finally, background subtraction is applied to extract the foreground images of moving objects. The method is compared with traditional model-based methods in simulations on different video sequences. Experimental results show that it handles moving object detection under camera jitter effectively, and the separated foreground images are closer to the real scene.
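An illustrative sketch of Harris-corner-based background (jitter) compensation followed by frame differencing is given below; it shows the general idea only, not the paper's model-free adaptive control scheme, and the input frames are assumptions.

```python
# Illustrative background (jitter) compensation via Harris corners and optical
# flow; a generic sketch, not the paper's control scheme.
import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)     # hypothetical frames
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Harris-scored corners in the previous frame, tracked into the current one.
pts0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                               minDistance=8, useHarrisDetector=True)
pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts0, None)
good0, good1 = pts0[status.ravel() == 1], pts1[status.ravel() == 1]

# Estimate the global (jitter) motion and warp the previous frame onto the
# current one, so that differencing sees only the true foreground.
M, _ = cv2.estimateAffinePartial2D(good0, good1, method=cv2.RANSAC)
stabilized_prev = cv2.warpAffine(prev, M, (curr.shape[1], curr.shape[0]))
diff = cv2.absdiff(curr, stabilized_prev)
```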

16.
For the case where the camera moves parallel to the scene and also shakes, this paper proposes converting video inpainting under a moving camera into inpainting under a static camera. The algorithm uses video stitching based on SURF feature point extraction to convert the input video into a new sequence with a static background; foreground inpainting and background inpainting are performed separately in the new video, and the repaired video is then mapped back to the original. In foreground inpainting, the target patch is defined as a rectangular block that covers the entire moving object. In background inpainting, the region to be repaired is labeled and repaired uniformly. Experimental results show that the algorithm improves the accuracy and speed of inpainting and helps preserve the continuity of the video.

17.
High-quality video editing usually requires accurate layer separation in order to resolve occlusions. However, most of the existing bilayer segmentation algorithms require either considerable user intervention or a simple stationary camera configuration with a known background, which is difficult to meet for many real-world online applications. This paper demonstrates that various visually appealing montage effects can be created online from a live video captured by a rotating camera, by accurately retrieving the camera state and segmenting out the dynamic foreground. The key contribution is a novel fast bilayer segmentation method that can effectively extract the dynamic foreground under a rotational camera configuration, and is robust to imperfect background estimation and complex background colors. Our system can create a variety of live visual effects, including but not limited to realistic virtual object insertion, background substitution and blurring, non-photorealistic rendering, and camouflage effects. A variety of challenging examples demonstrate the effectiveness of our method.

18.
Moving object segmentation in video is a fundamental problem in computer vision and video processing. Accurately segmenting moving objects in dynamic scenes with global camera motion remains a difficult and active problem. This paper proposes a moving object segmentation algorithm for dynamic scenes based on global motion compensation and kernel density estimation. First, a match-weighted global motion estimation and compensation algorithm is proposed to eliminate the influence of background motion on object segmentation in dynamic scenes. Then, non-parametric kernel density estimation is used to estimate, for each pixel, the probability densities of belonging to the foreground and to the background; the segmentation result is obtained by comparing the two probabilities and applying morphological processing. Experimental results show that the method is simple to implement and effectively improves the accuracy of moving object segmentation in dynamic scenes.
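A toy example of the per-pixel kernel-density likelihood comparison is sketched below; the sample intensities and bandwidth are invented for illustration, and the actual method operates on motion-compensated video frames rather than hand-picked samples.

```python
# Toy Gaussian-kernel density comparison for a single pixel's intensity,
# illustrating the foreground/background likelihood test; sample values and
# bandwidth are made up for the example.
import numpy as np

def kde_likelihood(x, samples, bandwidth=10.0):
    """Average of Gaussian kernels centred at the stored intensity samples."""
    z = (x - np.asarray(samples, dtype=float)) / bandwidth
    return np.mean(np.exp(-0.5 * z * z)) / (bandwidth * np.sqrt(2 * np.pi))

background_samples = [112, 115, 110, 118, 114]   # recent background intensities
foreground_samples = [40, 45, 38, 52, 47]        # recent foreground intensities

x = 44  # current pixel intensity
label = ("foreground"
         if kde_likelihood(x, foreground_samples) > kde_likelihood(x, background_samples)
         else "background")
print(label)
```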

19.
Objective: In video foreground detection, pixel-level background subtraction produces results with clear contours and high flexibility. However, sample-consistency-based pixel-level classification cannot exploit pixel information effectively and fails to detect the foreground in complex situations such as color camouflage and the appearance of stationary foreground objects. To address this, a foreground detection method based on confidence-weighted fusion and visual attention is proposed. Method: The foreground is judged by the weighted fusion of the color confidence and texture confidence of the samples, and the confidences and weights of the samples are updated adaptively. A visual attention mechanism, built by dividing the sequence into sub-sequences and combining color saliency with texture difference, identifies stationary foreground objects, and the background model is kept up to date by replacing the sample with the lowest confidence. Results: The method was evaluated on the CDW2014 (change detection workshops 2014) and SBM-RGBD (scene background modeling red-green-blue-depth) datasets. Compared with five mainstream algorithms, its recall and precision are 2.66% and 1.48% higher, respectively, than those of the second-best algorithm, giving the best overall performance. Conclusion: The proposed algorithm improves the precision and recall of foreground detection in complex situations such as color camouflage and stationary foreground objects, achieves better results on public datasets, and can be applied to video surveillance involving such situations.

20.
In this paper, we propose a novel method for moving foreground object extraction in sequences taken by a wearable camera with strong motion. We use camera-motion-compensated frame differencing, enhanced with a novel kernel-based estimation of the probability density function of background pixels. The probability density functions are used for filtering false foreground pixels on the motion-compensated difference frame. The estimation is based on a limited number of measurements; we therefore introduce a special spatio-temporal sample-point selection and an adaptive thresholding method to deal with this challenge. Foreground objects are then built from the detected foreground pixels with the DBSCAN clustering algorithm.
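As an illustration of the final clustering step, the sketch below groups foreground pixels into objects with scikit-learn's DBSCAN; the mask and clustering parameters are assumptions, not those of the paper.

```python
# Illustrative grouping of detected foreground pixels into objects with DBSCAN
# (scikit-learn); mask source and clustering parameters are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

# fg_mask: binary foreground mask (H x W) from the motion-compensated
# differencing step; here a tiny synthetic example.
fg_mask = np.zeros((100, 100), dtype=np.uint8)
fg_mask[20:30, 20:30] = 1          # one blob
fg_mask[60:75, 50:70] = 1          # another blob

points = np.column_stack(np.nonzero(fg_mask))      # (row, col) of foreground pixels
labels = DBSCAN(eps=3, min_samples=5).fit_predict(points)
print("objects found:", len(set(labels) - {-1}))   # -1 = noise
```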
