Similar Documents
A total of 20 similar documents were found.
1.
Visual hull generation from freely chosen shooting viewpoints   Cited by: 2 (self-citations: 0, other citations: 2)
A method is presented for constructing a visual hull from multiple images taken with a hand-held camera. The method places no restrictions on the camera motion; instead, it uses corresponding feature points across the images to recover the pose from which each image was taken, so geometric models of relatively large outdoor scenes can be acquired very flexibly. Experimental results show that the visual hull models generated by the method are accurate and realistic, meeting the requirements of applications such as virtual reality.
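As a rough illustration of the visual-hull idea (a sketch only, not the authors' algorithm), the following Python/NumPy snippet carves a voxel grid using silhouette masks and per-image 3x4 projection matrices; the silhouettes, the projection matrices and the scene bounds are assumed to be given.

```python
import numpy as np

def carve_visual_hull(silhouettes, P_list, bounds, res=64):
    """Keep voxels whose projections fall inside every silhouette.

    silhouettes: list of HxW binary masks (uint8, foreground > 0)
    P_list:      list of 3x4 camera projection matrices, one per image
    bounds:      ((xmin, xmax), (ymin, ymax), (zmin, zmax)) of the scene
    """
    (x0, x1), (y0, y1), (z0, z1) = bounds
    xs = np.linspace(x0, x1, res)
    ys = np.linspace(y0, y1, res)
    zs = np.linspace(z0, z1, res)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    pts = np.stack([X.ravel(), Y.ravel(), Z.ravel(),
                    np.ones(X.size)], axis=0)        # 4 x N homogeneous voxel centres
    inside = np.ones(X.size, dtype=bool)
    for mask, P in zip(silhouettes, P_list):
        proj = P @ pts                               # project into this view
        u = proj[0] / proj[2]
        v = proj[1] / proj[2]
        h, w = mask.shape
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        inside &= ok
        ui = np.clip(u.astype(int), 0, w - 1)
        vi = np.clip(v.astype(int), 0, h - 1)
        inside &= mask[vi, ui] > 0                   # must land on the silhouette in every view
    return inside.reshape(res, res, res)
```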

2.
We address the problem of estimating three-dimensional motion and structure from motion with an uncalibrated moving camera. We show that point correspondences between three images, and the fundamental matrices computed from these correspondences, are sufficient to recover the internal orientation of the camera (its calibration) and the motion parameters, and to compute coherent perspective projection matrices that enable us to reconstruct 3-D structure up to a similarity. In contrast with other methods, no calibration object with a known 3-D shape is needed, and no limitations are placed on the unknown motions to be performed or the parameters to be recovered, as long as they define a projective camera. The theory of the method, which is based on the constraint that the observed points are part of a static scene, thus allowing us to link the intrinsic parameters and the fundamental matrix via the absolute conic, is first detailed. Several algorithms are then presented, and their performance is compared by means of extensive simulations and illustrated by several experiments with real images.
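For context, the standard relations from multi-view geometry that this kind of self-calibration rests on (not reproduced from the paper) link the fundamental matrix to the intrinsics and the motion, and constrain the dual image of the absolute conic ω* = KKᵀ:

```latex
% Fundamental matrix of two views with intrinsics K, K' and relative motion (R, t):
F \sim K'^{-\top}\,[t]_{\times}\,R\,K^{-1}, \qquad x'^{\top} F\, x = 0 .
% For a single camera (K' = K), writing \omega^{*} = K K^{\top} for the dual image
% of the absolute conic and e' for the epipole in the second view, the Kruppa-type
% constraint used in self-calibration is
F \,\omega^{*} F^{\top} \;\sim\; [e']_{\times}\,\omega^{*}\,[e']_{\times}^{\top} .
```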

3.
Moving object detection in dynamic image sequences   Cited by: 7 (self-citations: 4, other citations: 7)
To cope with the complex background changes caused by various factors in the imaging process of dynamic image sequences, an adaptive foreground object detection method is proposed. First, a Gaussian distribution model is built for each image pixel, and the model parameters are adaptively adjusted using the current frame and the history of the sequence. Then, combining inter-frame difference information with the prior probability of the gray-level distribution, the image is mapped from the spatial domain into a statistical domain. Finally, the foreground object is robustly segmented in the statistical domain. Experimental results demonstrate the effectiveness of the method.
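A minimal sketch of a per-pixel running-Gaussian background model in the spirit of the first step described above; the learning rate, initial variance and detection threshold are illustrative assumptions, and the paper's statistical-domain mapping is not reproduced here.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel Gaussian background model with adaptive mean/variance update."""

    def __init__(self, first_frame, alpha=0.02, k=2.5):
        self.mu = first_frame.astype(np.float32)      # per-pixel mean
        self.var = np.full_like(self.mu, 15.0 ** 2)   # per-pixel variance
        self.alpha = alpha                            # adaptation rate
        self.k = k                                    # threshold in standard deviations

    def apply(self, frame):
        f = frame.astype(np.float32)
        d2 = (f - self.mu) ** 2
        foreground = d2 > (self.k ** 2) * self.var    # pixels far from the background model
        # Update the model only where the pixel looks like background,
        # so moving objects are not absorbed into it too quickly.
        bg = ~foreground
        self.mu[bg] += self.alpha * (f[bg] - self.mu[bg])
        self.var[bg] += self.alpha * (d2[bg] - self.var[bg])
        return foreground.astype(np.uint8) * 255
```

In practice the resulting mask would be combined with inter-frame difference information, as the abstract describes, before the final segmentation.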

4.
The emergence of a new generation of 3D autostereoscopic displays is driving the requirement for multi-baseline images. The dominant form of this display technology requires multiple views of the same scene, captured at a single instant in time along a common baseline, in order to project stereoscopic images to the viewer. The direct acquisition of multiple views (typically 8 or 16 for the current generation of such displays) is problematic because of the difficulty of configuring, calibrating and controlling multiple cameras simultaneously. This paper describes a technique that alleviates these problems by generating the required views from binocular images. Considering each stereo pair in isolation leads to inconsistency across image sequences; by incorporating a motion-tracking algorithm this problem is significantly reduced. We describe a novel approach to stereo matching of image sequences for the purpose of generating multiple virtual camera views. Results of extensive tests on stereo image sequences are documented, indicating that the approach is promising both in terms of execution speed and the quality of the results produced.
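For illustration only, the sketch below shows the disparity-estimation step that such a view-synthesis pipeline depends on, using OpenCV's semi-global matcher, followed by a naive forward warp to an intermediate viewpoint; the matcher parameters and the warping scheme are assumptions, not the paper's method.

```python
import cv2
import numpy as np

def disparity_map(left_gray, right_gray, max_disp=128):
    """Dense disparity between a rectified stereo pair (semi-global block matching)."""
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=max_disp,   # must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,
        P2=32 * 5 * 5,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # compute() returns fixed-point disparities scaled by 16
    return sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

def synthesize_view(left_gray, disp, alpha=0.5):
    """Forward-warp the left image by a fraction of the disparity to approximate
    an intermediate camera position (no occlusion handling, illustration only)."""
    h, w = disp.shape
    out = np.zeros_like(left_gray)
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip((xs - alpha * disp[y]).astype(int), 0, w - 1)
        out[y, new_x] = left_gray[y, xs]
    return out
```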

5.
This paper studies the coordinate relationship between image-plane projections under different viewpoints, derives the matching formulas between images in sequences produced by camera rotation and translation, and proposes a method for mosaicking motion image sequences. The method has the following advantages: the mosaicked image satisfies the perspective geometry, which guarantees its fidelity; the camera's intrinsic and extrinsic parameters do not need to be known in advance; and the mosaicking algorithm is computationally light, so the mosaic can be built in real time. Experiments show that the method has broad practical value.
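For a purely rotating camera the inter-image mapping is a homography (H ≈ K R K⁻¹), which is why no 3-D reconstruction is needed for this kind of mosaicking. The OpenCV sketch below is a hedged illustration of homography-based mosaicking; the feature detector, the matcher and the lack of blending are illustrative choices, not the paper's formulas.

```python
import cv2
import numpy as np

def stitch_pair(base, new):
    """Warp `new` into the frame of `base` using a homography estimated from
    ORB feature matches (valid for a rotating camera or a planar scene)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(base, None)
    k2, d2 = orb.detectAndCompute(new, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = base.shape[:2]
    canvas = cv2.warpPerspective(new, H, (2 * w, h))   # warp new view into base frame
    canvas[:h, :w] = base                              # naive overwrite, no blending
    return canvas
```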

6.
Combining computer vision techniques with CCD measurement principles, a method for measuring the attitude parameters of a moving target is presented, and the mathematical model of the attitude measurement is derived. Possible error sources are analyzed and remedies are proposed. The method measures three calibration points matched to the CCD and describes the attitude with quaternions, which keeps the system simple in structure and fast. Simulation results show that the angular measurement accuracy of the system reaches 1 arcminute, so the system can be widely applied to short-range attitude measurement of moving targets.

7.
8.
Segmentation and localization of moving objects in image sequences   Cited by: 8 (self-citations: 1, other citations: 8)
For the segmentation and localization of moving objects in image sequences, a method is presented that quickly identifies moving objects in complex environments, effectively removes background noise, and accurately localizes the objects. The method consists of two parts: moving object detection, and object localization and segmentation. For detection, simple frame differencing extracts the moving object, and fast median filtering combined with mathematical morphology removes the noise. For localization, an improved Region Growing Segmentation Algorithm (RGSA) is proposed to locate the object and compute its centroid. Experimental results show that the method is correct and effective, and the results are satisfactory.
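A minimal OpenCV pipeline in the spirit of the detection-then-localization procedure above; the thresholds and kernel sizes are illustrative, and contours plus image moments stand in for the paper's region-growing segmentation (RGSA).

```python
import cv2

def detect_and_locate(prev_gray, curr_gray, diff_thresh=25):
    """Frame differencing, median filtering and morphology, then the centroid
    of the largest moving region."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)                             # suppress salt-and-pepper noise
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # remove small speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)     # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    target = max(contours, key=cv2.contourArea)
    m = cv2.moments(target)
    if m["m00"] == 0:
        return mask, None
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])      # object centre of mass
    return mask, centroid
```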

9.
An automatic moving object segmentation algorithm is proposed. The first frame is segmented into regions using the contours and color features of a floating-point image; foreground and background images are then constructed from inter-frame motion information; finally, using the foreground and background images as references, all video frames of the same scene are segmented quickly and reliably.

10.
Camera calibration based on a single planar pattern   Cited by: 2 (self-citations: 0, other citations: 2)
A camera calibration method is proposed that only requires the camera to observe a planar pattern from several different orientations. The camera and the planar pattern can be moved freely relative to each other, and the motion parameters need not be known. For the image obtained at each viewpoint, the grid corner points are extracted; the correspondences between grid corners on the planar pattern and in the image determine a homography. Since one homography is obtained per image, the camera can be calibrated from the set of images. The algorithm first computes a linear solution and then refines it by nonlinear optimization based on the maximum-likelihood criterion. Lens distortion is also taken into account. Experimental results show that the algorithm is simple and easy to use.
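The same class of planar-pattern calibration (closed-form initialization followed by maximum-likelihood refinement, including distortion) is available off the shelf; the sketch below is a generic OpenCV version with an assumed chessboard pattern, not the paper's own implementation.

```python
import cv2
import numpy as np

def calibrate_from_plane(images, pattern=(9, 6), square=1.0):
    """Estimate intrinsics and distortion from several views of a planar grid."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if not found:
            continue
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]
    # Linear initialization plus nonlinear (maximum-likelihood) refinement,
    # with radial/tangential distortion estimated as well.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist, rms
```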

11.
We propose an approach for modeling, measurement and tracking of rigid and articulated motion as viewed from a stationary or moving camera. We first propose an approach for learning temporal-flow models from exemplar image sequences. The temporal-flow models are represented as a set of orthogonal temporal-flow bases that are learned using principal component analysis of instantaneous flow measurements. Spatial constraints on the temporal-flow are then incorporated to model the movement of regions of rigid or articulated objects. These spatio-temporal flow models are subsequently used as the basis for simultaneous measurement and tracking of brightness motion in image sequences. Then we address the problem of estimating composite independent object and camera image motions. We employ the spatio-temporal flow models learned through observing typical movements of the object from a stationary camera to decompose image motion into independent object and camera motions. The performance of the algorithms is demonstrated on several long image sequences of rigid and articulated bodies in motion.
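A NumPy sketch of the basis-learning step only, assuming the instantaneous flow fields have already been computed and stacked; the spatial constraints and the tracking stages of the approach are not shown.

```python
import numpy as np

def learn_flow_bases(flow_fields, n_bases=8):
    """Learn orthogonal temporal-flow bases by PCA over example flow fields.

    flow_fields: array of shape (T, H, W, 2) with per-frame optical flow
    returns:     (mean_flow, bases) where bases has shape (n_bases, H, W, 2)
    """
    T, H, W, _ = flow_fields.shape
    X = flow_fields.reshape(T, -1)           # each row is one flattened flow field
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data; right singular vectors are the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    bases = Vt[:n_bases].reshape(n_bases, H, W, 2)
    return mean.reshape(H, W, 2), bases

def project_flow(flow, mean, bases):
    """Coefficients of a new flow field in the learned orthogonal basis."""
    diff = (flow - mean).ravel()
    return np.array([np.dot(diff, b.ravel()) for b in bases])
```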

12.
This work presents a new problem along with our new algorithm for a multi-robot formation with minimally controlled conditions. For multi-robot cooperation, there have traditionally been prevailing assumptions in order to collect the necessary information. These assumptions include the existence of communication systems among the robots or the use of specialized sensors such as laser scanners or omnidirectional cameras. However, they are not always valid, especially in emergency situations or with miniature robots. We, therefore, need to deal with the conditions that have received less attention in research regarding a multi-robot formation. There are several challenges: (1) less information is available than the well-known formation algorithms assume, (2) following strategies for deformable shapes in a formation with only local information available are needed, and (3) target segmentation without any markers is required. This work presents a formation algorithm based on a visual tracking algorithm, including how to process the image measurements provided by a single monocular camera. Through several experiments with real robots (developed at the University of Minnesota), we show that the proposed algorithms work well with minimal sensing information.

13.
14.
Visuo-spatial neglect is recognised as a major barrier to recovery following a stroke or head injury. A standard clinical technique for assessing the condition is a pencil-and-paper cancellation task. Traditional static analysis of this task involves counting the number of targets correctly cancelled on the test sheet. Using a computer-based test capture system, this paper presents a novel application of standard pattern recognition techniques to examine the diagnostic capability of a number of dynamic features relating to the sequence in which the targets were cancelled. While none of the individual dynamic features is as sensitive to neglect as the conventional static analysis, a series of standard multi-dimensional feature analysis techniques is shown to improve the classification accuracy of the dynamic properties of task execution, and hence the sensitivity of neglect detection and the validity of this novel application. Combining the outcome of the dynamic sequence-based features with the conventional static analysis further improves the overall sensitivity of the two cancellation tasks included in this study. The algorithmic nature of the feature-extraction methodology assesses patients objectively and consistently, thereby improving the repeatability of the task.

15.
This project aims to develop a three-dimensional (3D) model reconstruction system using images acquired from a mobile camera. It consists of four major steps: camera calibration, volumetric model reconstruction, surface modeling and texture mapping. A novel online scale-factor estimation is developed to enhance the accuracy of the coplanar camera calibration. For the volumetric modeling, a voting-based shape-from-silhouette first generates a coarse model, which is then refined by a photo-consistency check using a novel 3D voxel mask; this scheme can handle concave surfaces in a sophisticated way. Finally, the surface model is constructed and textured with the original images. 3D models of several test objects are presented.

16.
《Advanced Robotics》2013,27(2-3):381-393
This paper proposes an image-based visual servo control method for a micro helicopter. The helicopter carries no on-board sensors to measure its position or attitude. A stationary camera is placed on the ground and it obtains image features of the helicopter. The differences between the current features and given reference features are computed, and PID controllers then generate the control input voltages that drive the helicopter. Because the reference is defined in the image frame, the proposed controller avoids some major difficulties in computer vision, such as numerical instability due to image noise or model uncertainties. An experimental result demonstrates that the proposed controller can keep the helicopter in a stable hover.
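A generic discrete PID loop over image-feature errors, in the spirit of the controller described; the gains, the sampling period and the mapping from features to actuator voltages are placeholders, not values from the paper.

```python
class PID:
    """Discrete PID controller acting on one image-feature error channel."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per controlled feature (e.g. image x, image y, apparent size),
# each producing a control voltage from its feature error.
controllers = [PID(kp=0.8, ki=0.05, kd=0.2, dt=0.03) for _ in range(3)]

def control_voltages(reference_features, current_features):
    errors = [r - c for r, c in zip(reference_features, current_features)]
    return [ctrl.step(e) for ctrl, e in zip(controllers, errors)]
```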

17.
Finding and tracking a person has become a necessary task for various security purposes, and facial recognition plays an increasingly important role in it. A model is proposed here for facial recognition that identifies a wanted person and alerts the system when that person appears at a specific location under CCTV surveillance. The CCTV cameras are connected to a centralized server to which each camera uploads its live video feed, and the server contains a database of all persons to be found. If a person of interest appears in a particular feed, that person's location is tracked and a signal is sent to the responsible system. The model is based on image-processing techniques that match live images with existing trained images of the person in search. Because recognition relies on the most distinctive feature of a human, the face, only the person's face image needs to be stored in the database. The task of finding a person therefore reduces to detecting human faces in the video feed and matching them against the images in the database.

18.
《Advanced Robotics》2013,27(1-2):69-83
Mobile robots have various sensors that can be considered their eyes, and a two-dimensional laser scanner (SOKUIKI sensor) is one of them. This sensor is commonly used for constructing environment maps or detecting obstacles. To use such a sensor correctly for these purposes, it is very important to estimate accurately the sensor's position and direction at the moment each scan datum is obtained. Scanning sensors, however, have an essential problem: time lag. That is, the laser is emitted in each direction at a slightly different time, so the map contains many errors if this measurement time lag is not accounted for. This paper presents a method for laser scan data synchronization using time registration and, as a practical application, shows its effectiveness for accurate map construction.
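One common way to compensate for the per-beam time lag is to interpolate the sensor pose for every beam across the sweep; the sketch below assumes the beams are fired at evenly spaced times and uses simple linear pose interpolation, which is not necessarily the paper's time-registration scheme.

```python
import numpy as np

def deskew_scan(ranges, angles, pose_start, pose_end):
    """Convert one 2-D laser scan to world coordinates, interpolating the
    sensor pose (x, y, theta) linearly over the sweep.

    ranges, angles: arrays of length N; beam i is assumed fired at the
                    normalized time t = i / (N - 1) within the sweep
    pose_start, pose_end: (x, y, theta) at the start and end of the sweep
    """
    n = len(ranges)
    ts = np.linspace(0.0, 1.0, n)                       # normalized firing times
    points = []
    for r, a, t in zip(ranges, angles, ts):
        x = (1 - t) * pose_start[0] + t * pose_end[0]
        y = (1 - t) * pose_start[1] + t * pose_end[1]
        th = (1 - t) * pose_start[2] + t * pose_end[2]  # adequate for small rotations
        # Beam endpoint in the sensor frame, then rotated/translated to the world frame.
        lx, ly = r * np.cos(a), r * np.sin(a)
        points.append((x + lx * np.cos(th) - ly * np.sin(th),
                       y + lx * np.sin(th) + ly * np.cos(th)))
    return np.array(points)
```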

19.
This paper describes a biologically inspired approach to vision-only simultaneous localization and mapping (SLAM) on ground-based platforms. The core SLAM system, dubbed RatSLAM, is based on computational models of the rodent hippocampus, and is coupled with a lightweight vision system that provides odometry and appearance information. RatSLAM builds a map in an online manner, driving loop closure and relocalization through sequences of familiar visual scenes. Visual ambiguity is managed by maintaining multiple competing vehicle pose estimates, while cumulative errors in odometry are corrected after loop closure by a map correction algorithm. We demonstrate the mapping performance of the system on a 66 km car journey through a complex suburban road network. Using only a web camera operating at 10 Hz, RatSLAM generates a coherent map of the entire environment at real-time speed, correctly closing more than 51 loops of up to 5 km in length.

20.
Camera calibration from video of a walking human   Cited by: 1 (self-citations: 0, other citations: 1)
A self-calibration method to estimate a camera's intrinsic and extrinsic parameters from vertical line segments of the same height is presented. An algorithm to obtain the needed line segments by detecting the head and feet positions of a walking human during the leg-crossing phases is described. Experimental results show that the method is accurate and robust with respect to various viewing angles and subjects.
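The head-feet segments of the walking person determine the vertical vanishing point, one of the quantities such a self-calibration builds on. A minimal NumPy sketch of that sub-step only (least-squares intersection of the image lines, not the paper's full calibration):

```python
import numpy as np

def vertical_vanishing_point(head_pts, feet_pts):
    """Intersect the head-feet image lines of a walking person to estimate the
    vertical vanishing point (homogeneous least-squares solution).

    head_pts, feet_pts: arrays of shape (N, 2), one pair per leg-crossing phase
    """
    lines = []
    for (hx, hy), (fx, fy) in zip(head_pts, feet_pts):
        h = np.array([hx, hy, 1.0])
        f = np.array([fx, fy, 1.0])
        lines.append(np.cross(h, f))          # line through head and foot points
    A = np.array(lines)
    # The vanishing point v satisfies l^T v = 0 for every line l:
    # take the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    v = Vt[-1]
    return v[:2] / v[2]
```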
