Similar Documents
20 similar documents found (search time: 187 ms)
1.
This entry outlines the principle of using a CCD camera to measure the focal length of an aerial camera and to check whether the actual image plane is correctly positioned, along with methods for improving measurement accuracy. A CCD camera receives the image formed by the camera under test; least-squares curve fitting and the gray-moment method are used to improve the localization accuracy of the center of the imaged bright line, so that the image length can be computed precisely. Experimental and computational results show that both methods measure the aerial camera's focal length with high accuracy, with test error consistent with the theoretical error (0.12%); their judgments on whether the actual image plane is correctly positioned are identical, with an accuracy of 100%.
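The gray-moment centering step in entry 1 can be illustrated with a short sketch (not the paper's code; the synthetic profile and its parameters are made up): the sub-pixel center of a bright line is the intensity-weighted centroid of its cross-section.

```python
import numpy as np

def gray_moment_center(profile):
    """Sub-pixel center of a bright line from its intensity profile:
    the first gray moment (centroid) of the profile."""
    profile = np.asarray(profile, dtype=float)
    x = np.arange(profile.size)
    return float(np.sum(x * profile) / np.sum(profile))

# Synthetic Gaussian-shaped bright line centered at 10.3 pixels
x = np.arange(21)
profile = np.exp(-0.5 * ((x - 10.3) / 1.5) ** 2)
center = gray_moment_center(profile)   # recovers ~10.3 despite integer sampling
```

The centroid is exact for a symmetric profile; in practice a least-squares curve fit over the same window, as the abstract describes, further suppresses noise.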

2.
An implementation method for a large-screen human-computer interaction system
To address the complexity of camera calibration when building a large-screen human-computer interaction system with binocular stereo vision, an implementation method is proposed. Two cameras capture images of the user's hand and extract the 2D image coordinates of the fingertip. Using a perspective projection model together with measured quantities (the distance between the two cameras, the distance from the left edge of the screen to the cameras, the height of the cameras above the floor, and the height of the screen's bottom edge above the floor), the system computes the floor point that projects to the same image position as the fingertip. The camera position and this floor point define a line; intersecting the two lines from the two cameras yields the fingertip's 3D coordinates. From these, the 2D plane coordinates corresponding to the screen are selected and converted from physical to logical coordinates to obtain the actual screen position the finger points at, achieving large-screen localization. After localization, a fingertip detection method checks whether the fingertip is present in the video image, which is used to recognize click operations performed by bending and extending the index finger. Although the computed fingertip coordinates contain error, the user sees the mouse cursor in real time during operation, so the error is transparent to the user; this simple, approximate solution avoids the complex camera calibration process and makes the system easy to deploy.
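The fingertip-triangulation step in entry 2 (each camera defines a line through its center and a floor point; the fingertip sits where the two lines meet) reduces to intersecting two 3D lines. A minimal sketch with made-up coordinates, returning the midpoint of the shortest segment so it also tolerates lines that do not exactly intersect:

```python
import numpy as np

def closest_point_between_lines(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two 3D lines
    (point p_i, direction d_i) -- their 'intersection' under noise."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b          # zero only for parallel lines
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two cameras at known positions; both rays pass through a fingertip at (1, 2, 1)
cam1, cam2 = np.array([0.0, 0.0, 2.5]), np.array([3.0, 0.0, 2.5])
tip = np.array([1.0, 2.0, 1.0])
point = closest_point_between_lines(cam1, tip - cam1, cam2, tip - cam2)
```

With exact rays the result equals the fingertip position; with measurement noise the midpoint is a reasonable least-squares-style compromise.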

3.
A direct fusion method for images with different exposure values
张军, 戴霞, 孙德全, 王邦平. 《软件学报》 (Journal of Software), 2011, 22(4): 813-825
This entry proposes an image fusion method that extracts the best imaging information at each pixel position directly from a sequence of LDR (low dynamic range) images of the same scene captured at different exposure values, quickly synthesizing an HDR (high dynamic range) image suitable for display on conventional devices without requiring any camera parameters or prior scene information. A specially designed robust curve-fitting algorithm models the pixel-value curve at each pixel position across the LDR sequence, which yields both a criterion for how well exposed an individual pixel is and a rule for fusing the best-exposed pixel information. Extensive experiments on different scenes show results comparable to those of traditional HDR pipelines that perform full HDR reconstruction followed by tone mapping, at higher computational efficiency and with good robustness to image noise, small camera movements, and moving objects.
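The per-pixel fusion idea in entry 3 can be sketched as a well-exposedness-weighted average. This is a deliberate simplification for illustration only (a Gaussian weight around mid-gray), not the paper's robust curve-fitting model; the toy stack is made up:

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """Weighted per-pixel fusion of an LDR exposure stack (n, h, w) in [0, 1]:
    pixels near mid-gray (well exposed) get the highest weight."""
    stack = np.asarray(stack, dtype=float)
    weights = np.exp(-0.5 * ((stack - 0.5) / sigma) ** 2)
    weights /= weights.sum(axis=0, keepdims=True)      # normalize across exposures
    return (weights * stack).sum(axis=0)

# Toy stack: under-, mid-, and over-exposed versions of an intensity ramp
base = np.linspace(0.0, 1.0, 8).reshape(1, 8)
stack = np.stack([np.clip(base * 0.3, 0, 1),   # under-exposed
                  base,                         # mid
                  np.clip(base * 3.0, 0, 1)])  # over-exposed
fused = fuse_exposures(stack)
```

The output stays in the displayable [0, 1] range by construction, which is exactly why such direct fusion needs no tone-mapping stage.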

4.
To let a cleaning robot quickly determine its position in the room it is sweeping, the robot's distance to the walls is determined from the angular relationships of the three corner edges in wall-corner images captured by a wide-angle camera, and its position in the room follows from that distance. The distorted wide-angle image is first rectified, the corner edges are then identified and extracted, and finally a 3D model is built from the measured room height and position information, with a neural network approximating the mathematical relationship to perform localization. Comparisons between simulation and real data show that the algorithm is stable and meets the localization requirements of a cleaning robot.

5.
A blind restoration method for defocus-blurred images based on the diffusion characteristics of step edges is proposed. The method locates step (or near-step) edges in the image using an improved Grubbs outlier-detection criterion, computes the line spread function by adaptively selecting the best image region, and then derives the defocus blur parameter from the relationship between the defocus blur radius and the line spread function. The point spread function obtained from this parameter is finally used to restore the defocused image. Experimental results show that the method restores images well and has practical value.
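Entry 5's link between the line spread function and the defocus radius can be illustrated under a Gaussian-PSF assumption (a common simplification, not necessarily the paper's exact model; the synthetic edge is made up): the LSF is the derivative of the edge profile, and its second central moment recovers the blur sigma.

```python
import numpy as np

def blur_sigma_from_edge(edge_profile):
    """Estimate Gaussian blur sigma from a step-edge profile:
    LSF = derivative of the edge spread function; sigma from its
    second central moment."""
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.abs(np.diff(esf))                     # line spread function
    x = np.arange(lsf.size)
    mu = np.sum(x * lsf) / np.sum(lsf)             # LSF centroid
    var = np.sum((x - mu) ** 2 * lsf) / np.sum(lsf)
    return float(np.sqrt(var))

# Synthetic step edge blurred with sigma = 2 (cumulative Gaussian)
x = np.arange(-30, 30)
edge = np.cumsum(np.exp(-0.5 * (x / 2.0) ** 2))
edge /= edge[-1]
sigma_est = blur_sigma_from_edge(edge)             # recovers ~2.0
```

In a real image one would first localize a clean step edge (the role of the improved Grubbs criterion in the abstract) before taking this profile.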

6.
Aerial imaging reconnaissance is an important way to obtain tactical intelligence. The all-time (day/night) visible-light aerial camera imaging reconnaissance simulation system designed here models the various flight environments of a UAV. Based on a UAV mathematical model, a visible-light camera imaging geometry model, and an atmospheric radiative transfer model, it simulates imaging reconnaissance as the UAV's motion parameters, the aerial camera's parameters, and the atmospheric environment vary. The system consists of six modules: simulation parameter initialization, digital map loading, camera imaging simulation, atmospheric radiative transfer simulation, UAV hardware-in-the-loop simulation, and image simulation with report output. Extensive tests show that the system model is highly realistic and effective; its simulation reports can also be used to evaluate and improve model performance. The system's parameter optimization capability can guide and support the development of UAV aerial-camera imaging reconnaissance equipment.

7.
Because a monocular camera lacks scale, robust pose estimation is inaccurate; a true-scale 3D reconstruction method for an active monocular camera is therefore proposed. By fusing the robot's control and dimensional information, the monocular camera's position is obtained and used to compute an equivalent baseline; a variable-baseline equivalent-binocular strategy then yields more accurate depth estimates. A GoDec-RANSAC algorithm rejects outliers among the sample points to estimate a robust pose; from the estimated poses, the relative pose between the last frame and the first frame gives a global pose estimate, completing the true-scale 3D reconstruction with the active monocular camera. Simulation results show that the method estimates robust poses accurately.
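The "equivalent baseline" idea in entry 7 rests on the standard stereo triangulation relation Z = f·B/d: two poses of one moving camera act as a stereo pair whose baseline B comes from the robot's odometry. A toy sketch with made-up numbers (focal length and disparity in pixels, baseline in meters):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulated depth for an equivalent stereo pair formed by two
    poses of one moving camera: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

# A feature with 40 px disparity, seen by an 800 px focal-length camera
# that moved 0.25 m between frames, lies at 5 m depth.
z = depth_from_disparity(f_px=800.0, baseline_m=0.25, disparity_px=40.0)
```

A longer baseline increases disparity for the same depth, which is why varying the baseline (as the abstract describes) lets the system trade matching difficulty for depth precision.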

8.
To locate a target point from photographs taken by an arbitrary camera, using as little information as possible to obtain the target's spatial coordinates, a P4P-like localization method for uncalibrated photographs is derived from the camera imaging model, geometric constraints among fixed points in space, and the principles of coordinate-system transformation. Given the image positions, in two observed images, of any four points whose world coordinates are known, the method computes the 3D coordinates of a target point in the images. It requires very little prior information and imposes no requirements on the images or the camera used; experiments confirm its feasibility, showing accuracy with no significant loss compared with traditional calibration methods and higher than other self-calibration localization methods, making it highly practical.

9.
Depth recovery from a single natural-scene image
Depth from defocus is a common approach to recovering scene depth, but traditional methods require multiple defocused images, which greatly limits practical application. This entry proposes a depth-recovery algorithm for a single defocused image based on local blur estimation. Under the assumption of local blur consistency, a simple and effective two-step method recovers the input image's depth information: 1) a sparse blur map at edges is obtained from the gradient ratio between the input defocused image and a version re-blurred with a known Gaussian kernel; 2) the edge blur values are propagated to the whole image, recovering a complete relative depth map. To obtain accurate scene depth, geometric constraints and a sky-region extraction strategy are added to suppress the ambiguities caused by color, texture, and the focal plane. Comparative experiments on various image types show that the algorithm recovers depth while effectively suppressing these ambiguities.
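Step 1) of entry 9 can be sketched in 1D. Assuming a Gaussian blur model, the edge-gradient magnitude scales as 1/sqrt(sigma^2 + sigma0^2) after re-blurring with a known sigma0, so the gradient ratio R at an edge gives sigma = sigma0 / sqrt(R^2 - 1). The synthetic edge and kernel size below are made up for illustration:

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(4 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_at_edge(signal, sigma0=1.0):
    """Defocus sigma at the strongest edge from the gradient ratio
    between the input and a copy re-blurred with known sigma0."""
    reblurred = np.convolve(signal, gaussian_kernel(sigma0), mode="same")
    g = np.abs(np.gradient(signal))
    gr = np.abs(np.gradient(reblurred))
    i = int(np.argmax(g))                  # strongest edge location
    R = g[i] / gr[i]                       # ratio > 1 at a real edge
    return sigma0 / np.sqrt(R * R - 1.0)

# Synthetic step edge blurred with sigma = 2 (cumulative Gaussian)
x = np.arange(-40, 40)
edge = np.cumsum(np.exp(-0.5 * (x / 2.0) ** 2))
edge /= edge[-1]
sigma_edge = blur_at_edge(edge, sigma0=1.0)   # ~2, up to discretization error
```

The full method evaluates this ratio only at detected edges (hence the sparse blur map) and then propagates the values to the rest of the image.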

10.
Traditional methods for automatic compensation of multi-view blur features in defocused images suffer from high image-information loss and low compensation completeness. To overcome these drawbacks, a new automatic compensation method is proposed. The formation process of defocused images is first analyzed and used to construct a multi-view blur model; with this model as a tool, a clustering algorithm extracts the blur features, and a compensation algorithm then processes the extracted features to realize automatic compensation. Comparative simulation experiments show that, relative to three existing methods, the proposed method greatly reduces image-information loss and improves compensation completeness, demonstrating superior compensation performance.

11.
Extrinsic calibration of heterogeneous cameras by line images
The extrinsic calibration refers to determining the relative pose of cameras. Most of the approaches for cameras with non-overlapping fields of view (FOV) are based on mirror reflection, object tracking or rigidity constraint of stereo systems whereas cameras with overlapping FOV can be calibrated using structure from motion solutions. We propose an extrinsic calibration method within structure from motion framework for cameras with overlapping FOV and its extension to cameras with partially non-overlapping FOV. Recently, omnidirectional vision has become a popular topic in computer vision as an omnidirectional camera can cover large FOV in one image. Combining the good resolution of perspective cameras and the wide observation angle of omnidirectional cameras has been an attractive trend in multi-camera system. For this reason, we present an approach which is applicable to heterogeneous types of vision sensors. Moreover, this method utilizes images of lines as these features possess several advantageous characteristics over point features, especially in urban environment. The calibration consists of a linear estimation of orientation and position of cameras and optionally bundle adjustment to refine the extrinsic parameters.

12.
A camera mounted on an aerial vehicle provides an excellent means for monitoring large areas of a scene. Utilizing several such cameras on different aerial vehicles allows further flexibility, in terms of increased visual scope and in the pursuit of multiple targets. In this paper, we address the problem of associating objects across multiple airborne cameras. Since the cameras are moving and often widely separated, direct appearance-based or proximity-based constraints cannot be used. Instead, we exploit geometric constraints on the relationship between the motion of each object across cameras, to test multiple association hypotheses, without assuming any prior calibration information. Given our scene model, we propose a likelihood function for evaluating a hypothesized association between observations in multiple cameras that is geometrically motivated. Since multiple cameras exist, ensuring coherency in association is an essential requirement, e.g. that transitive closure is maintained between more than two cameras. To ensure such coherency we pose the problem of maximizing the likelihood function as a k-dimensional matching and use an approximation to find the optimal assignment of association. Using the proposed error function, canonical trajectories of each object and optimal estimates of inter-camera transformations (in a maximum likelihood sense) are computed. Finally, we show that as a result of associating objects across the cameras, a concurrent visualization of multiple aerial video streams is possible and that, under special conditions, trajectories interrupted due to occlusion or missing detections can be repaired. Results are shown on a number of real and controlled scenarios with multiple objects observed by multiple cameras, validating our qualitative models, and through simulation quantitative performance is also reported.

13.

In this paper, we propose a new video conferencing system that presents correct gaze directions of a remote user by switching among images obtained from multiple cameras embedded in a screen according to a local user’s position. Our proposed method reproduces a situation like that in which the remote user is in the same space as the local user. The position of the remote user to be displayed on the screen is determined so that the positional relationship between the users is reproduced. The system selects one of the embedded cameras whose viewing direction towards the remote user is the closest to the local user’s viewing direction to the remote user’s image on the screen. As a result of quantitative evaluation, we confirmed that, in comparison with the case using a single camera, the accuracy of gaze estimation was improved by switching among the cameras according to the position of the local user.
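The camera-selection rule in entry 13 (pick the embedded camera whose direction toward the remote user best matches the local user's direction toward the on-screen image) can be sketched as an angle minimization. A toy 2D example with made-up positions in a common screen-plane frame:

```python
import numpy as np

def select_camera(cameras, remote_pos, local_pos, image_pos):
    """Index of the camera whose viewing direction toward the remote
    user is closest (largest cosine) to the local user's viewing
    direction toward the remote user's image on the screen."""
    view = image_pos - local_pos
    view = view / np.linalg.norm(view)
    best, best_dot = None, -2.0
    for i, cam in enumerate(cameras):
        d = remote_pos - cam
        d = d / np.linalg.norm(d)
        dot = float(d @ view)          # cosine of the angle between directions
        if dot > best_dot:
            best, best_dot = i, dot
    return best

cameras = [np.array([x, 0.0]) for x in (-1.0, 0.0, 1.0)]   # embedded in screen
remote_pos = np.array([0.5, 3.0])    # remote user behind the screen plane
local_pos = np.array([0.0, -2.0])    # local user in front of the screen
image_pos = np.array([0.5, 0.0])     # remote user's image on the screen
chosen = select_camera(cameras, remote_pos, local_pos, image_pos)
```

Switching the selected camera as the local user moves is what keeps the perceived gaze direction consistent.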


14.
In a video surveillance system composed of multiple cameras, handing a target over from one camera to the next as it leaves one field of view and enters another, so that tracking can continue, is a key problem. To solve it, a position-comparison-based method for tracking moving targets across multiple cameras is proposed. To obtain the target's position, a scene model mapping each camera to the target's world coordinates is built, and the boundary lines of the overlapping fields of view are determined from the target's position at the instant it appears on the field-of-view boundary between cameras. This enables relay tracking of targets moving among multiple arbitrarily oriented cameras with overlapping fields of view, and it accommodates multiple targets entering the scene simultaneously. Experimental results show that the method is robust and meets the real-time requirements of video tracking.

15.
A precise transshipment system is developed for automatically transporting material between an automated guided vehicle (AGV) and a load transfer station. In order to align the fixture base on board the AGV with that of the station, it is necessary to measure the pose of the AGV with respect to the station. The pose measurement system combines four distance sensors with two CCD cameras: the distance sensors measure the yaw angle, pitch angle, and longitudinal deviation, while the cameras determine the roll angle and the position deviation in the reference plane. A 6-degree-of-freedom (DOF) pose alignment system based on 3-DOF positioners corrects the deviation of the current pose with respect to the goal pose, lowering the demand on the AGV's locating precision on the ground. The method to calibrate the whole measurement system is stated in detail. Transshipment experiments show that the pose alignment system, integrated with the proposed multi-sensor pose measurement system, meets the accuracy and repeatability required in industrial application.

16.
Structure from motion with wide circular field of view cameras
This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with wide circular field of view. We focus on cameras which have more than 180° field of view and for which the standard perspective camera model is not sufficient, e.g., the cameras equipped with circular fish-eye lenses Nikon FC-E8 (183°), Sigma 8 mm-f4-EX (180°), or with curved conical mirrors. We assume a circular field of view and axially symmetric image projection to autocalibrate the cameras. Many wide field of view cameras can still be modeled by the central projection followed by a nonlinear image mapping. Examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to get correct point correspondences which can be used with accurate noncentral models in a bundle adjustment to obtain accurate 3D scene reconstruction. Noncentral camera models are dealt with and results are shown for catadioptric cameras with parabolic and spherical mirrors.

17.
Hybrid central catadioptric and perspective cameras are desired in practice, because the hybrid camera system can capture a large field of view as well as high-resolution images. However, the calibration of the system is challenging due to heavy distortions in catadioptric cameras. In addition, previous calibration methods are only suitable for camera systems consisting of perspective cameras and catadioptric cameras with only parabolic mirrors, in which priors about the intrinsic parameters of the perspective cameras are required. In this work, we provide a new approach to handle these problems. We show that if the hybrid camera system consists of at least two central catadioptric and one perspective cameras, both the intrinsic and extrinsic parameters of the system can be calibrated linearly without priors about the intrinsic parameters of the perspective cameras, and the supported central catadioptric cameras of our method can be more generic. In this work, an approximated polynomial model is derived and used for rectification of the catadioptric image. Firstly, with the epipolar geometry between the perspective and rectified catadioptric images, the distortion parameters of the polynomial model can be estimated linearly. Then a new method is proposed to estimate the intrinsic parameters of a central catadioptric camera with the parameters in the polynomial model, and hence the catadioptric cameras can be calibrated. Finally, a linear self-calibration method for the hybrid system is given with the calibrated catadioptric cameras. The main advantage of our method is that it can not only calibrate both the intrinsic and extrinsic parameters of the hybrid camera system, but also simplify a traditional nonlinear self-calibration of perspective cameras to a linear process. Experiments show that our proposed method is robust and reliable.

18.
Safety is undoubtedly the most fundamental requirement for any aerial robotic application. It is essential to equip aerial robots with omnidirectional perception coverage to ensure safe navigation in complex environments. In this paper, we present a light-weight and low-cost omnidirectional perception system, which consists of two ultrawide field-of-view (FOV) fisheye cameras and a low-cost inertial measurement unit (IMU). The goal of the system is to achieve spherical omnidirectional sensing coverage with the minimum sensor suite. The two fisheye cameras are mounted rigidly facing upward and downward directions and provide omnidirectional perception coverage: 360° FOV horizontally, 50° FOV vertically for stereo, and whole spherical for monocular. We present a novel optimization-based dual-fisheye visual-inertial state estimator to provide highly accurate state estimation. Real-time omnidirectional three-dimensional (3D) mapping is combined with stereo-based depth perception for the horizontal direction and monocular depth perception for upward and downward directions. The omnidirectional perception system is integrated with online trajectory planners to achieve closed-loop, fully autonomous navigation. All computations are done onboard on a heterogeneous computing suite. Extensive experimental results are presented to validate individual modules as well as the overall system in both indoor and outdoor environments.

19.
In this paper, we present a new technique for 3D face reconstruction from a sequence of images taken with cameras having varying parameters, without the need for a calibration grid. The method is based on estimating the projection matrices of the cameras from a symmetry property that characterizes the face; these projection matrices are used with point matches in each pair of images to determine the 3D point cloud, after which the 3D mesh of the face is constructed with the Crust algorithm. Lastly, the 2D image is projected onto the 3D model to generate the texture mapping. The strength of the proposed approach is that it minimizes the constraints of the calibration system: we calibrate the cameras from a symmetry property of the face, which lets us identify some 3D points of the face in a well-chosen global reference frame and formulate a system of linear and nonlinear equations relating these 3D points, their projections in the image plane, and the elements of the projection matrices. To solve these equations, we use a genetic algorithm, which finds the global optimum without requiring an initial estimate and avoids the local minima of the formulated cost function. Our study is conducted on real data to demonstrate the validity and performance of the proposed approach in terms of robustness, simplicity, stability, and convergence.

20.
2D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches have relied on geometric features that must be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach was originally developed for images from perspective cameras; in this paper, we extend the technique to central cameras, which makes it applicable to catadioptric cameras and wide-field-of-view cameras. Several experiments have been carried out successfully with a fisheye camera controlling a 6-degrees-of-freedom robot and with a catadioptric camera for a mobile-robot navigation task.
