20 similar documents were found (search time: 140 ms).
1.
A new calibration method for the internal parameters of depth sensors  Total citations: 2 (self: 1, other: 1)
To address the limited accuracy of the factory calibration parameters of dual-lens depth sensors (taking the Kinect as an example), a new calibration method is proposed. For the Kinect 2.0 depth lens, the unknown parameters are solved by an indirect adjustment method using a fixed-length constraint on spatial line segments. With the solved parameters, depth image coordinates are converted to Kinect coordinates and associated with the corresponding color image coordinates, and the color lens is then calibrated based on the central projection equation. Experimental results show that the method achieves an accuracy better than 2.5 mm when converting depth image points to Kinect coordinates and better than 1 pixel when converting depth image coordinates to color image coordinates, exceeding the accuracy of the built-in parameters of the Microsoft Kinect SDK. For applications requiring higher parameter precision, the parameters solved by this algorithm are preferable.
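The depth-to-Kinect-coordinate conversion described above can be sketched with standard pinhole back-projection; the intrinsic values below are illustrative placeholders, not the paper's calibrated parameters:

```python
import numpy as np

def depth_pixel_to_camera(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project a depth-image pixel (u, v) with depth Z (in mm)
    into 3-D camera (Kinect) coordinates via the pinhole model."""
    z = float(depth_mm)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Illustrative intrinsics (not the calibrated values from the paper)
fx = fy = 365.0
cx, cy = 256.0, 212.0

# The principal-point pixel maps onto the optical axis: [0, 0, 1000]
p = depth_pixel_to_camera(256.0, 212.0, 1000.0, fx, fy, cx, cy)
print(p)
```

The subsequent color-lens calibration then associates these 3-D points with color-image coordinates through the central projection equation.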
2.
3.
We present an iterative algorithm for robustly estimating the ego-motion and refining and updating a coarse depth map using parametric surface parallax models and brightness derivatives extracted from an image pair. Given a coarse depth map acquired by a range-finder or extracted from a digital elevation map (DEM), ego-motion is estimated by combining a global ego-motion constraint and a local brightness constancy constraint. Using the estimated camera motion and the available depth estimate, the motion of the three-dimensional (3-D) points is compensated. We exploit the fact that the resulting surface parallax field is an epipolar field: knowing its direction from the previous motion estimates, we estimate its magnitude and use it to refine the depth map. The parallax magnitude is estimated using a constant parallax model (CPM), which assumes a smooth parallax field, and a depth-based parallax model (DBPM), which models the parallax magnitude using the given depth map. We obtain confidence measures for the accuracy of the estimated depth values, which are used to remove regions with potentially incorrect depth estimates so that ego-motion can be estimated robustly in subsequent iterations. Experimental results using both synthetic and real data (indoor and outdoor sequences) illustrate the effectiveness of the proposed algorithm.
4.
During on-orbit imaging of a wide-field-of-view space camera, factors such as Earth's rotation, satellite attitude maneuvers, and jitter cause the image-velocity field on the focal plane to exhibit a nonlinear, anisotropic distribution. A new method for modeling the image-motion velocity field based on rigid-body kinematics is therefore proposed: taking the off-axis angle into account, an analytical expression of the image-velocity field is derived for an off-axis three-mirror wide-field space camera. Taking a particular wide-field space camera as an example, the effect of uniform-speed versus non-uniform-speed image-motion matching modes on imaging quality during side-sway (roll) imaging is analyzed. The results show that, with a 5% drop in the transfer function as the constraint, for imaging at a 15° roll angle the non-uniform-speed matching mode should be used when the number of integration stages exceeds 10; at 32 stages, non-uniform-speed matching raises the dynamic MTF on the focal plane from 0.3408 to 0.9702 compared with uniform-speed matching. When the number of integration stages is fixed at 16, uniform-speed matching can be used for roll angles within 12.3°. On-orbit imaging results confirm the accuracy of the image-motion velocity-field model, providing a reliable basis for image-motion compensation in wide-field space cameras.
5.
Two classes of algorithms for modeling camera motion in video sequences are proposed. The first class can be applied when there is no camera translation and the motion of the camera can be adequately modeled by zoom, pan, and rotation parameters. The second class is more general in that it can be applied when the camera is undergoing translational motion as well as rotation, zoom, and pan. This class uses seven parameters to describe the motion of the camera and requires the depth map to be known at the receiver. The salient feature of both algorithms is that the camera motion is estimated using binary matching of the edges in successive frames. The rate-distortion characteristics of the algorithms are compared with those of the block matching algorithm, showing that the former provide similar performance with reduced computational complexity.
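The binary edge-matching idea can be illustrated with a toy sketch that estimates a pure horizontal pan between two binary edge maps (the paper's full models also handle zoom, rotation, and translation; the data here are synthetic):

```python
import numpy as np

def estimate_pan(edges_prev, edges_curr, max_shift=5):
    """Estimate an integer horizontal pan by binary matching of edge
    maps: choose the shift that maximizes overlapping edge pixels."""
    best_shift, best_score = 0, -1
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(edges_prev, s, axis=1)
        score = np.count_nonzero(shifted & edges_curr)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Synthetic sparse edge map, panned by 3 pixels between frames
rng = np.random.default_rng(3)
edges = rng.random((40, 40)) > 0.9
panned = np.roll(edges, 3, axis=1)
print(estimate_pan(edges, panned))  # 3
```

Exhaustive binary matching over a small shift range is far cheaper than dense block matching, which is the source of the reduced complexity noted in the abstract.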
6.
For the particle image velocimetry (PIV) technique, 2D PIV with one camera can only obtain a two-dimensional velocity field, while three-dimensional (3D) PIV based on tomography with three or four cameras is complex and expensive. In this work, a binocular-PIV technique based on two cameras is proposed to reconstruct the 3D velocity field of gas-liquid two-phase flow, combining binocular stereo vision with cross-correlation based on the fast Fourier transform (CC-FFT). The depth of a particle is calculated by binocular stereo vision on the spatial scale, and the in-plane displacement of particles is acquired by CC-FFT on the time scale. Experimental results prove the effectiveness of the proposed method for 3D reconstruction of the velocity field of gas-liquid two-phase flow.
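The CC-FFT displacement step can be sketched as follows: the cross-correlation of two interrogation windows is computed via the FFT, and the correlation peak gives the integer-pixel displacement (a minimal sketch with synthetic data; real PIV adds sub-pixel peak interpolation and windowing):

```python
import numpy as np

def displacement_cc_fft(win_a, win_b):
    """Estimate the integer-pixel displacement between two interrogation
    windows via FFT-based circular cross-correlation (CC-FFT)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices into signed shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)  # (dy, dx)

# Synthetic check: shift a random particle pattern by (3, -2) pixels
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -2), axis=(0, 1))
print(displacement_cc_fft(img, shifted))  # (3, -2)
```

Combining this in-plane displacement with the stereo-derived particle depth yields the third velocity component.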
7.
8.
Motion and structure from feature correspondences: a review  Total citations: 9 (self: 0, other: 9)
Huang T.S. Netravali A.N. 《Proceedings of the IEEE. Institute of Electrical and Electronics Engineers》1994,82(2):252-268
We present a review of algorithms and their performance for determining the three-dimensional (3D) motion and structure of rigid objects when their corresponding features are known at different times or are viewed by different cameras. Three categories of problems are considered, depending upon whether the features are two- (2D) or three-dimensional (3D) and on the type of correspondence: a) 3D to 3D (i.e., locations of corresponding features in 3D space are known at two different times), b) 2D to 3D (i.e., locations of features in 3D space and their projections on the camera plane are known), and c) 2D to 2D (i.e., projections of features on the camera plane are known at two different times). Features considered include points, straight lines, curved lines, and corners. Emphasis is on problem formulation, efficient algorithms for solution, existence and uniqueness of solutions, and sensitivity of solutions to noise in the observed data. The algorithms described have been used in a variety of applications, including: a) positioning and navigating 3D objects in a 3D world; b) camera calibration, i.e., determining the location and orientation of a camera by observing 3D features whose locations are known; and c) estimating the motion and structure of moving objects relative to a camera. We mention some of the mathematical techniques borrowed from algebraic geometry, projective geometry, and homotopy theory that are required to solve these problems, list unsolved problems, and give some directions for future research.
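For the 3D-to-3D category, a standard closed-form solution (the SVD-based least-squares fit, one of the techniques of the kind surveyed, not a specific algorithm from the review) recovers the rigid motion between two corresponding point sets:

```python
import numpy as np

def rigid_motion_3d_3d(P, Q):
    """Least-squares rigid motion (R, t) with Q ≈ R @ P + t, given
    corresponding 3-D point sets P, Q of shape (3, N) (SVD/Kabsch)."""
    p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - p0) @ (Q - q0).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t

# Synthetic check: recover a known rotation about z and a translation
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [-2.0], [0.5]])
P = np.random.default_rng(1).random((3, 10))
Q = R_true @ P + t_true
R, t = rigid_motion_3d_3d(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

The 2D-to-3D and 2D-to-2D categories require projective formulations and generally lack such simple closed forms, which is where the algebraic-geometry machinery mentioned in the abstract enters.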
9.
10.
Charles M. Higgins Christof Koch 《Analog Integrated Circuits and Signal Processing》2000,24(3):195-211
The extent of pixel-parallel focal plane image processing is limited by pixel area and imager fill factor. In this paper, we describe a novel multi-chip neuromorphic VLSI visual motion processing system which combines analog circuitry with an asynchronous digital interchip communications protocol to allow more complex pixel-parallel motion processing than is possible in the focal plane. This multi-chip system retains the primary advantages of focal plane neuromorphic image processors: low power consumption, continuous-time operation, and small size. The two basic VLSI building blocks are a photosensitive sender chip, which incorporates a 2D imager array and transmits the positions of moving spatial edges, and a receiver chip, which computes a 2D optical flow vector field from the edge information. The elementary two-chip motion processing system consisting of a single sender and receiver is first characterized. Subsequently, two three-chip motion processing systems are described. The first three-chip system uses two sender chips to compute the presence of motion only at a particular stereoscopic depth from the imagers. The second three-chip system uses two receivers to simultaneously compute linear and polar topographic mappings of the image plane, yielding information about image translation, rotation, and expansion. These three-chip systems demonstrate the modularity and flexibility of the multi-chip neuromorphic approach.
11.
12.
A method to quantify the motion of the heart from digitized sequences of two-dimensional (2-D) echocardiograms was recently proposed. This method computes, at every point of the 2-D echoes, the 2-D apparent velocity vector (or optical flow) that characterizes its interframe motion. However, further analysis is required to determine what part of this motion is due to translation, rotation, contraction, and deformation of the myocardium. A method to obtain this information locally is presented. The proposed method assumes that the interframe velocity field U(x,y), V(x,y) can be locally described by linear equations of the form U(x,y)=a+Ax+By; V(x,y)=b+Cx+Dy. This additional constraint was introduced into the computation of the local velocity field by the method of projections onto convex sets. Since the constraint is only valid locally, the myocardium must first be divided into sectors and the velocity fields computed independently for each sector.
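The local linear (affine) velocity model above can be fit by ordinary least squares from sampled flow vectors; a minimal sketch with synthetic data (the paper's sector splitting and convex-projection machinery are omitted):

```python
import numpy as np

def fit_affine_flow(x, y, u, v):
    """Least-squares fit of U = a + A*x + B*y, V = b + C*x + D*y
    from flow vectors (u, v) sampled at points (x, y)."""
    M = np.column_stack([np.ones_like(x), x, y])
    (a, A, B), *_ = np.linalg.lstsq(M, u, rcond=None)
    (b, C, D), *_ = np.linalg.lstsq(M, v, rcond=None)
    return (a, A, B), (b, C, D)

# Synthetic check: a rotation-like field U = -0.1*y, V = 0.1*x
rng = np.random.default_rng(2)
x, y = rng.random(50), rng.random(50)
u, v = -0.1 * y, 0.1 * x
(a, A, B), (b, C, D) = fit_affine_flow(x, y, u, v)
print(round(B, 6), round(C, 6))  # -0.1 and 0.1
```

The fitted parameters separate the motion components: (a, b) give local translation, A + D measures contraction/expansion, C - B measures rotation, and the remaining combinations measure shear deformation.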
13.
14.
Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion because of the motion ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated and the exact motion of a target point (that is, of the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
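The exo-camera triangulation step can be sketched with the standard linear (DLT) method, given two projection matrices; the camera setup below is illustrative, not from the paper:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3-D point from its pixel
    projections x1, x2 under 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A, homogeneous point
    return X[:3] / X[3]

# Illustrative setup: two cameras with identity intrinsics, 1 m baseline
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
h1 = P1 @ np.append(X_true, 1.0); x1 = h1[:2] / h1[2]
h2 = P2 @ np.append(X_true, 1.0); x2 = h2[:2] / h2[2]
print(np.allclose(triangulate_dlt(P1, P2, x1, x2), X_true))  # True
```

With the position fixed by triangulation, only the orientation remains to be estimated from the ego-camera, which removes the rotation-translation ambiguity described above.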
15.
Calibration of the scanning direction of a line-structured-light 3D measurement system  Total citations: 1 (self: 1, other: 0)
A method for calibrating the scanning direction of a line-structured-light 3D sensor based on a planar target is proposed. The camera is first calibrated with the planar target to obtain its intrinsic parameters. A checkerboard planar target is then fixed at a position in space while the measurement system moves along the scanning direction and captures a sequence of images. From this image sequence, the camera's extrinsic parameters are computed and combined with the previously obtained intrinsics to calculate the coordinates of the same target feature point in the camera coordinate system. A straight line is fitted to these points, and its direction is the scanning direction of the measurement system. Experiments show that the method achieves high accuracy, is simple to operate, and requires no auxiliary alignment equipment, reducing the cost of calibration hardware and the difficulty of system alignment and making it suitable for on-site calibration.
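The line-fitting step (recovering the scan direction from the tracked feature point's 3-D positions) can be sketched as a principal-component fit; the data below are synthetic:

```python
import numpy as np

def fit_line_direction(points):
    """Fit a 3-D line to points of shape (N, 3) in the least-squares
    sense; return the unit direction (first principal component)."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    d = Vt[0]
    return d if d[2] >= 0 else -d  # fix an arbitrary sign convention

# Illustrative data: equal steps along a known scan direction
direction = np.array([0.0, 0.6, 0.8])
steps = np.arange(10).reshape(-1, 1)
points = np.array([1.0, 2.0, 3.0]) + steps * direction
print(np.allclose(fit_line_direction(points), direction))  # True
```

The SVD-based fit minimizes the orthogonal distances of the points to the line, which is the appropriate criterion here since all three coordinates carry measurement noise.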
16.
Monitoring the motion state of non-cooperative space targets is a core task of space surveillance and a prerequisite for subsequent on-orbit operations. Defunct satellites and space debris spin about their own axes, and accurately determining the target's rotation vector, including the spin rate and the rotation-axis direction, is key to servicing and capture. This paper proposes a method for estimating the rotation vector of a non-cooperative space target that simultaneously produces a 3-D image of the target. First, interferometric inverse synthetic aperture radar (InISAR) imaging is used to obtain the 3-D positions of the target's scattering centers and an estimate of the effective rotation vector. The total spin rate is then estimated via micro-Doppler feature extraction, and by combining the effective rotation vector with the total spin rate, the unknown velocity component along the radar line of sight is estimated, yielding the target's full rotation vector. Multiple simulation experiments validate the effectiveness of the proposed method; performance analysis shows that it provides accurate rotation-vector estimates together with good 3-D imaging results.
17.
18.
Wei-Ge Chen Giannakis G.B. Nandhakumar N. 《IEEE transactions on image processing》1996,5(10):1448-1461
Image motion estimation using the spatiotemporal approach has largely relied on the constant velocity assumption, and thus becomes inappropriate when the velocity of the imaged scene or the camera changes during the data acquisition time. Using a polynomial or a trigonometric polynomial model for the time variation of the image motion, spatiotemporal algorithms are developed in this paper to handle time-varying (but space-invariant) motion. Under these models, it is shown that time-varying image motion estimation is equivalent to parameter estimation of one-dimensional (1-D) polynomial phase or phase-modulated signals, which allows one to exploit well-established results in radar signal processing. When compared with alternative approaches, the resulting motion estimation algorithms produce more accurate estimates. Simulation results are provided to demonstrate the proposed schemes.
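The reduction to 1-D polynomial-phase estimation can be illustrated with a simple noise-free sketch: the coefficients of a quadratic (linear-FM) phase are recovered by a polynomial fit to the unwrapped signal phase (under noise, robust radar estimators such as the high-order ambiguity function are used instead):

```python
import numpy as np

# A linear-FM (quadratic-phase) signal: s(t) = exp(j*(a0 + a1*t + a2*t^2))
t = np.linspace(0.0, 1.0, 200)
a0, a1, a2 = 0.5, 3.0, 8.0
s = np.exp(1j * (a0 + a1 * t + a2 * t ** 2))

# Noise-free case: unwrap the principal-value phase and fit a polynomial
phase = np.unwrap(np.angle(s))
coeffs = np.polyfit(t, phase, deg=2)  # returns [a2, a1, a0]
print(np.allclose(coeffs, [a2, a1, a0]))  # True
```

Under the paper's models, the time-varying image motion plays the role of the instantaneous frequency of such a signal, so estimating the motion coefficients is exactly this parameter-estimation problem.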
19.
In automated optical inspection of printed circuit boards, registration is usually performed by detecting the straight edges of the board to determine its rotation angle and translation; this suffers from heavy computation and limited accuracy. The traditional Hough-transform line-detection method is improved in three respects to increase efficiency: using an image pyramid, constraining the voting angles, and constraining the search region; detection accuracy is further improved by least-squares fitting. For registration, the improved Hough transform first obtains approximate parameters of the board's edge lines; the edge lines are then refined by fitting on the original image to obtain the board's precise rotation angle and translation, completing the registration. Experiments show that the proposed method registers circuit boards accurately: for a 1920×1080-pixel board image, the registration time is about 60 ms and the registration error is less than 1 pixel.
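The least-squares refinement step (fitting a line to edge pixels near the coarse Hough estimate to recover a precise rotation angle) can be sketched as follows; the edge points are synthetic:

```python
import numpy as np

def refine_line(xs, ys):
    """Least-squares fit of y = k*x + b to edge pixels near a coarse
    Hough line; returns (k, b) and the implied rotation angle in radians."""
    A = np.column_stack([xs, np.ones_like(xs)])
    (k, b), *_ = np.linalg.lstsq(A, ys, rcond=None)
    return k, b, np.arctan(k)

# Synthetic edge pixels along a board edge rotated by 2 degrees
xs = np.arange(0.0, 100.0)
k_true = np.tan(np.deg2rad(2.0))
ys = k_true * xs + 5.0
k, b, angle = refine_line(xs, ys)
print(np.isclose(np.rad2deg(angle), 2.0), np.isclose(b, 5.0))  # True True
```

For near-vertical board edges the roles of x and y should be swapped (or a total-least-squares fit used), since the y = k*x + b parameterization degenerates there.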
20.
Practical self-calibration of pan-tilt cameras  Total citations: 1 (self: 0, other: 1)
The authors propose a practical self-calibration method for rotating and zooming cameras. The problem with previous methods occurs when the camera motion is almost fully zoomed with very little rotation, which is called the 'near-degenerate' configuration. In that case, the solutions become unstable and rotation angles cannot be calculated. When a pan-tilt camera (without z-axis rotation) is adopted and the intrinsic camera parameters are simplified, the near-degenerate configuration can be overcome and a closed-form solution obtained. Because pan-tilt motion can be assumed for most stationary cameras (i.e. those without translation) and the assumptions about the intrinsic camera parameters do not seem to affect the self-calibration, the method provides a simple, practical solution to the self-calibration problem. In addition, the authors introduce a nonlinear algorithm that adjusts not only the camera parameters but also the inter-image homography, so that more accurate image registration is made possible. Simulations and experiments with real images are presented.