Similar Articles
12 similar articles found.
1.
A Robust Greedy Estimation Algorithm for the Fundamental Matrix
This paper analyses the shortcomings of existing robust algorithms based on random sampling and consensus testing when they are applied to the estimation of the fundamental matrix, and proposes an algorithm for obtaining an optimal solution. Inlier sets are first obtained with various robust techniques; the distance from a point to its epipolar line is then used as the optimality measure, and a greedy strategy searches the inlier set for an optimal subset, from which the fundamental matrix is computed. Experiments on synthetic data and real images show that the algorithm outperforms existing robust methods in estimation accuracy, resistance to noise, and stability of the epipoles.
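The optimality measure used above is the distance from a matched point to its epipolar line. Below is a minimal numpy sketch of that measure, assuming the convention x2ᵀ F x1 = 0; the authors' greedy subset search and any weighting are not reproduced here.

```python
import numpy as np

def point_to_epiline_distance(F, x1, x2):
    """Symmetric point-to-epipolar-line distances for matches x1 <-> x2.

    F  : 3x3 fundamental matrix (x2^T F x1 = 0 for ideal matches)
    x1 : Nx2 points in image 1, x2 : Nx2 points in image 2
    Returns N distances (average of the distances measured in the two images).
    """
    N = x1.shape[0]
    h1 = np.hstack([x1, np.ones((N, 1))])    # homogeneous coordinates
    h2 = np.hstack([x2, np.ones((N, 1))])
    l2 = h1 @ F.T                            # epipolar lines in image 2 (rows)
    l1 = h2 @ F                              # epipolar lines in image 1 (rows)
    d2 = np.abs(np.sum(l2 * h2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])
    d1 = np.abs(np.sum(l1 * h1, axis=1)) / np.hypot(l1[:, 0], l1[:, 1])
    return 0.5 * (d1 + d2)
```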

2.
This paper analyses the principle of motion detection in image sequences by the maximum-likelihood-ratio method, under the condition that local image grey-level regions follow a quadratic model, together with the problem of selecting the detection threshold at a given confidence level. A set of mathematical detection models is derived, further completing the theory of change detection.
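Under a Gaussian-noise assumption, a likelihood-ratio test for "no change" in a local window reduces to comparing a normalised sum of squared frame differences against a chi-square quantile at the chosen confidence level. The sketch below illustrates that generic form of the test; the paper's quadratic grey-level model and exact statistic are not reproduced, and the window size, noise level, and confidence value are illustrative parameters.

```python
import numpy as np
from scipy.stats import chi2

def changed_windows(frame0, frame1, win=8, sigma=2.0, confidence=0.99):
    """Flag win x win blocks whose squared frame difference exceeds a chi-square
    threshold at the given confidence level.

    sigma : standard deviation of the frame-difference noise (Gaussian assumption)
    """
    H, W = frame0.shape
    thresh = chi2.ppf(confidence, df=win * win)      # threshold for the test statistic
    mask = np.zeros((H // win, W // win), dtype=bool)
    for i in range(0, H - win + 1, win):
        for j in range(0, W - win + 1, win):
            d = frame0[i:i+win, j:j+win].astype(float) - frame1[i:i+win, j:j+win]
            stat = np.sum(d * d) / sigma ** 2        # ~ chi-square(win*win) if unchanged
            mask[i // win, j // win] = stat > thresh
    return mask
```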

3.
This paper addresses the recovery of structure and motion from uncalibrated images of a scene under full perspective or under affine projection. Particular emphasis is placed on the configuration of two views, while the extension to $N$ views is given in the Appendix. A unified expression of the fundamental matrix is derived which is valid for any projection model without lens distortion (including the full perspective and affine cameras). Affine reconstruction is treated as a special case of projective reconstruction. The theory is presented so that any reader with a knowledge of linear algebra can follow the discussion without difficulty. A new technique for affine reconstruction is developed, which consists of first estimating the affine epipolar geometry and then performing a triangulation for each point match with respect to an implicit common affine basis.
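The reconstruction technique ends with a triangulation of each point match. A minimal linear (DLT) triangulation sketch is given below, assuming two known 3×4 projection matrices as inputs rather than the paper's implicit common affine basis.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear triangulation of one point match.

    P1, P2 : 3x4 projection matrices
    x1, x2 : (u, v) image coordinates of the match in each view
    Returns the 3D point (homogeneous least-squares solution, dehomogenised).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # right singular vector of the smallest singular value
    return X[:3] / X[3]
```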

4.
Estimation of parameters from image tokens is a central problem in computer vision. FNS, CFNS and HEIV are three recently developed methods for solving special but important cases of this problem. The schemes are means for finding unconstrained (FNS, HEIV) and constrained (CFNS) minimisers of cost functions. In earlier work of the authors, FNS, CFNS and a core version of HEIV were applied to a specific cost function. Here we extend the approach to more general cost functions. This allows the FNS, CFNS and HEIV methods to be placed within a common framework.

Wojciech Chojnacki is a professor of mathematics in the Department of Mathematics and Natural Sciences at Cardinal Stefan Wyszyński University in Warsaw. He is concurrently a senior research fellow in the School of Computer Science at the University of Adelaide, working on a range of problems in computer vision. His research interests include differential equations, mathematical foundations of computer vision, functional analysis, and harmonic analysis. He is the author of over 70 articles on pure mathematics and machine vision, and a member of the Polish Mathematical Society.

Michael J. Brooks holds the Chair in Artificial Intelligence within the University of Adelaide's School of Computer Science, which he heads. He is also leader of the Image Analysis Program within the Cooperative Research Centre for Sensor Signal and Information Processing, based in South Australia. His research interests include structure from motion, self-calibration, metrology, statistical vision-parameter estimation, and video surveillance and analysis. He is the author of over 100 articles on vision, actively involved in a variety of commercial applications, an Associate Editor of the International Journal of Computer Vision, and a Fellow of the Australian Computer Society.

Anton van den Hengel is a senior lecturer in the School of Computer Science at the University of Adelaide. He is also leader of the Video Surveillance and Analysis Project within the Cooperative Research Centre for Sensor Signal and Information Processing. His research interests include structure from motion, parameter estimation theory, and commercial applications of computer vision.

Darren Gawley graduated with first-class honours from the School of Computer Science at the University of Adelaide. He holds a temporary lectureship at the same university and is currently finalising his PhD in the field of computer vision.

5.
In this paper, we present an analytic solution to the problem of estimating an unknown number of 2-D and 3-D motion models from two-view point correspondences or optical flow. The key to our approach is to view the estimation of multiple motion models as the estimation of a single multibody motion model. This is possible thanks to two important algebraic facts. First, we show that all the image measurements, regardless of their associated motion model, can be fit with a single real or complex polynomial. Second, we show that the parameters of the individual motion model associated with an image measurement can be obtained from the derivatives of the polynomial at that measurement. This leads to an algebraic motion segmentation and estimation algorithm that applies to most of the two-view motion models that have been adopted in computer vision. Our experiments show that the proposed algorithm outperforms existing algebraic and factorization-based methods in terms of efficiency and robustness, and provides a good initialization for iterative techniques, such as Expectation Maximization, whose performance strongly depends on good initialization.

This paper is an extended version of [34]. The authors thank Sampreet Niyogi for his help with the experimental section of the paper. This work was partially supported by Hopkins WSE startup funds, UIUC ECE startup funds, and by grants NSF CAREER IIS-0347456, NSF CAREER IIS-0447739, NSF CRS-EHS-0509151, NSF-EHS-0509101, NSF CCF-TF-0514955, ONR YIP N00014-05-1-0633 and ONR N00014-05-1-0836.

René Vidal received his B.S. degree in Electrical Engineering (highest honors) from the Universidad Católica de Chile in 1997, and his M.S. and Ph.D. degrees in Electrical Engineering and Computer Sciences from the University of California at Berkeley in 2000 and 2003, respectively. In 2004, he joined The Johns Hopkins University as an Assistant Professor in the Department of Biomedical Engineering and the Center for Imaging Science. He has co-authored more than 70 articles in biomedical imaging, computer vision, machine learning, hybrid systems, robotics, and vision-based control. Dr. Vidal is the recipient of the 2005 NSF CAREER Award, the 2004 Best Paper Award Honorable Mention at the European Conference on Computer Vision, the 2004 Sakrison Memorial Prize, the 2003 Eli Jury Award, and the 1997 Award of the School of Engineering of the Universidad Católica de Chile to the best graduating student of the school.

Yi Ma received two bachelor's degrees, in Automation and in Applied Mathematics, from Tsinghua University, Beijing, China in 1995. He received an M.S. degree in Electrical Engineering and Computer Science (EECS) in 1997, an M.A. degree in Mathematics in 2000, and a PhD degree in EECS in 2000, all from the University of California at Berkeley. Since August 2000, he has been on the faculty of the Electrical and Computer Engineering Department of the University of Illinois at Urbana-Champaign, where he is now an associate professor. In fall 2006, he is a visiting faculty member at Microsoft Research Asia, Beijing, China. He has written more than 40 technical papers and is the first author of the book “An Invitation to 3-D Vision: From Images to Geometric Models,” published by Springer in 2003. Yi Ma was the recipient of the David Marr Best Paper Prize at the International Conference on Computer Vision in 1999 and Honorable Mention for the Longuet-Higgins Best Paper Award at the European Conference on Computer Vision in 2004. He received the CAREER Award from the National Science Foundation in 2004 and the Young Investigator Program Award from the Office of Naval Research in 2005.
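The two algebraic facts in the abstract (a single polynomial fits data from all models, and each model's parameters come from the polynomial's derivative at a data point) can be illustrated on the simplest possible case: two lines through the origin in the plane. The numpy sketch below shows only that toy case; it is not the paper's multibody motion algorithm.

```python
import numpy as np

# Points drawn from two lines through the origin: b1.x = 0 and b2.x = 0.
rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, 50)
pts = np.vstack([np.c_[t, 2 * t],          # line y = 2x    (normal b1 ~ [2, -1])
                 np.c_[t, -0.5 * t]])      # line y = -x/2  (normal b2 ~ [1, 2])

# Fact 1: every point satisfies one quadratic p(x, y) = c1 x^2 + c2 xy + c3 y^2 = 0
# (the product of the two linear forms). Fit c from the null space of the
# Veronese-embedded data matrix.
V = np.c_[pts[:, 0] ** 2, pts[:, 0] * pts[:, 1], pts[:, 1] ** 2]
_, _, Vt = np.linalg.svd(V)
c = Vt[-1]

# Fact 2: the gradient of p at a point lying on line i is proportional to b_i.
def normal_at(x, y):
    g = np.array([2 * c[0] * x + c[1] * y, c[1] * x + 2 * c[2] * y])
    return g / np.linalg.norm(g)

print(normal_at(1.0, 2.0))    # ~ +/- [2, -1] / |[2, -1]|  (normal of y = 2x)
print(normal_at(1.0, -0.5))   # ~ +/- [1, 2] / |[1, 2]|    (normal of y = -x/2)
```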

6.
The renormalisation technique of Kanatani is intended to iteratively minimise a cost function of a certain form while avoiding the systematic bias inherent in the common method of minimisation due to Sampson. Within the computer vision community, the technique has generally proven difficult to absorb. This work presents an alternative derivation of the technique and places it in the context of other approaches. We first show that the minimiser of the cost function must satisfy a special variational equation. A Newton-like, fundamental numerical scheme is presented with the property that its theoretical limit coincides with the minimiser. Standard statistical techniques are then employed to derive afresh several renormalisation schemes. The fundamental scheme proves pivotal in rationalising the renormalisation and other schemes, and enables us to show that the renormalisation schemes do not have the desired minimiser as their theoretical limit. The various minimisation schemes are finally subjected to a comparative performance analysis under controlled conditions.
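Fundamental-numerical-scheme-style methods iterate an eigenvector computation whose fixed points satisfy the variational equation of a cost of the form J(θ) = Σ_i (θᵀ A_i θ)/(θᵀ B_i θ). The sketch below shows one common form of such an iteration, assuming the symmetric data matrices A_i and B_i are already assembled; it is an illustration, not a line-by-line reproduction of the schemes compared in the paper.

```python
import numpy as np

def fns_minimise(A_list, B_list, theta0, iters=50):
    """FNS-style iteration for minimising J(theta) = sum_i (th' A_i th) / (th' B_i th).

    A_list, B_list : symmetric d x d matrices built from the data and its covariances
    theta0         : initial parameter vector (e.g. from a linear least-squares method)
    """
    theta = theta0 / np.linalg.norm(theta0)
    d = theta.size
    for _ in range(iters):
        X = np.zeros((d, d))
        for A, B in zip(A_list, B_list):
            a = theta @ A @ theta
            b = theta @ B @ theta
            X += A / b - (a / b ** 2) * B        # the minimiser satisfies X(theta) theta = 0
        w, V = np.linalg.eigh(X)                 # X is symmetric
        new = V[:, np.argmin(np.abs(w))]         # eigenvector with eigenvalue closest to zero
        new = new if new @ theta >= 0 else -new  # resolve the sign ambiguity
        if np.linalg.norm(new - theta) < 1e-10:
            return new
        theta = new
    return theta
```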

7.
The aim of this paper is to explore a linear geometric algorithm for recovering the three-dimensional motion of a moving camera from image velocities. Generic similarities and differences between the discrete approach and the differential approach are clearly revealed through a parallel development of an analogous motion estimation theory previously explored in Vieville, T. and Faugeras, O.D. 1995, In Proceedings of the Fifth International Conference on Computer Vision, pp. 750–756, and Zhuang, X. and Haralick, R.M. 1984, In Proceedings of the First International Conference on Artificial Intelligence Applications, pp. 366–375. We present a precise characterization of the space of differential essential matrices, which gives rise to a novel eigenvalue-decomposition-based 3D velocity estimation algorithm from optical flow measurements. This algorithm gives a unique solution to the motion estimation problem and serves as a differential counterpart of the well-known SVD-based 3D displacement estimation algorithm for the discrete case. Since the proposed algorithm only involves linear algebra techniques, it may be used to provide a fast initial guess for more sophisticated nonlinear algorithms (Ma et al., 1998c, Electronic Research Laboratory Memorandum, UC Berkeley, UCB/ERL(M98/37)). Extensive simulation results are presented for evaluating the performance of our algorithm in terms of bias and sensitivity of the estimates with respect to different noise levels in image velocity measurements.
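The linear step of such differential methods stacks the continuous epipolar constraint, which for calibrated points x with measured velocities ẋ can be written ẋᵀ [v]ₓ x + xᵀ S x = 0, where S is the symmetric part of the product of the angular- and linear-velocity skew matrices. The sketch below only assembles and solves that linear system for (v, S) under those assumptions; recovering (v, ω) from the solution requires the eigenvalue decomposition developed in the paper and is not shown.

```python
import numpy as np

def differential_epipolar_linear(x, xdot):
    """Stack the continuous epipolar constraint  xdot'[v]_x x + x'Sx = 0  and solve
    for (v, S) up to a common scale.

    x    : N x 3 calibrated image points (homogeneous, third coordinate 1)
    xdot : N x 3 image velocities (third coordinate 0); assumes non-zero translation
    """
    rows = []
    for xi, ui in zip(x, xdot):
        cv = np.cross(xi, ui)            # xdot'(v x x) = v . (x x xdot): coefficients of v
        a, b, c = xi
        cs = [a * a, 2 * a * b, 2 * a * c, b * b, 2 * b * c, c * c]  # x'Sx, S symmetric
        rows.append(np.concatenate([cv, cs]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    p = Vt[-1]                           # null vector [v; s11 s12 s13 s22 s23 s33]
    scale = np.linalg.norm(p[:3])
    v = p[:3] / scale
    s = p[3:] / scale
    S = np.array([[s[0], s[1], s[2]], [s[1], s[3], s[4]], [s[2], s[4], s[5]]])
    return v, S                          # overall sign remains ambiguous
```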

8.
Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if the images' internal parameters are known, or the fundamental matrix otherwise. It captures all the geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software we have developed for this review is available on the Internet.
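The simplest estimator covered by such reviews is the normalised eight-point algorithm: normalise the points, solve a homogeneous linear system, enforce rank 2, and undo the normalisation. A compact sketch, assuming matched points are supplied as Nx2 arrays and the convention x2ᵀ F x1 = 0:

```python
import numpy as np

def _normalise(pts):
    """Translate to the centroid and scale so the mean distance is sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    h = np.hstack([pts, np.ones((len(pts), 1))]) @ T.T
    return h, T

def eight_point(x1, x2):
    """Normalised eight-point estimate of F such that x2' F x1 = 0 (N >= 8 matches)."""
    h1, T1 = _normalise(x1)
    h2, T2 = _normalise(x2)
    A = np.einsum('ni,nj->nij', h2, h1).reshape(len(h1), 9)   # one constraint row per match
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt                     # enforce rank 2
    F = T2.T @ F @ T1                                         # undo the normalisation
    return F / np.linalg.norm(F)
```

The normalisation step is what makes the plain eight-point algorithm numerically usable; without it the linear system is badly conditioned for typical pixel coordinates.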

9.
Restoration of Images Blurred by Uniform Linear Motion
Combining knowledge from computer graphics and digital image processing, this paper proposes an algorithm that constructs the point-spread function directly along the motion direction and uses the Hough transform to detect the motion parameters. This resolves the problem that the spectrum of a motion-blurred image contains zeros, so the image cannot be restored accurately in the frequency domain. A detailed algorithm and experimental results are given.
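The point-spread function of uniform linear motion is a normalised line segment of the detected length and direction. A small sketch of building that kernel follows; the paper's Hough-based parameter detection and its spatial-domain restoration along the motion direction are not reproduced.

```python
import numpy as np

def linear_motion_psf(length, angle_deg, size=None):
    """Point-spread function of uniform linear motion.

    length    : blur length in pixels (>= 1)
    angle_deg : motion direction in degrees (0 = horizontal)
    size      : odd kernel size; defaults to the smallest odd size that fits the segment
    """
    if size is None:
        size = int(np.ceil(length)) | 1                 # force an odd size
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # rasterise the segment by sampling points along the motion direction
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, int(np.ceil(length)) * 4):
        r = int(round(c - t * np.sin(theta)))           # row (image y grows downwards)
        q = int(round(c + t * np.cos(theta)))           # column
        if 0 <= r < size and 0 <= q < size:
            psf[r, q] += 1.0
    return psf / psf.sum()                              # energy-preserving kernel
```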

10.
In order to reconstruct 3-D shape from two uncalibrated views, one needs to resolve two problems: (i) the computed focal lengths can be imaginary; (ii) the computation fails for fixated images. We present a practical remedy for these by subsampling feature points and fixing the focal length. We first summarize the theoretical background and then run simulations, which reveal the rather surprising fact that when the focal length is actually fixed, not using that knowledge yields better results for non-fixated images. We explain this seeming paradox and derive a hybrid method that switches the computation by judging whether or not the images are fixated. Through simulations and real-image experiments, we demonstrate the effectiveness of our method.

11.
In this paper we study the problem of recovering the 3D shape, reflectance, and non-rigid motion properties of a dynamic 3D scene. Because these properties are completely unknown and because the scene's shape and motion may be non-smooth, our approach uses multiple views to build a piecewise-continuous geometric and radiometric representation of the scene's trace in space-time. A basic primitive of this representation is the dynamic surfel, which (1) encodes the instantaneous local shape, reflectance, and motion of a small and bounded region in the scene, and (2) enables accurate prediction of the region's dynamic appearance under known illumination conditions. We show that complete surfel-based reconstructions can be created by repeatedly applying an algorithm called Surfel Sampling that combines sampling and parameter estimation to fit a single surfel to a small, bounded region of space-time. Experimental results with the Phong reflectance model and complex real scenes (clothing, shiny objects, skin) illustrate our method's ability to explain pixels and pixel variations in terms of their underlying causes: shape, reflectance, motion, illumination, and visibility.
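A surfel predicts appearance under known illumination through a reflectance model; with Phong, the predicted radiance at a surface point combines a diffuse and a specular term. A minimal sketch of that prediction for a single point light is given below; the parameter names are illustrative and are not taken from the paper.

```python
import numpy as np

def phong_radiance(n, l, v, kd, ks, shininess, light=1.0, ambient=0.0):
    """Phong shading at a surface point.

    n, l, v   : unit surface normal, unit direction to the light, unit direction to the viewer
    kd, ks    : diffuse and specular reflectance coefficients
    shininess : Phong exponent controlling the width of the specular highlight
    """
    n_dot_l = max(np.dot(n, l), 0.0)
    r = 2.0 * n_dot_l * n - l                                   # mirror reflection of the light direction
    spec = max(np.dot(r, v), 0.0) ** shininess if n_dot_l > 0 else 0.0
    return ambient + light * (kd * n_dot_l + ks * spec)
```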

12.