Similar Documents
A total of 20 similar documents were found (search time: 15 ms).
1.
Motion vectors are a significant feature in moving object segmentation. However, the motion vector in this application must represent the actual motion displacement, rather than regions of visually significant similarity. In this paper, region-based selective optical flow back-projection (RSOFB), which back-projects the optical flows in a region to restore the region's motion vector from gradient-based optical flows, is proposed to obtain the genuine motion displacement. The back-projection is performed by minimizing the projection mean square error of the motion vector on the gradient directions. As optical flows of various magnitudes and directions provide various degrees of reliability in restoring the genuine motion, the optical flows used in the RSOFB are optimally selected based on their sensitivity to noise and their tendency to cause motion estimation errors. In this paper, a deterministic solution is also derived for performing the minimization and obtaining the genuine motion magnitude and direction.
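The optimal-selection rule in RSOFB is specific to this paper, but the least-squares core it builds on (recovering one region motion vector whose projections onto the local gradient directions best match the brightness-constancy constraints) can be sketched as below. The function name, the gradient-magnitude selection step and `select_frac` are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def region_motion_from_normal_flows(Ix, Iy, It, select_frac=0.7):
    """Recover a single motion vector (u, v) for a region by least squares
    on the brightness-constancy projections  Ix*u + Iy*v + It = 0.

    Sketch only: the selection step below simply keeps the constraints with
    the strongest gradients, a stand-in for the paper's optimal selection
    criterion, not a reproduction of it."""
    Ix, Iy, It = (a.ravel().astype(float) for a in (Ix, Iy, It))

    # Keep the most reliable constraints (largest gradient magnitude).
    mag = np.hypot(Ix, Iy)
    keep = mag >= np.quantile(mag, 1.0 - select_frac)
    A = np.stack([Ix[keep], Iy[keep]], axis=1)   # N x 2
    b = -It[keep]                                # N

    # Minimise || A [u, v]^T - b ||^2 via the normal equations.
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv                                    # array([u, v])
```

Given derivative images of a segmented region, `region_motion_from_normal_flows(Ix, Iy, It)` returns one displacement for the whole region rather than a per-pixel field.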

2.
We propose a scheme for comparing local neighborhoods (windows) of image points to estimate optical flow using discrete optimization. The proposed approach is based on using large correlation windows with adaptive support-weights. We present three new types of weighting constraints derived from image gradients, color statistics and occlusion information. The first type provides gradient structure constraints that favor flow consistency across strong image gradients. The second type imposes perceptual color constraints that reinforce relationships among pixels in a window according to their color statistics. The third type yields occlusion constraints that reject pixels that are seen in one window but not in the other. All these constraints help suppress the effect of cluttered background, which is unavoidably included in large correlation windows. Experimental results demonstrate that each of the proposed constraints appreciably elevates the quality of the estimates, and that together they yield results that compare favorably to current techniques, especially on object boundaries.
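A minimal sketch of the adaptive support-weight matching that this scheme builds on: each pixel in the correlation window is weighted by its colour similarity and spatial proximity to the window centre, and the two windows' weights are multiplied before aggregating the cost. The gradient and occlusion constraints described in the abstract are omitted, and `gamma_c`/`gamma_d` are assumed parameters.

```python
import numpy as np

def support_weights(patch, gamma_c=10.0, gamma_d=9.0):
    """Support weight of every pixel in a colour window (H x W x 3): high when
    the pixel is similar in colour and close in space to the window centre."""
    patch = patch.astype(float)
    h, w, _ = patch.shape
    center = patch[h // 2, w // 2]
    yy, xx = np.mgrid[0:h, 0:w]
    d_color = np.linalg.norm(patch - center, axis=2)
    d_space = np.hypot(yy - h // 2, xx - w // 2)
    return np.exp(-d_color / gamma_c - d_space / gamma_d)

def weighted_window_cost(ref_patch, tgt_patch):
    """Aggregated absolute-difference cost of two colour windows, weighted by
    the product of their support weights so that cluttered-background pixels
    contribute little to the match score."""
    w = support_weights(ref_patch) * support_weights(tgt_patch)
    e = np.abs(ref_patch.astype(float) - tgt_patch.astype(float)).sum(axis=2)
    return float((w * e).sum() / w.sum())
```

In a discrete-optimization setting this cost would be evaluated for each candidate displacement and the minimum taken, which is why the weighting matters for large windows.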

3.
The computation of optical flow within an image sequence is one of the most widely used techniques in computer vision. In this paper, we present a new approach to estimating the velocity field for motion-compensated compression. It is derived from a nonlinear system using the direct temporal integral of the brightness conservation constraint equation, or Displaced Frame Difference (DFD) equation. To solve the nonlinear system of equations, an adaptive framework is used, which employs velocity field modeling, a nonlinear least-squares model, Gauss–Newton and Levenberg–Marquardt techniques, and an algorithm for the progressive relaxation of the over-constraint. The three criteria by which successful motion-compensated compression is judged are: 1) the fidelity with which the estimated optical flow matches the ground truth motion, 2) the relative absence of artifacts and “dirty window” effects in frame interpolation, and 3) the cost of coding the motion vector field. We base our estimated flow field on a single minimized target function, which leads to motion-compensated predictions without incurring penalties on any of these three criteria. In particular, we compare our proposed algorithm with Block-Matching Algorithms (BMA), and show that with nearly the same number of displacement vectors per fixed block size, the performance of our algorithm exceeds that of BMA on all three of the above criteria. We also test the algorithm on synthetic and natural image sequences, and use it to demonstrate applications for motion-compensated compression.
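The adaptive framework is too large for a snippet, but its central step, Gauss-Newton minimisation of the DFD, can be sketched for a single translational displacement per block. This is the textbook iteration, not the authors' velocity-field model; the derivative scheme and convergence settings are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates, sobel

def block_displacement_gauss_newton(f0, f1, iters=20):
    """Estimate one displacement d = (dx, dy) between two blocks by
    Gauss-Newton on the DFD  r(d) = f1(x + d) - f0(x)."""
    f0 = f0.astype(float)
    f1 = f1.astype(float)
    # Gradient images of f1 (Sobel response divided by 8 approximates a
    # unit-spacing derivative).
    gx_img = sobel(f1, axis=1) / 8.0
    gy_img = sobel(f1, axis=0) / 8.0
    yy, xx = np.mgrid[0:f0.shape[0], 0:f0.shape[1]].astype(float)

    d = np.zeros(2)                                   # (dx, dy)
    for _ in range(iters):
        coords = [yy + d[1], xx + d[0]]               # rows, cols
        warped = map_coordinates(f1, coords, order=1, mode='nearest')
        gx = map_coordinates(gx_img, coords, order=1, mode='nearest')
        gy = map_coordinates(gy_img, coords, order=1, mode='nearest')
        r = (warped - f0).ravel()                     # DFD residual
        J = np.stack([gx.ravel(), gy.ravel()], axis=1)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None) # Gauss-Newton step
        d += step
        if np.linalg.norm(step) < 1e-3:
            break
    return d
```

A Levenberg–Marquardt variant would simply damp the normal equations when a step increases the residual.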

4.
Optical flow computation has been extensively used for motion estimation of objects in image sequences. The results obtained by most optical flow techniques are computationally intensive due to the large amount of data involved. A new change-based data-flow pipelined architecture has been developed implementing the Horn and Schunck smoothness constraint: pixels of the image sequence that change significantly fire the execution of the operations related to the image processing algorithm. This strategy reduces the data and, combined with the custom hardware implemented, achieves a significant optical flow computation speed-up with no loss of accuracy. This paper presents the basis of the change-driven data-flow image processing strategy, as well as the implementation of the custom hardware developed using an Altera Stratix PCI development board.
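The change-driven hardware cannot be reproduced here, but the Horn and Schunck scheme it implements is standard; a minimal NumPy version of the Jacobi-style iteration is sketched below, with illustrative values for the smoothness weight and iteration count.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=15.0, iters=100):
    """Classic Horn and Schunck optical flow: brightness constancy plus a
    global smoothness term, solved by Jacobi-style fixed-point iterations."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    # Simple 2x2 derivative estimates averaged over both frames.
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im1, np.full((2, 2), -0.25)) + convolve(im2, np.full((2, 2), 0.25))
    # Neighbourhood average of the flow used by the update rule.
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], float) / 12.0

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(iters):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v
```

Larger `alpha` smooths the field more aggressively; smaller values follow the data term more closely. The change-driven idea in the abstract amounts to updating only pixels whose inputs changed significantly between frames.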

Julio C. Sosa   received the degree in electronic engineering in 1997 from the Instituto Tecnológico de Lázaro Cárdenas, México, and the M.Sc. degree in electrical engineering in 2000 from the Centro de Investigación y de Estudios Avanzados del I.P.N., México, and is a Ph.D. candidate at the University of Valencia, Spain. He is currently an associate professor in the Postgraduate Department of the Escuela Superior de Cómputo—I.P.N., México. His research interests include hardware architectures, artificial intelligence and microelectronics. Jose A. Boluda   was born in Xàtiva (Spain) in 1969. He graduated in physics (1992) and received his Ph.D. in physics (2000), both at the University of Valencia. From 1993 he was with the Electronics and Computer Science Department of the University of Valencia, Spain, where he collaborated in several projects related to ASIC design and image processing. He has been a visiting researcher with the Department of Electrical Engineering at the University of Virginia, USA, and the Department of Applied Informatics at the University of Macedonia, Greece. He is currently Titular Professor in the Department of Informatics at the University of Valencia. His research interests include reconfigurable systems, VHDL hardware design, programmable logic synthesis and sensor design. Fernando Pardo   received the M.S. degree in physics from the University of Valencia, Valencia, Spain, in 1991, and the Ph.D. in computer engineering from the University of Valencia in 1997. From 1991 to 1993 he was with the Electronics and Computer Science Department of the University of Valencia, Spain, where he collaborated in several research projects. In 1994 he was with the Integrated Laboratory for Advanced Robotics at the University of Genoa, Italy, where he worked on space-variant image processing. In 1994 he joined IMEC (Interuniversity Micro-Electronics Centre), Belgium, where he worked on projects related to CMOS space-variant image sensors. In 1995 he joined the University of Valencia, Spain, where he is currently Associate Professor and Head of the Computer Engineering Department. He is currently leading several projects on architectures for high-speed image processing and bio-inspired image sensors. Rocío Gómez-Fabela   was born in México City in 1979. She received the Computer Engineering degree in 2001 from the Escuela Superior de Cómputo, México. She is currently studying towards a Ph.D. in the Department of Informatics, University of Valencia, Spain. Her current research interests are soft computing, reconfigurable systems and VHDL hardware design.

5.
This paper proposes an attitude estimation method based on optical flow to solve the attitude tracking control problem for a three degree-of-freedom (3-DOF) lab helicopter. First, the relationship between optical flow and the motion of a general unmanned aerial vehicle is derived from the transformation between the image coordinate frame and the world coordinate frame. Then, an expression for the angular velocity of the 3-DOF helicopter is deduced based only on optical flow, and the attitude information is acquired by solving nonlinear equations. Finally, using visual feedback, a linear quadratic regulation (LQR) controller is designed for hovering and tracking, which consists of a feedforward controller and an LQR state feedback controller. Closed-loop experimental results on the lab helicopter demonstrate the effectiveness of the proposed estimation and control methods.
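The attitude-from-optical-flow derivation is specific to the paper, but the LQR state-feedback part is standard; the sketch below computes the gain with SciPy, using a placeholder double-integrator model rather than the 3-DOF helicopter dynamics.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: minimise the integral of x'Qx + u'Ru subject to
    x_dot = A x + B u.  Returns K so that the control law is u = -K x."""
    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
    return np.linalg.solve(R, B.T @ P)     # K = R^{-1} B' P

# Placeholder single-axis double-integrator model (not the helicopter model).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = lqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[1.0]]))
```

In the closed loop described in the abstract, the state fed back through `K` would come from the optical-flow-based attitude estimate, with the feedforward term added on top.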

6.
We present a novel combined post-filtering (CPF) method to improve the accuracy of optical flow estimation. Its attractive advantages are that outlier reduction is attained while discontinuities are well preserved, and occlusions are partially handled. The major contributions are as follows. First, structure tensor (ST) based edge detection is introduced to extract flow edges. Moreover, we improve detection performance by extending the traditional 2D spatial edge detector into a spatial-scale 3D space, and by using a gradient bilateral filter (GBF) in place of the linear Gaussian filter to construct a multi-scale nonlinear ST. The GBF is useful for preserving discontinuities but is computationally expensive; a hybrid GBF and Gaussian filter (HGBGF) approach is therefore proposed, based on a spatial-scale gradient signal-to-noise ratio (SNR) measure, to address the low efficiency. Additionally, a piecewise occlusion detection method is used to extract occlusions. Second, we apply a CPF method, which uses a weighted median filter (WMF), a bilateral filter (BF) and a fast median filter (MF) to post-smooth the detected edges, the occlusions, and the remaining flat regions of the flow field, respectively. Benchmark tests on both synthetic and real sequences demonstrate the effectiveness of our method.
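Of the filters named above, the weighted median filter is the easiest to illustrate. The sketch below applies a WMF to a flow field with weights taken from colour similarity to the centre pixel, a common choice for flow post-filtering but not necessarily the weighting used in the CPF pipeline; `radius` and `sigma_c` are assumed parameters.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: smallest value whose cumulative weight reaches half."""
    order = np.argsort(values)
    cw = np.cumsum(weights[order])
    return values[order][np.searchsorted(cw, 0.5 * cw[-1])]

def wmf_flow(flow, image, radius=3, sigma_c=10.0):
    """Post-smooth a flow field (H x W x 2) with a weighted median filter whose
    weights come from colour similarity to the centre pixel of a colour image
    (H x W x 3).  Borders are left untouched in this sketch."""
    h, w = flow.shape[:2]
    out = flow.astype(float).copy()
    img = image.astype(float)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            wgt = np.exp(-np.linalg.norm(patch - img[y, x], axis=-1) / sigma_c).ravel()
            for c in range(2):
                vals = flow[y - radius:y + radius + 1,
                            x - radius:x + radius + 1, c].ravel()
                out[y, x, c] = weighted_median(vals, wgt)
    return out
```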

7.
This paper proposes a new optical flow smoothing methodology combining vector diffusion and robust statistics. Vector smoothing using diffusion preserves moving object boundaries and the main motion discontinuities. According to a study provided in the paper, diffusion does not remove outliers but spreads them out, introducing a bias in the neighbourhood. In this paper, robust statistics operators such as the median and the alpha-trimmed mean are considered for robustifying the diffusion kernels. The robust diffusion smoothing process is extended to 3-D lattices as well. The proposed algorithms are applied to smoothing artificially generated vector fields as well as the optical flow estimated from image sequences.
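The diffusion formulation itself is not reproduced here, but the alpha-trimmed mean named in the abstract is easy to sketch as a component-wise filter on a vector field; the neighbourhood size and trimming fraction below are illustrative.

```python
import numpy as np

def alpha_trimmed_mean_filter(flow, radius=1, alpha=0.2):
    """Component-wise alpha-trimmed mean over a (2*radius+1)^2 neighbourhood:
    the alpha fraction of smallest and largest samples is discarded before
    averaging, so outlier vectors are rejected instead of being spread out
    as a plain mean (or pure diffusion) would do."""
    h, w, c = flow.shape
    n = (2 * radius + 1) ** 2
    trim = int(alpha * n)
    out = flow.astype(float).copy()
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = flow[y - radius:y + radius + 1,
                         x - radius:x + radius + 1].reshape(n, c)
            srt = np.sort(patch, axis=0)          # sort each component
            out[y, x] = srt[trim:n - trim].mean(axis=0)
    return out
```

Setting `alpha=0` recovers the plain mean, while pushing it towards 0.5 approaches the (marginal) median, the other robust operator mentioned in the abstract.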

8.
Optical flow methods are among the most accurate techniques for estimating displacement and velocity fields in a number of applications that range from neuroscience to robotics. The performance of any optical flow method naturally depends on the configuration of its parameters, and for different applications there are different trade-offs between the corresponding evaluation criteria (e.g. the accuracy and the processing speed of the estimated optical flow). Going beyond the standard practice of manually selecting parameters for a specific application, in this article we propose a framework for automatic parameter setting that searches for an approximated Pareto-optimal set of configurations in the whole parameter space. This final Pareto front characterizes each specific method, enabling proper method comparison and proper parameter selection. Using the proposed methodology and two open benchmark databases, we study two recent variational optical flow methods. The results clearly indicate that the method to be selected is application dependent, that in general method comparison and parameter selection should not be based on a single evaluation measure, and that the proposed approach successfully supports both method comparison and parameter selection.
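The Pareto-front step of such a framework can be sketched independently of any particular optical flow method: evaluate every parameter configuration on the chosen criteria and keep the non-dominated ones. The example criteria (endpoint error and run time) are assumptions.

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated configurations.

    scores: N x M array of evaluation measures, all to be minimised
    (e.g. column 0 = endpoint error, column 1 = run time).  A configuration
    is removed if some other configuration is at least as good on every
    measure and strictly better on at least one."""
    scores = np.asarray(scores, float)
    keep = np.ones(len(scores), dtype=bool)
    for i in range(len(scores)):
        dominates_i = np.all(scores <= scores[i], axis=1) & \
                      np.any(scores < scores[i], axis=1)
        if dominates_i.any():
            keep[i] = False
    return np.nonzero(keep)[0]

# Example: each row is one parameter configuration, columns = (error, seconds).
configs = np.array([[0.35, 2.1],
                    [0.30, 5.0],
                    [0.30, 4.0],
                    [0.50, 0.8]])
print(pareto_front(configs))   # -> [0 2 3]; config 1 is dominated by config 2
```

The final application-specific choice is then made along this front, which is exactly why a single evaluation measure is not enough for comparison.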

9.
10.
This work systematically compares two optical flow-based facial expression recognition methods. The first is featural and selects a reduced set of highly discriminant facial points, while the second is holistic and uses many more points uniformly distributed over the central face region. The two approaches are referred to as feature point tracking and holistic face dense flow tracking, respectively. Both compute the displacements of different sets of points along the sequence of frames describing each facial expression (i.e. from neutral to apex). First, we evaluate our algorithms on the Cohn-Kanade database for the six prototypic expressions under two different spatial frame resolutions (original and 40%-reduced). Our methods were then also tested on the MMI database, which presents higher variability than the Cohn-Kanade one. The results on the first database show that the dense flow tracking method at the original resolution slightly outperformed, on average, the feature point tracking method in recognition rate (95.45% against 92.42%), but it requires 68.24% more time to track the points. For the patterns of the MMI database, using dense flow tracking at the original resolution, we achieved very similar average success rates.
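A minimal OpenCV sketch of the feature-point-tracking side (pyramidal Lucas-Kanade on a set of facial points); the point selection and the expression classifier used in the study are not reproduced, and the tracker parameters are assumptions.

```python
import cv2
import numpy as np

def track_points(frames, points):
    """Track (N, 2) float32 points through a list of grayscale frames with
    pyramidal Lucas-Kanade.  Returns an array of shape (T, N, 2) with the
    point positions in every frame; the neutral-to-apex displacements used
    for expression recognition are last minus first."""
    pts = points.reshape(-1, 1, 2).astype(np.float32)
    trajectory = [pts.copy()]
    lk_params = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                               30, 0.01))
    for prev, curr in zip(frames[:-1], frames[1:]):
        pts, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                                     **lk_params)
        trajectory.append(pts.copy())
    return np.asarray(trajectory)[:, :, 0, :]
```

The holistic dense-flow variant would replace the sparse point set with a regular grid over the central face region and track every grid point the same way.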

11.
To solve the problem of estimating velocities of gas bubbles in melted glass, a method based on the optical flow constraint (OFC) has been extended to the 3D case. A single camera, whose distance to the fluid varies in time, is used to capture a sequence of frames at different depths. Since the objects are not static, we cannot obtain two frames at different heights at the same time, which, to our knowledge, prevents the use of common 3D motion estimation techniques. Since the information is rather sparse, our estimation takes several measures around a given pixel and discards the erroneous ones using a robust estimator. Along with the exposition of the practical application, the proposed estimation is first contrasted in the 2D case against common benchmarks and then evaluated on a synthetic problem where the velocities are known. Received: 9 July 2001 / Accepted: 5 August 2002 Published online: 3 June 2003 This work has been supported by Saint Gobain Cristaleria S.A., under contract FUO-EM-034-01 with Oviedo University, Spain.

12.
Dynamic estimation of optical flow field using objective functions
Optical flow (image velocity) fields are computed and interpreted from an image sequence by incorporating knowledge of object kinematics. Linear and quadratic objective functions are defined by considering the kinematics, and the function parameters are estimated simultaneously with the computation of the velocity field by relaxation. The objective functions provide an interpretation of the dynamic scenes and, at the same time, serve as the smoothness constraints. The computation is initially based on measured perpendicular velocity components of contours or region boundaries which, due to the ‘aperture problem’, are theoretically not the true perpendicular velocity components. This difficulty is alleviated by introducing a dynamic procedure for the measurement of the perpendicular components. Experiments using objective functions for synthetic and real images of translating and rotating objects generated velocity fields that are meaningful and consistent with visual perception.

13.
A phase-difference-based algorithm for disparity and optical flow estimation is implemented on a TI-C40-based parallel DSP system. The module performs real-time computation of disparity maps on images of size 128 × 128 pixels and computation of optical flows on images of size 64 × 64 pixels. This paper describes the algorithm and its parallel implementation. Processing times required for the computation of disparity maps and velocity fields and measures of the algorithm's performance are reported in detail.
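The DSP implementation aside, the phase-difference principle is compact: filter both images with a complex Gabor filter and convert the local phase difference into displacement through the filter's centre frequency. The kernel parameters below are illustrative, and the sketch handles only the horizontal (disparity) case; the same idea applied between consecutive frames and in two directions gives optical flow.

```python
import numpy as np
from scipy.ndimage import convolve1d

def phase_difference_disparity(left, right, wavelength=8.0, sigma=4.0):
    """Horizontal disparity from the phase difference of complex Gabor
    responses:  d ≈ (phi_left - phi_right) / omega_0.  Only reliable within
    roughly half a wavelength, since the phase difference wraps."""
    omega0 = 2.0 * np.pi / wavelength
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    gabor = np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(1j * omega0 * x)

    # Complex filter responses, built from separate real/imaginary passes.
    rl = convolve1d(left.astype(float), gabor.real, axis=1) + \
         1j * convolve1d(left.astype(float), gabor.imag, axis=1)
    rr = convolve1d(right.astype(float), gabor.real, axis=1) + \
         1j * convolve1d(right.astype(float), gabor.imag, axis=1)

    phase_diff = np.angle(rl * np.conj(rr))   # wrapped to (-pi, pi]
    return phase_diff / omega0                # disparity in pixels
```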

14.
In this paper, a new approach to motion analysis from stereo image sequences using a unified temporal and spatial optical flow field (UOFF) is reported. Based on a four-frame rectangular model and the associated six UOFF field quantities, a set of equations is derived from which both position and velocity can be determined. The approach does not require feature extraction or correspondence establishment, which are known to be difficult and for which only partial solutions suitable for simplistic situations have been developed. Furthermore, it is capable of detecting multiple moving objects even when partial occlusion occurs, and is potentially suitable for nonrigid motion analysis. Unlike existing techniques for motion analysis from stereo imagery, the motion recovered by this new approach is a whole continuous field rather than values at a few features only. It is a purely optical flow approach. Two experiments are presented to demonstrate the feasibility of the approach.

15.
In this paper, we propose a new method for estimating camera motion parameters based on optical flow models. Camera motion parameters are generated using linear combinations of optical flow models. The proposed method first creates these optical flow models; linear decompositions are then performed on the input optical flows calculated from adjacent images in the video sequence, which are used to estimate the coefficients of each optical flow model. These coefficients are then applied to the parameters used to create each optical flow model, and the camera motion parameters implied in the adjacent images can be estimated through a linear composition of the weighted parameters. We demonstrate that the proposed method estimates the camera motion parameters accurately and at a low computational cost, and that it is robust to noise in the video sequence being analyzed.
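The decomposition step can be sketched as plain least squares: stack the optical flow models as basis vectors and solve for their coefficients. The pan/zoom basis below is only an illustration, not the model set used in the paper.

```python
import numpy as np

def flow_model_basis(h, w):
    """Simple parametric basis flows on an h x w grid: horizontal pan,
    vertical pan, and zoom about the image centre (illustrative models)."""
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    xc, yc = xx - w / 2.0, yy - h / 2.0
    pan_x = np.dstack([np.ones_like(xx), np.zeros_like(xx)])
    pan_y = np.dstack([np.zeros_like(xx), np.ones_like(xx)])
    zoom = np.dstack([xc, yc])
    return [pan_x, pan_y, zoom]

def decompose_flow(flow, basis):
    """Least-squares coefficients c such that  flow ≈ sum_k c_k * basis_k,
    where flow and every basis element have shape (h, w, 2)."""
    A = np.stack([b.ravel() for b in basis], axis=1)   # (2*h*w) x K
    c, *_ = np.linalg.lstsq(A, flow.ravel(), rcond=None)
    return c
```

The recovered coefficients are then mapped back onto the motion parameters that generated each model flow, which is the linear composition described in the abstract.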

16.
Accurate optical flow computation under non-uniform brightness variations
In this paper, we present a very accurate algorithm for computing optical flow under non-uniform brightness variations. The proposed algorithm is based on a generalized dynamic image model (GDIM) in conjunction with a regularization framework to cope with the problem of non-uniform brightness variations. To alleviate flow constraint errors due to image aliasing and noise, we employ a reweighted least-squares method to suppress unreliable flow constraints, thus leading to robust estimation of optical flow. In addition, a dynamic smoothness adjustment scheme is proposed to efficiently suppress the smoothness constraint in the vicinity of motion and brightness variation discontinuities, thereby preserving motion boundaries. We also employ a constraint refinement scheme, which aims at reducing the approximation errors in the first-order differential flow equation, to refine the optical flow estimation, especially for large image motions. To efficiently minimize the resulting energy function for optical flow computation, we use an incomplete Cholesky preconditioned conjugate gradient algorithm to solve the large linear system. Experimental results on synthetic and real image sequences show that the proposed algorithm compares favorably to most existing techniques reported in the literature in terms of the accuracy of optical flow computation at 100% density.
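The GDIM and the preconditioned solver are paper-specific, but the reweighted least-squares idea (down-weighting flow constraints whose residuals stay large, then re-solving) can be sketched for a single window; the Cauchy-type weight and `sigma` are illustrative choices, not the authors'.

```python
import numpy as np

def irls_flow(Ix, Iy, It, iters=5, sigma=1.0):
    """Estimate one flow vector from window constraints Ix*u + Iy*v + It = 0
    by iteratively reweighted least squares: constraints with persistently
    large residuals receive less weight, so unreliable constraints (aliasing,
    noise) have little influence on the final estimate."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1).astype(float)
    b = -It.ravel().astype(float)
    w = np.ones(len(b))
    uv = np.zeros(2)
    for _ in range(iters):
        sw = np.sqrt(w)                          # weighted LS via sqrt weights
        uv, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
        r = A @ uv - b
        w = 1.0 / (1.0 + (r / sigma) ** 2)       # Cauchy-type reweighting
    return uv
```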

17.
In this study, the spatial local optimization method was improved to obtain high-precision optical flow for cases in which the object movement changes substantially, and a method to trace the loci of moving objects was considered. In the spatial local optimization method, the precision of the optical flow when the object movement changes substantially is a problem. Therefore, to make the object movement relatively small, we first obtained flow vectors from an image sequence whose resolution was reduced to half that of the original input sequence. Flow vectors smaller than the threshold value were then obtained from the original input image sequence. We show that the precision of the optical flow when the object movement changes substantially is improved by this method. A method to trace the loci of moving objects was also demonstrated: we obtained clusters from histograms of flow vectors and pursued each cluster. We show that it is possible to trace moving objects by this method. This work was presented, in part, at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.
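The half-resolution strategy is essentially a two-level coarse-to-fine scheme. The sketch below uses OpenCV's Farnebäck flow purely as a stand-in for the spatial local optimization method used in the study, and the Farnebäck parameters are assumptions.

```python
import cv2
import numpy as np

def two_level_flow(prev, curr):
    """Two-level coarse-to-fine flow on grayscale frames: estimate at half
    resolution first so that large motions become small, upsample the result,
    then refine at full resolution starting from that initialization."""
    small_prev = cv2.resize(prev, None, fx=0.5, fy=0.5)
    small_curr = cv2.resize(curr, None, fx=0.5, fy=0.5)
    coarse = cv2.calcOpticalFlowFarneback(small_prev, small_curr, None,
                                          0.5, 3, 15, 3, 5, 1.2, 0)
    # Upsample the coarse flow and double its magnitude for the finer grid.
    init = cv2.resize(coarse, (prev.shape[1], prev.shape[0])) * 2.0
    refined = cv2.calcOpticalFlowFarneback(prev, curr, init.astype(np.float32),
                                           0.5, 1, 15, 3, 5, 1.2,
                                           cv2.OPTFLOW_USE_INITIAL_FLOW)
    return refined
```

Clustering the histogram of the refined flow vectors, as described above, would then yield one cluster per moving object to pursue across frames.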

18.
A common problem of optical flow estimation in the multiscale variational framework is that fine motion structures cannot always be correctly estimated, especially in regions with significant and abrupt displacement variation. A novel extended coarse-to-fine (EC2F) refinement framework is introduced in this paper to address this issue; it reduces the reliance of flow estimates on the initial values propagated from the coarse level and enables many motion details to be recovered at each scale. The contributions of this paper also include an adaptation of the objective function to handle outliers and the development of a new optimization procedure. The effectiveness of our algorithm is demonstrated on the Middlebury optical flow benchmark and by experiments on challenging examples that involve large-displacement motion.

19.
Optical flow is a classical approach to estimating the velocity vector fields associated with illuminated objects traveling on manifolds. The extraction of rotational (vortices) or curl-free (sources or sinks) features of interest from these vector fields can be obtained from their Helmholtz-Hodge decomposition (HHD). However, existing HHD techniques are limited to flat, 2D domains. Here we demonstrate the extension of the HHD to vector fields defined over arbitrary surface manifolds. We propose a Riemannian variational formalism and illustrate the proposed methodology with synthetic and empirical examples of optical-flow vector field decompositions obtained on a variety of surface objects.
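For intuition, the flat-domain, periodic-boundary version of the HHD is easy to write with FFTs: the curl-free part is the gradient of a potential obtained by solving a Poisson equation driven by the divergence, and the divergence-free part is the remainder. The Riemannian manifold formulation proposed in the paper is not captured by this sketch.

```python
import numpy as np

def hhd_2d_periodic(u, v):
    """Helmholtz-Hodge split of a 2D vector field (u, v) on a periodic grid.
    Returns ((u_cf, v_cf), (u_df, v_df)): the curl-free part (sources/sinks)
    and the divergence-free part (vortices).  Flat-domain sketch only."""
    h, w = u.shape
    ky = np.fft.fftfreq(h) * 2.0 * np.pi
    kx = np.fft.fftfreq(w) * 2.0 * np.pi
    KY, KX = np.meshgrid(ky, kx, indexing='ij')
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                   # avoid division by zero

    U, V = np.fft.fft2(u), np.fft.fft2(v)
    div_hat = 1j * KX * U + 1j * KY * V              # Fourier transform of div
    phi_hat = div_hat / (-k2)                        # solve laplacian(phi) = div
    phi_hat[0, 0] = 0.0                              # fix the mean of phi

    u_cf = np.real(np.fft.ifft2(1j * KX * phi_hat))  # gradient of phi
    v_cf = np.real(np.fft.ifft2(1j * KY * phi_hat))
    return (u_cf, v_cf), (u - u_cf, v - v_cf)
```

Vortex candidates then show up as extrema of the curl of the divergence-free part, and sources or sinks as extrema of the divergence of the curl-free part.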

20.
Described here is a method for estimating the rolling and swaying motions of a mobile robot using optical flow. We have proposed an image sensor with a hyperboloidal mirror, named HyperOmni Vision, for the vision-based navigation of a mobile robot. The radial component of the optical flow in HyperOmni Vision has a periodic characteristic, and the circumferential component has a symmetric characteristic. The proposed method makes use of these characteristics to robustly estimate the rolling and swaying motion of the mobile robot. Correspondence to: Y. Yagi e-mail: y-yagi@sys.es.osaka-u.ac.jp
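The radial/circumferential split that the method relies on can be sketched directly: project each flow vector onto the unit radial and unit tangential directions about the image centre of the omnidirectional sensor. The roll/sway estimation from the periodic and symmetric structure of these components is not reproduced here.

```python
import numpy as np

def radial_circumferential(flow, center):
    """Split a dense flow field (H x W x 2, last axis = (u, v)) into its
    radial and circumferential components about the given image centre
    (cx, cy).  Returns two H x W scalar fields."""
    h, w = flow.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xx - center[0], yy - center[1]
    r = np.hypot(dx, dy) + 1e-9                  # avoid division by zero
    e_r = np.dstack([dx / r, dy / r])            # unit radial direction
    e_t = np.dstack([-dy / r, dx / r])           # unit circumferential direction
    radial = (flow * e_r).sum(axis=2)
    circumferential = (flow * e_t).sum(axis=2)
    return radial, circumferential
```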
