Similar Literature
20 similar documents were found.
1.
Image and Vision Computing, 2002, 20(5-6): 441-448
In this paper, we address the problem of recovering structure and motion from the apparent contours of a smooth surface. Fixed image features under circular motion and their relationships with the intrinsic parameters of the camera are exploited to provide a simple parameterization of the fundamental matrix relating any pair of views in the sequence. This parameterization allows a trivial initialization of the motion parameters, all of which bear physical meaning. It also greatly reduces the dimension of the search space for the optimization problem, which can now be solved using only two epipolar tangents. In contrast to previous methods, the motion estimation algorithm introduced here can cope with incomplete circular motion and more widely spaced images. Existing techniques for model reconstruction from apparent contours are then reviewed and compared. Experiments on real data have been carried out, and the 3D model reconstructed from the estimated motion is presented.
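As a hedged illustration of the underlying geometry (not the paper's contour-based parameterization), the sketch below composes the fundamental matrix of a turntable pair directly from assumed intrinsics K, the rotation angle, and the camera placement, using F = K^-T [t]_x R K^-1 for the relative motion (R, t); the intrinsics, orbit radius, and angle are illustrative assumptions.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def rot_y(theta):
    """Rotation about the (vertical) y-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s],
                     [0, 1, 0],
                     [-s, 0, c]])

# Assumed intrinsics (focal length, principal point): illustrative values only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# A fixed camera looking at a turntable is equivalent to a camera orbiting the
# rotation axis.  Pose(theta): world-to-camera transform for orbit angle theta,
# with the camera at distance d from the axis, looking at the axis.
d = 5.0
def pose(theta):
    return rot_y(theta), np.array([0.0, 0.0, d])

R1, t1 = pose(0.0)
R2, t2 = pose(np.deg2rad(20.0))   # 20 degrees of (possibly incomplete) circular motion

# Relative motion between the two views and the resulting fundamental matrix.
R = R2 @ R1.T
t = t2 - R @ t1
F = np.linalg.inv(K).T @ skew(t) @ R @ np.linalg.inv(K)
F /= np.linalg.norm(F)

# Sanity check: x2^T F x1 = 0 for any corresponding pair of image points.
X = np.array([1.0, 0.5, 1.0, 1.0])                 # a world point (homogeneous)
x1 = K @ (np.column_stack([R1, t1]) @ X)
x2 = K @ (np.column_stack([R2, t2]) @ X)
print("epipolar residual:", x2 @ F @ x1)           # ~0 up to numerical error
```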

2.
In this work, we propose a method that integrates depth and fisheye cameras to obtain a wide, scaled 3D scene reconstruction in a single shot. The motivation for this integration is to overcome the narrow field of view of consumer RGB-D cameras and the lack of depth and scale information in fisheye cameras. The hybrid camera system we use is easy to build and calibrate, and consumer devices with a similar configuration are already available on the market. With this system, a portion of the scene lies in the shared field of view and provides color and depth simultaneously. In the rest of the color image we estimate depth by recovering the structural information of the scene. Our method finds and ranks corners in the scene by combining line extraction in the color image with the depth information. These corners are used to generate plausible layout hypotheses, which have real-world scale thanks to the use of depth. The wide-angle camera captures more of the environment (e.g. the ceiling), which helps to overcome severe occlusions. After an automatic evaluation of the hypotheses, we obtain a scaled 3D model that expands the original depth information to the wide scene reconstruction. Experiments with real images from both home-made and commercial systems show that our method achieves a high success ratio in different scenarios and that our hybrid camera system outperforms the single color camera set-up while additionally providing scale in a single shot.
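As a small, hedged sketch of one building block of such a system, the snippet below back-projects a depth image to a metric point cloud with a pinhole model; the intrinsics are assumed illustrative values, and the fisheye modelling and layout-hypothesis generation described above are not reproduced.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to a metric 3D point cloud using a
    pinhole model.  depth: HxW array; returns an Nx3 array of points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

# Illustrative intrinsics for a consumer RGB-D sensor (assumed values).
fx = fy = 570.0
cx, cy = 319.5, 239.5
depth = np.random.uniform(0.5, 4.0, size=(480, 640))   # stand-in depth map
cloud = backproject_depth(depth, fx, fy, cx, cy)
print(cloud.shape)
```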

3.
Today, high-end simulation demands eye-limiting resolution along with extremely large fields of view. This represents a tremendous challenge if none of the other desirable features and performance of traditional systems is to be lost. Image-generating computers continue to become more capable, but the new demands placed upon them by these display technologies are proving difficult to meet at an economic price. This paper describes SEOS's investigation into this new generation of simulation. A solution is outlined that delivers the required resolution yet keeps the demands on the driving image-generating computers at an acceptable level, while simplifying maintenance and alignment.

4.
We present an approach that significantly enhances the spectral resolution of imaging systems by generalizing image mosaicing. A filter transmitting spatially varying spectral bands is rigidly attached to a camera. As the system moves, it senses each scene point multiple times, each time in a different spectral band. This adds a dimension to the generalized mosaic paradigm, which has previously been shown to yield high radiometric dynamic range images over a wide field of view using a spatially varying density filter. The resulting mosaic represents the spectrum at each scene point. Image acquisition is as easy as in traditional image mosaics. We derive an efficient scene sampling rate and use a registration method that accommodates the spatially varying properties of the filter. Using the data acquired by this method, we demonstrate scene rendering under different simulated illumination spectra, and we are also able to infer information about the scene illumination. The approach was tested using a standard 8-bit black-and-white video camera and a fixed spatially varying spectral (interference) filter.
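A hedged, one-dimensional toy version of the accumulation step is sketched below: as the camera pans behind a spatially varying filter, each scene column is sampled in a different band, and after registration the samples assemble into a per-point spectrum. The sizes, one-pixel pan, and known registration are simplifying assumptions, not the paper's set-up.

```python
import numpy as np

n_bands = 8                   # number of distinct bands across the filter
frame_w, scene_w = n_bands, 64
true_spectra = np.random.rand(scene_w, n_bands)        # unknown scene spectra

mosaic_sum = np.zeros((scene_w, n_bands))
mosaic_cnt = np.zeros((scene_w, n_bands))

# One-pixel pan per frame; registration here is assumed known (in practice it
# is estimated with a registration method aware of the varying filter).
for shift in range(scene_w - frame_w + 1):
    cols = np.arange(frame_w) + shift       # scene columns visible in this frame
    bands = np.arange(frame_w)              # band seen at each image column
    frame = true_spectra[cols, bands]       # what the camera measures
    mosaic_sum[cols, bands] += frame
    mosaic_cnt[cols, bands] += 1

recovered = np.where(mosaic_cnt > 0, mosaic_sum / np.maximum(mosaic_cnt, 1), np.nan)
# Scene columns seen in all bands have a complete spectrum recovered.
full = ~np.isnan(recovered).any(axis=1)
print("scene columns with complete spectra:", int(full.sum()))
```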

5.
A star sensor is a high-precision spatial attitude measurement device, and accurate calibration is essential to its high-precision measurements. Based on an analysis of the factors affecting the measurement error of a wide field-of-view star sensor, and to address the shortcomings of common calibration methods (such as polynomial distortion models and direct mapping), a BP neural network optimized by a genetic algorithm is proposed for wide field-of-view star sensor calibration. A calibration system was built on this method and tested experimentally. The results show that the proposed calibration reduces the single-star angular measurement error from 0.14° before calibration to 0.0053°, effectively improving the angular measurement accuracy of the wide field-of-view star sensor, while remaining computationally efficient and stable.
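A hedged sketch of the calibration idea is shown below: a BP (multilayer perceptron) network learns the mapping from measured star-centroid coordinates to reference angles. The genetic-algorithm optimization of the network described above is omitted, and the data and network sizes are synthetic stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: normalized star centroids and "true" angles made of
# an ideal projection plus a smooth distortion term and measurement noise.
rng = np.random.default_rng(0)
uv = rng.uniform(-1.0, 1.0, size=(2000, 2))
angles = 10.0 * uv + 0.3 * uv**3 + 0.05 * rng.standard_normal((2000, 2))

# BP network (multilayer perceptron) regressing angles from centroids.
net = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(uv[:1500], angles[:1500])

pred = net.predict(uv[1500:])
err = np.sqrt(np.mean(np.sum((pred - angles[1500:])**2, axis=1)))
print(f"RMS error on held-out stars: {err:.4f}")
```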

6.
In this paper, we describe a reconstruction method for multiple motion scenes, i.e. scenes containing multiple moving objects, from uncalibrated views. Assuming that the objects move with constant velocities, the method simultaneously recovers the scene structure, the trajectories of the moving objects, the camera motion, and the camera intrinsic parameters (except skew). We focus on the case where the cameras have unknown and varying focal lengths while the other intrinsic parameters are known. The number of moving objects is detected automatically, without prior motion segmentation. The method is based on a unified geometrical representation of the static scene and the moving objects. It first performs a projective reconstruction using a bilinear factorization algorithm and then converts the projective solution to a Euclidean one by enforcing metric constraints. Experimental results on synthetic and real images are presented.
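As a hedged illustration of the factorization idea in its simplest form, the sketch below runs the classical rank-3 (Tomasi-Kanade) affine factorization for a single static scene; the paper's bilinear projective, multi-body formulation and metric upgrade are not reproduced, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_points = 12, 40
X = rng.standard_normal((3, n_points))                  # 3-D scene points

W_rows = []
for f in range(n_frames):
    # Random affine (orthographic-like) camera per frame.
    A = rng.standard_normal((2, 3))
    t = rng.standard_normal((2, 1))
    W_rows.append(A @ X + t)
W = np.vstack(W_rows)                                   # 2F x P measurement matrix

# Centering each row removes the translations; the centered matrix has rank 3,
# so an SVD gives motion and structure up to a 3x3 linear ambiguity.
W0 = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(W0, full_matrices=False)
M = U[:, :3] * s[:3]            # 2F x 3 affine motion (up to ambiguity)
S = Vt[:3]                      # 3 x P structure (up to the same ambiguity)

print("rank-3 reconstruction error:", np.linalg.norm(W0 - M @ S))
```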

7.
Vegetation is an important land-cover type, and its growth characteristics have the potential to improve land-cover classification accuracy from remote-sensing data. However, for lack of suitable remote-sensing data, temporal features are difficult to acquire for high spatial resolution land-cover classification. Several studies have extracted temporal features by fusing time-series Moderate Resolution Imaging Spectroradiometer data with Landsat data. Nevertheless, this approach assumes that no land-cover change occurs during the period covered by the blended data, and the fusion results also contain errors that affect temporal feature extraction. Time-series high spatial resolution data from a single sensor are therefore ideal for land-cover classification using temporal features. The wide field view (WFV) sensor on the Chinese GF-1 satellite acquires multispectral data with decametric spatial resolution, high temporal resolution and wide coverage, which contain abundant temporal information for improving land-cover classification accuracy. It is therefore important to investigate the performance of GF-1 WFV data for land-cover classification. Time-series GF-1 WFV data covering the vegetation growth period were collected, and temporal features reflecting the dynamic changes of ground objects were extracted. A Support Vector Machine classifier was then applied to land-cover classification using the spectral features alone and in combination with the temporal features. The validation results indicate that the temporal features effectively reflect the growth characteristics of different vegetation types and improve classification accuracy by approximately 7%, reaching 92.89%, with vegetation type identification accuracy greatly improved. The study confirms that GF-1 WFV data perform well in land-cover classification and can provide reliable high spatial resolution land-cover data for related applications.
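A hedged sketch of the classification set-up is given below: pixels described by single-date spectral bands, with and without an appended temporal profile, are classified with an SVM. The feature values are synthetic stand-ins, not GF-1 WFV samples, and serve only to show how adding temporal features can raise accuracy.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_per_class, n_classes = 300, 4
spectral, temporal, labels = [], [], []
for c in range(n_classes):
    # Four synthetic spectral bands per pixel.
    spectral.append(rng.normal(loc=c, scale=1.0, size=(n_per_class, 4)))
    # Class-specific seasonal profile (e.g. an NDVI-like series at 10 dates).
    phase = np.linspace(0, np.pi, 10) + 0.4 * c
    temporal.append(np.sin(phase) + 0.3 * rng.standard_normal((n_per_class, 10)))
    labels.append(np.full(n_per_class, c))

X_spec = np.vstack(spectral)
X_both = np.hstack([X_spec, np.vstack(temporal)])
y = np.concatenate(labels)

for name, X in [("spectral only", X_spec), ("spectral + temporal", X_both)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(Xtr, ytr)
    print(name, "accuracy:", accuracy_score(yte, clf.predict(Xte)))
```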

8.
This paper presents a novel framework for Euclidean structure recovery that uses a scaled orthographic view and perspective views simultaneously. The scaled orthographic view is introduced in order to obtain camera parameters such as position, orientation, and focal length automatically. The scaled orthographic properties allow all camera parameters to be calculated implicitly, while the perspective properties allow a Euclidean structure to be recovered. The method can recover a Euclidean structure from as few as seven point correspondences across the scaled orthographic view and the perspective views. Experimental results on both synthetic and natural images verify that the method recovers structure with sufficient accuracy to demonstrate its potential utility. The proposed method can be applied to interfaces for 3D modeling, recognition, and tracking.

9.
Epipolar geometry from profiles under circular motion
This paper addresses the problem of motion estimation from the profiles (apparent contours) of an object rotating on a turntable in front of a single camera. A practical and accurate technique for solving this problem from profiles alone is developed; it is precise enough to reconstruct the shape of the object, and no correspondences between points or lines are necessary. The symmetry of the surface of revolution swept out by the rotating object is exploited to obtain, robustly and elegantly, the image of the rotation axis and the homography relating epipolar lines in two views. These, together with geometric constraints for images of rotating objects, are used to obtain first the image of the horizon, which is the projection of the plane containing the camera centers, and then the epipoles, thus fully determining the epipolar geometry of the image sequence. Estimating the geometry with this sequential approach avoids many of the problems found in other algorithms: the search for the epipoles, by far the most critical step, is carried out as a simple 1D optimization, and parameter initialization is trivial and completely automatic at all stages. After the estimation of the epipolar geometry, the Euclidean motion is recovered using the fixed intrinsic parameters of the camera, obtained either from a calibration grid or from self-calibration techniques. Finally, the spinning object is reconstructed from its profiles using the motion estimated in the previous stage. Results from real data are presented, demonstrating the efficiency and usefulness of the proposed methods.

10.
Structure from controlled motion
This paper deals with the recovery of 3D information using a single mobile camera in the context of active vision. First, we propose a general, revisited formulation of the structure-from-known-motion problem. Within the same formalism, we handle various kinds of 3D geometric primitives such as points, lines, cylinders, and spheres. We also aim at minimizing the effects of the different measurement errors involved in such a process. More precisely, we mathematically determine the optimal camera configurations and motions that lead to a robust and accurate estimation of the 3D structure parameters. We apply the visual servoing approach to perform these camera motions, using a control law in closed loop with respect to the visual data. Real-time experiments on 3D structure estimation of points and cylinders are reported; they demonstrate that this active vision strategy can very significantly improve estimation accuracy.

11.
We propose a new human interface system, the virtual 3D interface system, which allows a user to work in a virtual three-dimensional (3D) space on the computer by way of hand motions and hand poses. The system has two cameras: one captures the top view of the desktop and the other the side view. From the top-view image, the system recognizes the bending of the five fingers and interprets the command. At the same time, the system extracts the fingertip of the forefinger in both the top-view and side-view images and then estimates the 3D position of the fingertip. These procedures are simple enough to support real-time 3D application systems.
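A hedged sketch of the fingertip fusion step is given below, under a simplified axis-aligned, roughly orthographic camera arrangement: the top view supplies the (x, y) desktop coordinates and the side view the height. The pixel scales and image origins are assumed calibration values, not those of the actual system.

```python
import numpy as np

TOP_SCALE = 0.8      # mm per pixel in the top view (assumed)
SIDE_SCALE = 0.9     # mm per pixel in the side view (assumed)
TOP_ORIGIN = np.array([320.0, 240.0])    # desktop origin in top-view pixels
SIDE_ORIGIN = np.array([320.0, 400.0])   # desktop origin in side-view pixels

def fingertip_3d(top_px, side_px):
    """Fuse fingertip detections (pixel coords) from both views into a 3D
    desktop-frame position in millimetres."""
    x, y_top = (np.asarray(top_px, dtype=float) - TOP_ORIGIN) * TOP_SCALE
    y_side, z = (np.asarray(side_px, dtype=float) - SIDE_ORIGIN) * SIDE_SCALE
    z = -z                       # image rows grow downward; height grows upward
    y = 0.5 * (y_top + y_side)   # shared coordinate, averaged for robustness
    return np.array([x, y, z])

print(fingertip_3d(top_px=(400, 300), side_px=(380, 350)))
```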

12.
We address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. In such a scenario, it is essential to establish correspondence between tracks of the same object seen in different cameras in order to recover complete information about the object. We call this the problem of consistent labeling of objects seen in multiple cameras. We employ a novel approach of finding the limits of the field of view (FOV) of each camera as visible in the other cameras. We show that, if the FOV lines are known, it is possible to disambiguate between multiple possibilities for correspondence. We present a method to recover these lines automatically by observing motion in the environment. Furthermore, once these lines are initialized, the homography between the views can also be recovered. We present results on indoor and outdoor sequences containing persons and vehicles.
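As a hedged sketch of the final step, the snippet below recovers the inter-view homography from associated ground-plane observations with OpenCV's RANSAC-based estimator; the point correspondences are synthetic stand-ins for the automatically gathered track locations.

```python
import numpy as np
import cv2

# A "true" homography between two overlapping views, used only to synthesize
# noisy ground-plane correspondences (e.g. foot locations of tracked people).
H_true = np.array([[1.1, 0.05, 30.0],
                   [-0.02, 0.95, 12.0],
                   [1e-4, 2e-4, 1.0]])

rng = np.random.default_rng(0)
pts_a = rng.uniform(50, 600, size=(40, 2)).astype(np.float32)
pts_h = cv2.perspectiveTransform(pts_a.reshape(-1, 1, 2), H_true)
pts_b = (pts_h.reshape(-1, 2) + rng.normal(0, 0.5, size=(40, 2))).astype(np.float32)

# Robust estimation from the (noisy) correspondences.
H_est, inliers = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
print("estimated homography (normalized):")
print(H_est / H_est[2, 2])
print("inliers:", int(inliers.sum()), "/", len(pts_a))
```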

13.
14.
This paper describes a new method for self-calibration of a camera with constant internal parameters under circular motion, using one sequence and two images captured with different camera orientations. Unlike the previous method, in which three circular motion sequences with known motion are needed, the new method uses a factorization-based approach to compute the rotation angles and the projective reconstructions of the sequence and the images with the circular constraint enforced, which is called a circular projective reconstruction. It is then shown that the images of the circular points of each circular projective reconstruction can be readily obtained. Subsequently, the image of the absolute conic and the calibration matrix of the camera can be determined. Experiments on both synthetic and real image sequences are given, showing the accuracy and robustness of the new algorithm.

15.
A method is described that recovers the 3D shape of deformable objects, particularly human motions, from mobile stereo images. In the proposed technique, camera calibration is not required when taking images. Existing optical 3D modeling systems must employ calibrated cameras set at fixed positions, which inevitably constrains the range of movement of an object. In the proposed method, multiple mobile cameras capture a deformable object moving freely, and its 3D model is reconstructed from the video streams obtained. The advantages of the proposed method are that the cameras are calibration-free and can move freely during capture. The theory is described, and the performance is shown by an experiment on 3D human motion modeling in an outdoor environment. The accuracy of the obtained 3D model is evaluated and discussed. This work was presented in part at the 10th International Symposium on Artificial Life and Robotics, Oita, Japan, February 4-6, 2005.

16.
Structure from motion using line correspondences
A theory is presented for the computation of three-dimensional motion and structure from dynamic imagery using only line correspondences. The traditional approach of corresponding microfeatures (interest points such as highlights, corners, and high-curvature points) is reviewed and its shortcomings are discussed. A theory is then presented that gives a closed-form solution to the motion and structure determination problem from line correspondences in three views. The theory is compared with previous ones based on nonlinear equations and iterative methods.

17.
This paper presents a new method for estimating the homography up to similarity from observations of a single point rotating at constant velocity around a single axis. The benefit of the proposed approach is that it does not require measuring points in the world frame: the homography is estimated from the known shape of the motion and in-image tracking of a single rotating point. The proposed method is compared to two known methods: the direct approach based on point correspondences and a more recently proposed method based on conic properties. The main advantages of the proposed method are that it also estimates the angular velocity and that it requires only a single circle, with the estimation made directly from measurements in the image. These advantages make the proposed method simple to implement for the calibration of visually guided robotic systems. All approaches were compared in a simulation environment under non-ideal conditions and in the presence of disturbances, and a real experiment was performed on a mobile robot. The experimental results confirm that the presented approach gives accurate results, even under non-ideal conditions.
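A hedged sketch of two ingredients of such an approach is given below: fitting the conic traced by the rotating point in the image and estimating the angular velocity from the unwrapped phase of the track. The full recovery of the homography up to similarity is not reproduced, and the trajectory is synthetic.

```python
import numpy as np
import cv2

omega_true = 0.8                                  # rad/s
t = np.arange(0.0, 12.0, 1.0 / 30.0)              # 30 Hz tracking
circle = np.stack([np.cos(omega_true * t), np.sin(omega_true * t), np.ones_like(t)])
H = np.array([[250.0, 40.0, 400.0],               # an affine-like image mapping
              [-30.0, 180.0, 300.0],
              [0.0, 0.0, 1.0]])
img = H @ circle
pts = (img[:2] / img[2]).T.astype(np.float32)
pts += np.random.default_rng(0).normal(0, 0.5, pts.shape).astype(np.float32)

# 1) For this affine-like mapping the projected circle is an ellipse; fit it.
(cx, cy), (ax1, ax2), angle = cv2.fitEllipse(pts)
print(f"ellipse center ({cx:.1f}, {cy:.1f}), axes ({ax1:.1f}, {ax2:.1f})")

# 2) Angular velocity from the unwrapped phase of the track about the center.
phase = np.unwrap(np.arctan2(pts[:, 1] - cy, pts[:, 0] - cx))
omega_est = abs(np.polyfit(t, phase, 1)[0])
print(f"estimated angular velocity: {omega_est:.3f} rad/s (true {omega_true})")
```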

18.
Collective circular motion of multi-vehicle systems
N. M. A. A. Automatica, 2008, 44(12): 3025-3035
This paper addresses a collective motion problem for a multi-agent system composed of nonholonomic vehicles, whose aim is to achieve circular motion around a virtual reference beacon. A control law is proposed that, in the case of a single vehicle, guarantees global asymptotic stability of circular motion with a prescribed direction of rotation. Equilibrium configurations of the multi-vehicle system are studied, and sufficient conditions for their local stability are given in terms of the control law design parameters. Practical issues related to sensory limitations are taken into account, and the transient behavior of the multi-vehicle system is analyzed via numerical simulations.
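As a hedged illustration (not the paper's control law), the simulation below steers unicycle vehicles toward the tangent of a circle of prescribed radius around a virtual beacon, with a correction on the radial error; the law, gains, and initial conditions are assumptions chosen for the demo.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

v, R, dt = 1.0, 5.0, 0.02          # speed, desired radius, time step
k_r, k_h = 1.0, 2.0                # radial and heading gains (assumed)
beacon = np.array([0.0, 0.0])

rng = np.random.default_rng(0)
pos = rng.uniform(-10, 10, size=(4, 2))        # four vehicles
heading = rng.uniform(-np.pi, np.pi, size=4)

for _ in range(5000):
    rel = pos - beacon
    dist = np.linalg.norm(rel, axis=1)
    bearing = np.arctan2(rel[:, 1], rel[:, 0])
    # Tangent direction for counter-clockwise circling, corrected inward or
    # outward according to the radial error.
    desired = bearing + np.pi / 2 + np.arctan(k_r * (dist - R))
    omega = k_h * wrap(desired - heading)      # steering command
    heading = heading + omega * dt
    pos = pos + v * dt * np.column_stack([np.cos(heading), np.sin(heading)])

print("final radii:", np.round(np.linalg.norm(pos - beacon, axis=1), 2))
```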

19.
Kam Leung Yeung, Li Li. Displays, 2013, 34(2): 165-170
We have previously shown that concurrent head movements impair head-referenced image motion perception when compensatory eye movements are suppressed (Li, Adelstein, & Ellis, 2009). In this paper, we examined the effect of the field of view (FOV) on perceiving world-referenced image motion during concurrent head movements. Participants rated the motion magnitude of a horizontally oscillating checkerboard image presented on a large screen while making yaw or pitch head movements, or while holding their heads still. Because the image motion was world-referenced, head motion elicited compensatory eye movements from the vestibulo-ocular reflex to maintain gaze on the display. The checkerboard image had either a large (73°H × 73°V) or a small (25°H × 25°V) FOV. We found that perceptual sensitivity to world-referenced image motion was reduced by 20% during yaw and pitch head movements compared with the veridical levels when the head was still, and this reduction did not depend on the display FOV size. Reducing the display FOV from 73°H × 73°V to 25°H × 25°V caused an overall underestimation of image motion by 7% across the head-movement and head-still conditions. We conclude that observers have reduced perceptual sensitivity to world-referenced image motion during concurrent head movements, independent of FOV size. The findings are applicable to the design of virtual environment countermeasures that mitigate the perception of spurious motion arising from head-tracking system latency.

20.
Understanding human behaviour is a high-level perceptual problem, one that is often dominated by contextual knowledge of the environment and where concerns such as occlusion, scene clutter and high within-class variation are commonplace. Nonetheless, such understanding is highly desirable for automated visual surveillance. We consider this problem in the context of workflow analysis within an industrial environment. The hierarchical nature of the workflow is exploited to split the problem into ‘activity’ and ‘task’ recognition: sequences of low-level activities are examined for instances of a task, while the remainder are labelled as background. An initial prediction of activity is obtained using shape- and motion-based features of the moving blob of interest. A sequence of these activities is then adjusted by a probabilistic analysis of transitions between activities using hidden Markov models (HMMs). In task detection, HMMs are arranged to handle the activities within each task, and two separate HMMs, for task and background, compete for an incoming sequence of activities. Imagery from a camera mounted overhead of the target scene has been chosen over the more conventional oblique (side) views, as this view suffers from less occlusion and poses a manageable detection and tracking problem while still retaining powerful cues as to the workflow patterns. We evaluate our approach on both activity and task detection on a challenging dataset of surveillance of human operators in a car manufacturing plant. The experimental results show that our hierarchical approach can automatically segment the timeline and spatially localize a series of predefined tasks that are performed to complete a workflow.
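A hedged sketch of the ‘competing HMMs’ step is given below: an incoming sequence of discrete activity labels is scored under a task HMM and a background HMM with the forward algorithm, and assigned to the model with the higher likelihood. The transition and emission tables are illustrative, not learned from the dataset.

```python
import numpy as np

def log_forward(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed with the forward algorithm in log space."""
    alpha = np.log(start) + np.log(emit[:, obs[0]])
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(trans), axis=0) \
                + np.log(emit[:, o])
    return np.logaddexp.reduce(alpha)

n_states, n_activities = 3, 5

# Task model: states progress in order and emit mostly activities 0, 2, 4.
task = dict(
    start=np.array([0.9, 0.05, 0.05]),
    trans=np.array([[0.7, 0.25, 0.05],
                    [0.05, 0.7, 0.25],
                    [0.05, 0.05, 0.9]]),
    emit=np.array([[0.8, 0.05, 0.05, 0.05, 0.05],
                   [0.05, 0.05, 0.8, 0.05, 0.05],
                   [0.05, 0.05, 0.05, 0.05, 0.8]]))

# Background model: activities occur with no particular order.
background = dict(
    start=np.full(n_states, 1 / n_states),
    trans=np.full((n_states, n_states), 1 / n_states),
    emit=np.full((n_states, n_activities), 1 / n_activities))

sequence = [0, 0, 2, 2, 2, 4, 4]          # looks like an instance of the task
ll_task = log_forward(sequence, **task)
ll_bg = log_forward(sequence, **background)
print(f"log P(seq | task) = {ll_task:.2f}, log P(seq | background) = {ll_bg:.2f}")
print("label:", "task" if ll_task > ll_bg else "background")
```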
