Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
武伟  詹玲超 《微计算机信息》2007,23(28):131-132,278
Because powerful image-editing software is now readily available, manipulating and editing digital images has become very easy; it is entirely possible to add or remove an important person or object in an image without leaving any visible trace. If such tampered images are used in the media or in legal proceedings, the impact on society can be significant. With the growing popularity of digital cameras and camcorders, verifying digital images has become increasingly important. This paper exploits two intrinsic characteristics of digital cameras to detect image tampering; experiments on multiple images demonstrate good detection performance.

2.
《Advanced Robotics》2013,27(6-7):893-921
Visual odometry refers to the use of images to estimate the motion of a mobile robot. Real-time systems have already been demonstrated for terrestrial robotic vehicles, while a near real-time system has been successfully used on the Mars Exploration Rovers for planetary exploration. In this paper, we adapt this method to estimate the motion of a hopping rover on an asteroid surface. Due to the limited stereo depth resolution and the continuous rotational motion on a hopping rover, we propose to use a system of multiple monocular cameras. We describe how the scale of the scene observed by different cameras without overlapping views can be transferred between the cameras, allowing us to reconstruct a single continuous trajectory from multiple image sequences. We describe the implementation of our algorithm and its performance under simulation using rendered images.

3.
Taking a virtual-CCD-line-array re-imaging algorithm for multi-CCD imagery as the technical means of inner-field-of-view stitching, and building on a theoretical analysis and derivation of the stitching errors in multi-CCD imagery caused by terrain relief, this paper proposes a re-imaging algorithm for virtual-CCD-line-array multi-CCD imagery that requires no digital elevation model (DEM). It further proposes using spatial forward intersection based on the rigorous imaging geometry model to directly evaluate the effect of image stitching on the accuracy of photogrammetric production. The study shows that when the deviation between the "installed" position of the virtual CCD and the real CCD position is small, stitching with the mean elevation of the imaged area suffices for geometrically seamless stitching; when that deviation exceeds the tolerance, DEM data of moderate accuracy, such as the Shuttle Radar Topography Mission DEM (SRTM-DEM), can be used to correct the stitching errors caused by terrain relief. Using three-line-array imagery from the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) sensor on the Advanced Land Observing Satellite (ALOS) as experimental data, the forward-, nadir-, and backward-looking multi-CCD images were stitched separately. Visual inspection of the seam lines shows that the stitching quality is good. In addition, spatial forward intersection was performed on corresponding points selected from the forward- and backward-looking images before and after stitching; the forward-intersection accuracy of the stitched images is consistent with that of the original images. The proposed method thus enables inner-field-of-view stitching of ALOS PRISM imagery without a DEM while preserving the stereo-mapping accuracy of the stitched imagery, and the stitching-error analysis can also be extended to aerial-camera image stitching.

4.
This paper presents an efficient image-based approach to navigating a scene based on only three wide-baseline uncalibrated images, without the explicit use of a 3D model. After automatically recovering corresponding points between each pair of images, an accurate trifocal plane is extracted from the trifocal tensor of these three images. Next, based on a small number of feature marks placed through a friendly GUI, correct dense disparity maps are obtained using our trinocular-stereo algorithm. Employing a barycentric warping scheme with the computed disparity, we can generate an arbitrary novel view within the triangle spanned by the three camera centers. Furthermore, after self-calibration of the cameras, 3D objects can be correctly augmented into the virtual environment synthesized by the tri-view morphing algorithm. Three applications of the tri-view morphing algorithm are demonstrated. The first is 4D video synthesis, which can be used to fill in the gap between a few sparsely located video cameras and synthetically generate video from a virtual moving camera; this synthetic camera can be used to view the dynamic scene from a novel view instead of the original static camera views. The second application is multiple-view morphing, where we can seamlessly fly through the scene over a 2D space constructed by more than three cameras. The last is dynamic scene synthesis using three still images, where several rigid objects may move in any orientation or direction; after segmenting the three reference frames into several layers, novel views of the dynamic scene can be generated by applying our algorithm. Finally, experiments are presented to illustrate that a series of photo-realistic virtual views can be generated to fly through a virtual environment covered by several static cameras.

5.
This paper addresses the problem of estimating the epipolar geometry from point correspondences between two images taken by uncalibrated perspective cameras. It is shown that Jepson's and Heeger's linear subspace technique for infinitesimal motion estimation can be generalized to the finite motion case by choosing an appropriate basis for projective space. This yields a linear method for weak calibration. The proposed algorithm has been implemented and tested on both real and synthetic images, and it is compared to other linear and non-linear approaches to weak calibration.
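Entry 5's result belongs to the family of linear methods for epipolar geometry. As a hedged illustration of what such a method looks like, here is a minimal NumPy sketch of the classical normalized eight-point algorithm for the fundamental matrix; this is the standard textbook linear method, not the subspace technique of the cited paper, and the function names are mine.

```python
import numpy as np

def normalize(pts):
    """Translate points to zero mean and scale so the mean distance is sqrt(2)."""
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1.0]])
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ ph.T).T, T

def fundamental_8pt(x1, x2):
    """Estimate F from >= 8 point correspondences (two Nx2 arrays)."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence x2^T F x1 = 0 gives one row of the system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 (fundamental matrices are singular).
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1      # undo the normalization
```

Given exact synthetic correspondences, the recovered F satisfies the epipolar constraint to machine precision; on real data, a robust wrapper (e.g. RANSAC) is normally added around this linear core.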

6.
Light field cameras are becoming popular in computer vision and graphics, with many research and commercial applications already having been proposed. Various types of cameras have been developed, with the camera array being one way of acquiring a 4D light field image using multiple cameras. Camera calibration is essential, since each application requires the correct projection and ray geometry of the light field. The calibrated parameters are used when the light field image is rectified from the images captured by multiple cameras. Various camera calibration approaches have been proposed for a single camera, multiple cameras, and a moving camera. However, although these approaches can be applied to calibrating camera arrays, they are not effective in terms of accuracy and computational cost. Moreover, less attention has been paid to camera calibration of a light field camera. In this paper, we propose a calibration method for a camera array and a rectification method for generating a light field image from the captured images. We propose a two-step algorithm consisting of closed-form initialization and nonlinear refinement, which extends Zhang's well-known method to the camera array. More importantly, we introduce a rigid camera constraint whereby the cameras are rigidly aligned in the array, and we utilize this constraint in our calibration. Using this constraint, we obtained much faster and more accurate calibration results in the experiments.

7.
This paper describes a three-dimensional localization process used on an agricultural robot designed to pick white asparagus. The system uses two cameras (CCD and Newvicon). Thanks to diascopic lighting, the images can easily be binarized; the digitization threshold is determined automatically by the system. A statistical study of the different shapes of asparagus tips allowed us to determine discriminating parameters for detecting the tips as they appear on the silhouette of the mound of earth. Localization is done stereometrically with the two cameras. As the robot carrying the system moves, the images are altered and the decision criteria modified. A study of the images of mobile objects produced by both tube and CCD cameras was carried out, and the phenomenon was simulated to determine how object shapes, thresholding levels, and decision parameters must change as a function of robot speed.

8.
Extrinsic calibration of heterogeneous cameras by line images
Extrinsic calibration refers to determining the relative pose of cameras. Most approaches for cameras with non-overlapping fields of view (FOV) are based on mirror reflection, object tracking, or the rigidity constraint of stereo systems, whereas cameras with overlapping FOV can be calibrated using structure-from-motion solutions. We propose an extrinsic calibration method within a structure-from-motion framework for cameras with overlapping FOV, and its extension to cameras with partially non-overlapping FOV. Recently, omnidirectional vision has become a popular topic in computer vision, as an omnidirectional camera can cover a large FOV in one image. Combining the good resolution of perspective cameras with the wide observation angle of omnidirectional cameras has been an attractive trend in multi-camera systems. For this reason, we present an approach that is applicable to heterogeneous types of vision sensors. Moreover, the method uses images of lines, as these features possess several advantages over point features, especially in urban environments. The calibration consists of a linear estimation of the orientation and position of the cameras, optionally followed by bundle adjustment to refine the extrinsic parameters.

9.
Our work targets 3D scenes in motion. In this article, we propose a method for view-dependent layered representation of 3D dynamic scenes. Using densely arranged cameras, we have developed a system that performs processing in real time from image pickup to interactive display, using video sequences instead of static images, at 10 frames per second. In our system, the images on the layers are view dependent, and we update both the shape and the image of each layer in real time. This lets us use the dynamic layers as the coarse structure of the dynamic 3D scene, which improves the quality of the synthesized images. In this sense, our prototype may be one of the first full real-time image-based modelling and rendering systems. Our experimental results show that this method is useful for interactive 3D rendering of real scenes.

10.
Affine Structure and Motion from Points, Lines and Conics
In this paper, several new methods for estimating scene structure and camera motion from an image sequence taken by affine cameras are presented. All methods can incorporate point, line, and conic features in a unified manner. The correspondence between features in different images is assumed to be known. Three new tensor representations are introduced that describe the viewing geometry for two and three cameras. The centred affine epipoles can be used to constrain the locations of corresponding points and conics in two images. The third-order, or alternatively the reduced third-order, centred affine tensors can be used to constrain the locations of corresponding points, lines, and conics in three images. The reduced third-order tensors contain only 12 components, compared with the 16 components obtained when reducing the trifocal tensor to affine cameras. A new factorization method is presented; its novelty lies in the ability to handle not only point features but also line and conic features concurrently. Another, complementary method based on so-called closure constraints is also presented; its advantage is the ability to handle missing data in a simple and uniform manner. Finally, experiments performed on both simulated and real data are reported, including a comparison with other methods.

11.
段其昌  赵钦波  杨源飞 《计算机应用》2012,32(Z1):126-127,133
In video surveillance, pan-tilt cameras are commonly used to monitor large areas. For the case where a pan-tilt camera follows a target while recording, an intrusion detection method based on feature matching is proposed. Scale-invariant feature transform (SIFT) feature-point pairs are extracted to match the current image against a panoramic image, yielding the projective relation between the two; the current image is then transformed into the panorama's coordinate system, and finally a difference operation locates the intruding target. Experimental results show that the method detects intruders correctly even when the current image differs from the panorama in scale, zoom, or deformation.
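The geometric core of entry 11 (fit a projective transform from matched feature pairs, warp the current image into the panorama's frame, then difference) can be sketched as follows. This is a simplified stand-in under stated assumptions: `homography_dlt` plays the role of the SIFT-match-plus-fit step (no actual SIFT extraction or RANSAC), sampling is nearest-neighbour, and both function names are mine.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: H mapping src -> dst from >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_and_diff(current, panorama, H, thresh=30):
    """Warp `current` into panorama coordinates with H, then difference.
    Returns a boolean mask of changed (intruding) pixels in `current`."""
    h, w = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    mapped = H @ pts
    mx = np.rint(mapped[0] / mapped[2]).astype(int)
    my = np.rint(mapped[1] / mapped[2]).astype(int)
    # Ignore pixels that map outside the panorama.
    inside = (mx >= 0) & (mx < panorama.shape[1]) & \
             (my >= 0) & (my < panorama.shape[0])
    mask = np.zeros((h, w), bool)
    diff = np.abs(current.ravel()[inside].astype(int)
                  - panorama[my[inside], mx[inside]].astype(int))
    mask.ravel()[inside] = diff > thresh
    return mask
```

A production version would use robust homography estimation on real SIFT matches and bilinear sampling, but the projective-warp-then-difference structure is the same.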

12.
A vision-based distributed surveillance system is presented. The system organizes multiple camera units into a local area network to monitor several designated areas, acquiring and processing color image sequences in real time. When an abnormal object of interest, such as a pedestrian or vehicle, enters a monitored area, the system alerts the user, and it fuses the data acquired from the multiple camera units into a single consistent decision. Each camera unit detects abnormal objects using background subtraction based on a Gaussian mixture model and tracks them with a Kalman filter. The system was tested on a laboratory local area network and showed good performance.
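A minimal NumPy sketch of the two per-camera components named in entry 12, under simplifying assumptions: a single Gaussian per pixel stands in for the full Gaussian mixture background model, and a constant-velocity Kalman filter tracks a blob centroid. Class and function names are illustrative, not from the paper.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel Gaussian background model (a single-mode simplification
    of the Gaussian-mixture model used in the cited system)."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        f = frame.astype(float)
        d2 = (f - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var       # foreground test
        bg = ~fg
        # Update the model only where the pixel matches the background.
        self.mean[bg] += self.alpha * (f - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return fg

def kalman_cv_step(x, P, z, dt=1.0, q=1e-2, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter
    tracking a blob centroid; state x = [px, py, vx, vy]."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
    Q = q * np.eye(4); R = r * np.eye(2)
    x = F @ x; P = F @ P @ F.T + Q               # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)                      # update with measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

In the distributed setting described by the abstract, each camera unit would run this loop locally and forward detections for fusion.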

13.
Pedestrian detection by means of far-infrared stereo vision
This article presents a stereo system for detecting pedestrians using far-infrared cameras. Since pedestrian detection in far-infrared images can be difficult in some environmental conditions, the system exploits three different detection approaches: warm-area detection, edge-based detection, and disparity computation. A final validation step uses the morphological and thermal characteristics of the head. Currently, neither temporal correlation nor motion cues are used in this processing. The system has been implemented on an experimental vehicle equipped with two infrared cameras and preliminarily tested in different situations.

14.
This paper presents sample-based cameras for rendering high-quality reflections on convex reflectors at interactive rates. The method supports change of view, moving objects and reflectors, higher-order reflections, view-dependent lighting of reflected objects, and reflector surface properties. In order to render reflections with the feed-forward graphics pipeline, one has to project reflected vertices. A sample-based camera is a collection of BSP trees of pinhole cameras that jointly approximate the projection function. It is constructed from the reflected rays defined by the desired view and the scene reflectors. A scene point is projected by invoking only the cameras that contain it in their frustums. Reflections are rendered by projecting the scene geometry and then rasterizing in hardware.

15.
Hybrid central catadioptric and perspective cameras are desirable in practice, because such a hybrid camera system can capture a large field of view as well as high-resolution images. However, calibrating the system is challenging due to the heavy distortions in catadioptric cameras. In addition, previous calibration methods are only suitable for systems consisting of perspective cameras and catadioptric cameras with parabolic mirrors, and they require priors about the intrinsic parameters of the perspective cameras. In this work, we provide a new approach to these problems. We show that if the hybrid camera system consists of at least two central catadioptric cameras and one perspective camera, both the intrinsic and extrinsic parameters of the system can be calibrated linearly without priors about the intrinsic parameters of the perspective cameras, and the central catadioptric cameras supported by our method can be more generic. An approximated polynomial model is derived and used for rectification of the catadioptric images. First, using the epipolar geometry between the perspective and rectified catadioptric images, the distortion parameters of the polynomial model can be estimated linearly. Then a new method is proposed to estimate the intrinsic parameters of a central catadioptric camera from the parameters of the polynomial model, so that the catadioptric cameras can be calibrated. Finally, a linear self-calibration method for the hybrid system is given using the calibrated catadioptric cameras. The main advantage of our method is that it can not only calibrate both the intrinsic and extrinsic parameters of the hybrid camera system, but also simplify the traditionally nonlinear self-calibration of perspective cameras to a linear process. Experiments show that the proposed method is robust and reliable.

16.
This paper presents a combined graphics-and-image ray-tracing algorithm for photorealistic rendering of buildings and their surrounding scenes. Using a ray-tracing technique that processes graphics and images together, object images inserted into the scene (such as trees and people) can be given plausible shadows and reflections according to the positions of the light sources and the viewpoint. By introducing a virtual-viewpoint and position-texture-mapping method, the reflections of inserted object images on modeled surfaces, and the shadows cast on those images by other objects, become more realistic. A method for handling the sky background is also proposed, so that the background can be reflected and transmitted onto building surfaces and moves into or out of view correctly as the viewpoint changes. In addition, solutions are given for continuous texture mapping of building surface materials and for simulating local reflections from standing water on the ground.

17.
For the problem of calibrating multiple cameras in a distributed environment, a practical multi-camera calibration method is proposed. The calibration requires only that each camera capture images containing a laser spot. Because information from all images is used throughout the calibration, the method is more robust than previous approaches. The whole procedure is convenient and easy to implement. Experimental results show that this is an effective multi-camera calibration method.

18.
This paper presents a real-time interactive conferencing system that allows people to communicate via animated images. The distant cameras employed act as a remote vision system: they simulate the movement of an observer and return images covering the surrounding view of the pictured person. The paper also describes the neuro-command system that controls a two-joint (two-degree-of-freedom) robot carrying the camera. This neural system interprets the user's head movements. An image-processing methodology is adopted for classification and to facilitate the neural network learning phase.

19.
《Graphical Models》2001,63(3):135-150
Traditionally, most camera-based position estimation systems use only a few points to calibrate cameras. In this paper, we investigate a novel alternative approach to 3D position estimation that uses a larger number of points arranged in a 3D grid. We present an implementation of the active-space indexing mechanism using three cameras. Given the corresponding points in the camera images, a precise estimate of the position can be obtained. The active-space indexing method can also be used as a spatial filter to eliminate a large number of possible corresponding pairs from consideration. This capability, unique to the active-space indexing method, provides a tractable algorithm for an otherwise intractable situation.
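The numerical step underlying three-camera position estimation of the kind described in entry 19 (recovering a 3D point from its projections once correspondences are established) can be sketched with standard linear (DLT) triangulation. This is a generic illustration, not the active-space indexing mechanism itself, and the function name is mine.

```python
import numpy as np

def triangulate_dlt(Ps, xs):
    """Linear (DLT) triangulation of one 3D point from its projections
    in several calibrated cameras.

    Ps: iterable of 3x4 camera projection matrices.
    xs: matching iterable of (u, v) image observations.
    """
    rows = []
    for P, (u, v) in zip(Ps, xs):
        # From x ~ P X: u * (P row 3) - (P row 1) = 0, and similarly for v.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]                 # homogeneous solution of the stacked system
    return X[:3] / X[3]        # dehomogenize
```

With three cameras the system is overdetermined (six equations, four homogeneous unknowns), which is what makes the extra camera useful for rejecting wrong correspondence pairs.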

20.
Camera networks are complex vision systems that become difficult to control as the number of sensors grows. With classic approaches, each camera has to be calibrated and synchronized individually. These tasks are often troublesome because of spatial constraints, and mostly because of the amount of information that needs to be processed. Cameras generally observe overlapping areas, producing redundant information that is acquired, transmitted, stored, and then processed. In this paper we propose a method to segment, cluster, and codify images acquired by the cameras of a network. The images are decomposed sequentially into layers in which redundant information is discarded. Without any calibration operation, each sensor contributes to building a global representation of the entire network environment. The information sent by the network is then represented by a reduced, compact amount of data using a codification process. This framework allows scene structures to be retrieved, as well as the topology of the network, and it can also provide the localization and trajectories of mobile objects. Experiments present practical results for a network of 20 cameras observing a common scene.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号