Similar Documents
20 similar documents found (search time: 780 ms)
1.
Simulation Research on an Adaptive Color-Correction Image Enhancement Algorithm (Cited: 1; self-citations: 0, others: 1)
刘捡平, 杨春蓉. 《计算机仿真》 (Computer Simulation), 2012, 29(1): 224-226, 268
This paper addresses image enhancement optimization. Many images are degraded by noise from camera shake during capture, resulting in blur, and traditional motion-deblurring algorithms are limited by high computational complexity and restrictive assumptions. To improve visual quality at low computational cost, an improved adaptive color-correction image enhancement algorithm is proposed: using the blurred image as a reference, nonlinear adaptive tone correction is applied to the underexposed image. First, a nonlinear function adjusts the colors of the individual channels; then an adaptive method corrects the luminance, yielding the final sharp image. Simulation results show that the algorithm effectively enhances images and improves image quality, demonstrating practical applicability.
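The abstract omits the exact correction functions; the two-stage idea (per-channel nonlinear adjustment, then adaptive luminance correction) can be sketched roughly as below. The gamma values and the target mean are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def nonlinear_channel_adjust(img, gammas=(0.9, 1.0, 1.1)):
    """Apply a per-channel nonlinear (gamma-style) curve to an RGB image in [0, 1].
    The per-channel exponents are hypothetical placeholders."""
    out = np.empty_like(img)
    for c, g in enumerate(gammas):
        out[..., c] = np.clip(img[..., c], 0.0, 1.0) ** g
    return out

def adaptive_luminance_correct(img, target_mean=0.5):
    """Adaptive gamma: choose the exponent so the mean luminance moves toward
    target_mean (one common form of adaptive brightness correction)."""
    mean = float(img.mean(axis=-1).mean())
    gamma = np.log(target_mean) / np.log(max(mean, 1e-6))
    return np.clip(img, 0.0, 1.0) ** gamma

def enhance(img):
    return adaptive_luminance_correct(nonlinear_channel_adjust(img))
```

For an underexposed input (mean brightness around 0.2), the adaptive step lifts the mean toward the target while the channel curves adjust color balance.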

2.
Registering a virtual scene with a real scene captured by a video camera has a number of applications, including visually guided robotic navigation, surveillance, and military training and operation. The fundamental problem involves several challenging research issues, including finding corresponding points between the virtual and the real scene and camera calibration. This paper presents our research in defining and mapping a set of reliable image features for registering the two imageries, and in extracting and selecting reliable control points for the construction of intrinsic and extrinsic camera parameters. A number of innovative algorithms are presented, followed by extensive experimental analysis. An application of registering a virtual database image with a video image is presented. The algorithms we developed for calculating and matching linear structured features and selecting reliable control points are applicable to image registration beyond virtual and real imageries.

3.
The focused plenoptic camera differs from the traditional plenoptic camera in that its microlenses are focused on the photographed object rather than at infinity. The spatio-angular tradeoffs available with this approach enable rendering of final images that have significantly higher resolution than those from traditional plenoptic cameras. Unfortunately, this approach can result in visible artifacts when basic rendering is used. In this paper, we present two new methods that work together to minimize these artifacts. The first method is based on careful design of the optical system. The second method is computational and based on a new lightfield rendering algorithm that extracts the depth information of a scene directly from the lightfield and then uses that depth information in the final rendering. Experimental results demonstrate the effectiveness of these approaches.

4.
A general method is developed for adapting pixel-by-pixel background subtraction models designed for a fixed camera to the case of a PTZ camera mounted on a mobile platform. The method involves the use of two identical pixel-level background models. The first is applied directly to the classification of the current scene, while the second is used to prepare the first model for the transformation to the new coordinate system, the one closer to the current position of the PTZ video camera. The chosen solution makes it possible to eliminate a large number of false positives, which inevitably occur after each transformation of the background model. Experimental verification of the developed method using two well-known background models, GMM and ViBe, demonstrated good scene-classification quality and low computational load.
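GMM and ViBe are too involved for a short sketch, but the two-model bookkeeping the abstract describes can be illustrated with a simple running-average background model standing in for them. The model class, threshold values, and `warp` callback are assumptions for illustration:

```python
import numpy as np

class RunningAvgBackground:
    """Pixel-wise running-average background model (a simple stand-in for
    GMM/ViBe); classifies pixels far from the background estimate as foreground."""
    def __init__(self, first_frame, alpha=0.05, thresh=30.0):
        self.bg = first_frame.astype(np.float64)
        self.alpha, self.thresh = alpha, thresh

    def classify(self, frame):
        """Return a boolean foreground mask, then update the model."""
        fg = np.abs(frame - self.bg) > self.thresh
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return fg

def step(active, warmup, frame, camera_moved, warp):
    """Two identical models: 'active' classifies the current frame, while
    'warmup' is kept ready for the next camera pose. After a PTZ move, the
    warmup model (re-anchored via `warp` to the new view) becomes active,
    which suppresses the burst of false positives a raw transform causes."""
    if camera_moved:
        active, warmup = warmup, active
        warmup.bg = warp(active.bg)  # re-anchor the spare model to the new view
    mask = active.classify(frame)
    return active, warmup, mask
```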

5.
Image processing strongly relies on the quality of the input images, as images of appropriate quality can significantly decrease the development effort for image processing and computer vision algorithms. A flexible acquisition system for image enhancement, able to operate in real time under changing brightness conditions, is suggested. The system is based on controlling the aperture of the acquisition camera lens, which makes it usable in combination with all types of image sensors. The control scheme is based on an adaptive image quality estimator and can be used to enhance a variety of spatio-temporal properties. Those properties are characterized by a time-varying characteristic, a spatial characteristic, or both, i.e. spatio-temporal characteristics of the imaged scene. A region of interest is derived from the more abstract spatio-temporal property. We present results for aperture control adapted to regions of interest characterized by 2D and 3D spatio-temporal properties. We investigate control implemented in software and aimed towards different spatio-temporal properties. Hardware configuration and real-time acquisition capability for statically and dynamically changing image content are demonstrated, and adaptation time and improvement of image quality are measured and compared.

Received: 30 August 2003, Accepted: 17 May 2004, Published online: 20 August 2004. This work was carried out within the K plus Competence Center ADVANCED COMPUTER VISION and was funded from the K plus program. We thank Professor Walter Kropatsch for critical comments and fruitful discussions on the paper content and methodology. Austrian patent granted under no. A 705/2002.

6.
A vision-based 3-D scene analysis system is described that is capable of modeling complex real-world scenes, like streets and buildings, automatically from stereoscopic image pairs. Input to the system is a sequence of stereoscopic images taken with two standard CCD cameras and TV lenses. The relative orientation of the two cameras to each other is known by calibration. The camera pair is then moved throughout the scene and a long sequence of closely spaced views is recorded. Each of the stereoscopic image pairs is rectified, and a dense map of 3-D surface points is obtained by area correlation, object segmentation, interpolation, and triangulation. 3-D camera motion relative to the scene coordinate system is tracked directly from the image sequence, which allows 3-D surface measurements from different viewpoints to be fused into a consistent 3-D model scene. The surface geometry of each scene object is approximated by a triangular surface mesh, which stores the surface texture in a texture map. From the textured 3-D models, realistic-looking image sequences from arbitrary viewpoints can be synthesized using computer graphics.

7.
Modeling the space of camera response functions (Cited: 2; self-citations: 0, others: 2)
Many vision applications require precise measurement of scene radiance. The function relating scene radiance to image intensity of an imaging system is called the camera response. We analyze the properties that all camera responses share. This allows us to find the constraints that any response function must satisfy. These constraints determine the theoretical space of all possible camera responses. We have collected a diverse database of real-world camera response functions (DoRF). Using this database, we show that real-world responses occupy a small part of the theoretical space of all possible responses. We combine the constraints from our theoretical space with the data from DoRF to create a low-parameter empirical model of response (EMoR). This response model allows us to accurately interpolate the complete response function of a camera from a small number of measurements obtained using a standard chart. We also show that the model can be used to accurately estimate the camera response from images of an arbitrary scene taken using different exposures. The DoRF database and the EMoR model can be downloaded at http://www.cs.columbia.edu/CAVE.
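EMoR's actual basis comes from PCA over the DoRF database; as a rough stand-in, a low-parameter response curve can be fitted to a few chart measurements with an ordinary polynomial, normalized so that f(0) = 0 and f(1) = 1 as the analysis of response-function constraints suggests:

```python
import numpy as np

def fit_response(irradiance, intensity, degree=3):
    """Fit a low-parameter response curve I = f(E) from a few chart measurements.
    A plain polynomial is a stand-in for the PCA basis used by EMoR."""
    # Anchor the endpoints per the normalization f(0)=0, f(1)=1.
    E = np.concatenate(([0.0], np.asarray(irradiance, float), [1.0]))
    I = np.concatenate(([0.0], np.asarray(intensity, float), [1.0]))
    return np.polyfit(E, I, degree)

def apply_response(coeffs, E):
    """Evaluate the fitted response at irradiance E."""
    return np.polyval(coeffs, E)
```

With a handful of chart measurements the fitted curve interpolates the full response, which is the interpolation use-case the abstract describes.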

8.
This paper considers the problem of how to use a camera to recognize the shape of a path without using too much image processing time. A quadrangle image scene is divided into three parts, and the center of gravity of an object in each part is extracted to estimate the shape of the path. A strategy for measuring distances with a camera is also presented. The idea behind this strategy is first to establish, by experiment, a mathematical model describing the relationship between pixel distance in the image scene and real distance in front of the mobile vehicle, and then to determine the relative position of the object by means of rotation of the camera. These image processing methods can be used in control problems with mobile vehicles/car-like robots, and can be used in fuzzy control, neural control, and other control strategies. This work was presented, in part, at the 2nd International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1997.
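The empirical pixel-to-real-distance model the abstract mentions might look like the following fit of experimental calibration pairs; the polynomial form and all sample values are assumptions for illustration, not the paper's data:

```python
import numpy as np

def fit_pixel_to_distance(pixel_rows, real_distances, degree=2):
    """Fit an empirical model mapping pixel distance in the image to real
    distance in front of the vehicle, from calibration experiments."""
    return np.polyfit(pixel_rows, real_distances, degree)

def estimate_distance(model, pixel_row):
    """Evaluate the fitted model at a new pixel measurement."""
    return float(np.polyval(model, pixel_row))
```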

9.
Many recent real-time markerless camera tracking systems assume the existence of a complete 3D model of the target scene. The system developed in the MATRIS project also assumes that a scene model is available. This can be a freeform surface model generated automatically from an image sequence using structure-from-motion techniques, or a textured CAD model built manually using commercial software. The offline model provides 3D anchors for the tracking. These are stable natural landmarks, which are not updated and thus prevent an accumulating error (drift) in the camera registration by giving an absolute reference. However, sometimes it is not feasible to model the entire target scene in advance, e.g. for parts that are not static, or one would like to employ existing CAD models that are not complete. In order to allow camera movements beyond the parts of the environment modelled in advance, it is desirable to derive additional 3D information online. Therefore, a markerless camera tracking system for calibrated perspective cameras has been developed, which employs 3D information about the target scene and complements this knowledge online by reconstruction of 3D points. The proposed algorithm is robust and reduces drift, the most dominant problem of simultaneous localisation and mapping (SLAM), in real time by a combination of the following crucial points: (1) stable tracking of long-term features on the 2D level; (2) use of robust methods like the well-known Random Sample Consensus (RANSAC) for all 3D estimation processes; (3) consequent propagation of errors and uncertainties; (4) careful feature selection and map management; (5) incorporation of epipolar constraints into the pose estimation. Validation results on the operation of the system on synthetic and real data are presented.

10.
Depth-of-field (DOF) and motion blur are important visual cues used for computer graphics and photography to illustrate focus of attention and object motion. In this work, we present a method for photo-realistic DOF and motion blur generation based on the characteristics of a real camera system. Both the depth–blur relation for different camera focus settings and the nonlinear intensity response of image sensors are modeled. The camera parameters are calibrated and used for defocus and motion blur synthesis. For a well-focused real scene image, DOF and motion blur effects are generated by post-processing techniques. Experiments have shown that the proposed method generates more photo-consistent results than the commonly used graphical models.
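The paper calibrates its own depth–blur relation; for orientation, the standard thin-lens circle-of-confusion relation behind such models can be written as a small function (this is the textbook formula, not the authors' calibrated model):

```python
def blur_diameter(depth, focus_depth, focal_length, f_number):
    """Thin-lens circle-of-confusion diameter (same units as focal_length)
    for an object at `depth` when the lens is focused at `focus_depth`:
        c = A * f * |d - s| / (d * (s - f)),  with aperture A = f / N."""
    aperture = focal_length / f_number
    return abs(aperture * focal_length * (depth - focus_depth)
               / (depth * (focus_depth - focal_length)))
```

Objects at the focus distance have zero blur, and blur grows with defocus, which is the depth–blur monotonicity such calibration exploits.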

11.
Design of a General-Purpose Visual Scene Simulation Software System for Virtual Ground-Air Battlefields (Cited: 3; self-citations: 0, others: 3)
To meet the 3D demonstration needs of ground-air battlefield simulation, a general-purpose visual scene simulation software system was designed on a high-level 3D graphics development environment, supporting multiple simulation driving modes such as real-time data and manually planned paths. 3D weapon entities and terrain models were built with MultiGen Creator; the sky, ground background, and battlefield special effects were developed in the Vega simulation environment; large-scale terrain management techniques were used to overcome the computer's insufficient capacity for real-time processing of large battlefield terrain; particle systems were designed to simulate various complex scene effects; and the visual simulation program was developed in C++ by calling the Vega API. Development and application results show that the system is flexible, general, and visually vivid, and that compared with earlier visual simulation software developed directly on the OpenGL graphics library, it improves image fidelity and shortens the development cycle.

12.
Intelligent transportation applications in single-camera scenes have developed well, but cross-region research is still in its infancy. To address this, this paper proposes a cross-camera scene stitching method based on camera calibration. First, vanishing-point calibration is used to establish, for each of the two camera scenes, the mapping from physical information in a sub-world coordinate system to the 2D image. Next, the projective transformation between the cameras is computed from information common to the two sub-world coordinate systems. Finally, road scenes are stitched using the proposed inverse-projection idea and translation-vector relations. Experimental results show that the method achieves good road-scene stitching and cross-region physical measurement of roads, laying a foundation for related practical applications.
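The inter-camera projective transformation from common points can be estimated with the standard direct linear transform (DLT). The sketch below is a generic DLT from point correspondences, not the paper's specific vanishing-point calibration pipeline:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: estimate the 3x3 projective transform mapping
    src -> dst from >= 4 point correspondences (here standing in for physical
    points common to the two cameras' sub-world coordinate systems)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)      # null-space vector gives H up to scale
    return H / H[2, 2]

def project(H, pt):
    """Apply a homography to a 2D point (homogeneous normalization)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```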

13.
The appearance of a static scene as sensed by a camera changes considerably as a result of changes in the illumination that falls upon it. Scene appearance modeling is thus necessary for understanding which changes in the appearance of a scene are the result of illumination changes. For any camera, the appearance of the scene is a function of the illumination sources in the scene, the three-dimensional configuration of the objects in the scene and the reflectance properties of all the surfaces in the scene. A scene appearance model is described here as a function of the behavior of static illumination sources, within or beyond the scene, and arbitrary three-dimensional configurations of patches and their reflectance distributions. Based on the suggested model, a spatial prediction technique was developed to predict the appearance of the scene, given a few measurements within it. The scene appearance model and the prediction technique were developed analytically and tested empirically. Two potential applications are briefly explored.

14.
We present an approach that significantly enhances the capabilities of traditional image mosaicking. The key observation is that as a camera moves, it senses each scene point multiple times. We rigidly attach to the camera an optical filter with spatially varying properties, so that multiple measurements are obtained for each scene point under different optical settings. Fusing the data captured in the multiple images yields an image mosaic that includes additional information about the scene. We refer to this approach as generalized mosaicing. In this paper we show that this approach can significantly extend the optical dynamic range of any given imaging system by exploiting vignetting effects. We derive the optimal vignetting configuration and implement it using an external filter with spatially varying transmittance. We also derive efficient scene sampling conditions as well as ways to self calibrate the vignetting effects. Maximum likelihood is used for image registration and fusion. In an experiment we mounted such a filter on a standard 8-bit video camera, to obtain an image panorama with dynamic range comparable to imaging with a 16-bit camera.
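The core dynamic-range idea can be illustrated by fusing several 8-bit measurements of one scene point taken through different known attenuations. This sketch simplifies the paper's maximum-likelihood fusion to discarding saturated samples and averaging the attenuation-normalized rest; the saturation level and weighting are assumptions:

```python
import numpy as np

def fuse_radiance(measurements, transmittances, full_well=255.0):
    """Fuse multiple 8-bit readings of the same scene point, each taken through
    a different known transmittance, into one radiance estimate. Saturated
    samples carry no information beyond a lower bound and are discarded."""
    m = np.asarray(measurements, float)
    t = np.asarray(transmittances, float)
    valid = m < full_well
    if not valid.any():
        return full_well / t.min()   # lower bound when every sample saturates
    return float((m[valid] / t[valid]).mean())
```

A bright point that saturates the unfiltered samples is still recovered from the strongly attenuated one, which is how the spatially varying filter extends dynamic range across the mosaic.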

15.
Camera calibration is an essential issue in many computer vision tasks in which quantitative information about a scene is to be derived from its images. It is concerned with the determination of a set of parameters from the given images. In the literature, it has been modeled as a nonlinear global optimization problem and has been solved using various optimization techniques. In this article, a recently developed variant of a very popular global optimization technique, the particle swarm optimization (PSO) algorithm, has been used for solving this problem for a stereo camera system modeled by the pin-hole camera model. Extensive experiments have been performed on synthetic data to test the applicability of the technique to this problem. The simulation results, which have been compared with those obtained by a real-coded genetic algorithm (RCGA) in the literature, show that the proposed PSO performs somewhat better than the RCGA in terms of computational effort.
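A minimal global-best PSO of the kind applied to such calibration problems can be sketched as follows; the inertia and acceleration coefficients are typical textbook values, not the article's settings, and it is exercised here on a generic objective rather than a reprojection-error cost:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer. `objective` maps a
    parameter vector (e.g. camera parameters) to a scalar cost to minimize."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()                                  # personal bests
    pval = np.array([objective(p) for p in x])
    g = pbest[pval.argmin()].copy()                   # global best
    w, c1, c2 = 0.7, 1.5, 1.5                         # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([objective(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())
```

For calibration, `objective` would be the reprojection error of a candidate parameter vector over the synthetic image points.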

16.
In this paper, we present a personal area situation understanding (PASU) system, a novel application of a smart device using wireless camera sensor networks. The portability of a PASU system makes it an attractive solution for monitoring and understanding the current situation of the personal area around a user. The PASU system allows its user to construct a 3D scene of the environment and view the scene from various vantage points for better understanding of the environment. The paper describes the architecture and implementation of the PASU system, addressing limitations of wireless camera sensor networks such as low bandwidth and limited computational capabilities. The capabilities of PASU are validated with extensive experiments. The PASU system demonstrates the potential of a portable system combining a smart device and a wireless camera sensor network for personal area monitoring and situation understanding.

17.
Objective: Visual perception is a key technology in intelligent vehicle systems, but how to effectively improve visual performance under complex challenges has become an important research topic in intelligent driving. This paper introduces into the visual perception of intelligent driving the ACP methodology, composed of artificial societies, computational experiments, and parallel execution, and proposes parallel visual perception for intelligent driving. It solves the problems of properly training and evaluating visual models and helps move intelligent driving toward practical application. Method: Parallel visual perception simulates actual driving scenes by combining artificial subsystems, constructing artificial driving scenes that serve as a "computational laboratory" for intelligent-vehicle visual perception; visual models are trained and evaluated through the two operating modes of computational experiments; finally, parallel execution dynamically optimizes the visual models, ensuring that perception and understanding of complex challenges remain effective in the long term. Results: Experiments show that in the training phase of object detection, mixed virtual-real data reaches a top accuracy of 60.9%, which is 17.9% and 5.3% higher than using only KPC data (KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute), PASCAL VOC (pattern analysis, statistical modelling and computational learning visual object classes), and MS COCO (Microsoft common objects in context)) and only virtual data, respectively; in the evaluation phase, relative to the baseline data, average precision drops by 11.3% on regular tasks (-30° with vertical shift), by 21.0% on environmental tasks (fog), and by 33.7% on difficult tasks (all challenges). Conclusion: This paper designs and conducts, for intelligent driving, visual computational experiments that are difficult or even impossible to perform in actual driving scenes, analyzes and evaluates complex visual challenges, and strengthens intelligent vehicles' perception and understanding of surrounding scenes while driving.

18.
Camera geometries for image matching in 3-D machine vision (Cited: 2; self-citations: 0, others: 2)
The location of a scene element can be determined from the disparity of two of its depicted entities (each in a different image). Prior to establishing disparity, however, the correspondence problem must be solved. It is shown that for the axial-motion stereo camera model, the probability of determining unambiguous correspondence assignments is significantly greater than for other stereo camera models. However, the mere geometry of the stereo camera system does not provide sufficient information for uniquely identifying correct correspondences. Therefore, additional constraints derived from justifiable assumptions about the scene domain and from the scene radiance model are utilized to reduce the number of potential matches. The measure for establishing the correct correspondence is shown to be a function of the geometrical constraints, scene constraints, and scene radiance model.
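For reference, once a correspondence is established, the basic rectified-stereo relation recovers depth from disparity as Z = fB/d (the paper's axial-motion model uses a different camera geometry; this is the standard lateral-stereo form):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified lateral stereo: depth Z = f * B / d, with focal length f in
    pixels, baseline B in meters, and disparity d in pixels."""
    return focal_px * baseline_m / disparity_px
```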

19.
The application of computer-aided resolution of controversial plays in sport events significantly benefits organizers, referees, and the audience. Nowadays, especially in ball sports, very accurate technological solutions can be found. The main drawback of these systems is their need for complex and expensive hardware, which makes them unaffordable for less-known regional/traditional sports events. The lack of competitive systems with reduced hardware/software complexity and requirements motivates this research. Visual analytics technologies allow the system to detect the ball trajectory, resolving possible controversial plays with precision. The ball is extracted from the video scene by exploiting its shape features and velocity vector properties. Afterwards, its position relative to the border line is calculated based on polynomial approximations. In order to enhance the user's visual experience, real-time rendering technologies are introduced to obtain a virtual 3D reconstruction in quasi real time. Compared to other setups, the main contribution of this work lies in the use of a single camera per border line to extract 3D bounce-point information. In addition, the system has no camera location/orientation limit, provided that the line view is not occluded. Testing of the system has been done in real-world scenarios, comparing the system output with referees' judgment. Visual results of the system have been broadcast during Basque Pelota matches.
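The polynomial approximation of the trajectory and the bounce-point extraction might be sketched as below; the choice of a linear x(t) and quadratic y(t), and the in/out rule, are illustrative assumptions rather than the paper's formulation:

```python
import numpy as np

def bounce_point(ts, xs, ys):
    """Fit the tracked ball positions with polynomials (x linear, y quadratic)
    and return (x, t) of the bounce: the first future instant at which the
    fitted height y(t) reaches zero."""
    cy = np.polyfit(ts, ys, 2)           # y(t) = a t^2 + b t + c
    cx = np.polyfit(ts, xs, 1)           # x(t) = m t + n
    roots = np.roots(cy)
    t_hit = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= ts[0])
    return float(np.polyval(cx, t_hit)), float(t_hit)

def is_in(x_bounce, line_x):
    """Call the play 'in' if the bounce lands on or before the border line."""
    return x_bounce <= line_x
```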

20.
Visual Surveillance of Traffic Scenes Based on 3D Models (Cited: 4; self-citations: 2, others: 4)
Visual surveillance is a frontier research direction in computer vision. Visual surveillance of dynamic scenes uses the theories and methods of computer vision and artificial intelligence to automatically analyze image sequences recorded by cameras, in order to locate, track, and recognize moving objects in the scene and to judge or interpret their behavior, thereby achieving the goal of surveillance. For the specific task of traffic-scene surveillance, this paper implements a traffic-scene visual surveillance system comprising functional algorithms for camera calibration, model visualization, pose optimization and localization of moving vehicles, tracking and prediction, and behavior understanding based on trajectory analysis. From the perspectives of algorithms and implementation, the paper describes and discusses each functional module of the system in some detail.
