Similar Documents
20 similar documents found (search time: 16 ms)
1.
2.
The haze phenomenon seriously interferes with image acquisition and reduces image quality. Owing to many uncertain factors, dehazing is typically a challenge in image processing. Most existing deep learning-based dehazing approaches apply the atmospheric scattering model (ASM) or a similar physical model, which originally comes from traditional dehazing methods. However, the data sets used to train deep networks do not match this model well, for three reasons. First, the atmospheric illumination in ASM is obtained from prior experience, which is not accurate for dehazing real scenes. Second, it is difficult to obtain the depth of outdoor scenes for ASM. Third, haze is a complex natural phenomenon, and it is difficult to find an accurate physical model and related parameters to describe it. In this paper, we propose a black-box method in which haze is treated as an image quality problem, without using any physical model such as ASM. Analytically, we propose a novel dehazing equation that combines two mechanisms: an interference item and a detail enhancement item. The interference item estimates the haze information for dehazing the image, and the detail enhancement item then repairs and enhances the details of the dehazed image. Based on the new equation, we design an anti-interference and detail enhancement dehazing network (AIDEDNet), which differs dramatically from existing dehazing networks in that our network is fed haze-free images for training. Specifically, we propose a new way to construct a haze patch on the fly during network training: the patch is randomly selected from the input images, and the thickness of the haze is also randomly set. Numerous experimental results show that AIDEDNet outperforms state-of-the-art methods on both synthetic and real-world haze scenes.
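The on-the-fly haze-patch construction described above can be sketched as follows; the function name, patch-size ranges, and thickness range are illustrative assumptions, not details from the paper:

```python
import numpy as np

def add_haze_patch(img, rng=None):
    """Add a synthetic haze patch to a haze-free image (illustrative sketch).

    A rectangular patch is chosen at a random location, and a random haze
    thickness t blends the patch toward white, mimicking the on-the-fly
    training-sample construction the abstract describes. Size and thickness
    ranges are assumed values, not the paper's.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    ph = int(rng.integers(h // 4, h // 2 + 1))   # random patch height
    pw = int(rng.integers(w // 4, w // 2 + 1))   # random patch width
    y = int(rng.integers(0, h - ph + 1))         # random patch location
    x = int(rng.integers(0, w - pw + 1))
    t = float(rng.uniform(0.3, 0.9))             # random haze thickness
    hazy = img.astype(np.float32).copy()
    patch = hazy[y:y + ph, x:x + pw]
    hazy[y:y + ph, x:x + pw] = (1 - t) * patch + t * 255.0  # blend toward white
    return hazy.astype(img.dtype), (y, x, ph, pw, t)
```

Training pairs are then the hazy image as input and the original haze-free image as target.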

3.
We investigated the human perceptual performance allowed by the relatively impoverished information conveyed in nighttime natural scenes. We used images of nighttime outdoor scenes rendered by image-intensified low-light visible (i2) sensors, thermal infrared (ir) sensors, and an i2/ir fusion technique with information added. We found that nighttime imagery provides adequate low-level image information for effective perceptual organization on a classification task, but that performance for exemplars within a given object category depends on the image type. Overall performance was best with the false-color fused images. This is consistent with the suggestion in the literature that color plays a predominant role in perceptual grouping and segmentation of objects in a scene, and it supports the suggestion that the addition of color to complex achromatic scenes aids the perceptual organization required for visual search. In the present study, we address the assessment of perceptual performance with alternative night-vision sensors and fusion methods, and begin to characterize the perceptual organization abilities permitted by the information in relatively impoverished images of complex scenes. Applications of this research include improving night-vision, medical, and other devices that use alternative sensors or degraded imagery.

4.
Real-time modeling and rendering of raining scenes (total citations: 1; self-citations: 0; by others: 1)
Real-time modeling and rendering of a realistic raining scene is a challenging task, because the visual effects of rain involve complex physical mechanisms reflecting the physical, optical, and statistical characteristics of raindrops. In this paper, we propose a set of new methods to model the raining scene according to these physical mechanisms. First, adhering to the physical characteristics of raindrops, we model the shapes, movements, and intensity of raindrops in different situations. Then, based on the principle of human vision persistence, we develop a new model to calculate the shapes and appearances of rain streaks. To render the foggy effect in a raining scene, we present a statistically based multi-particle scattering model that exploits the coherence of the particle distribution along each viewing ray. By decomposing the conventional equations for single scattering of non-isotropic light into two parts, with the physical-parameter-independent part precalculated, we are able to render the respective scattering effects in real time. We also realize the diffraction of lamps, wet ground, and ripples on puddles in the raining scene, as well as rainbows. By incorporating GPU acceleration, our approach permits real-time walkthrough of various raining scenes at an average rendering speed of 20 fps, and the results are quite satisfactory.

5.
Moving object detection in dynamic scenes is a basic task in a surveillance system for sensor data collection. In this paper, we present a powerful background subtraction algorithm, the Gaussian-kernel density estimator (G-KDE), that improves accuracy and reduces the computational load. The main innovation is that we divide background changes into continuous and stable changes, to deal with dynamic scenes and with moving objects that first merge into the background, and we model the background separately using both a KDE model and Gaussian models. To obtain a temporal-spatial background model, sample selection at the update stage is based on the concept of the region average. In the detection stage, neighborhood information content (NIC) is used to suppress false detections caused by small, un-modeled movements in the scene. Experimental results on three separate sequences indicate that this method is well suited for the precise detection of moving objects in complex scenes and can be used efficiently in various detection systems.
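A minimal sketch of Gaussian-kernel density estimation for per-pixel background subtraction, the mechanism at the core of G-KDE; the fixed bandwidth, threshold, and function names are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def kde_background_prob(samples, pixel, bandwidth=15.0):
    """Gaussian-kernel density estimate that `pixel` belongs to the background.

    `samples` is an (N, C) array of recent background observations for one
    pixel; `pixel` is the current (C,) observation. A single fixed bandwidth
    is an assumed simplification.
    """
    diff = samples - pixel                                   # (N, C)
    k = np.exp(-0.5 * np.sum((diff / bandwidth) ** 2, axis=1))
    k /= (np.sqrt(2 * np.pi) * bandwidth) ** samples.shape[1]
    return float(np.mean(k))                                 # average over kernels

def is_foreground(samples, pixel, threshold=1e-6):
    """Classify a pixel as foreground when its background density is low."""
    return kde_background_prob(samples, pixel) < threshold
```

In a full system the sample set is updated over time and the decision is refined with spatial neighborhood information, as the abstract describes.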

6.
7.
Our work targets 3D scenes in motion. In this article, we propose a method for view-dependent layered representation of 3D dynamic scenes. Using densely arranged cameras, we have developed a system that can perform processing in real time, from image capture to interactive display, using video sequences instead of static images, at 10 frames per second. In our system, the images on layers are view dependent, and we update both the shape and image of each layer in real time. This lets us use the dynamic layers as the coarse structure of the dynamic 3D scenes, which improves the quality of the synthesized images. In this sense, our prototype system may be one of the first fully real-time image-based modeling and rendering systems. Our experimental results show that this method is useful for interactive 3D rendering of real scenes.

8.
Background modeling and subtraction are core components in video processing. One aims to recover and continuously update a representation of the scene that is compared with the current input to perform subtraction. Most existing methods treat each pixel independently and attempt to model the background perturbation through statistical modeling such as a mixture of Gaussians. While such methods perform satisfactorily in many scenarios, they do not model the relationships and correlation amongst nearby pixels. Such correlation between pixels exists both in space and across time, especially when the scene consists of image structures moving across space; waving trees, beaches, escalators, and natural scenes with rain or snow are examples. In this paper, we propose a method for differentiating between image structures and motion that are persistent and repeated and those that are "new". To capture the appearance characteristics of such scenes, we propose the use of an appropriate subspace created from image structures. Furthermore, the dynamical characteristics are captured by a prediction mechanism in this subspace. Since the model must adapt to long-term changes in the background, an incremental method for fast online adaptation of the model parameters is proposed. Given such adaptive models, robust and meaningful detection measures that consider both structural and motion changes are introduced. Experimental results, including qualitative and quantitative comparisons with existing background modeling/subtraction techniques, demonstrate the very promising performance of the proposed framework when dealing with complex backgrounds.
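The subspace idea can be illustrated with a small sketch: build a linear basis from vectorized background frames and flag "new" structure by its reconstruction error. This is a simplified stand-in under assumed names, using a batch SVD rather than the paper's incremental adaptation, and omitting the prediction mechanism:

```python
import numpy as np

def subspace_background_model(frames, k=5):
    """Build a linear subspace capturing repeated background structure.

    `frames` is an (N, D) array of vectorized background frames; the top-k
    right singular vectors span the appearance subspace.
    """
    mean = frames.mean(axis=0)
    _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, vt[:k]                 # (D,) mean and (k, D) orthonormal basis

def reconstruction_error(frame, mean, basis):
    """Distance of a frame from the subspace; a large value signals
    structure not explained by the background model."""
    centered = frame - mean
    projected = basis.T @ (basis @ centered)   # project onto the subspace
    return float(np.linalg.norm(centered - projected))
```

A pixel region whose reconstruction error stays small is treated as persistent, repeated background; a large error indicates a candidate foreground change.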

9.
10.
Digital landscape realism often comes from the multitude of details that are hard to model, such as fallen leaves, rock piles, or entangled fallen branches. In this article, we present a method for augmenting natural scenes with a huge amount of details such as grass tufts, stones, leaves, or twigs. Our approach takes advantage of the observation that these details can be approximated by replications of a few similar objects, and therefore relies on mass-instancing. We propose an original structure, the Ghost Tile, that stores a huge number of overlapping candidate objects in a tile, along with a pre-computed collision graph. Details are created by traversing the scene with the Ghost Tile and generating instances according to user-defined density fields, which allow sculpting layers and piles of entangled objects while providing control over their density and distribution.

11.
Bayesian modeling of dynamic scenes for object detection (total citations: 11; self-citations: 0; by others: 11)
Accurate detection of moving objects is an important precursor to stable tracking or recognition. In this paper, we present an object detection scheme that has three innovations over existing approaches. First, the model of the intensities of image pixels as independent random variables is challenged, and it is asserted that useful correlation exists in the intensities of spatially proximal pixels. This correlation is exploited to sustain high levels of detection accuracy in the presence of dynamic backgrounds. By using a nonparametric density estimation method over a joint domain-range representation of image pixels, multimodal spatial uncertainties and complex dependencies between the domain (location) and range (color) are modeled directly, and the background is modeled as a single probability density. Second, temporal persistence is proposed as a detection criterion: unlike previous approaches, which detect objects by building adaptive models of the background alone, the foreground is also modeled to augment the detection of objects (without explicit tracking), since objects detected in the preceding frame contain substantial evidence for detection in the current frame. Finally, the background and foreground models are used competitively in a MAP-MRF decision framework that stresses spatial context as a condition for detecting interesting objects; the posterior function is maximized efficiently by finding the minimum cut of a capacitated graph. Experimental validation of the proposed method is performed and presented on a diverse set of dynamic scenes.

12.
Quick-VDR: out-of-core view-dependent rendering of gigantic models (total citations: 10; self-citations: 0; by others: 10)
We present a novel approach for interactive view-dependent rendering of massive models. Our algorithm combines view-dependent simplification, occlusion culling, and out-of-core rendering. We represent the model as a clustered hierarchy of progressive meshes (CHPM), using the cluster hierarchy for coarse-grained selective refinement and the progressive meshes for fine-grained local refinement. We present an out-of-core algorithm for computing a CHPM that includes cluster decomposition, hierarchy generation, and simplification, and we introduce novel cluster dependencies in the preprocess to generate crack-free, drastic simplifications at runtime. The clusters are used for LOD selection, occlusion culling, and out-of-core rendering. We add a frame of latency to the rendering pipeline to fetch newly visible clusters from disk and avoid stalls. The CHPM reduces the refinement cost of view-dependent rendering by more than an order of magnitude compared to a vertex hierarchy. We have implemented our algorithm on a desktop PC and can render massive CAD, isosurface, and scanned models, consisting of tens to a few hundred million triangles, at 15-35 frames per second with little loss in image quality.

13.
This paper proposes a new approach for multi-object 3D scene modeling. Scenes with multiple objects are characterized by object occlusions under several views, complex illumination conditions due to multiple reflections and shadows, and a variety of object shapes and surface properties. These factors raise huge challenges when attempting to model a real 3D multi-object scene using existing approaches, which are designed mainly for single-object modeling. The proposed method relies on the initialization provided by a rough 3D model of the scene estimated from the given set of multi-view images. The contributions described in this paper consist of two new methods for identifying and correcting errors in the reconstructed 3D scene. The first corrects the locations of 3D patches in the scene after detecting the disparity between pairs of their projections into the images. The second, called shape-from-contours, identifies discrepancies between the projections of 3D objects and their corresponding contours segmented from the images. Both unsupervised and supervised segmentation are used to define the contours of the objects.

14.
This paper presents a novel approach based on contextual Bayesian networks (CBNs) for natural scene modeling and classification. The structure of the CBN is derived from domain knowledge, and its parameters are learned from training images. For test images, hybrid streams of semantic features of the image content and spatial information are piped into the CBN-based inference engine, which is capable of incorporating domain knowledge as well as dealing with a number of input evidences, producing category labels for the entire image. We demonstrate the promise of this approach for natural scene classification, comparing it with several state-of-the-art approaches.

15.
In this paper, we present a novel graph database-mining method called APGM (APproximate Graph Mining) to mine useful patterns from noisy graph databases. In our method, we design a general framework for modeling the noise distribution using a probability matrix and devise an efficient algorithm to identify approximately matched frequent subgraphs. We have applied APGM to both a synthetic data set and real-world data sets for protein structure pattern identification and structure classification. Our experimental study demonstrates the efficiency and efficacy of the proposed method.
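Scoring candidate matches against a label-substitution probability matrix, the noise model named in the abstract, can be sketched as follows; the function, the product-of-probabilities scoring, and the threshold convention are illustrative assumptions, not APGM's exact algorithm:

```python
import numpy as np

def approx_match_score(pattern_labels, graph_labels, P):
    """Score a candidate node mapping between a pattern and a noisy graph.

    P[a, b] is the (hypothetical) probability that a node truly labeled `a`
    is observed with label `b` in the noisy data. The score is the product of
    substitution probabilities over the mapped nodes; a mapping is kept as an
    approximate occurrence when the score stays above a chosen threshold.
    """
    score = 1.0
    for a, b in zip(pattern_labels, graph_labels):
        score *= P[a, b]
    return score
```

Exact matching is the special case where P is the identity matrix, so any substitution drives the score to zero.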

16.
Lin  Yijun  Wu  Fengge  Zhao  Junsuo 《Neural computing & applications》2023,35(11):8227-8241
Neural Computing and Applications - High-resolution (HR) remote sensing images provide rich information for human activities. However, processing entire HR images is time-consuming, and many...

17.
This paper presents a novel method for the registration of texture images with a 3D model of outdoor scenes. We pose image registration as an optimization problem that uses knowledge of the sun’s position to estimate shadows in a scene, and use the shadows produced as a cue to solve for the registration parameters. Results are presented on a controlled experiment and for a large scale model of an archaeological site in Sicily.

18.
19.
20.
In collaboration with the CNES, a new modeling method for telecommunication devices, presented in this article, has been implemented and can run on any computer. This method combines the variational method, based on electric equivalent-circuit representations, with the boundary element method. Results compare successfully with measurements and with results from previous publications. © 2002 Wiley Periodicals, Inc. Int J RF and Microwave CAE 12: 247–258, 2002.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)