Similar Documents
20 similar documents found; search time: 0 ms
1.
Bayesian modeling of uncertainty in low-level vision
The need for error modeling, multisensor fusion, and robust algorithms is becoming increasingly recognized in computer vision. Bayesian modeling is a powerful, practical, and general framework for meeting these requirements. This article develops a Bayesian model for describing and manipulating the dense fields, such as depth maps, associated with low-level computer vision. Our model consists of three components: a prior model, a sensor model, and a posterior model. The prior model captures a priori information about the structure of the field. We construct this model using the smoothness constraints from regularization to define a Markov Random Field. The sensor model describes the behavior and noise characteristics of our measurement system. We develop a number of sensor models for both sparse and dense measurements. The posterior model combines the information from the prior and sensor models using Bayes' rule. We show how to compute optimal estimates from the posterior model and also how to compute the uncertainty (variance) in these estimates. To demonstrate the utility of our Bayesian framework, we present three examples of its application to real vision problems. The first application is the on-line extraction of depth from motion. Using a two-dimensional generalization of the Kalman filter, we develop an incremental algorithm that provides a dense on-line estimate of depth whose accuracy improves over time. In the second application, we use a Bayesian model to determine observer motion from sparse depth (range) measurements. In the third application, we use the Bayesian interpretation of regularization to choose the optimal smoothing parameter for interpolation. The uncertainty modeling techniques that we develop, and the utility of these techniques in various applications, support our claim that Bayesian modeling is a powerful and practical framework for low-level vision.
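As a hedged, one-dimensional illustration of the prior-sensor-posterior decomposition described above (a sketch, not the paper's actual algorithm): a first-order smoothness prior combined with a Gaussian sensor model yields a quadratic negative log-posterior, so the MAP estimate and the posterior variances follow from a single linear solve. The names `map_estimate`, `lam`, and `sigma2` are illustrative choices.

```python
import numpy as np

def map_estimate(d, sigma2, lam):
    """MAP estimate of a 1-D field from noisy samples d, with posterior variances.

    Prior: first-order MRF smoothness (regularization) weighted by lam.
    Sensor: i.i.d. Gaussian noise with variance sigma2.
    """
    n = len(d)
    # Hessian of the negative log-posterior: data term + smoothness term
    A = np.eye(n) / sigma2
    D = np.diff(np.eye(n), axis=0)        # finite-difference operator
    A += lam * D.T @ D
    u = np.linalg.solve(A, d / sigma2)    # MAP estimate (posterior mean)
    var = np.diag(np.linalg.inv(A))       # posterior variance of each estimate
    return u, var

d = np.array([1.0, 1.2, 5.0, 1.1, 0.9])  # noisy depth samples with an outlier
u, var = map_estimate(d, sigma2=0.25, lam=4.0)
```

With `lam = 0` the prior drops out and the estimate reproduces the raw measurements; with `lam > 0` the outlying sample is pulled toward its neighbors and the posterior variances quantify the remaining uncertainty.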

2.
Repertory grid technique plays a central role in the elicitation methodology of many well-reported knowledge acquisition tools and workbenches. However, the dependability of these systems is low where the technique breaks down or proves inadequate owing to its limited expressive power and other problems. The paper introduces an alternative approach based on Personal Construct Theory that elicits an expert's knowledge as a network of terms that constitutes a propositional formalism. An extended example is used both to highlight the difficulties encountered using repertory grids and to illustrate how these are overcome using the proposed approach. The results of an empirical study are presented in which an experienced clinician compared the knowledge structures that she constructed for a diagnostic task using each elicitation technique. Furthermore, although the network representation is amenable to inductive learning methods for generating production rules, an inference method is demonstrated which reveals the formalism's categorical reasoning potential. The authors conclude that it is more appropriate to classify the elicitation methods themselves, rather than the knowledge structures they employ, as either mediating or immediate. The paper contributes to a better understanding of constructivist formalisms developed for knowledge acquisition.

3.
A re-scan of the well-known Mach band illusion has led to the proposal of a Bi-Laplacian of Gaussian operation in early vision. Based on this postulate, the human visual system at low-level has been modeled from two approaches that give rise to two new tools. On one hand, it leads to the construction of a new image sharpening kernel, and on the other, to the explanation of more complex brightness-contrast illusions and the possible development of a new algorithm for robust visual capturing and display systems.

4.
A system has been devised for development and testing in robot vision applications. The hardware and software of the system are described. Keyboard manufacturing is taken as a typical example of robot vision applications. For this example, an account is given of the four main stages in a development and implementation programme: defining the requirements of the application; matching these to the vision system; developing the application software; integrating and installing the system. Further applications of robot vision are suggested.

5.
Deep network design is a fundamental challenge. Striking the right trade-off between the depth and complexity of convolutional neural networks is of significant importance for low-level vision tasks. Wider feature maps can be beneficial to performance and generality but increase computational complexity. In this paper, we rethink the balance between the width of the feature maps and the depth of the network, especially for image restoration tasks including deblurring, dehazing, super-resolution, and denoising. We explore a new approach to network structure by encouraging more depth to deal with restoration requirements while decreasing the width of some feature maps. Such a slimmer and deeper approach can enhance performance while maintaining the same level of computational cost. We have experimentally evaluated the performance of the proposed approach on four image restoration tasks and obtained state-of-the-art results on quantitative measures and qualitative assessments, demonstrating the effectiveness of the approach.

6.
A new derivation of continuous-time Kalman Filter equations is presented. The underlying idea has been previously used to derive the smoothing equations. A unified approach to filtering and smoothing problems has thus been achieved.
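For orientation only, here is a minimal discrete-time scalar analogue of the filter whose continuous-time equations the note derives (this sketch and its names are assumptions, not the note's derivation): a random-walk state with process noise `q` is corrected by measurements with noise variance `r`.

```python
def kalman_step(x, P, z, q, r):
    """One predict/update cycle of a scalar Kalman filter (random-walk model)."""
    # Predict: the state estimate carries over, uncertainty grows by q
    x_pred, P_pred = x, P + q
    # Update: blend prediction and measurement z by the Kalman gain
    K = P_pred / (P_pred + r)             # gain in [0, 1]
    x_new = x_pred + K * (z - x_pred)     # innovation correction
    P_new = (1 - K) * P_pred              # posterior variance shrinks
    return x_new, P_new

x, P = 0.0, 1.0                           # diffuse initial estimate
for z in [1.0, 1.1, 0.9, 1.0]:            # noisy measurements of a constant
    x, P = kalman_step(x, P, z, q=0.01, r=0.1)
```

After a few measurements the estimate settles near the underlying value and the variance `P` has contracted well below its initial value.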

7.
Low-level vision reconstruction aims to recover high-quality images and videos under constrained imaging conditions, which is of great significance for subsequent visual processing and display. Because raw sensor data has a high bit depth and responds linearly to the amount of incident light, raw-domain vision reconstruction has attracted growing attention in both academia and industry in recent years. This paper focuses on six representative reconstruction tasks, namely low-light enhancement and denoising, super-resolution, high-dynamic-range reconstruction, demoiréing, joint multi-task reconstruction, and data generation, and surveys progress in deep-learning-driven raw-domain vision reconstruction. It systematically summarizes representative methods in the field, outlines the strengths and limitations of each class of methods, and analyzes, task by task, the distinctive properties and advantages of raw data over color-domain data (data that has undergone denoising, demosaicing, white balance, tone mapping, and color-space conversion to, e.g., RGB or sRGB). It also reviews the open-source datasets in each area, including image, burst, and video datasets, and summarizes how these datasets were constructed and how paired data were spatially/temporally aligned, providing a reference for future dataset creation. Finally, it discusses the open problems of existing methods and outlines trends in raw-domain low-level vision reconstruction.

8.
The nebulization quality of oil flames, an important characteristic exhibited by combustion processes of petroleum refinery furnaces, is mostly affected by variations in the vapor flow rate (VFR). Marked visual changes in the flame patterns and a decay of combustion efficiency are observed when the process is tuned by diminishing the VFR. Such behavior is supported by experimental evidence showing that excessively low VFR values are strongly correlated with an increase in the rate of solid particulate material. Given the economic importance of keeping this parameter under control, a laboratory vertical furnace was devised with the purpose of carrying out experiments to prototype a computer vision system capable of estimating VFR values through the examination of test characteristic vectors based on geometric properties of the grey-level histogram of instantaneous flame images. First, a training set composed of feature vectors from all the images collected during experiments with a priori known VFR values is properly organized, and an algorithm is applied to this data in order to generate a fuzzy measurement vector whose components represent membership degrees to the 'high nebulization quality' fuzzy set. Fuzzy classification vectors from images with a priori unknown VFR values are then assumed to be state vectors in a random-walk model, and a non-linear Tikhonov-regularized Kalman filter is applied to estimate the state and the corresponding nebulization quality. The successful validation of the output data, even based on small training data sets, indicates that the proposed approach could be applied to synthesize a real-time algorithm for evaluating the nebulization quality of combustion processes in petroleum refinery furnaces that use oil flames as the heating source.

9.
This paper presents a new non-linear filter designed to track targets following a road network, taking advantage of road map information. The algorithm is based on Bayesian Multiple Hypotheses modelling of the movement process, postulating and evaluating different hypotheses on the segments being followed by the target after road junctions. The along-road tracking is then carried out, for each hypothesis, by a longitudinal IMM filter capable of tracking target movements along straight roads, circular segments, and generic curvilinear segments defined through Bézier curves. The algorithm also includes a lateral drift estimator, which tracks the lateral motion of the target with respect to the road axis, in order to estimate the target's piloting error and especially to track targets on wide roads. The paper fully describes the filter and the associated measurement preprocessing procedures, and also includes a comparative evaluation of the proposed filter against other filtering methods in the literature.
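The curvilinear road segments mentioned above are defined through Bézier curves. As a hedged, self-contained aside (an illustration of that representation, not the paper's filter), a point on such a segment can be evaluated with de Casteljau's algorithm:

```python
def de_casteljau(points, t):
    """Evaluate a Bézier curve with the given control points at parameter t in [0, 1]."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Repeatedly interpolate between consecutive control points
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical cubic segment: four control points of a curved road piece
ctrl = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
midpoint = de_casteljau(ctrl, 0.5)
```

The algorithm is numerically stable and works for any degree; a road-following filter can use it to map a one-dimensional along-road coordinate to a 2-D position.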

10.
The mainstream network security protection architecture today is externally attached: the security system is separate from the business system and security products are isolated from one another, making it difficult to respond efficiently to increasingly complex network security challenges. Strengthening network security from the inside out is therefore imperative. This paper groups the business scenarios of network security into four perspectives, namely organizations, vendors, regulators, and threats, each with its own business goals. Starting from what these four perspectives share and where they differ, it systematically summarizes the capability requirements of the network security ecosystem and proposes an "intrinsic security" methodology. Intrinsic security capability refers to the native capacity of ICT components to support security functions such as monitoring, protection, and traceability. It plays a foundational, supporting role for network security rather than being a final security function in itself, which distinguishes it from the problems addressed by existing methodologies such as "endogenous security" and "built-in security". Intrinsic security emphasizes the inherent security-enabling endowment of network components. This endowment can be tapped in two ways, by activating innate security capabilities or by internalizing externally attached capabilities, so that the component logically exhibits self-immunity. One advantage of such components is the cohesion of business and security, enabling transparent awareness of the security posture, customized configuration of security policies, and close-fitting enforcement of protection; another is that packaging business and security functions together simplifies the overall engineering architecture and reduces the complexity of network management. The paper further proposes an intrinsic security capability framework that collects and enumerates security capabilities consistent with the intrinsic security concept, dividing the supporting capabilities into five categories (collection, cognition, execution, coordination, and resilience) and introducing the subtypes of each category and the underlying ICT technologies. Based on this framework, it presents enhanced realizations of typical security business scenarios under the intrinsic security concept.

11.
In the paper we present parallel solutions for performing image contour ranking on coarse-grained machines. In contour ranking, a linear representation of the edge contours is generated from the edge contours of a raw image. We describe solutions that employ different divide-and-conquer approaches and that use different communication patterns. The combining step of the divide-and-conquer solutions uses efficient sequential techniques for merging information about subimages. The proposed solutions are implemented on Intel Delta and Intel Paragon machines. We discuss performance results and present scalability analysis using different image and machine sizes. © 1997 by John Wiley & Sons, Ltd.

12.
We present a surveillance system, comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras, which automatically captures high-resolution videos of pedestrians as they move through a designated area. A wide-FOV static camera can track multiple pedestrians, while any PTZ active camera can capture high-quality videos of one pedestrian at a time. We formulate the multi-camera control strategy as an online scheduling problem and propose a solution that combines the information gathered by the wide-FOV cameras with weighted round-robin scheduling to guide the available PTZ cameras, such that each pedestrian is observed by at least one PTZ camera while in the designated area. A centerpiece of our work is the development and testing of experimental surveillance systems within a visually and behaviorally realistic virtual environment simulator. The simulator is valuable as our research would be more or less infeasible in the real world given the impediments to deploying and experimenting with appropriately complex camera sensor networks in large public spaces. In particular, we demonstrate our surveillance system in a virtual train station environment populated by autonomous, lifelike virtual pedestrians, wherein easily reconfigurable virtual cameras generate synthetic video feeds. The video streams emulate those generated by real surveillance cameras monitoring richly populated public spaces. A preliminary version of this paper appeared as [1].
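A minimal sketch in the spirit of the weighted round-robin scheduling mentioned above, with hypothetical integer weights (e.g., encoding urgency); the paper's actual controller additionally uses the tracking information from the wide-FOV cameras:

```python
from collections import deque

def schedule(pedestrians, weights, n_cameras):
    """Deal PTZ camera time slots to pedestrians in weighted round-robin order.

    Each pedestrian id is repeated according to its weight, then the
    resulting queue is dealt out to the cameras in turn.
    """
    queue = deque(p for p in pedestrians for _ in range(weights[p]))
    slots = {c: [] for c in range(n_cameras)}
    for i, p in enumerate(queue):
        slots[i % n_cameras].append(p)   # camera i % n_cameras serves next
    return slots

# Hypothetical scenario: pedestrian 'a' is twice as urgent as 'b' and 'c'
slots = schedule(['a', 'b', 'c'], {'a': 2, 'b': 1, 'c': 1}, n_cameras=2)
```

With weights of at least one, every pedestrian receives at least one slot, matching the requirement that each pedestrian be observed by at least one PTZ camera.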

13.
14.
Texture mapping has been widely used to improve the quality of 3D rendered images. To reduce the storage and bandwidth impact of texture mapping, compression systems are commonly used. To further increase the quality of the rendered images, texture filtering is also often adopted. These two techniques are generally considered to be independent. First, a decompression step is executed to gather texture samples, which is then followed by a separate filtering step. We have investigated a system based on linear transforms that merges both phases together. This allows more efficient decompression and filtering at higher compression ratios. This paper formally presents our approach for any linear transformation, how the commonly used discrete cosine transform can be adapted to this new approach, and how this method can be implemented in real time on current-generation graphics cards using shaders. Through reuse of the existing hardware filtering, fast magnification and minification filtering is achieved. Our implementation provides fully anisotropically filtered samples four to six times faster than an implementation using two separate phases for decompression and filtering. Additionally, our transform-based compression also provides increased and variable compression ratios over standard hardware compression systems at a comparable or better quality level.
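The merging of decompression and filtering described above rests on linearity. The following hedged sketch (illustrative only, not the paper's shader implementation) checks the key identity for a 1-D orthonormal DCT: blending transform coefficients before decoding equals decoding each block and blending afterwards.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix (rows are basis vectors)."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    M = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    M[0] /= np.sqrt(2.0)                 # DC row scaled for orthonormality
    return M

n = 8
M = dct_matrix(n)
rng = np.random.default_rng(0)
t1, t2 = rng.random(n), rng.random(n)    # two texel blocks
c1, c2 = M @ t1, M @ t2                  # their transform coefficients
w = 0.3                                  # bilinear blend weight

# Filter in the coefficient domain, then decode once...
blended_then_decoded = M.T @ (w * c1 + (1 - w) * c2)
# ...versus decode each block, then filter (two decodes)
decoded_then_blended = w * (M.T @ c1) + (1 - w) * (M.T @ c2)
```

Because the inverse transform commutes with any linear filter, the two orders agree, which is what lets decompression and filtering share one pass.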

15.
Observations and decisions in computer vision are inherently uncertain. The rigorous treatment of uncertainty has therefore received a lot of attention, since it not only improves the results compared to ad hoc methods but also makes the results more explainable. In this paper, the usefulness of stochastic approaches is demonstrated by example on selected problems. These are given in the context of optimal estimation, self-diagnostics, and performance evaluation and cover all steps of the reasoning chain. The removal or interpretation of unexplainable thresholds and tuning parameters is discussed for typical tasks in feature extraction, object reconstruction, and object classification. Jochen Meidow studied surveying and mapping at the University of Bonn, Germany, and graduated with a diploma in 1996. As a research associate at the Institute for Theoretical Geodesy, University of Bonn, he received his PhD degree (Dr.-Ing.) in 2001 for a thesis on aerial image analysis. Between 2001 and 2004 he was a postdoctoral fellow at the Institute for Photogrammetry, University of Bonn, and since 2004 he has been with the Research Institute for Optronics and Pattern Recognition (FGAN-FOM) in Ettlingen, Germany. He is a member of the DAGM (German Pattern Recognition Society). His research interests are adjustment theory, statistics, and spatial reasoning.

16.
This note is concerned with the H-infinity deconvolution filtering problem for linear time-varying discrete-time systems described by state-space models. The H-infinity deconvolution filter is derived by proposing a new approach in Krein space. With the new approach, it is clearly shown that the central deconvolution filter in an H-infinity setting is the same as the one in an H2 setting associated with one constructed stochastic state-space model. This insight allows us to calculate the complicated H-infinity deconvolution filter in an intuitive and simple way. The deconvolution filter is calculated by solving a Riccati equation of the same order as that of the original system.

17.
Decision problems at the strategic level tend to have multiple criteria and outcomes that are uncertain. Many of the current decision‐making tools are too simplistic to incorporate the important features. This paper considers a multicriteria decision‐making scenario in which the outcomes of the decisions, evaluated on different criteria, are uncertain. The main contribution of this paper is the presentation of a tool that enables decision makers to visualize the expected payoff and likelihood that the payoff of a decision does not fall short of a preset target value. Furthermore, it presents decision makers with a tool that shows the tradeoff between expected payoff and downside risk. A variety of solution techniques are suggested that build upon this visualization.
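As a hedged sketch of the two quantities the visualization above is built on (the discrete outcome model and all names here are assumptions), expected payoff and downside risk, i.e., the probability that the payoff falls short of a preset target, can be computed as:

```python
def evaluate(outcomes, target):
    """Expected payoff and downside risk for one decision.

    outcomes: list of (payoff, probability) pairs; probabilities sum to 1.
    target:   preset target value the payoff should not fall short of.
    """
    expected = sum(p * pr for p, pr in outcomes)
    # Downside risk: total probability of falling short of the target
    shortfall = sum(pr for p, pr in outcomes if p < target)
    return expected, shortfall

# Two hypothetical decisions illustrating the tradeoff
risky = [(0.0, 0.4), (200.0, 0.6)]   # higher expected payoff, real downside
safe  = [(90.0, 0.5), (110.0, 0.5)]  # lower expected payoff, no shortfall at 50
```

Against a target of 50, the risky decision has the higher expected payoff but a 40% chance of falling short, while the safe decision never does; plotting these two numbers per decision gives exactly the payoff-versus-downside-risk view described in the abstract.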

18.
In recent years, robust design optimization (RDO) has emerged as a significant area of research. The focus of RDO is to obtain a design that minimizes the effects of uncertainty on product reliability and performance. The effectiveness of the resulting solution in RDO highly depends on how the objective function and the constraints are formulated to account for uncertainties. Inequality constraint and objective function formulations under uncertainty have been studied extensively in the literature. However, the approaches for formulating equality constraints in the RDO literature are in a state of disharmony. Moreover, we observe that these approaches are generally applicable only to certain special cases of equality constraints. There is a need for a systematic approach for handling equality constraints in RDO, which is the motivation for this research. In this paper, we examine critical issues pertinent to formulating equality constraints in RDO. Equality constraints in RDO can be classified as belonging to two classes: (1) those that cannot be satisfied, because of the uncertainty inherently present in the RDO problem, and (2) those that must be satisfied, regardless of the uncertainty present in the problem. In this paper, we propose a linearization-based approach to classify equality constraints into the above two classes, and propose respective formulation methods. The theoretical developments presented in this paper are illustrated with the help of two numerical examples.

19.
To obtain a user-desired and accurate clustering result in practical applications, one way is to utilize additional pairwise constraints that indicate the relationship between two samples, that is, whether these samples belong to the same cluster or not. In this paper, we put forward a discriminative learning approach which can incorporate pairwise constraints into the recently proposed two-class maximum margin clustering framework. In particular, a set of pairwise loss functions is proposed, which features robust detection and penalization for violating the pairwise constraints. Consequently, the proposed method is able to directly find the partitioning hyperplane, which can separate the data into two groups and satisfy the given pairwise constraints as much as possible. In this way, it makes fewer assumptions on the distance metric or similarity matrix for the data, which may be complicated in practice, than existing popular constrained clustering algorithms. Finally, an iterative updating algorithm is proposed for the resulting optimization problem. The experiments on a number of real-world data sets demonstrate that the proposed pairwise constrained two-class clustering algorithm outperforms several representative pairwise constrained clustering counterparts in the literature.
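A hedged sketch in the spirit of the pairwise loss functions mentioned above (the exact losses used in the paper may differ): for a two-class partitioning function f whose sign assigns the cluster, must-link and cannot-link constraints can be penalized with hinge losses on the product of the two decision values.

```python
def must_link_loss(fi, fj):
    """Hinge penalty for a must-link pair; zero when fi and fj
    agree in sign with product margin at least 1."""
    return max(0.0, 1.0 - fi * fj)

def cannot_link_loss(fi, fj):
    """Hinge penalty for a cannot-link pair; zero when fi and fj
    disagree in sign with product margin at least 1."""
    return max(0.0, 1.0 + fi * fj)
```

Both losses are convex in the product `fi * fj`, grow linearly with the degree of violation, and vanish once the constraint is satisfied with margin, which is the "robust detection and penalization" behavior the abstract describes.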

20.

Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号