20 similar documents were retrieved.
1.
It is generally assumed when using Bayesian inference methods for neural networks that the input data contain no noise or corruption. For real-world (errors-in-variables) problems this is clearly an unsafe assumption. This paper presents a Bayesian neural-network framework which allows for input noise, provided that some model of the noise process exists. In the limit where the noise process is small and symmetric it is shown, using the Laplace approximation, that this method adds a term to the usual Bayesian error bar which depends on the variance of the input noise process. Further, by treating the true (noiseless) input as a hidden variable and sampling it jointly with the network weights using a Markov chain Monte Carlo method, it is demonstrated that the regression over the noiseless input can be inferred.
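A minimal sketch of the kind of correction the abstract describes, assuming first-order propagation of a small, symmetric input-noise covariance through a trained regression function; the function `predictive_variance`, the toy regressor `f`, and all numerical values are illustrative and not the paper's method.

```python
import numpy as np

def predictive_variance(f, x, sigma2_bayes, Sigma_x, eps=1e-5):
    """Augment a Bayesian error bar with an input-noise term: for small symmetric
    input noise with covariance Sigma_x, first-order propagation adds g^T Sigma_x g,
    where g is the gradient of the regression function at the input."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):                      # numerical gradient of f w.r.t. the input
        dx = np.zeros_like(x); dx[i] = eps
        g[i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return sigma2_bayes + g @ Sigma_x @ g        # usual error bar + input-noise contribution

# Toy usage with a hypothetical regression function
f = lambda x: np.sin(x[0]) + 0.5 * x[1]
print(predictive_variance(f, np.array([0.3, 1.0]), sigma2_bayes=0.02,
                          Sigma_x=np.diag([0.01, 0.04])))
```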
2.
Describes how, in the process of extracting optical flow through space-time filtering, one has to consider the constraints associated with the motion uncertainty as well as the spatial and temporal sampling rates of the image sequence. The motion uncertainty satisfies the Cramer-Rao (CR) inequality, which is shown to be a function of the filter parameters. On the other hand, the spatial and temporal sampling rates have lower bounds, which depend on the motion uncertainty, the maximum support in the frequency domain, and the optical flow. These lower bounds on the sampling rates and on the motion uncertainty are constraints that constitute an intrinsic part of the computational structure of space-time filtering. The author shows that when these constraints are imposed simultaneously, the filter parameters cannot be chosen arbitrarily but instead have to satisfy consistency constraints. By using explicit representations of uncertainties in extracting visual attributes, one can constrain the range of values assumed by the filter parameters.
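For reference, the generic Cramér-Rao inequality behind the motion-uncertainty bound is sketched below; the paper's filter-specific expression is not reproduced in the abstract, so the observation model p(I | v) is left abstract here.

```latex
\[
  \operatorname{Var}(\hat{v}) \;\ge\; I(v)^{-1},
  \qquad
  I(v) \;=\; \mathbb{E}\!\left[\left(\frac{\partial \log p(\mathbf{I}\mid v)}{\partial v}\right)^{\!2}\right]
\]
% The motion uncertainty is bounded below by the inverse Fisher information, which
% depends on the space--time filter parameters through the observation model p(I | v).
```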
3.
A Bayesian computer vision system for modeling human interactions (total citations: 28; self-citations: 0; citations by others: 28)
Oliver N.M. Rosario B. Pentland A.P. 《IEEE transactions on pattern analysis and machine intelligence》2000,22(8):831-843
We describe a real-time computer vision and machine learning system for modeling and recognizing human behaviors in a visual surveillance task. The system deals in particular with detecting when interactions between people occur and classifying the type of interaction. Examples of interesting interaction behaviors include following another person, altering one's path to meet another, and so forth. Our system combines top-down with bottom-up information in a closed feedback loop, with both components employing a statistical Bayesian approach. We propose and compare two different state-based learning architectures, namely HMMs and CHMMs, for modeling behaviors and interactions. Finally, a synthetic “Alife-style” training system is used to develop flexible prior models for recognizing human interactions. We demonstrate the ability to use these a priori models to accurately classify real human behaviors and interactions with no additional tuning or training.
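The trained HMM/CHMM parameters are not given in the abstract; the sketch below only illustrates the generic mechanism of classifying an observation sequence by the interaction model with the highest forward-algorithm likelihood, using hypothetical two-state models with made-up probabilities.

```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM: returns log p(obs | model)."""
    alpha = pi * B[:, obs[0]]                      # initial step
    log_p = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:                              # recursion with per-step rescaling
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum()); alpha /= alpha.sum()
    return log_p

# Hypothetical 2-state models for two interaction classes ("follow" vs. "meet")
models = {
    "follow": (np.array([0.6, 0.4]),
               np.array([[0.7, 0.3], [0.2, 0.8]]),
               np.array([[0.9, 0.1], [0.3, 0.7]])),
    "meet":   (np.array([0.5, 0.5]),
               np.array([[0.5, 0.5], [0.5, 0.5]]),
               np.array([[0.2, 0.8], [0.6, 0.4]])),
}
obs = [0, 0, 1, 1, 1]                              # discretised feature sequence
print(max(models, key=lambda k: log_likelihood(obs, *models[k])))
```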
4.
Fast algorithms for low-level vision (total citations: 9; self-citations: 0; citations by others: 9)
A recursive filtering structure is proposed that drastically reduces the computational effort required for smoothing an image, computing its first and second directional derivatives, and computing its Laplacian. These operations are carried out with a fixed number of multiplications and additions per output point, independently of the size of the neighborhood considered. The key to the approach is, first, the use of an exponentially based filter family and, second, the use of recursive filtering. Applications to edge detection problems and multiresolution techniques are considered, and an edge detector allowing the extraction of zero-crossings of an image with only 14 operations per output element at any resolution is proposed. Various experimental results are shown.
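A minimal sketch of the fixed-cost idea, assuming a first-order forward-backward exponential smoother; the paper's exact filter family and coefficients are not reproduced, and `exp_smooth` is an illustrative name.

```python
import numpy as np

def exp_smooth(signal, alpha):
    """Forward-backward first-order recursive smoothing: a constant number of
    operations per sample, independent of the effective (exponential) kernel
    width controlled by alpha."""
    x = np.asarray(signal, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):            # causal pass
        y[n] = alpha * x[n] + (1 - alpha) * y[n - 1]
    z = np.empty_like(x)
    z[-1] = y[-1]
    for n in range(len(x) - 2, -1, -1):   # anti-causal pass (symmetrises the response)
        z[n] = alpha * y[n] + (1 - alpha) * z[n + 1]
    return z

print(exp_smooth([0, 0, 1, 5, 1, 0, 0], alpha=0.4))
```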
5.
Gabor wavelets are well established as being useful for modeling neuronal response properties of the primary visual cortex. However, current Gabor models do not account for long-range contextual modulation. This paper introduces a new model which extends a state-of-the-art model of contextual modulation by incorporating long-range convolution at the scale of the visual field. The significance of this new mechanism is that it accounts for perceptual filling-in of occluded receptive fields with purely low-level vision processing.
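The long-range convolution mechanism itself is not specified in the abstract; the snippet below only constructs the standard 2-D Gabor kernel that such models build on, with illustrative parameter values.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, wavelength, phase=0.0):
    """Standard 2-D Gabor filter: a sinusoidal carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates to orientation theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

kernel = gabor_kernel(size=21, sigma=4.0, theta=np.pi / 4, wavelength=8.0)
print(kernel.shape, kernel.sum())
```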
6.
Mertoguno S. Bourbakis N.G. 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2003,33(5):782-788
This correspondence presents the basic design and the simulation of a low-level multilayer vision processor that emulates, to some degree, the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, each of which is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low-level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition, and region-graph generation. At each layer, the array processor is a 2D array of k × m identical, autonomous hexagonal cells that simultaneously execute certain low-level vision tasks. The hardware design and transistor-level simulation of the processing elements (PEs) of the retina-like processor, together with its simulated functionality and illustrative examples, are provided in this paper.
7.
Gauss-Markov measure field models for low-level vision (total citations: 1; self-citations: 0; citations by others: 1)
Marroquin J.L. Velasco F.A. Rivera M. Nakamura M. 《IEEE transactions on pattern analysis and machine intelligence》2001,23(4):337-348
We present a class of models, derived from classical discrete Markov random fields, that may be used for the solution of ill-posed problems in image processing and in computational vision. They lead to reconstruction algorithms that are flexible, computationally efficient, and biologically plausible. To illustrate their use, we present their application to the reconstruction of the dominant orientation and direction fields, to the classification of multiband images, and to image quantization and filtering.
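A rough sketch, under loose assumptions, of the kind of quadratically smoothed measure field the abstract alludes to: per-class likelihoods are averaged with their neighbours for a few Gauss-Seidel-style iterations and then classified pixel-wise. The update rule, the parameter `lam`, and the toy data are illustrative and not claimed to match the authors' formulation.

```python
import numpy as np

def smooth_measure_field(likelihood, lam=4.0, iters=20):
    """likelihood: (K, H, W) per-class likelihoods. Each iteration replaces a field
    value by a weighted average of its data term and its 4 neighbours (quadratic
    smoothing), then the label is taken as the per-pixel argmax."""
    p = likelihood / likelihood.sum(axis=0, keepdims=True)   # normalised data term
    q = p.copy()
    for _ in range(iters):
        nb = (np.roll(q, 1, axis=1) + np.roll(q, -1, axis=1) +
              np.roll(q, 1, axis=2) + np.roll(q, -1, axis=2))
        q = (p + lam * nb / 4.0) / (1.0 + lam)
    return q.argmax(axis=0)

# Toy two-class example on a noisy 16x16 image
rng = np.random.default_rng(0)
img = (np.arange(16)[None, :] > 7).astype(float) + 0.5 * rng.standard_normal((16, 16))
lik = np.stack([np.exp(-(img - 0) ** 2), np.exp(-(img - 1) ** 2)])
print(smooth_measure_field(lik))
```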
8.
This paper describes two distinct approaches to image segmentation, both of which take the form of so-called region-growing algorithms. Algorithms of this kind begin with a seed location and attempt to join neighboring pixels to the growing region until no more neighbors can be joined to it.
In the first approach, each pixel is linked to its nearest neighbor in terms of gray level. The second approach is based on a binary relation, named the relative similarity relation, which reflects relative properties in an image. A segmentation scheme combining the two approaches is also presented.
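A minimal sketch of generic region growing as described above (a seed plus 4-neighbours within a gray-level tolerance); neither the nearest-neighbor linking rule nor the relative similarity relation of the paper is reproduced, and the tolerance value is illustrative.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, joining 4-neighbours whose gray level is within
    `tol` of the seed value, until no neighbour can be added."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] \
               and abs(float(image[rr, cc]) - seed_val) <= tol:
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask

img = np.array([[10, 11, 50], [12, 13, 52], [11, 12, 51]], dtype=float)
print(region_grow(img, seed=(0, 0), tol=5))
```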
9.
Ali, Gohar; Al-Obeidat, Feras; Tubaishat, Abdallah; Zia, Tehseen; Ilyas, Muhammad; Rocha, Alvaro 《Neural computing & applications》2023,35(11):8027-8034
Artificial intelligence systems are becoming ubiquitous in everyday life as well as in high-risk environments, such as autonomous driving, medical treatment, and...
10.
Roberto Calandra, André Seyfarth, Jan Peters, Marc Peter Deisenroth 《Annals of Mathematics and Artificial Intelligence》2016,76(1-2):5-23
Designing gaits and corresponding control policies is a key challenge in robot locomotion. Even with a viable controller parametrization, finding near-optimal parameters can be daunting. Typically, this kind of parameter optimization requires specific expert knowledge and extensive robot experiments. Automatic black-box gait optimization methods greatly reduce the need for human expertise and time-consuming design processes. Many different approaches for automatic gait optimization have been suggested to date. However, no extensive comparison among them has yet been performed. In this article, we thoroughly discuss multiple automatic optimization methods in the context of gait optimization. We extensively evaluate Bayesian optimization, a model-based approach to black-box optimization under uncertainty, on both simulated problems and real robots. This evaluation demonstrates that Bayesian optimization is particularly suited for robotic applications, where it is crucial to find a good set of gait parameters in a small number of experiments.
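A compact sketch of Bayesian optimization in the sense discussed above, assuming a Gaussian-process surrogate with an RBF kernel and an expected-improvement acquisition over a single gait parameter; the objective function stands in for a costly robot experiment, and all kernel and budget settings are illustrative.

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and standard deviation at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

def expected_improvement(mu, sd, best):
    """EI for minimisation: expected amount by which a point beats the current best."""
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

objective = lambda x: (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)   # stand-in robot experiment
grid = np.linspace(0, 1, 200)
X = np.array([0.1, 0.9]); y = objective(X)                     # two initial trials
for _ in range(10):                                            # small experiment budget
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X = np.append(X, x_next); y = np.append(y, objective(x_next))
print("best gait parameter:", X[np.argmin(y)], "objective:", y.min())
```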
11.
J. Meidow 《Pattern Recognition and Image Analysis》2008,18(2):216-221
Observations and decisions in computer vision are inherently uncertain. The rigorous treatment of uncertainty has therefore received a lot of attention, since it not only improves the results compared to ad hoc methods but also makes the results more explainable. In this paper, the usefulness of stochastic approaches will be demonstrated by example with selected problems. These are given in the context of optimal estimation, self-diagnostics, and performance evaluation and cover all steps of the reasoning chain. The removal or interpretation of unexplainable thresholds and tuning parameters will be discussed for typical tasks in feature extraction, object reconstruction, and object classification.
Jochen Meidow studied surveying and mapping at the University of Bonn, Germany, and graduated with a diploma in 1996. As a research associate at the Institute for Theoretical Geodesy, University of Bonn, he received his PhD degree (Dr.-Ing.) in 2001 for a thesis on aerial image analysis. Between 2001 and 2004 he was a postdoctoral fellow at the Institute for Photogrammetry, University of Bonn, and since 2004 he has been with the Research Institute for Optronics and Pattern Recognition (FGAN-FOM) in Ettlingen, Germany. He is a member of the DAGM (German Pattern Recognition Society). His research interests are adjustment theory, statistics, and spatial reasoning.
12.
Tatiana Miazhynskaia, Sylvia Frühwirth-Schnatter 《Computational statistics & data analysis》2006,51(3):2029-2042
Neural networks provide a tool for describing non-linearity in volatility processes of financial data and help to answer the question “how much” non-linearity is present in the data. Non-linearity is studied under three different specifications of the conditional distribution: Gaussian, Student-t and mixture of Gaussians. To rank the volatility models, a Bayesian framework is adopted to perform a Bayesian model selection within the different classes of models. In the empirical analysis, the return series of the Dow Jones Industrial Average index, FTSE 100 and NIKKEI 225 indices over a period of 16 years are studied. The results show different behavior across the three markets. In general, if a statistical model accounts for non-normality and explains most of the fat tails in the conditional distribution, then there is less need for complex non-linear specifications.
13.
A re-scan of the well-known Mach band illusion has led to the proposal of a Bi-Laplacian of Gaussian operation in early vision. Based on this postulate, the low-level human visual system has been modeled from two approaches that give rise to two new tools. On one hand, this leads to the construction of a new image-sharpening kernel, and on the other, to the explanation of more complex brightness-contrast illusions and the possible development of a new algorithm for robust visual capturing and display systems.
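A rough sketch of a Bi-Laplacian-of-Gaussian style enhancement, assuming it can be approximated by applying a discrete Laplacian to a Laplacian-of-Gaussian response and adding the result back to the image; the sign convention, `sigma`, and `strength` are illustrative choices rather than the paper's calibrated kernel.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, laplace

def bilog_sharpen(image, sigma=1.5, strength=1.0):
    """Approximate Bi-Laplacian-of-Gaussian enhancement: a discrete Laplacian applied
    to the Laplacian-of-Gaussian response, added back to the image."""
    img = np.asarray(image, dtype=float)
    bilog = laplace(gaussian_laplace(img, sigma=sigma))
    return img + strength * bilog

step = np.tile(np.concatenate([np.zeros(8), np.ones(8)]), (16, 1))  # Mach-band-like edge
print(bilog_sharpen(step)[0, 6:10].round(3))
```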
14.
Uncertainty-management techniques that ignore the distinctiveness of individuals will either fail or incur a high cost in system resources. Location-tracking applications must consider the individual user's characteristics, habits, and preferences to estimate his or her location more effectively. We discuss human-controlled moving objects, called roving users (RUs). A typical RU application tracks each RU's location to answer queries about the person's whereabouts at any particular time. We also discuss belief networks that model the user's habits.
15.
Bayesian approaches to Gaussian mixture modeling (total citations: 6; self-citations: 0; citations by others: 6)
Roberts S.J. Husmeier D. Rezek I. Penny W. 《IEEE transactions on pattern analysis and machine intelligence》1998,20(11):1133-1142
A Bayesian methodology is presented which automatically penalizes overly complex models fitted to unknown data. We show that, with a Gaussian mixture model, the approach is able to select an “optimal” number of components in the model and so partition data sets. The performance of the Bayesian method is compared to other methods of optimal model selection and found to give good results. The methods are tested on synthetic and real data sets.
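The paper's specific penalization scheme is not reproduced here; the snippet below merely illustrates the same effect with scikit-learn's variational BayesianGaussianMixture, which drives the mixture weights of superfluous components toward zero when the model is deliberately over-specified.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Data actually drawn from 2 components, fitted with an over-complex 8-component model
data = np.concatenate([rng.normal(-3, 0.5, 300), rng.normal(2, 1.0, 300)]).reshape(-1, 1)

bgm = BayesianGaussianMixture(n_components=8, weight_concentration_prior=0.01,
                              max_iter=500, random_state=0).fit(data)
# Components with non-negligible weight indicate the "effective" model order
print(np.round(bgm.weights_, 3))
print("effective components:", int((bgm.weights_ > 0.05).sum()))
```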
16.
In contrast to existing TSK fuzzy system modeling methods, a novel zero-order TSK fuzzy modeling method, called Bayesian zero-order TSK fuzzy system (B-ZTSK-FS), is proposed in this paper from the perspective of Bayesian inference. The proposed method constructs a zero-order TSK fuzzy system by using the maximum a posteriori (MAP) framework to maximize the corresponding posterior probability. First, a joint likelihood model for the zero-order TSK fuzzy system is defined to derive a new objective function which ensures that both the antecedents and the consequents of fuzzy rules become interpretable, rather than only the antecedents as in most existing TSK fuzzy systems. The likelihood model is composed of three parts: clustering on the training set for the antecedents of fuzzy rules, the least squares (LS) error for the consequent parameters of fuzzy rules, and a Dirichlet prior distribution for the fuzzy cluster memberships, which not only automatically satisfies the “sum-to-one” constraint on fuzzy cluster memberships but also makes the proposed method scalable to large-scale datasets when the Dirichlet index is set appropriately. This likelihood model indicates that the antecedent and consequent parameters of fuzzy rules can be linguistically interpreted and simultaneously optimized by B-ZTSK-FS, which is based on the MAP framework with an iterative sampling algorithm; in other words, fuzziness and probability can work jointly, in a collaborative rather than conflicting way, for TSK fuzzy system modeling. Finally, experimental results on 28 synthetic and real-world datasets are reported to demonstrate the effectiveness of the proposed method in terms of approximation accuracy, interpretability, and scalability.
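For orientation, here is a plain (non-Bayesian) zero-order TSK baseline under simple assumptions: fixed Gaussian antecedent memberships around evenly spaced centres and constant consequents fitted by least squares. The MAP/Dirichlet machinery of B-ZTSK-FS is not reproduced, and all names and parameter values are illustrative.

```python
import numpy as np

def firing_strengths(X, centers, sigma):
    """Normalised Gaussian rule activations: one column per fuzzy rule."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

def fit_zero_order_tsk(X, y, centers, sigma):
    """Zero-order TSK: one constant consequent per rule, fitted by least squares on
    the normalised firing strengths (antecedent centres and width are fixed here)."""
    W = firing_strengths(X, centers, sigma)
    theta, *_ = np.linalg.lstsq(W, y, rcond=None)
    return theta

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
centers = np.linspace(-3, 3, 7).reshape(-1, 1)            # evenly spaced antecedent centres
theta = fit_zero_order_tsk(X, y, centers, sigma=0.8)
y_hat = firing_strengths(X, centers, sigma=0.8) @ theta    # model output
print("rule consequents:", np.round(theta, 2))
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)).round(3))
```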
17.
Luis Enrique Bergues Cabrales Andrs Ramírez Aguilera Rolando Placeres Jimnez Manuel Verdecia Jarque Hctor Manuel Camu Ciria Juan Bory Reyes Miguel Angel OFarril Mateus Fabiola Surez Palencia Marisela Gonzlez vila 《Mathematics and computers in simulation》2008,78(1):112-120
A modification of the conventional Gompertz equation (named the modified Gompertz equation) is introduced to describe solid tumor growth perturbed with direct electric current. Simulations of this equation are made. Quantitative fitting criteria such as goodness-of-fit, handling of missing data, and prediction capability are considered. Parameter estimation accuracy is also calculated. A fit to the experimental data of Ehrlich and fibrosarcoma Sa-37 tumor growths treated with different intensities of direct electric current is also made to validate this modified Gompertz equation. The results obtained in this study corroborate that the effectiveness of direct electric current depends on the i/i0 ratio, on the duration of its effects in the tumor, and on the tumor type. It is concluded that the modified Gompertz equation has good prediction capability for both unperturbed and perturbed tumor growth and could be used to help physicians choose the most appropriate treatment for patients and animals with malignant solid tumors.
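The modified equation itself is not given in the abstract, so the sketch below only evaluates the conventional Gompertz law that the authors start from; the closed form and the parameter names `alpha` and `beta`, as well as the numeric values, are standard but illustrative.

```python
import numpy as np

def gompertz(v0, alpha, beta, t):
    """Closed-form conventional Gompertz growth:
    V(t) = V0 * exp((alpha / beta) * (1 - exp(-beta * t)))."""
    t = np.asarray(t, dtype=float)
    return v0 * np.exp((alpha / beta) * (1.0 - np.exp(-beta * t)))

t = np.linspace(0, 30, 7)                      # time points, e.g. days
print(np.round(gompertz(v0=0.1, alpha=0.6, beta=0.1, t=t), 3))
```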
18.
Course teaching resources are integrated into the construction of the student model. The detailed process is described, including the construction of the domain-knowledge topology and the inference involved in the conditional probability table learning algorithm. A Bayesian network structure over the chapter-level knowledge items of the student model is finally obtained, and the feasibility of the whole framework for building the student model in a personalized teaching system is verified with an experimental system.
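As a minimal illustration of the kind of inference such a student model supports, the snippet below applies Bayes' rule to a single knowledge item with a hypothetical conditional probability table; the probabilities and variable names are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical CPT: P(correct answer | knowledge item mastered / not mastered)
p_correct_given_state = np.array([0.9, 0.2])   # index 0: mastered, index 1: not mastered
p_state_prior = np.array([0.5, 0.5])           # prior belief about mastery

def posterior_mastery(answered_correctly):
    """Bayes' rule over one knowledge item given a single quiz observation."""
    likelihood = p_correct_given_state if answered_correctly else 1 - p_correct_given_state
    joint = likelihood * p_state_prior
    return joint[0] / joint.sum()               # P(mastered | evidence)

print(posterior_mastery(True), posterior_mastery(False))
```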
19.
Software measurement can play an important risk management role during product development. For example, metrics incorporated into predictive models can give advance warning of potential risks. The authors show how to use Bayesian networks, a graphical modeling technique, to predict software defects and perform "what if" scenarios.
20.
Adrian C. Newton 《Environmental Modelling & Software》2010,25(1):15-23
The IUCN (International Union for Conservation of Nature) Red List is widely recognised as an authoritative assessment of the conservation status of species. However, the data available for Red Listing are often lacking or uncertain. This paper presents a Bayesian network that may be used to perform a Red List assessment of a taxon using uncertain data. In such cases, input variables can be entered as likelihoods, and the appropriate Red List category is identified by the network using Bayesian inference. Relative performance of the Bayesian network was evaluated by comparison with an alternative method (RAMAS® Red List), based on the use of fuzzy numbers. While results were generally comparable, some differences were noted for species with uncertain input data. Contrasting results may be attributed to differences in how uncertain data are analysed by the two approaches. The Bayesian network has the advantage of being more transparent, facilitating sensitivity analysis. The method consequently has potential for facilitating Red List assessments, particularly for poorly known taxa.