Similar Literature
 20 similar documents found (search time: 880 ms)
1.
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate, completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact, finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore the root-mean-square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
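The moments listed in the abstract determine the RMS directly. A minimal sketch (with made-up moment values, not the paper's) of how RMS follows from the estimator's second moment, the cross moment with the true error, and the true error's second moment:

```python
import math

# RMS^2 = E[(eps_hat - eps)^2] = E[eps_hat^2] - 2*E[eps_hat*eps] + E[eps^2]
def rms_from_moments(m2_hat: float, cross: float, m2_true: float) -> float:
    """RMS of an error estimator given E[eps_hat^2], E[eps_hat*eps], E[eps^2]."""
    return math.sqrt(m2_hat - 2.0 * cross + m2_true)

# Illustrative (made-up) moment values:
print(rms_from_moments(0.0625, 0.0600, 0.0640))
```

Given approximations of these three moments, one can invert the relation numerically to find the smallest sample size whose predicted RMS falls below a target.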

2.
To reduce the complexity of gradient-based edge-detection algorithms, two gradient-approximation algorithms are commonly used. However, these approximate gradient values are strongly affected by edge direction, which degrades edge-detection performance. This paper proposes a mathematical model of general gradient-approximation algorithms together with two optimization criteria, and from these derives two optimized gradient-approximation algorithms. Analysis shows that, compared with the commonly used algorithms, the optimized algorithms improve isotropy by a factor of 4.4 and the accuracy of the gradient-magnitude approximation by a factor of 57. A simple and fast implementation of the optimized algorithms is also given.
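The abstract does not name the two common approximations; a hedged illustration, assuming they are the widely used L1 form |gx| + |gy| and the L-infinity form max(|gx|, |gy|), shows the direction dependence (anisotropy) that the optimized algorithms are designed to reduce:

```python
import math

def true_mag(gx, gy):
    return math.hypot(gx, gy)          # exact gradient magnitude

def l1_mag(gx, gy):
    return abs(gx) + abs(gy)           # cheap L1 approximation

def linf_mag(gx, gy):
    return max(abs(gx), abs(gy))       # cheap L-infinity approximation

def worst_error(approx, steps=3600):
    """Worst-case error of an approximation over all edge directions
    for a unit-magnitude gradient."""
    worst = 0.0
    for k in range(steps):
        theta = math.pi * k / steps
        gx, gy = math.cos(theta), math.sin(theta)
        worst = max(worst, abs(approx(gx, gy) - 1.0))
    return worst

print(worst_error(l1_mag))   # overshoots by sqrt(2)-1 at 45 degrees
print(worst_error(linf_mag)) # undershoots by 1 - 1/sqrt(2) at 45 degrees
```

Both substitutes are exact along the axes and worst at diagonal edges, which is exactly the kind of direction-dependent error an isotropy criterion penalizes.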

3.
4.
5.
Molodtsov’s soft set theory is a newly emerging tool to deal with uncertain problems. Based on the novel granulation structures called soft approximation spaces, Feng et al. initiated soft rough approximations and soft rough sets. Feng’s soft rough sets can be seen as a generalized rough set model based on soft sets, which could provide better approximations than Pawlak’s rough sets in some cases. This paper is devoted to establishing the relationship among soft sets, soft rough sets and topologies. We introduce the concept of topological soft sets by combining soft sets with topologies and give their properties. New types of soft sets such as keeping intersection soft sets and keeping union soft sets are defined and supported by some illustrative examples. We describe the relationship between rough sets and soft rough sets. We obtain the structure of soft rough sets and the topological structure of soft sets, and reveal that every topological space on the initial universe is a soft approximating space.
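A minimal sketch of soft rough approximations in the style of Feng et al. (the concrete soft set below is made up for illustration): a soft set over a universe U is a map F from parameters to subsets of U, the soft lower approximation of a target X collects the F(a) contained in X, and the soft upper approximation the F(a) that meet X.

```python
def soft_lower(F, X):
    """Union of all F(a) entirely contained in X."""
    return set().union(*([v for v in F.values() if v <= X] or [set()]))

def soft_upper(F, X):
    """Union of all F(a) having nonempty intersection with X."""
    return set().union(*([v for v in F.values() if v & X] or [set()]))

U = {1, 2, 3, 4, 5}
F = {"a1": {1, 2}, "a2": {2, 3}, "a3": {4}}   # a toy soft set over U
X = {1, 2, 4}

print(soft_lower(F, X))  # {1, 2, 4}
print(soft_upper(F, X))  # {1, 2, 3, 4}
```

Note that, unlike Pawlak approximations, the soft lower approximation need not be contained in X for arbitrary soft sets; here it happens to coincide with X.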

6.

We compare various extensions of the Bradley–Terry model and a hierarchical Poisson log-linear model in terms of their performance in predicting the outcome of soccer matches (win, draw, or loss). The parameters of the Bradley–Terry extensions are estimated by maximizing the log-likelihood, or an appropriately penalized version of it, while the posterior densities of the parameters of the hierarchical Poisson log-linear model are approximated using integrated nested Laplace approximations. The prediction performance of the various modeling approaches is assessed using a novel, context-specific framework for temporal validation that is found to deliver accurate estimates of the test error. The direct modeling of outcomes via the various Bradley–Terry extensions and the modeling of match scores using the hierarchical Poisson log-linear model demonstrate similar behavior in terms of predictive performance.
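A minimal sketch of the basic Bradley–Terry model underlying the extensions compared in the paper (draws, time effects, and penalization are not shown; the strength values are illustrative, not fitted):

```python
import math

def win_prob(l_i: float, l_j: float) -> float:
    """P(i beats j) = exp(l_i) / (exp(l_i) + exp(l_j)) for strengths l."""
    return 1.0 / (1.0 + math.exp(l_j - l_i))

strengths = {"A": 0.8, "B": 0.2}   # hypothetical team strengths
p = win_prob(strengths["A"], strengths["B"])
print(round(p, 3))                 # probability that A beats B
```

Maximizing the log-likelihood of observed match outcomes over the strength parameters (possibly with a penalty) yields the fitted model; the hierarchical Poisson log-linear alternative instead models the two teams' scores directly.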


7.
Partial information in databases can arise when information from several databases is combined. Even if each database is complete for some “world”, the combined databases will not be, and answers to queries against such combined databases can only be approximated. In this paper we describe various situations in which a precise answer cannot be obtained for a query asked against multiple databases. Based on an analysis of these situations, we propose a classification of constructs that can be used to model approximations.

The main goal of the paper is to study several formal models of approximations and their semantics. In particular, we obtain universality properties for these models of approximations. Universality properties suggest syntax for languages with approximations based on the operations which are naturally associated with them. We prove universality properties for most of the approximation constructs. Then we design languages built around datatypes given by the approximation constructs. A straightforward approach results in languages that have a number of limitations. In an attempt to overcome those limitations, we explain how all the languages can be embedded into a language for conjunctive and disjunctive sets from Libkin and Wong (1996) and demonstrate its usefulness in querying independent databases. We also discuss the semantics of approximation constructs and the relationship between them.
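A hedged toy illustration, not the paper's formalism: when each source database is complete only for its own world, one simple way to bracket a query's answer over the combination is with a "certain" part (tuples every source supports) and a "possible" part (tuples some source supports).

```python
def certain_answers(answer_sets):
    """Tuples present in every source's answer."""
    out = set(answer_sets[0])
    for s in answer_sets[1:]:
        out &= set(s)
    return out

def possible_answers(answer_sets):
    """Tuples present in at least one source's answer."""
    out = set()
    for s in answer_sets:
        out |= set(s)
    return out

# Hypothetical answers to the same query against two independent databases:
db1 = {("alice", "cs"), ("bob", "math")}
db2 = {("alice", "cs"), ("carol", "bio")}
print(certain_answers([db1, db2]))   # {('alice', 'cs')}
print(possible_answers([db1, db2]))  # all three tuples
```

The gap between the two sets is a crude measure of how imprecise the combined answer is; the paper's approximation constructs refine this idea with richer datatypes and their universality properties.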


8.
The sparse synthesis model for signals has become very popular in the last decade, leading to improved performance in many signal processing applications. This model assumes that a signal may be described as a linear combination of few columns (atoms) of a given synthesis matrix (dictionary). The Co-Sparse Analysis model is a recently introduced counterpart, whereby signals are assumed to be orthogonal to many rows of a given analysis dictionary. These rows are called the co-support. The Analysis model has already led to a series of contributions that address the pursuit problem: identifying the co-support of a corrupted signal in order to restore it. While all the existing work adopts a deterministic point of view towards the design of such pursuit algorithms, this paper introduces a Bayesian estimation point of view, starting with a random generative model for the co-sparse analysis signals. This is followed by a derivation of Oracle, Minimum-Mean-Squared-Error (MMSE), and Maximum-A-posteriori-Probability (MAP) based estimators. We present a comparison between the deterministic formulations and these estimators, drawing some connections between the two. We develop practical approximations to the MAP and MMSE estimators, and demonstrate the proposed reconstruction algorithms in several synthetic and real image experiments, showing their potential and applicability.
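A hedged sketch of the oracle idea in its simplest form (a one-row simplification, not the paper's derivation): if a single co-support row w of the analysis dictionary is known, the oracle denoises y by projecting it onto the subspace orthogonal to w, i.e. onto {x : &lt;w, x&gt; = 0}.

```python
def project_out(w, y):
    """Remove from y its component along w (projection onto w's orthogonal
    complement); for several co-support rows this generalizes to projecting
    onto their joint null space."""
    scale = sum(wi * yi for wi, yi in zip(w, y)) / sum(wi * wi for wi in w)
    return [yi - scale * wi for wi, yi in zip(w, y)]

w = [1.0, 1.0, 0.0]        # hypothetical co-support row
y = [2.0, 0.0, 5.0]        # noisy observation
x_hat = project_out(w, y)
print(x_hat)                                    # [1.0, -1.0, 5.0]
print(sum(a * b for a, b in zip(w, x_hat)))     # 0.0: orthogonal to w
```

The MMSE estimator replaces this single projection by a posterior-weighted average over candidate co-supports, which is what the paper's practical approximations make tractable.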

9.
As databases increasingly integrate different types of information such as multimedia, spatial, time-series, and scientific data, it becomes necessary to support efficient retrieval of multidimensional data. Both the dimensionality and the amount of data that needs to be processed are increasing rapidly. Reducing the dimension of the feature vectors to enhance the performance of the underlying technique is a popular solution to the infamous curse of dimensionality. We expect the techniques to have good quality of distance measures when the similarity distance between two feature vectors is approximated by some notion of distance between two lower-dimensional transformed vectors. Thus, it is desirable to develop techniques resulting in accurate approximations to the original similarity distance. We investigate dimensionality reduction techniques that directly target minimizing the errors made in the approximations. In particular, we develop dynamic techniques for efficient and accurate approximation of similarity evaluations between high-dimensional vectors based on inner-product approximations. Inner-product, by itself, is used as a distance measure in a wide area of applications such as document databases. A first order approximation to the inner-product is obtained from the Cauchy-Schwarz inequality. We extend this idea to higher order power symmetric functions of the multidimensional points. We show how to compute fixed coefficients that work as universal weights based on the moments of the probability density function of the data set. We also develop a dynamic model to compute the universal coefficients for data sets whose distribution is not known. Our experiments on synthetic and real data sets show that the similarity between two objects in high-dimensional space can be accurately approximated by a significantly lower-dimensional representation.
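A minimal sketch of the Cauchy-Schwarz starting point (the paper's higher-order power-symmetric extensions and universal weights are not shown): storing only each vector's norm gives a one-number summary that bounds every pairwise inner product, since |&lt;x, y&gt;| &lt;= ||x|| * ||y||.

```python
import math

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(inner(x, x))

# Illustrative vectors:
x = [1.0, 2.0, 2.0]
y = [2.0, 1.0, 2.0]
print(inner(x, y))        # 8.0: the exact inner product
print(norm(x) * norm(y))  # 9.0: the first-order Cauchy-Schwarz bound
```

The closer two vectors are in direction, the tighter this first-order approximation; the paper's higher-order summaries shrink the gap for less-aligned pairs.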

10.
All images of a convex Lambertian surface captured with a fixed pose under varying illumination are known to lie in a convex cone in the image space that is called the illumination cone. Since this cone model is too complex to be built in practice, researchers have attempted to approximate it with simpler models. In this paper, we propose a segmented linear subspace model to approximate the cone. Our idea of segmentation is based on the fact that the success of low dimensional linear subspace approximations of the illumination cone increases if the directions of the surface normals get close to each other. Hence, we propose to cluster the image pixels according to their surface normal directions and to approximate the cone with a linear subspace for each of these clusters separately. We perform statistical performance evaluation experiments to compare our system to other popular systems and demonstrate that the performance increase we obtain is statistically significant.

11.
A recently proposed Bayesian modeling framework for classification facilitates both the analysis and optimization of error estimation performance. The Bayesian error estimator is then defined to have optimal mean-square error performance, but in many situations closed-form representations are unavailable and approximations may not be feasible. To address this, we present a method to optimally calibrate arbitrary error estimators for minimum mean-square error performance within a supposed Bayesian framework. Assuming a fixed sample size, classification rule and error estimation rule, as well as a fixed Bayesian model, the calibration is done by first computing a calibration function that maps error estimates to their optimally calibrated values off-line. Once found, this calibration function may be easily applied to error estimates on the fly whenever the assumptions apply. We demonstrate that calibrated error estimators offer significant improvement in performance relative to classical error estimators under Bayesian models with both linear and non-linear classification rules.
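A hedged toy illustration of the on-the-fly step (the paper derives the calibration function from the Bayesian model; here a made-up lookup table plays that role): once the calibration function has been tabulated off-line, applying it is just interpolation.

```python
# Hypothetical precomputed (raw, calibrated) pairs -- illustrative only:
TABLE = [(0.0, 0.02), (0.1, 0.09), (0.2, 0.17), (0.3, 0.26)]

def calibrate(raw: float) -> float:
    """Piecewise-linear interpolation in the precomputed calibration table."""
    for (r0, c0), (r1, c1) in zip(TABLE, TABLE[1:]):
        if r0 <= raw <= r1:
            t = (raw - r0) / (r1 - r0)
            return c0 + t * (c1 - c0)
    raise ValueError("raw estimate outside table range")

print(calibrate(0.15))  # halfway between the 0.1 and 0.2 table entries
```

The table is valid only for the fixed sample size, classification rule, error estimation rule, and Bayesian model it was computed under; change any of those and the table must be recomputed.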

12.
This paper presents a Monte-Carlo study on the practical reliability of numerical algorithms for FIML-estimation in nonlinear econometric models. The performance of different techniques of Hessian approximation in trust-region algorithms is compared regarding their “robustness” against “bad” starting points and their “global” and “local” convergence speed, i.e. the gain in the objective function, caused by individual iteration steps far off from and near to the optimum. Concerning robustness and global convergence speed the crude GLS-type Hessian approximations performed best, efficiently exploiting the special structure of the likelihood function. But, concerning local speed, general purpose techniques were strongly superior. So, some appropriate mixtures of these two types of approximations turned out to be the only techniques to be recommended.

13.
This work addresses the matching of a 3D deformable face model to 2D images through a 2.5D Active Appearance Model (AAM). We propose a 2.5D AAM that combines a 3D metric Point Distribution Model (PDM) and a 2D appearance model whose control points are defined by a full perspective projection of the PDM. The advantage is that, assuming a calibrated camera, 3D metric shapes can be retrieved from single-view images. Two model-fitting algorithms and their computationally efficient approximations are proposed: the Simultaneous Forwards Additive (SFA) and the Normalization Forwards Additive (NFA), both based on the Lucas–Kanade framework. The SFA algorithm searches for shape and appearance parameters simultaneously, whereas the NFA projects out the appearance from the error image and searches only for the shape parameters; SFA is therefore more accurate. Robust solutions for the SFA and NFA are also proposed in order to take into account self-occlusion or partial occlusion of the face. Several performance evaluations of the SFA, NFA and their efficient approximations were performed. The experiments include evaluating the frequency of convergence, the fitting performance on unseen data and the tracking performance on the FGNET Talking Face sequence. All results show that the 2.5D AAM can outperform both the combined 2D + 3D models and the standard 2D methods. The robust extensions to occlusion were tested on a synthetic sequence, showing that the model can deal efficiently with large head rotations.

14.
We present an adaptive finite element method for evolutionary convection–diffusion problems. The algorithm is based on an a posteriori indicator of the size of the oscillations displayed by the finite element approximation. The procedure is able to refine or coarsen dynamically the mesh adjusting it automatically to evolving layers. The method produces nearly non-oscillatory approximations in the convection dominated regime. We check the performance of the adaptive method with some numerical experiments.

15.
This paper studies quantitative model checking of infinite tree-like (continuous-time) Markov chains. These tree-structured quasi-birth-death processes are equivalent to probabilistic pushdown automata and recursive Markov chains and are widely used in the field of performance evaluation. We determine time-bounded reachability probabilities in these processes (which, with direct methods such as uniformization, result in an exponential blow-up) by applying abstraction. We contrast abstraction based on Markov decision processes (MDPs) with interval-based abstraction, study various schemes to partition the state space, and empirically show their influence on the accuracy of the obtained reachability probabilities. Results show that grid-like schemes, in contrast to chain- and tree-like ones, yield extremely precise approximations for rather coarse abstractions.
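A minimal sketch of plain uniformization on a finite CTMC (the direct method the paper contrasts with abstraction; the two-state chain is illustrative): with generator Q and uniformization rate Lambda at least the maximum exit rate, the transient distribution is pi(t) = sum over k of Poisson(k; Lambda*t) * pi0 * P^k, where P = I + Q/Lambda.

```python
import math

def uniformization(q, pi0, t, terms=200):
    """Transient distribution of a small CTMC with generator q at time t,
    via truncated uniformization."""
    n = len(q)
    lam = max(-q[i][i] for i in range(n))           # uniformization rate
    p = [[(1.0 if i == j else 0.0) + q[i][j] / lam for j in range(n)]
         for i in range(n)]                          # DTMC P = I + Q/lam
    vec = pi0[:]                                     # pi0 * P^k, updated per term
    out = [0.0] * n
    weight = math.exp(-lam * t)                      # Poisson(0; lam*t)
    for k in range(terms):
        out = [o + weight * v for o, v in zip(out, vec)]
        vec = [sum(vec[i] * p[i][j] for i in range(n)) for j in range(n)]
        weight *= lam * t / (k + 1)                  # next Poisson weight
    return out

# Two-state chain: leave state 0 at rate 2, return at rate 1.
Q = [[-2.0, 2.0], [1.0, -1.0]]
pi = uniformization(Q, [1.0, 0.0], t=5.0)
print([round(x, 4) for x in pi])  # close to the stationary distribution [1/3, 2/3]
```

On infinite tree-structured state spaces the number of terms and states explodes, which is why the paper resorts to MDP- and interval-based abstractions instead.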

16.
In this paper, we consider a spectral method based on generalized Hermite functions in multiple dimensions. We first introduce three normed spaces and prove their equivalence, which enables us to develop and to analyze generalized Hermite approximations efficiently. We then establish some basic results on generalized Hermite orthogonal approximations in multiple dimensions, which play important roles in the relevant spectral methods. As examples, we consider an elliptic equation with a harmonic potential and a class of nonlinear wave equations. The spectral schemes are proposed, and the convergence is proved. Numerical results demonstrate the spectral accuracy of this approach.

17.
In this work a robust nonlinear model predictive controller for nonlinear convection-diffusion-reaction systems is presented. The controller makes use of a collection of reduced-order approximations of the plant (models) reconstructed on-line by projection methods on proper orthogonal decomposition (POD) basis functions. The model selection and model update step is based on a sufficient condition that determines the maximum allowable process-model mismatch to guarantee stable control performance despite process uncertainty and disturbances. Proofs of the existence of a sequence of feasible approximations and of control stability are given. Since plant approximations are built on-line based on actual measurements, the proposed controller can be interpreted as a multi-model nonlinear predictive controller (MMPC). The performance of the MMPC strategy is illustrated by simulation experiments on a problem that involves reactant concentration control of a tubular reactor with recycle.

18.
In the context of structural optimization via a level-set method we propose a framework to handle geometric constraints related to a notion of local thickness. The local thickness is calculated using the signed distance function to the shape. We formulate global constraints using integral functionals and compute their shape derivatives. We discuss different strategies and possible approximations to handle the geometric constraints. We implement our approach in two and three space dimensions for a model of linearized elasticity. As can be expected, the resulting optimized shapes are strongly dependent on the initial guesses and on the specific treatment of the constraints since, in particular, some topological changes may be prevented by those constraints.

19.
We propose an efficient approach for the grouping of local orientations (points on vessels) via nilpotent approximations of sub-Riemannian distances in the 2D and 3D roto-translation groups SE(2) and SE(3). In our distance approximations we consider homogeneous norms on nilpotent groups that locally approximate SE(n), and which are obtained via the exponential and logarithmic map on SE(n). In a qualitative validation we show that the norms provide accurate approximations of the true sub-Riemannian distances, and we discuss their relations to the fundamental solution of the sub-Laplacian on SE(n). The quantitative experiments further confirm the accuracy of the approximations. Quantitative results are obtained by evaluating perceptual grouping performance of retinal blood vessels in 2D images and curves in challenging 3D synthetic volumes. The results show that (1) sub-Riemannian geometry is essential in achieving top performance and (2) grouping via the fast analytic approximations performs almost equally, or better, than data-adaptive fast marching approaches on \(\mathbb{R}^n\) and SE(n).

20.
This paper describes an accurate and efficient method to model and predict the performance of distributed/parallel systems. Various performance measures, such as the expected user response time, the system throughput and the average server utilization, can be easily estimated using this method. The methodology is based on known product form queueing network methods, with some additional approximations. The method is illustrated by evaluating performance of a multi-client multi-server distributed system. A system model is constructed and mapped to a probabilistic queueing network model which is used to predict its behavior. The effects of user think time and various design parameters on the performance of the system are investigated by both the analytical method and computer simulation. The accuracy of the former is verified. The methodology is applied to identify the bottleneck server and to establish proper balance between clients and servers in distributed/parallel systems.
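A hedged toy illustration, not the paper's multi-client multi-server model: for a single M/M/1 station of the kind that appears as a node in a product-form queueing network, arrival rate lam and service rate mu determine the standard performance measures in closed form.

```python
def mm1_metrics(lam: float, mu: float):
    """Closed-form M/M/1 measures for a stable station (lam < mu)."""
    if lam >= mu:
        raise ValueError("unstable queue: need lam < mu")
    rho = lam / mu                        # server utilization
    return {"utilization": rho,
            "throughput": lam,            # all arrivals are eventually served
            "response_time": 1.0 / (mu - lam),
            "mean_jobs": rho / (1.0 - rho)}

m = mm1_metrics(lam=8.0, mu=10.0)
print(m["utilization"], m["response_time"])  # 0.8 0.5
```

Comparing utilizations across stations is the simplest form of the bottleneck identification mentioned in the abstract: the station with the highest rho saturates first as load grows.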


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号