1.

Probabilistic topic modeling algorithms like Latent Dirichlet Allocation (LDA) have become powerful tools for the analysis of large collections of documents (such as papers, projects, or funding applications) in science, technology and innovation (STI) policy design and monitoring. However, selecting an appropriate and stable topic model for a specific application (by adjusting the hyperparameters of the algorithm) is not a trivial problem. Common validation metrics like coherence or perplexity, which focus on the quality of topics, are not a good fit in applications where the quality of the document similarity relations inferred from the topic model is especially relevant. Relying on graph analysis techniques, the aim of our work is to establish a new methodology for hyperparameter selection that is specifically oriented to optimize the similarity metrics emanating from the topic model. To this end, we propose two graph metrics: the first measures the variability of the similarity graphs that result from different runs of the algorithm for a fixed value of the hyperparameters, while the second measures the alignment between the graph derived from the LDA model and another obtained using metadata available for the corresponding corpus. Through experiments on various corpora related to STI, we show that the proposed metrics provide relevant indicators to select the number of topics and build persistent topic models that are consistent with the metadata. Their use, which can be extended to other topic models beyond LDA, could facilitate the systematic adoption of this kind of technique in STI policy analysis and design.
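To make the first of these metrics concrete, the following sketch (illustrative only, not the authors' implementation) assumes scikit-learn's LDA and cosine similarity between per-document topic vectors: the model is refitted with different random seeds for a fixed number of topics, and the resulting document-similarity graphs are compared. All function names are hypothetical.

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def similarity_graph(doc_term_matrix, n_topics, seed):
    # Fit LDA with a given seed and return the document-document cosine-similarity matrix.
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    doc_topic = lda.fit_transform(doc_term_matrix)  # per-document topic weights
    return cosine_similarity(doc_topic)

def run_variability(doc_term_matrix, n_topics, n_runs=5):
    # Mean absolute difference between the similarity graphs of different runs;
    # lower values indicate a more persistent model for this number of topics.
    graphs = [similarity_graph(doc_term_matrix, n_topics, seed) for seed in range(n_runs)]
    diffs = [np.abs(graphs[i] - graphs[j]).mean()
             for i in range(n_runs) for j in range(i + 1, n_runs)]
    return float(np.mean(diffs))
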

2.
In this paper, we analyze stochastic gradient learning rules for posterior probability estimation using networks with a single layer of weights and a general nonlinear activation function. We provide necessary and sufficient conditions on the learning rules and the activation function to obtain probability estimates. We also extend the concept of a well-formed cost function, proposed by Wittner and Denker, to multiclass problems, and we provide theoretical results showing the advantages of this kind of objective function.
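As one well-known instance of the setting analyzed here (not code from the paper), the sketch below pairs the logistic activation with the cross-entropy cost, a combination that yields posterior probability estimates; the stochastic gradient rule then reduces to the familiar update.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_posterior_estimator(X, y, lr=0.1, epochs=50):
    # X: (n, d) features, y: (n,) labels in {0, 1}; returns (w, b) such that
    # sigmoid(X @ w + b) estimates P(y = 1 | x).
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = sigmoid(X[i] @ w + b)  # current posterior estimate for sample i
            err = p - y[i]             # gradient of the cross-entropy w.r.t. the logit
            w -= lr * err * X[i]
            b -= lr * err
    return w, b
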
3.
This Letter discusses the application of gradient-based methods to train a single-layer perceptron subject to the constraint that the saturation degree of the sigmoid activation function (measured as its maximum slope in the sample space) is fixed to a given value. From a theoretical standpoint, we show that, if the training set is not linearly separable, the minimization of an Lp error norm provides an approximation to the minimum-error classifier, provided that the perceptron is highly saturated. Moreover, if the data are linearly separable, the perceptron approximates the maximum-margin classifier.
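A rough sketch of the idea follows, under a crude simplifying assumption: the saturation degree is pinned by fixing the weight norm after each update, whereas the Letter constrains the maximum slope over the sample space. The code minimizes an Lp error norm for a single sigmoidal perceptron and is illustrative only.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_saturated_perceptron(X, y, p=2, target_norm=20.0, lr=0.01, epochs=200):
    # X: (n, d), y in {0, 1}; a large target_norm keeps the sigmoid highly saturated.
    w = np.random.default_rng(0).normal(size=X.shape[1])
    w *= target_norm / np.linalg.norm(w)
    for _ in range(epochs):
        o = sigmoid(X @ w)                # perceptron outputs
        e = o - y
        # gradient of sum_i |e_i|^p through the sigmoid
        grad = X.T @ (p * np.abs(e) ** (p - 1) * np.sign(e) * o * (1 - o))
        w -= lr * grad
        w *= target_norm / np.linalg.norm(w)  # re-impose the saturation level
    return w
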
4.
Decision theory shows that the optimal decision is a function of the posterior class probabilities. More specifically, in binary classification, the optimal decision is based on the comparison of the posterior probabilities with some threshold. Therefore, the most accurate estimates of the posterior probabilities are required near these decision thresholds. This paper discusses the design of objective functions that provide more accurate estimates of the probability values, taking into account the characteristics of each decision problem. We propose learning algorithms based on the stochastic gradient minimization of these loss functions. We show that the performance of the classifier is improved when these algorithms behave like sample selectors: samples near the decision boundary are the most relevant during learning.
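The following sketch mimics the "sample selector" behaviour described above with an explicit window around the decision threshold (the paper instead derives loss functions with this effect built in); the threshold eta would come from the misclassification costs, and all names are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_near_threshold(X, y, eta=0.5, width=0.2, lr=0.1, epochs=50):
    # Only samples whose estimated posterior falls within +/- width of the
    # decision threshold eta contribute to the updates ("sample selection").
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = sigmoid(X[i] @ w + b)
            if abs(p - eta) > width:  # far from the threshold: skip this sample
                continue
            err = p - y[i]
            w -= lr * err * X[i]
            b -= lr * err
    return w, b
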
5.
In some real applications, such as medical diagnosis or remote sensing, the available training data often do not reflect the true a priori probabilities of the underlying data distribution, and the classifier designed from these data may be suboptimal. Classifiers that are robust against changes in the prior probabilities can be built by applying a minimax learning strategy. In this paper, we propose a simple fixed-point algorithm that is able to train a neural minimax classifier, i.e., a classifier minimizing the worst (maximum) possible risk. Moreover, we present a new parametric family of loss functions that provides the most accurate estimates of the posterior class probabilities near the decision regions, and we discuss the application of these functions together with a minimax learning strategy. The results of experiments carried out on different real databases demonstrate the ability of the proposed algorithm to find the minimax solution and produce a robust classifier when the real a priori probabilities differ from the estimated ones.
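A hedged sketch of a minimax-style training loop follows; it is not the paper's fixed-point rule, and a prior-weighted logistic regression stands in for the neural classifier. The assumed prior is shifted toward the class with the larger conditional error until the class-conditional risks roughly equalize.

import numpy as np
from sklearn.linear_model import LogisticRegression

def minimax_priors(X, y, iters=20, step=0.1):
    # Alternate between fitting a prior-weighted classifier and moving the
    # assumed prior pi = P(y = 1) toward the class with the larger error rate.
    pi = 0.5
    for _ in range(iters):
        w = np.where(y == 1, pi / np.mean(y == 1), (1 - pi) / np.mean(y == 0))
        clf = LogisticRegression().fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        r1 = np.mean(pred[y == 1] != 1)  # conditional error on class 1
        r0 = np.mean(pred[y == 0] != 0)  # conditional error on class 0
        pi = np.clip(pi + step * (r1 - r0), 0.05, 0.95)
    return clf, pi
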
6.
This paper proposes a novel algorithm to jointly determine the structure and the parameters of an a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP), whose outputs can be understood as probabilities, although the results can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes.
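A minimal sketch of the GSP forward pass as described above: each class owns several "subclass" linear units, a softmax is taken over all units, and the class posterior estimate is the sum of its subclass probabilities. The PPMS pruning/splitting/merging logic is omitted, and the function is illustrative rather than the authors' code.

import numpy as np

def gsp_forward(x, weights, subclasses_per_class):
    # weights: (total_subclasses, d); subclasses_per_class: e.g. [2, 3, 1].
    z = weights @ x
    q = np.exp(z - z.max())
    q /= q.sum()                       # softmax over all subclass units
    probs, start = [], 0
    for m in subclasses_per_class:     # aggregate subclass units into classes
        probs.append(q[start:start + m].sum())
        start += m
    return np.array(probs)             # estimated class posteriors
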
7.
The problem of designing cost functions to estimate a posteriori probabilities in multiclass problems is addressed. We establish necessary and sufficient conditions that these costs must satisfy in one-class one-output networks whose outputs are consistent with probability laws. We focus our attention on a particular subset of the corresponding cost functions that satisfy two common properties: symmetry and separability (well-known cost functions, such as the quadratic cost or the cross entropy, are particular cases in this subset). Finally, we present a universal stochastic gradient learning rule for single-layer networks, in the sense of minimizing a general version of these cost functions for a wide family of nonlinear activation functions.
8.
In this paper, we propose a new method for training classifiers for multi-class problems in which classes are not (necessarily) mutually exclusive and may be related by means of a probabilistic tree structure. The method is based on the definition of a Bayesian model relating network parameters, feature vectors, and categories. Learning is formulated as a maximum likelihood estimation problem for the classifier parameters. The proposed algorithm is especially suited to situations where each training sample is labeled with respect to only one or a subset of the categories in the tree. Our experiments on information retrieval scenarios show the advantages of the proposed method.
9.
Many types of nonlinear classifiers have been proposed to automatically generate land-cover maps from satellite images. Some are based on the estimation of posterior class probabilities, whereas others estimate the decision boundary directly. In this paper, we propose a modular design able to focus the learning process on the decision boundary by using posterior probability estimates. To do so, we use a self-configuring architecture that incorporates specialized modules to deal with conflicting classes, and we apply a learning algorithm that concentrates learning on the posterior probability regions that are critical for the decision problem stated by the user-defined misclassification costs. Moreover, we show that by filtering the posterior probability map, the impulsive noise that commonly affects automatic land-cover classification can be significantly reduced. Experimental results on real multi- and hyperspectral images show the effectiveness of the proposed solutions versus other typical approaches that are not based on probability estimates, such as Support Vector Machines.
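The posterior-map filtering step can be illustrated as follows, under two assumptions: a simple median filter stands in for the paper's filtering, and the posterior maps are given as an (H, W, C) array. Smoothing the per-class probability maps before the per-pixel argmax suppresses isolated, salt-and-pepper label errors.

import numpy as np
from scipy.ndimage import median_filter

def filtered_classification(posteriors, size=3):
    # posteriors: (H, W, C) per-pixel class posterior estimates.
    smoothed = np.stack(
        [median_filter(posteriors[..., c], size=size) for c in range(posteriors.shape[-1])],
        axis=-1,
    )
    return smoothed.argmax(axis=-1)    # filtered land-cover label map
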
10.
Optimal Selective Transmission under Energy Constraints in Sensor Networks
An optimum selective transmission scheme is developed for energy-limited sensor networks in which sensors send or forward messages of different importance (priority). Considering the energy costs, the available battery, the message importances, and their statistical distribution, sensors decide whether to transmit or discard a message so that the importance sum of the effectively transmitted messages is maximized. It turns out that the optimal decision is made by comparing the message importance with a time-variant threshold. Moreover, the gain of the selective transmission scheme, compared to a nonselective one, critically depends on the energy expenses, among other factors. Albeit suboptimal, practical schemes that operate under less demanding conditions than those required by the optimal one are developed. Effort is placed in three directions: 1) the analysis of the optimal transmission policy for several stationary importance distributions; 2) the design of a transmission policy with an invariant threshold that entails asymptotic optimality; and 3) the design of an adaptive algorithm that estimates the importance distribution from the actually received (or sensed) messages. Numerical results corroborating our theoretical claims and quantifying the gains of implementing the selective scheme close this paper.
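For illustration only, the sketch below replaces the optimal time-variant threshold derived in the paper with a simple heuristic: a message is transmitted when its importance exceeds the quantile of the (estimated) importance distribution implied by the remaining energy budget. All names and parameters are hypothetical.

import numpy as np

def should_transmit(importance, battery, tx_cost, slots_left, importance_samples):
    # importance_samples: previously observed importances, used to estimate
    # the importance distribution; battery / tx_cost bounds the sends left.
    max_transmissions = battery / tx_cost
    budget_fraction = min(1.0, max_transmissions / max(slots_left, 1))
    threshold = np.quantile(importance_samples, 1.0 - budget_fraction)
    return importance >= threshold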