1.
Hyvärinen A. Neural Computation, 2008, 20(12): 3087-3110
In signal restoration by Bayesian inference, one typically uses a parametric model of the prior distribution of the signal. Here, we consider how the parameters of a prior model should be estimated from observations of uncorrupted signals. Much recent work has implicitly assumed that maximum likelihood estimation is the optimal estimation method. Our results imply that this is not the case. We first obtain an objective function that approximates the error incurred in signal restoration due to an imperfect prior model. Next, we show that in an important special case (small gaussian noise), this error equals the score-matching objective function, which had previously been proposed as an alternative to likelihood on purely computational grounds. Our analysis thus shows that score matching combines computational simplicity with statistical optimality in signal restoration, providing a viable alternative to maximum likelihood methods. We also show how the method leads to a new, intuitive, and geometric interpretation of structure inherent in probability distributions.
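For reference, the abstract does not reproduce the objective itself; in the standard notation of the score-matching literature (Hyvärinen, 2005), it reads

J(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[ \sum_{i=1}^{n} \left( \partial_i \psi_i(x;\theta) + \tfrac{1}{2}\, \psi_i(x;\theta)^2 \right) \right], \qquad \psi_i(x;\theta) = \frac{\partial \log p(x;\theta)}{\partial x_i},

where \psi is the model score function. The objective involves only derivatives with respect to the data, so the intractable normalization constant of p(x;\theta) cancels, which is the computational advantage the abstract alludes to.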
2.
Recently, a number of empirical studies have compared the performance of PCA and ICA as feature extraction methods in appearance-based object recognition systems, with mixed and seemingly contradictory results. In this paper, we briefly describe the connection between the two methods and argue that whitened PCA may yield results identical to ICA in some cases. Furthermore, we describe the specific situations in which ICA might significantly improve on PCA.
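To make the PCA-ICA connection concrete, here is a minimal numpy sketch of PCA whitening, with illustrative names not taken from the paper. Whitening determines the representation only up to an orthogonal rotation; ICA uses higher-order statistics, which PCA ignores, to pick that rotation.

import numpy as np

def pca_whiten(X, n_components):
    """Center X (n_samples x n_features), then whiten with PCA."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)          # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    idx = np.argsort(eigvals)[::-1][:n_components]
    E, D = eigvecs[:, idx], eigvals[idx]    # top principal directions
    return Xc @ E / np.sqrt(D)              # unit variance per direction

Z = pca_whiten(np.random.randn(1000, 10), 5)
print(np.allclose(np.cov(Z, rowvar=False), np.eye(5)))   # True

After whitening, every orthogonal rotation of Z also has identity covariance, which is why second-order methods alone cannot distinguish among the candidate ICA solutions.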
3.
Hyvärinen A. Neural Computation, 2006, 18(10): 2283-2292
A Boltzmann machine is a classic model of neural computation, and a number of methods have been proposed for its estimation. Most methods are plagued by either very slow convergence or asymptotic bias in the resulting estimates. Here we consider estimation in the basic case of fully visible Boltzmann machines. We show that the old principle of pseudolikelihood estimation provides an estimator that is computationally very simple yet statistically consistent.
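As an illustration of why pseudolikelihood is computationally simple here, a minimal numpy sketch follows, assuming +/-1 units and the parametrization p(x) proportional to exp((1/2) x^T W x + b^T x) with symmetric W and zero diagonal; this is a sketch of the principle, not the paper's code.

import numpy as np

def log_sigmoid(z):
    return -np.logaddexp(0.0, -z)       # numerically stable log(sigmoid(z))

def log_pseudolikelihood(X, W, b):
    """Mean log-pseudolikelihood of a fully visible Boltzmann machine.
    X: (n_samples, n_units) array of +/-1 values; W: symmetric couplings
    with zero diagonal; b: biases."""
    A = X @ W + b                       # local field at each unit
    # p(x_i | x_rest) = sigmoid(2 * x_i * field_i) for +/-1 units
    return log_sigmoid(2.0 * X * A).sum(axis=1).mean()

Each conditional is a simple logistic term, so no partition function over all 2^n states is ever evaluated; maximizing the sum of these terms is what makes the estimator simple, and the paper shows it is also statistically consistent.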
4.
Hyvärinen A. Natural Computing, 2002, 1(2-3): 185-198
The topographic organization of brain areas such as the primary visual cortex is usually assumed to reflect exclusively anatomical constraints such as wiring length. Here we argue that topography is in fact embedded in the statistical structure of the natural sensory input. Thus, the topography of a sensory area reflects the statistics of its input as well, in the same way as the sparseness of cell outputs does.
5.
Estimating Overcomplete Independent Component Bases for Image Windows
Estimating overcomplete ICA bases for image windows is a difficult problem. Most algorithms require the estimation of the values of the independent components, which leads to computationally heavy procedures. Here we first review the existing methods, and then introduce two new algorithms that estimate an approximate overcomplete basis quite fast in a high-dimensional space. The first algorithm is based on the prior assumption that the basis vectors are randomly distributed in the space, and therefore close to orthogonal. The second replaces the conventional orthogonalization procedure by a transformation of the marginal density to gaussian.
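The prior assumption behind the first algorithm can be illustrated numerically: independent random directions in an n-dimensional space are nearly orthogonal, with pairwise inner products whose standard deviation is about 1/sqrt(n). A small sketch with arbitrarily chosen dimensions:

import numpy as np

rng = np.random.default_rng(0)
n = 256                                # e.g. 16x16 pixel windows
V = rng.standard_normal((n, 2 * n))    # a 2x overcomplete random basis
V /= np.linalg.norm(V, axis=0)         # unit-norm basis vectors
G = V.T @ V                            # pairwise inner products
off_diag = G[~np.eye(2 * n, dtype=bool)]
print(off_diag.std(), 1 / np.sqrt(n))  # both roughly 0.0625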
6.
The author previously introduced a fast fixed-point algorithm for independent component analysis. The algorithm was derived from objective functions motivated by projection pursuit. In this paper, it is shown that the algorithm is closely connected to maximum likelihood estimation as well. The basic fixed-point algorithm maximizes the likelihood under the constraint of decorrelation, if the score function is used as the nonlinearity. Modifications of the algorithm maximize the likelihood without constraints.
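The fixed-point algorithm in question is FastICA. Below is a minimal one-unit sketch, assuming whitened input and tanh as the nonlinearity; when the nonlinearity is instead the score function of the source density, the same iteration maximizes the likelihood under decorrelation, as the paper shows.

import numpy as np

def fastica_one_unit(Z, n_iter=200, tol=1e-8):
    """One-unit FastICA on whitened data Z (n_samples x n_dims)."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(Z.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = Z @ w                                 # projections w^T x
        g, g_prime = np.tanh(y), 1.0 - np.tanh(y) ** 2
        # fixed-point update: w <- E[x g(w^T x)] - E[g'(w^T x)] w
        w_new = (Z * g[:, None]).mean(axis=0) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)            # decorrelation constraint
        if abs(abs(w_new @ w) - 1.0) < tol:       # converged up to sign
            return w_new
        w = w_new
    return w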
7.
8.
Recently, statistical models of natural images have shown the emergence of several properties of the visual cortex. Most models have considered the nongaussian properties of static image patches, leading to sparse coding or independent component analysis. Here we consider the basic time dependencies of image sequences instead of their nongaussianity. We show that simple-cell-type receptive fields emerge when temporal response strength correlation is maximized for natural image sequences. Thus, temporal response strength correlation, which is a nonlinear measure of temporal coherence, provides an alternative to sparseness in modeling simple-cell receptive field properties. Our results also suggest an interpretation of simple cells in terms of invariant coding principles, which have previously been used to explain complex-cell receptive fields.
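A schematic numpy version of the quantity being maximized, assuming squaring as the response-strength nonlinearity and a one-frame lag; the paper's exact formulation and constraints may differ.

import numpy as np

def temporal_response_strength_correlation(W, X):
    """X: (n_frames, n_pixels) video frames in temporal order;
    W: (n_filters, n_pixels) linear receptive fields."""
    S = X @ W.T                        # simple-cell-like responses per frame
    E = S ** 2                         # response strengths (energies)
    # average product of strengths on consecutive frames, summed over filters
    return (E[1:] * E[:-1]).mean(axis=0).sum()

Maximizing this over W, under a decorrelation constraint on the outputs so that the trivial solution of duplicated filters is excluded, favours filters whose energy varies smoothly over time.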
9.
Complex cell pooling and the statistics of natural images
In previous work, we presented a statistical model of natural images that produced outputs similar to the receptive fields of complex cells in primary visual cortex. A weakness of that model, however, was that the structure of the pooling was assumed a priori rather than learned from the statistical properties of natural images. Here, we present an extended model in which the pooling nonlinearity and the size of the subspaces are optimized rather than fixed, so that far fewer assumptions about the pooling are needed. Results on natural images indicate that the best probabilistic representation is formed when the size of the subspaces is relatively large, and that the likelihood is considerably higher than for a simple linear model with no pooling. Further, we show that the optimal nonlinearity for the pooling is squaring. We also highlight the importance of contrast gain control for the performance of the model. Our model is novel in that it is the first to analyze the optimal subspace size and how this size is influenced by contrast normalization.
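For reference, the fixed-pooling baseline that this extended model generalizes (independent-subspace-style pooling of squared outputs) can be sketched as follows; here both the squaring and the subspace size are fixed by hand, which is exactly what the extended model instead learns.

import numpy as np

def complex_cell_responses(X, W, subspace_size):
    """Pool squared filter outputs within fixed, equal-size subspaces.
    X: (n_patches, n_pixels); W: (n_filters, n_pixels), n_filters
    divisible by subspace_size, consecutive rows forming one pool."""
    S = (X @ W.T) ** 2                              # squared linear outputs
    n_pools = W.shape[0] // subspace_size
    pooled = S.reshape(len(X), n_pools, subspace_size).sum(axis=2)
    return np.sqrt(pooled)                          # complex-cell-like response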
10.
Following the Kyoto Protocol, the European Union obligated itself to lower its greenhouse gas (GHG) emissions to 20% below their 1990 level by the year 2020. Carbon dioxide is the major GHG. To fulfil this obligation, the member nations must meet the sustainability challenge of countering rising population and affluence with dematerialization (less energy per unit of GDP) and decarbonization (less carbon per unit of energy). To test the feasibility of meeting this challenge, we analysed carbon dioxide emissions during 1993–2004. Although emissions in the entire Union grew by an average of only 0.31% per year, emissions and their drivers varied markedly among the 27 member states. Dematerialization and decarbonization did occur, but not enough to offset the slight population growth plus rapidly increasing affluence. To fulfil its obligation in the next 12 years, the EU27 would have to counter its increasing population and affluence with a combined dematerialization and decarbonization 1.9–2.6 times faster than during 1993–2004. Hence, fulfilling the obligation by addressing fossil carbon emissions alone is very unlikely.
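The decomposition behind this analysis is the Kaya identity, emissions = population x (GDP per capita) x (energy per GDP) x (carbon per energy), under which annual growth rates add. A back-of-envelope sketch with illustrative, assumed numbers (not the study's figures):

# All rates are fractions per year and are assumed for illustration only.
pop_growth = 0.004            # population
affluence_growth = 0.022      # GDP per capita
required_emissions = -0.015   # emission decline needed for the target

# emissions growth = population + affluence + energy/GDP + carbon/energy,
# so dematerialization + decarbonization must supply the remainder:
intensity_needed = required_emissions - pop_growth - affluence_growth
print(f"intensity terms must total {intensity_needed:+.1%} per year")  # -4.1%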