86 search results in total (query time: 594 ms)
41.
The authors address sleep staging as a medical decision problem. They develop a model for automated sleep staging by combining signal information, human heuristic knowledge in the form of rules, and a mathematical framework. The EEG/EOG/EMG (electroencephalogram/electrooculogram/electromyogram) events relevant for sleep staging are detected in real time by an existing front-end system and are summarized per minute. These token data are translated and normalized, and constitute the input alphabet to a finite-state machine (automaton). The processed token events are used as partial beliefs in a set of anthropomimetic rules, which encode human knowledge about the occurrence of a particular sleep stage. The Dempster-Shafer theory of evidence weighs the partial beliefs and attributes the minute's sleep stage to the machine state transition that displays the highest final belief. Results are briefly presented.
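The fusion step described above uses Dempster's rule of combination to weigh partial beliefs. Below is a minimal sketch of that rule on its own, assuming a hypothetical two-stage frame (REM vs. N2) and illustrative mass values; it is not the authors' implementation, which operates on rule outputs from the front-end token events.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule, renormalizing by the conflict mass K."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # compatible hypotheses reinforce each other
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # disjoint hypotheses contribute to the conflict K
            conflict += ma * mb
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

# Illustrative frame of discernment: two candidate sleep stages.
REM, N2 = frozenset({"REM"}), frozenset({"N2"})
theta = REM | N2  # the full frame (ignorance)

m_eeg = {REM: 0.6, theta: 0.4}            # hypothetical belief from EEG events
m_emg = {REM: 0.5, N2: 0.2, theta: 0.3}   # hypothetical belief from EMG events

fused = dempster_combine(m_eeg, m_emg)    # highest final belief picks the stage
```

The stage assigned to the minute would then be the hypothesis with the highest fused belief (here REM).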
42.
Feature extraction using information-theoretic learning | Cited by: 3 (self-citations: 0, external citations: 3)
Hild KE, Erdogmus D, Torkkola K, Principe JC 《IEEE Transactions on Pattern Analysis and Machine Intelligence》2006,28(9):1385-1392
A classification system typically consists of both a feature extractor (preprocessor) and a classifier. These two components can be trained either independently or simultaneously. The former option has an implementation advantage, since the extractor need only be trained once for use with any classifier, whereas the latter has an advantage since it can be used to minimize classification error directly. Certain criteria, such as minimum classification error, are better suited for simultaneous training, whereas other criteria, such as mutual information, are amenable to training the feature extractor either independently or simultaneously. Herein, an information-theoretic criterion is introduced and is evaluated for training the extractor independently of the classifier. The proposed method uses nonparametric estimation of Renyi's entropy to train the extractor by maximizing an approximation of the mutual information between the class labels and the output of the feature extractor. The evaluations show that the proposed method, even though it uses independent training, performs at least as well as three feature extraction methods that train the extractor and classifier simultaneously.
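The estimator underlying this criterion is standard in information-theoretic learning: Renyi's quadratic entropy of a sample can be estimated nonparametrically from pairwise Gaussian kernel evaluations (the so-called information potential). The full mutual-information maximization is more involved; the following is only a minimal 1-D sketch of the entropy estimate, with an arbitrary kernel width chosen for illustration.

```python
import numpy as np

def renyi_quadratic_entropy(x, sigma=0.5):
    """Nonparametric estimate of Renyi's quadratic entropy,
    H2 = -log( (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2) ),
    using a Gaussian Parzen window (1-D samples for simplicity)."""
    x = np.asarray(x, dtype=float).ravel()
    n = x.size
    d = x[:, None] - x[None, :]            # pairwise differences
    var = 2.0 * sigma ** 2                 # kernel convolution doubles the variance
    g = np.exp(-d ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    info_potential = g.sum() / n ** 2      # "information potential" V(X)
    return -np.log(info_potential)

rng = np.random.default_rng(0)
h_tight = renyi_quadratic_entropy(rng.normal(0.0, 0.2, 500))  # concentrated sample
h_wide = renyi_quadratic_entropy(rng.normal(0.0, 2.0, 500))   # dispersed sample
```

As expected, the more dispersed sample yields the larger entropy estimate, which is the property the training criterion exploits.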
43.
Badong Chen, Yu Zhu, Jinchun Hu, Jose C. Principe 《International Journal of Control, Automation and Systems》2011,9(6):1049-1055
In this paper, we propose an optimal adaptive FIR filter in which the step-size and error nonlinearity are simultaneously optimized to maximize the decrease of the mean square deviation (MSD) of the weight error vector at each iteration. The optimal step-size and error nonlinearity are derived, and a variable step-size stochastic information gradient (VS-SIG) algorithm is developed to approximately implement the optimal adaptation. Simulation results indicate that this new algorithm achieves a faster convergence rate and lower misadjustment error in comparison with other adaptive algorithms.
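VS-SIG itself is derived in the paper; as a baseline illustration of the quantities involved (an adaptive FIR filter and the MSD of its weight error vector), here is a plain fixed-step LMS sketch on a hypothetical four-tap plant, not the proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3, 0.2, 0.1])    # unknown FIR plant (hypothetical)
n_taps, n_iter, mu = w_true.size, 5000, 0.01

w = np.zeros(n_taps)                        # adaptive filter weights
x = rng.normal(size=n_iter + n_taps)        # white input signal
msd = []                                    # mean square deviation ||w_true - w||^2
for k in range(n_iter):
    u = x[k:k + n_taps][::-1]               # regressor (most recent sample first)
    d = w_true @ u + 0.01 * rng.normal()    # noisy desired output
    e = d - w @ u                           # a-priori error
    w = w + mu * e * u                      # LMS weight update
    msd.append(np.sum((w_true - w) ** 2))
```

The MSD decays toward a small misadjustment floor; VS-SIG's contribution is to optimize the step-size and error nonlinearity so that this per-iteration decrease is maximized.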
44.
Vector quantization using information theoretic concepts | Cited by: 1 (self-citations: 0, external citations: 1)
Tue Lehn-Schiøler, Anant Hegde, Deniz Erdogmus, Jose C. Principe 《Natural Computing》2005,4(1):39-51
The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms such as the Kohonen self-organizing map (SOM) and the Linde-Buzo-Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that, by considering the processing elements as points moving in a potential field, an algorithm equally as efficient as those mentioned above can be derived. Unlike SOM and LBG, this algorithm has a clear physical interpretation and relies on minimization of a well-defined cost function. It is also shown how the potential-field approach can be linked to information theory by use of the Parzen density estimator. In the light of information theory, it becomes clear that minimizing the free energy of the system is in fact equivalent to minimizing a divergence measure between the distribution of the data and the distribution of the processing elements; hence, the algorithm can be seen as a density-matching method.
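The LBG baseline the paper compares against can be sketched compactly: split each codeword into a perturbed pair, then run Lloyd iterations (nearest-neighbour assignment followed by a centroid update). The cluster data and perturbation size below are illustrative, not from the paper.

```python
import numpy as np

def lbg(data, n_codewords, n_iter=20, eps=0.01):
    """Linde-Buzo-Gray codebook design: successive splitting
    followed by Lloyd (nearest-neighbour / centroid) iterations."""
    codebook = data.mean(axis=0, keepdims=True)      # start with one codeword
    while codebook.shape[0] < n_codewords:
        # split every codeword into a perturbed pair
        codebook = np.vstack([codebook + eps, codebook - eps])
        for _ in range(n_iter):
            # assign each sample to its nearest codeword
            d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(axis=1)
            # move each codeword to the centroid of its cell
            for j in range(codebook.shape[0]):
                if np.any(labels == j):
                    codebook[j] = data[labels == j].mean(axis=0)
    return codebook

rng = np.random.default_rng(2)
# two well-separated clusters; a 2-codeword book should land near each mean
data = np.vstack([rng.normal(-5, 0.3, (200, 2)),
                  rng.normal(5, 0.3, (200, 2))])
codebook = lbg(data, 2)
```

The paper's potential-field algorithm replaces these hard assignments with gradient motion of the processing elements in a Parzen-based cost landscape.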
45.
Hild KE II, Pinto D, Erdogmus D, Principe JC 《IEEE Transactions on Circuits and Systems I: Regular Papers》2005,52(10):2188-2196
A method to perform convolutive blind source separation of super-Gaussian sources by minimizing the mutual information between segments of output signals is presented. The proposed approach is essentially an implementation of an idea previously proposed by Pham. The formulation of mutual information in the proposed criterion makes use of a nonparametric estimator of Renyi's α-entropy, which becomes Shannon's entropy in the limit as α approaches 1. Since α can be any number greater than 0, this produces a family of criteria having an infinite number of members. Interestingly, it appears that Shannon's entropy cannot be used for convolutive source separation with this type of estimator. In fact, only one value of α appears to be appropriate, namely α = 2, which corresponds to Renyi's quadratic entropy. Four experiments are included to show the efficacy of the proposed criterion.
49.
A new unsupervised algorithm is proposed that performs competitive principal component analysis (PCA) of a time series. A set of expert PCA networks compete, through the mixture-of-experts (MOE) formalism, on the basis of their ability to reconstruct the original signal. The resulting network finds an optimal projection of the input onto a reduced-dimensional space as a function of the input and, hence, of time. As a byproduct, the time series is both segmented and identified according to stationary regions. Examples showing the performance of the algorithm are included.
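The competition mechanism can be illustrated with a simplified sketch: fit one PCA "expert" per stationary regime and let the experts compete on reconstruction error for an unseen segment. The two sinusoidal regimes and the subspace dimension below are hypothetical, and the gating and joint-training machinery of the MOE formalism is omitted.

```python
import numpy as np

def pca_basis(segments, n_comp):
    """Principal subspace (rows of Vt) of a stack of signal segments."""
    x = segments - segments.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[:n_comp]

def reconstruction_error(segment, basis):
    """Squared error after projecting a segment onto an expert's subspace."""
    proj = basis.T @ (basis @ segment)
    return float(np.sum((segment - proj) ** 2))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 32)
# two stationary regimes (hypothetical): slow and fast sinusoids in noise
slow = np.stack([np.sin(2 * np.pi * 2 * t + rng.uniform(0, 2 * np.pi))
                 + 0.05 * rng.normal(size=32) for _ in range(50)])
fast = np.stack([np.sin(2 * np.pi * 8 * t + rng.uniform(0, 2 * np.pi))
                 + 0.05 * rng.normal(size=32) for _ in range(50)])

experts = [pca_basis(slow, 2), pca_basis(fast, 2)]  # one PCA expert per regime
probe = np.sin(2 * np.pi * 8 * t + 0.7)             # unseen fast-regime segment
errors = [reconstruction_error(probe, b) for b in experts]
winner = int(np.argmin(errors))                     # expert that reconstructs best
```

The winning expert's index segments the series in time: each segment is labelled by whichever expert reconstructs it with the lowest error.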
50.
Jarchi D, Sanei S, Principe JC, Makkiabadi B 《IEEE Transactions on Biomedical Engineering》2011,58(1):132-143
A novel spatiotemporal filtering method for single-trial estimation of event-related potential (ERP) subcomponents is proposed here. Unlike some previous works in ERP estimation [1], the proposed method is able to estimate temporally correlated ERP subcomponents such as P3a and P3b. A new cost function is therefore defined which can deflate one of the correlated subcomponents. The method is applied to both simulated and real data and has been shown to perform very well even in low signal-to-noise-ratio situations. In addition, the method is compared to spatial principal component analysis, and its superiority has been confirmed using simulated signals. The approach can be especially useful in mental-fatigue analysis, where the relative variability of P300 subcomponents is the key factor in detecting the level of fatigue.