Full-text access type
Paid full text | 83 articles |
Free | 3 articles |
Subject classification
Electrical engineering | 2 articles |
Metalworking technology | 2 articles |
Light industry | 3 articles |
Radio electronics | 45 articles |
General industrial technology | 3 articles |
Metallurgical industry | 8 articles |
Automation technology | 23 articles |
Publication year
2022 | 1 article |
2021 | 1 article |
2018 | 1 article |
2016 | 4 articles |
2014 | 1 article |
2013 | 1 article |
2012 | 3 articles |
2011 | 3 articles |
2010 | 2 articles |
2009 | 1 article |
2008 | 1 article |
2007 | 4 articles |
2006 | 8 articles |
2005 | 6 articles |
2004 | 7 articles |
2003 | 1 article |
2002 | 6 articles |
2001 | 3 articles |
2000 | 3 articles |
1999 | 5 articles |
1998 | 7 articles |
1997 | 3 articles |
1996 | 1 article |
1995 | 1 article |
1994 | 2 articles |
1993 | 1 article |
1992 | 1 article |
1991 | 1 article |
1990 | 1 article |
1989 | 1 article |
1986 | 2 articles |
1982 | 1 article |
1981 | 1 article |
1976 | 1 article |
Sort order: 86 results found; search took 0 ms
1.
Haan-Go Choi, J.C. Principe, A.A. Hutchison, J.A. Wozniak 《IEEE transactions on bio-medical engineering》1994,41(3):257-266
Analysis of respiratory electromyographic (EMG) signals in the study of respiratory control requires the detection of burst activity from background (signal segmentation) and focuses upon the determination of onset and cessation points of the burst activity (boundary estimation). The authors describe a new automated multiresolution technique for signal segmentation and boundary estimation. During signal segmentation, a new transitional segment is defined which contains the boundary between background and burst activity. Boundary estimation is then performed within this transitional segment. Boundary candidates are selected, and a probability is attributed to each candidate using an artificial neural network. The final boundary for a given transitional segment is the boundary estimate with the maximum a posteriori probability. This new method has proved accurate when compared to boundaries chosen by two investigators.
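The final selection step described above reduces to a maximum a posteriori choice among candidates. A minimal sketch (in the paper the probabilities come from a trained neural network; here they are supplied directly, and all names are ours):

```python
import numpy as np

def map_boundary(candidates, probabilities):
    """Select the boundary candidate with the maximum a posteriori
    probability. In the paper the probabilities come from a trained
    neural network; here they are supplied directly."""
    return int(np.asarray(candidates)[np.argmax(probabilities)])

# Toy example: three candidate onset samples inside a transitional segment.
print(map_boundary([120, 135, 150], [0.2, 0.7, 0.1]))  # -> 135
```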
2.
Robert Jenssen, Deniz Erdogmus, Jose C. Principe 《Pattern recognition》2007,40(3):796-806
We introduce a new graph cut for clustering, which we call the Information Cut. It is derived using Parzen windowing to estimate an information-theoretic distance measure between probability density functions. We propose to optimize the Information Cut using a gradient-descent-based approach. Our algorithm has several advantages over many other graph-based methods in terms of determining an appropriate affinity measure, computational complexity, memory requirements, and coping with different data scales. We show that our method may produce clustering and image segmentation results comparable to, or better than, those of state-of-the-art graph-based methods.
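The Information Cut is estimated with Parzen windowing; a sketch of one Cauchy-Schwarz-style form, in which the cross-cluster information potential is normalized by the within-cluster potentials (kernel choice, width, and variable names are our assumptions, not taken from the paper):

```python
import numpy as np

def gaussian_affinity(x, y, sigma):
    # Pairwise Gaussian affinities between rows of x and y.
    d = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(d**2, axis=-1) / (2.0 * sigma**2))

def information_cut(X1, X2, sigma=1.0):
    """Cauchy-Schwarz-style cut value between two clusters: cross-cluster
    information potential normalized by within-cluster potentials. A low
    value indicates the two clusters are well separated."""
    V12 = gaussian_affinity(X1, X2, sigma).mean()
    V11 = gaussian_affinity(X1, X1, sigma).mean()
    V22 = gaussian_affinity(X2, X2, sigma).mean()
    return V12 / np.sqrt(V11 * V22)
```

A cluster compared with itself yields a value of 1, while well-separated clusters drive the cut toward 0, which is the quantity the gradient-based optimization would minimize.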
3.
This paper presents a new loss function for neural network classification, inspired by the recently proposed similarity measure called Correntropy. We show that this function essentially behaves like the conventional square loss for samples that are well within the decision boundary and have small errors, and like the L0 or counting norm for samples that are outliers or are difficult to classify. Depending on the value of the kernel size parameter, the proposed loss function moves smoothly from convex to non-convex and becomes a close approximation to the misclassification loss (the ideal 0–1 loss). We show that the discriminant function obtained by optimizing the proposed loss function in the neighborhood of the ideal 0–1 loss function to train a neural network is immune to overfitting, is more robust to outliers, and has consistent and better generalization performance compared to other commonly used loss functions, even after prolonged training. The results also show that it is a close competitor to the SVM. Since the proposed method is compatible with simple gradient-based online learning, it is a practical way of improving the performance of neural network classifiers.
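A minimal sketch of a correntropy-induced loss of the kind described above, assuming a Gaussian kernel and the common normalization that sets the loss to 1 at unit error (these details are our assumptions, not necessarily the paper's exact form):

```python
import numpy as np

def c_loss(e, sigma=1.0):
    """Correntropy-induced loss: behaves like a (scaled) square loss for
    small errors and saturates toward a constant, 0-1-like penalty for
    large ones. beta normalizes the loss to equal 1 at |e| = 1."""
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma**2)))
    return beta * (1.0 - np.exp(-e**2 / (2.0 * sigma**2)))
```

Shrinking `sigma` sharpens the saturation, moving the loss from a convex, square-loss-like shape toward the non-convex 0–1 behavior the abstract describes.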
4.
Nicholas A. Yaraghi, Nicolás Guarín-Zapata, Lessa K. Grunenfelder, Eric Hintsala, Sanjit Bhowmick, Jon M. Hiller, Mark Betts, Edward L. Principe, Jae-Young Jung, Leigh Sheppard, Richard Wuhrer, Joanna McKittrick, Pablo D. Zavattieri, David Kisailus 《Advanced materials (Deerfield Beach, Fla.)》2016,28(32):6835-6844
5.
Liangjun Chen, Hua Qu, Jihong Zhao, Badong Chen, Jose C. Principe 《Neural computing & applications》2016,27(4):1019-1031
Deep learning systems aim at using hierarchical models to learn high-level features from low-level features. Deep learning has made great progress in recent years, but the robustness of learning systems with deep architectures is rarely studied and needs further investigation. In particular, the mean square error (MSE), a commonly used optimization cost function in deep learning, is rather sensitive to outliers (or impulsive noise). Robust methods are needed to improve the learning performance and to immunize against the harmful influence of outliers, which are pervasive in real-world data. In this paper, we propose an efficient and robust deep learning model based on stacked auto-encoders and the Correntropy-induced loss function (CLF), called CLF-based stacked auto-encoders (CSAE). CLF, as a nonlinear measure of similarity, is robust to outliers and can approximate different norms (from \(l_0\) to \(l_2\)) of the data. Essentially, CLF is an MSE in a reproducing kernel Hilbert space. Unlike conventional stacked auto-encoders, which generally use the MSE as the reconstruction loss and the KL divergence as the sparsity penalty term, the reconstruction loss and sparsity penalty term in CSAE are both built with CLF. The fine-tuning procedure in CSAE is also based on CLF, which can further enhance the learning performance. The excellent and robust performance of the proposed model is confirmed by simulation experiments on the MNIST benchmark dataset.
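A sketch of a correntropy-induced reconstruction loss of the kind CSAE builds on, assuming one Gaussian kernel per reconstructed component (the paper's exact formulation may differ):

```python
import numpy as np

def clf_reconstruction_loss(x, x_hat, sigma=1.0):
    """Correntropy-induced reconstruction loss: one Gaussian-kernel
    similarity per component, averaged and turned into a loss. Each
    term saturates at 1, so a single outlier component cannot dominate
    the objective the way it would under MSE."""
    k = np.exp(-(x - x_hat)**2 / (2.0 * sigma**2))
    return np.mean(1.0 - k)
```

With a huge error in one component, the loss stays bounded by the fraction of corrupted components, which is the robustness property the abstract attributes to CLF.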
6.
Advanced search algorithms for information-theoretic learning with kernel-based estimators (total citations: 1; self-citations: 0; citations by others: 1)
Recent publications have proposed various information-theoretic learning (ITL) criteria based on Renyi's quadratic entropy with nonparametric kernel-based density estimation as alternative performance metrics for both supervised and unsupervised adaptive system training. These metrics, based on entropy and mutual information, take into account higher-order statistics, unlike the mean-square error (MSE) criterion. The drawback of these information-based metrics is their increased computational complexity, which underscores the importance of efficient training algorithms. In this paper, we examine familiar advanced parameter-search algorithms and propose modifications to allow training of systems with these ITL criteria. The well-known algorithms tailored here for ITL include various improved gradient-descent methods, conjugate-gradient approaches, and the Levenberg-Marquardt (LM) algorithm. Sample problems and metrics are presented to illustrate the computational efficiency attained by employing the proposed algorithms.
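The Parzen-window estimator of Renyi's quadratic entropy that underlies these ITL criteria can be sketched for 1-D samples as follows (Gaussian kernel assumed; the mean of the pairwise kernels is the so-called information potential):

```python
import numpy as np

def renyi_quadratic_entropy(x, sigma=1.0):
    """H2 = -log V for 1-D samples, where V (the information potential)
    is the mean of pairwise Gaussian kernels. The effective kernel width
    is sqrt(2)*sigma because the Parzen kernel is convolved with itself."""
    d = x[:, None] - x[None, :]
    s2 = 2.0 * sigma**2
    V = np.mean(np.exp(-d**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2))
    return -np.log(V)
```

The O(N^2) pairwise sum is exactly the computational burden the paper's search algorithms are designed to spend efficiently.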
7.
Learning from Examples with Information Theoretic Criteria (total citations: 3; self-citations: 0; citations by others: 3)
Jose C. Principe, Dongxin Xu, Qun Zhao, John W. Fisher III 《The Journal of VLSI Signal Processing》2000,26(1-2):61-77
This paper discusses a framework for learning based on information-theoretic criteria. A novel algorithm based on Renyi's quadratic entropy is used to train, directly from a data set, linear or nonlinear mappers for entropy maximization or minimization. We provide an intriguing analogy between the computation and an information potential measuring the interactions among the data samples. We also propose two approximations to the Kullback-Leibler divergence based on quadratic distances (the Cauchy-Schwarz inequality and the Euclidean distance). These distances can still be computed using the information potential. We test the newly proposed distances in blind source separation (unsupervised learning) and in feature extraction for classification (supervised learning). In blind source separation our algorithm is capable of separating instantaneously mixed sources, and for classification the performance of our classifier is comparable to that of support vector machines (SVMs).
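Both quadratic distances can be estimated from samples through the same information potential; a 1-D sketch with Gaussian kernels (function and variable names are ours):

```python
import numpy as np

def _potential(x, y, sigma):
    # Cross information potential: mean of pairwise Gaussian kernels.
    d = x[:, None] - y[None, :]
    s2 = 2.0 * sigma**2
    return np.mean(np.exp(-d**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2))

def cs_divergence(x, y, sigma=1.0):
    """Cauchy-Schwarz divergence estimate between two 1-D sample sets."""
    Vxy = _potential(x, y, sigma)
    return -np.log(Vxy**2 / (_potential(x, x, sigma) * _potential(y, y, sigma)))

def euclidean_divergence(x, y, sigma=1.0):
    """Quadratic (Euclidean) distance between the two Parzen densities."""
    return (_potential(x, x, sigma) + _potential(y, y, sigma)
            - 2.0 * _potential(x, y, sigma))
```

Both measures vanish when the two sample sets coincide and grow as the underlying densities separate, which is what makes them usable as training criteria.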
8.
Anant Hegde, Jose C. Principe, Deniz Erdogmus, Umut Ozertem, Yadunandana N. Rao, Hemanth Peddaneni 《The Journal of VLSI Signal Processing》2006,45(1-2):85-95
Principal components analysis is an important and well-studied subject in statistics and signal processing. Several algorithms for solving this problem exist and can mostly be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed-point update rules with deflation. In this study, we propose an alternative approach that avoids deflation and gradient-search techniques. The proposed method is an on-line procedure based on recursively updating the eigenvector and eigenvalue matrices with every new sample such that the estimates approximately track their true values as would be calculated analytically from the current sample estimate of the data covariance matrix. The perturbation technique is theoretically shown to be applicable to recursive canonical correlation analysis as well. The performance of this algorithm is compared with that of a structurally similar matrix perturbation-based method and also with a few other traditional methods like Sanger's rule and APEX.
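For reference, the quantities the perturbation update is designed to track can be sketched naively: update the sample second-moment matrix recursively and recompute its eigendecomposition at every step (the paper's contribution is precisely to avoid this full decomposition; names below are ours):

```python
import numpy as np

def track_pca(samples, dim):
    """Recursively update the sample correlation matrix and recompute its
    eigendecomposition after each new sample. These eigenpairs are the
    'true values' that the perturbation-based recursion approximately
    tracks without performing the full decomposition."""
    C = np.zeros((dim, dim))
    eigvals, eigvecs = np.zeros(dim), np.eye(dim)
    for n, x in enumerate(samples, start=1):
        x = np.asarray(x, dtype=float)
        C = ((n - 1) / n) * C + np.outer(x, x) / n   # running estimate
        eigvals, eigvecs = np.linalg.eigh(C)          # ascending order
    return eigvals[::-1], eigvecs[:, ::-1]            # descending order
```

Each step here costs a full O(dim^3) decomposition; a rank-one perturbation update reduces the per-sample cost, which is the point of the proposed method.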
9.
We have previously proposed quadratic Renyi's error entropy as an alternative cost function for supervised adaptive system training. An entropy criterion minimizes the average information content of the error signal rather than merely its energy. In this paper, we propose a generalization of the error-entropy criterion that enables the use of any order of Renyi's entropy and any suitable kernel function in density estimation. It is shown that the proposed entropy estimator preserves the global minimum of the actual entropy. The equivalence between global optimization by convolution smoothing and convolution by the kernel in Parzen windowing is also discussed. Simulation results are presented for time-series prediction and classification, where experimental demonstration of all the theoretical concepts is presented.
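The generalized estimator can be sketched as a plug-in: estimate the error density by Parzen windowing, then evaluate Renyi's order-alpha entropy at the error samples (Gaussian kernel assumed; this resubstitution form is an illustration, not necessarily the paper's exact estimator):

```python
import numpy as np

def renyi_error_entropy(errors, alpha=2.0, sigma=1.0):
    """Renyi's order-alpha entropy of the error signal, estimated by
    evaluating a Parzen density of the errors at each error sample.
    alpha=2 recovers the quadratic case; alpha (!= 1) and the kernel
    size sigma are free parameters, as in the generalized criterion."""
    d = errors[:, None] - errors[None, :]
    kernel = np.exp(-d**2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)
    p_hat = kernel.mean(axis=1)  # density estimate at each error sample
    return np.log(np.mean(p_hat ** (alpha - 1))) / (1.0 - alpha)
```

Training to minimize this quantity concentrates the error distribution, for any admissible order alpha, rather than only shrinking the error energy.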
10.
Iasemidis LD, Shiau DS, Chaovalitwongse W, Sackellares JC, Pardalos PM, Principe JC, Carney PR, Prasad A, Veeramani B, Tsakalis K 《IEEE transactions on bio-medical engineering》2003,50(5):616-627
Current epileptic seizure "prediction" algorithms are generally based on knowledge of seizure occurrence times and analyze the electroencephalogram (EEG) recordings retrospectively. It follows that, although these analyses provide evidence of brain-activity changes prior to epileptic seizures, they cannot be applied to develop implantable devices for diagnostic and therapeutic purposes. In this paper, we describe an adaptive procedure to prospectively analyze continuous, long-term EEG recordings when only the time of the first seizure is known. The algorithm is based on the convergence and divergence of short-term maximum Lyapunov exponents (STLmax) among critical electrode sites selected adaptively, upon which a warning of an impending seizure is issued. Global optimization techniques are applied for selecting the critical groups of electrode sites. The adaptive seizure prediction algorithm (ASPA) was tested on continuous 0.76- to 5.84-day intracranial EEG recordings from a group of five patients with refractory temporal lobe epilepsy. A fixed parameter setting applied to all cases predicted 82% of seizures with a false-prediction rate of 0.16/h. Seizure warnings occurred an average of 71.7 min before ictal onset. Similar results were produced by dividing the available EEG recordings into training and testing halves. Optimizing the parameters for individual patients improved sensitivity (84% overall) and reduced the false-prediction rate (0.12/h overall). These results indicate that ASPA can be applied to implantable devices for diagnostic and therapeutic purposes.
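Convergence ("entrainment") of STLmax profiles between electrode sites is commonly quantified with a paired T-index; a sketch under that assumption (the paper's actual thresholds, window lengths, and site-selection optimization are not reproduced here):

```python
import numpy as np

def t_index(stlmax_a, stlmax_b):
    """Paired T-index between two channels' STLmax profiles over a
    window: |mean absolute difference| scaled by its standard error.
    Small values indicate convergence (entrainment) of the profiles,
    the precursor signature the prediction algorithm monitors."""
    d = np.abs(np.asarray(stlmax_a) - np.asarray(stlmax_b))
    return np.sqrt(len(d)) * d.mean() / d.std(ddof=1)
```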