1.
Analysis of respiratory electromyographic (EMG) signals in the study of respiratory control requires the detection of burst activity from background (signal segmentation) and focuses upon the determination of the onset and cessation points of the burst activity (boundary estimation). The authors describe a new automated multiresolution technique for signal segmentation and boundary estimation. During signal segmentation, a new transitional segment is defined which contains the boundary between background and burst activity. Boundary estimation is then performed within this transitional segment: boundary candidates are selected and a probability is attributed to each candidate using an artificial neural network. The final boundary for a given transitional segment is the candidate with the maximum a posteriori probability. The new method has proved accurate when compared to boundaries chosen by two investigators.
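A minimal sketch of the final selection step described above: given candidate boundaries inside a transitional segment and a scorer that assigns each candidate a probability, the candidate with the maximum a posteriori probability wins. The `score_candidate` callable stands in for the paper's trained neural network and is an assumption, as is the toy scorer in the usage example.

```python
import numpy as np

def map_boundary(segment: np.ndarray, candidates: list[int], score_candidate) -> int:
    """Pick the candidate boundary with the highest posterior probability.

    score_candidate(segment, idx) is a placeholder for the trained network
    in the paper; here it just returns a nonnegative score per candidate.
    """
    posteriors = np.array([score_candidate(segment, c) for c in candidates])
    return candidates[int(np.argmax(posteriors))]

# Toy usage: a dummy scorer that favors the largest local change in amplitude.
rng = np.random.default_rng(0)
seg = np.concatenate([0.1 * rng.standard_normal(50), rng.standard_normal(50)])
cands = [40, 45, 50, 55, 60]
dummy = lambda s, i: abs(s[i:i + 5].std() - s[i - 5:i].std())
print(map_boundary(seg, cands, dummy))
```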
2.
We introduce a new graph cut for clustering, which we call the Information Cut. It is derived using Parzen windowing to estimate an information-theoretic distance measure between probability density functions. We propose to optimize the Information Cut using a gradient descent-based approach. Our algorithm has several advantages over many other graph-based methods in terms of determining an appropriate affinity measure, computational complexity, memory requirements, and coping with different data scales. We show that our method may produce clustering and image segmentation results comparable to or better than state-of-the-art graph-based methods.
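As a rough illustration of the kind of quantity being optimized, here is a minimal Parzen-window sketch of a two-cluster cut cost in the spirit of the Information Cut, with a Gaussian kernel. The normalization shown (cross-cluster potential over the geometric mean of the within-cluster potentials) is one common information-theoretic form and is an assumption, not a transcription of the paper's exact expression.

```python
import numpy as np

def _potential(A: np.ndarray, B: np.ndarray, sigma: float) -> float:
    # Sum of Gaussian kernel affinities over all pairs (a, b).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return float(np.exp(-d2 / (2.0 * sigma ** 2)).sum())

def information_cut(X: np.ndarray, labels: np.ndarray, sigma: float = 1.0) -> float:
    """Cross-cluster information potential, normalized by the within-cluster
    potentials (Cauchy-Schwarz style). Smaller means a better two-way cut."""
    A, B = X[labels == 0], X[labels == 1]
    cross = _potential(A, B, sigma)
    return cross / np.sqrt(_potential(A, A, sigma) * _potential(B, B, sigma))
```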
3.
This paper presents a new loss function for neural network classification, inspired by the recently proposed similarity measure called Correntropy. We show that this function essentially behaves like the conventional square loss for samples that are well within the decision boundary and have small errors, and like the L0 (counting) norm for samples that are outliers or are difficult to classify. Depending on the value of the kernel size parameter, the proposed loss function moves smoothly from convex to non-convex and becomes a close approximation to the misclassification loss (the ideal 0–1 loss). We show that the discriminant function obtained by optimizing the proposed loss function in the neighborhood of the ideal 0–1 loss to train a neural network is immune to overfitting, more robust to outliers, and shows consistently better generalization performance than other commonly used loss functions, even after prolonged training. The results also show that it is a close competitor to the SVM. Since the proposed method is compatible with simple gradient-based online learning, it is a practical way of improving the performance of neural network classifiers.
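For concreteness, a minimal sketch of a Correntropy-induced loss of the kind described: a Gaussian kernel applied to the error, so that small errors behave quadratically and large errors saturate toward a constant, approximating a 0–1 count. The scaling constant and default kernel size below are illustrative choices, not the paper's exact parameterization.

```python
import numpy as np

def c_loss(e: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Correntropy-style loss: ~ e^2 / (2 sigma^2) for small |e|,
    saturating to a constant (a smooth 0-1 count) for large |e|."""
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma ** 2)))  # so c_loss(1) == 1
    return beta * (1.0 - np.exp(-e ** 2 / (2.0 * sigma ** 2)))
```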
4.
5.
Deep learning systems aim to use hierarchical models to learn high-level features from low-level features, and progress in this area has been rapid in recent years. The robustness of learning systems with deep architectures, however, is rarely studied and needs further investigation. In particular, the mean square error (MSE), a commonly used optimization cost function in deep learning, is rather sensitive to outliers (or impulsive noise). Robust methods are needed to improve learning performance and to suppress the harmful influence of outliers, which are pervasive in real-world data. In this paper, we propose an efficient and robust deep learning model based on stacked auto-encoders and a Correntropy-induced loss function (CLF), called CLF-based stacked auto-encoders (CSAE). CLF, as a nonlinear measure of similarity, is robust to outliers and can approximate different norms (from \(l_0\) to \(l_2\)) of the data; essentially, CLF is an MSE in a reproducing kernel Hilbert space. Unlike conventional stacked auto-encoders, which in general use the MSE as the reconstruction loss and the KL divergence as the sparsity penalty term, both the reconstruction loss and the sparsity penalty term in CSAE are built with CLF. The fine-tuning procedure in CSAE is also based on CLF, which further enhances learning performance. The excellent and robust performance of the proposed model is confirmed by simulation experiments on the MNIST benchmark dataset.
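A minimal sketch of how a correntropy-style term could replace the MSE as an auto-encoder reconstruction loss, illustrating the robustness claim; the kernel size and the toy comparison are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def clf_reconstruction(x: np.ndarray, x_hat: np.ndarray, sigma: float = 1.0) -> float:
    """Correntropy-style reconstruction loss: mean over coordinates of
    1 - exp(-err^2 / (2 sigma^2)). Behaves like a scaled MSE for small
    errors but caps the contribution of any outlying coordinate at 1."""
    err = x - x_hat
    return float(np.mean(1.0 - np.exp(-err ** 2 / (2.0 * sigma ** 2))))

# Toy comparison: 5 wildly wrong coordinates out of 100.
x = np.zeros(100)
x_hat = x.copy()
x_hat[:5] = 50.0
print(np.mean((x - x_hat) ** 2))   # MSE is dominated by the outliers: 125.0
print(clf_reconstruction(x, x_hat))  # CLF stays bounded: ~0.05
```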
6.
Recent publications have proposed various information-theoretic learning (ITL) criteria based on Renyi's quadratic entropy with nonparametric kernel-based density estimation as alternative performance metrics for both supervised and unsupervised adaptive system training. These metrics, based on entropy and mutual information, take into account higher-order statistics, unlike the mean-square error (MSE) criterion. The drawback of these information-based metrics is their increased computational complexity, which underscores the importance of efficient training algorithms. In this paper, we examine familiar advanced-parameter search algorithms and propose modifications that allow training of systems with these ITL criteria. The well-known algorithms tailored here for ITL include various improved gradient-descent methods, conjugate gradient approaches, and the Levenberg-Marquardt (LM) algorithm. Sample problems and metrics are presented to illustrate the computational efficiency attained by the proposed algorithms.
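A minimal sketch of the kind of criterion these algorithms minimize: Renyi's quadratic entropy of the error signal, estimated by Parzen windowing with Gaussian kernels. The pairwise double sum below is the standard O(N^2) estimator, which is exactly the computational burden that motivates efficient training algorithms.

```python
import numpy as np

def quadratic_renyi_entropy(e: np.ndarray, sigma: float = 1.0) -> float:
    """H2 = -log V(e), where V is the 'information potential': the mean of
    Gaussian kernel evaluations over all N^2 error pairs (Parzen estimate
    with kernel width sigma; normalization constants are dropped, which
    only shifts H2 by a constant and does not affect optimization)."""
    d2 = (e[:, None] - e[None, :]) ** 2
    V = np.mean(np.exp(-d2 / (4.0 * sigma ** 2)))  # effective width sigma*sqrt(2)
    return float(-np.log(V))
```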
7.
Principal components analysis is an important and well-studied subject in statistics and signal processing. Several algorithms for solving this problem exist, and most can be grouped into one of three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (such as reconstruction error or output variance), and fixed-point update rules with deflation. In this study, we propose an alternative approach that avoids deflation and gradient-search techniques. The proposed method is an on-line procedure based on recursively updating the eigenvector and eigenvalue matrices with every new sample, such that the estimates approximately track the true values that would be calculated analytically from the current sample estimate of the data covariance matrix. The perturbation technique is shown theoretically to be applicable to recursive canonical correlation analysis as well. The performance of this algorithm is compared with that of a structurally similar matrix perturbation-based method and with a few traditional methods, including Sanger's rule and APEX.
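To fix ideas, a rough sketch of a first-order matrix-perturbation update of tracked eigenpairs after a rank-one covariance update with a new sample. This is the textbook first-order perturbation correction, stated here as an assumption about the general approach, not the paper's exact recursion; it assumes distinct eigenvalues.

```python
import numpy as np

def perturb_eig_update(V: np.ndarray, lam: np.ndarray, x: np.ndarray,
                       alpha: float = 0.01):
    """First-order perturbation update of eigenvectors V (columns) and
    eigenvalues lam after C <- (1 - alpha) C + alpha x x^T (new sample x).
    Assumes the eigenvalues in lam are distinct."""
    y = V.T @ x                                  # sample in the eigenbasis
    lam_new = (1.0 - alpha) * lam + alpha * y ** 2
    V_new = V.copy()
    n = len(lam)
    for i in range(n):
        for j in range(n):
            if j != i:
                # standard first-order eigenvector correction term
                V_new[:, i] += (alpha * y[i] * y[j] /
                                ((1.0 - alpha) * (lam[i] - lam[j]))) * V[:, j]
    # re-normalize columns to keep the basis approximately orthonormal
    V_new /= np.linalg.norm(V_new, axis=0)
    return V_new, lam_new
```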
8.
Generalized information potential criterion for adaptive system training
We have previously proposed quadratic Renyi's error entropy as an alternative cost function for supervised adaptive system training. An entropy criterion minimizes the average information content of the error signal rather than merely its energy. In this paper, we propose a generalization of the error entropy criterion that enables the use of any order of Renyi's entropy and any suitable kernel function in density estimation. It is shown that the proposed entropy estimator preserves the global minimum of the actual entropy. The equivalence between global optimization by convolution smoothing and convolution by the kernel in Parzen windowing is also discussed. Simulation results for time-series prediction and classification experimentally demonstrate all of the theoretical concepts.
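To make the generalization concrete, a minimal sketch of an order-alpha Renyi entropy estimator built from Parzen windowing, which reduces to the quadratic case at alpha = 2. The estimator form below is a common one in the ITL literature and is stated as an assumption rather than a transcription of the paper.

```python
import numpy as np

def renyi_entropy(e: np.ndarray, alpha: float = 2.0, sigma: float = 1.0) -> float:
    """H_alpha = log(V_alpha) / (1 - alpha), with information potential
    V_alpha = mean_i [ mean_j k(e_i - e_j) ]^(alpha - 1), Gaussian kernel k
    of width sigma (normalization constants dropped; they only shift H)."""
    assert alpha != 1.0, "alpha = 1 is the Shannon limit, handled separately"
    d2 = (e[:, None] - e[None, :]) ** 2
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    f_hat = k.mean(axis=1)                # Parzen density estimate at each e_i
    V = np.mean(f_hat ** (alpha - 1.0))
    return float(np.log(V) / (1.0 - alpha))
```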
9.
An experience is reported with solitary necrotic nodules of the liver, rare benign lesions first described in 1983. Two patients were referred to our department because of hepatic lesions that radiology suggested were secondary liver tumors. At laparotomy, both patients underwent liver resection because the lesions appeared to be malignant. Subsequent histological examination of the surgical specimens revealed that both were solitary necrotic nodules of the liver. These were the only solitary necrotic nodules found in a total of 840 operations carried out in our department since October 1981. Although completely benign, solitary necrotic nodules have ultrasound and radiological features similar to those of metastases and have been described in the literature during the follow-up of patients with other tumors. Uncertainty remains as to the etiology of these lesions, which still represent an occasional finding in liver surgery.
10.
In the design of brain-machine interface (BMI) algorithms, the activity of hundreds of chronically recorded neurons is used to reconstruct a variety of kinematic variables. A significant problem introduced by the use of neural ensemble inputs for model building is the explosion in the number of free parameters. Large models not only affect model generalization but also impose a computational burden on computing an optimal solution, especially when the goal is to implement the BMI in low-power, portable hardware. In this paper, three methods are presented for quantitatively rating the importance of neurons in neural-to-motor mapping: single-neuron correlation analysis, sensitivity analysis through a vector linear model, and, for comparison purposes, a model-independent cellular directional tuning analysis. Although the rankings are not identical, up to sixty percent of the top-10-ranked cells were in common. This set can then be used to determine a reduced-order model whose performance is similar to that of the full ensemble. It is further shown that by pruning the initial ensemble neural input according to the ranked importance of cells, reduced sets of cells (between 40 and 80, depending on the method) can be found that exceed the BMI performance levels of the full ensemble. A sketch of the simplest ranking method follows.
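A minimal sketch of the first of the three ranking methods, single-neuron correlation analysis: score each neuron by the magnitude of its correlation with a kinematic variable and keep the top k. The choice of kinematic variable and the absence of lag handling are illustrative simplifications.

```python
import numpy as np

def rank_neurons_by_correlation(rates: np.ndarray, kinematics: np.ndarray,
                                k: int = 10) -> np.ndarray:
    """rates: (T, N) binned firing rates; kinematics: (T,) one kinematic
    variable (e.g., hand x-velocity). Returns indices of the k neurons
    with the largest |Pearson correlation| to the kinematic variable."""
    r = rates - rates.mean(axis=0)
    v = kinematics - kinematics.mean()
    denom = r.std(axis=0) * v.std() + 1e-12   # guard against silent neurons
    corr = (r * v[:, None]).mean(axis=0) / denom
    return np.argsort(-np.abs(corr))[:k]
```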