Similar Literature
20 similar records found
1.
Yang C  Olson B  Si J 《Neural computation》2011,23(1):215-250
Extracellular chronic recordings have been used as important evidence in neuroscientific studies to unveil the fundamental neural network mechanisms in the brain. Spike detection is the first step in analyzing recorded neural waveforms to decipher useful information and provide useful signals for brain-machine interface applications. Spike detection extracts action potentials from recordings that are often compounded with noise from different sources. This study proposes a new detection algorithm that leverages a technique from wavelet-based image edge detection. It exploits the correlation between wavelet coefficients at different sampling scales to create a robust spike detector. The algorithm has one tuning parameter, which potentially reduces the subjectivity of detection results. Both artificial benchmark data sets and real neural recordings are used to evaluate the detection performance of the proposed algorithm. Compared with other detection algorithms, the proposed method has comparable or better detection performance. In this letter, we also demonstrate its potential for real-time implementation.
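The scale-correlation idea can be sketched as follows: band-pass coefficients computed at several scales are multiplied pointwise, so responses that persist across scales (spikes) survive while uncorrelated noise is suppressed. This is an illustrative sketch using a Ricker (Mexican-hat) kernel and a median-based threshold, not the authors' exact detector; the scales and the single multiplier `k` are assumed values.

```python
import numpy as np

def multiscale_detect(signal, widths=(2, 4, 8), k=4.0):
    """Detect spikes by multiplying wavelet-like coefficients across scales:
    true spikes correlate across scales, noise does not.
    Sketch of the idea only; `widths` and `k` are illustrative choices."""
    prod = np.ones_like(signal, dtype=float)
    for w in widths:
        t = np.arange(-3 * w, 3 * w + 1)
        # Ricker (Mexican-hat) kernel at scale w, normalised to zero mean
        kern = (1 - (t / w) ** 2) * np.exp(-t ** 2 / (2 * w ** 2))
        kern -= kern.mean()
        prod *= np.convolve(signal, kern, mode="same")
    prod = np.abs(prod)
    # single tuning parameter k, echoing the paper's one-parameter design
    thr = k * np.median(prod) / 0.6745
    return np.where(prod > thr)[0]
```

Because the noise coefficients at different scales are largely uncorrelated, their product stays near zero, while a spike produces a large product at every scale it spans.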

2.
L1-Norm Locally Linear Embedding   Cited by 1 (0 self-citations)
Dimensionality reduction arises in many information-processing fields, including machine learning, pattern recognition, and data mining. Locally linear embedding (LLE) is an unsupervised nonlinear manifold-learning algorithm for dimensionality reduction that is widely used thanks to its good performance. To address the sensitivity of conventional LLE to outliers (or noise), a robust LLE algorithm based on L1-norm minimization (L1-LLE) is proposed. The local reconstruction matrix is obtained by L1-norm minimization, which reduces the energy of the reconstruction matrix and effectively suppresses the influence of outliers (or noise). Built on existing optimization techniques, L1-LLE is simple and easy to implement, and its convergence is proved. Tests on both synthetic and real data sets, with performance compared against conventional LLE, show that L1-LLE is stable and effective.

3.
A method for extracting single-unit spike trains from extracellular recordings containing the activity of several simultaneously active cells is presented. The technique is particularly effective when spikes overlap temporally. It is capable of identifying the exact number of neurons contributing to a recording and of creating reliable spike templates. The procedure is based on fuzzy clustering and its performance is controlled by minimizing a cluster-validity index which optimizes the compactness and separation of the identified clusters. Application examples with synthetic spike trains generated from real spikes and segments of background noise show the advantage of the fuzzy method over conventional template-creation approaches in a wide range of signal-to-noise ratios.

4.
In this paper, we develop a semi-supervised regression algorithm to analyze data sets which contain both categorical and numerical attributes. This algorithm partitions the data sets into several clusters and at the same time fits a multivariate regression model to each cluster. This framework allows one to incorporate both multivariate regression models for numerical variables (supervised learning methods) and k-mode clustering algorithms for categorical variables (unsupervised learning methods). The estimates of regression models and k-mode parameters can be obtained simultaneously by minimizing a function which is the weighted sum of the least-square errors in the multivariate regression models and the dissimilarity measures among the categorical variables. Both synthetic and real data sets are presented to demonstrate the effectiveness of the proposed method.
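A minimal sketch of such an alternating scheme, assuming integer-coded categorical attributes, a per-cluster linear model, and a simple mismatch count as the categorical dissimilarity; the weight `lam`, the initialization, and the degenerate-cluster fallback are illustrative choices, not the paper's:

```python
import numpy as np

def cluster_regress(Xnum, Xcat, y, k=2, lam=1.0, iters=20, seed=0):
    """Alternate between (a) assigning each point to the cluster whose
    regression model and categorical modes fit it best and (b) refitting
    both. Per-point cost: squared residual + lam * #categorical mismatches."""
    rng = np.random.default_rng(seed)
    n = len(y)
    labels = rng.integers(0, k, n)
    Xa = np.column_stack([Xnum, np.ones(n)])   # add intercept column
    for _ in range(iters):
        betas, modes = [], []
        for c in range(k):
            m = labels == c
            if m.sum() < Xa.shape[1]:          # keep cluster non-degenerate
                m = rng.random(n) < 0.5
            beta, *_ = np.linalg.lstsq(Xa[m], y[m], rcond=None)
            betas.append(beta)
            # column-wise mode of the integer-coded categorical attributes
            modes.append([np.bincount(col).argmax() for col in Xcat[m].T])
        cost = np.stack([
            (y - Xa @ betas[c]) ** 2 + lam * (Xcat != modes[c]).sum(axis=1)
            for c in range(k)
        ])
        labels = cost.argmin(axis=0)
    return labels, betas, modes
```

Each sweep decreases (or keeps) the weighted objective, mirroring the paper's joint minimization of least-square errors and categorical dissimilarities.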

5.
徐伟  冷静 《计算机应用与软件》2021,38(3):314-318,333
To reduce the false-alarm rate of network intrusion detection systems, a hybrid detection method is proposed in which the artificial bee colony (ABC) algorithm performs feature extraction and the XGBoost algorithm performs feature classification and evaluation. Different scenarios and attack types are selected and defined, and a hybrid network topology is designed. After preprocessing, features are extracted from the data with the ABC algorithm, and the features to be evaluated are classified with XGBoost, yielding an optimal feature subset that is then used for network anomaly detection. Experiments on several public data sets show that the hybrid method outperforms other methods in accuracy and detection rate, with low time and space complexity and thus high detection efficiency.

6.
A Split-Based Gene Selection Algorithm   Cited by 1 (0 self-citations)
Gene expression data consist of thousands of genes but only tens of samples, so effective gene selection algorithms are an important topic in gene expression research. Rough sets are an effective tool for removing redundant features; however, for data with thousands of features and only tens of samples, existing rough-set-based feature selection algorithms become very inefficient. This paper therefore applies a splitting technique to feature selection and proposes a split-based feature selection algorithm: a complex decision table is split into a simpler, more tractable main table and sub-tables, and their results are then combined to solve the original table. Experimental results show that the algorithm markedly improves computational efficiency while preserving classification accuracy.
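As a baseline for what the splitting is meant to accelerate, the rough-set dependency degree and a plain greedy reduct can be sketched as follows; this is the conventional rough-set approach, not the paper's split-based algorithm, and the dict-of-rows representation is an assumption for illustration:

```python
from collections import defaultdict

def dependency(rows, attrs, decision):
    """Rough-set dependency degree: fraction of rows whose equivalence
    class under `attrs` is consistent w.r.t. the decision attribute."""
    groups = defaultdict(set)
    for row in rows:
        groups[tuple(row[a] for a in attrs)].add(row[decision])
    consistent = {k for k, dec in groups.items() if len(dec) == 1}
    pos = sum(1 for row in rows
              if tuple(row[a] for a in attrs) in consistent)
    return pos / len(rows)

def greedy_reduct(rows, attrs, decision):
    """Add attributes greedily until the dependency degree matches that
    of the full attribute set (a standard baseline reduct search)."""
    full = dependency(rows, attrs, decision)
    chosen = []
    while dependency(rows, chosen, decision) < full:
        best = max((a for a in attrs if a not in chosen),
                   key=lambda a: dependency(rows, chosen + [a], decision))
        chosen.append(best)
    return chosen
```

On gene-expression-scale tables this search re-partitions all samples for every candidate attribute, which is exactly the cost the paper's main-table/sub-table decomposition aims to avoid.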

7.
Long-Strip Data Processing for LANDSAT-8 Satellite Imagery   Cited by 1 (1 self-citation)
Objective: Since the launch of LANDSAT-8, the United States Geological Survey (USGS) has released standard-scene products under the WRS (World Reference System) tiling scheme, each covering a relatively small area. To meet the need of regional remote-sensing applications for long-strip satellite imagery with much larger coverage, a long-strip data processing method is proposed. Method: The processing comprises three stages, preprocessing, radiometric correction, and geometric correction, including radiometric-consistency correction across the strip and refinement of attitude and orbit data, which together solve the key technical problems of long-strip image processing. Results: In experiments on three LANDSAT-8 data sets, the long-strip images produced by the method show improved radiometric uniformity, overall accuracy comparable to single standard-scene products, and 45.9% higher efficiency than the conventional scene-by-scene processing and mosaicking workflow. Conclusion: The proposed LANDSAT-8 long-strip processing method yields high-quality long-strip images with high efficiency and can effectively meet the data needs of large-area remote-sensing applications.

8.
Overlap degree is proposed to characterize the distance between fuzzy clusters. Since the total overlap of a fuzzy partition tends to increase monotonically with the number of clusters, a cluster-validity function based on the overlap-degree increment is proposed: the optimal number of clusters is determined by the maximum of the increment, which overcomes the monotonicity problem of traditional validity functions while remaining simple to compute. Built on the fuzzy C-means (FCM) clustering algorithm, its performance is analyzed on several test data sets and compared in depth with widely used, representative validity functions. Simulation results demonstrate the effectiveness and superiority of the proposed function.

9.
Cluster analysis is a key technique in data mining with a variety of concrete uses. For large-scale data sets, conventional clustering methods traverse all sample data and are computationally inefficient. To address this, the K-medoids clustering method is studied: an optimization model is first built and, after relaxing the value range of its random variables, a gradient algorithm is used to find the medoids; the optimization model is then decomposed to obtain a K-medoid update algorithm for time-series data. Applying the method to cluster-medoid analysis of cable condition-monitoring data to identify outliers in the collected data shows that the proposed method is highly effective.

10.
In this paper, a novel unsupervised dimensionality reduction algorithm, unsupervised Globality-Locality Preserving Projections in Transfer Learning (UGLPTL), is proposed, based on the conventional Globality-Locality Preserving dimensionality reduction algorithm (GLPP), which does not work well in real-world Transfer Learning (TL) applications. In TL applications, one application (the source domain) contains sufficient labeled data, while the related application (the target domain) contains only unlabeled data. Compared to existing TL methods, the proposed method incorporates all the objectives essential for transfer learning: minimizing the marginal and conditional distributions between the two domains, maximizing the variance of the target domain, and performing geometrical diffusion on manifolds. UGLPTL seeks a projection vector that maps the source- and target-domain data into a common subspace where both the labeled source data and the unlabeled target data can be used to perform dimensionality reduction. Comprehensive experiments verify that the proposed method outperforms many state-of-the-art non-transfer and transfer learning methods on two popular real-world cross-domain visual transfer learning data sets. The proposed UGLPTL approach achieved mean accuracies of 82.18% and 87.14% over all the tasks of the PIE Face and Office-Caltech data sets, respectively.

11.
Fast and Efficient Feature Learning via Sparse Autoencoders and Softmax Regression   Cited by 1 (0 self-citations)
To balance feature-learning quality against training time, a fast and efficient feature learning method is proposed. A sparse autoencoder and Softmax regression are combined into a single feature-extraction model: on top of the latent information extracted from the raw images, the method exploits the fact that the outputs of a multi-class classifier reflect the similarity of its inputs, and thus learns classification-friendly feature vectors quickly and efficiently. Because label information is used, the algorithm clearly outperforms several typical feature learning methods in image classification. To improve generalization, an L2-norm term is added to the loss function of the regression model to prevent overfitting, and the optimal model parameters are obtained by stochastic gradient descent. Tests on four standard data sets show that the algorithm is effective and feasible.
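The classifier stage described here, Softmax regression with an L2 penalty trained by stochastic gradient descent, can be sketched as follows; the sparse-autoencoder feature extraction is omitted, and the learning rate, penalty strength, and epoch count are assumed values:

```python
import numpy as np

def train_softmax(X, y, n_classes, lr=0.1, l2=1e-3, epochs=200, seed=0):
    """Multinomial (Softmax) regression trained by SGD with an L2 penalty
    on the weights, as the abstract describes for the classifier stage."""
    rng = np.random.default_rng(seed)
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            z = X[i] @ W + b
            p = np.exp(z - z.max())        # numerically stable softmax
            p /= p.sum()
            p[y[i]] -= 1.0                 # gradient of cross-entropy w.r.t. z
            W -= lr * (np.outer(X[i], p) + l2 * W)
            b -= lr * p
    return W, b

def predict(X, W, b):
    return np.argmax(X @ W + b, axis=1)
```

The `l2 * W` term is the weight-decay gradient of the L2 penalty the abstract adds to the loss to prevent overfitting.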

12.
Dimensionality reduction is a great challenge in processing high-dimensional unlabelled data. Existing dimensionality reduction methods typically rely on a similarity matrix and a spectral clustering algorithm. However, noise in the original data often makes the similarity matrix unreliable and degrades clustering performance. Moreover, existing spectral clustering methods focus only on local structures and ignore global discriminative information, which may lead to overfitting in some cases. To address these issues, a novel unsupervised 2-dimensional dimensionality reduction method is proposed in this paper, which incorporates similarity matrix learning and global discriminant information into the dimensionality reduction procedure. In particular, the number of connected components in the learned similarity matrix equals the number of clusters. We compare the proposed method with several 2-dimensional unsupervised dimensionality reduction methods and evaluate clustering performance with K-means on several benchmark data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.

13.
A simple and fast algorithm for K-medoids clustering   Cited by 1 (0 self-citations)
This paper proposes a new algorithm for K-medoids clustering that runs like the K-means algorithm, and tests several methods for selecting initial medoids. The proposed algorithm calculates the distance matrix once and reuses it to find new medoids at every iterative step. To evaluate the proposed algorithm, we use real and artificial data sets and compare the results with those of other algorithms in terms of the adjusted Rand index. Experimental results show that the proposed algorithm requires significantly less computation time while achieving performance comparable to partitioning around medoids.
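The idea translates to a short sketch: compute the distance matrix once, then alternate K-means-style assignment and medoid-update steps. Random initialization is used here for simplicity; the paper also compares other initial-medoid selection methods.

```python
import numpy as np

def simple_kmedoids(D, k, iters=100, seed=0):
    """K-medoids in the K-means style: the full distance matrix D is
    computed once by the caller and reused; each step reassigns points to
    the nearest medoid, then moves each medoid to the cluster member that
    minimises the total within-cluster distance."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(D.shape[0], k, replace=False)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)
        new = []
        for c in range(k):
            members = np.where(labels == c)[0]
            sub = D[np.ix_(members, members)]
            new.append(members[np.argmin(sub.sum(axis=0))])
        new = np.array(new)
        if np.array_equal(np.sort(new), np.sort(medoids)):
            break                      # medoids stable: converged
        medoids = new
    return labels, medoids
```

Because `D` is never recomputed, each iteration only indexes into the precomputed matrix, which is the source of the speedup the abstract reports over PAM.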

14.
Choosing appropriate classification algorithms for a given data set is very important and useful in practice, but also challenging. In this paper, a method for recommending classification algorithms is proposed. First, feature vectors of data sets are extracted using a novel method, and the performance of classification algorithms on those data sets is evaluated. Then the feature vector of a new data set is extracted and its k nearest data sets are identified. Finally, the classification algorithms of the nearest data sets are recommended for the new data set. The proposed data-set feature extraction method uses structural and statistical information to characterize data sets, which is quite different from existing methods. To evaluate the proposed recommendation method and feature extraction method, extensive experiments with 17 different types of classification algorithms, three different types of data-set characterization methods, and all possible numbers of nearest data sets are conducted on 84 publicly available UCI data sets. The results indicate that the proposed method is effective and can be used in practice.
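The recommendation step can be sketched as follows, assuming meta-feature vectors have already been extracted for the known data sets; the Euclidean distance and majority vote used here are illustrative, whereas the paper's characterization uses its own structural and statistical features:

```python
import numpy as np

def recommend(meta_new, meta_known, best_algo, k=3):
    """Recommend algorithms for a new data set: find the k data sets whose
    meta-feature vectors are nearest to the new one and return their
    best-performing algorithms, most frequent first."""
    d = np.linalg.norm(meta_known - meta_new, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [best_algo[i] for i in nearest]
    return sorted(set(votes), key=votes.count, reverse=True)
```

In practice `meta_known` would hold one row per benchmark data set (e.g. the 84 UCI data sets) and `best_algo` the algorithm that performed best on each.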

15.
Outliers and gross errors in training data sets can seriously deteriorate the performance of traditional supervised feedforward neural network learning algorithms. This is why several learning methods that are, to some extent, robust to outliers have been proposed. In this paper we present a new robust learning algorithm based on the iterative Least Median of Squares that outperforms some existing solutions in accuracy or speed. We demonstrate how to minimise the new non-differentiable performance function by a deterministic approximate method. Results of simulations and comparisons with other learning methods are presented, and the improved robustness of the novel algorithm on data sets with varying degrees of outliers is shown.
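The Least Median of Squares criterion is easiest to see for a linear model: fit many random minimal subsets and keep the coefficients whose *median* squared residual over all points is smallest, so up to roughly half the points can be gross outliers. This sketch uses the classic random-subsampling scheme on plain regression; the paper applies the criterion to feedforward-network training with a deterministic approximate minimiser:

```python
import numpy as np

def lms_fit(X, y, n_trials=500, seed=0):
    """Least Median of Squares for a linear model via random minimal
    subsets: keep the fit minimising the median squared residual."""
    rng = np.random.default_rng(seed)
    Xa = np.column_stack([X, np.ones(len(y))])   # add intercept
    p = Xa.shape[1]
    best_beta, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(y), p, replace=False)
        beta, *_ = np.linalg.lstsq(Xa[idx], y[idx], rcond=None)
        med = np.median((y - Xa @ beta) ** 2)
        if med < best_med:
            best_med, best_beta = med, beta
    return best_beta
```

Unlike least squares, the median is unaffected by arbitrarily large residuals on a minority of points, which is the robustness property the paper carries over to network training.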

16.
Recent advances in the technology of multiunit recordings make it possible to test Hebb's hypothesis that neurons do not function in isolation but are organized in assemblies. This has created the need for statistical approaches to detecting spatiotemporal patterns involving more than two neurons in neuron spike train data. We mention three possible measures for the presence of higher-order patterns of neural activation (coefficients of log-linear models, connected cumulants, and redundancies) and present arguments in favor of the coefficients of log-linear models. We present test statistics for detecting higher-order interactions in spike train data by parameterizing these interactions in terms of coefficients of log-linear models. We also present a Bayesian approach for inferring the existence or absence of interactions and estimating their strength. The two methods, the frequentist and the Bayesian, are shown to be consistent in the sense that interactions detected by either method also tend to be detected by the other. A heuristic for the analysis of temporal patterns is also proposed. Finally, a Bayesian test is presented that establishes stochastic differences between recorded segments of data. The methods are applied to experimental data and to synthetic data drawn from our statistical models. Our experimental data are drawn from multiunit recordings in the prefrontal cortex of behaving monkeys, the somatosensory cortex of anesthetized rats, and the visual cortex of behaving monkeys.

17.
Most clustering algorithms operate by optimizing (either implicitly or explicitly) a single measure of cluster solution quality. Such methods may perform well on some data sets but lack robustness with respect to variations in cluster shape, proximity, evenness and so forth. In this paper, we have proposed a multiobjective clustering technique which optimizes simultaneously two objectives, one reflecting the total cluster symmetry and the other reflecting the stability of the obtained partitions over different bootstrap samples of the data set. The proposed algorithm uses a recently developed simulated annealing-based multiobjective optimization technique, named AMOSA, as the underlying optimization strategy. Here, points are assigned to different clusters based on a newly defined point symmetry-based distance rather than the Euclidean distance. Results on several artificial and real-life data sets in comparison with another multiobjective clustering technique, MOCK, three single objective genetic algorithm-based automatic clustering techniques, VGAPS clustering, GCUK clustering and HNGA clustering, and several hybrid methods of determining the appropriate number of clusters from data sets show that the proposed technique is well suited to detect automatically the appropriate number of clusters as well as the appropriate partitioning from data sets having point symmetric clusters. The performance of AMOSA as the underlying optimization technique in the proposed clustering algorithm is also compared with PESA-II, another evolutionary multiobjective optimization technique.

18.
Spike sorting is the essential step in analyzing recorded spike signals for studying information-processing mechanisms within the nervous system. Overlap is one of the most serious problems in spike sorting for multi-channel recordings. In this paper, a modified radial basis function (RBF) network is proposed to decompose overlapping signals and separate spikes within the same RBF network. A modified radial basis function derived from the Gaussian function is employed to improve the accuracy of overlap decomposition. In addition, the improved construction algorithm reduces the computational cost by exploiting the symmetry of the RBF network. The performance of the presented method is tested at various signal-to-noise ratio levels on simulated data from the University of Leicester and the Wave-clus software. Experimental results show that our method successfully solves the fully overlapping problem and achieves higher accuracy than the plain Gaussian function.

19.
The bubble-size distribution in froth flotation has special characteristics, such as a non-Gaussian shape, left skew, and high peakedness, which conventional analysis methods cannot describe accurately, so faults in the flotation process cannot be reliably detected and diagnosed. A statistical analysis of the output probability density function (PDF) of the bubble-size distribution is proposed, yielding a new fault detection and diagnosis method for flotation processes. A self-designed kernel approximation converts the output PDF into dynamic weights, a nonlinear uncertain weight-dynamics model with time delay is built, and a feasible fault detection and diagnosis algorithm is designed via linear matrix inequalities. Simulation verifies the effectiveness of the algorithm, and its application prospects and advantages in industrial flotation processes are discussed.

20.
Deciphering the electrical activity of individual neurons from multi-unit noisy recordings is critical for understanding complex neural systems. A widely used spike sorting algorithm is evaluated for single-electrode nerve trunk recordings. The algorithm is based on principal component analysis (PCA) for spike feature extraction. In the neuroscience literature it is generally assumed that using the first two, or most commonly three, principal components is sufficient. We estimate the optimal PCA-based feature space by evaluating the algorithm's performance on simulated series of action potentials. A number of modifications are made to the open-source nev2lkit software to enable systematic investigation of the parameter space. We introduce a new metric for clustering error that treats over-clustering as more favorable than under-clustering, as proposed by experimentalists for our data. Both the program patch and the metric are available online. Correlated and white Gaussian noise processes are superimposed to account for biological and artificial jitter in the recordings. We report that employing more than three principal components is in general beneficial for all noise cases considered. Finally, we apply our results to experimental data and verify that sorting with four principal components is in agreement with a panel of electrophysiology experts.
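The feature-extraction and clustering stages can be sketched in plain NumPy: a PCA projection whose component count is the parameter under study, followed by a minimal k-means. This is a generic sketch; nev2lkit's actual pipeline and the paper's clustering-error metric are not reproduced here.

```python
import numpy as np

def pca_features(spikes, n_components):
    """Project spike waveforms (n_spikes x n_samples) onto their first
    principal components via SVD of the centred data matrix."""
    centred = spikes - spikes.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means to cluster the PCA features."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == c].mean(axis=0)
                            if (labels == c).any() else centres[c]
                            for c in range(k)])
    return labels
```

Varying `n_components` (e.g. from 2 up to 4 or more) and re-running the clustering is the experiment the abstract describes.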


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号