Similar Documents
 A total of 20 similar documents were found (search time: 234 ms)
1.
Neural receptive fields are dynamic in that with experience, neurons change their spiking responses to relevant stimuli. To understand how neural systems adapt their representations of biological information, analyses of receptive field plasticity from experimental measurements are crucial. Adaptive signal processing, the well-established engineering discipline for characterizing the temporal evolution of system parameters, suggests a framework for studying the plasticity of receptive fields. We use the Bayes' rule Chapman-Kolmogorov paradigm with a linear state equation and point process observation models to derive adaptive filters appropriate for estimation from neural spike trains. We derive point process filter analogues of the Kalman filter, recursive least squares, and steepest-descent algorithms and describe the properties of these new filters. We illustrate our algorithms in two simulated data examples. The first is a study of slow and rapid evolution of spatial receptive fields in hippocampal neurons. The second is an adaptive decoding study in which a signal is decoded from ensemble neural spiking activity as the receptive fields of the neurons in the ensemble evolve. Our results provide a paradigm for adaptive estimation for point process observations and suggest a practical approach for constructing filtering algorithms to track neural receptive field dynamics on a millisecond timescale.
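The abstract describes the filters only at a high level. As a rough, illustrative sketch of the kind of recursive update such a point-process steepest-descent filter performs, the fragment below tracks a drifting receptive-field parameter from a binned spike train under an assumed exponential conditional-intensity model; the learning rate eps, the covariate matrix X, and all parameter values are assumptions for illustration, not the authors' notation or algorithm.

```python
import numpy as np

def steepest_descent_point_process_filter(spikes, X, eps=0.05, dt=0.001):
    """Track a time-varying parameter vector theta of an assumed
    conditional-intensity model lambda_t = exp(X[t] @ theta).

    spikes : (T,) 0/1 spike indicators per time bin
    X      : (T, d) covariate vectors per bin
    eps    : learning rate (an assumed value, not from the paper)
    """
    T, d = X.shape
    theta = np.zeros(d)
    path = np.zeros((T, d))
    for t in range(T):
        lam = np.exp(X[t] @ theta)               # conditional intensity in bin t
        grad = X[t] * (spikes[t] - lam * dt)     # instantaneous log-likelihood gradient
        theta = theta + eps * grad               # steepest-descent update
        path[t] = theta
    return path

# toy usage: baseline log-rate plus a stimulus gain that drifts slowly,
# mimicking a receptive field that evolves with experience
rng = np.random.default_rng(0)
T, dt = 5000, 0.001
X = np.column_stack([np.ones(T), rng.normal(size=T)])
true_theta = np.column_stack([np.full(T, np.log(40.0)), np.linspace(0.2, 1.0, T)])
spikes = (rng.random(T) < np.exp(np.sum(X * true_theta, axis=1)) * dt).astype(float)
print("final estimate [log-rate, gain]:", steepest_descent_point_process_filter(spikes, X)[-1])
```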

2.
A widely used signal processing paradigm is the state-space model. The state-space model is defined by two equations: an observation equation that describes how the hidden state or latent process is observed and a state equation that defines the evolution of the process through time. Inspired by neurophysiology experiments in which neural spiking activity is induced by an implicit (latent) stimulus, we develop an algorithm to estimate a state-space model observed through point process measurements. We represent the latent process modulating the neural spiking activity as a gaussian autoregressive model driven by an external stimulus. Given the latent process, neural spiking activity is characterized as a general point process defined by its conditional intensity function. We develop an approximate expectation-maximization (EM) algorithm to estimate the unobservable state-space process, its parameters, and the parameters of the point process. The EM algorithm combines a point process recursive nonlinear filter algorithm, the fixed interval smoothing algorithm, and the state-space covariance algorithm to compute the complete data log likelihood efficiently. We use a Kolmogorov-Smirnov test based on the time-rescaling theorem to evaluate agreement between the model and point process data. We illustrate the model with two simulated data examples: an ensemble of Poisson neurons driven by a common stimulus and a single neuron whose conditional intensity function is approximated as a local Bernoulli process.
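For orientation only, here is a minimal sketch of the kind of recursive point-process filter such an EM algorithm runs in its E-step, written for a scalar AR(1) latent state observed through Poisson spiking; the Gaussian-approximation update and all parameter values are assumptions for illustration rather than the authors' exact recursions.

```python
import numpy as np

def point_process_filter(spikes, rho=0.98, sigma2=0.01, mu=np.log(20.0),
                         beta=1.0, dt=0.001):
    """Recursive Gaussian-approximation filter for a scalar AR(1) latent
    state x_t observed through Poisson spiking with lambda_t = exp(mu + beta*x_t).
    All parameter values here are illustrative assumptions.
    """
    T = len(spikes)
    x_post, v_post = 0.0, 1.0
    xs = np.zeros(T)
    for t in range(T):
        # prediction step from the AR(1) state equation
        x_pred = rho * x_post
        v_pred = rho ** 2 * v_post + sigma2
        lam = np.exp(mu + beta * x_pred)
        # update step: Gaussian approximation to the point-process posterior
        v_post = 1.0 / (1.0 / v_pred + beta ** 2 * lam * dt)
        x_post = x_pred + v_post * beta * (spikes[t] - lam * dt)
        xs[t] = x_post
    return xs

# toy usage: recover a latent AR(1) process from the spiking it modulates
rng = np.random.default_rng(1)
T, dt = 3000, 0.001
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.98 * x[t - 1] + np.sqrt(0.01) * rng.normal()
spikes = rng.poisson(np.exp(np.log(20.0) + x) * dt)
print("correlation:", np.corrcoef(x, point_process_filter(spikes))[0, 1])
```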

3.
Minimum output mutual information is regarded as a natural criterion for independent component analysis (ICA) and is used as the performance measure in many ICA algorithms. Two common approaches in information-theoretic ICA algorithms are minimum mutual information and maximum output entropy approaches. In the former approach, we substitute some form of probability density function (pdf) estimate into the mutual information expression, and in the latter we incorporate the source pdf assumption in the algorithm through the use of nonlinearities matched to the corresponding cumulative density functions (cdf). Alternative solutions to ICA use higher-order cumulant-based optimization criteria, which are related to either one of these approaches through truncated series approximations for densities. In this article, we propose a new ICA algorithm motivated by the maximum entropy principle (for estimating signal distributions). The optimality criterion is the minimum output mutual information, where the estimated pdfs are from the exponential family and are approximate solutions to a constrained entropy maximization problem. This approach yields an upper bound for the actual mutual information of the output signals - hence, the name minimax mutual information ICA algorithm. In addition, we demonstrate that for a specific selection of the constraint functions in the maximum entropy density estimation procedure, the algorithm relates strongly to ICA methods using higher-order cumulants.

4.
Measures of relevance between features play an important role in classification and regression analysis. Mutual information has been proved an effective measure for decision tree construction and feature selection. However, there is a limitation in computing relevance between numerical features with mutual information due to problems of estimating probability density functions in high-dimensional spaces. In this work, we generalize Shannon's information entropy to neighborhood information entropy and propose a measure of neighborhood mutual information. It is shown that the new measure is a natural extension of classical mutual information which reduces to the classical one if features are discrete; thus the new measure can also be used to compute the relevance between discrete variables. In addition, the new measure introduces a parameter delta to control the granularity in analyzing data. With numerical experiments, we show that neighborhood mutual information produces nearly the same outputs as mutual information. However, unlike mutual information, no discretization is required in computing relevance when the proposed measure is used. We combine the proposed measure with four classes of evaluating strategies used for feature selection. Finally, the proposed algorithms are tested on several benchmark data sets. The results show that neighborhood mutual information based algorithms yield better performance than some classical ones.
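To make the definition concrete, the sketch below computes a neighborhood mutual information of the form NMI = NH(x) + NH(y) - NH(x, y) between one numeric feature and a discrete label, where NH is the neighborhood entropy -(1/n) Σ_i log(|δ(x_i)|/n); the granularity value delta = 0.15 and the min-max scaling step are assumptions for illustration, not the paper's experimental settings.

```python
import numpy as np

def neighborhood_entropy(neigh_counts, n):
    """NH = -(1/n) * sum_i log(|neighborhood of sample i| / n)."""
    return -np.mean(np.log(neigh_counts / n))

def neighborhood_mutual_information(x, y, delta=0.15):
    """Neighborhood MI between a numeric feature x and a discrete label y,
    in the form NMI = NH(x) + NH(y) - NH(x, y).  delta sets the granularity
    (the value used here is an assumption, not the paper's setting)."""
    n = len(x)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)    # scale the feature to [0, 1]
    in_x = np.abs(x[:, None] - x[None, :]) <= delta    # delta-neighborhoods in feature space
    in_y = y[:, None] == y[None, :]                    # equivalence classes of the label
    nh_x = neighborhood_entropy(in_x.sum(axis=1), n)
    nh_y = neighborhood_entropy(in_y.sum(axis=1), n)
    nh_xy = neighborhood_entropy((in_x & in_y).sum(axis=1), n)
    return nh_x + nh_y - nh_xy

# an informative feature should score clearly higher than pure noise
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
x_informative = y + 0.3 * rng.normal(size=200)
x_noise = rng.normal(size=200)
print(neighborhood_mutual_information(x_informative, y),
      neighborhood_mutual_information(x_noise, y))
```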

5.
We set forth an information-theoretical measure to quantify neurotransmission reliability while taking into full account the metrical properties of the spike train space. This parametric information analysis relies on similarity measures induced by the metrical relations between neural responses as spikes flow in. Thus, in order to assess the entropy, the conditional entropy, and the overall information transfer, this method does not require any a priori decoding algorithm to partition the space into equivalence classes. It therefore allows the optimal parameters of a class of distances to be determined with respect to information transmission. To validate the proposed information-theoretical approach, we study precise temporal decoding of human somatosensory signals recorded using microneurography experiments. For this analysis, we employ a similarity measure based on the Victor-Purpura spike train metrics. We show that with appropriate parameters of this distance, the relative spike times of the mechanoreceptors' responses convey enough information to perform optimal discrimination--defined as maximum metrical information and zero conditional entropy--of 81 distinct stimuli within 40 ms of the first afferent spike. The proposed information-theoretical measure proves to be a suitable generalization of Shannon mutual information in order to consider the metrics of temporal codes explicitly. It allows neurotransmission reliability to be assessed in the presence of large spike train spaces (e.g., neural population codes) with high temporal precision.
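Since the analysis relies on the Victor-Purpura spike train metric, a small reference implementation of that distance is sketched below (standard dynamic-programming formulation: moving a spike by dt costs q·|dt|, inserting or deleting a spike costs 1); the cost parameter in the usage line is an arbitrary example, not one of the optimal parameters reported in the paper.

```python
import numpy as np

def victor_purpura_distance(s1, s2, q=1.0):
    """Victor-Purpura distance between two spike trains given as sorted
    lists of spike times (seconds).  q (in 1/s) sets temporal precision:
    shifting a spike by dt costs q*|dt|, inserting or deleting costs 1."""
    n, m = len(s1), len(s2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)          # delete every spike of s1
    D[0, :] = np.arange(m + 1)          # insert every spike of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            shift = q * abs(s1[i - 1] - s2[j - 1])
            D[i, j] = min(D[i - 1, j] + 1,            # delete a spike
                          D[i, j - 1] + 1,            # insert a spike
                          D[i - 1, j - 1] + shift)    # move one spike onto the other
    return D[n, m]

# q = 100 /s: spikes more than ~20 ms apart are cheaper to delete and re-insert than to shift
print(victor_purpura_distance([0.010, 0.025, 0.040], [0.012, 0.043], q=100.0))
```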

6.
Low density parity check codes (LDPC) exhibit near capacity performance in terms of error correction. Large hardware costs, limited flexibility in terms of code length/code rate and considerable power consumption limit the use of belief-propagation algorithm based LDPC decoders in area and energy sensitive mobile environments. Serial bit flipping algorithms offer a trade-off between resource utilization and error correction performance at the expense of an increased number of decoding iterations required for convergence. Parallel weighted bit flipping decoding and its variants aim at reducing the decoding iterations and time by flipping the potentially erroneous bits in parallel. However, in most of the existing parallel decoding methods, the flipping threshold requires complex computations. In this paper, Hybrid Weighted Bit Flipping (HWBF) decoding is proposed to allow multiple bit flips in each decoding iteration. To compute the number of bits that can be flipped in parallel, a criterion for determining the relationship between the erroneous bits in the received code word is proposed. Using the proposed relation, the scheme can detect and correct a maximum of 3 erroneous hard-decision bits in an iteration. The simulation results show that as compared to existing serial bit flipping decoding methods, the number of iterations required for convergence is reduced by 45% and the decoding time is reduced by 40% by the use of the proposed HWBF decoding. As compared to existing parallel bit flipping decoding methods, the proposed HWBF decoding can achieve a similar bit error rate (BER) with the same number of iterations and lower computational complexity. Due to the reduced number of decoding iterations, lower computational complexity and reduced decoding time, the proposed HWBF decoding can be useful in energy sensitive mobile platforms.
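For orientation, the sketch below implements a generic serial weighted bit-flipping decoder, flipping the single most suspicious bit per iteration; the paper's HWBF rule, which flips up to three erroneous bits per iteration using the proposed relationship criterion, is not reproduced here, and the toy parity-check matrix and LLR values are illustrative assumptions.

```python
import numpy as np

def weighted_bit_flipping(H, llr, max_iter=50):
    """Generic serial weighted bit-flipping LDPC decoder: one flip per
    iteration, using check reliabilities derived from the channel LLRs.
    (The paper's HWBF rule, which flips several bits per iteration, is
    defined by its own criterion and is not reproduced here.)

    H : (m, n) parity-check matrix of 0/1 ints, llr : (n,) channel LLRs."""
    x = (llr < 0).astype(int)                       # initial hard decisions
    # reliability of each check = least reliable bit participating in it
    check_rel = np.array([np.abs(llr)[H[m] == 1].min() for m in range(H.shape[0])])
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            break                                   # all checks satisfied
        # flipping metric: unsatisfied checks vote for a flip, satisfied ones against
        E = ((2 * syndrome - 1) * check_rel) @ H
        x[np.argmax(E)] ^= 1                        # flip the most suspicious bit
    return x

# toy usage with a (7, 4) Hamming parity-check matrix and one unreliable bit
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, 1.8, 3.0, -0.4, 1.5, 2.2, 1.1])
print(weighted_bit_flipping(H, llr))                # expected: the all-zero codeword
```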

7.
It remains unclear whether the variability of neuronal spike trains in vivo arises due to biological noise sources or represents highly precise encoding of temporally varying synaptic input signals. Determining the variability of spike timing can provide fundamental insights into the nature of strategies used in the brain to represent and transmit information in the form of discrete spike trains. In this study, we employ a signal estimation paradigm to determine how variability in spike timing affects encoding of random time-varying signals. We assess this for two types of spiking models: an integrate-and-fire model with random threshold and a more biophysically realistic stochastic ion channel model. Using the coding fraction and mutual information as information-theoretic measures, we quantify the efficacy of optimal linear decoding of random inputs from the model outputs and study the relationship between efficacy and variability in the output spike train. Our findings suggest that variability does not necessarily hinder signal decoding for the biophysically plausible encoders examined and that the functional role of spiking variability depends intimately on the nature of the encoder and the signal processing task; variability can either enhance or impede decoding performance.

8.
Efficient Markov chain Monte Carlo methods for decoding neural spike trains
Stimulus reconstruction or decoding methods provide an important tool for understanding how sensory and motor information is represented in neural activity. We discuss Bayesian decoding methods based on an encoding generalized linear model (GLM) that accurately describes how stimuli are transformed into the spike trains of a group of neurons. The form of the GLM likelihood ensures that the posterior distribution over the stimuli that caused an observed set of spike trains is log concave so long as the prior is. This allows the maximum a posteriori (MAP) stimulus estimate to be obtained using efficient optimization algorithms. Unfortunately, the MAP estimate can have a relatively large average error when the posterior is highly nongaussian. Here we compare several Markov chain Monte Carlo (MCMC) algorithms that allow for the calculation of general Bayesian estimators involving posterior expectations (conditional on model parameters). An efficient version of the hybrid Monte Carlo (HMC) algorithm was significantly superior to other MCMC methods for gaussian priors. When the prior distribution has sharp edges and corners, on the other hand, the "hit-and-run" algorithm performed better than other MCMC methods. Using these algorithms, we show that for this latter class of priors, the posterior mean estimate can have a considerably lower average error than MAP, whereas for gaussian priors, the two estimators have roughly equal efficiency. We also address the application of MCMC methods for extracting nonmarginal properties of the posterior distribution. For example, by using MCMC to calculate the mutual information between the stimulus and response, we verify the validity of a computationally efficient Laplace approximation to this quantity for gaussian priors in a wide range of model parameters; this makes direct model-based computation of the mutual information tractable even in the case of large observed neural populations, where methods based on binning the spike train fail. Finally, we consider the effect of uncertainty in the GLM parameters on the posterior estimators.

9.
Over the past few years, there has been a renewed interest in the consensus clustering problem. Several new methods have been proposed for finding a consensus partition for a set of n data objects that optimally summarizes an ensemble. In this paper, we propose new consensus clustering algorithms with linear computational complexity in n. We consider clusterings generated with a random number of clusters, which we describe by categorical random variables. We introduce the idea of cumulative voting as a solution for the problem of cluster label alignment, where, unlike the common one-to-one voting scheme, a probabilistic mapping is computed. We seek a first summary of the ensemble that minimizes the average squared distance between the mapped partitions and the optimal representation of the ensemble, where the selection criterion of the reference clustering is defined based on maximizing the information content as measured by the entropy. We describe cumulative vote weighting schemes and corresponding algorithms to compute an empirical probability distribution summarizing the ensemble. Given the arbitrary number of clusters of the input partitions, we formulate the problem of extracting the optimal consensus as that of finding a compressed summary of the estimated distribution that preserves maximum relevant information. An efficient solution is obtained using an agglomerative algorithm that minimizes the average generalized Jensen-Shannon divergence within the cluster. The empirical study demonstrates significant gains in accuracy and superior performance compared to several recent consensus clustering algorithms.

10.
A Feature Selection Algorithm Based on Mutual Information Maximization and Its Application
Guided by the principle of mutual information maximization, this paper derives and analyzes a new feature selection algorithm based on an information-theoretic model, termed the mutual-information-maximization feature selection algorithm (MaxMI). The basic idea is that after feature selection, as much information about the class as possible should be retained. The algorithm is similar in form to traditional information gain, mutual information, and cross entropy criteria, but not identical to them. Experiments verify that the proposed MaxMI feature selection algorithm outperforms the other three algorithms.
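The exact MaxMI criterion is defined in the paper; as a loose illustration of the underlying idea (keep the features that preserve the most information about the class), the sketch below scores discrete features by their empirical mutual information with the class label and keeps the top k. The per-feature scoring is a simplification assumed here, not the paper's formulation.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information I(X; Y), in bits, for discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * np.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def max_mi_feature_selection(X, y, k):
    """Keep the k discrete features with the highest empirical MI with the
    class label -- a simplified per-feature reading of the MaxMI idea, not
    the paper's exact selection criterion."""
    scores = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
    return [int(j) for j in np.argsort(scores)[::-1][:k]]

# toy usage: feature 0 is a noisy copy of the class, the others are irrelevant
rng = np.random.default_rng(3)
y = rng.integers(0, 2, 300)
X = np.column_stack([y ^ (rng.random(300) < 0.1),
                     rng.integers(0, 3, 300),
                     rng.integers(0, 2, 300)])
print(max_mi_feature_selection(X, y, k=1))          # expected: [0]
```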

11.
Analyzing the dependencies between spike trains is an important step in understanding how neurons work in concert to represent biological signals. Usually this is done for pairs of neurons at a time using correlation-based techniques. Chornoboy, Schramm, and Karr (1988) proposed maximum likelihood methods for the simultaneous analysis of multiple pair-wise interactions among an ensemble of neurons. One of these methods is an iterative, continuous-time estimation algorithm for a network likelihood model formulated in terms of multiplicative conditional intensity functions. We devised a discrete-time version of this algorithm that includes a new, efficient computational strategy, a principled method to compute starting values, and a principled stopping criterion. In an analysis of simulated neural spike trains from ensembles of interacting neurons, the algorithm recovered the correct connectivity matrices and interaction parameters. In the analysis of spike trains from an ensemble of rat hippocampal place cells, the algorithm identified a connectivity matrix and interaction parameters consistent with the pattern of conjoined firing predicted by the overlap of the neurons' spatial receptive fields. These results suggest that the network likelihood model can be an efficient tool for the analysis of ensemble spiking activity.
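As an illustration of what a discrete-time fit of a multiplicative conditional-intensity network model can look like, the sketch below fits coupling weights for one target neuron by plain gradient ascent on a Poisson log likelihood with a one-bin spike history; the single-bin history, the exponential link, and the optimization scheme are simplifying assumptions, not the algorithm of Chornoboy et al. or the authors' discrete-time version.

```python
import numpy as np

def fit_coupling_glm(spikes, target, lr=0.1, n_iter=1000, dt=0.001):
    """Discrete-time ML fit of a multiplicative conditional-intensity model
    for one target neuron: lambda_t = exp(b + sum_j w_j * s_{j, t-1}),
    where s_{j, t-1} are all neurons' spikes in the previous bin.
    The one-bin history and plain gradient ascent are simplifying
    assumptions, not the paper's estimation algorithm.

    spikes : (T, N) binary spike array; target : index of the modeled neuron
    """
    X = spikes[:-1]                          # previous-bin spikes (covariates)
    y = spikes[1:, target]                   # target neuron's current-bin spikes
    w = np.zeros(X.shape[1])
    b = np.log(max(y.mean(), 1e-6) / dt)     # start from the mean firing rate
    active = np.maximum(X.sum(axis=0), 1.0)  # per-covariate normalization
    for _ in range(n_iter):
        lam = np.exp(b + X @ w)
        resid = y - lam * dt                 # gradient of the Poisson log likelihood
        w += lr * (X.T @ resid) / active
        b += lr * resid.mean()
    return b, w                              # w_j > 0 suggests neuron j excites the target
```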

12.
黄胜  郑秀凤  曹志雄 《计算机工程》2022,48(1):170-174+181
The traditional successive cancellation flip (SCF) decoding algorithm uses only the absolute value of the log-likelihood ratio (LLR) to measure the reliability of each information bit's decoding decision, which leads to an excessive block error rate (BLER) and a large number of flip attempts. This paper proposes a successive cancellation bit-flipping decoding algorithm, PLR-SCF. It analyzes why SC decoding produces erroneous decisions, observes through simulation the relationship between the LLR, the polarized-channel reliability, and the position of each information bit on one hand and the first decision error of SC decoding on the other, and uses these factors to design a metric that accurately measures how likely an information bit is to be decoded incorrectly. Simulation results show that, compared with the traditional SCF algorithm, the proposed algorithm effectively reduces the BLER, obtains a maximum SNR gain of about 0.12 dB at high SNR, and reduces the number of flip attempts by 13.6% relative to SCF.

13.
程龙  刘洋 《控制与决策》2018,33(5):923-937
Spiking neural networks are currently the most biologically interpretable artificial neural networks and a core component of brain-inspired intelligence. This survey first introduces the commonly used spiking neuron models and both feedforward and recurrent spiking network architectures. It then describes temporal coding schemes for spiking neural networks and, on that basis, systematically reviews their learning algorithms, covering unsupervised and supervised learning; the supervised algorithms are presented and summarized in detail in three categories: gradient-descent-based methods, methods incorporating the STDP rule, and methods based on spike-train convolution kernels. It then surveys applications of spiking neural networks in control, pattern recognition, and brain-inspired intelligence research, and reviews cases from national brain projects in which spiking neural networks are combined with neuromorphic processors. Finally, it analyzes the current difficulties and challenges facing spiking neural networks.

14.
Decision Table Reduction Based on Conditional Information Entropy
Rough set theory is a mathematical theory developed in recent years for effectively handling imprecise, uncertain, and vague information, and it has achieved great success in machine learning, data mining, intelligent data analysis, and control algorithm acquisition. Researchers have studied the theory from different perspectives. This paper analyzes and discusses the basic concepts and main operations of rough set theory from an information-theoretic viewpoint, compares them with the algebraic viewpoint of rough set theory to obtain equivalent properties and distinct characteristics under the two views, and proposes a decision table reduction algorithm based on conditional information entropy.
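A minimal sketch of a conditional-entropy-driven reduct, under the usual definition of H(D|B) over the equivalence classes induced by an attribute subset B, is given below; the greedy add-one-attribute loop and the toy decision table are assumptions for illustration, not the paper's exact reduction algorithm.

```python
import math
from collections import Counter, defaultdict

def conditional_entropy(rows, attrs, decision):
    """H(D | B): entropy of the decision attribute over the equivalence
    classes induced by the attribute subset B = attrs."""
    n = len(rows)
    blocks = defaultdict(list)
    for r in rows:
        blocks[tuple(r[a] for a in attrs)].append(r[decision])
    h = 0.0
    for vals in blocks.values():
        p_block = len(vals) / n
        for c in Counter(vals).values():
            p = c / len(vals)
            h -= p_block * p * math.log2(p)
    return h

def entropy_reduct(rows, cond_attrs, decision):
    """Greedy reduct: repeatedly add the attribute that lowers H(D | B) the
    most, until the conditional entropy of the full attribute set is reached."""
    target = conditional_entropy(rows, cond_attrs, decision)
    reduct = []
    while conditional_entropy(rows, reduct, decision) - target > 1e-12:
        best = min((a for a in cond_attrs if a not in reduct),
                   key=lambda a: conditional_entropy(rows, reduct + [a], decision))
        reduct.append(best)
    return reduct

# toy decision table: columns 0-2 are condition attributes, column 3 is the decision
rows = [(0, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 1, 1), (0, 0, 1, 0)]
print(entropy_reduct(rows, [0, 1, 2], 3))            # expected: [0]
```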

15.
When approximate entropy is used to stage sleep EEG signals, sleep stages III and IV cannot be distinguished because their approximate entropy values are very close. A neural-network-ensemble approach to sleep EEG staging is therefore proposed: BP neural networks serve as the classifiers, and sleep EEG features extracted with AR-model parameters are used to separate sleep stages III and IV. To further improve BP network performance, the Bagging algorithm is applied to combine the BP classifiers by weighted voting. Experiments show that the proposed method achieves good staging results.

16.
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
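As a small illustration of MAP decoding under a point-process encoding model, the sketch below recovers a time-varying scalar stimulus from population spike counts by gradient ascent on the log posterior of an assumed model lambda_i(t) = exp(b_i + w_i·x_t) with an independent gaussian prior on each x_t; the encoding model, the plain gradient ascent (rather than the Newton-style optimization the concave posterior permits), and all parameter values are illustrative assumptions.

```python
import numpy as np

def map_decode(spikes, w, b, dt=0.001, prior_prec=1.0, lr=0.1, n_iter=1000):
    """MAP decoding of a time-varying scalar stimulus x_t from population
    spike counts under an assumed encoding model lambda_i(t) = exp(b_i + w_i*x_t)
    and an independent gaussian prior on each x_t.

    spikes : (T, N) spike counts, w, b : (N,) encoding weights and offsets
    """
    T, N = spikes.shape
    x = np.zeros(T)
    for _ in range(n_iter):
        lam = np.exp(b[None, :] + np.outer(x, w))            # (T, N) intensities
        grad = (spikes - lam * dt) @ w - prior_prec * x       # d/dx of the log posterior
        x += lr * grad
    return x

# toy usage: 20 Poisson neurons encoding a slow sinusoid
rng = np.random.default_rng(2)
T, N, dt = 1000, 20, 0.001
w, b = rng.normal(0.0, 1.0, N), np.full(N, np.log(50.0))
x_true = np.sin(np.linspace(0.0, 4.0 * np.pi, T))
spikes = rng.poisson(np.exp(b[None, :] + np.outer(x_true, w)) * dt)
print("correlation with truth:", np.corrcoef(x_true, map_decode(spikes, w, b))[0, 1])
```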

17.
姚晟  徐风  吴照玉  陈菊  汪杰  王维 《控制与决策》2019,34(2):353-361
Attribute reduction is an important application of rough set theory and is now widely used in machine learning, data mining, and related fields; neighborhood rough sets are an important way of handling continuous data within rough set theory. To address the shortcomings of attribute reduction in existing neighborhood rough set models, a neighborhood rough entropy model based on neighborhood rough sets is constructed, and from it the concepts of neighborhood rough joint entropy, neighborhood rough conditional entropy, and neighborhood rough mutual information entropy are derived. Neighborhood rough mutual information entropy is an important way of evaluating the relevance of attribute sets and varies non-monotonically; accordingly, a non-monotonic attribute reduction algorithm based on neighborhood rough mutual information entropy is proposed. Experimental analysis shows that the proposed algorithm not only yields better attribute reduction results than existing monotonic reduction algorithms but also achieves higher reduction efficiency.

18.
Information-Theoretic Outlier Mining for High-Dimensional Massive Data
To address the "curse of dimensionality" in outlier mining on high-dimensional massive data sets, an information-theoretic outlier mining algorithm for such data is proposed. The algorithm uses attribute selection to remove redundant attributes and reduce dimensionality, and it uses information entropy as the criterion for judging outliers, avoiding the drawbacks of distance- and density-based measures. Experimental results on real data sets show that the algorithm is effective and feasible for outlier mining on high-dimensional massive data, with clearly improved efficiency and accuracy.
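As a loose illustration of using entropy as the outlier criterion, the sketch below scores each object by how much the data set's total attribute entropy drops when that object is removed, so rare attribute combinations receive the highest scores; this simplified scoring and the toy data are assumptions for illustration and omit the attribute-selection step the paper performs first.

```python
import math
from collections import Counter

def dataset_entropy(rows):
    """Sum of the empirical Shannon entropies of each categorical attribute."""
    n, d = len(rows), len(rows[0])
    total = 0.0
    for j in range(d):
        for c in Counter(r[j] for r in rows).values():
            p = c / n
            total -= p * math.log2(p)
    return total

def entropy_outlier_scores(rows):
    """Score each object by the drop in total entropy when it is removed:
    objects with rare attribute values contribute most to the entropy, so
    removing an outlier lowers it the most."""
    base = dataset_entropy(rows)
    return [base - dataset_entropy(rows[:i] + rows[i + 1:]) for i in range(len(rows))]

rows = [("a", "x"), ("a", "x"), ("a", "y"), ("a", "x"), ("q", "z")]   # the last row is unusual
print(entropy_outlier_scores(rows))                                   # largest score for the last row
```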

19.
Bagging-based spectral clustering ensemble selection
Traditional clustering ensemble methods combine all obtained clustering results at hand. However, we can often achieve a better clustering solution if only part of the available clustering results are combined. In this paper, we generalize the selective clustering ensemble algorithm proposed by Azimi and Fern and propose a novel clustering ensemble method, SELective Spectral Clustering Ensemble (SELSCE). The component clusterings of the ensemble system are generated by spectral clustering (SC), which is capable of engendering diverse committees. A random scaling parameter and the Nyström approximation are used to perturb SC to produce the components of the ensemble system. After the generation of the component clusterings, the bagging technique, usually applied in supervised learning, is used to assess the component clusterings. We randomly pick part of the available clusterings to get a consensus result and then compute the normalized mutual information (NMI) or adjusted Rand index (ARI) between the consensus result and the component clusterings. Finally, the components are ranked by aggregating multiple NMI or ARI values. The experimental results on UCI data sets and images demonstrate that the proposed algorithm can achieve a better result than traditional clustering ensemble methods.
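Ranking components against a consensus result requires an agreement measure such as NMI; a small reference implementation of normalized mutual information between two flat clusterings (normalized here by the geometric mean of the two entropies, one common convention) is sketched below with made-up labelings.

```python
import math
from collections import Counter

def normalized_mutual_information(a, b):
    """NMI between two flat clusterings a and b (label sequences), normalized
    by the geometric mean of the two cluster-label entropies."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    mi = sum((c / n) * math.log((c / n) / ((pa[i] / n) * (pb[j] / n)))
             for (i, j), c in pab.items())
    ha = -sum((c / n) * math.log(c / n) for c in pa.values())
    hb = -sum((c / n) * math.log(c / n) for c in pb.values())
    return mi / math.sqrt(ha * hb) if ha > 0 and hb > 0 else 0.0

# made-up labelings: near-identical partitions (up to renaming) give NMI close to 1
consensus = [0, 0, 0, 1, 1, 1, 2, 2]
component = [1, 1, 0, 2, 2, 2, 0, 0]
print(normalized_mutual_information(consensus, component))
```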

20.
The S-tree linear representation is an efficient structure for representing binary images which requires three bits for each disjoint binary region. We present parallel algorithms for encoding and decoding the S-tree representation from/onto a binary pixel array in a hypercube connected machine. Both the encoding and the decoding algorithms make use of a condensation procedure in order to produce the final result cooperatively. The encoding algorithm conceptually uses a pyramid configuration, where in each iteration half of the processors in the grid below it remain active. The decoding algorithm is based on the observation that each processor can independently decode a given binary region if it contains in its memory an S-tree segment augmented with a linear prefix. We analyze the algorithms in terms of processing and communication time and present results of experiments performed with real and randomly generated images that verify our theoretical results.
