Current epileptic seizure "prediction" algorithms generally rely on knowledge of seizure occurrence times and analyze the electroencephalogram (EEG) recordings retrospectively. Although these analyses provide evidence of brain activity changes prior to epileptic seizures, they clearly cannot be applied to develop implantable devices for diagnostic and therapeutic purposes. In this paper, we describe an adaptive procedure for prospectively analyzing continuous, long-term EEG recordings when only the occurrence time of the first seizure is known. The algorithm is based on the convergence and divergence of short-term maximum Lyapunov exponents (STLmax) among critical electrode sites selected adaptively; a warning of an impending seizure is then issued. Global optimization techniques are applied to select the critical groups of electrode sites. The adaptive seizure prediction algorithm (ASPA) was tested on continuous intracranial EEG recordings lasting 0.76 to 5.84 days from five patients with refractory temporal lobe epilepsy. A fixed parameter setting applied to all cases predicted 82% of seizures with a false prediction rate of 0.16/h. Seizure warnings occurred an average of 71.7 min before ictal onset. Similar results were produced by dividing the available EEG recordings into training and testing halves. Optimizing the parameters for individual patients improved sensitivity (84% overall) and reduced the false prediction rate (0.12/h overall). These results indicate that ASPA can be applied to implantable devices for diagnostic and therapeutic purposes.
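Entrainment between electrode sites is typically quantified with a paired T-statistic computed over a sliding window of STLmax differences. The sketch below is illustrative only, with made-up window data; it does not reproduce the published ASPA window lengths or warning thresholds:

```python
import math

def t_index(stlmax_a, stlmax_b):
    # Paired T-statistic of the absolute STLmax differences between two
    # electrode sites over one sliding window; small values indicate
    # convergence (entrainment) of their chaoticity profiles.
    diffs = [abs(a - b) for a, b in zip(stlmax_a, stlmax_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical STLmax windows (bits/s): an entrained pair scores lower
site = [5.0, 4.8, 4.9, 5.1, 5.0, 4.9]
entrained = t_index(site, [4.95, 4.85, 4.92, 5.05, 5.02, 4.88])
diverged = t_index(site, [7.2, 6.5, 8.1, 7.8, 6.9, 7.5])
print(entrained < diverged)  # True
```

In the published scheme a warning is issued when the T-index of the adaptively selected critical sites drops below (converges) and then re-crosses a statistical threshold; those details are omitted here.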
This paper discusses a framework for learning based on information-theoretic criteria. A novel algorithm based on Renyi's quadratic entropy is used to train, directly from a data set, linear or nonlinear mappers for entropy maximization or minimization. We provide an intriguing analogy between the computation and an information potential measuring the interactions among the data samples. We also propose two approximations to the Kullback-Leibler divergence based on quadratic distances (the Cauchy-Schwarz inequality and the Euclidean distance); these distances can still be computed using the information potential. We test the newly proposed distances in blind source separation (unsupervised learning) and in feature extraction for classification (supervised learning). In blind source separation our algorithm is capable of separating instantaneously mixed sources, and for classification the performance of our classifier is comparable to that of support vector machines (SVMs).
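The information potential at the heart of this estimator is the average of pairwise Gaussian kernel evaluations over the sample set, and Renyi's quadratic entropy is its negative logarithm. A minimal one-dimensional sketch (the kernel width sigma is an arbitrary illustrative choice):

```python
import math

def gaussian(d, s):
    # Zero-mean Gaussian kernel of width s evaluated at distance d.
    return math.exp(-d * d / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def information_potential(samples, sigma=1.0):
    # V(X) = (1/N^2) * sum_ij G(x_i - x_j): the average pairwise kernel
    # interaction among samples, by analogy with a physical potential.
    # The effective width is sigma*sqrt(2) because the Parzen estimate
    # convolves two kernels of width sigma.
    n = len(samples)
    s = sigma * math.sqrt(2)
    return sum(gaussian(xi - xj, s) for xi in samples for xj in samples) / (n * n)

def renyi_quadratic_entropy(samples, sigma=1.0):
    # H2(X) = -log V(X): maximizing entropy means minimizing the potential.
    return -math.log(information_potential(samples, sigma))

spread = renyi_quadratic_entropy([0.0, 5.0, 10.0])
clustered = renyi_quadratic_entropy([0.0, 0.1, 0.2])
print(spread > clustered)  # True
```

Clustered samples interact strongly (large potential, low entropy), spread samples weakly (small potential, high entropy), which is what makes gradient ascent/descent on the potential a practical training rule.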
In the design of brain-machine interface (BMI) algorithms, the activity of hundreds of chronically recorded neurons is used to reconstruct a variety of kinematic variables. A significant problem introduced by the use of neural ensemble inputs for model building is the explosion in the number of free parameters. Large models not only hurt model generalization but also impose a computational burden when computing an optimal solution, especially when the goal is to implement the BMI in low-power, portable hardware. In this paper, three methods are presented to quantitatively rate the importance of neurons in the neural-to-motor mapping: single-neuron correlation analysis, sensitivity analysis through a vector linear model, and, for comparison, a model-independent cellular directional tuning analysis. Although the rankings are not identical, up to sixty percent of the top ten ranked cells were in common. This set can then be used to determine a reduced-order model whose performance is similar to that of the full ensemble. It is further shown that by pruning the initial ensemble neural input according to the ranked importance of cells, reduced sets of cells (between 40 and 80, depending on the method) can be found that exceed the BMI performance levels of the full ensemble.
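The first of the three rating methods, single-neuron correlation analysis, amounts to ranking cells by the magnitude of their correlation with a kinematic variable and keeping the top-ranked subset. The function names and toy data below are illustrative, not the paper's implementation:

```python
import math

def pearson(x, y):
    # Plain Pearson correlation between a neuron's rate series and a
    # kinematic variable.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def rank_neurons(firing_rates, kinematics, top_k):
    # Rate each cell by |r| and keep the top_k indices as inputs to a
    # reduced-order decoding model.
    scores = [(abs(pearson(rates, kinematics)), i)
              for i, rates in enumerate(firing_rates)]
    return [i for _, i in sorted(scores, reverse=True)[:top_k]]

# Toy ensemble: cell 0 tracks the kinematic variable, cell 1 does not
rates = [[1.0, 2.0, 3.0, 4.0], [4.0, 1.0, 3.0, 2.0]]
print(rank_neurons(rates, [1.0, 2.0, 3.0, 4.0], 1))  # [0]
```

The same ranking-then-pruning loop applies to the sensitivity and directional-tuning scores; only the per-cell score changes.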
This research aims to illustrate the potential use of concepts, techniques, and mining process tools to improve the systematic review process. To this end, a review was performed on two online databases (Scopus and ISI Web of Science) covering 2012 to 2019. A total of 9649 studies were identified and analyzed using probabilistic topic modeling procedures within a machine learning approach. The Latent Dirichlet Allocation method, chosen for modeling, required the following stages: 1) data cleansing, and 2) data modeling into topics for coherence and perplexity analysis. All research was conducted according to the standards of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses in a fully computerized way. The computational literature review is an integral part of a broader literature review process. The results presented met three criteria: (1) literature review for a research area, (2) analysis and classification of journals, and (3) analysis and classification of academic and individual research teams. The contribution of the article is to demonstrate how the publication network is formed in this particular field of research, and how the content of abstracts can be automatically analyzed to provide a set of research topics for quick understanding and application in future projects.
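Stage 1 of such a pipeline, data cleansing, normalizes and filters tokens before the abstracts are handed to the topic model. A minimal sketch (the stopword list and token-length cutoff are illustrative choices, not the paper's configuration):

```python
import re

# Illustrative stopword list; a real pipeline would use a full list.
STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "for", "on"}

def cleanse(abstract):
    # Stage 1 (data cleansing): lowercase, keep alphabetic tokens only,
    # then drop stopwords and very short tokens before topic modeling.
    tokens = re.findall(r"[a-z]+", abstract.lower())
    return [t for t in tokens if t not in STOPWORDS and len(t) > 2]

print(cleanse("The Latent Dirichlet Allocation of topics."))
# ['latent', 'dirichlet', 'allocation', 'topics']
```

The cleansed token lists then feed stage 2, where LDA models with different topic counts are compared on coherence and perplexity.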
Recent publications have proposed various information-theoretic learning (ITL) criteria based on Renyi's quadratic entropy with nonparametric kernel-based density estimation as alternative performance metrics for both supervised and unsupervised adaptive system training. These metrics, based on entropy and mutual information, take into account higher-order statistics, unlike the mean-square error (MSE) criterion. The drawback of these information-based metrics is their increased computational complexity, which underscores the importance of efficient training algorithms. In this paper, we examine familiar advanced parameter-search algorithms and propose modifications that allow training of systems with these ITL criteria. The well-known algorithms tailored here for ITL include various improved gradient-descent methods, conjugate gradient approaches, and the Levenberg-Marquardt (LM) algorithm. Sample problems and metrics are presented to illustrate the computational efficiency attained by employing the proposed algorithms.
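All of the search algorithms discussed rest on the gradient of the information potential with respect to the mapper weights. A minimal sketch for a scalar linear mapper y_i = w * x_i, where entropy minimization becomes gradient ascent on the potential (step size and kernel width are illustrative, and the advanced methods in the paper refine this basic step):

```python
import math

def ip_gradient(w, xs, sigma=1.0):
    # d/dw of the information potential of the outputs y_i = w * x_i
    # (scalar mapper, 1-D data): each sample pair contributes the kernel
    # value times the derivative of (y_i - y_j) through w.
    n = len(xs)
    s2 = 2.0 * sigma * sigma  # variance of the sigma*sqrt(2)-wide kernel
    g = 0.0
    for xi in xs:
        for xj in xs:
            d = w * (xi - xj)  # output-space difference y_i - y_j
            k = math.exp(-d * d / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)
            g += -k * d / s2 * (xi - xj)
    return g / (n * n)

def entropy_min_step(w, xs, lr=0.5):
    # Minimizing output entropy = maximizing the potential, so ascend.
    return w + lr * ip_gradient(w, xs)

# One step shrinks |w|, concentrating the outputs
w_new = entropy_min_step(1.0, [-1.0, 0.0, 1.0])
print(0.0 < w_new < 1.0)  # True
```

The O(N^2) double loop over sample pairs is exactly the computational burden that motivates the faster search algorithms examined in the paper.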
An adaptive two-step paradigm for the super-resolution of optical images is developed in this paper. The procedure locally projects image samples onto a family of kernels that are learned from image data. First, an unsupervised feature extraction is performed on local neighborhood information from a training image. These features are then used to cluster the neighborhoods into disjoint sets for which an optimal mapping relating homologous neighborhoods across scales can be learned in a supervised manner. A super-resolved image is obtained through the convolution of a low-resolution test image with the established family of kernels. Results demonstrate the effectiveness of the approach.
We study the problem of linear approximation of a signal using the parametric gamma bases in L2 space. These bases have a time scale parameter, which has the effect of modifying the relative angle between the signal and the projection space, thereby yielding an extra degree of freedom in the approximation. Gamma bases have a simple analog implementation as a cascade of identical lowpass filters. We derive the normal equation for the optimum value of the time scale parameter and decouple it from that of the basis weights. Using statistical signal processing tools, we further develop a numerical method for estimating the optimum time scale parameter.
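The cascade of identical lowpass filters has a direct discrete-time counterpart, the gamma filter, whose single parameter mu sets the time scale of every stage. A minimal sketch (variable names are illustrative):

```python
def gamma_filter(x, order, mu):
    # Discrete gamma filter: a cascade of `order` identical first-order
    # lowpass stages, y_k[n] = (1 - mu) * y_k[n-1] + mu * y_{k-1}[n-1],
    # with y_0[n] = x[n]. Returns the tap outputs (the gamma basis
    # signals) for every input sample.
    taps = [0.0] * (order + 1)
    outputs = []
    for sample in x:
        prev = taps[:]  # tap values at time n-1
        taps[0] = sample
        for k in range(1, order + 1):
            taps[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
        outputs.append(taps[1:])
    return outputs

# Impulse response of the first tap for mu = 0.5: a decaying exponential
print(gamma_filter([1.0, 0.0, 0.0, 0.0], 1, 0.5))
# [[0.0], [0.5], [0.25], [0.125]]
```

Setting mu = 1 collapses the cascade to an ordinary tapped delay line; sweeping mu rotates the projection space relative to the signal, which is the extra degree of freedom exploited in the approximation.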
An interactive design and analysis tool (TDAT) for displaying and quantifying multiple channels of data is presented. The system allows one to easily visualize multiple data channels, simultaneously observe the effects of filters on the data, and evaluate signal detection algorithms. The software is designed for a workstation environment and will find use in a variety of settings where one needs to visualize multiple data channels simultaneously. TDAT is being used for the design and evaluation of filters and detection algorithms for electroencephalogram (EEG) waveforms, and it is serving as a prototype of a paperless system to be used by electroencephalographers. This paper describes the general software structure of the system and illustrates many of its features with examples.
This paper presents a theoretical approach to understand the basic dynamics of a hierarchical and realistic computational model of the olfactory system proposed by W. J. Freeman. While the system's parameter space could be scanned to obtain the desired dynamical behavior, our approach exploits the hierarchical organization and focuses on understanding the simplest building block of this highly connected network. Based on bifurcation analysis, we obtain analytical solutions of how to control the qualitative behavior of a reduced KII set taking into consideration both the internal coupling coefficients and the external stimulus. This also provides useful insights for investigating higher level structures that are composed of the same basic structure. Experimental results are presented to verify our theoretical analysis.
Mathematical models for the evaluation of residence time distribution (RTD) curves in a large variety of vessels are presented. These models are constructed by combining different tanks or volumes. To obtain a good representation of RTD curves, a new volume, called the convection-diffusion volume, is introduced; it allows the approximation of diverse experimental or numerical RTD curves with very simple models. An algorithm has been developed to calculate the parameters of the models for any given set of RTD curve experimental points. Validation of the models is carried out by comparison with experimental RTD curves taken from the literature and with a numerical RTD curve obtained by three-dimensional simulation of the flow inside a tundish.
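The simple-model idea can be illustrated with the classical tanks-in-series building block, which has a closed-form RTD curve (the convection-diffusion volume introduced in the paper is a different, richer element not reproduced here). A minimal sketch:

```python
import math

def tanks_in_series_rtd(t, n, tau):
    # E(t) for n equal, ideally mixed tanks in series with overall mean
    # residence time tau; each individual tank has residence time tau/n:
    # E(t) = t^(n-1) * exp(-t / (tau/n)) / ((n-1)! * (tau/n)^n)
    ti = tau / n
    return t ** (n - 1) * math.exp(-t / ti) / (math.factorial(n - 1) * ti ** n)

# Sanity checks: the curve integrates to 1 and has mean tau
dt, n, tau = 0.01, 3, 2.0
e = [tanks_in_series_rtd(i * dt, n, tau) for i in range(4001)]
area = sum(e) * dt
mean = sum(i * dt * v for i, v in enumerate(e)) * dt
print(round(area, 2), round(mean, 2))  # 1.0 2.0
```

Fitting such a model to measured points reduces to adjusting (n, tau) (and, in the paper's richer models, the parameters of the added volumes) until the curve matches the experimental RTD.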