Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Efficient coding has been proposed as a first principle explaining neuronal response properties in the central nervous system. The shape of optimal codes, however, strongly depends on the natural limitations of the particular physical system. Here we investigate how optimal neuronal encoding strategies are influenced by the finite number of neurons N (place constraint), the limited decoding time window length T (time constraint), the maximum neuronal firing rate f_max (power constraint), and the maximal average rate ⟨f⟩_max (energy constraint). While Fisher information provides a general lower bound for the mean squared error of unbiased signal reconstruction, its use to characterize the coding precision is limited. Analyzing simple examples, we illustrate some typical pitfalls and thereby show that Fisher information provides a valid measure for the precision of a code only if the dynamic range (f_min T, f_max T) is sufficiently large. In particular, we demonstrate that the optimal width of Gaussian tuning curves depends on the available decoding time T. Within the broader class of unimodal tuning functions, it turns out that the shape of a Fisher-optimal coding scheme is not unique. We resolve this ambiguity by taking the minimum mean square error into account, which leads to flat tuning curves. The tuning width, however, remains determined by energy constraints rather than by the principle of efficient coding.
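The abstract's central quantity can be illustrated with a small sketch. Assuming a single Poisson neuron with a Gaussian tuning curve (a standard textbook setting, not necessarily the paper's exact model), the Fisher information for a decoding window of length T is J(x) = T f'(x)^2 / f(x); all parameter values below are illustrative:

```python
import numpy as np

def gaussian_tuning(x, center=0.0, width=1.0, f_min=0.5, f_max=50.0):
    """Gaussian tuning curve (firing rates in Hz)."""
    return f_min + (f_max - f_min) * np.exp(-(x - center) ** 2 / (2 * width ** 2))

def fisher_information(x, T=0.1, **kw):
    """Fisher information of one Poisson neuron observed for T seconds:
    J(x) = T * f'(x)^2 / f(x), with f'(x) taken numerically."""
    eps = 1e-6
    f = gaussian_tuning(x, **kw)
    df = (gaussian_tuning(x + eps, **kw) - gaussian_tuning(x - eps, **kw)) / (2 * eps)
    return T * df ** 2 / f

x = np.linspace(-3, 3, 601)
J_narrow = fisher_information(x, width=0.5)
J_wide = fisher_information(x, width=2.0)
# Narrower tuning has steeper flanks, so it concentrates more Fisher
# information there, which is exactly where the optimal-width trade-off lives.
print(J_narrow.max(), J_wide.max())
```

The sketch makes the tuning-width trade-off visible: maximal J grows as the curve narrows, yet a narrow curve is informative over a smaller range of x.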

2.
A number of CCK2 antagonists have been reported to play an important role in controlling gastric acid-related conditions, nervous-system-related disorders, and certain types of cancer. To obtain information helpful for designing potent antagonists with novel structures and to investigate the quantitative structure-activity relationship of a group of 62 CCK2 receptor antagonists with varying structures and potencies, CoMFA, CoMSIA, and HQSAR studies were carried out on a series of 1,3,4-benzotriazepine-based CCK2 receptor antagonists. QSAR models were derived from a training set of 49 compounds. Applying leave-one-out (LOO) cross-validation, cross-validated r^2 (r_cv^2) values of 0.673 and 0.608 and non-cross-validated r^2 (r_ncv^2) values of 0.966 and 0.969 were obtained for the CoMFA and CoMSIA models, respectively. The predictive ability of the CoMFA and CoMSIA models was determined using a test set of 13 compounds, which gave predictive correlation coefficients (r_pred^2) of 0.793 and 0.786, respectively. HQSAR was also carried out as a complementary study, and the best HQSAR model was generated using atoms, bonds, hydrogen atoms, and chirality as fragment distinction with fragment size 2-5 and six components, showing r_cv^2 and r_ncv^2 values of 0.744 and 0.918, respectively. CoMFA steric and electrostatic fields, CoMSIA hydrophobic and hydrogen bond acceptor fields, and HQSAR atomic contribution maps were used to analyze the structural features of the datasets that govern antagonistic potency.

3.
Gaussian Mixture Kalman Predictive Coding of Line Spectral Frequencies
Gaussian mixture model (GMM)-based predictive coding of line spectral frequencies (LSFs) has gained wide acceptance. In such coders, each mixture of a GMM can be interpreted as defining a linear predictive transform coder. In this paper, we use Kalman filtering principles to model each of these linear predictive transform coders to present GMM Kalman predictive coding. In particular, we show how suitable modeling of quantization noise leads to an adaptive a posteriori GMM that defines a signal-adaptive predictive coder that provides improved coding of LSFs in comparison with the baseline recursive GMM predictive coder. Moreover, we show how running the GMM Kalman predictive coders to convergence can be used to design a stationary GMM Kalman predictive coding system which again provides improved coding of LSFs but now with only a modest increase in run-time complexity over the baseline. In packet loss conditions, this stationary GMM Kalman predictive coder provides much better performance than the recursive GMM predictive coder, and in fact has comparable mean performance to a memoryless GMM coder. Finally, we illustrate how one can utilize Kalman filtering principles to design a postfilter which enhances decoded vectors from a recursive GMM predictive coder without any modifications to the encoding process.
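The core idea, treating quantization noise as observation noise so that a Kalman-style gain shapes the predictor, can be sketched on a toy scalar source. This is a minimal stand-in (a first-order Gauss-Markov signal rather than actual LSF vectors, with illustrative parameter values), not the paper's coder:

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order Gauss-Markov source as a stand-in for an LSF trajectory.
a, n = 0.9, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(1 - a ** 2))

def predictive_code(x, a, step, q_var):
    """Quantize the prediction residual; fold the quantization noise
    variance q_var into the update gain, Kalman-style."""
    p = 1.0                                   # prior state variance
    xhat_prev, recon = 0.0, np.zeros_like(x)
    for t in range(len(x)):
        pred = a * xhat_prev                  # one-step prediction
        p_pred = a * a * p + (1 - a * a)      # predicted variance
        k = p_pred / (p_pred + q_var)         # gain discounts noisy residual
        r = np.round((x[t] - pred) / step) * step  # uniform residual quantizer
        recon[t] = pred + k * r
        p = (1 - k) * p_pred
        xhat_prev = recon[t]
    return recon

step = 0.25
recon = predictive_code(x, a, step, q_var=step ** 2 / 12)
mse = np.mean((x - recon) ** 2)
print(mse)
```

With the quantizer modeled as additive noise of variance step^2/12, the reconstruction error stays near the quantizer's granular noise floor, far below the source variance.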

4.
Case-based time series prediction (CTSP) is a machine learning technique that predicts the future behavior of the current time series by referring to similar past cases. To reduce the cost of visual prosthesis research, we investigate the predictive performance of CTSP for electrical evoked potential (EEP) prediction as a substitute for numerous biological experiments. The heart of CTSP for EEP prediction is a similarity measure that ranks training cases for a target electrical stimulus using a distance metric. Since an EEP experimental case consists of stationary electrical stimulation values and time-varying elicited EEP values, this paper proposes a new distance metric that combines the computational efficiency of point-to-point distances on stationary data with the strength of time series distances on temporal data, called biased time warp distance (BTWD). In the BTWD metric, the stimulation set difference (Diff_I) and the EEP sequence difference (Diff_II) are calculated separately, and a time-dependent bias configuration is added to reflect their different influences on the value of BTWD. Predicted EEP values at a given time point are then obtained as a similarity-weighted combination over the k nearest neighbors. The proposed predictor using BTWD was empirically tested on data collected from electrophysiological EEP eliciting experiments, and the results were statistically validated against predictors using classical point-to-point distances and time series distances. The empirical results indicate that the proposed method yields superior EEP prediction in terms of both predictive accuracy and computational complexity.
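The BTWD construction can be sketched as follows. This toy version uses a fixed scalar bias in place of the paper's time-dependent bias configuration, and `dtw` is the classical dynamic-programming recursion; the example inputs are invented:

```python
import numpy as np

def dtw(s, t):
    """Classical dynamic time warping distance between two 1-D sequences."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def btwd(stim_a, seq_a, stim_b, seq_b, bias=0.5):
    """Toy biased time warp distance: a point-to-point distance on the
    stationary stimulation parameters (Diff_I) plus DTW on the EEP
    sequences (Diff_II), mixed by a bias weight. The paper's bias is
    time-dependent; a fixed scalar is used here for illustration."""
    diff_1 = np.linalg.norm(np.asarray(stim_a) - np.asarray(stim_b))
    diff_2 = dtw(seq_a, seq_b)
    return bias * diff_1 + (1 - bias) * diff_2

d = btwd([1.0, 2.0], [0.0, 0.5, 1.0], [1.0, 2.5], [0.1, 0.6, 0.9], bias=0.5)
print(d)
```

Identical cases get distance zero, and the bias lets the caller decide how much the stationary stimulus mismatch should count against the temporal mismatch.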

5.
Three-dimensional quantitative structure-activity relationship (3D-QSAR) models were developed using comparative molecular field analysis (CoMFA) and comparative molecular similarity analysis (CoMSIA) on a series of agonists of thyroid hormone receptor beta (TRbeta), which may lead to safe therapies for non-thyroid disorders while avoiding cardiac side effects. Reasonable cross-validated q^2 values of 0.600 and 0.616 and non-cross-validated r^2 values of 0.974 and 0.974 were obtained for the CoMFA and CoMSIA models on the training set compounds, respectively. The predictive ability of the two models was validated using a test set of 12 molecules, which gave predictive correlation coefficients (r_pred^2) of 0.688 and 0.674, respectively. The Lamarckian Genetic Algorithm (LGA) of AutoDock 4.0 was employed to explore the binding mode of the compounds at the active site of TRbeta. The results not only lead to a better understanding of the interactions between these agonists and thyroid hormone receptor beta but also provide useful information about the influence of structure on activity, which will be valuable for designing new agonists with the desired activity.

6.
Increasing evidence over the past decade indicates that financial markets exhibit nonlinear dynamics in the form of chaotic behavior. Traditionally, the prediction of stock markets has relied on statistical methods including multivariate statistical methods, autoregressive integrated moving average models and autoregressive conditional heteroskedasticity models. In recent years, neural networks and other knowledge techniques have been applied extensively to the task of predicting financial variables.
This paper examines the relationship between chaotic models and learning techniques. In particular, chaotic analysis indicates the upper limits of predictability for a time series. The learning techniques involve neural networks and case-based reasoning. The chaotic models take the form of R/S analysis to measure persistence in a time series, the correlation dimension to encapsulate system complexity, and Lyapunov exponents to indicate predictive horizons. The concepts are illustrated in the context of a major emerging market, namely the Polish stock market.
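Of the chaotic diagnostics named above, R/S analysis is the easiest to sketch. A minimal rescaled-range Hurst-exponent estimator, assuming equal-size non-overlapping windows and a log-log regression (window sizes and test series are illustrative):

```python
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis:
    regress log(mean R/S) on log(window size); the slope is H."""
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            z = np.cumsum(w - w.mean())   # cumulative deviation from the mean
            r = z.max() - z.min()         # range of the cumulative deviation
            s = w.std()                   # window standard deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(42)
white = rng.normal(size=4096)             # i.i.d. data: H near 0.5
walk = np.cumsum(rng.normal(size=4096))   # strongly persistent series: H near 1
print(hurst_rs(white), hurst_rs(walk))
```

H near 0.5 indicates no persistence, while H approaching 1 indicates the long-memory behavior the abstract associates with predictable structure in market series.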

7.
The explicit linear quadratic regulator for constrained systems
For discrete-time linear time-invariant systems with constraints on inputs and states, we develop an algorithm to determine explicitly the state feedback control law which minimizes a quadratic performance criterion. We show that the control law is piecewise linear and continuous for both the finite horizon problem (model predictive control) and the usual infinite time measure (constrained linear quadratic regulation). Thus, the on-line control computation reduces to the simple evaluation of an explicitly defined piecewise linear function. By computing the inherent underlying controller structure, we also solve the equivalent of the Hamilton-Jacobi-Bellman equation for discrete-time linear constrained systems. Control based on on-line optimization has long been recognized as a superior alternative for constrained systems. The technique proposed in this paper is attractive for a wide range of practical problems where the computational complexity of on-line optimization is prohibitive. It also provides insight into the structure underlying optimization-based controllers.
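The online step the abstract describes, replacing optimization with evaluation of a stored piecewise linear law, can be sketched on a toy one-dimensional case where the clipped LQR law u = sat(-k x) is exactly piecewise affine. The regions and gains below are illustrative, not derived from a real multi-parametric program:

```python
import numpy as np

# Toy explicit controller for a 1-D system with input constraint |u| <= 1:
# with gain k = 0.8, saturation occurs at |x| = 1.25, giving three regions.
k = 0.8
regions = [
    # (A_i, b_i) defining {x : A_i x <= b_i}, and the affine law u = F_i x + g_i
    (np.array([[1.0]]), np.array([-1.25]), np.array([[0.0]]), np.array([1.0])),
    (np.array([[1.0], [-1.0]]), np.array([1.25, 1.25]), np.array([[-k]]), np.array([0.0])),
    (np.array([[-1.0]]), np.array([-1.25]), np.array([[0.0]]), np.array([-1.0])),
]

def explicit_mpc(x):
    """Online evaluation: locate the polyhedral region containing x and
    apply its affine feedback law. No optimization is solved online."""
    x = np.atleast_1d(x)
    for A, b, F, g in regions:
        if np.all(A @ x <= b + 1e-9):
            return (F @ x + g).item()
    raise ValueError("state outside the explored state space")

print(explicit_mpc(0.5), explicit_mpc(2.0), explicit_mpc(-2.0))
```

The lookup is continuous across region boundaries (both neighboring laws give |u| = 1 at |x| = 1.25), mirroring the continuity result stated in the abstract.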

8.
We discuss the problem of model complexity control, also known as model selection. This problem frequently arises in the context of predictive learning and adaptive estimation of dependencies from finite data. First we review the problem of predictive learning as it relates to model complexity control. Then we discuss several issues important for practical implementation of complexity control, using the framework provided by Statistical Learning Theory (or Vapnik-Chervonenkis theory). Finally, we show practical applications of Vapnik-Chervonenkis (VC) generalization bounds for model complexity control. Empirical comparisons of different methods for complexity control suggest practical advantages of using VC-based model selection in settings where VC generalization bounds can be rigorously applied. We also argue that VC-theory provides a methodological framework for complexity control even when its technical results cannot be directly applied.

9.
Zou Pengcheng, Wang Jiandong, Yang Guoqing, Zhang Xia, Wang Lina. Journal of Software, 2013, 24(11): 2642-2655
An effective distance measure is crucial for time series clustering. To improve the performance of time series clustering, metric learning can be used to learn, from the data, a distance measure suited to clustering time series. However, existing metric learning methods do not account for the characteristics of time series, and side information such as pairwise constraints is difficult to obtain for time series data. This paper proposes a distance metric learning method based on side information auto-generation for time series (SIADML). The method exploits the strength of the dynamic time warping (DTW) distance in capturing temporal characteristics to generate pairwise constraint information automatically, so that the learned metric preserves, as far as possible, the intrinsic neighborhood relationships among time series. Experimental results on a collection of benchmark time series datasets show that the learned metric effectively improves the performance of time series clustering.

10.
Jiang Yifan, Ye Qing. Journal of Computer Applications, 2019, 39(4): 1041-1045
In data mining tasks such as time series classification, class-based similarity varies markedly across datasets, so a reasonable and effective similarity measure is critical to data mining. Traditional methods such as Euclidean distance, cosine distance, and dynamic time warping compute similarity from the data alone, ignoring the influence that the label annotations attached to different datasets have on the similarity measure. To address this, a time series similarity metric learning method based on a Siamese neural network (SNN) is proposed. The method learns neighborhood relationships between samples from label supervision and builds an efficient distance metric between time series. Similarity-measurement and validation classification experiments on the UCR time series datasets show that SNN clearly improves overall classification quality compared with ED/DTW-1NN. Although 1-nearest-neighbor (1NN) classification based on dynamic time warping (DTW) outperforms SNN-based 1NN classification on some datasets, SNN is superior to DTW in the computational complexity and speed of the similarity calculation. The proposed method thus markedly improves the efficiency of similarity measurement and performs well on high-dimensional, complex time series classification.

11.
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and, conversely, the low-dimensional space allows dynamics to be learned efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. A divide, conquer, and coordinate method is proposed. The solution approximates the nonlinear manifold and dynamics using simple piecewise linear models. The interactions and coordinations among the linear models are captured in a graphical model. The model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection, hence solving the problem of overfitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification, and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.

12.
We propose graph-based predictable feature analysis (GPFA), a new method for unsupervised learning of predictable features from high-dimensional time series, where high predictability is understood very generically as low variance in the distribution of the next data point given the previous ones. We show how this measure of predictability can be understood in terms of graph embedding as well as how it relates to the information-theoretic measure of predictive information in special cases. We confirm the effectiveness of GPFA on different datasets, comparing it to three existing algorithms with similar objectives (namely slow feature analysis, forecastable component analysis, and predictable feature analysis), to which GPFA shows very competitive results.
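The generic notion of predictability used above, low variance of the next point given the previous ones, can be illustrated with a simple stand-in score. This is only an AR(1) residual variance, far simpler than GPFA's graph-based objective, and the test signals are invented:

```python
import numpy as np

def predictability(z):
    """Stand-in predictability score: variance of the next value around a
    least-squares linear prediction from the previous value.
    Lower score = more predictable feature."""
    prev, nxt = z[:-1], z[1:]
    w = np.dot(prev, nxt) / np.dot(prev, prev)   # least-squares AR(1) fit
    resid = nxt - w * prev
    return resid.var()

rng = np.random.default_rng(1)
n = 5000
slow = np.sin(0.02 * np.arange(n)) + 0.05 * rng.normal(size=n)  # smooth feature
noise = rng.normal(size=n)                                      # unpredictable
print(predictability(slow), predictability(noise))
```

A feature-extraction method driven by such a score would prefer the smooth, low-residual direction over the white-noise one, which is the intuition behind predictable feature learning.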

13.
14.
In the present study, a series of 179 quinoline and quinazoline heterocyclic analogues exhibiting inhibitory activity against gastric (H+/K+)-ATPase were investigated using the comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) methods. Both models exhibited good correlation between the calculated 3D-QSAR fields and the observed biological activity for the respective training set compounds. The most optimal CoMFA and CoMSIA models yielded significant leave-one-out cross-validation coefficients q^2 of 0.777 and 0.744 and conventional cross-validation coefficients r^2 of 0.927 and 0.914, respectively. The predictive ability of the generated models was tested on a set of 52 compounds spanning a broad range of activity. CoMFA and CoMSIA yielded predicted activities for the test set compounds with r_pred^2 of 0.893 and 0.917, respectively. These validation tests not only revealed the robustness of the models but also demonstrated that for our models r_pred^2 based on the mean activity of test set compounds can accurately estimate external predictivity. The factors affecting activity were analyzed carefully according to the standard coefficient contour maps of the steric, electrostatic, hydrophobic, acceptor, and donor fields derived from CoMFA and CoMSIA. These contour plots identified several key features which explain the wide range of activities. The results offer important structural insight for designing novel peptic-ulcer inhibitors prior to their synthesis.

15.
Full-vehicle finite element models have a large number of degrees of freedom. This makes them ill suited for design work, numerical optimization or stochastic analyses in an early development phase, because they require a high level of detailed information, most of which is yet unavailable. They are also computationally expensive, thus severely limiting the number of function evaluations. Both difficulties can be alleviated through the use of substitute models, which capture only the relevant mechanisms, associated with a smaller number of degrees of freedom. This work provides a substitute modeling and calibration methodology which improves output value prediction for substantial deviations from the reference design, including three significant innovations. First, a new measure to quantify the agreement of calibrated and reference model is proposed. Second, a multi-model calibration is introduced, which incorporates an array of reference models for calibration and cross validation. Third, the calibration is performed on the basis of a hybrid objective function, weighting the agreement of the time dependent system states, called physics-based contribution, and the time independent output values, called predictive or regression-based. This ensures a large range of validity while simultaneously improving the predictive quality of the model. It is also shown that the discretization of the structural mass has negligible influence on the target values, allowing for reduced model complexity.

16.
An improved fuzzy entropy algorithm and its application to EEG analysis in children with autism
Fuzzy entropy (FuzzyEn) measures the probability that a time series generates new patterns as the embedding dimension changes, and serves as an index of the complexity and irregularity of a time series. The traditional fuzzy entropy algorithm analyzes the time series only as a whole and neglects instantaneous signal changes; this paper therefore proposes an improved fuzzy entropy algorithm. The width of the exponential function is optimized by setting it to 0.15 times the standard deviation of the first-difference time series, ensuring that the instantaneous complexity features of the series are fully captured. Compared with traditional fuzzy entropy, the improved fuzzy entropy contains more temporal pattern information. Using the improved fuzzy entropy together with a phase-locking algorithm, the complexity and synchrony of electroencephalogram (EEG) signals of children with autism were analyzed. The results show that in children with autism spectrum disorders (ASD), EEG synchrony and complexity in the anterior temporal lobe are reduced, with a statistically significant difference (P < 0.05).
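A sketch of the setting described in the abstract: fuzzy entropy with an exponential membership function whose width defaults to 0.15 times the standard deviation of the first-difference series. The embedding details (dimension m = 2, baseline removal, Chebyshev distance) follow the common FuzzyEn recipe and are assumptions where the abstract is silent:

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=None):
    """Fuzzy entropy with an exponential membership function. The width r
    defaults to 0.15 * std of the first-difference series (the improved
    setting); classical FuzzyEn would use 0.15 * std(x)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(np.diff(x))
    def phi(m):
        n = len(x) - m + 1
        # Embed, then remove each template's own mean (baseline removal).
        templates = np.array([x[i:i + m] for i in range(n)])
        templates -= templates.mean(axis=1, keepdims=True)
        # Chebyshev distance between all template pairs.
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** 2) / r)        # exponential similarity
        np.fill_diagonal(sim, 0.0)         # exclude self-matches
        return sim.sum() / (n * (n - 1))
    return np.log(phi(m)) - np.log(phi(m + 1))

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 8 * np.pi, 500))   # highly regular signal
irregular = rng.normal(size=500)                   # white noise
print(fuzzy_entropy(regular), fuzzy_entropy(irregular))
```

As expected of a complexity index, the regular sinusoid scores lower than the white-noise signal.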

17.
Regression problems provide some of the most challenging research opportunities in the area of machine learning, and more broadly intelligent systems, where the predictions of some target variables are critical to a specific application. Rainfall is a prime example, as it exhibits unique characteristics of high volatility and chaotic patterns that do not exist in other time series data. This work's main contribution is to show the benefit that machine learning algorithms, and more broadly intelligent systems, have over the current state-of-the-art techniques for rainfall prediction within rainfall derivatives. We apply and compare the predictive performance of the current state of the art (a Markov chain extended with rainfall prediction) and six popular machine learning algorithms, namely Genetic Programming, Support Vector Regression, Radial Basis Neural Networks, M5 Rules, M5 Model Trees, and k-Nearest Neighbours. To support an extensive evaluation, we run tests using rainfall time series for 42 cities with very diverse climatic features. This thorough examination shows that the machine learning methods are able to outperform the current state of the art. Another contribution of this work is to detect correlations between different climates and predictive accuracy. These results show the positive effect that machine learning-based intelligent systems have on rainfall prediction in terms of predictive accuracy, with minimal correlations existing across climates.
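Of the compared algorithms, k-Nearest Neighbours is simple enough to sketch as a lagged-window pattern matcher. The window length, k, and the toy periodic "rainfall" series below are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def knn_forecast(series, window=4, k=3):
    """One-step-ahead forecast by k-nearest-neighbours over lagged windows:
    find the k historical windows closest to the most recent one and
    average the values that followed them."""
    series = np.asarray(series, dtype=float)
    query = series[-window:]
    # All historical windows and the value that followed each of them.
    patterns = np.array([series[i:i + window] for i in range(len(series) - window)])
    targets = series[window:]
    dist = np.linalg.norm(patterns - query, axis=1)
    nearest = np.argsort(dist)[:k]
    return targets[nearest].mean()

# A noiseless period-4 series: the forecast should recover the cycle exactly,
# since the k nearest historical windows match the current one perfectly.
cycle = np.tile([0.0, 1.0, 3.0, 1.0], 25)
print(knn_forecast(cycle))
```

On real rainfall data the neighbors are only approximate matches, and the averaging trades bias for robustness to the volatility the abstract highlights.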

18.
Model complexity is a key factor determining the generalization performance of a learning machine, and controlling it appropriately is an important principle of model selection. The extreme learning machine (ELM), a recent machine learning algorithm, has shown excellent learning performance. However, the basic question of how to reasonably measure and control model complexity during ELM model selection has so far lacked systematic study. This paper discusses an ELM model-complexity control method based on Vapnik-Chervonenkis (VC) generalization bounds (denoted VM) and compares it systematically with four classical model selection methods. Experiments on artificial and real-world datasets show that, compared with the four classical methods, VM has better model selection performance: it selects ELM models that simultaneously have the lowest model complexity and the lowest (or nearly the lowest) actual prediction risk. The paper also provides a new example of the practical value of VC-dimension theory.

19.
A study of the predictive value of a variety of syntax-based problem complexity measures is reported. Experimentation with variants of chunk-oriented measures showed that one should judiciously select measurable software attributes as proper indicators of what one wishes to predict, rather than hoping for a single, all-purpose complexity measure. The authors show that particular complexity measures or other factors can serve as good predictors of some properties of a program but not of others. For example, a good predictor of construction time will not necessarily correlate well with the number of error occurrences. M.H. Halstead's (1977) effort measure (E) was found to be a better predictor than the two non-chunk measures evaluated, namely T.J. McCabe's (1976) V(G) and lines of code, but at least one chunk measure predicted better than E in every case.

20.
Learning is a task that generalizes many of the analyses that are applied to collections of data, in particular, to collections of sensitive individual information. Hence, it is natural to ask what can be learned while preserving individual privacy. Kasiviswanathan et al. (in SIAM J. Comput., 40(3):793–826, 2011) initiated such a discussion. They formalized the notion of private learning, as a combination of PAC learning and differential privacy, and investigated what concept classes can be learned privately. Somewhat surprisingly, they showed that for finite, discrete domains (ignoring time complexity), every PAC learning task could be performed privately with polynomially many labeled examples; in many natural cases this could even be done in polynomial time. While these results seem to equate non-private and private learning, there is still a significant gap: the sample complexity of (non-private) PAC learning is crisply characterized in terms of the VC-dimension of the concept class, whereas this relationship is lost in the constructions of private learners, which exhibit, generally, a higher sample complexity. Looking into this gap, we examine several private learning tasks and give tight bounds on their sample complexity. In particular, we show strong separations between sample complexities of proper and improper private learners (such separation does not exist for non-private learners), and between sample complexities of efficient and inefficient proper private learners. Our results show that VC-dimension is not the right measure for characterizing the sample complexity of proper private learning. We also examine the task of private data release (as initiated by Blum et al. in STOC, pp. 609–618, 2008), and give new lower bounds on the sample complexity. Our results show that the logarithmic dependence on size of the instance space is essential for private data release.
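The differential-privacy primitive underlying private learning can be illustrated with the Laplace mechanism for a counting query. This is the textbook mechanism (sensitivity-1 count plus Laplace(1/epsilon) noise), not the paper's specific learner constructions, and the data and epsilon are invented:

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng):
    """Differentially private counting query via the Laplace mechanism:
    a count has sensitivity 1 (changing one record shifts it by at most 1),
    so adding Laplace noise of scale 1/epsilon yields epsilon-differential
    privacy for the released value."""
    true_count = sum(1 for v in data if predicate(v))
    return true_count + rng.laplace(scale=1.0 / epsilon)

rng = np.random.default_rng(7)
data = rng.integers(0, 100, size=10000)
noisy = laplace_count(data, lambda v: v < 50, epsilon=0.5, rng=rng)
print(noisy)
```

The noise scale is independent of the dataset size, so for large datasets the private answer is accurate in relative terms; the sample-complexity questions studied in the paper ask how many records such noisy primitives need before learning succeeds.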


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号