Article search results
Subscription full text: 19 articles; free: 0.
By subject: radio & electronics, 2; automation & computer technology, 17.
By year: 2019 (1); 2014 (1); 2013 (3); 2012 (2); 2011 (1); 2010 (2); 2009 (2); 2008 (2); 2007 (1); 2006 (1); 2005 (1); 2002 (1); 2001 (1).
19 results found (search time: 15 ms).
1.
2.
Population models are widely applied in biomedical data analysis since they characterize both the average and individual responses of a population of subjects. In the absence of a reliable mechanistic model, one can resort to the Bayesian nonparametric approach that models the individual curves as Gaussian processes. This paper develops an efficient computational scheme for estimating the average and individual curves from large data sets collected in standardized experiments, i.e. with a fixed sampling schedule. It is shown that the overall scheme exhibits a “client-server” architecture. The server is in charge of handling and processing the collective database of past experiments. The clients ask the server for the information needed to reconstruct the individual curve in a single new experiment. This architecture allows the clients to take advantage of the overall data set without violating possible privacy and confidentiality constraints and with negligible computational effort.
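The paper's client-server scheme is not reproduced here, but its basic building block, reconstructing an individual curve as a Gaussian process posterior mean from noisy samples, can be sketched as follows (the Gaussian kernel, length scale, and noise level are illustrative assumptions, not the covariance used in the paper):

```python
import numpy as np

def gp_posterior_mean(t, y, tq, length=0.3, sigma2=0.01):
    """Posterior mean of a GP-modelled curve at query times tq,
    given noisy samples y taken at times t (illustrative kernel)."""
    # Gaussian (RBF) covariance between two sets of time points
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(t, t) + sigma2 * np.eye(len(t))    # sample covariance + noise
    return k(tq, t) @ np.linalg.solve(K, y)  # posterior mean at tq
```

With a fixed sampling schedule, the factorization of `K` could be computed once on the server and reused across subjects, which is the source of the scheme's efficiency.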
3.
Deconvolution allows the reconstruction of non-accessible inputs (e.g. hormone secretion rate) from their causally related measurable effects (e.g. hormone plasma concentration). Deconvolution is challenging in several respects, some general (e.g. determining a suitable trade-off between data fit and solution smoothness to counteract ill-conditioning, and assessing confidence intervals) and some specific to physiological systems (e.g. non-uniform and infrequent data sampling). Recently, a stochastic regularization approach has been proposed and validated to handle these difficulties (De Nicolao et al., Automatica 33 (1997) 851-870). In this paper, an interactive program, WINSTODEC, is presented that allows the clinical investigator to easily obtain the solution of a deconvolution problem by this approach.
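As a rough illustration of regularized deconvolution (not the WINSTODEC implementation; the impulse response, the second-difference penalty, and the regularization parameter are illustrative assumptions), a Tikhonov estimator trading data fit against solution smoothness can be written as:

```python
import numpy as np

def deconvolve(y, g, gamma):
    """Tikhonov-regularized deconvolution of y = G u + noise, where G is
    the lower-triangular Toeplitz matrix built from impulse response g."""
    n = len(y)
    # Convolution matrix: G[i, j] = g[i - j] for j <= i
    G = np.array([[g[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])
    # Second-difference penalty enforces smoothness of the estimated input
    D = np.diff(np.eye(n), n=2, axis=0)
    # Minimizer of ||y - G u||^2 + gamma ||D u||^2
    return np.linalg.solve(G.T @ G + gamma * D.T @ D, G.T @ y)
```

Larger `gamma` favours smoother inputs; the stochastic regularization approach cited above provides a statistically grounded way to choose it.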
4.
Standard single-task kernel methods have recently been extended to the case of multitask learning in the context of regularization theory. There are experimental results, especially in biomedicine, showing the benefit of the multitask approach compared to the single-task one. However, a possible drawback is computational complexity. For instance, when regularization networks are used, complexity scales with the cube of the overall number of training data, which may be large when several tasks are involved. The aim of this paper is to derive an efficient computational scheme for an important class of multitask kernels. More precisely, a quadratic loss is assumed and each task consists of the sum of a common term and a task-specific one. Within a Bayesian setting, a recursive online algorithm is obtained, which updates both estimates and confidence intervals as new data become available. The algorithm is tested on two simulated problems and a real data set concerning xenobiotic administration in human patients.
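The recursive online algorithm itself is specific to the paper's multitask kernel, but the general pattern, a Bayesian update that refreshes both the estimate and its covariance (hence the confidence intervals) as each sample arrives, can be sketched for a simple linear-Gaussian model (all names and values are illustrative):

```python
import numpy as np

def bayes_update(m, P, phi, y, sigma2):
    """One recursive step: prior theta ~ N(m, P), scalar observation
    y = phi . theta + e with e ~ N(0, sigma2); returns updated (m, P)."""
    S = phi @ P @ phi + sigma2          # innovation variance
    Kg = P @ phi / S                    # gain vector
    m_new = m + Kg * (y - phi @ m)      # updated estimate
    P_new = P - np.outer(Kg, phi @ P)   # updated covariance
    return m_new, P_new
```

The shrinking diagonal of `P` is what tightens the confidence intervals as data accumulate.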
5.
6.
Many wavelet-based algorithms have been proposed in recent years to solve the problem of function estimation from noisy samples. In particular it has been shown that threshold approaches lead to asymptotically optimal estimation and are extremely effective when dealing with real data. Working under a Bayesian perspective, in this paper we first study optimality of the hard and soft thresholding rules when the function is modelled as a stochastic process with known covariance function. Next, we consider the case where the covariance function is unknown, and propose a novel approach that models the covariance as a certain wavelet combination estimated from data by Bayesian model selection. Simulated data are used to show that the new method outperforms traditional threshold approaches as well as other wavelet-based Bayesian techniques proposed in the literature.
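The hard and soft thresholding rules studied here have simple closed forms when applied to a vector of wavelet coefficients `c` with threshold `t` (here an illustrative parameter rather than one derived from the noise level):

```python
import numpy as np

def hard_threshold(c, t):
    """Keep coefficients whose magnitude exceeds t, zero the rest."""
    return np.where(np.abs(c) > t, c, 0.0)

def soft_threshold(c, t):
    """Shrink every coefficient toward zero by t, zeroing the small ones."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```

Hard thresholding preserves the surviving coefficients exactly, while soft thresholding additionally shrinks them by `t`, which is the trade-off the Bayesian analysis in the paper examines.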
7.
This paper presents a new regularized kernel-based approach for the estimation of the second order moments of stationary stochastic processes. The proposed estimator is defined by a Tikhonov-type variational problem. It contains only a few unknown parameters, which can be estimated by cross validation by solving a sequence of problems whose computational complexity scales linearly with the number of noisy moments (derived from the samples of the process). The correlation functions are assumed to be summable and the hypothesis space is a reproducing kernel Hilbert space induced by the recently introduced stable spline kernel. In this way, information on the decay to zero of the functions to be reconstructed is incorporated in the estimation process. An application to the identification of transfer functions in the case of white noise as input is also presented. Numerical simulations show that the proposed method compares favorably with standard nonparametric estimation algorithms that exploit an oracle-type tuning of the parameters.
8.
We consider the semi-blind deconvolution problem, i.e., estimating an unknown input function to a linear dynamical system using a finite set of linearly related measurements, where the dynamical system is known up to some system parameters. Without further assumptions, this problem is often ill-posed and ill-conditioned. We overcome this difficulty by modeling the unknown input as a realization of a stochastic process with a covariance that is known up to some finite set of covariance parameters. We first present an empirical Bayes method where the unknown parameters are estimated by maximizing the marginal likelihood/posterior and subsequently the input is reconstructed via a Tikhonov estimator (with the parameters set to their point estimates). Next, we introduce a Bayesian method that recovers the posterior probability distribution, and hence the minimum variance estimates, for both the unknown parameters and the unknown input function. Both of these methods use the eigenfunctions of the random process covariance to obtain an efficient representation of the unknown input function and its probability distributions. Simulated case studies are used to test the two methods and compare their relative performance.
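A minimal empirical Bayes sketch in the spirit of the first method, with a grid search over a single covariance scale and an RBF covariance standing in for the paper's parametrized one (all names and values are illustrative):

```python
import numpy as np

def neg_log_marglik(lam, y, K, sigma2):
    """Negative log marginal likelihood of y under y = u + e,
    u ~ N(0, lam*K), e ~ N(0, sigma2*I)."""
    S = lam * K + sigma2 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + y @ np.linalg.solve(S, y))

def empirical_bayes_estimate(y, K, sigma2, grid):
    """Pick the covariance scale by marginal likelihood, then reconstruct
    the input via the corresponding posterior-mean (Tikhonov) estimator."""
    lam = min(grid, key=lambda l: neg_log_marglik(l, y, K, sigma2))
    S = lam * K + sigma2 * np.eye(len(y))
    u_hat = lam * K @ np.linalg.solve(S, y)  # posterior mean of u
    return lam, u_hat
```

The full Bayesian method described next in the abstract would instead integrate over `lam` rather than plugging in a point estimate.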
9.
Most of the currently used techniques for linear system identification are based on classical estimation paradigms coming from mathematical statistics. In particular, maximum likelihood and prediction error methods represent the mainstream approaches to identification of linear dynamic systems, with a long history of theoretical and algorithmic contributions. Parallel to this, in the machine learning community alternative techniques have been developed. Until recently, there has been little contact between these two worlds. The first aim of this survey is to make accessible to the control community the key mathematical tools and concepts as well as the computational aspects underpinning these learning techniques. In particular, we focus on kernel-based regularization and its connections with reproducing kernel Hilbert spaces and Bayesian estimation of Gaussian processes. The second aim is to demonstrate that learning techniques tailored to the specific features of dynamic systems may outperform conventional parametric approaches for identification of stable linear systems.
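As a concrete instance of kernel-based regularization for stable linear systems, a first-order stable spline (TC-type) kernel estimate of an FIR impulse response can be sketched as follows (the kernel form and all parameter values here are illustrative assumptions):

```python
import numpy as np

def estimate_impulse_response(u, y, n, alpha, gamma):
    """Regularized FIR estimate g of length n from input u and output y,
    using the kernel K[i,j] = alpha**max(i,j), 0 < alpha < 1, which
    encodes exponential decay of the impulse response."""
    N = len(y)
    # Regression matrix: y[t] = sum_k g[k] * u[t-k]
    Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                    for t in range(N)])
    K = np.array([[alpha ** max(i, j) for j in range(n)] for i in range(n)])
    # Bayesian/Tikhonov estimate: posterior mean of g under g ~ N(0, K)
    return K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + gamma * np.eye(N), y)
```

The decay rate `alpha` and noise-related weight `gamma` are the hyperparameters that, in the kernel-based framework surveyed here, would be tuned by marginal likelihood or cross validation rather than fixed by hand.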
10.
In this paper we focus on collaborative multi-agent systems, where agents are distributed over a region of interest and collaborate to achieve a common estimation goal. In particular, we introduce two consensus-based distributed linear estimators. The first one is designed for a Bayesian scenario, where an unknown common finite-dimensional parameter vector has to be reconstructed, while the second one regards the nonparametric reconstruction of an unknown function sampled at different locations by the sensors. Both of the algorithms are characterized in terms of the trade-off between estimation performance and communication, computation and memory complexity. In the finite-dimensional setting, we derive mild sufficient conditions which ensure that a distributed estimator performs better than the locally optimal ones in terms of estimation error variance. In the nonparametric setting, we introduce an online algorithm that allows the agents to simultaneously compute the function estimate with small computational, communication and data storage efforts, as well as to quantify its distance from the centralized estimate given by a Regularization Network, one of the most powerful regularized kernel methods. These results are obtained by deriving bounds on the estimation error that provide insights on how the uncertainty inherent in a sensor network, such as imperfect knowledge of the number of agents and of the measurement models used by the sensors, can degrade the performance of the estimation process. Numerical experiments are included to support the theoretical findings.
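The two estimators in the paper are more elaborate, but the consensus primitive underlying them, repeated neighbourhood averaging with a doubly stochastic weight matrix, is easy to sketch (the ring topology and uniform weights below are illustrative):

```python
import numpy as np

def consensus(x0, W, iters):
    """Each agent repeatedly replaces its state with a weighted average of
    its neighbours' states; for doubly stochastic W on a connected graph,
    all states converge to the network-wide average of x0."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x
```

In the distributed estimators, the quantities being averaged are local sufficient statistics rather than raw states, but the convergence mechanism is the same.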