Similar Documents (20 results)
1.
Split Bregman method for large scale fused Lasso
Ordering of regression or classification coefficients occurs in many real-world applications. Fused Lasso exploits this ordering by explicitly regularizing the differences between neighboring coefficients through an ℓ1 norm regularizer. However, due to nonseparability and nonsmoothness of the regularization term, solving the fused Lasso problem is computationally demanding. Existing solvers can only deal with problems of small or medium size, or a special case of the fused Lasso problem in which the predictor matrix is the identity matrix. In this paper, we propose an iterative algorithm based on the split Bregman method to solve a class of large-scale fused Lasso problems, including a generalized fused Lasso and a fused Lasso support vector classifier. We derive our algorithm using an augmented Lagrangian method and prove its convergence properties. The performance of our method is tested on both artificial data and real-world applications including proteomic data from mass spectrometry and genomic data from array comparative genomic hybridization (array CGH). We demonstrate that our method is many times faster than the existing solvers, and show that it is especially efficient for large p, small n problems, where p is the number of variables and n is the number of samples.
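For readers who want to see the shape of the iteration, the following is a minimal NumPy sketch of a split Bregman (ADMM-style) solver for a generalized fused lasso objective. The penalty weights lam1 and lam2, the step parameter mu, the fixed iteration count, and the synthetic data are illustrative assumptions, not the tuned implementation evaluated in the paper.

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fused_lasso_split_bregman(X, y, lam1=0.1, lam2=0.1, mu=1.0, iters=200):
    """Minimize 0.5*||X b - y||^2 + lam1*||b||_1 + lam2*||D b||_1,
    where D takes differences of neighboring coefficients."""
    n, p = X.shape
    D = np.diff(np.eye(p), axis=0)            # (p-1) x p difference matrix
    a = np.zeros(p); b_aux = np.zeros(p - 1)  # split variables for b and D b
    u = np.zeros(p); v = np.zeros(p - 1)      # scaled Bregman (dual) variables
    # The beta-update solves the same linear system every iteration.
    A = X.T @ X + mu * np.eye(p) + mu * D.T @ D
    for _ in range(iters):
        rhs = X.T @ y + mu * (a - u) + mu * D.T @ (b_aux - v)
        beta = np.linalg.solve(A, rhs)
        a = soft(beta + u, lam1 / mu)           # prox step for the l1 term
        b_aux = soft(D @ beta + v, lam2 / mu)   # prox step for the fusion term
        u += beta - a                           # Bregman updates
        v += D @ beta - b_aux
    return beta

# Tiny usage example on synthetic data with a piecewise-constant signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
truth = np.repeat([0.0, 2.0, -1.0, 0.0], 5)
y = X @ truth + 0.1 * rng.standard_normal(50)
print(np.round(fused_lasso_split_bregman(X, y, lam1=0.5, lam2=2.0), 2))
```

Note that only the beta-update touches X; the two remaining steps are closed-form soft-thresholdings, which is what makes this style of splitting attractive for large problems.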

2.
In this paper, the optimal strategies for discrete-time linear quadratic zero-sum games related to the H-infinity optimal control problem are solved in forward time without knowing the system dynamical matrices. The idea is to solve for an action-dependent value function Q(x,u,w) of the zero-sum game instead of solving for the state-dependent value function V(x) which satisfies a corresponding game algebraic Riccati equation (GARE). Since the state and action spaces are continuous, two action networks and one critic network are used that are adaptively tuned in forward time using adaptive critic methods. The result is a Q-learning approximate dynamic programming (ADP) model-free approach that solves the zero-sum game forward in time. It is shown that the critic converges to the game value function and the action networks converge to the Nash equilibrium of the game, and proofs of convergence of the algorithm are given. It is also proven that the algorithm is, in effect, a model-free iterative algorithm for solving the GARE of the linear quadratic discrete-time zero-sum game. The effectiveness of this method is shown by performing an H-infinity control autopilot design for an F-16 aircraft.
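As a hedged illustration of the object being approximated, the sketch below runs a plain model-based fixed-point iteration on the game algebraic Riccati equation (GARE); the paper's contribution is to reach the same solution model-free via Q-learning, so this is only a reference computation. The system matrices A, B, E and the weights Q, R, gamma are made-up values, not the F-16 autopilot model.

```python
import numpy as np

def zero_sum_gare(A, B, E, Q, R, gamma, iters=500):
    """Fixed-point iteration for the discrete-time zero-sum LQ game:
    x_{k+1} = A x + B u + E w,  stage cost x'Qx + u'Ru - gamma^2 w'w.
    Returns the value matrix P and the Nash gains (u = -K x, w = -L x)."""
    n = A.shape[0]
    P = np.zeros((n, n))
    for _ in range(iters):
        # Blocks of the quadratic Q-function Q(x,u,w) built from the current P.
        Huu = R + B.T @ P @ B
        Hww = E.T @ P @ E - gamma**2 * np.eye(E.shape[1])
        Huw = B.T @ P @ E
        Hux = B.T @ P @ A
        Hwx = E.T @ P @ A
        M = np.block([[Huu, Huw], [Huw.T, Hww]])
        N = np.vstack([Hux, Hwx])
        KL = np.linalg.solve(M, N)            # stacked gains [K; L]
        P_next = Q + A.T @ P @ A - N.T @ KL   # Riccati-type recursion
        if np.max(np.abs(P_next - P)) < 1e-10:
            P = P_next
            break
        P = P_next
    K = KL[:B.shape[1]]
    L = KL[B.shape[1]:]
    return P, K, L

# Illustrative (assumed) system matrices.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
E = np.array([[0.1], [0.0]])
P, K, L = zero_sum_gare(A, B, E, Q=np.eye(2), R=np.eye(1), gamma=5.0)
print(np.round(P, 3))
```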

3.
R package flexmix provides flexible modelling of finite mixtures of regression models using the EM algorithm. Several new features of the software, such as fixed and nested varying effects for mixtures of generalized linear models and multinomial regression for a priori probabilities given concomitant variables, are introduced. The use of the software, together with model selection, is demonstrated on a logistic regression example.
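flexmix itself is an R package; as a language-neutral illustration of the idea it implements, here is a small Python/NumPy sketch of the EM algorithm for a two-component mixture of Gaussian linear regressions. The component count, initialization, regularization safeguards and synthetic data are assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm

def em_mixture_regression(X, y, K=2, iters=100, seed=0):
    """EM for a K-component mixture of linear regressions with Gaussian noise."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = rng.standard_normal((K, p))        # per-component coefficients
    sigma = np.ones(K)                        # per-component noise std
    pi = np.full(K, 1.0 / K)                  # mixing proportions
    for _ in range(iters):
        # E-step: responsibilities r[i, k] = P(component k | observation i).
        dens = np.stack([pi[k] * norm.pdf(y, X @ beta[k], sigma[k])
                         for k in range(K)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component (tiny ridge for stability).
        for k in range(K):
            w = r[:, k]
            G = X.T @ (w[:, None] * X) + 1e-8 * np.eye(p)
            beta[k] = np.linalg.solve(G, X.T @ (w * y))
            resid = y - X @ beta[k]
            sigma[k] = np.sqrt(np.sum(w * resid**2) / (w.sum() + 1e-12))
        pi = r.mean(axis=0)
    return beta, sigma, pi

# Usage on synthetic data drawn from two different regression lines.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.uniform(-1, 1, 200)])
z = rng.integers(0, 2, 200)
y = np.where(z == 0, X @ [1.0, 3.0], X @ [-1.0, -2.0]) + 0.2 * rng.standard_normal(200)
print(np.round(em_mixture_regression(X, y)[0], 2))
```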

4.
In this paper, sampled-data control of a set of continuous-time LTI systems is considered. It is assumed that a predefined guaranteed continuous-time quadratic cost function, which is, in fact, the sum of the performance indices for all systems, is given. The main objective here is to design a decentralized periodic output feedback controller with a prespecified form, e.g., polynomial, piecewise constant, exponential, etc., which minimizes the above-mentioned guaranteed cost function. This problem is first formulated as a set of matrix inequalities, and then, by using a well-known technique, it is reformulated as an LMI problem. The set of linear matrix inequalities obtained provides necessary and sufficient conditions for the existence of a decentralized optimal simultaneous stabilizing controller with the prespecified form (rather than a general form). Moreover, an algorithm is presented to solve the resultant LMI problem. Finally, the efficiency of the proposed method is demonstrated in two numerical examples.

5.
Based on the method of (n,k)-universal sets, we present a deterministic parameterized algorithm for the weighted r-D matching problem with time complexity O(4^((r−1)k+o(k))), improving the previous best upper bound O(4^(rk+o(k))). In particular, the algorithm applied to the unweighted 3-D matching problem results in a deterministic algorithm with running time O(16^(k+o(k))), improving the previous best result O(21.26^k). For the weighted r-set packing problem, we present a deterministic parameterized algorithm with time complexity O(2^((2r−1)k+o(k))), improving the previous best result O(2^(2rk+o(k))). The algorithm, when applied to the unweighted 3-set packing problem, has running time O(32^(k+o(k))), improving the previous best result O(43.62^(k+o(k))). Moreover, for the weighted r-set packing and weighted r-D matching problems, we give a kernel of size O(k^r), which is the first kernelization result for the weighted versions of these problems.

6.
In this paper, we present new multivariate quantile distributions and utilise likelihood-free Bayesian algorithms for inferring the parameters. In particular, we apply a sequential Monte Carlo (SMC) algorithm that is adaptive in nature and requires very little tuning compared with other approximate Bayesian computation algorithms. Furthermore, we present a framework for the development of multivariate quantile distributions based on a copula. We consider bivariate and time series extensions of the g-and-k distribution under this framework, and develop an efficient component-wise updating scheme free of likelihood functions to be used within the SMC algorithm. In addition, we trial the set of octiles as summary statistics as well as functions of these that form robust measures of location, scale, skewness and kurtosis. We show that these modifications lead to reasonably precise inferences that are more closely comparable to computationally intensive likelihood-based inference. We apply the quantile distributions and algorithms to simulated data and an example involving daily exchange rate returns.
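The sketch below is limited to the univariate g-and-k building block: its quantile function, simulation by pushing uniforms through that function (which is why likelihood-free methods are needed), and octile-based summaries of the kind mentioned in the abstract. The parameter values and the exact form of the robust location/scale/skewness/kurtosis measures are assumptions for illustration, not the paper's multivariate copula construction or SMC sampler.

```python
import numpy as np
from scipy.stats import norm

def gk_quantile(u, a, b, g, k, c=0.8):
    """Quantile function of the univariate g-and-k distribution."""
    z = norm.ppf(u)
    return a + b * (1 + c * (1 - np.exp(-g * z)) / (1 + np.exp(-g * z))) * (1 + z**2) ** k * z

def simulate_gk(n, theta, rng):
    """Simulate by pushing uniforms through the quantile function
    (the density has no closed form, hence likelihood-free inference)."""
    return gk_quantile(rng.uniform(size=n), *theta)

def octile_summaries(x):
    """Octiles E1..E7 plus robust location/scale/skewness/kurtosis measures."""
    E = np.quantile(x, np.arange(1, 8) / 8)
    loc = E[3]                                  # median (E4)
    scale = E[5] - E[1]                         # E6 - E2
    skew = (E[5] + E[1] - 2 * E[3]) / scale     # robust skewness
    kurt = (E[6] - E[4] + E[2] - E[0]) / scale  # robust kurtosis
    return np.concatenate([E, [loc, scale, skew, kurt]])

rng = np.random.default_rng(0)
data = simulate_gk(5000, theta=(3.0, 1.0, 2.0, 0.5), rng=rng)
print(np.round(octile_summaries(data), 3))
```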

7.
Stochastic volatility (SV) models have been considered as a real alternative to the time-varying volatility of the ARCH family. Existing asymmetric SV (ASV) models treat volatility asymmetry via the leverage effect hypothesis. Generalised ASV models that account for both volatility asymmetry and violations of normality, expressed simultaneously by skewness and excess kurtosis, are introduced. The new generalised ASV models are estimated using the Bayesian Markov chain Monte Carlo approach for parameter and log-volatility estimation. Using simulated and real financial data series, the new models are compared to existing SV models with respect to their statistical properties and their estimation performance in within-sample and out-of-sample periods. Results show that there is much to gain from the introduction of the generalised ASV models.
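For orientation, the following sketch simulates only the basic asymmetric SV model with a leverage effect (correlated return and volatility shocks); the generalised ASV models in the paper additionally allow skewed and heavy-tailed errors, and the parameter values used here are assumptions.

```python
import numpy as np

def simulate_asv(T, mu=-1.0, phi=0.97, sigma_eta=0.15, rho=-0.4, seed=0):
    """Simulate a basic asymmetric SV model with leverage:
        y_t = exp(h_t / 2) * eps_t,
        h_{t+1} = mu + phi * (h_t - mu) + eta_t,
    where (eps_t, eta_t) are bivariate normal with correlation rho; rho < 0
    is the leverage effect (negative returns raise future volatility)."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho * sigma_eta],
                    [rho * sigma_eta, sigma_eta**2]])
    shocks = rng.multivariate_normal([0.0, 0.0], cov, size=T)
    h = np.empty(T)
    h[0] = mu
    for t in range(T - 1):
        h[t + 1] = mu + phi * (h[t] - mu) + shocks[t, 1]
    y = np.exp(h / 2) * shocks[:, 0]
    return y, h

y, h = simulate_asv(1000)
print("sample kurtosis of returns:",
      round(float(np.mean((y - y.mean())**4) / y.var()**2), 2))
```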

8.
An adaptive controller based on multi-input fuzzy rules emulated networks (MIFRENs) is introduced for omni-directional mobile robot systems in the discrete-time domain, without any kinematic or dynamic models. An approximate model for the unknown system is developed using two MIFRENs with an online learning algorithm, together with a stability analysis. A main theorem is proposed to guarantee closed-loop performance and system robustness for all adjustable parameters inside the MIFRENs. The approach is validated on an experimental setup with a FESTO omni-directional mobile robot called Robotino®. The proposed algorithm is shown to have superior performance compared to that of an algorithm that uses only an embedded controller, and the advantage of the MIFREN initial setting is verified by comparing its results with those of a controller based on neural networks.

9.
The selection of a subset of input variables is often based on the previous construction of a ranking to order the variables according to a given criterion of relevancy. The objective is then to linearize the search, estimating the quality of subsets containing the topmost ranked variables. An algorithm devised to rank input variables according to their usefulness in the context of a learning task is presented. This algorithm is the result of a combination of simple and classical techniques, like correlation and orthogonalization, which allow the construction of a fast algorithm that also deals explicitly with redundancy. Additionally, the proposed ranker is endowed with a simple polynomial expansion of the input variables to cope with nonlinear problems. The comparison with some state-of-the-art rankers showed that this combination of simple components is able to yield high-quality rankings of input variables. The experimental validation is made on a wide range of artificial data sets and the quality of the rankings is assessed using a ROC-inspired setting, to avoid biased estimations due to any particular learning algorithm.
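The sketch below shows one way the core loop of such a ranker can look: repeatedly pick the candidate most correlated with the target, then orthogonalize the remaining candidates against it so that redundant variables sink in the ranking. The omission of the polynomial expansion and the exact selection criterion are simplifying assumptions; this is not the paper's exact algorithm.

```python
import numpy as np

def rank_variables(X, y):
    """Greedy ranking: repeatedly select the input most correlated with the
    target, then project that input out of the remaining candidates so that
    redundant (collinear) variables are pushed down the ranking."""
    Xr = X - X.mean(axis=0)          # work on centered copies
    yr = y - y.mean()
    remaining = list(range(X.shape[1]))
    ranking = []
    while remaining:
        # Correlation of each remaining (residualized) column with the target.
        norms = np.linalg.norm(Xr[:, remaining], axis=0)
        norms[norms < 1e-12] = np.inf            # fully-explained columns drop out
        corr = np.abs(Xr[:, remaining].T @ yr) / (norms * np.linalg.norm(yr) + 1e-12)
        best = remaining[int(np.argmax(corr))]
        ranking.append(best)
        remaining.remove(best)
        # Orthogonalize the remaining columns against the selected one.
        v = Xr[:, best]
        denom = v @ v
        if denom > 1e-12:
            for j in remaining:
                Xr[:, j] = Xr[:, j] - (Xr[:, j] @ v) / denom * v
    return ranking

# Usage: columns 0 and 2 are near-duplicates, so whichever is picked first
# pushes the other to the bottom of the ranking.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(200); x1 = rng.standard_normal(200)
X = np.column_stack([x0, x1, x0 + 0.01 * rng.standard_normal(200)])
y = 2 * x0 + x1
print(rank_variables(X, y))
```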

10.
In this paper, partially varying-coefficient single-index proportional hazards regression models are discussed. All unknown functions are fitted by polynomial B-splines. The index parameters and B-spline coefficients are estimated by the partial likelihood method and a two-step Newton-Raphson algorithm. Consistency and asymptotic normality of the estimators of all the parameters are derived. Through a simulation study and the VA data example, we illustrate that the proposed estimation procedure is accurate, rapid and stable.

11.
We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice is a function, defined by the online algorithm, of the whole request sequence. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in terms of bits of information per request, and the (improved) competitive ratio. Since b = 0 corresponds to the classical online model, and b = ⌈log|A|⌉, where A is the algorithm's action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones. In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1 ≤ b ≤ Θ(log n), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio Ω(log(n)/b) and we present a deterministic online algorithm for MTS with competitive ratio O(log(n)/b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio k^(O(1/b)) for any choice of Θ(1) ≤ b ≤ log k.

12.
13.
Given a molecule, which consists of a set of atoms, a molecular surface is defined for a spherical probe approximating a solvent molecule. The molecular surface is used both for visualizing the molecule and for computing molecular properties such as the area and volume of a protein, which are important for studying problems such as protein docking and folding. In this paper, we present a worst-case O(n) time algorithm for triangulating a molecular surface based on the combinatorial information provided by the β-shape of the molecule with n atoms. The proposed algorithm takes advantage of the concise representation of the topology among atoms stored in the β-shape. A molecular surface consists of two parts: a blending surface consisting of blending patches and a (solvent) contact surface consisting of (solvent) contact patches. For each blending patch, the algorithm uses compact masks to construct a triangular mesh in O(c) worst-case time, where c is the number of point evaluations on the blending patch. For each contact patch, the algorithm uses a template, for each atom type, for the triangulation of the boundary of the atom. The triangular mesh is then trimmed by hyperplanes, where each hyperplane corresponds to an arc of the boundary of the contact patch. The triangulation of a contact patch takes O(c) worst-case time, where c is the number of point evaluations on the boundary of an atom. Since there are at most O(n) patches, the worst-case time complexity is O(n). The proposed algorithm also handles internal voids and guarantees the watertightness of the produced triangular mesh of a molecular surface. In addition, level-of-detail is easily achieved as a by-product of the proposed scheme. The proposed algorithm is fully implemented and statistics from experiments are also collected.

14.
In this work, we study the Abelian Sandpile Model from the point of view of computational complexity. We begin by studying the length distribution of sandpile avalanches triggered by the addition of two critical configurations: we prove that those avalanches are long on average, and that their length is bounded below by a constant fraction of the length of the longest critical avalanche, which is, in most cases, superlinear. At the end of the paper we take the point of view of computational complexity: we analyze the algorithmic hardness of computing the addition of two critical configurations, prove that this problem is P-complete, and prove that most algorithmic problems related to the Abelian Sandpile Model are NC-reducible to it.
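As a concrete illustration of the operation the complexity results are about, the sketch below adds two stable sandpile configurations on a small grid and relaxes the sum by toppling, counting the number of topplings as the avalanche length. The grid size and the use of generic near-full stable configurations (rather than true recurrent/critical ones) are simplifications for the example.

```python
import numpy as np

def relax(grid):
    """Topple until every cell holds at most 3 grains (4 neighbors on a grid,
    grains falling off the boundary are lost).  Returns the stable grid and
    the avalanche length, counted here as the number of topplings."""
    grid = grid.copy()
    topplings = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return grid, topplings
        for i, j in unstable:
            grid[i, j] -= 4
            topplings += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1

rng = np.random.default_rng(0)
# Two stable, near-full configurations on a 20x20 grid (entries in {2, 3}).
c1 = rng.integers(2, 4, size=(20, 20))
c2 = rng.integers(2, 4, size=(20, 20))
stable, length = relax(c1 + c2)
print("avalanche length (topplings):", length)
```

Because the model is abelian, the final stable configuration and the toppling count do not depend on the order in which unstable cells are processed.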

15.
The semiparametric reproductive dispersion mixed-effects model (SPRDMM) is an extension of the reproductive dispersion model and the semiparametric mixed model, and it includes many commonly encountered models as special cases. A Bayesian procedure is developed for analyzing SPRDMMs on the basis of P-spline estimates of the nonparametric components. A hybrid algorithm combining the Gibbs sampler and the Metropolis-Hastings algorithm is used to simultaneously obtain the Bayesian estimates of the unknown parameters, smoothing function and random effects, as well as their standard error estimates. The Bayes factor for model comparison, computed via path sampling, is employed to select a better approximation of the smoothing function. Several simulation studies and a real example are used to illustrate the proposed methodologies.

16.
The satisfiability problem is a basic core NP-complete problem. In recent years, many heuristic algorithms have been developed to solve this problem, and many experiments have evaluated and compared the performance of different heuristic algorithms. However, rigorous theoretical analysis and comparison are rare. This paper analyzes and compares the expected runtime of three basic heuristic algorithms: RandomWalk, (1+1) EA, and a hybrid algorithm. The runtime analysis of these heuristic algorithms on two 2-SAT instances shows that their expected runtime can be either polynomial or exponential, and that the heuristic algorithms have their own advantages and disadvantages on different SAT instances. It is also shown that the expected runtime of RandomWalk on an arbitrary k-SAT instance (k ≥ 3) is bounded above by O((k−1)^n), and a k-SAT instance on which RandomWalk has expected runtime Θ((k−1)^n) is presented.
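For reference, this is the RandomWalk heuristic being analyzed, in a minimal form: pick any unsatisfied clause and flip a variable chosen uniformly from it. The clause encoding, cutoff, and toy instance are assumptions for the example.

```python
import random

def random_walk_sat(clauses, n_vars, max_flips=100000, seed=0):
    """RandomWalk heuristic: start from a random assignment; while some clause
    is unsatisfied, pick one such clause and flip a variable chosen uniformly
    from it.  Clauses are lists of non-zero ints, negative = negated literal."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 1..n_vars

    def satisfied(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    for flips in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign, flips          # solution found
        lit = rng.choice(rng.choice(unsat))
        assign[abs(lit)] = not assign[abs(lit)]
    return None, max_flips                # gave up

# Tiny 2-SAT usage example: (x1 or x2) and (not x1 or x2) and (x1 or not x2).
clauses = [[1, 2], [-1, 2], [1, -2]]
print(random_walk_sat(clauses, n_vars=2))
```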

17.
This paper proposes a new multi-robot coordinated exploration algorithm that applies a global optimization strategy based on K-Means clustering to guarantee a balanced and sustained exploration of big workspaces. The algorithm optimizes the on-line assignment of robots to targets, keeps the robots working in separate areas and efficiently reduces the variance of average waiting time on those areas. The latter ensures that the different areas of the workspace are explored at a similar speed, thus avoiding that some areas are explored much later than others, something desirable for many exploration applications, such as search & rescue. The algorithm leads to the lowest variance of regional waiting time (WTV) and the lowest variance of regional exploration percentages (EPV). Both features are presented through a comparative evaluation of the proposed algorithm with different state-of-the-art approaches.
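A hedged sketch of the central step described above: cluster the exploration targets into as many regions as there are robots with K-Means and hand each robot the nearest unclaimed region. The data, the greedy robot-to-region assignment rule, and the plain K-Means implementation are assumptions for illustration, not the paper's full coordination algorithm.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain K-Means: returns cluster centers and point labels."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return centers, labels

def assign_robots_to_regions(robots, targets):
    """Cluster exploration targets into len(robots) regions, then give each
    robot the region whose center is closest to it (greedy, one per robot)."""
    centers, labels = kmeans(targets, k=len(robots))
    assignment = {}
    free = list(range(len(centers)))
    for r, pos in enumerate(robots):
        dists = [np.linalg.norm(pos - centers[c]) for c in free]
        chosen = free.pop(int(np.argmin(dists)))
        assignment[r] = targets[labels == chosen]
    return assignment

rng = np.random.default_rng(1)
targets = rng.uniform(0, 10, size=(40, 2))      # frontier cells to visit
robots = np.array([[0.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
for r, pts in assign_robots_to_regions(robots, targets).items():
    print(f"robot {r}: {len(pts)} targets")
```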

18.
A novel class of models for multivariate time series is presented. We consider hierarchical mixture-of-experts (HME) models in which the experts, or building blocks of the model, are vector autoregressions (VAR). It is assumed that the VAR-HME model partitions the covariate space, specifically including time as a covariate, into overlapping regions called overlays. In each overlay a given number of VAR experts compete with each other so that the most suitable one for the overlay is favored by a large weight. The weights have a particular parametric form that allows the modeler to include relevant covariates. Estimation of the model parameters is achieved via the EM (expectation-maximization) algorithm. A new algorithm to select the optimal number of overlays, the number of VAR models and the model orders of the VARs that define a particular VAR-HME model configuration is also developed. The algorithm uses the Bayesian information criterion (BIC) as an optimality criterion. Issues of model checking and inference of latent structure in multiple time series are investigated. The new methodology is illustrated by analyzing a synthetic data set and a 7-channel electroencephalogram data set.

19.
Although there has been much research on cluster analysis considering feature (or variable) weights, little effort has been made regarding sample weights in clustering. In practice, not every sample in a data set has the same importance in cluster analysis. Therefore, it is interesting to obtain the proper sample weights for clustering a data set. In this paper, we consider a probability distribution over a data set to represent its sample weights. We then apply the maximum entropy principle to automatically compute these sample weights for clustering. Such a method can generate sample-weighted versions of most clustering algorithms, such as k-means, fuzzy c-means (FCM) and expectation-maximization (EM). The proposed sample-weighted clustering algorithms are robust for data sets with noise and outliers. Furthermore, we also analyze the convergence properties of the proposed algorithms. This study uses numerical data and real data sets for demonstration and comparison. Experimental results and comparisons demonstrate that the proposed sample-weighted clustering algorithms are effective and robust clustering methods.
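The sketch below shows one way a maximum-entropy principle can produce sample weights: with clustering cost c_i for sample i, minimizing sum_i w_i*c_i + beta * sum_i w_i*log(w_i) over the probability simplex gives w_i proportional to exp(-c_i/beta), which down-weights outliers. The temperature beta, the use of k-means as the base algorithm, and the update order are assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def sample_weighted_kmeans(X, k, beta=1.0, iters=50, init=None, seed=0):
    """K-Means variant with per-sample weights chosen by a maximum-entropy
    rule: samples far from their nearest center (likely noise or outliers)
    receive exponentially smaller weight, making the centers more robust."""
    rng = np.random.default_rng(seed)
    centers = (X[rng.choice(len(X), size=k, replace=False)]
               if init is None else np.array(init, dtype=float))
    w = np.full(len(X), 1.0 / len(X))
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        cost = d.min(axis=1)
        # Maximum-entropy weights: minimize sum_i w_i c_i + beta sum_i w_i log w_i
        # subject to sum_i w_i = 1  =>  w_i proportional to exp(-c_i / beta).
        w = np.exp(-cost / beta)
        w /= w.sum()
        for c in range(k):
            mask = labels == c
            if np.any(mask):
                centers[c] = np.average(X[mask], axis=0, weights=w[mask])
    return centers, w

# Usage: two clean clusters plus a far-away outlier that gets a tiny weight.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2)), [[50.0, 50.0]]])
centers, w = sample_weighted_kmeans(X, k=2, init=X[[0, 50]])
print(np.round(centers, 2), "outlier weight:", w[-1])
```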

20.
We show an O*((ℓ+1)^n)-time algorithm for the channel assignment problem, where ℓ is the maximum edge weight. This improves on the previous O*((ℓ+2)^n)-time algorithm by Král (2005) [1], as well as on algorithms for important special cases, like L(2,1)-labeling. For the latter problem, our algorithm works in O*(3^n) time. The progress is achieved by applying the fast zeta transform in combination with the inclusion-exclusion principle.
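The building block named in the abstract is easy to state on its own: the fast (subset) zeta transform computes all subset sums in O(2^n · n) time, and the Möbius transform inverts it, which is what makes inclusion-exclusion over subsets affordable. The sketch below shows just this primitive, not the paper's channel assignment dynamic program.

```python
def zeta_transform(f, n):
    """Fast subset zeta transform: returns g with g[S] = sum of f[T] over all
    T that are subsets of S.  f is a list of length 2**n indexed by bitmask;
    runs in O(2^n * n) instead of the naive O(3^n) / O(4^n)."""
    g = list(f)
    for i in range(n):                       # fold in one element at a time
        for S in range(1 << n):
            if S & (1 << i):
                g[S] += g[S ^ (1 << i)]
    return g

def mobius_transform(g, n):
    """Inverse of the zeta transform (subset Mobius transform)."""
    f = list(g)
    for i in range(n):
        for S in range(1 << n):
            if S & (1 << i):
                f[S] -= f[S ^ (1 << i)]
    return f

# Usage: count subsets contained in S by zeta-transforming the all-ones vector.
n = 4
ones = [1] * (1 << n)
g = zeta_transform(ones, n)
assert g[0b1011] == 2 ** 3          # a 3-element set has 8 subsets
assert mobius_transform(g, n) == ones
```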
