Similar Literature
20 similar documents found (search time: 31 ms)
1.
A memetic approach that combines a genetic algorithm (GA) and quadratic programming is used to address the problem of optimal portfolio selection with cardinality constraints and piecewise linear transaction costs. The framework used is an extension of the standard Markowitz mean–variance model that incorporates realistic constraints, such as upper and lower bounds for investment in individual assets and/or groups of assets, and minimum trading restrictions. The inclusion of constraints that limit the number of assets in the final portfolio and piecewise linear transaction costs transforms the selection of optimal portfolios into a mixed-integer quadratic problem, which cannot be solved by standard optimization techniques. We propose to use a genetic algorithm in which the candidate portfolios are encoded using a set representation to handle the combinatorial aspect of the optimization problem. Besides specifying which assets are included in the portfolio, this representation includes attributes that encode the trading operation (sell/hold/buy) performed when the portfolio is rebalanced. The results of this hybrid method are benchmarked against a range of investment strategies (passive management, the equally weighted portfolio, the minimum variance portfolio, optimal portfolios without cardinality constraints, ignoring transaction costs or obtained with L1 regularization) using publicly available data. The transaction costs and the cardinality constraints provide regularization mechanisms that generally improve the out-of-sample performance of the selected portfolios.
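A minimal sketch of the combinatorial core described above (not the paper's memetic GA): on a small synthetic covariance matrix, enumerate every K-asset subset and solve each subset's unconstrained minimum-variance weights in closed form, keeping the best. The toy data and the closed-form sub-solver are illustrative assumptions; the paper instead evolves subsets with a GA and solves a constrained QP per candidate.

```python
# Brute-force stand-in for the cardinality constraint: enumerate K-asset
# subsets and solve the minimum-variance weights per subset in closed form.
from itertools import combinations
import numpy as np

def min_variance_weights(cov):
    """Closed-form min-variance weights: w = C^-1 1 / (1' C^-1 1)."""
    inv_one = np.linalg.solve(cov, np.ones(len(cov)))
    return inv_one / inv_one.sum()

def best_k_subset(cov, k):
    n = cov.shape[0]
    best = (np.inf, None, None)
    for subset in combinations(range(n), k):
        idx = np.array(subset)
        sub = cov[np.ix_(idx, idx)]
        w = min_variance_weights(sub)
        var = w @ sub @ w
        if var < best[0]:
            best = (var, idx, w)
    return best

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
cov = A @ A.T + 5 * np.eye(5)        # well-conditioned toy covariance
var, idx, w = best_k_subset(cov, k=3)
```

Enumeration is exponential in the number of assets, which is exactly why the paper resorts to a GA over set representations for realistic universes.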

2.
The portfolio rebalancing problem deals with resetting the proportions of different assets in a portfolio in response to changing market conditions. The constraints included in the portfolio rebalancing problem are basic, cardinality, bounding, class, and proportional transaction cost constraints. In this study, a new heuristic algorithm named the wavelet evolutionary network (WEN) is proposed for solving the complex constrained portfolio rebalancing problem. First, the empirical covariance matrix, one of the key inputs to the problem, is estimated using the wavelet shrinkage denoising technique to obtain better optimal portfolios. Second, the complex cardinality constraint is eliminated using k-means cluster analysis. Finally, the WEN strategy with logical procedures is employed to find the initial proportions of investment in the portfolio of assets and to rebalance them after a certain period. Experimental studies of WEN are undertaken on Bombay Stock Exchange, India (BSE200 index, period: July 2001–July 2006) and Tokyo Stock Exchange, Japan (Nikkei225 index, period: March 2002–March 2007) data sets. The results obtained using WEN are compared with its only existing counterpart, the Hopfield evolutionary network (HEN) strategy, and verify that WEN performs better than HEN. In addition, different performance metrics and data envelopment analysis are employed to demonstrate the robustness and efficiency of WEN over the HEN strategy.
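A sketch of the cardinality-reduction step mentioned above: group assets by k-means on their return series and keep one representative per cluster, shrinking the investable universe. The k-means implementation is plain NumPy with a deterministic spread-out initialization (a simplifying assumption), and the returns matrix is synthetic, not BSE/Nikkei data.

```python
# k-means over asset return series; one representative asset per cluster
# reduces the cardinality of the portfolio selection problem.
import numpy as np

def kmeans(X, k, iters=50):
    # deterministic spread-out init: evenly spaced rows (a simplification)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# 12 assets, 60 daily returns; three latent groups of correlated assets
base = rng.normal(size=(3, 60))
returns = np.repeat(base, 4, axis=0) + 0.1 * rng.normal(size=(12, 60))
labels, centers = kmeans(returns, k=3)
# representative asset per cluster = the asset nearest its centroid
reps = [int(np.where(labels == j)[0][
        np.linalg.norm(returns[labels == j] - centers[j], axis=1).argmin()])
        for j in range(3)]
```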

3.
To address the sparse system identification problem in a non-Gaussian impulsive noise environment, the recursive generalized maximum correntropy criterion (RGMCC) algorithm with sparse penalty constraints is proposed to combat impulse-induced instability. Specifically, a recursive algorithm based on the generalized correntropy with a forgetting factor on the error is developed to improve on sparsity-aware maximum correntropy criterion algorithms by achieving a robust steady-state error. Considering an unknown sparse system, the l1-norm and the correntropy-induced metric are employed in the RGMCC algorithm to exploit sparsity and mitigate impulsive noise simultaneously. Numerical simulations show that the proposed algorithm is robust and provides stable steady-state estimation performance.
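An illustrative sketch of the correntropy idea (not the paper's recursive RGMCC, and without its sparse penalty): an LMS-style update weighted by a Gaussian kernel of the error, so that impulsive outliers shrink the step size toward zero instead of destabilizing the estimate. The system, noise model, and step sizes are toy assumptions.

```python
# Correntropy-weighted LMS: the kernel factor exp(-e^2 / 2*sigma^2)
# suppresses updates driven by large impulsive errors.
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([0.0, 1.5, 0.0, -0.8, 0.0])     # sparse system
N = 2000
X = rng.normal(size=(N, 5))
noise = rng.normal(scale=0.05, size=N)
impulses = rng.random(N) < 0.05                   # 5% large outliers
noise[impulses] += rng.normal(scale=20.0, size=impulses.sum())
d = X @ w_true + noise

def mcc_filter(X, d, mu=0.05, sigma=1.0):
    w = np.zeros(X.shape[1])
    for x, y in zip(X, d):
        e = y - x @ w
        w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * x
    return w

w_hat = mcc_filter(X, d)
err = np.linalg.norm(w_hat - w_true)
```

A plain LMS run on the same data would be thrown far off course by the 20-sigma impulses; the kernel weight makes those updates essentially vanish.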

4.
Hyperspectral unmixing (HU) is a popular tool in remotely sensed hyperspectral data interpretation, and it is used to estimate the number of reference spectra (end-members), their spectral signatures, and their fractional abundances. However, it can also be assumed that the observed image signatures can be expressed in the form of linear combinations of a large number of pure spectral signatures known in advance (e.g. spectra collected on the ground by a field spectro-radiometer, called a spectral library). Under this assumption, the solution for the fractional abundances of each spectrum can be seen as sparse, and the HU problem can be modelled as a constrained sparse regression (CSR) problem used to compute the fractional abundances in a sparse (i.e. with a small number of terms) linear mixture of spectra, selected from large libraries. In this article, we use the l1/2 regularizer, with its properties of unbiasedness and sparsity, to enforce the sparsity of the fractional abundances instead of the l0 and l1 regularizers in CSR unmixing models, as the l1/2 regularizer is much easier to solve than the l0 regularizer and induces stronger sparsity than the l1 regularizer (Xu et al. 2010). A reweighted iterative algorithm is introduced to convert the l1/2 problem into an l1 problem; we then use the Split Bregman iterative algorithm to solve this reweighted l1 problem via a linear transformation. Experiments on both simulated and real data show that the l1/2-regularized sparse regression method is effective and accurate for linear hyperspectral unmixing.
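A sketch of the reweighting idea above: approximate the l1/2 penalty by solving a sequence of weighted l1 problems, each with plain ISTA rather than the paper's Split Bregman solver (a deliberate substitution for brevity). The dictionary and abundances are synthetic stand-ins for a spectral library, and the penalty strength is an assumed value.

```python
# Reweighted l1 approximation of l_1/2: weights 1/(sqrt|x| + eps) grow
# on small entries, pushing them to zero across outer iterations.
import numpy as np

def ista(A, y, weights, lam=0.01, iters=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L
        thr = lam * weights / L
        x = np.sign(g) * np.maximum(np.abs(g) - thr, 0.0)
    return x

def reweighted_l12(A, y, outer=5, eps=1e-3):
    w = np.ones(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        x = ista(A, y, w)
        w = 1.0 / (np.sqrt(np.abs(x)) + eps)  # l_1/2 reweighting
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100))
A /= np.linalg.norm(A, axis=0)               # unit-norm library columns
x_true = np.zeros(100)
x_true[[7, 23, 61]] = [0.5, 0.3, 0.2]        # 3 active end-members
y = A @ x_true
x_hat = reweighted_l12(A, y)
```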

5.

Hyperspectral unmixing is essential for image analysis and quantitative applications. To further improve the accuracy of hyperspectral unmixing, we propose a novel linear hyperspectral unmixing method based on l1-l2 sparsity and total variation (TV) regularization. First, the enhanced sparsity based on the l1-l2 norm is exploited to capture the intrinsic sparse characteristic of the fractional abundances in a sparse regression unmixing model, because the l1-l2 norm promotes stronger sparsity than the l1 norm. Then, TV is minimized to enforce spatial smoothness by considering the spatial correlation between neighbouring pixels. Finally, the extended alternating direction method of multipliers (ADMM) is utilized to solve the proposed model. Experimental results on simulated and real hyperspectral datasets show that the proposed method outperforms several state-of-the-art unmixing methods.
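A quick numerical illustration (synthetic vectors, not hyperspectral abundances) of why the l1-l2 penalty promotes sparsity: among vectors of equal l2 energy, it is smallest for the sparsest one, whereas the l1 norm alone cannot make that distinction as sharply.

```python
# l1 - l2 as a sparsity measure: zero exactly at 1-sparse vectors,
# strictly positive for denser vectors of the same energy.
import numpy as np

def l1_minus_l2(x):
    return np.abs(x).sum() - np.linalg.norm(x)

sparse = np.array([1.0, 0.0, 0.0, 0.0])
dense = np.full(4, 0.5)               # same l2 norm (= 1) as `sparse`
p_sparse = l1_minus_l2(sparse)        # 1 - 1 = 0
p_dense = l1_minus_l2(dense)          # 2 - 1 = 1
```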

6.
This paper presents an identification scheme for sparse FIR systems with quantised data. We consider a general quantisation scheme, which includes the commonly deployed static quantiser as a special case. To tackle the sparsity issue, we utilise a Bayesian approach, where an l1 a priori distribution for the parameters is used as a mechanism to promote sparsity. The general framework used to solve the problem is maximum likelihood (ML). The ML problem is solved by using a generalised expectation maximisation algorithm.

7.
Sparse representation has attracted growing attention in pattern classification owing to its robustness. This paper studies a novel domain adaptation learning method based on a sparsity preserving model and proposes a robust sparse label propagation domain adaptation learning (SLPDAL) algorithm. SLPDAL propagates the labels of the source-domain data smoothly to the target domain by sparsely reconstructing the target-domain data. Specifically, the algorithm proceeds in three steps: first, it finds an optimized kernel space under the criterion of minimizing the difference between the mean data distributions of the two domains, and embeds the domain data into that kernel space; second, in the embedded kernel space, it computes the kernel sparse reconstruction coefficients of the domain data under an l1-norm minimization criterion; third, it propagates the labels of the source-domain data to the target domain by preserving the kernel sparse reconstruction coefficients between the domain data. The SLPDAL algorithm is further extended to a multiple kernel learning framework, yielding an SLPDAL multiple kernel learning model. Comparative experiments on domain adaptation learning tasks including robust face recognition, video concept detection, and text classification show that the proposed methods achieve performance superior or comparable to existing approaches.
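A sketch of the distribution-mean-difference criterion used in the first step above: a (biased) RBF-kernel maximum mean discrepancy (MMD) between a source and a target sample. The samples are synthetic stand-ins for source/target domains, and the bandwidth is an assumed value; the paper optimizes the kernel rather than fixing it.

```python
# Biased RBF-kernel MMD^2: squared distance between kernel mean
# embeddings of the two samples (always nonnegative).
import numpy as np

def rbf_mmd2(X, Y, gamma=0.5):
    def k(A, B):
        d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(4)
src = rng.normal(loc=0.0, size=(100, 2))
tgt_near = rng.normal(loc=0.0, size=(100, 2))   # same distribution
tgt_far = rng.normal(loc=3.0, size=(100, 2))    # shifted domain
m_near = rbf_mmd2(src, tgt_near)
m_far = rbf_mmd2(src, tgt_far)
```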

8.
Fei Zang, Jiangshe Zhang 《Neurocomputing》2011,74(12-13):2176-2183
Recently, the sparsity preserving projections (SPP) algorithm has been proposed, which combines an l1-graph preserving the sparse reconstructive relationships of the data with classical dimensionality reduction. However, when applied to classification problems, SPP focuses only on the sparse structure and ignores the label information of the samples. To enhance classification performance, a new algorithm termed discriminative learning by sparse representation projections (DLSP) is proposed in this paper. DLSP incorporates the merits of both local interclass geometrical structure and the sparsity property. This gives it the advantages of sparse reconstruction and, more importantly, better discriminative capacity, especially when the training set is small. Extensive experimental results on several publicly available data sets show the feasibility and effectiveness of the proposed algorithm.

9.
Sparse kernel spectral clustering models for large-scale data analysis
Kernel spectral clustering has been formulated within a primal-dual optimization setting allowing natural extensions to out-of-sample data together with model selection in a learning framework. This becomes important for predictive purposes and for good generalization capabilities. The clustering model is formulated in the primal in terms of mappings to high-dimensional feature spaces typical of support vector machines and kernel-based methodologies. The dual problem corresponds to an eigenvalue decomposition of a centered Laplacian matrix derived from pairwise similarities within the data. The out-of-sample extension can also be used to introduce sparsity and to reduce the computational complexity of the resulting eigenvalue problem. In this paper, we propose several methods to obtain sparse and highly sparse kernel spectral clustering models. The proposed approaches are based on structural properties of the solutions when the clusters are well formed. Experimental results with difficult toy examples and images show the applicability of the proposed sparse models with predictive capabilities.
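A minimal spectral-clustering sketch matching the dual view described above, without the primal-dual machinery or the sparsification: eigen-decompose a normalized Laplacian built from RBF similarities and split by the sign of the second eigenvector. The two synthetic blobs and the unit kernel bandwidth are toy assumptions.

```python
# Normalized-Laplacian spectral bipartition of two well-separated blobs.
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
               rng.normal(4.0, 0.3, size=(20, 2))])
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
W = np.exp(-d2)                        # pairwise RBF similarities
D = W.sum(axis=1)
L_sym = np.eye(len(X)) - W / np.sqrt(D[:, None] * D[None])
vals, vecs = np.linalg.eigh(L_sym)
labels = (vecs[:, 1] > 0).astype(int)  # sign split on the Fiedler vector
```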

10.
Qiao Chen, Yang Lan, Shi Yan, Fang Hanfeng, Kang Yanmei 《Applied Intelligence》2022,52(1):237-253

Sparsity is crucial for deep neural networks: it can improve their learning ability, especially in applications to high-dimensional data with small sample sizes. Commonly used regularization terms for keeping deep neural networks sparse are based on the L1-norm or L2-norm; however, these are not the most faithful surrogates for the L0-norm. In this paper, based on the fact that minimizing a log-sum function is one effective approximation to minimizing the L0-norm, a sparse penalty term on the connection weights using the log-sum function is introduced. By embedding the corresponding iteratively reweighted-L1 minimization algorithm into k-step contrastive divergence, the connections of deep belief networks can be updated in a sparsely self-adaptive way. Experiments on two kinds of biomedical datasets, both typical small-sample-size datasets with a large number of variables, i.e., brain functional magnetic resonance imaging data and single nucleotide polymorphism data, show that the proposed deep belief networks with self-adaptive sparsity can learn layer-wise sparse features effectively. The results also demonstrate better performance, in both identification accuracy and sparsity, than several typical learning machines.

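A numerical sanity check (synthetic vectors, assumed smoothing parameter) of the claim above: the log-sum function tracks the L0 "norm" more closely than the L1 norm does when magnitudes are unequal, because the log saturates for large entries.

```python
# log-sum penalty vs l1: two vectors with identical l1 norm but very
# different l0 counts; log-sum strongly prefers the sparser one.
import numpy as np

def log_sum(x, eps=1e-3):
    return np.sum(np.log(1.0 + np.abs(x) / eps))

sparse_big = np.array([10.0, 0.0, 0.0, 0.0])    # l0 = 1
dense_small = np.array([2.5, 2.5, 2.5, 2.5])    # l0 = 4, same l1 norm
```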

11.
12.
In this paper, we consider the problem of finding fill-preserving sparse matrix orderings for parallel factorization. That is, given a large sparse symmetric positive definite matrix A that has been ordered by some fill-reducing ordering, we want to determine a reordering that preserves the sparsity while minimizing the cost of performing the Cholesky factorization in parallel. Past research on this problem is based on the elimination tree model, in which each node represents the task of factoring a column; it can thus be seen as a coarse-grained task dependence model. To exploit more parallelism, Joseph Liu proposed a medium-grained task model, called the column task graph, and showed that it is amenable to shared-memory supercomputers. Based on the column task graph, we devise a greedy reordering algorithm and show that it finds the optimal ordering among the class of all fill-preserving orderings of the given sparse matrix A.
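A small sanity check (not the paper's algorithm) of why ordering matters for Cholesky fill: an "arrow" matrix factored with its dense row first fills in completely, while the reverse ordering produces no fill at all. The matrix size and pattern are illustrative assumptions.

```python
# Fill comparison on an arrow matrix under two orderings.
import numpy as np

n = 6
A = np.eye(n) * n
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = n                      # arrow pattern, diagonally dominant SPD

def cholesky_fill(M, tol=1e-12):
    """Count strictly-lower-triangular nonzeros of the Cholesky factor."""
    L = np.linalg.cholesky(M)
    return int((np.abs(np.tril(L, -1)) > tol).sum())

perm = np.arange(n)[::-1]        # move the dense row/column last
fill_first = cholesky_fill(A)                    # dense column first
fill_last = cholesky_fill(A[np.ix_(perm, perm)]) # dense column last
```

Eliminating the dense node first connects all remaining nodes, so the factor becomes fully dense (15 strictly-lower nonzeros here); eliminating it last leaves only its own row (5 nonzeros).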

13.
The inclusion of transaction costs is an essential element of any realistic portfolio optimization. We extend the standard portfolio optimization problem to consider convex transaction costs incurred when rebalancing an investment portfolio. Market impact costs measure the effect on the price of a security that results from an effort to buy or sell the security, and they can constitute a large part of total transaction costs. The loss to a portfolio from market impact costs is often modelled with a convex function that can be expressed using second-order cone constraints. The Markowitz framework of mean-variance efficiency is used. In order to properly represent the variance of the resulting portfolio, we suggest rescaling by the funds available after paying the transaction costs. This results in a fractional programming problem, which we show can be reformulated as an equivalent convex program of size comparable to the model without transaction costs. We show that an optimal solution to the convex program can always be found that does not discard assets.

14.
Web Image Annotation Based on Enhanced Sparsity Feature Selection
史彩娟, 阮秋琦 《软件学报》2015,26(7):1800-1811
With the explosive growth of web images, web image annotation has become a hot research topic in recent years, and sparse feature selection plays an important role in improving the efficiency and performance of web image annotation. This paper proposes an enhanced-sparsity feature selection algorithm for web image annotation, namely semi-supervised sparse feature selection based on the l2,1/2-matrix norm with shared subspace learning (SFSLS). SFSLS applies the l2,1/2-matrix norm to select the sparsest and most discriminative features, and exploits the correlation information between different features through shared subspace learning. In addition, graph-Laplacian-based semi-supervised learning enables SFSLS to use both labelled and unlabelled data. An efficient iterative algorithm is designed to optimize the objective function. SFSLS is compared with other sparse feature selection algorithms on two large-scale web image databases, and the results show that it is better suited to large-scale web image annotation.
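An illustration (toy matrices) of the l2,1/2 idea above: summing the square roots of row l2-norms rewards matrices whose rows are mostly zero, i.e. feature-wise sparsity in a feature-selection weight matrix. The definition used here is one common form; the paper's exact normalization may differ.

```python
# l_{2,1/2}-style penalty: sum of square roots of row l2-norms.
# Row-sparse matrices score lower than dense ones of equal energy.
import numpy as np

def l2_half(W):
    return np.sqrt(np.linalg.norm(W, axis=1)).sum()

row_sparse = np.array([[np.sqrt(2.0), 0.0],
                       [0.0, 0.0]])       # one active feature (row)
dense = np.eye(2)                         # same Frobenius norm (sqrt 2)
```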

15.
Index tracking has been gaining in popularity in recent years, as sustainable and stable yields exceeding market returns have proved elusive. Leveraging the search capability of evolutionary algorithms, this paper proposes a multi-objective evolutionary index tracking platform that simultaneously optimizes both tracking performance and transaction costs throughout the investment horizon and addresses various real-world implementation issues in index tracking. For model evaluation, a realistic instantiation of the index tracking optimization problem was formulated that accounts for stochastic capital injections, practical transaction cost structures, and other real-world constraints. Portfolio rebalancing strategies for aligning the tracker portfolio to time-varying market conditions were also investigated. Empirical studies based on equity indices from major global markets were conducted, and the results validated the tracking capability of the proposed index tracking system on out-of-sample data sets while minimizing transaction costs throughout the investment horizon.

16.
Objective: Hyperspectral unmixing is a hot topic in hyperspectral remote-sensing data analysis; its difficulty lies in the ill-posedness caused by insufficient information. Sparse unmixing based on spectral libraries is the current representative approach, but in practice hyperspectral data usually contain Gaussian, impulse, and dead-line noise, and the noise intensity often differs across bands, so common sparse unmixing methods are not robust enough and their accuracy leaves room for improvement. To address this problem, this paper models the hyperspectral image by nonnegative sparse component decomposition and proposes a robust unmixing method based on nonnegative sparse component analysis. Method: First, taking into account the mixed noise of real hyperspectral data and the statistical property that its intensity differs across bands, a nonnegative matrix sparse component decomposition model is established under the maximum a posteriori framework. Then, the l1,1 norm is used to characterize the sparsity of the noise, the l2,0 norm to characterize the global row sparsity of the abundances, and a total variation (TV) regularizer to characterize the local homogeneity and piecewise smoothness of the pixels, yielding a robust hyperspectral unmixing optimization model based on nonnegative sparse component analysis. Finally, an efficient iterative algorithm is designed using the alternating direction method of multipliers (ADMM). Result: Experimental results on two simulated data sets show that, compared with the five competing methods, the proposed method achieves better results in terms of the signal-to-reconstruction-error ratio (signal to…

17.
韩敏, 王新迎 《自动化学报》2011,37(12):1536-1540
To overcome the lack of a good online learning algorithm for conventional reservoir computing methods, and considering the ill-posedness inherent in the reservoir itself, this paper proposes an online sparse learning algorithm for reservoirs: an L1 regularization constraint is imposed on the reservoir objective function and solved approximately online with a truncated gradient algorithm. While adjusting the reservoir output weights online, the proposed algorithm effectively controls their sparsity, which guarantees the generalization performance of the network. Theoretical analysis and simulation examples demonstrate the effectiveness of the proposed algorithm.
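A sketch of the truncated-gradient idea (Langford-style) referenced above for online L1-regularized output weights. The readout task here is plain online linear regression on synthetic data, not an actual reservoir, and the step size, truncation gravity, and threshold are assumed values.

```python
# Online SGD with periodic truncation of small weights toward zero,
# which keeps the learned output weights sparse.
import numpy as np

def truncated_gradient(X, y, mu=0.05, g=0.01, theta=0.5, K=10):
    w = np.zeros(X.shape[1])
    for t, (x, target) in enumerate(zip(X, y), 1):
        w += mu * (target - x @ w) * x           # plain SGD step
        if t % K == 0:                           # periodic truncation
            small = np.abs(w) <= theta
            w[small] = np.sign(w[small]) * np.maximum(
                np.abs(w[small]) - mu * g * K, 0.0)
    return w

rng = np.random.default_rng(6)
w_true = np.array([1.0, 0.0, -0.7, 0.0, 0.0])    # sparse readout
X = rng.normal(size=(3000, 5))
y = X @ w_true + 0.01 * rng.normal(size=3000)
w_hat = truncated_gradient(X, y)
```

Weights that never exceed the threshold are repeatedly shrunk and stay near zero, while large informative weights are left untouched.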

18.
Sun Yuping, Quan Yuhui, Fu Jia 《Neural computing & applications》2018,30(4):1265-1275

In recent years, sparse coding via dictionary learning has been widely used in many applications for exploiting the sparsity patterns of data. For classification, useful sparsity patterns should be discriminative, which standard sparse coding techniques cannot achieve well. In this paper, we investigate structured sparse coding for obtaining discriminative class-specific group sparsity patterns in the context of classification. A structured dictionary learning approach for sparse coding is proposed by imposing the \(\ell _{2,0}\) norm on each class of data. An efficient numerical algorithm with global convergence is developed for solving the related challenging \(\ell _{2,0}\) minimization problem. The learned dictionary is decomposed into class-specific dictionaries, and classification is done according to the minimum reconstruction error among all the classes. For evaluation, the proposed method was applied to classifying both synthetic and real-world data. The experiments show the competitive performance of the proposed method in comparison with several existing discriminative sparse coding methods.


19.
We consider the problem of determining whether or not there exists a sparse univariate polynomial that interpolates a given set S = {(x_i, y_i)} of points. Several important cases are resolved, e.g., the case when the x_i's are all positive rational numbers. But the general problem remains open.
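A sketch of the decision problem above on a tiny instance (purely illustrative, not the paper's method): brute-force over t-term exponent supports, solving each candidate linear system and checking whether it interpolates all the points. The point set and degree bound are assumed toy values.

```python
# Brute-force t-sparse interpolation: try every size-t exponent support
# up to a degree bound and accept the first that fits all points.
from itertools import combinations
import numpy as np

def sparse_interpolant(xs, ys, max_degree, t):
    """Return (exponents, coeffs) of a t-sparse interpolant, or None."""
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    for exps in combinations(range(max_degree + 1), t):
        V = xs[:, None] ** np.array(exps)[None, :]   # sparse Vandermonde
        c, *_ = np.linalg.lstsq(V, ys, rcond=None)
        if np.allclose(V @ c, ys, atol=1e-8):
            return exps, c
    return None

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3 * x**5 + 2 for x in xs]          # hidden 2-sparse polynomial
found = sparse_interpolant(xs, ys, max_degree=7, t=2)
exps, coeffs = found                     # known to succeed on this toy case
```

The search is exponential in the degree bound, which is precisely why the existence question studied in the paper is nontrivial.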

20.
The structure of portfolio selection depends essentially on the form of the transaction cost. In this paper, we deal with portfolio selection problems with general transaction costs under the assumption that the returns of assets obey LR-type possibility distributions. For any type of transaction cost, we employ a comprehensive learning particle swarm optimizer to obtain the optimal portfolio. Furthermore, we present numerical experiments with different forms of transaction costs to illustrate the effectiveness of the proposed model and approach.
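A toy sketch of the swarm-based approach above: a standard global-best PSO (not the comprehensive-learning variant the paper uses) minimizing portfolio variance plus a linear transaction cost from a given initial portfolio, over the probability simplex. The covariance, cost rate, and PSO hyperparameters are all assumed toy values.

```python
# Global-best PSO over simplex-projected portfolio weights, with a
# variance-plus-transaction-cost objective.
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4))
cov = A @ A.T + np.eye(4)                # toy asset covariance
w0 = np.full(4, 0.25)                    # current holdings
cost_rate = 0.01                         # linear rebalancing cost

def objective(w):
    return w @ cov @ w + cost_rate * np.abs(w - w0).sum()

def project_simplex(w):
    """Crude feasibility repair: clip negatives, renormalize to sum 1."""
    w = np.maximum(w, 0.0)
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

def pso(n_particles=30, iters=200):
    pos = rng.dirichlet(np.ones(4), size=n_particles)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.array([project_simplex(p) for p in pos + vel])
        f = np.array([objective(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

w_opt, f_opt = pso()
```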

