Similar Articles
20 similar articles found (search time: 811 ms)
1.
Recommender systems have been an active area of rigorous research owing to their applications in domains ranging from academia to industry and e-commerce. A recommender system helps reduce information overload and improves decision making for customers in any arena. Recommending products that attract customers and meet their needs has become an important concern in this competitive environment. Although there are many approaches to recommending items, collaborative filtering has emerged as an efficient mechanism for the task. In addition, many evolutionary methods can be incorporated to achieve better prediction accuracy and to handle the sparsity and cold-start problems. In this paper, we use unsupervised learning to address the problem of scalability. The recommendation engine reduces calculation time by matching the interest profile of the user to its partition, a much smaller training sample. Additionally, we explore finding global neighbours through transitive similarities and incorporate particle swarm optimization (PSO) to assign weights to various alpha estimates (including the proposed α7) that alleviate the sparsity problem. Our experimental study reveals that the particle-swarm-optimized alpha estimate significantly increases prediction accuracy over traditional collaborative filtering and fixed-alpha schemes.
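A minimal sketch of the partitioning idea, assuming k-means as the unsupervised learner and cosine similarity; the PSO-weighted alpha estimates (and α7) are beyond this fragment, and `cluster_neighbours` is a hypothetical helper:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_neighbours(R, user, k=20, n_clusters=10, seed=0):
    """Partition users with k-means, then search neighbours only inside the
    active user's partition, so similarities are computed against a much
    smaller training sample. R: user-item matrix with 0 for missing ratings."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(R)
    members = np.where(labels == labels[user])[0]
    members = members[members != user]
    sims = (R[members] @ R[user]) / (
        np.linalg.norm(R[members], axis=1) * np.linalg.norm(R[user]) + 1e-12)
    return members[np.argsort(sims)[::-1][:k]]

R = np.random.default_rng(0).integers(0, 6, (200, 50)).astype(float)
print(cluster_neighbours(R, user=3)[:5])
```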

2.
This paper considers a sparse portfolio rebalancing problem in which rebalancing portfolios with a minimum number of assets are sought. The problem is motivated by the need to decide whether the initial portfolio is worth adjusting at all, and by the desire to induce sparsity in the selected rebalancing portfolio so as to reduce transaction costs (TCs), improve out-of-sample performance, and keep changes to the portfolio small. We propose a sparse portfolio rebalancing model obtained by adding an l1 penalty term to the objective function of a general portfolio rebalancing model. The resulting model is sparse, incurs low TCs, and can decide whether, and which, assets to adjust based on inverse optimization. Numerical tests on four typical data sets show that the optimal adjustment given by the proposed sparse portfolio rebalancing model has the advantage of sparsity and better out-of-sample performance than the general portfolio rebalancing model.
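A loose sketch of this model class, assuming a plain mean-variance objective; `sparse_rebalance`, the data, and all parameters are illustrative, and a general-purpose solver only drives trades near zero where dedicated l1 methods give exact zeros:

```python
import numpy as np
from scipy.optimize import minimize

def sparse_rebalance(Sigma, mu, w0, lam=0.05, gamma=1.0):
    """Mean-variance objective plus an l1 penalty on the trade vector w - w0."""
    def obj(w):
        return gamma * w @ Sigma @ w - mu @ w + lam * np.abs(w - w0).sum()
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)   # stay fully invested
    return minimize(obj, w0, bounds=[(0, 1)] * len(w0), constraints=cons).x

rng = np.random.default_rng(1)
X = 0.01 * rng.standard_normal((250, 8))        # invented daily returns, 8 assets
Sigma, mu = np.cov(X.T), X.mean(axis=0)
w0 = np.full(8, 1 / 8)                          # initial equal-weight portfolio
print(np.round(sparse_rebalance(Sigma, mu, w0) - w0, 4))   # most trades ~ 0
```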

3.
Conventional high-order finite element methods are rarely used for industrial problems because the Jacobian rapidly loses sparsity as the order increases, leading to unaffordable solve times and memory requirements. This effect typically limits the order to at most quadratic, despite the favorable accuracy and stability properties offered by quadratic and higher-order discretizations. We present a method in which the action of the Jacobian is applied matrix-free, exploiting a tensor-product basis on hexahedral elements, while much sparser matrices based on Q1 sub-elements on the nodes of the high-order basis are assembled for preconditioning. With this "dual-order" scheme, storage is independent of the spectral order, and a natural taping scheme is available to update a full-accuracy matrix-free Jacobian during residual evaluation. Matrix-free Jacobian application circumvents the memory-bandwidth bottleneck typical of sparse matrix operations, providing several times greater floating-point performance and better use of multiple cores sharing a memory bus. Computational results for the p-Laplacian and Stokes problems, using block preconditioners and AMG, demonstrate mesh-independent convergence rates and weak (bounded) dependence on order, even for highly deformed meshes and nonlinear systems with coefficients spanning several orders of magnitude. For spectral orders around 5, the dual-order scheme requires half the memory and similar time compared to assembled quadratic (Q2) elements, making it very affordable for general use.
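A sketch of the matrix-free idea on one hexahedral element, assuming a mass-type operator with an invented 1-D basis matrix `B` and stored quadrature factors `W`; the sum-factorized contractions never form the dense element matrix:

```python
import numpy as np

p, q = 6, 7                        # 1-D basis size (order ~5) and quadrature points
B = np.random.rand(q, p)           # hypothetical 1-D basis-evaluation matrix
W = np.random.rand(q, q, q)        # quadrature weights x geometry factors

def mass_apply(u):
    """Apply the element mass operator to u (p x p x p coefficients) without
    forming the (p^3 x p^3) matrix: three 1-D contractions in, pointwise
    scaling at quadrature points, three transposed contractions out."""
    g = np.einsum('qi,ijk->qjk', B, u)
    g = np.einsum('rj,qjk->qrk', B, g)
    g = np.einsum('sk,qrk->qrs', B, g)
    g = g * W
    g = np.einsum('sk,qrs->qrk', B, g)
    g = np.einsum('rj,qrk->qjk', B, g)
    return np.einsum('qi,qjk->ijk', B, g)

u = np.random.rand(p, p, p)
print(mass_apply(u).shape)         # (6, 6, 6); storage stays O(p^3), not O(p^6)
```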

4.
Hyperspectral unmixing (HU) is a popular tool in remotely sensed hyperspectral data interpretation; it is used to estimate the number of reference spectra (endmembers), their spectral signatures, and their fractional abundances. However, it can also be assumed that the observed image signatures can be expressed as linear combinations of a large number of pure spectral signatures known in advance (e.g. spectra collected on the ground by a field spectroradiometer, called a spectral library). Under this assumption, the solution for the fractional abundances of each spectrum can be seen as sparse, and the HU problem can be modelled as a constrained sparse regression (CSR) problem that computes the fractional abundances in a sparse (i.e. with a small number of terms) linear mixture of spectra selected from large libraries. In this article, we use the l1/2 regularizer, with its properties of unbiasedness and sparsity, to enforce the sparsity of the fractional abundances instead of the l0 and l1 regularizers in CSR unmixing models, as the l1/2 regularizer is much easier to solve than the l0 regularizer and enforces stronger sparsity than the l1 regularizer (Xu et al. 2010). A reweighted iterative algorithm is introduced to convert the l1/2 problem into an l1 problem; we then use the Split Bregman iterative algorithm to solve this reweighted l1 problem via a linear transformation. Experiments on both simulated and real data show that the l1/2-regularized sparse regression method is effective and accurate for linear hyperspectral unmixing.
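A sketch of the reweighting idea, assuming ISTA for the inner weighted-l1 solve instead of the Split Bregman algorithm used in the article, and a generic Gaussian dictionary in place of a real spectral library; the abundance nonnegativity and sum constraints of the full CSR model are omitted:

```python
import numpy as np

def ista_weighted(A, y, w, lam, iters=300):
    """Inner solver: weighted-l1 least squares by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    return x

def reweighted_l_half(A, y, lam=0.05, outer=5, eps=1e-3):
    """Outer loop converting the l1/2 problem into a sequence of weighted-l1
    problems: weights grow where coefficients are small, sharpening sparsity."""
    w = np.ones(A.shape[1])
    for _ in range(outer):
        x = ista_weighted(A, y, w, lam)
        w = 1.0 / (np.sqrt(np.abs(x)) + eps)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 120))             # stand-in dictionary (40 bands)
x_true = np.zeros(120); x_true[[7, 42, 99]] = [0.5, 0.3, 0.2]
y = A @ x_true + 0.001 * rng.standard_normal(40)
print(np.flatnonzero(np.abs(reweighted_l_half(A, y)) > 1e-3))   # ideally [7 42 99]
```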

5.
The subject of this paper is the analysis of sparse state-feedback design procedures for linear discrete-time systems. By sparsity we mean the presence of zero rows in the gain matrix; this requirement is natural in engineering practice when designing "economy" control systems that use a small number of control inputs. Apart from the design of stabilizing sparse controllers, the linear-quadratic regulation problem is considered in the sparse formulation. We also consider a regularization scheme typical of ℓ1-optimization theory. The efficiency of the approach is illustrated via numerical examples.
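A hedged illustration of one standard route to row-sparse stabilizing gains (not necessarily the paper's exact procedure), assuming cvxpy with an SDP-capable solver: with Q positive definite and Y = KQ, the discrete-time Lyapunov inequality is linear in (Q, Y), and a zero row of Y yields a zero row of K = YQ^-1, so a group-l1 objective on the rows of Y promotes unused control inputs:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(9)
n, m = 4, 3
A = 0.6 * rng.standard_normal((n, n))          # invented plant x+ = Ax + Bu
B = rng.standard_normal((n, m))

Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
# Schur complement of (A+BK) Q (A+BK)^T <= Q, linear in (Q, Y):
lmi = cp.bmat([[Q, (A @ Q + B @ Y).T], [A @ Q + B @ Y, Q]])
prob = cp.Problem(cp.Minimize(cp.sum(cp.norm(Y, axis=1))),   # group l1 on rows of Y
                  [lmi >> 1e-6 * np.eye(2 * n), Q >> 1e-6 * np.eye(n)])
prob.solve()
K = Y.value @ np.linalg.inv(Q.value)
print(np.round(K, 3))   # rows near zero mark control inputs that can be dropped
```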

6.
Collaborative filtering is a popular recommendation technique that suggests items to users by exploiting past user-item interactions involving affinities between pairs of users or items. In spite of its huge success, it suffers from a range of problems, the most fundamental being data sparsity. When the rating matrix is sparse, local similarity measures yield a poor neighborhood set, degrading recommendation quality. In such cases, global similarity measures can enrich the neighborhood set by considering transitive relationships among users even in the absence of any common experiences. In this work we propose a recommender-system framework utilizing both local and global similarities, taking into account not only the overall sparsity of the rating data but also sparsity at the user-item level. Several schemes are proposed, based on various sparsity measures pertaining to the active user, for estimating the parameter α, which varies the importance given to the global user similarity relative to the local user similarity. Furthermore, we propose an automatic scheme for weighting the various sparsity measures, through an evolutionary approach, to obtain a unified measure of sparsity (UMS). To take maximum advantage of the various sparsity measures relating to an active user, a scheme based on the UMS is suggested for estimating α. Experimental results demonstrate that the proposed estimates of α markedly outperform schemes in which α is kept constant across all predictions (fixed-α schemes) in the accuracy of predicted ratings.
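A minimal sketch of the blended prediction, assuming a Resnick-style mean-centred formula; estimating α from the sparsity measures (or the UMS) is omitted, and `predict` is a hypothetical helper:

```python
import numpy as np

def predict(r, u, i, S_local, S_global, alpha):
    """One rating prediction with the blended similarity
    S = alpha * S_global + (1 - alpha) * S_local.
    r: ratings with np.nan for missing entries; S_*: user-user similarities."""
    S = alpha * S_global + (1 - alpha) * S_local
    rated = ~np.isnan(r[:, i])            # users who rated item i
    rated[u] = False
    w = S[u, rated]
    if np.abs(w).sum() < 1e-12:
        return np.nanmean(r[u])           # fall back to the user's mean rating
    dev = r[rated, i] - np.nanmean(r[rated], axis=1)   # mean-centred neighbours
    return np.nanmean(r[u]) + (w @ dev) / np.abs(w).sum()
```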

7.
The paper deals with estimating the maximal sparsity degree for which a given measurement matrix allows sparse reconstruction through ℓ1-minimization. This problem is a key issue in applications featuring particular types of measurement matrices, for instance tomography with a low number of views. In this framework, while the exact bound is NP-hard to compute, most classical criteria guarantee lower bounds that are numerically too pessimistic. To achieve an accurate estimation, we propose an efficient greedy algorithm that provides an upper bound for this maximal sparsity. Based on polytope theory, the algorithm consists in finding sparse vectors that cannot be recovered by ℓ1-minimization. Moreover, to deal with noisy measurements, theoretical conditions leading to more restrictive but reasonable bounds are investigated. Numerical results are presented for discrete versions of tomography measurement matrices, which are stacked Radon transforms corresponding to different tomograph views.
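A sketch of the search for non-recoverable sparse vectors, assuming random trial supports rather than the paper's polytope-guided greedy choice; the inner ℓ1-minimization is posed as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """min ||x||_1 s.t. Ax = b, as the LP: min 1.t  s.t. -t <= x <= t, Ax = b."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), -np.eye(n)]])
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(2)
m, n = 30, 80
A = rng.standard_normal((m, n))
for k in range(1, m + 1):
    failed = False
    for _ in range(20):                          # random k-sparse trial vectors
        x0 = np.zeros(n)
        x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        if not np.allclose(l1_recover(A, A @ x0), x0, atol=1e-4):
            failed = True                        # found a non-recoverable vector
            break
    if failed:
        print("upper bound on the recoverable sparsity:", k - 1)
        break
```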

8.
Global localization is a fundamental and challenging problem in robotic soccer. The main aim is to find a method that is robust and fast, requires less computation and memory than similar approaches, and is precise enough for robot soccer games and technical challenges. In this work, the Reverse Monte Carlo localization (R-MCL) method is introduced. The algorithm is designed for fast, precise, and robust global localization of autonomous robots in the robotic soccer domain, overcoming uncertainties in the sensors, the environment, and the motion model. R-MCL is a hybrid method based on Markov localization (ML) and Monte Carlo localization (MCL): the ML-based module finds the region where the robot should be, and the MCL-based part predicts the geometric location with high precision by selecting samples in this region. It is called Reverse because the MCL routine is applied in a reverse manner. The method is tested on a challenging data set used by many other researchers and compared in terms of error rate against different levels of noise and sparsity. Additionally, the time required to recover from kidnapping and the processing time are measured and compared. According to the test results, R-MCL copes well with high sparsity and noise, and is preferable when recovery from kidnapping and processing times are considered: it gives robust, fast, but relatively coarse position estimates in the face of imprecise and inadequate perceptions, coarse action data, regular misplacements, and false perceptions.
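A toy 1-D illustration of the two-stage idea, with invented numbers: a coarse grid (the ML stage) picks the region, and particles (the MCL stage) are drawn only inside it:

```python
import numpy as np

rng = np.random.default_rng(8)
true_pos = 3.7                                  # invented 1-D field position
z = true_pos + rng.normal(0, 0.3, 5)            # noisy position-like observations

# ML stage: a coarse grid posterior picks the region the robot should be in.
grid = np.linspace(0, 10, 21)
loglik = -((z[:, None] - grid) ** 2).sum(0) / (2 * 0.3 ** 2)
best = grid[np.argmax(loglik)]

# MCL stage (applied "in reverse"): samples are drawn only inside that region.
particles = rng.uniform(best - 0.25, best + 0.25, 200)
w = np.exp(-((z[:, None] - particles) ** 2).sum(0) / (2 * 0.3 ** 2))
print(round(float(np.average(particles, weights=w)), 2))
```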

9.
To address the sparse system identification problem in a non-Gaussian impulsive noise environment, the recursive generalized maximum correntropy criterion (RGMCC) algorithm with sparse penalty constraints is proposed to combat impulse-induced instability. Specifically, a recursive algorithm based on the generalized correntropy with a forgetting factor on the error is developed to improve on sparsity-aware maximum correntropy criterion algorithms by achieving a robust steady-state error. Considering an unknown sparse system, the l1-norm and the correntropy-induced metric are employed in the RGMCC algorithm to exploit sparsity and mitigate impulsive noise simultaneously. Numerical simulations show that the proposed algorithm is robust and provides good steady-state estimation performance.
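A simplified stochastic-gradient stand-in for the recursive algorithm (an LMS-style update, not the RLS-type recursion of the paper), showing the two ingredients: the generalized-correntropy weighting that suppresses impulses and an l1 zero attractor; all constants are illustrative:

```python
import numpy as np

def gmcc_sparse_lms(x, d, taps=16, mu=0.5, alpha=4.0, sigma=1.0, rho=1e-4):
    w = np.zeros(taps)
    for n in range(taps, len(x)):
        u = x[n - taps + 1:n + 1][::-1]          # regressor [x[n], ..., x[n-taps+1]]
        e = d[n] - w @ u
        # generalized-correntropy gradient factor: exp(.) kills impulsive errors
        g = np.exp(-np.abs(e / sigma) ** alpha) * np.abs(e) ** (alpha - 1) * np.sign(e)
        w += mu * g * u - rho * np.sign(w)       # zero-attracting l1 term
    return w

rng = np.random.default_rng(4)
h = np.zeros(16); h[[2, 9]] = [1.0, -0.5]        # sparse unknown system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:5000] + 0.01 * rng.standard_normal(5000)
d += (rng.random(5000) < 0.01) * 20 * rng.standard_normal(5000)   # impulses
print(np.round(gmcc_sparse_lms(x, d), 2))
```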

10.
Recommender systems usually employ techniques like collaborative filtering for providing recommendations on items/services. Maximum Margin Matrix Factorization (MMMF) is an effective collaborative filtering approach. MMMF suffers from the data sparsity problem: the number of items rated by each user is very small compared to the very large item space. Recently, techniques like cross-domain collaborative filtering (transfer learning) have been suggested for addressing the data sparsity problem. In this paper, we propose a model for transfer learning in collaborative filtering through MMMF to address the data sparsity issue. The latent feature matrices involved in MMMF are clustered and combined to generate a cluster-level rating pattern called a codebook, and codebook transfer is used to transfer information. Transferring the codebook and finding the predicted rating matrix is done in a novel way by introducing a softness constraint into the optimization function. We have experimented with different levels of sparsity using benchmark datasets. Results from the experiments show that our model approximates the target matrix well.
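A sketch of codebook construction only, assuming k-means on a pre-filled rating matrix; the paper clusters the MMMF latent factor matrices and transfers the codebook under a softness constraint, which is not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(R, k_users=8, k_items=8):
    """R: dense source-domain rating matrix (missing entries pre-filled, e.g.
    with row means). Returns the (k_users x k_items) cluster-level rating
    pattern: the mean rating of each user cluster on each item cluster."""
    u_lab = KMeans(n_clusters=k_users, n_init=10).fit_predict(R)
    i_lab = KMeans(n_clusters=k_items, n_init=10).fit_predict(R.T)
    B = np.zeros((k_users, k_items))
    for g in range(k_users):
        for h in range(k_items):
            block = R[np.ix_(u_lab == g, i_lab == h)]
            B[g, h] = block.mean() if block.size else R.mean()
    return B, u_lab, i_lab

R = np.random.default_rng(6).integers(1, 6, (300, 100)).astype(float)
B, _, _ = build_codebook(R)
print(np.round(B, 2))
```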

11.
李茂  周志刚  王涛 《计算机科学》2019,46(1):138-142
Sparse Code Multiple Access (SCMA), a non-orthogonal multiple access technique, supports overloaded communication under limited spectrum resources and can significantly improve spectral efficiency. Thanks to the sparsity of SCMA codebooks, the Message Passing Algorithm (MPA) has become the classical multi-user detection algorithm. Although the traditional MPA achieves a bit error ratio (BER) close to that of maximum-likelihood decoding, the complexity of its exponential operations remains high. Accordingly, a belief-based dynamic edge-selection update method is designed to eliminate unnecessary node computations. In each iteration, the stability of the beliefs passed from function nodes to variable nodes in the factor graph is used to decide dynamically whether a node update is needed. Simulation results show that the dynamic edge-selection scheme significantly reduces the complexity of the algorithm while striking a good balance with BER performance.
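A toy illustration of belief-stability gating on a generic factor graph (loopy belief propagation on a binary triangle, not SCMA detection): an edge is frozen once its outgoing message stops changing, trading a little accuracy for fewer updates:

```python
import numpy as np

phi = np.array([[0.6, 0.4], [0.5, 0.5], [0.3, 0.7]])     # unary potentials
psi = np.array([[1.2, 0.8], [0.8, 1.2]])                 # shared pairwise potential
directed = [(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)]
m = {e: np.full(2, 0.5) for e in directed}
active = {e: True for e in directed}
tol = 1e-4

for _ in range(50):
    if not any(active.values()):
        break                                 # every message is stable: stop early
    for (i, j) in directed:
        if not active[(i, j)]:
            continue                          # frozen edge: skip the update entirely
        b = phi[i].copy()
        for (k, l) in directed:               # incoming messages, except from j
            if l == i and k != j:
                b *= m[(k, l)]
        new = psi.T @ b
        new /= new.sum()
        if np.abs(new - m[(i, j)]).max() < tol:
            active[(i, j)] = False            # stable: freeze (accuracy/cost trade-off)
        m[(i, j)] = new

belief = phi.copy()
for (k, l) in directed:
    belief[l] *= m[(k, l)]
print(belief / belief.sum(axis=1, keepdims=True))
```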

12.
In this paper, we study an efficient scheme for disseminating status information in a distributed computer system connected by multiple contention buses. Such a scheme is critical in resource sharing and load balancing applications. The collection of status information in these systems usually incurs a large overhead, which may impede regular message traffic and degrade system performance. Moreover, the status information collected may be outdated due to network delays. We describe our scheme with respect to the load balancing problem, although the scheme developed applies to resource sharing applications in general. We first reduce the decision problem for job migration in a system with multiple contention buses to the ordered-selection problem. A heuristic multiwindow protocol that utilizes the collision-detection capability of these buses is proposed and analyzed. The proposed protocol does not require explicit message transfers and can identify the t smallest variates out of N distributed random variates in an average of approximately (0.8 log₂ t + 0.2 log₂ N + 1.2) contention steps.
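A hedged simulation of the windowing idea, assuming variates in [0, 1) and an idealized bus that reports only idle/success/collision; the window-splitting policy here is illustrative, not the paper's tuned protocol:

```python
import numpy as np

rng = np.random.default_rng(6)

def channel(values, lo, hi):
    """Contention-bus abstraction: only idle (0), success (1), or collision (2)
    is observable when stations with variates in [lo, hi) transmit."""
    k = int(np.sum((values >= lo) & (values < hi)))
    return 0 if k == 0 else (1 if k == 1 else 2)

def select_smallest(values, t):
    """Ordered selection: shrink or advance a window on [0, 1) using only the
    channel feedback until the t smallest variates are identified in order."""
    found, lo, hi, steps = [], 0.0, 1.0, 0
    while len(found) < t:
        steps += 1
        out = channel(values, lo, hi)
        if out == 1:                       # one station: it wins, resume above
            found.append(values[(values >= lo) & (values < hi)][0])
            lo, hi = hi, 1.0
        elif out == 2:                     # collision: halve the window
            hi = (lo + hi) / 2
        else:                              # idle: move past the empty window
            lo, hi = hi, min(1.0, hi + 2 * (hi - lo))
    return np.array(found), steps

vals = rng.random(64)
winners, steps = select_smallest(vals, t=4)
print(np.allclose(winners, np.sort(vals)[:4]), steps)
```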

13.
This paper introduces a general principle for constructing multiscale kernels on surface meshes and presents a construction of the multiscale pre-biharmonic and multiscale biharmonic kernels. Our construction is based on an optimization problem that seeks to minimize a smoothness criterion, the Laplacian energy, subject to a sparsity-inducing constraint. Namely, we use the lasso constraint, which sets an upper bound on the l1-norm of the solution, to obtain a family of solutions parametrized by this upper-bound parameter. The interplay between sparsity and smoothness results in smooth kernels that vanish away from the diagonal. We prove that the resulting kernels have gradually changing supports, consistent behavior over partial and complete meshes, and interesting limiting behaviors (e.g., in the limit of large scales, the multiscale biharmonic kernel converges to the Green's function of the biharmonic equation); in addition, these kernels are based on intrinsic quantities and so are insensitive to isometric deformations. We show empirically that our kernels are shape-aware; robust to noise, tessellation, and partial data; and fast to compute. Finally, we demonstrate that the new kernels are useful for function interpolation and shape correspondence.
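A loose sketch on a path-graph Laplacian, assuming the lasso penalty form (equivalent to the constraint form for some λ) and ISTA; the unpenalized minimizer of 0.5‖Lx − e_c‖² is the Green's function of L², and the l1 term localizes it:

```python
import numpy as np

def sparse_kernel(L, center, lam=1e-3, iters=500):
    """ISTA for 0.5*||L x - e_c||^2 + lam*||x||_1 with a dense Laplacian L."""
    e = np.zeros(L.shape[0]); e[center] = 1.0
    step = 1.0 / np.linalg.norm(L, 2) ** 2
    x = np.zeros_like(e)
    for _ in range(iters):
        z = x - step * (L @ (L @ x - e))                           # smooth gradient
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

n = 200
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # path-graph Laplacian
x = sparse_kernel(L, center=100)
print(np.count_nonzero(np.abs(x) > 1e-8))   # support stays localized near the center
```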

14.
Sparsity-driven classification technologies have attracted much attention in recent years owing to their capability to provide more compressive representations and clearer interpretation. The two most popular classification approaches are support vector machines (SVMs) and kernel logistic regression (KLR), each having its own advantages. The sparsification of SVMs has been well studied, and many sparse versions of the 2-norm SVM, such as the 1-norm SVM (1-SVM), have been developed; the sparsification of KLR has been studied less. Existing sparsifications of KLR are mainly based on L1-norm and L2-norm penalties, which lead to sparse versions whose solutions are not as sparse as they should be. A very recent study of L1/2 regularization theory in compressive sensing shows that L1/2 sparse modeling can yield solutions sparser than those of the 1-norm and 2-norm and, furthermore, that the model can be solved efficiently by a simple iterative thresholding procedure. The objective function dealt with in L1/2 regularization theory is, however, of square form, whose gradient is linear in its variables (the so-called linear gradient function). In this paper, by extending the linear gradient function of the L1/2 regularization framework to the logistic function, we propose a novel sparse version of KLR, the 1/2 quasi-norm kernel logistic regression (1/2-KLR). The version integrates the advantages of KLR and L1/2 regularization and defines an efficient implementation scheme for sparse KLR. We suggest a fast iterative thresholding algorithm for 1/2-KLR and prove its convergence. A series of simulations demonstrates that 1/2-KLR often obtains sparser solutions than the existing sparsity-driven versions of KLR at the same or better accuracy level; the conclusion holds even in comparison with sparse SVMs (1-SVM and 2-SVM). We show an exclusive advantage of 1/2-KLR: the regularization parameter can be set adaptively whenever the sparsity (correspondingly, the number of support vectors) is given, which suggests a methodology for comparing the sparsity-promotion capability of different sparsity-driven classifiers. As an illustration of the benefits of 1/2-KLR, we give two applications in semi-supervised learning, showing that 1/2-KLR can be successfully applied to classification tasks in which only a few data are labeled.
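A sketch of the scheme, assuming the half-thresholding operator of Xu et al. and a plain RBF kernel matrix; constants and the demo data are illustrative:

```python
import numpy as np

def half_threshold(z, lam):
    """Half-thresholding operator of L1/2 regularization (Xu et al.)."""
    t = (54 ** (1 / 3) / 4) * lam ** (2 / 3)
    out = np.zeros_like(z)
    big = np.abs(z) > t
    phi = np.arccos((lam / 8) * (np.abs(z[big]) / 3) ** (-1.5))
    out[big] = (2 / 3) * z[big] * (1 + np.cos(2 * np.pi / 3 - 2 * phi / 3))
    return out

def half_ista_klr(K, y, lam=0.5, iters=500):
    """Iterative half thresholding with a logistic-loss gradient (K: kernel
    matrix, y in {0,1}): the 'linear gradient -> logistic' extension."""
    L = np.linalg.norm(K, 2) ** 2 / 4          # Lipschitz bound for the gradient
    a = np.zeros(len(y))
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-K @ a))
        a = half_threshold(a - (K @ (p - y)) / L, lam / L)
    return a

rng = np.random.default_rng(5)
X = rng.standard_normal((80, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))   # RBF kernel
a = half_ista_klr(K, y)
print(int(np.count_nonzero(a)), "nonzero coefficients (support vectors)")
```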

15.
夏先进  李士宁  张羽  李志刚  杨哲 《软件学报》2015,26(8):1983-2006
The inherent communication pattern of wireless sensor networks causes unbalanced energy consumption and, in turn, the energy-hole problem. Hybrid data transmission is a recently proposed strategy for avoiding energy holes, and its energy-balancing performance depends mainly on each node's transmission probability. However, setting these probabilities lacks the guidance of a theoretical model, and whether the hybrid strategy can balance energy across the whole network when node transmission range is limited remains an open question. This paper casts the energy-balance problem of the hybrid strategy in a one-dimensional network as an optimal allocation of transmission probabilities and derives their exact expressions from a formal model. The analysis shows that a node's transmission probability depends mainly on its position, but when the number of network segments exceeds a threshold the derived probabilities become infeasible and the hybrid strategy can no longer balance network-wide energy consumption. On this basis, a theoretical condition for network-wide energy balance is given: balance is achievable only when the number of segments does not exceed n0, where n0 depends solely on a newly identified coefficient α, the energy-premium ratio of the communication system. The effect of transmission-range settings on energy balance is also analyzed, and an upper bound on the energy balance achievable by the hybrid strategy in the general case is given. Simulation results agree with the theoretical analysis: under the stated condition, setting transmission probabilities with the proposed method balances the energy consumption of all nodes.
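A numerical sketch of the balance condition, assuming a hypothetical first-order radio model (constants `e_elec`, `eps` and exponent `k` are invented): probabilities that equalize all node energies are sought by a bounded least-squares solve, and for larger n no feasible p in [0, 1] exists, mirroring the threshold result:

```python
import numpy as np
from scipy.optimize import least_squares

n, d, k = 5, 10.0, 2.0                 # segments, hop length, path-loss exponent
e_elec, eps = 50e-9, 10e-12            # invented radio constants

def node_energy(p):
    """Per-round energy of node i at distance i*d from the sink: each node sends
    one unit of its own data (direct with probability p_i, else hop-by-hop) and
    relays the hop-by-hop traffic of all outer nodes."""
    E = np.zeros(n)
    tx_hop = e_elec + eps * d ** k
    for i in range(1, n + 1):
        relayed = np.sum(1.0 - p[i:])              # traffic forwarded for nodes j > i
        tx_direct = e_elec + eps * (i * d) ** k
        E[i - 1] = (p[i - 1] * tx_direct + (1 - p[i - 1]) * tx_hop
                    + relayed * (e_elec + tx_hop))  # receive + forward
    return E

def imbalance(p):
    E = node_energy(p)
    return E - E.mean()

res = least_squares(imbalance, x0=np.full(n, 0.5), bounds=(0.0, 1.0))
print(np.round(res.x, 3), np.max(np.abs(imbalance(res.x))))  # residual ~0 iff feasible
```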

16.
The number of training samples per class (n) required for accurate Maximum Likelihood (ML) classification is known to be affected by the number of bands (p) in the input image. However, the general rule that n should be 10p to 30p is often enforced universally in remote sensing without questioning its relevance to the complexity of the specific discrimination problem. Furthermore, identifying this many training samples is often problematic when many classes and/or many bands are used. It is important, then, to test how this generally accepted rule matches common remote sensing discrimination problems, because it could be unnecessarily restrictive for many applications. This study was primarily conducted to test whether the general rule defining the relationship between n and p is well suited to ML classification of a relatively simple remote sensing discrimination problem. To summarise the mean response of n to p for our study site, a Monte Carlo procedure was used to randomly stack various numbers of bands into thousands of separate image combinations that were then classified using an ML algorithm. The bands were randomly selected from a 119-band Enhanced Thematic Mapper-plus (ETM+) dataset comprising 17 images acquired during the 2001-2002 southern hemisphere summer agricultural growing season over an irrigation area in south-eastern Australia. Results showed that the number of training samples needed for accurate ML classification was much lower than the currently accepted rule. Owing to the asymptotic nature of the relationship, we found that 95% of the accuracy attained using n = 30p samples could be achieved with approximately 2p to 4p samples, i.e. no more than 1/7th of the currently recommended value of n. Our findings show that the number of training samples needed for a simple discrimination problem is much smaller than the general rule suggests; the rule should therefore not be universally enforced, and the number of training samples should instead be determined with the complexity of the discrimination problem in mind.
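A Monte Carlo sketch with synthetic Gaussian classes in place of the ETM+ data, showing how accuracy saturates well below n = 30p; the ML classifier uses class-wise means and covariances:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_classes = 6, 4
means = 2 * rng.standard_normal((n_classes, p))   # invented class centres

def sample(n_per):
    X = np.vstack([m + rng.standard_normal((n_per, p)) for m in means])
    return X, np.repeat(np.arange(n_classes), n_per)

def ml_classify(Xtr, ytr, Xte):
    """Gaussian Maximum Likelihood classifier: class-wise mean and covariance."""
    classes = np.unique(ytr)
    params = []
    for c in classes:
        Xc = Xtr[ytr == c]
        C = np.cov(Xc.T)
        params.append((Xc.mean(0), np.linalg.inv(C), np.linalg.slogdet(C)[1]))
    scores = np.stack([-np.einsum('ij,jk,ik->i', Xte - m, P, Xte - m) - ld
                       for m, P, ld in params], axis=1)
    return classes[np.argmax(scores, axis=1)]

Xte, yte = sample(500)
for n_per in (2 * p, 4 * p, 10 * p, 30 * p):
    acc = np.mean([(ml_classify(*sample(n_per), Xte) == yte).mean()
                   for _ in range(20)])
    print(n_per, "samples/class ->", round(float(acc), 3))
```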

17.
Staging is a powerful language construct that allows a program at one stage of evaluation to manipulate and specialize a program to be executed at a later stage. We propose a new staged language calculus, ⟨ML⟩, which extends the programmability of staged languages in two directions. First, ⟨ML⟩ supports dynamic type specialization: types can be dynamically constructed, abstracted, and passed as arguments, while preserving decidable typechecking via a System Fω-style semantics combined with a restricted form of λω-style runtime type construction. With dynamic type specialization, the data structure layout of a program can be optimized via staging. Second, ⟨ML⟩ works in a context where different stages of computation are executed in different process spaces, a property we term staged process separation. Programs at different stages can directly communicate program data in ⟨ML⟩ via a built-in serialization discipline. The language ⟨ML⟩ is endowed with a metatheory including type preservation, type safety, and decidability, demonstrated constructively by a sound typechecking algorithm. While our language design is general, we are particularly interested in future applications of staging in resource-constrained and embedded systems: these systems have limited space for code and data, as well as limited CPU time, and specializing code for the particular deployment at hand can improve efficiency in all of these dimensions. The combination of dynamic type specialization and staging across processes greatly increases the utility of staged programming in these domains. We illustrate this via wireless sensor network programming examples.
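Staging in miniature, using Python closures as stand-ins for quoted code (this illustrates generic staged specialization, not the calculus itself):

```python
def power_gen(n):
    """Stage 1: given the exponent now, emit a stage-2 function specialized to it."""
    if n == 0:
        return lambda x: 1
    rest = power_gen(n - 1)
    return lambda x: x * rest(x)

cube = power_gen(3)     # all recursion happens at generation time
print(cube(5))          # 125: the residual program only multiplies
```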

18.
Nonnegative matrix factorization has been widely applied in recent years. The nonnegativity constraints yield parts-based, sparse representations, which can be more robust than global, non-sparse features. However, existing techniques cannot precisely control the sparseness. To address this issue, we present a unified criterion, called Nonnegative Matrix Factorization by Joint Locality-constrained and ℓ2,1-norm Regularization (NMF2L), designed to simultaneously perform nonnegative matrix factorization with a locality constraint and to obtain row sparsity. We reformulate the nonnegative local coordinate factorization problem and apply the ℓ2,1-norm to the coefficient matrix to obtain row sparsity, which results in selecting relevant features. An efficient updating rule is proposed, and its convergence is theoretically guaranteed. Experiments on benchmark face datasets demonstrate the effectiveness of the presented method in comparison to state-of-the-art methods.
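A multiplicative-update sketch of NMF with an ℓ2,1 penalty on the rows of the coefficient matrix; the locality-constrained term of NMF2L is omitted, and the update shown is a common heuristic rather than the paper's proven rule:

```python
import numpy as np

def nmf_l21(X, r, lam=0.1, iters=300, eps=1e-10):
    """Factorize X ~ W @ H with W, H >= 0 and an l2,1 penalty on the rows of H,
    so whole rows of H (features) shrink toward zero."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        d = 1.0 / (np.linalg.norm(H, axis=1) + eps)    # l2,1 subgradient weights
        H *= (W.T @ X) / (W.T @ W @ H + 0.5 * lam * d[:, None] * H + eps)
    return W, H

X = np.abs(np.random.default_rng(7).standard_normal((60, 40)))
W, H = nmf_l21(X, r=8)
print(np.round(np.linalg.norm(H, axis=1), 3))   # some rows shrink toward zero
```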

19.
Due to the ability of sensor nodes to collaborate, time synchronization is essential for many sensor network operations. With the aid of hardware capabilities, this work presents a novel time synchronization method, employing a dual-clock delayed-message approach, for energy-constrained wireless sensor networks (WSNs). To conserve WSN energy, this study adopts a flooding time synchronization scheme based on one-way timing messages. Via the proposed approach, the maximum-likelihood (ML) estimation of time parameters, such as clock skew and clock offset, can be obtained for time synchronization. Additionally, with the proposed scheme, the clock skew and offset estimation problem is transformed into a problem independent of random delay and propagation delay. The ML estimate of link propagation delay, which can be used by localization systems in the proposed scenario, is also obtained. In addition to good performance, the proposed method has low complexity.
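A minimal sketch of the estimation step, assuming Gaussian random delay so that ML reduces to least squares on one-way timestamp pairs; the dual-clock mechanism that separates propagation delay from offset is not modelled, so the two are lumped here:

```python
import numpy as np

rng = np.random.default_rng(2)
skew, offset, prop = 1.0003, 5.2, 0.8            # invented ground truth
t1 = np.sort(rng.uniform(0, 100, 40))            # sender timestamps (one-way flood)
t2 = skew * (t1 + prop) + offset + rng.normal(0, 0.01, t1.size)

# With Gaussian delay, ML estimation of the clock parameters is the least-squares
# fit t2 ~ skew*t1 + b, where b lumps clock offset with propagation delay.
A = np.column_stack([t1, np.ones_like(t1)])
(skew_hat, b_hat), *_ = np.linalg.lstsq(A, t2, rcond=None)
print(skew_hat, b_hat)
```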

20.
Asifullah  Syed Fahad  Abdul  Tae-Sun   《Pattern recognition》2008,41(8):2594-2610
We present an innovative scheme for blindly extracting message bits when a watermarked image is distorted. In this scheme, we exploit the capabilities of machine learning (ML) approaches to nonlinearly classify the embedded bits. The proposed technique adaptively modifies the decoding strategy in view of the anticipated attack, and the extraction of bits is treated as a binary classification problem. Conventionally, a hard decoder is used under the assumption that the underlying distribution of the discrete cosine transform coefficients does not change appreciably. However, under attacks arising in real-world applications of watermarking, such as JPEG compression in shared medical image warehouses, these coefficients are heavily altered. The sufficient statistics of the maximum-likelihood decoding process, which serve as features in the proposed scheme, overlap at the receiving end, and a simple hard decoder fails to classify them properly. In contrast, our proposed ML decoding model attains the highest accuracy on the test data. Experimental results show that, through its training phase, the proposed decoding scheme can cope with the alterations in features introduced by a new attack, achieving a promising improvement in bit correct ratio over the existing decoding scheme.
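A hedged stand-in with synthetic "sufficient statistic" features whose class-conditional distributions shift under a simulated attack; any kernel classifier can play the trained-decoder role:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 2000
bits = rng.integers(0, 2, n)                                  # embedded message bits
feats = rng.normal(0, 1, (n, 2)) + np.where(bits[:, None] == 1, 0.7, -0.7)
feats[:, 1] *= 1 + 0.5 * bits                                 # simulated attack distortion

clf = SVC(kernel='rbf').fit(feats[:1500], bits[:1500])        # trained, adaptive decoder
bcr = (clf.predict(feats[1500:]) == bits[1500:]).mean()
print("bit correct ratio:", round(float(bcr), 3))
```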
