Found 20 similar documents; search took 15 ms
1.
As motion data is used ever more widely in animation production and research, efficient motion-data compression has become an active research topic. This paper proposes a new lossy compression method for motion data based on sparse representation. The input motion data is first analysed to generate a sparse-representation dictionary; each frame of the motion data is then expressed as a sparse linear combination over that dictionary; finally, the K-SVD algorithm iteratively optimises both the dictionary and the sparse representations. Experimental results show that the method achieves a high compression ratio (around 50x) while preserving the integrity of the original motion data, keeping the reconstruction error after decompression within a range barely perceptible to the naked eye (average RMS error below 2.0); the method is particularly well suited to compressing short motion clips.
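The sparse coding step inside K-SVD pipelines like the one above is usually solved greedily. As a rough generic illustration (not the paper's implementation; `D`, `x`, and the sparsity level `k` are made-up toy values), here is a minimal orthogonal matching pursuit in NumPy:

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate x with at most k
    columns (atoms) of the dictionary D. Minimal sketch, not optimised."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the chosen support by least squares
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = x - D @ coef
    return coef

# toy example: x is an exact 2-sparse combination of dictionary atoms
rng = np.random.default_rng(0)
D = rng.normal(size=(8, 12))
D /= np.linalg.norm(D, axis=0)     # unit-norm atoms, overcomplete dictionary
x = 2.0 * D[:, 3] - 1.5 * D[:, 7]
c = omp(D, x, k=2)
print(np.linalg.norm(x - D @ c))   # reconstruction error
```

In full K-SVD this coding step alternates with a dictionary-update step that re-estimates each atom via a rank-1 SVD of its residual; only the per-frame codes and the dictionary then need to be stored, which is where the compression comes from.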
2.
To reduce the heavy computational cost of two-dimensional direction-of-arrival (2D-DOA) estimation under sparse reconstruction, a 2D-DOA estimation method based on a dimension-reduced sparse representation of the covariance matrix is proposed. First, a redundant dictionary of steering-vector matrices is constructed from spatial angles, mapping azimuth-elevation pairs from a two-dimensional space onto a one-dimensional space; this shortens the dictionary, lowers the solving complexity, and pairs elevation and azimuth automatically. Second, the sparse-representation model of the sample covariance matrix is improved and reduced in dimension. Next, a confidence interval for the residual constraint term is derived from the residual-constraint property of sparse covariance reconstruction, avoiding the difficult parameter selection that regularization methods entail. Finally, the 2D DOAs are estimated with a convex-optimization package. Simulations show that once the number of selected covariance-matrix columns reaches a threshold (3 when there are only two incident signals), the incident angles are estimated accurately; at low signal-to-noise ratio (SNR < 5 dB) the method is more accurate than the spatial-angle-based eigenvector algorithm, and at low snapshot counts (< 100) it is slightly less accurate than the eigenvector method, though comparable for small angular separations.
3.
Traditional multi-label classification algorithms rest on binary label prediction, but binary labels only indicate whether a sample has a given class; they carry little semantic information and cannot fully express label semantics. To mine the semantic information of the label space, a multi-label classification algorithm based on non-negative matrix factorization and sparse representation (MLNS) is proposed. The algorithm combines non-negative matrix factorization with sparse representation to convert binary labels into real-valued labels, enriching label semantics and improving classification. First, the label space is factorized to obtain a latent label-semantic space, which is combined with the original feature space to form a new feature space; this space is then sparsely coded to obtain global similarity relations among samples; finally, those relations are used to reconstruct the binary label vectors, converting binary labels into real-valued ones. The proposed algorithm is compared with MLBGM, ML2, LIFT, MLRWKNN, and others on five standard multi-label datasets under five evaluation metrics. The results show that MLNS outperforms the compared multi-label algorithms, ranking first in 50% of the cases, in the top two in 76%, and in the top three in all cases.
4.
5.
Sparse matrix partitioning is a common technique used for improving performance of parallel linear iterative solvers. Compared to solvers used for symmetric linear systems, solvers for nonsymmetric systems offer more potential for addressing multiple communication metrics due to the flexibility of adopting different partitions on the input and output vectors of sparse matrix-vector multiplication operations. In this regard, there exist works based on one-dimensional (1D) and two-dimensional (2D) fine-grain partitioning models that effectively address both bandwidth and latency costs in nonsymmetric solvers. In this work, we propose two new models based on 2D checkerboard and jagged partitioning. These models aim at minimizing total message count while maintaining a balance on communication volume loads of processors; hence, they address both bandwidth and latency costs. We evaluate all partitioning models on two nonsymmetric system solvers implemented using the widely adopted PETSc toolkit and conduct extensive experiments using these solvers on a modern system (a BlueGene/Q machine), successfully scaling them up to 8K processors. Along with the proposed models, we put practical aspects of eight evaluated models (two 1D- and six 2D-based) under thorough analysis. To the best of our knowledge, this is the first work that analyzes practical performance of 2D models on this scale. Among evaluated models, the models that rely on 2D jagged partitioning obtain the most promising results by striking a balance between minimizing bandwidth and latency costs.
6.
Weijing Song Ze Deng Lizhe Wang Bo Du Peng Liu Ke Lu 《The Journal of supercomputing》2017,73(8):3433-3450
Sparse representation is a building block for many image-processing applications such as compression, denoising, and fusion. In the era of big data, current sparse-representation methods generally cannot process large image datasets time-efficiently. Aiming at this problem, this paper employs contemporary general-purpose computing on the graphics processing unit (GPGPU) to extend a sparse-representation method for big image datasets, IK-SVD, into G-IK-SVD. The GPU-aided IK-SVD parallelizes IK-SVD with three GPU optimizations: (1) a batch-OMP algorithm based on a GPU-aided Cholesky decomposition algorithm, (2) a GPU sparse-matrix operation optimization method, and (3) a hybrid parallel scheme. The experimental results indicate that (1) the GPU-aided batch-OMP algorithm achieves speedups of up to 30x over the sparse-coding part of IK-SVD, (2) the optimized sparse-matrix operations accelerate the whole IK-SVD procedure by up to 15x, (3) the proposed parallel scheme further accelerates sparse representation of a large image dataset by up to 24x, and (4) G-IK-SVD attains the same dictionary-learning quality as IK-SVD.
7.
Objective: Face recognition is severely disturbed when the collected face-image samples are contaminated, and with few training samples (small-sample settings) erroneous sparse coefficients sharply degrade performance. To address this, a superposed linear sparse representation algorithm based on discriminative non-convex low-rank matrix decomposition is proposed. Method: First, the γ-norm replaces the traditional nuclear norm, overcoming the recognition error that conventional low-rank matrix decomposition incurs from the scaling of matrix singular values when solving the nuclear norm. A structural-incoherence discriminative term is then introduced to increase the incoherence between the low-rank dictionaries of different classes, suppressing intra-class variation and removing inter-class correlation. Finally, classification is completed by superposed linear sparse representation. Results: On the AR face database the algorithm reaches a recognition rate of 98.67±0.57%, higher than SRC (sparse representation-based classification), ESRC (extended SRC), RPCA (robust principal component analysis)+SRC, LRSI (low rank matrix decomposition with structural incoherence), and SLRC (superposed linear representation based classification)-l1. Occlusion experiments show better robustness to occluded images, with higher recognition rates than the other algorithms at every occlusion ratio. On the CMU PIE face database, with 0, 10%, 20%, 30%, and 40% salt-and-pepper noise added to unoccluded images, the recognition rates reach 90.1%, 85.5%, 77.8%, 65.3%, and 46.1% respectively, all higher than the other algorithms. Conclusion: Experiments across face databases, occlusion ratios, and noise levels show that the proposed algorithm maintains high recognition rates under occlusion, expression, illumination, and other noise factors, with better robustness.
8.
9.
A new target-tracking algorithm is proposed that applies sparse representation within the LK (Lucas-Kanade) image-registration framework. The target's state parameters are solved by minimizing the L1 norm of the alignment error, yielding accurate tracking. Two appearance models of the target are maintained simultaneously: a dynamic dictionary and a static template, where the dynamic model describes the target's appearance through a sparse representation over the dynamic dictionary. To counter the tracking drift caused by continual updates of the dynamic dictionary, a two-stage iteration scheme is adopted, using the dynamic dictionary in the first stage and the static template in the second. Extensive experiments show that the algorithm copes effectively with appearance changes, partial occlusion, illumination variation, and other challenges while remaining close to real time.
10.
In recent years, as signal-sparsity theory has attracted growing attention, the sparse-representation classifier has been applied to speaker-recognition systems as a new classification algorithm. Its basic idea is that, provided the overcomplete dictionary is large enough, any test sample can be represented linearly over it. Based on sparsity theory, the coefficient vector of an unknown speaker, i.e., the sparse solution, can be obtained by L1-norm minimization. The overcomplete dictionary can be regarded as a large database obtained by MAP adaptation of speech feature vectors on a Gaussian mixture model-universal background model (GMM-UBM). Using the sparse-representation model as the classification method for speaker identification, experiments on the TIMIT corpus show that the adopted approach greatly improves the performance of the speaker-recognition system.
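The classification rule sketched above (code the test sample over the whole dictionary, then score each class by its reconstruction residual) can be illustrated generically. This is a toy sketch, not the paper's system: the l1 solver is a plain iterative soft-thresholding (ISTA) stand-in, and `D`, `labels`, and `x` are made-up values rather than GMM-UBM supervectors.

```python
import numpy as np

def ista(D, x, lam=0.05, iters=300):
    """Minimize 0.5*||x - D c||^2 + lam*||c||_1 by iterative
    soft-thresholding; a simple stand-in for an l1-minimization solver."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        g = c + D.T @ (x - D @ c) / L      # gradient step
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return c

def src_classify(D, labels, x):
    """Sparse-representation classification: code x over the whole
    dictionary, then assign the class whose atoms explain x best."""
    c = ista(D, x)
    errs = {}
    for cls in set(labels):
        mask = np.array([l == cls for l in labels])
        errs[cls] = np.linalg.norm(x - D[:, mask] @ c[mask])
    return min(errs, key=errs.get)

# toy dictionary: two "speakers", four atoms clustered around each
rng = np.random.default_rng(1)
base0, base1 = rng.normal(size=8), rng.normal(size=8)
D = np.column_stack([base0 + 0.1 * rng.normal(size=8) for _ in range(4)] +
                    [base1 + 0.1 * rng.normal(size=8) for _ in range(4)])
D /= np.linalg.norm(D, axis=0)
labels = [0] * 4 + [1] * 4
x = base0 / np.linalg.norm(base0)   # a test sample close to speaker 0
print(src_classify(D, labels, x))   # predicted speaker label
```

The key property exploited is that a genuine sample concentrates its sparse coefficients on its own class's atoms, so the within-class residual is far smaller than any cross-class residual.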
11.
Liu Xuejie Wang Jingbin Yin Ming Edwards Benjamin Xu Peijuan 《Neural computing & applications》2017,28(1):135-143
Neural Computing and Applications - Context of data points, which is usually defined as the other data points in a data set, has been found to play important roles in data representation and...
12.
Antonela Tommasel Daniela Godoy Alejandro Zunino Cristian Mateos 《Knowledge and Information Systems》2017,51(2):459-497
Matrix computations are both fundamental and ubiquitous in computational science, and as a result, they are frequently used in numerous disciplines of scientific computing and engineering. Due to the high computational complexity of matrix operations, which makes them critical to the performance of a large number of applications, their efficient execution in distributed environments becomes a crucial issue. This work proposes a novel approach for distributing sparse matrix arithmetic operations on computer clusters, aiming at speeding up the processing of high-dimensional matrices. The approach focuses on how to split such operations into independent parallel tasks by considering the intrinsic characteristics that distinguish each type of operation and the particular matrices involved. The approach was applied to the most commonly used arithmetic operations between matrices. The performance of the presented approach was evaluated considering a high-dimensional text feature-selection approach and two real-world datasets. Experimental evaluation showed that the proposed approach helped to significantly reduce the computing times of big-scale matrix operations, compared to serial and multi-thread implementations as well as several linear algebra software libraries.
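The central idea above, splitting one matrix operation into independent parallel tasks, can be sketched in a few lines. This is a generic dense NumPy stand-in for the cluster-scale sparse case, not the authors' implementation: matrix multiplication is row-block decomposable, so each block-times-B product is an independent task.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def blocked_matmul(A, B, n_blocks=4):
    """Compute A @ B by splitting A into row blocks and evaluating each
    block product as an independent task (here: threads; on a cluster,
    the tasks would be shipped to different nodes)."""
    blocks = np.array_split(A, n_blocks, axis=0)   # independent sub-problems
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(lambda blk: blk @ B, blocks))
    return np.vstack(parts)                        # reassemble the result

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 40))
A[rng.random(A.shape) < 0.9] = 0.0                 # make A mostly zeros
B = rng.normal(size=(40, 30))
C = blocked_matmul(A, B)
print(np.allclose(C, A @ B))                       # True
```

Which axis to split along is exactly the operation-specific choice the paper studies: addition can split either way, while multiplication constrains the partition of one operand by the partition of the other.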
13.
14.
When analysing patterns, our goals are (i) to find structure in the presence of noise, (ii) to decompose the observed structure into sub-components, and (iii) to use the components for pattern completion. Here, a novel loop architecture is introduced to perform these tasks in an unsupervised manner. The architecture combines sparse code shrinkage with non-negative matrix factorisation, and blends their favourable properties: sparse code shrinkage aims to remove Gaussian noise in a robust fashion; non-negative matrix factorisation extracts substructures from the noise filtered inputs. The loop architecture performs robust pattern completion when organised into a two-layered hierarchy. We demonstrate the power of the proposed architecture on the so-called bar-problem and on the FERET facial database.
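The non-negative matrix factorisation component used above is commonly computed with Lee-Seung multiplicative updates. The following is a generic sketch of that step (not the paper's loop architecture), run on a toy parts-based matrix:

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Factor a nonnegative matrix V ~ W @ H (W, H >= 0) with rank r
    using Lee-Seung multiplicative updates. Minimal illustrative sketch."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.1, 1.0, size=(n, r))
    H = rng.uniform(0.1, 1.0, size=(r, m))
    eps = 1e-9                                 # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis (the "parts")
    return W, H

# toy data: each column of V mixes two nonnegative "parts"
parts = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
acts = np.array([[1., 0., 2.], [0., 1., 1.]])
V = parts @ acts
W, H = nmf(V, r=2)
print(np.linalg.norm(V - W @ H))               # reconstruction error
```

The nonnegativity constraint is what makes the learned columns of `W` behave like additive sub-components, which is the property the loop architecture exploits for decomposition and pattern completion.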
15.
Sparse representation is a mathematical model for data representation that has proved to be a powerful tool for solving problems in various fields such as pattern recognition, machine learning, and computer vision. As one of the building blocks of the sparse representation method, dictionary learning plays an important role in the minimization of the reconstruction error between the original signal and its sparse representation in the space of the learned dictionary. Although using training samples directly as dictionary bases can achieve good performance, the main drawback of this method is that it may result in a very large and inefficient dictionary due to noisy training instances. To obtain a smaller and more representative dictionary, in this paper, we propose an approach called Laplacian sparse dictionary (LSD) learning. Our method is based on manifold learning and double sparsity. We incorporate the Laplacian weighted graph in the sparse representation model and impose the l1-norm sparsity on the dictionary. An LSD is a sparse overcomplete dictionary that can preserve the intrinsic structure of the data and learn a smaller dictionary for each class. The learned LSD can be easily integrated into a classification framework based on sparse representation. We compare the proposed method with other methods using three benchmark-controlled face image databases, Extended Yale B, ORL, and AR, and one uncontrolled person image dataset, i-LIDS-MA. Results show the advantages of the proposed LSD algorithm over state-of-the-art sparse representation based classification methods.
16.
17.
To address the high computational complexity of current sparse-representation-based image-fusion algorithms and their neglect of local image features, an image-fusion method based on multi-scale sparse representation (MSR) is proposed. It exploits the strength of wavelet multi-scale analysis at highlighting local image features and combines it effectively with overcomplete sparse representation: the images to be fused are decomposed over multiple wavelet levels in the wavelet domain, the features at each scale are sparsely coded by OMP (orthogonal matching pursuit) over a multi-scale K-SVD dictionary, and fusion is carried out at each scale in the wavelet domain. Experimental results show that, compared with fusion algorithms based on the traditional wavelet transform, the contourlet transform, or plain sparse representation, the proposed algorithm better preserves the integrity of local image features and achieves better performance.
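The multi-scale decomposition underlying the method above can be illustrated with the simplest wavelet there is. This is a one-level 2-D Haar transform on a toy array, a generic sketch of the analysis step rather than the paper's transform:

```python
import numpy as np

def haar_dwt2(img):
    """One level of an orthonormal 2-D Haar wavelet transform: returns the
    low-pass approximation LL and the detail sub-bands (LH, HL, HH)."""
    # transform along rows: pairwise averages and differences
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # transform along columns
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, (lh, hl, hh)

img = np.arange(16.0).reshape(4, 4)
ll, details = haar_dwt2(img)
# the orthonormal transform preserves energy across all sub-bands
total = (img ** 2).sum()
print(np.isclose(total, (ll ** 2).sum() + sum((d ** 2).sum() for d in details)))
```

In the fusion method, recursing on `ll` yields the multi-level pyramid, and it is the coefficients of each sub-band (not the pixels) that are sparsely coded and fused scale by scale.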
18.
Object-detection algorithms based on the deformable part model (DPM) represent features with histograms of oriented gradients (HOG), but HOG cannot handle blurred boundaries and ignores smooth feature regions, which limits the performance of the DPM algorithm. To improve DPM, a sparse-representation-based deformable part model detection method is proposed. It uses sparse coding to build a new feature descriptor that replaces the histograms of oriented gradients used by the original deformable part model; the new descriptor captures more information about objects and is insensitive to image noise. Experimental results show that the method improves the accuracy of the original deformable part model algorithm on the PASCAL VOC 2012 dataset.
19.
For accurate SAR image target recognition, a sparse-representation-based SAR target-recognition method is proposed. After dimensionality reduction with principal component analysis (PCA), the reduced training samples are used to build a sparse linear model; the sparse coefficient vector x of a test sample is solved by l1-norm minimization, and targets are classified and recognized from the sparsity distribution of the coefficients. Simulations on the MSTAR data show that the sparse-representation-based SAR target-recognition method achieves excellent recognition performance at a suitable feature dimension, with recognition rates still above 98% even when the target aspect angle is unknown.
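The PCA preprocessing step mentioned above is a standard projection onto the top principal axes. A minimal generic sketch (toy data, not MSTAR features):

```python
import numpy as np

def pca_fit(X, d):
    """PCA via SVD of the centered data matrix; returns the mean and the
    top-d principal axes (rows of Vt). Generic sketch."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:d]

def pca_transform(X, mu, axes):
    """Project centered samples onto the learned principal axes."""
    return (X - mu) @ axes.T

# toy data lying near a 2-D plane embedded in a 10-D space
rng = np.random.default_rng(3)
latent = rng.normal(size=(50, 2))
lift = rng.normal(size=(2, 10))
X = latent @ lift + 0.01 * rng.normal(size=(50, 10))
mu, axes = pca_fit(X, d=2)
Z = pca_transform(X, mu, axes)       # reduced samples, fed to the sparse model
Xr = Z @ axes + mu                   # reconstruction from only 2 dimensions
print(np.abs(X - Xr).max())          # residual left outside the top-2 plane
```

In the recognition pipeline, the reduced training samples `Z` form the columns of the sparse linear model, and each reduced test sample is then coded over them by l1 minimization.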
20.
To overcome the curse of dimensionality that high-dimensional data causes in classification, dimensionality reduction is a key step of data preprocessing, and feature selection based on sparse learning is a current research focus. For the many real-world problems that are not linearly separable, the kernel trick is used to map the data samples into a kernel space, resolving the non-linear similarity of features. The samples in kernel space are then sparsely reconstructed, yielding a concise sparse representation of the original data in that space, and a corresponding scoring mechanism is built to select the optimal feature subset. Benefiting from the natural discriminative power of sparse learning, the algorithm selects "good" features that preserve the structural properties of the original data, lowering the computational complexity of the learning model and raising classification accuracy. Experiments on standard UCI datasets show an average performance improvement of about 5% over comparable algorithms.