Similar Documents
17 similar documents found
1.
An electroencephalogram (EEG) emotion recognition model based on a deep convolutional joint adaptation network (Convolutional neural network-joint adaptation network, CNN-JAN) is proposed. The model integrates the joint adaptation idea from transfer learning into a deep convolutional network. First, rectangular convolution kernels extract spatial features from the data and capture deep emotion-related information across EEG channels. The extracted spatial features are then fed into an adaptation layer that applies the multi-kernel joint maximum mean discrepancy (MK-JMMD) over the joint distribution for transfer learning, using the MK-JMMD metric to address the mismatch between the source-domain and target-domain distributions. On the SEED dataset, emotion classification experiments were carried out separately with differential entropy features and differential caudality features; with differential entropy features the within-subject cross-trial accuracy reached 84.01%, higher than the baseline experiments and currently popular transfer learning methods, and the cross-subject experiments also achieved good accuracy, verifying the effectiveness of the model for EEG emotion recognition.
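
For orientation, the sketch below computes a plain multi-kernel MMD between two feature batches in NumPy. It is a generic illustration under the assumption of a sum-of-Gaussians kernel with arbitrary bandwidths `sigmas`; it is not the paper's CNN-JAN adaptation layer, and MK-JMMD additionally aligns the joint distribution of features and classifier predictions rather than only the marginal feature distribution shown here.

```python
import numpy as np

def multi_kernel_mmd(Xs, Xt, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Biased empirical estimate of squared MMD between source features Xs
    (ns, d) and target features Xt (nt, d) under a sum of Gaussian kernels."""
    def gram(A, B):
        # pairwise squared Euclidean distances between rows of A and rows of B
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        # multi-kernel: sum of Gaussian kernels with several bandwidths
        return sum(np.exp(-d2 / (2.0 * s**2)) for s in sigmas)

    Kss, Ktt, Kst = gram(Xs, Xs), gram(Xt, Xt), gram(Xs, Xt)
    return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()

# toy check: a shifted target batch gives a clearly positive discrepancy
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 32))
tgt = rng.normal(0.5, 1.0, size=(64, 32))
print(multi_kernel_mmd(src, tgt))         # > 0
print(multi_kernel_mmd(src, src.copy()))  # ~ 0
```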

2.
To address the shortage of multi-species birdsong samples in multi-species birdsong recognition, this work attempts to train a multi-species birdsong recognition model with single-species birdsong samples and proposes a multi-species birdsong recognition method based on feature transfer. The method introduces a feature transfer learning algorithm that uses the maximum mean discrepancy (MMD) to measure the difference between the feature distributions of birdsong samples, maps the differently distributed audio features of single-species and multi-species birdsong into identically distributed latent audio features, and then builds the recognition model on these identically distributed features, so that a model trained on single-species samples can also be applied to multi-species birdsong recognition. On a naturally formed multi-species birdsong dataset, the algorithm achieved good recognition results on four multi-label evaluation metrics; comparative experiments on an artificially constructed multi-species birdsong dataset show that the feature-transfer-based recognition algorithm improves the correct recognition rate on a single species by up to 20% over the compared algorithms.

3.
To address the limited generalization ability of existing multi-label transfer learning methods that ignore the conditional distribution, a multi-label transfer learning method based on joint distribution alignment (Multi-label Transfer Learning via Joint Distribution Alignment, J-MLTL) is designed. The original features are decomposed to generate feature subspaces, weight coefficients of the conditional distribution are computed in the subspace, and both the marginal and conditional distribution differences of the cross-domain data are minimized. In addition, to avoid losing the internal structural information of the labels, a hypergraph is used to connect data points that share multiple identical labels, preserving the intra-domain geometric manifold structure from being distorted by knowledge outside the domain and further minimizing the distribution difference between domains. Experimental results show a significant improvement in classification accuracy over existing multi-label transfer learning algorithms.

4.
A Local Weighted Mean Based Domain Adaptation Learning Framework  (cited: 2; self-citations: 0; citations by others: 2)
皋军  黄丽莉  孙长银 《自动化学报》2013,39(7):1037-1052
The maximum mean discrepancy (MMD) has been successfully applied as a criterion that effectively measures the distribution difference between a source domain and a target domain. However, as a global measure, MMD largely reflects differences in the global distribution and global structure between regions. This paper therefore introduces the method and theory of the local weighted mean into MMD and proposes a projected maximum local weighted mean discrepancy (PMLWD) metric with locality-preserving ability, so that PMLWD can more effectively measure the distribution and structural differences between local partitions of the source and target domains. Combining this metric with traditional learning theory, a local weighted mean based domain adaptation learning framework (LDAF) is proposed, from which two domain adaptation learning methods are derived: LDAF_MLC and LDAF_SVM. Finally, tests on synthetic datasets, high-dimensional text datasets and face datasets show that LDAF outperforms other domain adaptation learning methods.
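
For reference, the global criterion that this work starts from is the standard empirical MMD between source samples {x_i^s} and target samples {x_j^t} mapped into an RKHS H by a kernel feature map φ (textbook form; PMLWD replaces the two global means with locally weighted means over local partitions, whose exact definition is given in the paper):

$$\mathrm{MMD}^2(\mathcal{D}_s,\mathcal{D}_t)=\Big\|\frac{1}{n_s}\sum_{i=1}^{n_s}\phi(x_i^s)-\frac{1}{n_t}\sum_{j=1}^{n_t}\phi(x_j^t)\Big\|_{\mathcal{H}}^2$$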

5.
The maximum mean discrepancy only reflects the overall distribution information and global structural information of the sample space, ignoring the differing contribution of individual samples to the global measure. To address this, a maximum distribution-weighted mean discrepancy (MDWMD) metric is proposed, in which a whitened cosine similarity measure assigns a distribution weight to every sample in the source and target domains, so that the distribution difference information of each sample is reflected in the global measure. Furthermore, building on MDWMD and combining the idea of joint distribution adaptation, a domain adaptation learning algorithm is proposed: joint distribution adaptation based on maximum distribution-weighted mean embedding, which simultaneously adapts the marginal and conditional distributions of the data in the source and target domains. Experimental results show that, compared with existing typical transfer learning and non-transfer learning algorithms, the proposed algorithm achieves higher classification accuracy on various types of cross-domain image datasets.
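
One schematic way to read the proposed metric is as a sample-weighted variant of the standard MMD, where each sample carries its own distribution weight; the weights w_i^s and w_j^t below stand in for the whitened-cosine-similarity weights defined in the paper, and the exact form there may differ:

$$\mathrm{MDWMD}^2(\mathcal{D}_s,\mathcal{D}_t)\approx\Big\|\sum_{i=1}^{n_s}w_i^s\,\phi(x_i^s)-\sum_{j=1}^{n_t}w_j^t\,\phi(x_j^t)\Big\|_{\mathcal{H}}^2,\qquad \sum_{i}w_i^s=\sum_{j}w_j^t=1.$$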

6.
Multiple Kernel Local Domain Adaptation Learning  (cited: 1; self-citations: 0; citations by others: 1)
陶剑文  王士同 《软件学报》2012,23(9):2297-2310
Domain adaptation (or cross-domain) learning aims to use labeled samples from a source (or auxiliary) domain to learn a robust classifier for the target domain; the key problem is how to maximally reduce the distribution difference between the domains. To effectively handle the shift of feature distributions between domains, a three-stage multiple kernel local learning-based domain adaptation (MKLDA) method is proposed: 1) based on the maximum mean discrepancy (MMD) criterion and the structural risk minimization model, simultaneously learn a reproducing multi-kernel Hilbert space and an initial support vector machine (SVM) to obtain an initial partition of the target-domain data; 2) in the learned multi-kernel Hilbert space, perform local reconstruction learning of the class information of the target-domain data; 3) finally, use the learned class information to train a robust target classifier on the target domain. Experimental results show that the proposed method achieves superior or comparable domain adaptation learning performance.

7.
To address the poor generalization that arises when only the marginal probability distributions are matched to reduce the discrepancy between the source and target domains, a feature- and instance-based transfer learning algorithm is proposed that jointly matches the marginal and conditional probability distributions. Kernel principal component analysis is used to find a new feature representation of the samples in a subspace, in which the maximum mean discrepancy is minimized to jointly match the marginal and conditional probability distributions and thereby reduce the difference between the source and target domains. At the same time, an L2,1-norm constraint is used to select relevant instances in the source domain for training, further improving the generalization of the model obtained by transfer learning. Experiments on character and object recognition datasets demonstrate the effectiveness of the proposed algorithm.
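
The joint matching of marginal and conditional distributions can be sketched as one discrepancy term over all samples plus one term per class, with target classes taken from pseudo-labels. The code below is a simplified linear-kernel illustration, not the paper's implementation, which additionally works in a KPCA subspace and adds the L2,1-norm instance-selection constraint; the function and variable names are illustrative only.

```python
import numpy as np

def joint_mmd_linear(Xs, ys, Xt, yt_pseudo, classes):
    """Marginal plus class-conditional mean discrepancy with a linear kernel.
    Xs, Xt: (n, d) feature arrays; ys, yt_pseudo: integer label arrays
    (target labels are pseudo-labels predicted by a source classifier)."""
    def mean_diff(A, B):
        # squared distance between the two empirical means (0 if a class is empty)
        if len(A) == 0 or len(B) == 0:
            return 0.0
        return float(np.sum((A.mean(axis=0) - B.mean(axis=0)) ** 2))

    marginal = mean_diff(Xs, Xt)
    conditional = sum(mean_diff(Xs[ys == c], Xt[yt_pseudo == c]) for c in classes)
    return marginal + conditional
```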

8.
During inter-domain distribution adaptation, important domain-specific information is easily lost, making it difficult to train an effective classifier on the source domain and degrading its generalization and labeling performance on the target domain. To address this, a transfer learning method that jointly adapts the inter-class and inter-domain distributions is proposed. A common projection matrix is learned to map the source and target domains onto a common subspace, and the maximum mean discrepancy is used to measure both the inter-class and inter-domain distribution distances. During optimization of the objective function, the inter-domain distribution difference is explicitly reduced while the difference between classes is enlarged, improving the performance of knowledge transfer between the source and target domains. Experiments on transfer learning datasets demonstrate the effectiveness of the method.

9.
In recent years, to cope with the shortage of matched training data in practical applications, researchers have developed the concept of transfer learning, which aims to improve learning in the target domain by extracting and transferring feature information from source-domain data. According to the different data types handled by transfer learning, this paper constructs two typical models: a single-class projection basis construction model and a supervised multi-class projection model. Since subspace projection can, to a certain extent, reflect the characteristics of the original sample space, this paper applies techniques from linear discriminant analysis together with the idea of the maximum mean discrepancy to construct solution algorithms for the above models, and extends the corresponding nonlinear kernel methods.

10.
Traditional machine learning faces a difficulty: when the training data and the test data no longer follow the same distribution, a classifier learned from the training set cannot accurately classify the test texts. To address this problem, following the principle of transfer learning, features in the intersection of the source and target domains are weighted according to an improved feature-distribution similarity; for features outside the intersection, semantic similarity and a newly proposed inverse text category index (TF-ICF) are introduced to weight the features within the source domain. A large amount of labeled source-domain data and a small amount of labeled target-domain data are thus fully exploited to obtain the required features so that a classifier can be built quickly. Experimental results on the text dataset 20Newsgroups and the non-text UCI datasets show that the feature transfer weighting algorithm based on distribution and the inverse text category index can transfer and weight features quickly while maintaining accuracy.
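
The abstract does not spell out the TF-ICF formula. The sketch below assumes it is defined analogously to TF-IDF, with the document-frequency term replaced by the number of categories in which a term appears; both this reading and the names (`tf_icf`, the +1 smoothing) are assumptions for illustration, not the paper's definition.

```python
import math
from collections import defaultdict

def tf_icf(term_counts_per_doc, doc_categories):
    """term_counts_per_doc: list of {term: count} dicts, one per document.
    doc_categories: list of category labels aligned with the documents.
    Returns {(doc_index, term): weight} under a TF-IDF-like scheme whose
    'inverse' part counts categories containing the term (assumed reading)."""
    categories = set(doc_categories)
    cats_containing = defaultdict(set)
    for counts, cat in zip(term_counts_per_doc, doc_categories):
        for term in counts:
            cats_containing[term].add(cat)

    weights = {}
    for i, counts in enumerate(term_counts_per_doc):
        total = sum(counts.values()) or 1
        for term, c in counts.items():
            tf = c / total                                          # term frequency in the document
            icf = math.log(len(categories) / (1 + len(cats_containing[term]))) + 1.0
            weights[(i, term)] = tf * icf                           # weight combining TF and inverse category frequency
    return weights
```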

11.
Domain adaptation learning (DAL) methods have shown promising results by utilizing labeled samples from the source (or auxiliary) domain(s) to learn a robust classifier for the target domain, which has few or even no labeled samples. However, several key issues remain to be addressed in state-of-the-art DAL methods, such as sufficient and effective distribution discrepancy metric learning, effective kernel space learning, and transfer learning from multiple source domains. Aiming at these issues, this paper proposes a unified kernel learning framework for domain adaptation learning and its extension based on the multiple kernel learning (MKL) schema, regularized by a new minimum distribution distance metric criterion that minimizes both the distribution mean discrepancy and the distribution scatter discrepancy between the source and target domains, into which many existing kernel methods (such as the support vector machine (SVM), v-SVM, and least-square SVM) can be readily incorporated. The framework, referred to as kernel learning for domain adaptation learning (KLDAL), simultaneously learns an optimal kernel space and a robust classifier by minimizing both the structural risk functional and the distribution discrepancy between different domains. Moreover, the KLDAL framework is extended to a multiple kernel learning framework referred to as MKLDAL. Under the KLDAL or MKLDAL framework, three effective formulations are also proposed: KLDAL-SVM or MKLDAL-SVM with respect to SVM, its variant μ-KLDALSVM or μ-MKLDALSVM with respect to v-SVM, and KLDAL-LSSVM or MKLDAL-LSSVM with respect to the least-square SVM. Comprehensive experiments on real-world datasets verify that the proposed frameworks outperform or are comparable with existing methods.
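
Schematically, the unified framework can be read as a structural-risk objective regularized by the proposed distribution distance. The formula below is a paraphrase for orientation only, not the paper's exact formulation: Ω is the kernel-norm regularizer, ℓ a classification loss such as the hinge loss, and D_mean and D_scatter stand for the two discrepancy terms, with C, λ and μ as trade-off parameters.

$$\min_{f\in\mathcal{H}_k,\;k\in\mathcal{K}}\;\Omega\!\left(\|f\|_{\mathcal{H}_k}\right)+C\sum_{i=1}^{n_s}\ell\!\left(y_i,\,f(x_i^s)\right)+\lambda\!\left[D_{\mathrm{mean}}(\mathcal{D}_s,\mathcal{D}_t;k)+\mu\,D_{\mathrm{scatter}}(\mathcal{D}_s,\mathcal{D}_t;k)\right]$$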

12.
In this paper, a novel semi-supervised feature extraction algorithm with knowledge transfer capability, semi-supervised transfer discriminant analysis (STDA), is proposed to address the inability of traditional algorithms to adapt to changes in the learning environment. By using both the pseudo-label information of target-domain samples and the true label information of source-domain samples in an iterative label refinement process, the between-class scatter is maximized while the within-class scatter is minimized, the original space structure is preserved via the Laplacian matrix, and the distribution difference is reduced using the maximum mean discrepancy. Moreover, semi-supervised transfer discriminant analysis based on a cross-domain mean constraint (STDA-CMC) is proposed, in which a cross-domain mean constraint term is incorporated into STDA so that the projected source and target samples lie closer together in the low-dimensional feature subspace, facilitating knowledge transfer between the domains. Experiments on several datasets show that the proposed algorithm is efficient and feasible.
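
A common way to combine the ingredients the abstract lists (between-class and within-class scatter, a graph Laplacian for structure preservation, and an MMD-style term for distribution matching) is a trace-ratio objective over the projection P. The form below is only a schematic of that combination, with α and β as assumed trade-off weights; it is not STDA's exact objective.

$$\max_{P}\;\frac{\operatorname{tr}\!\left(P^{\top}S_b\,P\right)}{\operatorname{tr}\!\left(P^{\top}\!\left(S_w+\alpha\,X L X^{\top}+\beta\,X M X^{\top}\right)P\right)}$$

Here S_b and S_w are scatter matrices built from source labels and target pseudo-labels, L is the graph Laplacian, and M is the MMD coefficient matrix over the combined source and target samples X.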

13.
程波  朱丙丽  熊江 《计算机应用》2016,36(8):2282-2286
To address the shortage of training samples in current machine-learning-based diagnosis of early Alzheimer's disease (AD), a multi-label transfer learning method based on multimodal feature data is proposed and applied to early AD diagnosis. The proposed framework consists of two main modules: a multi-label transfer learning feature selection module and a multimodal multi-label classification-regression learner. First, a sparse multi-label learning model effectively combines the classification and regression learning tasks; this model is then extended to training sets from multiple learning domains, yielding a multi-label transfer learning feature selection model. Next, for multimodal feature data with heterogeneous feature spaces, multi-kernel learning is used to combine the multimodal feature kernel matrices. Finally, to build a learning model that can be used for both classification and regression, a multi-label classification-regression learner is proposed, giving the multimodal multi-label classification-regression learner. In experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, the best average accuracy for classifying mild cognitive impairment (MCI) was 79.1%, and the maximum average correlation coefficient for predicting neuropsychological scale test scores was 0.727. The results show that the proposed multimodal multi-label transfer learning method can effectively exploit training data from related learning domains and thus improve the performance of early Alzheimer's disease diagnosis.
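
The multi-kernel combination step can be summarized by the standard MKL convex combination of per-modality kernel matrices; this is the textbook form, and the exact coefficient constraints used in the paper may differ:

$$K=\sum_{m=1}^{M}\beta_m K_m,\qquad \beta_m\ge 0,\quad \sum_{m=1}^{M}\beta_m=1,$$

where K_m is the kernel matrix computed on the features of the m-th modality and the coefficients β_m are learned together with the classifier.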

14.
Domain adaptation learning (DAL) is a novel and effective technique for addressing pattern classification problems where prior information for training is unavailable or insufficient. Its effectiveness depends on the discrepancy between the two distributions that respectively generate the training data for the source domain and the testing data for the target domain. However, DAL may not work well when only the distribution mean discrepancy between the source and target domains is considered and minimized. In this paper, we first construct a generalized projected maximum distribution discrepancy (GPMDD) metric for DAL on reproducing kernel Hilbert space (RKHS) based domain distributions, which simultaneously considers both the projected maximum distribution mean discrepancy and the projected maximum distribution scatter discrepancy between the source and target domains. Based on both the structural risk and the GPMDD minimization principle, we then propose a novel domain adaptation kernelized support vector machine (DAKSVM) with respect to the classical SVM, and its two extensions, LS-DAKSVM and μ-DAKSVM, with respect to the least-square SVM and the v-SVM, respectively. Moreover, our theoretical analysis justifies that the proposed GPMDD metric can effectively measure the consistency not only between the RKHS-embedded domain distributions but also between the scatter information of the source and target domains. The proposed methods are therefore distinctive in that the more consistency between the scatter information of the source and target domains is achieved by tuning the kernel bandwidth, the better the convergence of the GPMDD metric minimization becomes, which improves the scalability and generalization capability of the proposed methods for DAL. Experimental results on artificial and real-world problems indicate that the performance of the proposed methods is superior to, or at least comparable with, existing benchmarking methods.
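
Schematically, a mean-plus-scatter discrepancy of this kind combines a distance between the RKHS mean embeddings and a distance between the covariance (scatter) operators of the two domains; the expression below is a paraphrase for orientation, with γ as an assumed trade-off weight, and not the precise projected/maximized GPMDD definition given in the paper.

$$D(\mathcal{D}_s,\mathcal{D}_t)=\left\|\mu_s^{\phi}-\mu_t^{\phi}\right\|_{\mathcal{H}}^{2}+\gamma\,\left\|\Sigma_s^{\phi}-\Sigma_t^{\phi}\right\|_{\mathrm{HS}}^{2}$$

Here μ^φ and Σ^φ denote the RKHS mean embedding and scatter operator of each domain, and the second term uses the Hilbert-Schmidt norm.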

15.
Bao Jiachao, Wang Yibin, Cheng Yusheng 《Applied Intelligence》2022,52(6):6093-6105

As an effective method for mining latent information between labels, label correlation is widely adopted to model multi-label learning algorithms. Most existing multi-label algorithms ignore the fact that the correlation between labels may be asymmetric, although asymmetric correlation commonly exists in real-world scenarios. To tackle this problem, a multi-label learning algorithm with asymmetric label correlation (ACML, Asymmetry Label Correlation for Multi-Label Learning) is proposed in this paper. First, the adjacency between labels is measured to construct the label adjacency matrix. Then, cosine similarity is used to construct the label correlation matrix. Finally, the label correlation matrix is constrained by the label adjacency matrix, so that asymmetric label correlation is modeled for multi-label learning. Experiments on multiple multi-label benchmark datasets show that the ACML algorithm has clear advantages over the comparison algorithms, and the results of statistical hypothesis testing further illustrate the effectiveness of the proposed algorithm.
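
A minimal sketch of the three steps (adjacency, cosine correlation, constraint) follows. It assumes adjacency is measured as a conditional co-occurrence rate, roughly P(label j | label i), which is what makes the final matrix asymmetric, and it uses an elementwise product as the "constraint"; both choices are assumptions for illustration, and the paper's exact adjacency measure and constraint may differ.

```python
import numpy as np

def asymmetric_label_correlation(Y):
    """Y: (n_samples, n_labels) binary indicator matrix.
    Returns an asymmetric label-correlation matrix built from
    cosine similarity constrained by a co-occurrence adjacency."""
    co = Y.T @ Y                                   # label co-occurrence counts
    counts = np.diag(co).astype(float) + 1e-12
    adjacency = co / counts[:, None]               # A[i, j] ~ P(label j | label i), asymmetric (assumed measure)
    norms = np.sqrt(np.diag(co)) + 1e-12
    cosine = co / np.outer(norms, norms)           # symmetric cosine similarity of label columns
    return cosine * adjacency                      # constrained, asymmetric correlation (assumed constraint)

# toy usage on a tiny label matrix
Y = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1]])
print(asymmetric_label_correlation(Y))
```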


16.
In complex working environments, bearings, as important machine components, can have faults at several positions simultaneously. Consequently, multi-label learning, which fully considers the correlation between the different faulted positions of bearings, has become a popular learning paradigm. Deep reinforcement learning (DRL), which combines the perception ability of deep learning with the decision-making ability of reinforcement learning, can be adapted to compound fault diagnosis and has a strong ability to extract fault features from raw data. However, DRL is difficult to converge and easily falls into unstable training. Therefore, this paper integrates the feature extraction ability of DRL with the knowledge transfer ability of transfer learning (TL) and proposes multi-label transfer reinforcement learning (ML-TRL). Specifically, the proposed method uses improved trust region policy optimization (TRPO) as the basic DRL framework and pre-trains the fixed convolutional networks of ML-TRL using a multi-label convolutional neural network method. In the compound fault experiments, the final results demonstrate that the proposed method achieves higher accuracy than other multi-label learning methods. Hence, the proposed method is a remarkable alternative for recognizing compound faults of bearings.

17.
The application of transfer learning to effectively identify rolling bearing faults has been attracting much attention. Most current studies are based on a single source domain or on multiple source domains constructed from different working conditions of the same machine. In practical scenarios, however, it is common to obtain multiple source domains from different machines, which raises new challenges for how to use these source domains to complete fault diagnosis. To solve this issue, a conditional distribution-guided adversarial transfer learning network with multi-source domains (CDGATLN) is developed for fault diagnosis of bearings installed on different machines. First, the knowledge of multi-source domains from different machines is transferred to the single target domain by decreasing the data distribution discrepancy between each source domain and the target domain. Then, a conditional distribution-guided alignment strategy is introduced to decrease the conditional distribution discrepancy and to calculate the importance of each source domain based on that discrepancy, so as to promote the knowledge transfer of each source domain. Finally, a monotone importance specification mechanism is constructed to constrain each importance value so that source domains with low importance are not discarded, enabling the knowledge of every source domain to participate in the construction of the model. Extensive experimental results verify the effectiveness and superiority of CDGATLN.
