Similar Documents
20 similar documents found.
1.
Multimodal machine learning (MML) aims to understand the world from multiple related modalities. It has attracted much attention as multimodal data has become increasingly available in real-world applications. MML has been shown to outperform single-modal machine learning, since multiple modalities contain more information and can complement each other. However, fusing the modalities is a key challenge in MML. Unlike previous work, we further consider side-information, which reflects the situation and influences the fusion of the modalities. We recover a multimodal label distribution (MLD) by leveraging the side-information, representing the degree to which each modality contributes to describing the instance. Accordingly, a novel framework named multimodal label distribution learning (MLDL) is proposed to recover the MLD and fuse the modalities under its guidance, learning an in-depth joint feature representation. Moreover, two versions of MLDL are proposed to deal with sequential data. Experiments on multimodal sentiment analysis and disease prediction show that the proposed approaches perform favorably against state-of-the-art methods.
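A minimal sketch of the fusion idea in this abstract: each modality's features are weighted by a distribution, recovered from side-information, over how much each modality contributes to describing the instance. The softmax-based recovery network and all layer names are illustrative assumptions, not the authors' exact MLDL formulation.

```python
import torch
import torch.nn as nn

class WeightedModalityFusion(nn.Module):
    def __init__(self, dims, side_dim, hidden=64):
        super().__init__()
        # Project each modality into a shared space.
        self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        # Recover a distribution over modalities from side-information.
        self.mld = nn.Sequential(nn.Linear(side_dim, len(dims)), nn.Softmax(dim=-1))

    def forward(self, feats, side):
        # feats: list of (batch, d_m) tensors; side: (batch, side_dim)
        z = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)  # (B, M, H)
        w = self.mld(side).unsqueeze(-1)                                  # (B, M, 1)
        return (w * z).sum(dim=1)                                         # (B, H)

fused = WeightedModalityFusion([300, 512], side_dim=10)(
    [torch.randn(4, 300), torch.randn(4, 512)], torch.randn(4, 10))
```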

2.
A modality is a channel through which humans receive information, including hearing, vision, smell, and touch. Multimodal learning exploits the complementarity between modalities and removes inter-modality redundancy in order to learn better feature representations. Its goal is to build models that can process and relate information from multiple modalities; it is a vibrant multidisciplinary field of increasing importance and great potential. The most active research directions currently involve multimodal learning across images, video, audio, and text. This paper focuses on practical applications of multimodal learning, such as audio-visual speech recognition, image-text sentiment analysis, and collaborative annotation, as well as core problems, namely matching and classification and aligned representation learning, which are explained in detail. Commonly used multimodal datasets are introduced, and future trends in multimodal learning are discussed.

3.
Heterogeneous information networks can model many complex real-world application scenarios, and their representation learning has attracted wide attention. Most existing heterogeneous network representation learning methods capture structural and semantic information via meta-paths and have achieved good results in downstream network analysis tasks. However, such methods ignore the internal nodes of meta-paths and the varying importance of different meta-path instances, and they capture only local node information. We therefore propose a heterogeneous network representation learning method that fuses mutual information with multiple meta-paths. First, an intra-meta-path encoding scheme called relational rotation encoding captures the structural and semantic information of the network from neighboring nodes and meta-path context nodes, with an attention mechanism modeling the importance of each meta-path instance. Then, an unsupervised method based on mutual information maximization fuses the multiple meta-paths, using mutual information to capture global information and its relationship with local information. Finally, experiments on two real-world datasets show that the proposed method outperforms current mainstream algorithms on node classification and clustering, and is competitive even with some semi-supervised algorithms.
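As a rough illustration of the attention step described above, the sketch below scores the embeddings of a node's meta-path instances and aggregates them by their attention weights; the linear scoring network is an assumption, not the paper's relational rotation encoder.

```python
import torch
import torch.nn as nn

class MetaPathInstanceAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, inst):  # inst: (num_instances, dim) for one node
        a = torch.softmax(self.score(inst), dim=0)  # importance of each instance
        return (a * inst).sum(dim=0)                # attention-weighted node vector

# node_repr = MetaPathInstanceAttention(64)(torch.randn(5, 64))
```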

4.
With the development of multimedia technology, the variety and volume of available media data have grown substantially. Inspired by human perception, fusing multiple types of media data has advanced artificial intelligence research in computer vision, with wide applications in remote sensing image interpretation, biomedicine, and depth estimation. Although multimodal data have clear advantages in describing the characteristics of objects, significant challenges remain: 1) limited by differences among imaging devices and sensors, large-scale, high-quality multimodal datasets are hard to collect; 2) multimodal data must be matched into pairs for research, so a missing modality reduces the usable data; 3) processing and annotating image and video data is time-consuming and labor-intensive. This paper surveys multimodal learning under data-limited conditions, categorizing methods in computer vision along dimensions such as sample size, annotation, and sample quality into five directions: few-shot learning, lack of strong supervision, active learning, data denoising, and data augmentation. The sample characteristics and latest model developments of each category are described in detail. The datasets used by data-limited multimodal learning methods and their application areas (including human pose estimation and person re-identification) are introduced, and the strengths, weaknesses, and future directions of existing algorithms are compared and analyzed, which is of positive significance for the development of this field.

5.

A great many approaches have been developed for cross-modal retrieval, among which subspace learning based ones dominate the landscape. Depending on whether semantic label information is used, subspace learning approaches fall into two paradigms, unsupervised and supervised. For multi-label cross-modal retrieval, however, supervised approaches simply exploit multi-label information to obtain a discriminative subspace, without considering the correlations between the multiple labels shared by the modalities, which often leads to unsatisfactory retrieval performance. To address this issue, in this paper we propose a general framework that jointly incorporates semantic correlations into subspace learning for multi-label cross-modal retrieval. By introducing an HSIC-based regularization term, not only is the correlation information among multiple labels leveraged, but the consistency with each modality's similarity structure is also well preserved. Besides, based on the semantic-consistency projection, the semantic gap between the low-level feature space of each modality and the shared high-level semantic space is bridged by a mid-level consistent space, in which multi-label cross-modal retrieval can be performed effectively and efficiently. To solve the optimization problem, an effective iterative algorithm is designed, along with a convergence analysis, both theoretical and experimental. Experimental results on real-world datasets show the superiority of the proposed method over several existing cross-modal subspace learning methods.
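The HSIC-based regularizer mentioned above has a standard biased empirical estimator, HSIC(K, L) = tr(KHLH)/(n-1)^2 with centering matrix H = I - (1/n)11^T. Below is a self-contained sketch; the linear kernels are an assumption, not necessarily the paper's choice.

```python
import numpy as np

def hsic(X, Y):
    """Biased empirical HSIC between two views sampled row-wise."""
    n = X.shape[0]
    K = X @ X.T                          # linear kernel over one view
    L = Y @ Y.T                          # linear kernel over the other (e.g., labels)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

print(hsic(np.random.randn(32, 50), np.random.randn(32, 20)))
```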


6.
This paper describes a novel organizational learning model for multiple adaptive robots. In this model, robots acquire their own appropriate functions through local interactions among their neighbors, and get out of deadlock situations without explicit control mechanisms or communication methods. Robots also complete given tasks by forming an organizational structure, and improve their organizational performance. We focus on the emergent processes of collective behaviors in multiple robots, and discuss how to control these behaviors with only local evaluation functions, rather than with a centralized control system. Intensive simulations of truss construction by multiple robots gave the following experimental results: (1) robots in our model acquire their own appropriate functions and get out of deadlock situations without explicit control mechanisms or communication methods; (2) robots form an organizational structure which completes given tasks in fewer steps than are needed with a centralized control mechanism. This work was presented, in part, at the Second International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1997.

7.
With the arrival of the intelligent and big-data eras, complex heterogeneous data keep emerging and have become the foundation of data-driven artificial intelligence methods and machine learning models. Since the representation of complex heterogeneous data directly affects the performance of subsequent learning models, effectively representing such data has become an important research area of machine learning. This paper first introduces the main types of data representation and the challenges facing existing representation methods. Data are then divided into single-type and composite-type. For single-type data, the development and representative algorithms of representation learning are reviewed for four typical kinds: discrete data, network data, text data, and image data. Four kinds of complex data composed of multiple single types or sources are then discussed in detail: structured data mixing discrete and continuous features, attributed network data combining attribute data with complex networks, cross-domain data from different domains, and multimodal data combining several data types; for each, the state of representation learning and the latest representation learning models are described. Finally, trends in representation learning for complex heterogeneous data are discussed.

8.
To address the problem that unsupervised cross-modal retrieval fails to fully exploit the semantic correlations within each individual modality, an unsupervised cross-modal hashing retrieval method based on graph convolutional networks is proposed. Features of the two modalities are obtained by image and text encoders, respectively, and fed into a graph convolutional network to mine the internal semantic information of each modality. After the results are binarized by a hash coding layer, the loss is computed against a deep inter-modality semantic similarity matrix, and the generated binary codes are iteratively reconstructed and optimized until a robust hash representation of each sample is produced. Experimental results show that, compared with classical shallow methods and deep learning methods, the proposed method clearly improves cross-modal retrieval accuracy on multiple datasets, demonstrating that graph convolutional networks can further mine intra-modality semantic information and that the proposed model has higher accuracy and robustness.
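A hedged sketch of the pipeline this abstract describes: encoder features are propagated through a graph convolution to mine intra-modality semantics, then relaxed and binarized by a hash layer. The symmetric adjacency normalization and the tanh relaxation of sign() are common choices assumed here, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn

class GCNHash(nn.Module):
    def __init__(self, in_dim, code_len):
        super().__init__()
        self.gcn = nn.Linear(in_dim, code_len)

    def forward(self, X, A):
        # Symmetrically normalized adjacency: D^{-1/2} (A + I) D^{-1/2}
        A_hat = A + torch.eye(A.size(0))
        d = A_hat.sum(dim=1).pow(-0.5)
        A_norm = d[:, None] * A_hat * d[None, :]
        H = torch.tanh(self.gcn(A_norm @ X))  # relaxed codes in (-1, 1)
        return torch.sign(H), H               # binary codes + relaxation for the loss
```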

9.
In recent years we have seen lots of strong work in visual recognition, dialogue interpretation and multi-modal learning aimed at providing the building blocks that enable intelligent robots to interact with humans in a meaningful way, and even to evolve continuously during this process. Building systems that unify those components under a common architecture has turned out to be challenging, as each approach comes with its own set of assumptions, restrictions, and implications. For example, the impact of recent progress in visual category recognition on interactive systems has been limited. The reasons for this are diverse. We identify and address two major challenges in order to integrate modern techniques for visual categorization into an interactive learning system: reducing the number of required labelled training examples and dealing with potentially erroneous input. Today's object categorization methods use either supervised or unsupervised training. While supervised methods tend to produce more accurate results, unsupervised methods are highly attractive due to their potential to use far more, unlabeled, training data. We propose a novel method that uses unsupervised training to obtain visual groupings of objects and a cross-modal learning scheme to overcome the inherent limitations of purely unsupervised training. The method uses a unified, scale-invariant object representation that handles labeled as well as unlabeled information in a coherent way. First experiments demonstrate the ability of the system to learn object category models from many unlabeled observations and a few dialogue interactions that can be ambiguous or even erroneous.

10.
We present a new approach to online incremental word acquisition and grammar learning by humanoid robots. Using no data set provided in advance, the proposed system grounds language in physical context, as mediated by its perceptual capacities. Learning is carried out through show-and-tell procedures in interaction with a human partner, and the procedure is open-ended for new words and multiword utterances. These facilities are supported by a self-organizing incremental neural network that performs online unsupervised classification and topology learning. Equipped with mental imagery, the system also learns, through both top-down and bottom-up processes, the syntactic structures contained in utterances, thereby performing simple grammar learning. Under this multimodal scheme, the robot can describe a given physical context (both static and dynamic) online through natural-language expressions, and can perform actions through verbal interaction with its human partner.

11.
Capturing the underlying semantic relationships of sentences is helpful for machine translation. Variational neural machine translation approaches provide an effective way to model the uncertain underlying semantics of languages by introducing latent variables, and multitask learning is applied in multimodal machine translation to integrate multimodal data. However, these approaches usually lack a strong interpretation of how out-of-text information is utilized in machine translation tasks. In this paper, we propose a novel architecture-free multimodal translation model, called variational multimodal machine translation (VMMT), under the variational framework, which models the uncertainty in language caused by ambiguity by utilizing visual and textual information. In addition, the proposed model eliminates the discrepancy between training and prediction in existing variational translation models by constructing encoders that rely only on source data. More importantly, the model is designed for multitask learning, in which a shared semantic representation for the different modes is learned and the gap among the semantic representations of the various modes is reduced by incorporating additional constraints. Moreover, information bottleneck theory is adopted in our variational encoder–decoder model, which helps the encoder filter out redundancy and the decoder concentrate on useful information. Experiments on multimodal machine translation demonstrate that the proposed model is competitive.
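As a schematic of the variational-plus-information-bottleneck objective this abstract suggests, the sketch below combines a translation likelihood with a weighted KL term on a Gaussian latent; the parameterization is a conventional assumption rather than VMMT's actual architecture.

```python
import torch

def vib_loss(logits, target, mu, logvar, beta=1e-3, ignore_index=0):
    # logits: (B, T, V) decoder outputs; target: (B, T) token ids;
    # mu/logvar: (B, Z) Gaussian posterior parameters from the source-only encoder.
    nll = torch.nn.functional.cross_entropy(
        logits.transpose(1, 2), target, ignore_index=ignore_index)
    # KL(q(z|x) || N(0, I)) acts as the information bottleneck on the latent.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return nll + beta * kl  # beta trades translation fit against compression
```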

12.
Owing to its low storage cost, efficient retrieval, and low annotation cost, unsupervised hashing has attracted increasing attention from the academic community and has been widely applied to large-scale database retrieval. Most previous unsupervised methods rely on the semantic structure of the dataset itself as guiding information, requiring that the semantic structure of the data be preserved in the hash space; how to precisely represent the semantic structure and the hash codes thus becomes the key to the success of unsupervised hashing. This paper proposes a new self-supervised strategy for unsupervised hash code learning. Specifically, contrastive learning is first applied on the target dataset so that an accurate semantic similarity structure can be constructed. A new loss function is then proposed so that the local semantic similarity structure of the data is preserved in the hash space while the discriminability of the hash codes is improved. The proposed network framework is end-to-end trainable. Finally, the algorithm is evaluated on two large-scale image retrieval datasets, and extensive experiments verify its effectiveness.

13.
Recently, unsupervised hashing has attracted much attention in the machine learning and information retrieval communities, due to its low storage and high search efficiency. Most existing unsupervised hashing methods rely on the local semantic structure of the data as guiding information, requiring that this semantic structure be preserved in the Hamming space. Thus, how to precisely represent the local structure of the data and the hash codes becomes the key to success. This study proposes a novel hashing method based on self-supervised learning. Specifically, contrastive learning is used to acquire a compact and accurate feature representation for each sample, from which a semantic structure matrix representing the similarity between samples is constructed. Meanwhile, a new loss function, in the spirit of the recently proposed instance discrimination method, is proposed to preserve the semantic information and improve the discriminative ability in the Hamming space. The proposed framework is end-to-end trainable. Extensive experiments on two large-scale image retrieval datasets show that the proposed method significantly outperforms current state-of-the-art methods.
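A minimal sketch of the contrastive (instance-discrimination) step the method builds on: two augmented views of each sample are pulled together and pushed away from the rest of the batch. The NT-Xent form and temperature value are standard choices, assumed here.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.2):
    # z1, z2: (B, D) embeddings of two augmented views of the same batch
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D)
    sim = z @ z.T / tau                                   # pairwise similarities
    sim.fill_diagonal_(float('-inf'))                     # exclude self-pairs
    B = z1.size(0)
    # Row i's positive is the other view of the same sample.
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)
```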

14.
The growth of social networks provides abundant multimodal data for sentiment analysis research. Classifying sentiment from multimodal content can exploit the correlations between modalities and avoid the incomplete picture of overall sentiment given by a single modality. Simple shared-representation learning cannot fully mine the complementary features between modalities, so a Multimodal Bidirectional Attention Hybrid (MBAH) model is proposed. On top of image and text features extracted by deep models, a bidirectional attention mechanism introduces each modality into the other: the low-level features of one modality attend to the semantic features of the other to learn the cross-modal correlations. The high-level features of the two modalities are then concatenated into a cross-modal shared representation and fed to a multilayer perceptron for classification. In addition, MBAH applies late fusion, combining image-only and text-only self-attention models and searching for optimal decision weights to form the final decision. Experimental results show that MBAH clearly outperforms other methods on sentiment classification.
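An illustrative sketch of bidirectional attention fusion as described above: text attends to image regions and image attends to text tokens, and the pooled attended features are concatenated into the shared representation. The module names and the use of nn.MultiheadAttention are assumptions, not the MBAH authors' exact layers.

```python
import torch
import torch.nn as nn

class BidirectionalAttentionFusion(nn.Module):
    def __init__(self, dim):  # dim must be divisible by num_heads
        super().__init__()
        self.txt2img = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.img2txt = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, txt, img):   # txt: (B, T, D) tokens, img: (B, R, D) regions
        t, _ = self.txt2img(txt, img, img)   # text queries image regions
        i, _ = self.img2txt(img, txt, txt)   # image queries text tokens
        # Pool and concatenate into the cross-modal shared representation.
        return torch.cat([t.mean(dim=1), i.mean(dim=1)], dim=-1)  # (B, 2D)
```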

15.
Network representation learning is an important research topic whose goal is to represent a high-dimensional attributed network as low-dimensional dense vectors that serve as effective features for downstream tasks. The recently proposed attributed-network embedding model SNE (Social Network Embedding) learns node representations from both network structure and attribute information, but it is an unsupervised model and cannot exploit easily obtained prior information to improve the quality of the learned representations. Based on this observation, we propose a semi-supervised attributed-network representation learning method, SSNE (Semi-supervised Social Network Embedding), which takes the attributed network and a small number of node priors as input to a feedforward neural network; after several nonlinear hidden transformations, the output layer learns optimized node representations by preserving the network's link structure and the node priors. Compared with mainstream methods on four real attributed networks and two synthetic ones, the representations learned by SSNE perform well on clustering and classification tasks.
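A rough sketch of the semi-supervised objective described above, under assumed names: a feedforward encoder maps attribute vectors to embeddings, a structure term matches embedding inner products to the adjacency matrix, and a cross-entropy term on the few labelled nodes injects the prior information.

```python
import torch
import torch.nn as nn

class SSNESketch(nn.Module):
    def __init__(self, attr_dim, emb_dim, n_classes, hidden=128):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(attr_dim, hidden), nn.ReLU(), nn.Linear(hidden, emb_dim))
        self.classify = nn.Linear(emb_dim, n_classes)

    def loss(self, X, A, labels, labelled_idx):
        Z = self.encode(X)  # (N, emb_dim) node embeddings
        # Structure term: inner products should reproduce the (float) adjacency.
        structure = nn.functional.binary_cross_entropy_with_logits(Z @ Z.T, A)
        # Prior term: cross-entropy on the small labelled subset only.
        prior = nn.functional.cross_entropy(
            self.classify(Z[labelled_idx]), labels[labelled_idx])
        return structure + prior
```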

16.
The comprehensive utilization of incomplete multimodal data is a difficult problem with strong practical value. Most previous multimodal learning algorithms require massive training data with complete modalities and annotated labels, which greatly limits their practicality. Although some existing algorithms can be used for the data imputation task, they still have two disadvantages: (1) they cannot accurately control the semantics of the imputed modalities; and (2) they must establish multiple independent converters between every pair of modalities when extended to multimodal cases. To overcome these limitations, we propose a novel doubly semi-supervised multimodal learning (DSML) framework. Specifically, DSML uses a modality-shared latent space and multiple modality-specific generators to associate the modalities. The shared latent space is divided into two independent parts, semantic labels and semantic-free styles, which allows the semantics of generated samples to be controlled easily. In addition, each modality has its own separate encoder and classifier to infer the corresponding semantic and semantic-free latent variables. The DSML framework can be trained adversarially using our specially designed softmax-based discriminators. Extensive experimental results show that DSML outperforms the baselines on three tasks: semi-supervised classification, missing-modality imputation, and cross-modality retrieval.
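A compact sketch of the latent split described above: each modality's encoder infers a semantic part (class logits) and a semantic-free style part, and a modality-specific generator decodes both back into that modality's feature space. All layer sizes are illustrative, and the adversarial discriminators are omitted.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    def __init__(self, in_dim, n_classes, style_dim, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.label = nn.Linear(hidden, n_classes)   # shared semantic part
        self.style = nn.Linear(hidden, style_dim)   # semantic-free style part

    def forward(self, x):
        h = self.body(x)
        return self.label(h), self.style(h)

class ModalityGenerator(nn.Module):
    def __init__(self, n_classes, style_dim, out_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_classes + style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))

    def forward(self, label_probs, style):
        # Decoding from controlled semantics + style back into the modality.
        return self.net(torch.cat([label_probs, style], dim=-1))
```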

17.
Although deep learning has been widely applied in many fields thanks to its powerful nonlinear representation ability, the structural and semantic gaps between multi-source heterogeneous modalities severely hinder the application of subsequent deep learning models. Many researchers have proposed a large number of representation learning methods to explore the correlation and complementarity between modalities and to improve the prediction and generalization performance of deep models. However, multimodal representation learning is still in its infancy and many open problems remain: there is no unified understanding, and the architecture and evaluation metrics of multimodal representation learning research are not yet fully established. According to the feature structure, semantic information, and representational capacity of different modalities, this paper studies and analyzes the progress of deep multimodal representation learning from the two perspectives of representation fusion and representation alignment, systematically summarizing and classifying existing work. It dissects the basic structure, application scenarios, and key issues of representative frameworks and models, analyzes the theoretical foundations and latest developments of deep multimodal representation learning, and points out the current challenges and future trends of the field, so as to further advance the development and application of deep multimodal representation learning.

18.
The multimodal perception of intelligent robots is essential for achieving collision-free and efficient navigation. Autonomous navigation is enormously challenging when perception relies on only vision or only LiDAR sensor data, due to the lack of complementary information from different sensors. This paper proposes a simple yet efficient deep reinforcement learning (DRL) approach with sparse rewards and hindsight experience replay (HER) to achieve multimodal navigation. Taking the depth images and pseudo-LiDAR data generated by an RGB-D camera as input, a multimodal fusion scheme enhances perception of the surrounding environment compared with a single sensor. Sparse rewards are used to specify the task directly, avoiding the misleading guidance that dense rewards can give the agent; the HER technique is then introduced to address the resulting sparse-reward navigation problem and accelerate learning of the optimal policy. The results show that the proposed model achieves state-of-the-art performance in terms of success, crash, and timeout rates, as well as generalization capability.
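A schematic of HER relabeling under sparse rewards, as used above: after an episode, transitions are re-stored with later achieved states substituted as goals, turning failed episodes into useful supervision. The "future" sampling strategy and the reward_fn interface are assumptions for illustration.

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """episode: list of (state, action, next_state, goal) tuples."""
    replay = []
    for t, (s, a, s_next, g) in enumerate(episode):
        # Store the original transition with its (usually sparse) reward.
        replay.append((s, a, s_next, g, reward_fn(s_next, g)))
        # Relabel with goals actually achieved later in the same episode.
        future = episode[t:]
        for _ in range(min(k, len(future))):
            new_g = random.choice(future)[2]  # an achieved state as hindsight goal
            replay.append((s, a, s_next, new_g, reward_fn(s_next, new_g)))
    return replay
```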

19.
Direct word discovery from audio speech signals is a very difficult and challenging problem for a developmental robot. Human infants are able to discover words directly from speech signals, and to understand this developmental capability through a constructive approach, it is very important to build a machine learning system that can acquire knowledge about words and phonemes, i.e. a language model and an acoustic model, autonomously in an unsupervised manner. To achieve this, this paper proposes the nonparametric Bayesian double articulation analyzer (NPB-DAA) combined with a deep sparse autoencoder (DSAE). The NPB-DAA was proposed to achieve fully unsupervised direct word discovery from speech signals; however, its performance, while better than that of pre-existing unsupervised learning methods, remained unsatisfactory. In this paper, we integrate the NPB-DAA with the DSAE, a neural network model that can be trained in an unsupervised manner, and demonstrate its performance through an experiment on direct word discovery from auditory speech signals. The experiment shows that the combined method outperforms pre-existing unsupervised learning methods and achieves state-of-the-art performance. It is also shown that the proposed method outperforms several standard speech-recognizer-based methods that use true word dictionaries.

20.
韩滕跃, 牛少彰, 张文. 《计算机应用》, 2022, 42(6): 1683-1688
To improve the accuracy of sequential recommendation by using the multimodal information of items, a multimodal sequential recommendation algorithm based on contrastive learning is proposed. The algorithm first augments the data by altering item colors and cropping the central region of item images, and contrasts the augmented data against the originals to extract visual modality information such as color and shape. The item's textual modality is then embedded into a low-dimensional space, yielding a complete multimodal representation of each item. Finally, given the temporal nature of item sequences, a recurrent neural network (RNN) models the sequential interactions of the multimodal information to obtain the user preference representation used for recommendation. Experiments on two public datasets show that, compared with the existing sequential recommendation algorithm LESSR, the proposed algorithm improves ranking performance, and its recommendation performance remains essentially unchanged once the feature dimensionality reaches 50.
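A small sketch of the augmentation pair described above, using torchvision: one view perturbs the item image's color and the other crops its central region, giving a positive pair for the contrastive objective. The specific parameter values are assumptions.

```python
from torchvision import transforms

color_view = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
center_view = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),  # keep the central region of the product image
    transforms.ToTensor(),
])
# views = (color_view(img), center_view(img))  # positive pair for the contrastive loss
```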
