Similar Articles
20 similar articles found
1.
Hash codes save storage space and speed up retrieval, and have therefore attracted wide attention. We propose a pairwise similarity transferring hash (PSTH) method for unsupervised cross-modal retrieval. For each modality, PSTH transfers reliable intra-modal pairwise similarities into the Hamming space so that the hash codes inherit the pairwise similarities of the original space, thereby learning hash codes for each modality's data. In addition, PSTH reconstructs similarity values rather than similarity relations, which allows training to proceed in mini-batches. Meanwhile, to narrow the semantic gap between modalities, PSTH maximizes the inter-modal pairwise similarity. Extensive comparative experiments on three public datasets show that PSTH achieves state-of-the-art results.
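The abstract above transfers intra-modal pairwise similarity into the Hamming space by reconstructing similarity values from code inner products. A minimal sketch of such a similarity-reconstruction objective (the cosine similarity target, the exact loss form, and all names are assumptions, not the paper's implementation):

```python
import numpy as np

def cosine_similarity(X):
    # Pairwise cosine similarity between rows of a feature matrix.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def similarity_transfer_loss(B, S):
    # B: (n, k) hash codes in {-1, +1}; S: (n, n) target similarities in [-1, 1].
    # Reconstructing similarity *values* (not just similarity relations) lets
    # the loss be evaluated on mini-batches of pairs, as the abstract notes.
    k = B.shape[1]
    return float(np.mean(((B @ B.T) / k - S) ** 2))

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))           # stand-in intra-modal features
B = np.sign(rng.standard_normal((8, 32)))  # stand-in binarized codes
print(similarity_transfer_loss(B, cosine_similarity(X)))
```

In practice the codes come from a learned network and the loss is minimized by gradient descent on a relaxed version of `B`; this sketch only shows the objective being fit.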

2.
To address the problem that unsupervised cross-modal retrieval does not fully exploit the semantic correlations within each individual modality, an unsupervised cross-modal hashing retrieval method based on graph convolutional networks is proposed. Image and text encoders first extract features for the two modalities, which are fed into a graph convolutional network to mine the internal semantic information of each modality. After the results are binarized by a hash coding layer, they are compared against a deep inter-modal semantic-correlation similarity matrix to compute the loss, and the generated binary codes are iteratively reconstructed and optimized until robust hash representations of the samples are obtained. Experimental results show that, compared with classical shallow methods and deep learning methods, the proposed method clearly improves cross-modal retrieval accuracy on multiple datasets, demonstrating that graph convolutional networks can further mine intra-modal semantic information and that the proposed model is more accurate and robust.

3.
With the continuing development of deep learning, cross-modal hashing retrieval has also made great progress. However, current cross-modal hashing methods usually rest on two assumptions: (a) images described by similar text have similar content; and (b) images of the same category exhibit good global similarity. Real-world data often violate both assumptions, degrading the performance of cross-modal hashing models. To address these two problems, a text-guided adversarial hashing method for cross-modal retrieval (TAH) is proposed. Built on the constructed network architecture, TAH uses text hash codes as the basis for training the image network and combines local and global image features to represent image content. In addition, a text intra-modal global consistency loss, an inter-modal local-and-global consistency loss, and an adversarial classification loss are designed for training the cross-modal network. Experiments show that TAH achieves good retrieval performance on three datasets.

4.
5.
When textual and visual features alone cannot support a faithful and accurate analysis of an object's material, cross-modal retrieval can be applied in the audio-visual domain to retrieve surface materials. Mel-frequency cepstral coefficient (MFCC) features are first extracted from sound, and image features are extracted with a convolutional neural network (CNN); canonical correlation analysis then maps the two feature sets into a shared subspace, where retrieval is performed with the Euclidean distance. Experiments on the Technical University of Munich haptic texture dataset realize a cross-modal retrieval process that uses sound to retrieve images, and the results show that the proposed method works well for material retrieval.

6.
Due to their storage efficiency and fast query speed, cross-media hashing methods have attracted much attention for retrieving semantically similar data over heterogeneous datasets. Supervised hashing methods, which utilize labeled information to improve the quality of the hashing functions, achieve promising performance. However, existing supervised methods generally focus on coarse semantic information between samples (e.g., similar or dissimilar) and ignore fine semantic information, which may degrade the quality of the hashing functions. Accordingly, in this paper we propose a supervised hashing method for cross-media retrieval that utilizes coarse-to-fine semantic similarity to learn a shared space, in which both inter-category and intra-category semantic similarity are effectively preserved. An iterative descent scheme is then proposed to reach an optimal relaxed solution, and hashing codes are generated by quantizing that solution. Finally, to further improve the discrimination of the hashing codes, an orthogonal rotation matrix is learned by minimizing the quantization loss while preserving the optimality of the relaxed solution. Extensive experiments on the widely used Wiki and NUS-WIDE datasets demonstrate that the proposed method outperforms existing methods.
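The final step of the abstract above — learning an orthogonal rotation that minimizes quantization loss while preserving the relaxed solution's optimality — is the same idea as the classic ITQ alternation. A minimal sketch under that assumption (this is ITQ-style alternation, not necessarily the paper's exact procedure):

```python
import numpy as np

def learn_rotation(V, iters=50, seed=0):
    # ITQ-style alternation: learn an orthogonal R minimizing the quantization
    # loss ||sign(V R) - V R||_F^2. Because R is orthogonal, V R spans the same
    # space as V, so the optimality of the relaxed solution is preserved.
    k = V.shape[1]
    rng = np.random.default_rng(seed)
    R, _ = np.linalg.qr(rng.standard_normal((k, k)))
    for _ in range(iters):
        B = np.sign(V @ R)                 # fix R, update binary codes
        U, _, Wt = np.linalg.svd(V.T @ B)  # fix B, orthogonal Procrustes step
        R = U @ Wt
    return np.sign(V @ R), R

rng = np.random.default_rng(2)
V = rng.standard_normal((100, 16))  # stand-in relaxed real-valued solution
B, R = learn_rotation(V)
print(np.linalg.norm(B - V @ R))
```

Each half-step minimizes the same objective, so the quantization loss is non-increasing over iterations.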

7.
Shao  Jie  Zhao  Zhicheng  Su  Fei 《Multimedia Tools and Applications》2019,78(12):16615-16631

This paper deals with the problem of modeling internet images and associated texts for cross-modal retrieval, such as text-to-image and image-to-text retrieval. Recently, supervised cross-modal retrieval has attracted increasing attention. Inspired by a typical two-stage method, semantic correlation matching (SCM), we propose a novel two-stage deep learning method for supervised cross-modal retrieval. Because traditional canonical correlation analysis (CCA) is a 2-view method, SCM considers supervised semantic information only in its second stage. To maximize the value of semantics, we expand CCA from 2 views to 3 views and conduct supervised learning in both stages. In the first stage, we embed 3-view CCA into a deep architecture to learn non-linear correlations between image, text, and semantics. To mitigate over-fitting, we add the reconstruction loss of each view to the loss function, alongside the correlation loss between every pair of views and parameter regularization. In the second stage, we build a novel fully convolutional network (FCN), trained under the joint supervision of a contrastive loss and a center loss to learn better features. The proposed method is evaluated on two publicly available datasets, and the experimental results show that it is competitive with state-of-the-art methods.


8.
宫大汉    陈辉    陈仕江  包勇军  丁贵广   《智能系统学报》2021,16(6):1143-1150
The task of cross-modal image-text retrieval is important for understanding the correspondence between vision and language. Most existing methods use different attention modules to mine region-to-word and word-to-region alignments and thereby explore fine-grained cross-modal correlations. However, they overlook the inconsistent alignments that such dual attention can produce. To this end, this paper proposes a consistency agreement matching method that exploits consistent alignment to strengthen cross-modal retrieval. Attention is used to align cross-modal correlations, and a competitive-voting cross-modal agreement is designed on top of the alignment results; the agreement measures the consistency of the cross-modal alignment and can effectively improve cross-modal image-text retrieval. Extensive experiments on the Flickr30K and MS COCO benchmark datasets demonstrate the effectiveness of the proposed method.

9.
With the rapid growth of image and video data on the web, traditional image retrieval methods can no longer handle massive data efficiently. For large-scale image retrieval, deep hashing, which combines feature hashing with deep learning, has become the dominant trend; to give a comprehensive picture of deep hashing for image retrieval, this paper surveys the field. Depending on whether label information is used, deep hashing methods are divided into unsupervised, semi-supervised, and supervised methods. Unsupervised and semi-supervised methods are further divided, according to their main research focus, into those based on convolutional neural networks (CNN) and those based on generative adversarial networks (GAN); supervised methods are further divided, according to the label information used, into triplet-based and pairwise-based methods. For each class, the principles and characteristics of several classical methods are introduced according to their loss functions, and their strengths and weaknesses are analyzed. By comparing the retrieval performance of the various deep hashing methods on the CIFAR-10 and NUS-WIDE datasets, together with test results on two in-house databases built by the Center for Image and Information Processing (CIIP) at Xi'an University of Posts and Telecommunications, the survey summarizes deep-hashing-based retrieval and analyzes its future prospects. Supervised deep hashing achieves high retrieval accuracy, but because it depends heavily on data labels, unsupervised deep hashing is attracting growing attention. Deep hashing is an effective way to retrieve large-scale image data efficiently, but technical difficulties remain to be overcome, and research on unsupervised deep hashing algorithms deserves more attention in light of practical application needs.

10.
Mobile robots perceive their surroundings mainly through point clouds collected by lidar and images collected by cameras. In extreme weather or at night, camera images suffer severe interference. This paper proposes a lidar-image cross-modal retrieval technique for outdoor mobile robots based on cluster canonical correlation analysis (cluster-CCA). Deep networks first extract features from the point clouds and images; cluster-CCA then maps the features of the two modalities into a shared subspace; finally, retrieval is performed with the Euclidean distance, returning the image files most similar to a given point cloud from an image database. The proposed method is validated on the KITTI dataset, realizing point-cloud-to-image cross-modal retrieval, and the results confirm the effectiveness of cluster-CCA for lidar-image retrieval on outdoor mobile robots.

11.

A great many approaches have been developed for cross-modal retrieval, among which subspace-learning-based ones dominate the landscape. Depending on whether semantic label information is used, subspace learning approaches fall into two paradigms, unsupervised and supervised. For multi-label cross-modal retrieval, however, supervised approaches simply exploit multi-label information to obtain a discriminative subspace, without considering the correlations among the multiple labels shared across modalities, which often leads to unsatisfactory retrieval performance. To address this issue, in this paper we propose a general framework that jointly incorporates semantic correlations into subspace learning for multi-label cross-modal retrieval. By introducing an HSIC-based regularization term, not only is the correlation information among multiple labels leveraged, but the similarity consistency within each modality is also well preserved. Besides, based on the semantic-consistency projection, the semantic gap between the low-level feature space of each modality and the shared high-level semantic space is bridged by a mid-level consistent space, in which multi-label cross-modal retrieval can be performed effectively and efficiently. To solve the optimization problem, an effective iterative algorithm is designed, along with theoretical and experimental convergence analysis. Experimental results on real-world datasets show the superiority of the proposed method over several existing cross-modal subspace learning methods.
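The HSIC-based regularizer mentioned above has a standard empirical form: the Hilbert-Schmidt Independence Criterion between two kernel matrices. A minimal sketch of that quantity (the linear kernels and all names are illustrative assumptions, not the paper's exact regularizer):

```python
import numpy as np

def hsic(K, L):
    # Empirical HSIC between kernel matrices K and L (both n x n):
    # HSIC = (n - 1)^{-2} * tr(K H L H), with centering H = I - (1/n) 11^T.
    # As a regularizer it couples two similarity structures, e.g. label
    # correlations and modality similarities, as the abstract describes.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

rng = np.random.default_rng(4)
X = rng.standard_normal((20, 5))
K = X @ X.T                       # linear kernel on one view
L = (2 * X + 1) @ (2 * X + 1).T   # linear kernel on a dependent view
print(hsic(K, L))
```

For positive semidefinite kernels the statistic is non-negative, and it vanishes when one kernel is constant (no dependence to capture), which is what makes it usable as a dependence-maximizing penalty.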


12.
Most cross-modal hashing retrieval methods only factorize a similarity matrix or a label matrix, which underuses the semantic information in labels, loses semantic information during label-matrix factorization, and yields hash codes with weak discriminative power. To address this, a semantic-embedding reconstruction method for cross-modal hashing retrieval is proposed. It first embeds the pairwise similarity of the label matrix into the hash codes by minimizing the difference between pairwise label distances and pairwise hash-code distances. It then factorizes and reconstructs the label matrix to learn a common subspace, from which regression generates the hash codes, thereby embedding the category information of the label matrix into the codes, effectively limiting the semantic loss during factorization, and further improving the discriminative power of the hash codes. Experiments on three public benchmark datasets verify the effectiveness of the method.

13.
With the advance of internet and multimedia technologies, large-scale multi-modal representation techniques such as cross-modal hashing are increasingly demanded for multimedia retrieval. In cross-modal hashing, three essential problems should be seriously considered. The first is that an effective cross-modal relationship should be learned from training data with scarce label information. The second is that appropriate weights should be assigned to different modalities to reflect their importance. The last is the scalability of the training process, which is usually ignored by previous methods. In this paper, we propose Multi-graph Cross-modal Hashing (MGCMH) to comprehensively address these three points. MGCMH is an unsupervised method that integrates multi-graph learning and hash-function learning into a joint framework to learn a unified hash space for all modalities. In MGCMH, different modalities are assigned proper weights for the generation of the multi-graph and the hash codes, respectively, so that a more precise cross-modal relationship can be preserved in the hash space. The Nyström approximation is then leveraged to construct the graphs efficiently. Finally, an alternating learning algorithm is proposed to jointly optimize the modality weights, hash codes, and functions. Experiments conducted on two real-world multi-modal datasets demonstrate the effectiveness of our method in comparison with several representative cross-modal hashing methods.

14.
Objective: Hashing-based cross-modal retrieval methods have attracted wide attention for their fast retrieval and low storage cost. However, because most such algorithms map data of different modalities directly into a common Hamming space, they struggle with the large differences in feature representation and feature dimensionality across modalities, and they rarely preserve the structural information of the original data in the Hamming space. To address these problems, this paper proposes a coupled-projection, structure-preserving hashing method for cross-modal retrieval. Method: To handle the heterogeneity of cross-modal data, data of each modality are first projected into their own subspaces to narrow the modality gap, with a graph model introduced during subspace learning to preserve structural consistency among the data. To build semantic associations across modalities, the subspace features are then mapped into the Hamming space to obtain unified hash codes. Finally, class-label constraints are introduced to improve the discriminative power of the hash codes. Results: The method is compared with mainstream methods on three datasets. On Wikipedia, its mean average precision (mAP) exceeds that of the second-best algorithm by about 6% on image-to-text retrieval (I to T) and about 3% on text-to-image retrieval (T to I); on MIRFlickr, the margins are about 2% and 5%; on Pascal Sentence, about 10% and 7%. Conclusion: The method is applicable to mutual retrieval between two modalities; the coupled-projection and graph-model modules effectively improve cross-modal retrieval accuracy.

15.
In this paper, we present a simple and effective topic correlation model (TCM) for cross-modal multimedia retrieval that jointly models the text and image components of multimedia documents. In this model, the image component is represented by a bag-of-features model built on local scale-invariant feature transform features, while the text component is described by a topic distribution learned from a latent topic model. Statistical correlations between these two mid-level representations are investigated by mapping them into a semantic space, and these cross-modality correlations are used to compute the conditional probabilities of answers in one modality given a query in the other. The model is tested on three cross-modal retrieval benchmarks, including Wikipedia documents in both English and Chinese. Experimental results demonstrate that the TCM model achieves the best performance compared with recent state-of-the-art cross-modal retrieval models on the given benchmarks.

16.
Existing hashing methods cannot distinguish the importance of different regional features during feature learning and do not make full use of label information to deeply mine inter-modal correlations. To address this, an adaptive hybrid attention deep cross-modal hashing retrieval (AHAH) model is proposed. First, channel attention and spatial attention are organically combined through autonomously learned weights to strengthen attention to relevant target regions in the feature map while weakening attention to irrelevant ones. Second, by statistically analyzing modality labels, the proposed similarity-computation method quantifies inter-modal similarity as a number between 0 and 1 for a finer-grained representation. On four widely used datasets, MIRFLICKR-25K, NUS-WIDE, MSCOCO, and IAPR TC-12, with 16-bit hash codes, the mean average precision (mAP) of the proposed method exceeds that of the state-of-the-art method multi-label semantics preserving hashing (MLSPH) by 2.25%, 1.75%, 6.8%, and 2.15%, respectively. Ablation studies and an efficiency analysis further demonstrate the effectiveness of the proposed method.

17.
Kang  Peipei  Lin  Zehang  Yang  Zhenguo  Fang  Xiaozhao  Bronstein  Alexander M.  Li  Qing  Liu  Wenyin 《Applied Intelligence》2022,52(1):33-54

Cross-modal retrieval aims to retrieve related items across different modalities, for example, using an image query to retrieve related text. The existing deep methods ignore both the intra-modal and inter-modal intra-class low-rank structures when fusing various modalities, which decreases the retrieval performance. In this paper, two deep models (denoted as ILCMR and Semi-ILCMR) based on intra-class low-rank regularization are proposed for supervised and semi-supervised cross-modal retrieval, respectively. Specifically, ILCMR integrates the image network and text network into a unified framework to learn a common feature space by imposing three regularization terms to fuse the cross-modal data. First, to align them in the label space, we utilize semantic consistency regularization to convert the data representations to probability distributions over the classes. Second, we introduce an intra-modal low-rank regularization, which encourages the intra-class samples that originate from the same space to be more relevant in the common feature space. Third, an inter-modal low-rank regularization is applied to reduce the cross-modal discrepancy. To enable the low-rank regularization to be optimized using automatic gradients during network back-propagation, we propose the rank-r approximation and specify the explicit gradients for theoretical completeness. In addition to the three regularization terms that rely on label information incorporated by ILCMR, we propose Semi-ILCMR in the semi-supervised regime, which introduces a low-rank constraint before projecting the general representations into the common feature space. Extensive experiments on four public cross-modal datasets demonstrate the superiority of ILCMR and Semi-ILCMR over other state-of-the-art methods.


18.
张成  万源  强浩鹏 《计算机应用》2021,41(9):2523-2531
Cross-modal hashing has attracted wide attention for its low storage cost and high retrieval efficiency. Most existing cross-modal hashing methods require extra manual labels to supply the correlations between instances; however, the deep features learned by pretrained deep unsupervised cross-modal hashing methods can provide similar information. Moreover, relaxing the discrete constraint during hash-code learning causes large quantization loss. To address these two problems, a knowledge-distillation-based deep unsupervised discrete cross-modal hashing (DUD...

19.
Li  Xuan  Wu  Wei  Yuan  Yun-Hao  Pan  Shirui  Shen  Xiaobo 《Applied Intelligence》2022,52(13):14905-14917
Applied Intelligence - Cross-view hashing has shown great potential for large-scale retrieval due to its superiority in terms of computation and storage. In real-world applications, data emerges in...

20.
Wu  Jiagao  Weng  Weiwei  Fu  Junxia  Liu  Linfeng  Hu  Bin 《Neural computing & applications》2022,34(7):5397-5416
Neural Computing and Applications - With the explosive growth of multimodal data, cross-modal retrieval has drawn increasing research interests. Hashing-based methods have made great advancements...
