Similar Documents
19 similar documents found (search time: 156 ms)
1.
A texture image retrieval method is proposed that fuses fuzzy semantic concepts with fine-grained visual features. First, a fast coarse search over the entire image database is performed according to linguistic expressions and fuzzy semantic concepts, yielding semantic retrieval results with a "soft" boundary; then a fine search based on visual features is carried out within those semantic results rather than over the whole database. The method combines the strengths of content-based and semantics-based image retrieval: users can quickly browse and retrieve by semantic concept, and can also perform fine matching against the visual features of a query example image. Moreover, the coarse-to-fine two-stage strategy markedly improves retrieval performance. Experiments on the Brodatz texture database show that, with a suitably tuned semantic retrieval boundary, the method clearly outperforms retrieval based on visual features alone.
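The coarse-to-fine strategy described above can be sketched as follows; the database entries, membership scores, and function names are illustrative assumptions, not the paper's implementation:

```python
import math

def coarse_to_fine_retrieve(database, query_concept, query_features, boundary=0.5):
    """Stage 1: coarse semantic filter with a 'soft' boundary; Stage 2: fine visual ranking."""
    # Coarse stage: keep images whose fuzzy membership in the query concept
    # meets the (tunable) semantic boundary.
    candidates = [img for img in database
                  if img["membership"].get(query_concept, 0.0) >= boundary]

    # Fine stage: rank only the surviving candidates by visual-feature distance.
    def dist(img):
        return math.sqrt(sum((a - b) ** 2
                             for a, b in zip(img["features"], query_features)))
    return sorted(candidates, key=dist)

database = [
    {"name": "brick",  "membership": {"coarse": 0.9}, "features": [0.8, 0.1]},
    {"name": "silk",   "membership": {"coarse": 0.1}, "features": [0.2, 0.9]},
    {"name": "gravel", "membership": {"coarse": 0.7}, "features": [0.6, 0.4]},
]
results = coarse_to_fine_retrieve(database, "coarse", [0.75, 0.2])
print([img["name"] for img in results])  # ['brick', 'gravel']
```

The `boundary` parameter plays the role of the tunable "soft" semantic boundary: lowering it admits more candidates into the fine visual-matching stage.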

2.
To address the large modality gap between sketch images and visible-light images, a sketch face recognition method based on a transformation-generation network is proposed; the network performs cross-modal image generation and sketch face recognition simultaneously. It consists of a generator, a discriminator, and a feature transformation network (S-network). The generator synthesizes images, the discriminator drives the generated images to carry information from both modalities, and the S-network extracts high-level semantic features to assist generation and recognition. End-to-end training is used to update the model parameters...

3.
Observing that a facial expression can be decomposed into an expression component and a neutral component, a novel facial expression recognition method based on style transfer is proposed. The method trains cycle-consistent generative adversarial networks (Cycle-GAN) to obtain a set of generators, each of which transfers one particular expression to neutral, so each generator corresponds to a different expression. At test time, the expression image is fed into each trained generator; since only the generator matching the input expression can transfer it to a neutral expression, facial expression recognition is achieved in this way. Experimental results show that the method not only performs strongly on facial expression datasets collected under laboratory conditions but also achieves very high recognition rates on datasets collected in the wild.
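The test-time decision rule lends itself to a small sketch; here stub generators stand in for the trained Cycle-GAN models, and all names and vectors are assumptions:

```python
def recognize_expression(image, generators, neutral_reference):
    """Feed the image to every expression-to-neutral generator and pick the
    expression whose generator output lands closest to a neutral reference."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    scores = {expr: distance(gen(image), neutral_reference)
              for expr, gen in generators.items()}
    return min(scores, key=scores.get)

# Toy "generators": each removes its own expression offset from a feature vector.
offsets = {"happy": [0.5, 0.0], "sad": [0.0, 0.5]}
generators = {expr: (lambda img, off=off: [x - o for x, o in zip(img, off)])
              for expr, off in offsets.items()}
neutral = [0.0, 0.0]
print(recognize_expression([0.5, 0.0], generators, neutral))  # happy
```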

4.
Research on graphical annotation and retrieval of image semantics   (cited by 1: 0 self-citations, 1 by others)
The goal of semantics-based image retrieval is to find, from the user's perspective, images consistent with the user's understanding. To address problems in current semantic image retrieval, an object-based model for annotating image semantic content and a corresponding retrieval framework are proposed. First, a segmentation algorithm extracts the semantic object regions of an image; then, based on the semantic description scheme of the MPEG-7 standard, a graph structure is used to annotate the image's semantic content. During retrieval, the user's query is converted into a graphical description; query documents are formed by extracting paths of different lengths from this description graph and matched against the semantic annotation documents in the image database. Experimental results show that the proposed method effectively supports semantics-based image annotation and retrieval, with higher recall and precision than full-text retrieval.
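The path-extraction step can be illustrated with a toy directed graph standing in for the MPEG-7-style semantic description; the node names and the graph itself are invented:

```python
def extract_paths(graph, max_edges=2):
    """Enumerate all simple paths with 1..max_edges edges as arrow-joined strings,
    forming the 'query document' matched against stored annotation documents."""
    results = []
    def walk(node, path):
        if len(path) > 1:                      # every prefix of length >= 1 edge is a path
            results.append("->".join(path))
        if len(path) - 1 == max_edges:         # reached the maximum path length
            return
        for nxt in graph.get(node, []):
            if nxt not in path:                # keep paths simple (no cycles)
                walk(nxt, path + [nxt])
    for start in graph:
        walk(start, [start])
    return results

# A toy annotation graph: agent "dog" performs event "run" at place "park".
graph = {"dog": ["run"], "run": ["park"]}
print(extract_paths(graph))  # ['dog->run', 'dog->run->park', 'run->park']
```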

5.
Deep-learning methods have made major progress in facial expression recognition, yet facial expression databases are generally small. To alleviate this shortage of data, a static-image data augmentation method is proposed. Building on StarGAN, the reconstruction error is modified to achieve multi-style facial expression image translation: from a face image with one expression, the generator produces face images of the same person with other expressions. Experiments on the CK+ expression database show that the method improves the recognition rate and generalization ability of facial expression recognition models, and it also offers a useful reference for handling imbalanced data.

6.
7.
Text-to-image generation aims to synthesize realistic images from natural-language descriptions and is a cross-modal task involving both text and images. Generative adversarial networks, with their realistic outputs and high efficiency, have become the mainstream model for this task. However, current methods often train word-level and sentence-level text features separately, under-using the text information and frequently producing images that do not match the description. To address this problem, an adversarial cascaded image generation model coupling word-level and sentence-level text features (Union-GAN) is proposed. At each image generation stage, a text-image joint perception module (Union-Block) is introduced that combines channel-wise affine transformation with cross-modal attention, fully exploiting the word-level and sentence-level semantics of the text so that the generated images both match the textual description and retain a clear structure. The discriminators are jointly optimized, with spatial attention added to each, so that the supervision signal from the text drives the generator toward more relevant images. Compared with AttnGAN and other current representative models on the CUB-200-2011 dataset, Union-GAN reaches an FID score of 13.67, a 42.9% improvement over AttnGAN, and an IS score of 4.52, an improvement of 0.16.

8.
A cross-modal retrieval method fusing deep canonical correlation analysis and adversarial learning (DCCA-ACMR) is proposed; it raises the utilization of unlabeled samples, learns a stronger feature projection model, and thereby improves cross-modal retrieval accuracy. Specifically, within the DCGAN framework: 1) a deep canonical correlation analysis constraint is added between the image and text single-modality representation layers to build an image-text feature projection model that fully mines the semantic correlation of sample pairs; 2) the feature projection model serves as the generator and a modality-feature classification model serves as the discriminator, together forming the image-text cross-modal retrieval model; 3) using both labeled and unlabeled samples, a common subspace representation is learned through the adversarial interplay of generator and discriminator. The method is validated on the public Wikipedia and NUSWIDE-10k datasets with mean average precision (mAP) as the evaluation metric; the average mAP over image-to-text and text-to-image retrieval is 0.556 and 0.563 on the two datasets, respectively. The experimental results show that DCCA-ACMR outperforms existing representative methods.

9.
With the continuing development of deep learning, cross-modal hashing retrieval has also advanced considerably. Current cross-modal hashing methods, however, usually rest on two assumptions: a) images described by similar text have similar content; b) images of the same category exhibit good global similarity. Real-world datasets often violate both assumptions, degrading the performance of cross-modal hashing models. To address these two problems, a text-guided adversarial hashing method for cross-modal retrieval (TAH) is proposed. On top of the constructed network, text hash codes serve as the basis for training the image network, and local image features are combined with global features to represent image content. In addition, an intra-text-modality global consistency loss, inter-modality local and global consistency losses, and a classification adversarial loss are designed for training the cross-modal network. Experiments show that TAH achieves good retrieval performance on three datasets.

10.
Text-to-image person retrieval aims to find, in a person database, the images matching a given textual description; it has recently attracted wide attention in both academia and industry. The task faces two challenges at once: fine-grained retrieval and the heterogeneous gap between images and text. Some methods use supervised attribute learning to extract attribute-related features and associate images and text at a fine granularity. However, attribute labels are hard to obtain, so these methods perform poorly in practice. How to extract attribute-related features without attribute annotations and build fine-grained cross-modal semantic associations has therefore become a key open problem. To solve it, a text-to-image person retrieval method based on virtual attribute learning is proposed that incorporates pre-training and builds fine-grained cross-modal semantic associations through unsupervised attribute learning. First, exploiting the invariance of person attributes and cross-modal semantic consistency, a semantics-guided attribute decoupling method is proposed that uses person identity labels as the supervision signal to guide the model in decoupling attribute-related features. Second, a feature learning module based on semantic reasoning is proposed that builds a semantic graph over the relations among attributes; the graph model exchanges information among attributes to strengthen the cross-modal discriminability of the features. Experimental comparisons with existing methods on the public text-to-image person retrieval dataset CUHK-PEDES and the cross-modal retrieval dataset Flickr30k demonstrate the effectiveness of the proposed method.

11.
12.
To remove the ambiguity of natural-language descriptions of software components and strengthen the semantic relations between terms, this paper adopts the idea of domain ontologies and presents a software component clustering model based on an artificial-intelligence domain ontology, together with a clustering algorithm built on that model. By analyzing the common concepts of the domain, the model forms a domain ontology knowledge base that supplies terms agreed upon within the domain, which are used to match the natural language describing components. Comparative analysis against a K-Means algorithm based on the traditional vector space model verifies that the proposed algorithm is effective: it clusters software components more reasonably and improves the efficiency and accuracy of component retrieval.
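The term-normalization idea, mapping free-text description terms onto canonical ontology concepts before vectorization and clustering, might look like this minimal sketch (the ontology entries are invented examples):

```python
# Hypothetical ontology: synonyms and variant spellings map to one canonical term,
# so component descriptions using different wording become comparable.
ONTOLOGY = {
    "neural net": "neural_network",
    "ann": "neural_network",
    "svm": "support_vector_machine",
    "support vector machine": "support_vector_machine",
}

def normalize(description):
    """Replace each comma-separated term with its canonical ontology term,
    leaving unknown terms unchanged."""
    terms = []
    for token in description.lower().split(","):
        token = token.strip()
        terms.append(ONTOLOGY.get(token, token))
    return terms

print(normalize("ANN, svm, parser"))  # ['neural_network', 'support_vector_machine', 'parser']
```

After normalization, two descriptions like "ANN classifier" and "neural net classifier" share the same term vector, which is what lets the ontology-based clustering outperform plain vector-space K-Means.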

13.
As an emerging form of Internet language, emoticons are favored by more and more microblog users. Emoticons in microblogs express the author's emotion vividly and intuitively and play a crucial role in emotion analysis. This paper first statistically analyzes the usage patterns, distribution, and emotional characteristics of emoticons in a large collection of Chinese microblogs. Then, representative emoticons with clear emotional polarity are manually selected as seed emoticons for six basic emotion categories. Based on the co-occurrence of a target emoticon with the seed emoticons of the six emotion categories in microblog text, a six-dimensional emotion vector is built for it and applied to microblog emotion analysis. Experimental results on two datasets show that the proposed emoticon emotion vectors effectively improve the accuracy of microblog emotion recognition.
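The construction of the six-dimensional emotion vector from co-occurrence counts can be sketched as follows; the seed emoticons and posts are made-up examples, not the paper's actual seed set:

```python
from collections import Counter

EMOTIONS = ["joy", "anger", "sadness", "fear", "surprise", "disgust"]
# Hypothetical seed emoticons, one per basic emotion (the paper selects real
# Weibo emoticons with clear polarity).
SEEDS = {"[哈哈]": "joy", "[怒]": "anger", "[泪]": "sadness",
         "[衰]": "fear", "[吃惊]": "surprise", "[鄙视]": "disgust"}

def emotion_vector(target, posts):
    """Build a six-dimensional emotion vector for `target` from its co-occurrence
    with seed emoticons inside the same microblog post."""
    counts = Counter()
    for post in posts:
        if target in post:
            for seed, emotion in SEEDS.items():
                if seed in post:
                    counts[emotion] += 1
    total = sum(counts.values()) or 1          # avoid division by zero
    return [counts[e] / total for e in EMOTIONS]

posts = ["今天真开心 [哈哈] [微笑]", "又加班 [泪] [微笑]", "[微笑] [哈哈]"]
print(emotion_vector("[微笑]", posts))  # joy-weighted vector
```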

14.
The use of emoticons, particularly on smart mobile devices, has become increasingly popular among young people in recent years. The emoticon is not just a form of information transmission; it also represents a new way of life and a new trend among young people. The aim of this study was to understand the role of emoticons in the daily lives of Chinese young people, investigate how mediated communication has been improved by smart mobile devices and discover users’ preferred types of emoticon. First, a review of the literature on emoticons was conducted to explore the relationship between the evolution of the emoticon and young people’s everyday lives. Next, in-depth interviews were conducted, which yielded novel findings regarding emoticon use. Finally, a questionnaire was used to obtain data on emoticon use for statistical analysis. The findings revealed four key dimensions of emoticon use that were relevant to the research questions: accuracy, sociability, efficiency and enjoyment. These four dimensions help us to better understand the development of emoticons and their influence on the lifestyles of young people. The findings are expected to provide inspiration and suggest directions for improving the design of smart communication.

15.
16.
With the rising popularity of social media in the context of environments based on the Internet of things (IoT), semantic information has emerged as an important bridge to connect human intelligence with heterogeneous media big data. As a critical tool for improving media big data retrieval, semantic fusion faces a number of challenges: the manual method is inefficient, and the automatic approach is inaccurate. To address these challenges, this paper proposes a solution called CSF (Crowdsourcing Semantic Fusion) that makes full use of the collective wisdom of social users and introduces crowdsourcing computing to semantic fusion. First, the correlation of cross-modal semantics is mined and the semantic objects are normalized for fusion. Second, dimension reduction and relevance feedback are employed to reduce non-principal components and noise. Finally, the storage and distribution mechanism is studied. Experimental results highlight the efficiency and accuracy of the proposed approach. The proposed method is an effective and practical cross-modal semantic fusion and distribution mechanism for heterogeneous social media, provides a novel idea for social media semantic processing, and uses an interactive visualization framework for social media knowledge mining and retrieval to improve semantic knowledge and the effect of representation.

17.

A great many approaches have been developed for cross-modal retrieval, among which subspace-learning-based ones dominate the landscape. Depending on whether semantic label information is used, subspace-learning approaches fall into two paradigms, unsupervised and supervised. For multi-label cross-modal retrieval, however, supervised approaches simply exploit multi-label information to obtain a discriminative subspace, without considering the correlations among the multiple labels shared across modalities, which often leads to unsatisfactory retrieval performance. To address this issue, this paper proposes a general framework that jointly incorporates semantic correlations into subspace learning for multi-label cross-modal retrieval. By introducing an HSIC-based regularization term, the correlation information among multiple labels is leveraged while the consistency with each modality's similarity structure is well preserved. Moreover, based on the semantic-consistency projection, the semantic gap between the low-level feature space of each modality and the shared high-level semantic space is bridged by a mid-level consistent space, in which multi-label cross-modal retrieval can be performed effectively and efficiently. To solve the optimization problem, an effective iterative algorithm is designed, and its convergence is analyzed both theoretically and experimentally. Experimental results on real-world datasets show the superiority of the proposed method over several existing cross-modal subspace learning methods.
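The HSIC regularizer mentioned above is, in its biased empirical form, trace(KHLH)/(n-1)^2 with centering matrix H = I - (1/n)11^T. A pure-Python sketch with linear kernels follows (a real implementation would use NumPy; this is only an illustration of the estimator, not the paper's full framework):

```python
def matmul(A, B):
    """Naive dense matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def linear_kernel(X):
    """Gram matrix of pairwise inner products."""
    return [[sum(a * b for a, b in zip(x, y)) for y in X] for x in X]

def hsic(X, Y):
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2."""
    n = len(X)
    H = [[(1.0 if i == j else 0.0) - 1.0 / n for j in range(n)] for i in range(n)]
    K, L = linear_kernel(X), linear_kernel(Y)
    KHLH = matmul(matmul(matmul(K, H), L), H)
    return sum(KHLH[i][i] for i in range(n)) / (n - 1) ** 2

# A variable paired with itself yields a strictly larger HSIC than a pair
# with a constant (hence statistically independent) second view.
X = [[1.0], [2.0], [3.0], [4.0]]
Z = [[0.0], [0.0], [0.0], [0.0]]
print(hsic(X, X) > hsic(X, Z))  # True
```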


18.
To obtain more discriminative cross-modal representations, a cross-modal representation learning method based on a multi-negative contrastive mechanism, supervised contrastive cross-modal representation learning (SCCMRL), is proposed and applied to the visual and auditory modalities. SCCMRL extracts visual and audio features with a visual encoder and an audio encoder, respectively, and uses a supervised contrastive loss to contrast each sample against multiple negatives, so that visual and audio features of the same category are drawn closer together while those of different categories are pushed further apart. In addition, a center loss and a label loss are introduced to further enforce modality consistency and semantic discriminability of the cross-modal representations. To verify the effectiveness of SCCMRL, a cross-modal retrieval system was built on the method and evaluated on the Sub_URMP and XmediaNet datasets. The experimental results show that SCCMRL achieves higher mAP values than commonly used cross-modal retrieval methods, confirming the feasibility of cross-modal representation learning under a multi-negative contrastive mechanism.
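The multi-negative supervised contrastive loss can be sketched as follows; the formula follows the standard supervised contrastive form, and the encoders are replaced by fixed toy embeddings (an assumption for illustration, not SCCMRL's actual networks or losses):

```python
import math

def sup_con_loss(anchors, candidates, labels, tau=0.1):
    """Supervised contrastive loss: each anchor is contrasted against all
    candidates; every candidate sharing the anchor's label is a positive,
    all others act as negatives."""
    def sim(a, b):  # cosine similarity
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    loss = 0.0
    for i, a in enumerate(anchors):
        exps = [math.exp(sim(a, c) / tau) for c in candidates]
        denom = sum(exps)
        positives = [j for j, l in enumerate(labels) if l == labels[i]]
        loss += -sum(math.log(exps[j] / denom) for j in positives) / len(positives)
    return loss / len(anchors)

# Toy visual/audio embeddings: index i of each list belongs to class labels[i].
visual = [[1.0, 0.0], [0.0, 1.0]]
audio_aligned = [[0.9, 0.1], [0.1, 0.9]]
print(sup_con_loss(visual, audio_aligned, labels=[0, 1]))
```

Aligned audio embeddings yield a much smaller loss than class-swapped ones, which is exactly the pressure that pulls same-category visual and audio features together.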

19.
Wang, Yanmei. Multimedia Tools and Applications, 2020, 79(27-28): 19151-19166

Microblog (such as Weibo) is an integrated social platform of vital importance in the internet age. Because of its diversity, subjectivity and timeliness, microblogging is popular with the public. In order to perform sentiment classification on microblog posts and overcome the limitations of text information alone, a fine-grained sentiment analysis method is proposed in which emoticon attributes are considered. Firstly, the microblog texts are pre-processed to remove stop words and noise such as links. Then the data is matched against the sentiment lexicon, and after the first matching succeeds, a second matching is performed against the emoticon dictionary, whose emoticons are transformed into vector form. Through these matchings, the emotional features are vectorized and combined with the other text features. Finally, an iterative naive Bayesian classification method is used for sentiment classification. The experimental results show that emoticons have an obvious facilitating effect on the sentiment classification of microblog posts, and the proposed method achieves better-than-average results in terms of classification accuracy compared with state-of-the-art techniques.
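The two-stage matching (sentiment lexicon first, then emoticon dictionary) that produces the combined feature vector might be sketched as follows; the lexicon entries and emoticon vectors are invented examples, and the downstream classifier is omitted:

```python
# Hypothetical resources: a polarity lexicon and an emoticon dictionary whose
# entries are already in vector form ([positive, negative]).
SENTIMENT_LEXICON = {"love": 1.0, "great": 0.8, "hate": -1.0, "bad": -0.6}
EMOTICON_DICT = {":)": [1.0, 0.0], ":(": [0.0, 1.0]}

def extract_features(post):
    """First matching: sentiment lexicon; second matching: emoticon dictionary.
    Returns [lexicon score, positive-emoticon count, negative-emoticon count]."""
    tokens = post.split()
    lexicon_score = sum(SENTIMENT_LEXICON.get(t, 0.0) for t in tokens)
    emo = [0.0, 0.0]
    for t in tokens:
        if t in EMOTICON_DICT:
            emo = [a + b for a, b in zip(emo, EMOTICON_DICT[t])]
    return [lexicon_score] + emo

print(extract_features("i love this :)"))  # [1.0, 1.0, 0.0]
```

A feature vector of this shape would then be fed, together with the other text features, to the iterative naive Bayesian classifier described in the abstract.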


