Multimodal concept fusion using semantic closeness for image concept disambiguation
Authors:Ahmad Adel Abu-Shareha  Rajeswari Mandava  Latifur Khan  Dhanesh Ramachandram
Affiliation:1. School of Computer Science, Universiti Sains Malaysia, Penang, Malaysia
2. Department of Computer Science, University of Texas at Dallas, Richardson, TX 75083-0688, USA
Abstract:In this paper we show how to resolve the ambiguity of concepts extracted from the visual stream using concepts identified in the associated textual stream. Disambiguation is performed at the concept level based on semantic closeness over a domain ontology. Semantic closeness is a function of the distance in the ontology between the concept to be disambiguated and selected associated concepts. In this process, an image concept can be disambiguated against any associated concept from the image and/or the text. The ability of text concepts to resolve ambiguity in image concepts varies: the best case occurs when the same concept(s) is stated clearly in both the image and the text, while the worst case occurs when the image concept is isolated and has no semantically close text concept. WordNet and the image labels with selected senses are used to construct the domain ontology employed in the disambiguation process. The improved accuracy shown in the results demonstrates the effectiveness of the proposed disambiguation process.
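The abstract describes disambiguation by semantic closeness, a decreasing function of the distance between concepts in a domain ontology. A minimal sketch of that idea, using a hand-built toy graph in place of the paper's WordNet-derived ontology (the graph, the `#sense` naming, and the 1/(1+d) closeness function are all illustrative assumptions, not the authors' exact construction):

```python
from collections import deque

# Toy "domain ontology" as an undirected graph. This is an assumption
# for illustration; the paper builds its ontology from WordNet and the
# image labels with selected senses.
ONTOLOGY = {
    "entity": ["natural_object", "institution"],
    "natural_object": ["entity", "river", "bank#riverbank"],
    "river": ["natural_object", "water"],
    "water": ["river"],
    "institution": ["entity", "bank#finance"],
    "bank#riverbank": ["natural_object"],
    "bank#finance": ["institution"],
}

def distance(a, b):
    """Shortest-path length between two concepts in the ontology (BFS)."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in ONTOLOGY.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")

def closeness(a, b):
    """Semantic closeness as a decreasing function of ontology distance."""
    return 1.0 / (1.0 + distance(a, b))

def disambiguate(candidate_senses, context_concepts):
    """Pick the candidate sense with the highest total closeness to the
    associated (image and/or text) context concepts."""
    return max(candidate_senses,
               key=lambda s: sum(closeness(s, c) for c in context_concepts))

# An ambiguous image concept "bank", with text concepts "water"/"river"
# as context: the riverbank sense lies closer in the graph.
print(disambiguate(["bank#riverbank", "bank#finance"], ["water", "river"]))
# → bank#riverbank
```

When the text contains no concept close to the image concept (the worst case the abstract mentions), the closeness scores become small and near-uniform, and the choice of sense is effectively unsupported by context.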
This article is indexed by SpringerLink and other databases.