Similar Documents (20 results)
1.
We investigate the possibility of using Semantic Web data to improve hypertext Web search. In particular, we use relevance feedback to create a ‘virtuous cycle’ between data gathered from the Semantic Web of Linked Data and web pages gathered from the hypertext Web. Previous approaches have generally treated search over the Semantic Web and the hypertext Web as entirely disparate tasks, indexing and searching over different domains. While relevance feedback has traditionally improved information retrieval performance, it is normally used to improve rankings over a single data-set. Our novel approach is to use relevance feedback from hypertext Web results to improve Semantic Web search, and results from the Semantic Web to improve the retrieval of hypertext Web data. In both cases, an evaluation is performed based on certain kinds of informational queries (abstract concepts, people, and places) selected from a real-life query log and checked by human judges. We evaluate our work over a wide range of algorithms and options, and show that it also improves baseline performance on these queries for deployed systems such as the Semantic Web search engine FALCON-S and Yahoo! Web search. We further show that the use of Semantic Web inference seems to hurt performance, while pseudo-relevance feedback increases performance in both cases, although not as much as actual relevance feedback. Lastly, our evaluation is the first rigorous ‘Cranfield’ evaluation of Semantic Web search.
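As a minimal sketch of the cross-domain relevance feedback loop described above, the snippet below applies a Rocchio-style query update over TF-IDF vectors: top hypertext results act as (pseudo-)relevance feedback and the expanded query re-ranks the Semantic Web side. The toy documents, the weighting constants, and the use of Rocchio itself are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical document sets: hypertext Web pages and Semantic Web (Linked Data) descriptions.
web_pages = ["eiffel tower paris landmark", "paris travel guide hotels", "python programming tutorial"]
semweb_docs = ["dbpedia eiffel tower monument paris france", "dbpedia python programming language"]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(web_pages + semweb_docs)
web_vecs = matrix[: len(web_pages)]
sem_vecs = matrix[len(web_pages):]

def rocchio(query_vec, relevant, alpha=1.0, beta=0.75):
    """Rocchio update: move the query toward the centroid of the relevant feedback documents."""
    centroid = np.asarray(relevant.mean(axis=0))
    return alpha * query_vec + beta * centroid

# Initial textual query against the hypertext Web.
q = vectorizer.transform(["eiffel tower"]).toarray()
web_scores = cosine_similarity(q, web_vecs).ravel()
feedback = web_vecs[web_scores.argsort()[::-1][:2]]   # top hypertext results as (pseudo-)relevance feedback

# The expanded query re-ranks the Semantic Web side, closing the "virtuous cycle".
q_expanded = rocchio(q, feedback)
sem_scores = cosine_similarity(q_expanded, sem_vecs).ravel()
print(sem_scores)
```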

2.
3.
We present a new text-to-image re-ranking approach for improving the relevancy rate in searches. In particular, we focus on the fundamental semantic gap that exists between the low-level visual features of the image and high-level textual queries by dynamically maintaining a connected hierarchy in the form of a concept database. For each textual query, we take the results from popular search engines as an initial retrieval, followed by a semantic analysis that maps the textual query to higher-level concepts. To do this, we design a two-layer scoring system which can automatically identify the relationship between the query and the concepts. We then calculate the image feature vectors and compare them with the classifier for each related concept. An image is relevant only when it is related to the query both semantically and content-wise. The second feature of this work is that we loosen the requirement for query accuracy from the user, which makes it possible to perform well on user queries containing less relevant information. Thirdly, the concept database can be dynamically maintained to satisfy variations in user queries, which eliminates the need for human labor in building a sophisticated initial concept database. We designed our experiment using complex queries (based on five scenarios) to demonstrate that our retrieval results are a significant improvement over those obtained from current state-of-the-art image search engines.
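A minimal sketch of the "relevant only when related both semantically and content-wise" re-ranking rule: each candidate from the initial text search carries a query-to-concept score and a concept-classifier score, and only images passing both checks are kept and ranked. The scores, thresholds, and combination by product are illustrative assumptions, not the paper's two-layer scoring system.

```python
# Hypothetical scores for images returned by an initial text search:
# concept_score -- how strongly the query maps to a concept the image is associated with,
# visual_score  -- the concept classifier's confidence on the image's feature vector.
candidates = {
    "img_01": {"concept_score": 0.9, "visual_score": 0.8},
    "img_02": {"concept_score": 0.9, "visual_score": 0.2},   # semantically related, wrong content
    "img_03": {"concept_score": 0.1, "visual_score": 0.9},   # right-looking content, wrong concept
}

SEMANTIC_T, VISUAL_T = 0.5, 0.5   # illustrative thresholds

def rerank(cands):
    """Keep images relevant both semantically and content-wise, then rank by combined score."""
    kept = {k: v for k, v in cands.items()
            if v["concept_score"] >= SEMANTIC_T and v["visual_score"] >= VISUAL_T}
    return sorted(kept, key=lambda k: kept[k]["concept_score"] * kept[k]["visual_score"], reverse=True)

print(rerank(candidates))   # only img_01 survives both checks
```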

4.
In recent years, feedback approaches have been used to relate low-level image features with concepts in order to overcome the subjective nature of human image interpretation. Generally, in these systems, when the user starts with a new query the entire prior experience of the system is lost. In this paper, we address the problem of incorporating the prior experience of the retrieval system to improve performance on future queries. We propose a semi-supervised fuzzy clustering method to learn class distributions (meta-knowledge), in the sense of high-level concepts, from retrieval experience. Using fuzzy rules, we incorporate the meta-knowledge into a probabilistic feature relevance feedback approach to improve retrieval performance. Results on synthetic and real databases show that our approach provides better retrieval precision than the case when no retrieval experience is used.
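To make the clustering step concrete, here is a small plain fuzzy c-means implementation (not the semi-supervised variant the paper proposes): the soft membership matrix it produces is the kind of class-distribution meta-knowledge a later relevance feedback step could reuse. The feature vectors, cluster count, and fuzzifier m are illustrative assumptions.

```python
import numpy as np

def fuzzy_cmeans(x, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means: returns cluster centers and the fuzzy membership matrix U."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                       # memberships of each point sum to 1
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]      # weighted cluster centers
        dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1))
        u_new = inv / inv.sum(axis=1, keepdims=True)        # standard FCM membership update
        if np.abs(u_new - u).max() < tol:
            return centers, u_new
        u = u_new
    return centers, u

# Hypothetical low-level image feature vectors from two rough "concepts".
features = np.vstack([np.random.default_rng(1).normal(0, 1, (20, 8)),
                      np.random.default_rng(2).normal(3, 1, (20, 8))])
centers, memberships = fuzzy_cmeans(features, n_clusters=2)
print(memberships[:3].round(2))   # soft concept distribution per image
```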

5.
6.
Time plays an important role in Web search, because most Web pages contain temporal information and many Web queries are time-related. How to integrate temporal information into Web search engines has been a research focus in recent years. However, traditional search engines provide little support for processing temporal-textual Web queries. Aiming at this problem, in this paper we concentrate on extracting the focused time of Web pages, i.e., the most appropriate time associated with a Web page, and then use the focused time to improve search for time-sensitive queries. In particular, three critical issues are studied in depth. The first is to extract implicit temporal expressions from Web pages. The second is to determine the focused time among all the extracted temporal information, and the last is to integrate the focused time into a search engine. For the first issue, we propose a new dynamic approach to resolve implicit temporal expressions in Web pages. For the second issue, we present a score model to determine the focused time of Web pages; the score model takes into account both the frequency of temporal information in Web pages and the containment relationships among temporal expressions. For the third issue, we combine the textual similarity and the temporal similarity between queries and documents in the ranking process. To evaluate the effectiveness and efficiency of the proposed approaches, we build a prototype system called Time-Aware Search Engine (TASE). TASE is able to extract both explicit and implicit temporal expressions from Web pages, calculate the relevance score between Web pages and each temporal expression, and re-rank search results based on the temporal-textual relevance between Web pages and queries. Finally, we conduct experiments on real data sets. The results show that our approach has high accuracy in resolving implicit temporal expressions and extracting the focused time, and better ranking effectiveness for time-sensitive Web queries than competing algorithms.
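The sketch below illustrates only the final ranking step: a temporal similarity computed from the overlap between the query's time interval and a page's focused time, linearly combined with a textual similarity. The interval-overlap measure, the weight alpha, and the example dates are illustrative assumptions, not TASE's actual scoring model.

```python
from datetime import date

def interval_overlap_similarity(a_start, a_end, b_start, b_end):
    """Temporal similarity as the overlap of two date intervals, normalized by the query span."""
    overlap = (min(a_end, b_end) - max(a_start, b_start)).days + 1
    span = (a_end - a_start).days + 1
    return max(overlap, 0) / span

def time_aware_score(text_sim, temporal_sim, alpha=0.6):
    """Linear combination of textual and temporal relevance (alpha is an illustrative weight)."""
    return alpha * text_sim + (1 - alpha) * temporal_sim

# Hypothetical time-sensitive query interval and a page's extracted focused time.
query_interval = (date(2010, 6, 1), date(2010, 7, 31))
page_focused_time = (date(2010, 6, 11), date(2010, 7, 11))
t_sim = interval_overlap_similarity(*query_interval, *page_focused_time)
print(time_aware_score(text_sim=0.8, temporal_sim=t_sim))
```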

7.
In this paper, we present a new method for fuzzy query processing for document retrieval based on extended fuzzy concept networks. In an extended fuzzy concept network there are four kinds of fuzzy relationships between concepts: fuzzy positive association, fuzzy negative association, fuzzy generalization, and fuzzy specialization. An extended fuzzy concept network can be modeled by a relation matrix and a relevance matrix, where the elements of the relation matrix represent the fuzzy relationships between concepts and the elements of the relevance matrix indicate the degrees of relevance between concepts. The implicit fuzzy relationships between concepts can be inferred from the transitive closure of the relation matrix, and the implicit degrees of relevance between concepts can likewise be inferred from the transitive closure of the relevance matrix. The proposed method allows users to perform positive queries, negative queries, generalization queries, and specialization queries, enabling fuzzy querying in a more flexible and more intelligent manner.
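A minimal sketch of the transitive-closure inference step: the max-min transitive closure of a fuzzy relevance matrix surfaces implicit concept-to-concept relevance degrees that are not stated directly. The 3-concept matrix is an illustrative assumption; the paper's matrices also encode the four relationship types, which this sketch omits.

```python
import numpy as np

def maxmin_compose(a, b):
    """Max-min composition of two fuzzy relation matrices."""
    return np.max(np.minimum(a[:, :, None], b[None, :, :]), axis=1)

def transitive_closure(r, tol=1e-9):
    """Iterate R <- R ∪ (R ∘ R) until it stops changing (max-min transitive closure)."""
    closure = r.copy()
    while True:
        nxt = np.maximum(closure, maxmin_compose(closure, closure))
        if np.max(np.abs(nxt - closure)) < tol:
            return nxt
        closure = nxt

# Hypothetical 3-concept relevance matrix (degrees of relevance between concepts).
relevance = np.array([
    [1.0, 0.8, 0.0],
    [0.0, 1.0, 0.6],
    [0.0, 0.0, 1.0],
])
print(transitive_closure(relevance))   # implicit relevance of concept 0 to 2 becomes min(0.8, 0.6) = 0.6
```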

8.
Content based image retrieval via a transductive model
Content based image retrieval plays an important role in the management of large image databases. However, the results of state-of-the-art image retrieval approaches are not satisfactory, owing to the well-known gap between visual features and semantic concepts. Therefore, a novel transductive learning scheme, the random walk with restart based method (RWRM), is proposed, consisting of three major components: pre-filtering, relevance score calculation, and candidate ranking refinement. Firstly, to deal with the large computational cost involved in a large image database, a pre-filtering step is used to discard the most irrelevant images while keeping the most relevant ones, according to the results of a manifold ranking algorithm. Secondly, the relevance between the query image and each remaining image is obtained via probability density estimation. Finally, a transductive learning model, namely a random walk with restart model, is used to refine the ranking, taking into account both the pairwise information among unlabeled images and the relevance scores between the query image and the unlabeled images. Experiments conducted on a typical Corel dataset demonstrate the effectiveness of the proposed scheme.
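A minimal sketch of the random-walk-with-restart refinement: the steady-state probabilities of a walk over a pairwise affinity matrix, restarting at the query, give refined ranking scores. The affinity values, the restart probability, and the column-normalization choice are illustrative assumptions.

```python
import numpy as np

def random_walk_with_restart(w, seed, restart=0.15, tol=1e-8, max_iter=1000):
    """Iterate r <- (1 - c) * W_norm @ r + c * e until convergence (power-iteration form of RWR)."""
    col_sums = w.sum(axis=0)
    w_norm = w / np.where(col_sums == 0, 1, col_sums)   # column-normalize the affinity matrix
    e = np.zeros(w.shape[0])
    e[seed] = 1.0                                        # restart vector concentrated on the query node
    r = e.copy()
    for _ in range(max_iter):
        r_next = (1 - restart) * w_norm @ r + restart * e
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r

# Hypothetical pairwise affinity matrix over 4 candidate images; index 0 plays the query's role.
affinity = np.array([
    [0.0, 0.9, 0.1, 0.0],
    [0.9, 0.0, 0.5, 0.1],
    [0.1, 0.5, 0.0, 0.7],
    [0.0, 0.1, 0.7, 0.0],
])
scores = random_walk_with_restart(affinity, seed=0)
print(np.argsort(-scores))   # refined ranking of candidates by steady-state probability
```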

9.
Nowadays, due to the rapid growth of digital technologies, huge volumes of image data are created and shared on social media sites. User-provided tags attached to each social image are widely recognized as a bridge to fill the semantic gap between low-level image features and high-level concepts. Hence, a combination of images along with their corresponding tags is useful for intelligent retrieval systems, which are designed to gain high-level understanding from images and facilitate semantic search. However, user-provided tags are in practice usually incomplete and noisy, which may degrade retrieval performance. To tackle this problem, we present a novel retrieval framework that automatically associates visual content with textual tags and enables effective image search. To this end, we first propose a probabilistic topic model learned on social images to discover latent topics from the co-occurrence of tags and image features. Moreover, our topic model is built by exploiting expert knowledge about the correlation between tags and visual content and about the relationships among image features, formulated in terms of spatial location and color distribution. The discovered topics then help to predict missing tags of an unseen image, as well as tags of images only partially labeled in the database. These predicted tags greatly facilitate a reliable measure of semantic similarity between the query and database images. We therefore further present a scoring scheme that estimates this similarity by fusing textual tags and visual representations. Extensive experiments conducted on three benchmark datasets show that our topic model provides accurate annotation despite the noise and incompleteness of tags. Using our generalized scoring scheme, which is particularly advantageous for many types of queries, the proposed approach also outperforms state-of-the-art approaches in terms of retrieval accuracy.
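The snippet below sketches only the final fused scoring idea: tag-based (Jaccard) similarity combined with visual (cosine) similarity, where the tag set has been completed by predicted tags standing in for the topic model's output. The tags, feature vectors, and mixing weight beta are illustrative assumptions.

```python
import numpy as np

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def fused_score(query_tags, query_feat, image_tags, image_feat, beta=0.5):
    """Fuse textual-tag similarity and visual similarity (beta is an illustrative mixing weight)."""
    return beta * jaccard(query_tags, image_tags) + (1 - beta) * cosine(query_feat, image_feat)

# Hypothetical database image whose noisy user tags were completed by predicted tags
# (the predicted set stands in for the topic model's output).
observed_tags  = {"beach"}
predicted_tags = {"beach", "sunset", "sea"}
query_tags, query_feat = {"sunset", "sea"}, np.array([0.2, 0.7, 0.1])
image_feat = np.array([0.25, 0.65, 0.05])

print(fused_score(query_tags, query_feat, observed_tags, image_feat))                    # sparse tags only
print(fused_score(query_tags, query_feat, observed_tags | predicted_tags, image_feat))   # after tag completion
```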

10.
We present an approach for image retrieval using a very large number of highly selective features and efficient learning of queries. Our approach is predicated on the assumption that each image is generated by a sparse set of visual “causes” and that images which are visually similar share causes. We propose a mechanism for computing a very large number of highly selective features which capture some aspects of this causal structure (in our implementation there are over 46,000 highly selective features). At query time a user selects a few example images, and the AdaBoost algorithm is used to learn a classification function which depends on a small number of the most appropriate features. This yields a highly efficient classification function. In addition, we show that the AdaBoost framework provides a natural mechanism for the incorporation of relevance feedback. Finally, we show results on a wide variety of image queries.
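A minimal sketch of the query-time learning step, using scikit-learn's AdaBoost over decision stumps in place of the paper's own implementation: a handful of user-selected examples are treated as positives, the rest of the database as negatives, and the learned classifier ranks the whole collection. The random binary features, their number, and the negative-sampling choice are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Hypothetical binary "highly selective" features for a small image database
# (the paper uses tens of thousands of such features; 200 here just for illustration).
n_images, n_features = 500, 200
features = (rng.random((n_images, n_features)) < 0.05).astype(int)   # sparse binary "causes"

# The user marks a few query examples as relevant; the rest serve as negatives.
relevant_idx = [3, 17, 42]
labels = np.zeros(n_images, dtype=int)
labels[relevant_idx] = 1

# AdaBoost over decision stumps picks out a small number of discriminative features.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(features, labels)

# Rank the whole database by the classifier's confidence for the "relevant" class.
scores = clf.decision_function(features)
ranking = np.argsort(-scores)
print(ranking[:10])
```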

11.
Image retrieval using nonlinear manifold embedding
Can, Jun, Xiaofei, Chun, Jiajun. Neurocomputing, 2009, 72(16-18): 3922
The huge number of images on the Web has given rise to content-based image retrieval (CBIR), as text-based search techniques cannot cater to the need to retrieve Web images precisely. However, CBIR comes with a fundamental flaw: the semantic gap between high-level semantic concepts and low-level visual features. Consequently, relevance feedback is introduced into CBIR to learn the subjective needs of users. However, in practical applications the limited amount of user feedback is usually overwhelmed by the high dimensionality of the visual feature space. To address this issue, a novel semi-supervised learning method for dimensionality reduction, namely kernel maximum margin projection (KMMP), is proposed in this paper based on our previous work on maximum margin projection (MMP). Unlike traditional dimensionality reduction algorithms such as principal component analysis (PCA) and linear discriminant analysis (LDA), which consider only the global Euclidean structure, KMMP is designed for discovering the local manifold structure. After projecting the images into a lower-dimensional subspace, KMMP significantly improves the performance of image retrieval. The experimental results on the Corel image database demonstrate the effectiveness of our proposed nonlinear algorithm.

12.
The exponential growth of information on the Web has introduced new challenges for building effective search engines. A major problem of Web search is that search queries are usually short and ambiguous, and thus are insufficient for specifying precise user needs. To alleviate this problem, some search engines suggest terms that are semantically related to the submitted queries so that users can choose from the suggestions the ones that reflect their information needs. In this paper, we introduce an effective approach that captures the user's conceptual preferences in order to provide personalized query suggestions. We achieve this goal with two new strategies. First, we develop online techniques that extract concepts from the web-snippets of the search results returned for a query and use those concepts to identify related queries. Second, we propose a new two-phase personalized agglomerative clustering algorithm that is able to generate personalized query clusters. To the best of the authors' knowledge, no previous work has addressed personalization for query suggestions. To evaluate the effectiveness of our technique, a Google middleware was developed for collecting clickthrough data to conduct the experimental evaluation. Experimental results show that our approach has better precision and recall than existing query clustering methods.
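To illustrate the clustering side, the sketch below groups queries by the overlap of concepts extracted from their result snippets, using standard average-linkage agglomerative clustering from SciPy (not the paper's two-phase personalized algorithm, which also folds in clickthrough data). The queries, concept sets, and distance cut-off are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical queries with concepts extracted from their result web-snippets.
query_concepts = {
    "apple": {"fruit", "nutrition", "iphone", "mac"},
    "apple pie": {"fruit", "recipe", "baking"},
    "macbook": {"iphone", "mac", "laptop"},
    "banana bread": {"recipe", "baking", "fruit"},
}
queries = list(query_concepts)

def jaccard_distance(a, b):
    return 1.0 - len(a & b) / len(a | b)

# Pairwise concept-based distances in condensed form for scipy's linkage().
n = len(queries)
dist = np.array([jaccard_distance(query_concepts[queries[i]], query_concepts[queries[j]])
                 for i in range(n) for j in range(i + 1, n)])

# Average-linkage agglomerative clustering; clusters group queries sharing dominant concepts.
labels = fcluster(linkage(dist, method="average"), t=0.7, criterion="distance")
for q, c in zip(queries, labels):
    print(c, q)
```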

13.
Most Web search engines use the content of Web documents and their link structures to assess the relevance of a document to the user's query. With the growth of the information available on the Web, it becomes difficult for such search engines to satisfy the user information need expressed by a few keywords. Personalized information retrieval is a promising way to resolve this problem by modeling the user profile from the user's general interests and then integrating it into a personalized document ranking model. In this paper, we present a personalized search approach that involves a graph-based representation of the user profile. The user profile refers to the user's interest in a specific search session, defined as a sequence of related queries. It is built by means of score propagation, which activates a set of semantically related concepts of a reference ontology, namely the ODP. The user profile is maintained across related search activities using a graph-based merging strategy. For the purpose of detecting related search activities, we define a session boundary recognition mechanism based on the Kendall rank correlation measure, which tracks changes in the dominant concepts held by the user profile relative to a newly submitted query. Personalization is performed by re-ranking the search results of related queries using the user profile. Our experimental evaluation is carried out on the HARD 2003 TREC collection and shows that our session boundary recognition mechanism based on the Kendall measure provides significantly better precision than non-rank-based measures such as the cosine and WebJaccard similarity measures. Moreover, the results show that graph-based search personalization is effective for improving search accuracy.
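A minimal sketch of the session-boundary idea: Kendall's tau between the concept ranking held by the current user profile and the concept ranking activated by a new query; a weak or negative correlation signals a topic shift and hence a new session. The concept weights and the threshold are illustrative assumptions, not the paper's tuned values.

```python
from scipy.stats import kendalltau

# Hypothetical concept weights: current user profile vs. concepts activated by a new query.
profile_weights = {"travel": 0.9, "hotels": 0.7, "flights": 0.5, "museums": 0.2}
query_weights   = {"travel": 0.3, "hotels": 0.1, "flights": 0.2, "museums": 0.8}

# Compare the two rankings of the shared concepts with Kendall's tau.
concepts = sorted(set(profile_weights) & set(query_weights))
tau, _ = kendalltau([profile_weights[c] for c in concepts],
                    [query_weights[c] for c in concepts])

# A weak or negative correlation suggests the dominant concepts changed: start a new session.
SESSION_THRESHOLD = 0.2   # illustrative cut-off
print("new session" if tau < SESSION_THRESHOLD else "same session")
```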

14.
We have developed a novel system for content-based image retrieval in large, unannotated databases. The system is called PicSOM, and it is based on tree-structured self-organizing maps (TS-SOMs). Given a set of reference images, PicSOM is able to retrieve another set of images similar to the given ones. Each TS-SOM is formed with a different image feature representation, such as color, texture, or shape. A new technique introduced in PicSOM facilitates the automatic combination of responses from multiple TS-SOMs and their hierarchical levels. This mechanism adapts to the user's preferences in selecting which images resemble each other, and thus implements a relevance feedback technique for content-based image retrieval. The image queries are performed through the World Wide Web and are iteratively refined as the system exposes more images to the user.
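For context, here is a tiny plain self-organizing map trained on image feature vectors; images mapping to the same or neighboring map units are treated as visually similar, which is the building block that PicSOM's tree-structured SOMs and response-combination scheme build upon. The map size, learning schedule, and random features are illustrative assumptions, and this is not the TS-SOM hierarchy itself.

```python
import numpy as np

def train_som(data, grid_size=5, epochs=30, lr=0.5, sigma=1.5, seed=0):
    """Train a small 2-D self-organizing map (a plain SOM, not PicSOM's tree-structured variant)."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_size, grid_size, data.shape[1]))
    ys, xs = np.mgrid[0:grid_size, 0:grid_size]
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs
        for x in data[rng.permutation(len(data))]:
            # Best-matching unit for this feature vector.
            d = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Neighborhood function pulls nearby map units toward the input.
            h = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * (sigma * decay + 1e-3) ** 2))
            weights += (lr * decay) * h[:, :, None] * (x - weights)
    return weights

# Hypothetical feature vectors for a small image set; nearby best-matching units mean similar images.
features = np.random.default_rng(1).random((200, 8))
som = train_som(features)
bmu = lambda x: np.unravel_index(np.argmin(np.linalg.norm(som - x, axis=2)), som.shape[:2])
print(bmu(features[0]), bmu(features[1]))
```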

15.
In this paper, we propose a multimodal query suggestion method for video search which leverages multimodal processing to improve the quality of search results. When users type general or ambiguous textual queries, our system, MQSS, provides keyword suggestions and representative image examples in an easy-to-use dropdown manner, helping users specify their search intent more precisely and effortlessly. It is a powerful complement to the initial query. After a query is reformulated as a multimodal query (i.e., text plus image), it is fed into individual search models, such as the text-based, concept-based, and visual example-based search models. We then apply a multimodal fusion method to aggregate these search results. The effectiveness of MQSS is demonstrated by evaluations on a Web video data set.
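A minimal sketch of the fusion step: each search model's scores are min-max normalized and combined in a weighted sum to produce one overall ranked list. The per-model scores and the fusion weights are illustrative assumptions, not MQSS's learned values.

```python
def min_max_normalize(scores):
    """Scale one model's scores to [0, 1] so they can be compared across models."""
    lo, hi = min(scores.values()), max(scores.values())
    return {d: (s - lo) / (hi - lo) if hi > lo else 0.0 for d, s in scores.items()}

def late_fusion(model_scores, weights):
    """Weighted linear fusion of per-model scores into one ranked list."""
    fused = {}
    for model, scores in model_scores.items():
        for doc, s in min_max_normalize(scores).items():
            fused[doc] = fused.get(doc, 0.0) + weights[model] * s
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical results from the three search models run on a multimodal (text + image) query.
results = {
    "text":    {"v1": 8.2, "v2": 6.0, "v3": 1.5},
    "concept": {"v1": 0.7, "v3": 0.9},
    "visual":  {"v2": 0.95, "v3": 0.6},
}
print(late_fusion(results, weights={"text": 0.4, "concept": 0.3, "visual": 0.3}))
```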

16.
17.
Data integration systems on the Deep Web offer a transparent means to query multiple data sources at once. Result merging, the generation of an overall ranked list of results from different sources in response to a query, is a key component of a data integration system. In this work we present a result merging model called the Active Relevance Weight Estimation model. Unlike existing result merging techniques, we estimate the relevance of a data source for answering a query at query time. The relevances of a set of data sources are expressed with a (normalized) weighting scheme: the larger the weight of a data source, the more relevant the source is for the query. We estimate the weights of a data source in each subset of the data sources involved in a training query. Because an online query may not exactly match any training query, we devise methods to obtain a subset of training queries that are related to the online query, and estimate the relevance weights of the online query from the weights of this subset. Our experiments show that our method outperforms leading merging algorithms with comparable response time.
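A minimal sketch of weight-based merging: source weights for the online query are estimated by averaging the weights learned for its most similar training queries, and each result is then scored by its source weight discounted by rank. The training weights, the token-overlap similarity, k, and the rank-discount formula are illustrative assumptions, not the paper's estimation model.

```python
def estimate_weights(online_query, training_weights, similarity, k=2):
    """Average the source weights of the k training queries most similar to the online query."""
    related = sorted(training_weights, key=lambda q: similarity(online_query, q), reverse=True)[:k]
    sources = training_weights[related[0]].keys()
    return {s: sum(training_weights[q][s] for q in related) / len(related) for s in sources}

def merge(source_results, weights):
    """Score each result by (source weight / rank) and return one overall ranked list."""
    scored = {}
    for source, docs in source_results.items():
        for rank, doc in enumerate(docs, start=1):
            scored[doc] = max(scored.get(doc, 0.0), weights[source] / rank)
    return sorted(scored, key=scored.get, reverse=True)

def token_overlap(a, b):
    return len(set(a.split()) & set(b.split()))

# Hypothetical per-query source weights learned offline from training queries.
training_weights = {
    "cheap flights london": {"sourceA": 0.7, "sourceB": 0.3},
    "hotel deals london":   {"sourceA": 0.4, "sourceB": 0.6},
}
weights = estimate_weights("cheap hotel london", training_weights, token_overlap)
print(merge({"sourceA": ["d1", "d2"], "sourceB": ["d3", "d1"]}, weights))
```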

18.
Many recent state-of-the-art image retrieval approaches are based on the Bag-of-Visual-Words model and represent an image with a set of visual words obtained by quantizing local SIFT (scale-invariant feature transform) features. Feature quantization reduces the discriminative power of local features and unavoidably causes many false local matches between images, which degrades retrieval accuracy. To filter out those false matches, the geometric context among visual words has been widely explored for verifying geometric consistency. However, existing studies with global or local geometric verification are either computationally expensive or achieve limited accuracy. To address this issue, in this paper we focus on partial-duplicate Web image retrieval and propose a scheme to encode the spatial context for visual matching verification. An efficient affine enhancement scheme is proposed to refine the verification results. Experiments on partial-duplicate Web image search, using a database of one million images, demonstrate the effectiveness and efficiency of the proposed approach. Evaluation on a 10-million image database further confirms the scalability of our approach.

19.
To address the gap between low-level features and high-level semantics in automatic image annotation, an image annotation refinement algorithm based on the random dot product graph is proposed. The algorithm first uses low-level image features to build a semantic relation graph over the candidate annotation words, then randomly reconstructs the graph with a random dot product graph so as to mine the semantic relations missing from the training image set, and finally applies a random walk with restart algorithm to refine the annotations. By combining low-level image features with high-level semantics, the algorithm effectively reduces the impact that a smaller image set has on annotation. Experiments on three general-purpose image databases show that the algorithm effectively improves image annotation, with macro-F and micro-averaged F scores reaching up to 0.784 and 0.743, respectively.

20.
Series feature aggregation for content-based image retrieval
Feature aggregation is a critical technique in content-based image retrieval (CBIR) systems that employ multiple visual features to characterize image content. Most previous feature aggregation schemes apply a parallel topology, e.g., the linear combination scheme, and suffer from two problems. First, the role of each individual visual feature is limited, since the ranks of the retrieved images are determined only by the combined similarity. Second, irrelevant images seriously affect the retrieval performance of the aggregation scheme, since all images in the collection are ranked. To address these problems, we propose a new feature aggregation scheme, series feature aggregation (SFA). SFA applies the visual features one by one in series, each selecting relevant images from those ranked highly by the previous feature. Irrelevant images are effectively filtered out by individual visual features at each stage, and the remaining images are collectively described by all visual features. Experiments conducted on the IAPR TC-12 benchmark image collection (ImageCLEF 2006), which contains over 20,000 photographic images and defined queries, show that the proposed SFA outperforms conventional parallel feature aggregation schemes.
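A minimal sketch of series aggregation: candidates are filtered by each visual feature in turn, each stage ranking only the images kept by the previous stage, and the survivors form the final result. The cosine similarity, the random color/texture features, and the stage cut-offs are illustrative assumptions.

```python
import numpy as np

def rank_by(feature_db, query_vec, candidates):
    """Rank candidate image ids by cosine similarity in one visual feature space."""
    sims = {i: float(np.dot(feature_db[i], query_vec) /
                     (np.linalg.norm(feature_db[i]) * np.linalg.norm(query_vec)))
            for i in candidates}
    return sorted(sims, key=sims.get, reverse=True)

def series_aggregation(feature_dbs, query_vecs, all_ids, keep=(100, 20)):
    """Filter candidates with each visual feature in series, keeping only the images
    ranked highly by the previous stage (cut-off sizes are illustrative)."""
    candidates = list(all_ids)
    for (name, db), k in zip(feature_dbs.items(), keep):
        candidates = rank_by(db, query_vecs[name], candidates)[:k]
    return candidates

# Hypothetical color and texture features for a 500-image collection and one query image.
rng = np.random.default_rng(0)
ids = range(500)
dbs = {"color":   {i: rng.random(8) for i in ids},
       "texture": {i: rng.random(16) for i in ids}}
query = {"color": rng.random(8), "texture": rng.random(16)}
print(series_aggregation(dbs, query, ids)[:5])
```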
