Similar documents
20 similar documents found (search time: 281 ms)
1.
Similarity queries on complex objects are usually translated into searches among their feature vectors. This paper studies indexing techniques for very high-dimensional (e.g., in the hundreds) vectors that are sparse or quasi-sparse, i.e., vectors each having only a small number (e.g., ten) of non-zero or significant values. Based on the R-tree, the paper introduces the xS-tree, which uses lossy compression of bounding regions to guarantee a reasonable minimum fan-out within the storage space allocated to each node. In addition, the paper studies the performance and scalability of the xS-tree via experiments.
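The central trick is that a bounding region over sparse vectors only needs entries for the dimensions that are actually used, and those entries can be compressed lossily so each node entry fits a fixed budget. The sketch below (a per-dimension min/max table with conservatively quantized bounds) is our own illustration of that idea, not the exact xS-tree encoding; all names and parameters are assumptions.

```python
import math

def bounding_region(sparse_vectors):
    """Exact bounding region of a set of sparse vectors.

    Each vector is a dict {dimension: value}; absent dimensions are 0.
    Returns {dimension: (low, high)} over the union of non-zero dimensions.
    """
    region = {}
    for vec in sparse_vectors:
        for d, v in vec.items():
            lo, hi = region.get(d, (0.0, 0.0))   # 0.0 accounts for vectors missing d
            region[d] = (min(lo, v), max(hi, v))
    return region

def compress_region(region, bits=8, value_range=(0.0, 1.0)):
    """Lossy compression: quantize each bound to `bits` bits.

    Lows are rounded down and highs rounded up, so the compressed region
    still encloses the exact one and never causes false drops."""
    lo_r, hi_r = value_range
    levels = (1 << bits) - 1
    scale = levels / (hi_r - lo_r)
    return {d: (math.floor((lo - lo_r) * scale), math.ceil((hi - lo_r) * scale))
            for d, (lo, hi) in region.items()}

def may_contain(compressed, query, bits=8, value_range=(0.0, 1.0)):
    """Filter step: can `query` (a sparse dict) fall inside the compressed region?"""
    lo_r, hi_r = value_range
    scale = ((1 << bits) - 1) / (hi_r - lo_r)
    for d, v in query.items():
        q = (v - lo_r) * scale
        lo, hi = compressed.get(d, (0, 0))
        if q < lo or q > hi:
            return False
    return True

if __name__ == "__main__":
    data = [{3: 0.9, 17: 0.2}, {3: 0.4, 250: 0.7}]
    comp = compress_region(bounding_region(data))
    print(may_contain(comp, {3: 0.5}))        # True: cannot be ruled out
    print(may_contain(comp, {999: 0.8}))      # False: dimension 999 is never active
```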

2.
Spatial indexing of high-dimensional data based on relative approximation
We propose a novel index structure, the A-tree (approximation tree), for similarity searches in high-dimensional data. The basic idea of the A-tree is the introduction of virtual bounding rectangles (VBRs), which contain and approximate MBRs or data objects. VBRs can be represented quite compactly and thus affect the tree configuration both quantitatively and qualitatively. First, since tree nodes can contain a large number of VBR entries, fanout becomes large, which increases search speed. More importantly, we have a free hand in arranging MBRs and VBRs in the tree nodes. Each A-tree node contains an MBR and its children's VBRs. Therefore, by fetching an A-tree node, we can obtain information on the exact position of a parent MBR and the approximate positions of its children. We have performed experiments using both synthetic and real data sets. For the real data sets, the A-tree outperforms the SR-tree and the VA-file in all dimensionalities up to 64 dimensions, which is the highest dimension in our experiments. Additionally, we propose a cost model for the A-tree and verify its validity for synthetic and real data sets.
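The space saving of a VBR comes from encoding a child rectangle relative to its parent MBR with only a few bits per coordinate. The sketch below shows one plausible such quantization (lows rounded down, highs rounded up, so the decoded rectangle always encloses the exact one); the bit layout of the actual A-tree is not reproduced here.

```python
import math

def encode_vbr(parent_mbr, child_mbr, bits=6):
    """Encode a child MBR relative to its parent MBR (a VBR-style code).

    parent_mbr, child_mbr: lists of (low, high) per dimension.
    The rounding directions guarantee the decoded rectangle encloses the
    exact child MBR, so the approximation never causes false drops."""
    levels = (1 << bits) - 1
    code = []
    for (p_lo, p_hi), (c_lo, c_hi) in zip(parent_mbr, child_mbr):
        width = (p_hi - p_lo) or 1.0                      # guard zero-width extents
        q_lo = math.floor((c_lo - p_lo) / width * levels)
        q_hi = min(math.ceil((c_hi - p_lo) / width * levels), levels)
        code.append((q_lo, q_hi))
    return code

def decode_vbr(parent_mbr, code, bits=6):
    """Recover the approximate (slightly enlarged) child rectangle."""
    levels = (1 << bits) - 1
    rect = []
    for (p_lo, p_hi), (q_lo, q_hi) in zip(parent_mbr, code):
        width = (p_hi - p_lo) or 1.0
        rect.append((p_lo + q_lo / levels * width,
                     p_lo + q_hi / levels * width))
    return rect

if __name__ == "__main__":
    parent = [(0.0, 10.0), (0.0, 10.0)]
    child = [(2.3, 4.1), (7.7, 9.0)]
    code = encode_vbr(parent, child)
    print(code)                        # e.g. [(14, 26), (48, 57)] with 6-bit codes
    print(decode_vbr(parent, code))    # encloses the original child MBR
```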

3.
Discovery of a perceptual distance function for measuring image similarity
For more than a decade, researchers have actively explored the area of image/video analysis and retrieval. Yet one fundamental problem remains largely unsolved: how to measure perceptual similarity between two objects. For this purpose, most researchers employ a Minkowski-type metric. Unfortunately, the Minkowski metric does not reliably find similarities in objects that are obviously alike. Through mining a large set of visual data, our team has discovered a perceptual distance function. We call the discovered function the dynamic partial function (DPF). When we empirically compare DPF to Minkowski-type distance functions in image retrieval and in video shot-transition detection using our image features, DPF performs significantly better. The effectiveness of DPF can be explained by similarity theories in cognitive psychology.
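DPF, as defined in the corresponding paper, aggregates only the m smallest per-dimension differences instead of all of them, so two objects are compared on the features where they agree most. A minimal sketch follows; the parameter values m and r are illustrative.

```python
def dpf(x, y, m=100, r=2):
    """Dynamic partial function: Minkowski-style aggregation over only the
    m smallest per-dimension differences, ignoring the dimensions on which
    the two objects differ most."""
    diffs = sorted(abs(a - b) for a, b in zip(x, y))
    return sum(d ** r for d in diffs[:m]) ** (1.0 / r)

def minkowski(x, y, r=2):
    """Ordinary Minkowski (L_r) distance, for comparison."""
    return sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1.0 / r)

if __name__ == "__main__":
    import random
    random.seed(0)
    a = [random.random() for _ in range(144)]
    # b agrees with a on most dimensions but differs wildly on a few;
    # DPF with m < 144 largely ignores those outlier dimensions.
    b = [v + (5.0 if i < 5 else random.gauss(0, 0.01)) for i, v in enumerate(a)]
    print("Minkowski:", round(minkowski(a, b), 3))
    print("DPF(m=100):", round(dpf(a, b, m=100), 3))
```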

4.
In order to get useful information from various kinds of information sources, we first apply a searching process with query statements to retrieve candidate data objects (called a hunting process in this paper) and then apply a browsing process to check the properties of each object in detail by visualizing the candidates. In traditional information retrieval systems, the hunting process determines the quality of the result, since only a few candidates are left for the browsing process. When retrieving data from widely distributed digital libraries, however, the browsing process becomes very important, since the properties of the data sources are not known in advance. After getting data from various information sources, a user checks the properties of the data in detail using the browsing process. The result can be used to improve the hunting process or to select more appropriate visualization parameters. Visualizing relationships among the data is very important, but becomes too time-consuming if the candidate set is large, for example, over one hundred objects. One of the important problems in information retrieval from a digital library is therefore to create efficient and powerful visualization mechanisms for the browsing process. One promising way to solve the visualization problem is to map each candidate data object to a location in three-dimensional (3D) space using a proper distance definition. In this paper, we introduce the functions and organization of a system with a browsing navigator that achieves an efficient browsing process in a 3D information search space. This browsing navigator has the following major functions:
1. Selection of the features that determine the distance used for visualization, in order to generate a uniform distribution of candidate data objects in the resulting space.
2. Calculation of the locations of the data objects in 2D space using the selected features.
3. Construction of a 3D browsing space by combining 2D spaces, in order to find the required data objects easily.
4. Generation of oblique views of the 3D browsing space and data objects, reducing the overlap of data objects in order to make navigation easy for the user in 3D space.
Examples of this browsing navigator applied to book data are shown.

5.
Symbolic images are composed of a finite set of symbols that have a semantic meaning. Examples of symbolic images include maps (where the semantic meaning of the symbols is given in the legend), engineering drawings, and floor plans. Two approaches for supporting content-based queries on symbolic-image databases are studied. The classification approach preprocesses all symbolic images and attaches a semantic classification and an associated certainty factor to each object that it finds in the image. The abstraction approach describes each object in the symbolic image by a vector consisting of the values of some of its features (e.g., shape, genus). The approaches differ in the way in which responses to queries are computed. In the classification approach, images are retrieved on the basis of whether or not they contain objects that have the same classification as the objects in the query. In the abstraction approach, on the other hand, retrieval is based on the similarity of the feature-vector values of these objects. Methods of integrating these two approaches into a relational multimedia database management system, so that symbolic images can be stored and retrieved based on their content, are described. Schema definitions and indices that support query specifications involving spatial as well as contextual constraints are presented. Spatial constraints may be based on both locational information (e.g., distance) and relational information (e.g., north of). Different strategies for image retrieval for a number of typical queries using these approaches are described, and estimated costs are derived for them. Results are reported of a comparative study of the two approaches in terms of image insertion time, storage space, retrieval accuracy, and retrieval time.
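The difference between the two approaches can be stated in a few lines. The toy in-memory "database" below is our own simplification of the relational schema described in the paper; the field names and thresholds are illustrative.

```python
import math

# Toy "database": each record describes one object found in a symbolic image.
# The classification approach stores a class label plus a certainty factor;
# the abstraction approach stores a feature vector (e.g., shape, genus).
objects = [
    {"image": "map1.png", "label": "hospital", "certainty": 0.92, "features": [0.8, 0.1]},
    {"image": "map1.png", "label": "school",   "certainty": 0.75, "features": [0.2, 0.9]},
    {"image": "map2.png", "label": "hospital", "certainty": 0.60, "features": [0.7, 0.2]},
]

def query_by_classification(label, min_certainty=0.7):
    """Classification approach: retrieve images containing an object whose
    stored classification matches the query label."""
    return {o["image"] for o in objects
            if o["label"] == label and o["certainty"] >= min_certainty}

def query_by_abstraction(query_features, max_distance=0.3):
    """Abstraction approach: retrieve images containing an object whose
    feature vector is close to the query's feature vector."""
    return {o["image"] for o in objects
            if math.dist(o["features"], query_features) <= max_distance}

if __name__ == "__main__":
    print(query_by_classification("hospital"))   # {'map1.png'}
    print(query_by_abstraction([0.75, 0.15]))    # {'map1.png', 'map2.png'}
```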

6.
This paper presents a tunable content-based music retrieval (CBMR) system suitable for the retrieval of music audio clips. The audio clips are represented as extracted feature vectors. The CBMR system is expert-tunable by altering the feature space. The feature space is tuned according to expert-specified similarity criteria expressed in terms of clusters of similar audio clips. The main goal of tuning the feature space is to improve retrieval performance, since some features may have more impact on perceived similarity than others. The tuning process utilizes our genetic algorithm. The R-tree index for efficient retrieval of audio clips is based on the clustering of the feature vectors. For each cluster a minimal bounding rectangle (MBR) is formed, thus providing objects for indexing. Insertion of new nodes into the R-tree is performed efficiently using the Quadratic Split algorithm. Our CBMR system implements the point query and the n-nearest-neighbors query with O(log n) time complexity. Different objective functions based on cluster similarity and dissimilarity measures are used for the genetic algorithm; we have found that all of them have a similar impact on retrieval performance in terms of precision and recall. The paper includes experimental results measuring retrieval performance, which report a significant improvement over the untuned feature space.
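The tuning step amounts to searching for per-feature weights that pull the expert-labelled clusters together and push different clusters apart. The sketch below substitutes a simple hill climber for the paper's genetic algorithm and uses an illustrative intra-/inter-cluster distance ratio as the objective; all names and parameter values are assumptions.

```python
import math
import random

def weighted_dist(x, y, w):
    """Euclidean distance in a feature space rescaled by per-feature weights."""
    return math.sqrt(sum(wi * (a - b) ** 2 for wi, a, b in zip(w, x, y)))

def objective(weights, clusters):
    """Smaller is better: mean intra-cluster distance over mean inter-cluster
    distance on expert-labelled clusters (each cluster needs >= 2 members)."""
    intra, inter = [], []
    for i, ci in enumerate(clusters):
        for a in ci:
            intra += [weighted_dist(a, b, weights) for b in ci if b is not a]
            for cj in clusters[i + 1:]:
                inter += [weighted_dist(a, b, weights) for b in cj]
    if not inter or sum(inter) == 0:
        return float("inf")
    return (sum(intra) / len(intra)) / (sum(inter) / len(inter))

def tune(clusters, dims, iters=2000, seed=0):
    """Stand-in for the GA: hill climbing over non-negative feature weights."""
    rng = random.Random(seed)
    best_w = [1.0] * dims
    best = objective(best_w, clusters)
    for _ in range(iters):
        cand = [max(0.0, w + rng.gauss(0, 0.2)) for w in best_w]
        score = objective(cand, clusters)
        if score < best:
            best, best_w = score, cand
    return best_w, best

if __name__ == "__main__":
    rng = random.Random(42)
    # Two expert-defined clusters; feature 0 is informative, feature 1 is noise.
    c1 = [[0.1, rng.random()] for _ in range(5)]
    c2 = [[0.9, rng.random()] for _ in range(5)]
    w, score = tune([c1, c2], dims=2)
    print("learned weights:", [round(x, 2) for x in w], "objective:", round(score, 3))
```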

7.
The classical art gallery problem asks for a set of guards in a polygon P so that each point of P is seen by at least one guard. We introduce and explore the edge-covering problem, in which the guards are required to observe the edges of P (metaphorically, the paintings on the walls of the art gallery) and not necessarily every interior point. We compare minimum edge and interior covers for a given polygon and analyze the bounds and complexity of the edge-covering problem. We also introduce and analyze a restricted edge-covering problem, where full visibility of each edge from at least one guard is required. For this problem we present an algorithm that computes a set of regions where a minimum set of guards must be located. The algorithm can also deal with the external visibility of a set of polygons.

8.
Query by video clip
Typical digital video search is based on queries involving a single shot. We generalize this problem by allowing queries that involve a video clip (say, a 10-second video segment). We propose two schemes. (i) Retrieval based on key frames follows the traditional approach of identifying shots, computing key frames from a video, and then extracting image features around the key frames. For each key frame in the query, a similarity value (using color, texture, and motion) is obtained with respect to the key frames in the database video. Consecutive key frames in the database video that are highly similar to the query key frames are then used to generate the set of retrieved video clips. (ii) In retrieval using sub-sampled frames, we uniformly sub-sample the query clip as well as the database video, and retrieval is based on matching color and texture features of the sub-sampled frames. Initial experiments on two video databases (a basketball video with approximately 16,000 frames and a CNN news video with approximately 20,000 frames) show promising results. Additional experiments using segments from one basketball video as queries and a different basketball video as the database show the effectiveness of the feature representation and matching schemes.
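Scheme (ii) is essentially a sliding-window match between sub-sampled query frames and sub-sampled database frames. The sketch below uses colour histograms and histogram intersection only (the paper combines colour and texture features); the sampling step and scoring are illustrative choices.

```python
import numpy as np

def colour_histogram(frame, bins=8):
    """Coarse RGB histogram of one frame (frame: H x W x 3 uint8 array)."""
    hist, _ = np.histogramdd(frame.reshape(-1, 3),
                             bins=(bins, bins, bins), range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return float(np.minimum(h1, h2).sum())

def clip_query(query_frames, db_frames, step=10):
    """Sub-sample both the query clip and the database video every `step`
    frames, then slide the query over the database and score each offset."""
    q = [colour_histogram(f) for f in query_frames[::step]]
    d = [colour_histogram(f) for f in db_frames[::step]]
    scores = []
    for start in range(len(d) - len(q) + 1):
        window = d[start:start + len(q)]
        score = sum(intersection(a, b) for a, b in zip(q, window)) / len(q)
        scores.append((start * step, score))          # (frame offset, similarity)
    return sorted(scores, key=lambda s: -s[1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.integers(0, 256, size=(300, 24, 32, 3), dtype=np.uint8)
    query = video[120:180]                            # a 60-frame segment of the video
    print(clip_query(query, video)[:3])               # offset 120 should rank first
```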

9.
A Two-Stage Framework for Polygon Retrieval

10.
For some multimedia applications, it has been found that domain objects cannot be represented as feature vectors in a multidimensional space. Instead, pair-wise distances between data objects are the only input. To support content-based retrieval, one approach maps each object to a k-dimensional (k-d) point and tries to preserve the distances among the points. Existing spatial access methods such as R-trees and KD-trees can then support fast searching on the resulting k-d points. However, information loss is inevitable with such an approach, since the distances between data objects can only be preserved to a certain extent. Here we investigate the use of a distance-based indexing method. In particular, we apply the vantage point tree (vp-tree) method. Two important problems for the vp-tree method warrant further investigation: the n-nearest-neighbors search and the updating mechanisms. We study an n-nearest-neighbors search algorithm for the vp-tree, which is shown by experiments to scale up well with the size of the dataset and the desired number of nearest neighbors, n. Experiments also show that searching in the vp-tree is more efficient than in the R*-tree and the M-tree. Next, we propose solutions to the update problem for the vp-tree and show by experiments that the algorithms are efficient and effective. Finally, we investigate the problem of selecting vantage points, propose a few alternative methods, and study their impact on the number of distance computations.
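Because the vp-tree needs nothing but the pairwise distance function, it is easy to sketch. The construction and the n-nearest-neighbour search below are a generic textbook version with triangle-inequality pruning, not necessarily the exact algorithms studied in the paper.

```python
import heapq
import random
import statistics

class VPNode:
    __slots__ = ("point", "radius", "inside", "outside")
    def __init__(self, point, radius, inside, outside):
        self.point, self.radius = point, radius
        self.inside, self.outside = inside, outside

def build_vp_tree(points, dist, rng=None):
    """Recursively pick a vantage point and split the rest at the median distance."""
    rng = rng or random.Random(0)
    if not points:
        return None
    points = list(points)
    vp = points.pop(rng.randrange(len(points)))
    if not points:
        return VPNode(vp, 0.0, None, None)
    dists = [dist(vp, p) for p in points]
    mu = statistics.median(dists)
    inside = [p for p, d in zip(points, dists) if d <= mu]
    outside = [p for p, d in zip(points, dists) if d > mu]
    return VPNode(vp, mu, build_vp_tree(inside, dist, rng), build_vp_tree(outside, dist, rng))

def n_nearest(root, query, n, dist):
    """n-nearest-neighbour search with triangle-inequality pruning."""
    best = []                                    # max-heap of (-distance, point)
    def tau():
        return -best[0][0] if len(best) == n else float("inf")
    def search(node):
        if node is None:
            return
        d = dist(query, node.point)
        if d < tau():
            heapq.heappush(best, (-d, node.point))
            if len(best) > n:
                heapq.heappop(best)
        # Visit the more promising side first; prune the other side if it
        # cannot contain anything closer than the current n-th best distance.
        near, far = ((node.inside, node.outside) if d <= node.radius
                     else (node.outside, node.inside))
        search(near)
        if abs(d - node.radius) < tau():
            search(far)
    search(root)
    return sorted((-d, p) for d, p in best)

if __name__ == "__main__":
    rng = random.Random(1)
    pts = [tuple(rng.random() for _ in range(8)) for _ in range(2000)]
    def euclid(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    tree = build_vp_tree(pts, euclid)
    q = tuple(rng.random() for _ in range(8))
    found = n_nearest(tree, q, 5, euclid)
    exact = sorted((euclid(q, p), p) for p in pts)[:5]
    print([round(d, 4) for d, _ in found])
    print([round(d, 4) for d, _ in exact])       # should agree with the line above
```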

11.
A sweeping operation called polygonal extrusion is defined to improve the modeling power of CSG-based modeling. It is assumed that a 2D cross-sectional polygon (sweeping polygon) moves in space while its containing plane is kept orthogonal to the tangent direction of the trajectory curve, a planar polygonal chain having no self-intersections. The objective of the paper is to compute the boundary of the swept volume of the sweeping polygon as a set of polygons (or triangles). The most significant challenge to accomplishing this objective is the problem of trimming the swept volume. To solve the trimming problem, 2D-curve offsetting methods are employed. Two algorithms are presented for polygonal extrusion that are based on different offsetting methods, the Voronoi diagram and PWID offset. The proposed algorithms have been implemented and tested with various examples.

12.
We present an efficient and accurate method for retrieving images based on color similarity with a given query image or histogram. The method matches the query against parts of the image using histogram intersection. Efficient searching for the best matching subimage is done by pruning the set of subimages using upper-bound estimates. The method is fast, has high precision and recall, and also allows queries based on the positions of one or more objects in the database image. Experimental results are presented showing the efficiency of the proposed search method and the high precision and recall of retrieval.
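Two ingredients of the method can be sketched directly: histogram intersection as the match score, and an upper bound that lets whole images be discarded before any subimage is examined (every bin count of a subimage is at most the corresponding bin count of the whole image). The grey-level histograms, window size, and stride below are illustrative choices, not the exact scheme of the paper.

```python
import numpy as np

def histogram(img, bins=16):
    """Grey-level histogram of an image region (unnormalized bin counts)."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h

def intersection(q_hist, r_hist):
    """Unnormalized histogram intersection: number of query pixels matched."""
    return int(np.minimum(q_hist, r_hist).sum())

def best_subimage(q_hist, img, window, stride=8):
    """Exhaustive search for the window whose histogram best matches q_hist."""
    H, W = img.shape
    wh, ww = window
    best, best_pos = -1, None
    for y in range(0, H - wh + 1, stride):
        for x in range(0, W - ww + 1, stride):
            s = intersection(q_hist, histogram(img[y:y + wh, x:x + ww]))
            if s > best:
                best, best_pos = s, (y, x)
    return best, best_pos

def search(query_img, database, window, threshold=0.8):
    """Prune with an upper bound before the costly subimage search.

    intersection(query, whole image) bounds intersection(query, any subimage)
    from above, so images failing the bound can be skipped entirely."""
    q_hist = histogram(query_img)
    needed = threshold * query_img.size
    results = []
    for name, img in database.items():
        if intersection(q_hist, histogram(img)) < needed:
            continue                              # no subimage can reach the threshold
        score, pos = best_subimage(q_hist, img, window)
        if score >= needed:
            results.append((name, pos, score / query_img.size))
    return sorted(results, key=lambda r: -r[2])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = {f"img{i}": rng.integers(40 * i, 40 * i + 60, size=(128, 128), dtype=np.uint8)
          for i in range(5)}
    query = db["img2"][40:72, 40:72].copy()       # a 32x32 patch of one database image
    print(search(query, db, window=(32, 32)))     # only img2 survives the bound, at (40, 40)
```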

13.
The polygon package is a set of procedures which manipulate geometric objects in the 2D plane. The operations that can be performed on these objects, regarded as polygons, include intersection, union, asymmetric difference, inflation and deflation. Polygons may have both straight and circular arc edges and include holes within their boundaries. Using a binary tree of bounding boxes as a sorting aid in the centrally important procedure for calculating edge intersections results in a fast, efficient package.
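The use of bounding boxes as a cheap pretest before exact edge-intersection computation can be illustrated as follows. The sketch handles only straight edges and uses a flat list of boxes rather than the package's binary tree; all names are illustrative.

```python
def bbox(seg):
    """Axis-aligned bounding box of a segment ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

def boxes_overlap(a, b):
    """Cheap rejection test: do two bounding boxes intersect?"""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def segments_intersect(s, t):
    """Exact straight-segment intersection test via orientation signs."""
    def orient(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    def on_seg(p, q, r):          # r collinear with pq: is it between p and q?
        return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
                min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))
    p1, p2 = s
    p3, p4 = t
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True
    return any(d == 0 and on_seg(a, b, c) for d, a, b, c in
               [(d1, p3, p4, p1), (d2, p3, p4, p2), (d3, p1, p2, p3), (d4, p1, p2, p4)])

def edge_intersections(poly_a, poly_b):
    """All pairs of intersecting edges, with the box test as a prefilter."""
    edges_a = list(zip(poly_a, poly_a[1:] + poly_a[:1]))
    edges_b = list(zip(poly_b, poly_b[1:] + poly_b[:1]))
    boxes_b = [bbox(e) for e in edges_b]
    hits = []
    for ea in edges_a:
        ba = bbox(ea)
        for eb, bb in zip(edges_b, boxes_b):
            if boxes_overlap(ba, bb) and segments_intersect(ea, eb):
                hits.append((ea, eb))
    return hits

if __name__ == "__main__":
    square = [(0, 0), (4, 0), (4, 4), (0, 4)]
    triangle = [(2, 2), (6, 2), (4, 6)]
    for ea, eb in edge_intersections(square, triangle):
        print(ea, "intersects", eb)
```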

14.
In content-based image retrieval systems, the content of an image, such as color, shape, and texture, is used to retrieve images that are similar to a query image. Most of the existing work focuses on retrieval effectiveness, i.e., studies the accuracy (in terms of recall and precision) of using different representations of content. In this paper, we address the issue of retrieval efficiency, i.e., the speed of retrieval, since a slow system is not useful for large image databases. In particular, we look at using the shape feature as the content of an image and employ the centroid–radii model to represent the shape feature of objects in an image. This facilitates multi-resolution and similarity retrievals. Furthermore, using the model, the shape of an object can be transformed into a point in a high-dimensional data space, so any existing high-dimensional point index can be employed to speed up the retrieval of images. We propose a multi-level R-tree index, called the Nested R-trees (NR-trees), and compare its performance with that of the R-tree. Our experimental study shows that NR-trees can reduce the retrieval time significantly compared to the R-tree and facilitate similarity retrieval. We note that our NR-trees can also be used to index high-dimensional point data commonly found in many other applications.
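The centroid–radii representation is straightforward to sketch: measure the object's extent from its centroid at a fixed set of angles and treat the resulting radii as a point in a k-dimensional space, which any point index can then organize. The number of angles, the boundary sampling, and the normalization below are illustrative choices rather than the paper's exact parameters.

```python
import math

def boundary_points(polygon, samples_per_edge=50):
    """Densely sample points along the edges of a polygon (list of (x, y))."""
    pts = []
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        for i in range(samples_per_edge):
            t = i / samples_per_edge
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return pts

def centroid_radii(polygon, k=36):
    """Shape descriptor: the largest boundary radius in each of k angular
    bins around the centroid, normalized by the overall maximum radius so
    that the descriptor is scale-invariant."""
    pts = boundary_points(polygon)
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    radii = [0.0] * k
    for x, y in pts:
        r = math.hypot(x - cx, y - cy)
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        b = min(int(angle / (2 * math.pi) * k), k - 1)
        radii[b] = max(radii[b], r)
    r_max = max(radii) or 1.0
    return [r / r_max for r in radii]            # a point in k-dimensional space

def shape_distance(d1, d2):
    """Euclidean distance between descriptors; usable with any point index."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

if __name__ == "__main__":
    square = [(0, 0), (2, 0), (2, 2), (0, 2)]
    big_square = [(0, 0), (10, 0), (10, 10), (0, 10)]
    triangle = [(0, 0), (4, 0), (2, 3)]
    d_sq, d_big, d_tri = map(centroid_radii, (square, big_square, triangle))
    print(round(shape_distance(d_sq, d_big), 3))   # ~0: same shape at a different scale
    print(round(shape_distance(d_sq, d_tri), 3))   # clearly larger
```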

15.
16.
Optical character reader (OCR) misrecognition is a serious problem when OCR-recognized text is used for retrieval purposes in digital libraries. We have proposed fuzzy retrieval methods that, instead of correcting the errors manually, assume that errors remain in the recognized text, thereby reducing costs. The proposed methods generate multiple search terms for each input query term by referring to confusion matrices, which store all characters likely to be misrecognized and the respective probability of each misrecognition. The proposed methods can improve recall rates without decreasing precision rates. However, a few million search terms are occasionally generated in English-text fuzzy retrieval, which has an intolerable effect on retrieval speed. Therefore, this paper presents two remedies that reduce the number of generated search terms while maintaining retrieval effectiveness. One remedy is to restrict the number of errors included in each expanded search term; the other is to introduce another validity value, different from our conventional one. Experimental results indicate that the former remedy reduced the number of terms to about 50 and the latter to not more than 20.
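The term-expansion step, together with the first remedy (capping the number of misrecognized characters per generated term), can be sketched as follows. The tiny confusion matrix and the probability threshold are invented for illustration; a real matrix is estimated from OCR ground-truth data.

```python
from itertools import combinations, product

# Illustrative confusion "matrix": for each correct character, the characters
# an OCR engine is likely to output instead, with misrecognition probabilities.
CONFUSION = {
    "l": {"1": 0.10, "I": 0.05},
    "o": {"0": 0.08, "c": 0.02},
    "m": {"rn": 0.04},
}

def expand_term(term, max_errors=2, min_prob=0.001):
    """Generate fuzzy search terms for querying OCR-recognized text.

    At most `max_errors` characters of the query term are replaced by likely
    misrecognitions (remedy 1); variants whose combined misrecognition
    probability falls below `min_prob` are discarded."""
    confusable = [i for i, ch in enumerate(term) if ch in CONFUSION]
    variants = {term: 1.0}
    for n_errors in range(1, max_errors + 1):
        for positions in combinations(confusable, n_errors):
            choices = [CONFUSION[term[i]].items() for i in positions]
            for picked in product(*choices):
                prob = 1.0
                chars = list(term)
                for i, (wrong, p) in zip(positions, picked):
                    chars[i] = wrong
                    prob *= p
                variant = "".join(chars)
                if prob >= min_prob:
                    variants[variant] = max(variants.get(variant, 0.0), prob)
    return variants

if __name__ == "__main__":
    for variant, prob in sorted(expand_term("column").items(), key=lambda v: -v[1]):
        print(f"{variant:10s} {prob:.4f}")
```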

17.
Object detection and location from remote sensing (RS) images is challenging, computationally expensive, and labor intensive. Benefiting from research on convolutional neural networks (CNNs), the performance in this field has improved in recent years. However, object detection methods based on CNNs require a large number of images with annotation information for training. For object location, these annotations must contain bounding boxes. Furthermore, objects in RS images are usually small and densely co-located, leading to a high cost of manual annotation. We tackle the problem of weakly supervised object detection under such conditions, aiming to learn detectors with only image-level annotations, i.e., without bounding-box annotations. Based on the fact that the feature maps of a CNN are localizable, we hierarchically fuse the location information from the shallow feature map with the class activation map to obtain accurate object locations. In order to mitigate the loss of small or densely distributed objects, we introduce a divergent activation module and a similarity module into the network. The divergent activation module is used to improve the response strength of the low-response areas in the shallow feature map. Densely distributed objects in RS images, such as aircraft in an airport, often exhibit a certain similarity; the similarity module is used to improve the feature distribution of the shallow feature map and to suppress background noise. Comprehensive experiments on a public dataset and a self-assembled dataset (which we have made publicly available) show the superior performance of our method compared to state-of-the-art object detectors.
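The fusion idea, combining a coarse but class-discriminative activation map with a higher-resolution shallow feature map to sharpen object locations, can be illustrated with plain numpy. In a real system both maps come from a CNN; here they are random stand-ins, and the fusion rule (upsample, normalize, multiply) is an illustrative choice rather than the module proposed in the paper.

```python
import numpy as np

def normalize(m):
    """Rescale a 2-D map to [0, 1]."""
    m = m - m.min()
    return m / (m.max() or 1.0)

def upsample(m, factor):
    """Nearest-neighbour upsampling of a 2-D map."""
    return np.repeat(np.repeat(m, factor, axis=0), factor, axis=1)

def fuse(shallow_map, cam):
    """Fuse location evidence from two maps.

    shallow_map: high-resolution response from an early convolutional layer
                 (localizes small structures, but is class-agnostic).
    cam:         low-resolution class activation map (class-specific, coarse).
    The product keeps regions that are both strongly responding and
    class-relevant, which is the spirit of the hierarchical fusion above."""
    factor = shallow_map.shape[0] // cam.shape[0]
    return normalize(shallow_map) * normalize(upsample(cam, factor))

def locate(fused, threshold=0.5):
    """Tiny 'location' step: bounding box of the thresholded fused map."""
    ys, xs = np.nonzero(fused >= threshold)
    if len(ys) == 0:
        return None
    return tuple(int(v) for v in (xs.min(), ys.min(), xs.max(), ys.max()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shallow = rng.random((64, 64)) * 0.3
    shallow[20:30, 40:52] += 0.7               # a small, strongly responding object
    cam = rng.random((16, 16)) * 0.3
    cam[5:8, 10:13] += 0.7                     # the class activation covers the same area
    print(locate(fuse(shallow, cam)))          # roughly (40, 20, 51, 29) as (x1, y1, x2, y2)
```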

18.
NeTra: A toolbox for navigating large image databases
We present an implementation of NeTra, a prototype image retrieval system that uses color, texture, shape, and spatial location information in segmented image regions to search and retrieve similar regions from the database. A distinguishing aspect of this system is its incorporation of a robust automated image segmentation algorithm that allows object- or region-based search. Image segmentation significantly improves the quality of image retrieval when images contain multiple complex objects. Images are segmented into homogeneous regions at the time of ingest into the database, and image attributes that represent each of these regions are computed. In addition to image segmentation, other important components of the system include an efficient color representation and indexing of color, texture, and shape features for fast search and retrieval. This representation allows the user to compose interesting queries such as "retrieve all images that contain regions that have the color of object A, the texture of object B, the shape of object C, and lie in the upper one-third of the image", where the individual objects could be regions belonging to different images. A Java-based web implementation of NeTra is available at http://vivaldi.ece.ucsb.edu/Netra.

19.
Approximate similarity retrieval with M-trees
Motivated by the urgent need to improve the efficiency of similarity queries, approximate similarity retrieval is investigated in the environment of a metric tree index called the M-tree. Three different approximation techniques are proposed, which show how query precision can be traded for improved performance. Measures are defined that quantify the improvement in performance efficiency and the quality of the approximations. The proposed approximation techniques are then tested on various synthetic and real-life files. The evidence obtained from the experiments confirms our hypothesis that a high-quality approximate similarity search can be performed at a much lower cost than that needed to obtain the exact results. The proposed approximation techniques are scalable and appear to be independent of the metric used. Extensions of these techniques to other similarity search indexes are also discussed.

20.
Fast retrieval methods are critical for many large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply to high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sublinear-time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several data sets and show that it enables accurate and fast performance for several vision problems, including example-based object classification, local feature matching, and content-based retrieval.
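The construction can be summarized in a compact sketch in the spirit of kernelized locality-sensitive hashing: each hash bit is the sign of a weighted sum of kernel evaluations against a small database sample, with the weights derived from that sample's kernel matrix. The RBF kernel, the parameter values, and the rough centering below are illustrative choices, and details of the published algorithm (such as proper kernel centering) are omitted, so this is an illustration rather than the exact method.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """An example kernel; any positive-definite kernel function works."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class KernelizedLSH:
    """Hash bits of the form sign(sum_i w_i k(x, s_i)) over a sample S."""

    def __init__(self, sample, kernel, n_bits=32, t=10, seed=0):
        rng = np.random.default_rng(seed)
        self.sample, self.kernel = sample, kernel
        p = len(sample)
        # Kernel matrix of the sample and its inverse square root.
        K = np.array([[kernel(a, b) for b in sample] for a in sample])
        vals, vecs = np.linalg.eigh(K + 1e-6 * np.eye(p))
        K_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
        # One weight vector per bit, built from a random subset of the sample.
        self.W = np.zeros((n_bits, p))
        for b in range(n_bits):
            e = np.zeros(p)
            e[rng.choice(p, size=t, replace=False)] = 1.0
            self.W[b] = K_inv_sqrt @ e
        self.thresholds = self.W @ K.mean(axis=1)   # rough per-bit centering

    def hash(self, x):
        k_x = np.array([self.kernel(x, s) for s in self.sample])
        return tuple((self.W @ k_x > self.thresholds).astype(int))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(0, 0.2, (50, 8)), rng.normal(3, 0.2, (50, 8))])
    sample = data[rng.choice(len(data), size=30, replace=False)]
    lsh = KernelizedLSH(sample, rbf_kernel, n_bits=16, t=5)
    codes = [lsh.hash(x) for x in data]
    def hamming(c1, c2):
        return sum(a != b for a, b in zip(c1, c2))
    # Points from the same cluster should mostly share hash bits.
    print("within cluster:", hamming(codes[0], codes[1]),
          " across clusters:", hamming(codes[0], codes[60]))
```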
