Similar Articles
20 similar articles found (search time: 78 ms)
1.
To automatically mine the underlying relationships between famous persons in daily news, for example by building a news-person network with faces as icons to facilitate face-based person finding, we need a tool that automatically labels the faces in news images with their real names. This paper studies the problem of linking names with faces in large-scale news images with captions. In our previous work, we proposed a method called Person-based Subset Clustering, which is mainly based on face clustering over all face images associated with the same name. The location where a name appears in a caption, as well as the visual structural information within a news image, provides informative cues about who really appears in the associated image. By combining this domain knowledge from the captions and the corresponding images, we propose a novel cross-modality approach that further improves the performance of linking names with faces. The experiments are performed on a data set of approximately half a million news images from Yahoo! News, and the results show that the proposed method achieves significant improvement over clustering-only methods.

2.
In this paper, we address the problem of visual instance mining, which is to automatically discover frequently appearing visual instances from a large collection of images. We propose a scalable mining method by leveraging the graph structure with images as vertices. Different from most existing approaches that focus on either instance-level similarities or image-level context properties, our method captures both kinds of information. In the proposed framework, the instance-level information is integrated during the construction of a sparse instance graph based on the similarity between augmented local features, while the image-level context is explored with a greedy breadth-first search algorithm that discovers clusters of visual instances from the graph. This framework can tackle the challenges brought by small visual instances, diverse intra-class variations, and noise in large-scale image databases. To further improve robustness, we integrate two techniques into the basic framework. First, to better cope with the increasing noise of large databases, weak geometric consistency is adopted to efficiently combine the geometric information of local matches into the construction of the instance graph. Second, we propose a layout embedding algorithm, which leverages an algorithm originally designed for graph visualization to fully explore the image database structure. The proposed method was evaluated on four annotated data sets with different characteristics, and experimental results showed its superiority over state-of-the-art algorithms on all data sets. We also applied our framework to a one-million-image Flickr data set and demonstrated its scalability.

3.
This paper proposes a graph-based approach to semisupervised clustering based on pairwise relations among instances. In our approach, the entire data set is represented as an edge-weighted graph by mapping each data element (instance) to a vertex and connecting instances by edges weighted with their similarities. In order to reflect pairwise constraints on the clustering process, the graph is modified by graph contraction, as known from general graph theory, and by the graph Laplacian from spectral graph theory. The graph representation enables us to deal with pairwise constraints as well as pairwise similarities over the same unified representation. By exploiting the constraints as well as the similarities among instances, the entire data set is projected onto a subspace via the modified graph, and data clustering is conducted over the projected representation. The proposed approach is evaluated on several real-world data sets. The results are encouraging and show that it is worthwhile to pursue the proposed approach.
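To make the projection step above concrete, the following is a minimal sketch of clustering via a graph Laplacian embedding: a Gaussian similarity graph is built, the data are projected onto the low-order Laplacian eigenvectors, and k-means is run in the projected space. The constraint-handling step (graph contraction over constrained pairs) from the abstract is omitted, and all function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def rbf_similarity(X, sigma=0.5):
    # Pairwise Gaussian similarities used as edge weights of the graph.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def laplacian_embedding(W, dim=2):
    # Unnormalized graph Laplacian L = D - W; project the data onto the
    # eigenvectors with the smallest nonzero eigenvalues.
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = eigh(L)
    return vecs[:, 1:dim + 1]          # skip the constant eigenvector

# Toy data: two noisy blobs; cluster in the projected (spectral) space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
Z = laplacian_embedding(rbf_similarity(X))
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z))
```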

4.
We introduce a new approach for finding overlapping clusters given pairwise similarities of objects. In particular, we relax the problem of correlation clustering by allowing an object to be assigned to more than one cluster. At the core of our approach is an optimization problem in which each data point is mapped to a small set of labels, representing membership in different clusters. The objective is to find a mapping so that the given similarities between objects agree as much as possible with similarities taken over their label sets. The number of labels can vary across objects. To define a similarity between label sets, we consider two measures: (i) a 0–1 function indicating whether the two label sets have non-zero intersection and (ii) the Jaccard coefficient between the two label sets. The algorithm we propose is an iterative local-search method. The definitions of label set similarity give rise to two non-trivial optimization problems, which, for the measures of set-intersection and Jaccard, we solve using a greedy strategy and non-negative least squares, respectively. We also develop a distributed version of our algorithm based on the BSP model and implement it using a Pregel framework. Our algorithm uses as input pairwise similarities of objects and can thus be applied when clustering structured objects for which feature vectors are not available. As a proof of concept, we apply our algorithms on three different and complex application domains: trajectories, amino-acid sequences, and textual documents.
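As a small illustration of the two label-set similarities mentioned above, the sketch below computes the 0-1 intersection indicator and the Jaccard coefficient between label sets, and a disagreement score against given pairwise object similarities. The iterative local search and non-negative least-squares machinery of the paper is not reproduced; the toy labels and similarities are hypothetical.

```python
import itertools

def intersect_sim(a, b):
    # 0-1 similarity: do the two label sets share at least one cluster label?
    return 1.0 if set(a) & set(b) else 0.0

def jaccard_sim(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def disagreement(labels, pairwise_sim, set_sim=jaccard_sim):
    # Total |s_ij - sim(label set i, label set j)| over all object pairs;
    # the local search described above tries to drive this cost down.
    return sum(abs(pairwise_sim[(i, j)] - set_sim(labels[i], labels[j]))
               for i, j in itertools.combinations(range(len(labels)), 2))

# Toy example: object 1 overlaps clusters A and B.
labels = [{"A"}, {"A", "B"}, {"B"}]
pairwise_sim = {(0, 1): 0.6, (0, 2): 0.1, (1, 2): 0.5}
print(disagreement(labels, pairwise_sim))
print(disagreement(labels, pairwise_sim, set_sim=intersect_sim))
```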

5.
An efficient algorithm for discovering frequent subgraphs
Over the years, frequent itemset discovery algorithms have been used to find interesting patterns in various application areas. However, as data mining techniques are being increasingly applied to nontraditional domains, existing frequent pattern discovery approaches cannot be used. This is because the transaction framework that is assumed by these algorithms cannot be used to effectively model the data sets in these domains. An alternate way of modeling the objects in these data sets is to represent them using graphs. Within that model, one way of formulating the frequent pattern discovery problem is that of discovering subgraphs that occur frequently over the entire set of graphs. We present a computationally efficient algorithm, called FSG, for finding all frequent subgraphs in large graph data sets. We experimentally evaluate the performance of FSG using a variety of real and synthetic data sets. Our results show that despite the underlying complexity associated with frequent subgraph discovery, FSG is effective in finding all frequently occurring subgraphs in data sets containing more than 200,000 graph transactions and scales linearly with respect to the size of the data set.
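A hedged sketch of only the first, simplest level of such a level-wise search: counting frequent one-edge subgraphs (unordered node-label pairs) across a set of graph transactions, with support counted at most once per graph. Candidate generation, canonical labeling, and the other machinery of FSG are omitted, and the toy transactions are hypothetical.

```python
from collections import Counter

def frequent_edges(graphs, min_support):
    # graphs: one edge list [(u_label, v_label), ...] per graph transaction.
    # Returns the one-edge subgraphs (unordered label pairs) whose support,
    # counted at most once per graph, reaches min_support.
    support = Counter()
    for edges in graphs:
        support.update({tuple(sorted(e)) for e in edges})
    return {e: c for e, c in support.items() if c >= min_support}

# Toy graph transactions with node labels as strings.
graphs = [
    [("C", "O"), ("C", "C")],
    [("C", "O"), ("C", "N")],
    [("C", "C"), ("C", "O")],
]
print(frequent_edges(graphs, min_support=2))   # ('C', 'C') and ('C', 'O')
```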

6.
Image annotation has been an active research topic in recent years due to its potential impact on both image understanding and web image search. In this paper, we propose a graph learning framework for image annotation. First, image-based graph learning is performed to obtain candidate annotations for each image. In order to capture the complex distribution of image data, we propose a Nearest Spanning Chain (NSC) method to construct the image-based graph, whose edge weights are derived from chain-wise statistical information instead of the traditional pairwise similarities. Second, word-based graph learning is developed to refine the relationships between images and words and obtain the final annotations for each image. To enrich the representation of the word-based graph, we design two types of word correlations based on web search results, in addition to the word co-occurrence in the training set. The effectiveness of the proposed solution is demonstrated by experiments on the Corel dataset and a web image dataset.

7.
We propose a convex-concave programming approach to the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-squares problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We therefore construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method makes it easy to integrate information on graph label similarities into the optimization problem, and therefore to perform labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.
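The least-squares formulation mentioned above can be illustrated on tiny graphs: the sketch below evaluates the cost ||A1 - P A2 P^T||_F^2 over all permutation matrices by brute force, which is exactly the combinatorial search that the convex-concave path-following relaxation is designed to avoid. This is an assumption-laden toy, not the paper's algorithm.

```python
import itertools
import numpy as np

def matching_cost(A1, A2, perm):
    # Frobenius cost ||A1 - P A2 P^T||^2 for the permutation `perm`,
    # which maps vertex i of graph 1 to vertex perm[i] of graph 2.
    P = np.eye(len(perm))[list(perm)]
    return np.linalg.norm(A1 - P @ A2 @ P.T) ** 2

def brute_force_match(A1, A2):
    # Exhaustive search over permutation matrices; feasible only for tiny
    # graphs, which is why relaxations such as the convex-concave path
    # following approach are needed in practice.
    n = A1.shape[0]
    return min(itertools.permutations(range(n)),
               key=lambda p: matching_cost(A1, A2, p))

# Two isomorphic 3-node weighted graphs that differ by a vertex relabeling.
A1 = np.array([[0, 2, 1], [2, 0, 0], [1, 0, 0]], float)
A2 = np.array([[0, 0, 1], [0, 0, 2], [1, 2, 0]], float)
print(brute_force_match(A1, A2))   # (2, 1, 0): the relabeling, at zero cost
```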

8.
This paper proposes a new method for finding principal curves from data sets. Motivated by the problem of handling highly curved and self-intersecting curves, we present a bottom-up strategy that constructs a graph, called a principal graph, to represent a principal curve. The method initializes a set of vertices based on the principal oriented points introduced by Delicado, and then constructs the principal graph from these vertices through a two-layer iteration process. In the inner iteration, a kernel smoother is used to smooth the positions of the vertices. In the outer iteration, the principal graph is spanned by a minimum spanning tree and is modified by detecting closed regions and intersectional regions, after which new vertices are inserted into some edges of the principal graph. We tested the algorithm on simulated data sets and applied it to image skeletonization. Experimental results show the effectiveness of the proposed algorithm.

9.
Many malpractices in stock market trading, e.g., circular trading and price manipulation, use collusion as their modus operandi. Informally, a set of traders is a candidate collusion set when they have "heavy trading" among themselves compared to their trading with others. We formalize the problem of detecting collusion sets, if any, in a given trading database. We show that naïve approaches are inefficient for real-life situations. We adapt and apply two well-known graph clustering algorithms to this problem. We also propose a new graph clustering algorithm, specifically tailored to detecting collusion sets. A novel feature of our approach is the use of the Dempster-Shafer theory of evidence to combine the candidate collusion sets detected by the individual algorithms. Treating individual experiments as evidence, this approach allows us to quantify the confidence (or belief) in the candidate collusion sets. We present detailed simulation experiments to demonstrate the effectiveness of the proposed algorithms.
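To illustrate the evidence-combination step, here is a minimal sketch of Dempster's rule of combination for two mass functions, each assigning belief mass to candidate collusion sets produced by a different clustering algorithm. The trader names and mass values are hypothetical, and the sketch ignores the rest of the detection pipeline.

```python
from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule for two mass functions given as dicts mapping
    # frozenset hypotheses to masses; conflicting (empty-intersection)
    # mass is discarded and the remainder renormalized.
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Hypothetical evidence from two algorithms about which of the traders
# t1, t2, t3 form a collusion set.
m_graph  = {frozenset({"t1", "t2"}): 0.7, frozenset({"t1", "t2", "t3"}): 0.3}
m_shared = {frozenset({"t1", "t2", "t3"}): 0.6, frozenset({"t2", "t3"}): 0.4}
print(dempster_combine(m_graph, m_shared))
```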

10.
In this paper, we address the multiple target tracking problem as a maximum a posteriori problem. We adopt a graph representation of all observations over time. To make full use of the visual observations from the image sequence, we introduce both motion and appearance likelihoods. The multiple target tracking problem is formulated as finding multiple optimal paths in the graph. Due to noisy foreground segmentation, an object may be represented by several foreground regions and, similarly, one foreground region may correspond to multiple objects. To deal with this problem, we propose merge, split, and mean shift operations to generate new hypotheses for the measurement graph. The proposed approach uses a sliding window framework that aggregates information across a fixed number of frames. Experimental results on both indoor and outdoor data sets are reported. Furthermore, we provide a comparison of the proposed approach with existing methods that do not merge/split detected blobs.

11.
We propose a multivariate feature selection method that uses proximity graphs to assess the quality of feature subsets. Initially, a complete graph is built, where the nodes are the samples and the edge weights are calculated considering only the selected features. Next, a proximity graph is constructed on the basis of these weights, and different fitness functions, calculated over the proximity graph, are used to evaluate the quality of the selected feature set. We propose an iterative methodology based on a memetic algorithm for exploring the space of possible feature subsets with the aim of maximizing a quality score. We designed multiple local search strategies and used an adaptive strategy for automatic balancing between the global and local search components of the memetic algorithm. The computational experiments were carried out on four well-known data sets. We investigate the suitability of three different proximity graphs (minimum spanning tree, k-nearest neighbors, and relative neighborhood graph) for the proposed approach. The selected features were evaluated using a total of 49 classification methods from an open-source data mining and machine learning package (WEKA). The computational results show that the proposed adaptive memetic algorithm can perform better than traditional genetic algorithms in finding more useful feature sets. Finally, we establish the competitiveness of our approach by comparing it with other well-known feature selection methods.
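A minimal sketch of one fitness evaluation in this spirit: build a k-nearest-neighbor proximity graph using only the selected feature columns and score the subset by the fraction of graph edges whose endpoints share a class label. The memetic search loop, the other proximity graphs, and the paper's actual fitness functions are not reproduced; `knn_purity` and its parameters are illustrative assumptions, and scikit-learn is assumed.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def knn_purity(X, y, selected, k=3):
    # Fitness of a feature subset: build a kNN proximity graph on the
    # selected columns and return the fraction of graph edges whose
    # endpoints share a class label (higher is better).
    G = kneighbors_graph(X[:, selected], n_neighbors=k, mode="connectivity")
    rows, cols = G.nonzero()
    return float(np.mean(y[rows] == y[cols]))

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 25)
X = np.column_stack([y + rng.normal(0, 0.2, 50), rng.normal(0, 1.0, 50)])
print(knn_purity(X, y, selected=[0]))   # informative feature: high purity
print(knn_purity(X, y, selected=[1]))   # noise feature: purity near 0.5
```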

12.
Robust self-tuning semi-supervised learning
Fei  Changshui 《Neurocomputing》2007,70(16-18):2931
We investigate the issue of graph-based semi-supervised learning (SSL). The labeled and unlabeled data points are represented as vertices in an undirected weighted neighborhood graph, with the edge weights encoding the pairwise similarities between data objects in the same neighborhood. The SSL problem can then be formulated as a regularization problem on this graph. In this paper we propose a robust self-tuning graph-based SSL method, which (1) determines the similarities between pairwise data points automatically and (2) is not sensitive to outliers. Promising experimental results are given for both synthetic and real data sets.
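A hedged sketch of the self-tuning idea combined with simple label propagation: each point's similarity scale is set to its distance to the k-th nearest neighbor (as in Zelnik-Manor and Perona's self-tuning spectral clustering), and labels are propagated by repeatedly averaging over graph neighbors while clamping the labeled points. This is not the paper's robust formulation; all names and constants are assumptions.

```python
import numpy as np

def self_tuning_affinity(X, k=7):
    # sigma_i = distance from point i to its k-th nearest neighbor, so the
    # similarity scale adapts to the local density around each point.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    sigma = np.sort(D, axis=1)[:, k]
    W = np.exp(-(D ** 2) / (sigma[:, None] * sigma[None, :]))
    np.fill_diagonal(W, 0.0)
    return W

def propagate_labels(W, labels, n_iter=100):
    # labels: -1 for unlabeled, 0/1 for labeled points. Repeatedly replace
    # each point's score by the weighted mean of its neighbors, clamping
    # the labeled points to their given values.
    f = np.where(labels < 0, 0.5, labels).astype(float)
    clamp = labels >= 0
    for _ in range(n_iter):
        f = W @ f / W.sum(axis=1)
        f[clamp] = labels[clamp]
    return (f > 0.5).astype(int)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (15, 2)), rng.normal(3, 0.3, (15, 2))])
labels = -np.ones(30, dtype=int)
labels[0], labels[15] = 0, 1            # one labeled point per cluster
print(propagate_labels(self_tuning_affinity(X), labels))
```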

13.
Extracting influential nodes on a social network for information diffusion
We address the combinatorial optimization problem of finding the most influential nodes in a large-scale social network for two widely used fundamental stochastic diffusion models. Past work showed that a greedy strategy can give a good approximate solution to this problem. However, the conventional greedy method suffers from high computational cost. We propose a method for efficiently finding a good approximate solution under the greedy algorithm, based on bond percolation and graph theory, and compare the proposed method with the conventional method in terms of computational complexity in order to theoretically evaluate its effectiveness. The results show that the proposed method is expected to achieve a great reduction in computational cost. We further demonstrate experimentally that the proposed method is much more efficient than the conventional method on large-scale real-world networks, including blog networks.
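The bond-percolation idea can be sketched as follows: presample "live-edge" graphs by keeping each edge with the independent-cascade propagation probability, estimate a seed set's influence as the average number of nodes reachable from it across the samples, and select seeds greedily. The propagation probability, sample count, and graph are hypothetical, and the paper's complexity-reducing refinements are not reproduced; networkx is assumed.

```python
import random
import networkx as nx

def sample_live_edge_graphs(G, p, n_samples):
    # Bond percolation: keep each edge independently with probability p,
    # the propagation probability of the independent cascade model.
    return [G.edge_subgraph([e for e in G.edges if random.random() < p]).copy()
            for _ in range(n_samples)]

def influence(samples, seeds):
    # Expected spread is estimated as the average number of nodes
    # reachable from the seed set over the presampled live-edge graphs.
    total = 0
    for H in samples:
        reached = set(seeds)
        for s in seeds:
            if s in H:
                reached |= nx.descendants(H, s)
        total += len(reached)
    return total / len(samples)

def greedy_seeds(G, k, p=0.1, n_samples=100):
    samples = sample_live_edge_graphs(G, p, n_samples)
    seeds = []
    for _ in range(k):
        best = max((v for v in G if v not in seeds),
                   key=lambda v: influence(samples, seeds + [v]))
        seeds.append(best)
    return seeds

random.seed(0)
G = nx.gnp_random_graph(60, 0.08, seed=0, directed=True)
print(greedy_seeds(G, k=3))
```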

14.
Personal name disambiguation is an important task in social network extraction, evaluation and integration of ontologies, information retrieval, cross-document coreference resolution, and word sense disambiguation. We propose an unsupervised method to automatically annotate people with ambiguous names on the Web using automatically extracted keywords. Given an ambiguous personal name, we first download text snippets for the given name from a Web search engine. We then represent each instance of the ambiguous name by a term-entity model (TEM), a model that we propose to represent the Web appearance of an individual. A TEM of a person captures named entities and attribute values that are useful to disambiguate that person from his or her namesakes (i.e., different people who share the same name). We then use group average agglomerative clustering to identify the instances of an ambiguous name that belong to the same person. Ideally, each cluster must represent a different namesake. However, in practice it is not possible to know the number of namesakes for a given ambiguous personal name in advance. To circumvent this problem, we propose a novel normalized cuts-based cluster stopping criterion to determine the different people on the Web for a given ambiguous name. Finally, we annotate each person with an ambiguous name using keywords selected from the clusters. We evaluate the proposed method on a data set of over 2500 documents covering 200 different people for 20 ambiguous names. Experimental results show that the proposed method outperforms numerous baselines and previously proposed name disambiguation methods. Moreover, the extracted keywords reduce the ambiguity of a name in an information retrieval task, which underscores the usefulness of the proposed method in real-world scenarios.
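A minimal sketch of the clustering step under stated assumptions: web snippets for an ambiguous name are turned into TF-IDF vectors and grouped by group-average (average-linkage) agglomerative clustering, with a fixed cosine-distance threshold standing in for the paper's normalized-cuts-based stopping criterion. The snippets are invented, the threshold is arbitrary, and a recent scikit-learn (with the `metric` parameter) is assumed.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Invented web snippets for one ambiguous name ("J. Smith").
snippets = [
    "J. Smith published a paper on graph clustering at KDD",
    "graph clustering research by J. Smith, University of X",
    "J. Smith scored two goals in the football league final",
    "striker J. Smith joins the national football team",
]

X = TfidfVectorizer().fit_transform(snippets).toarray()
# Group-average agglomerative clustering (average linkage on cosine
# distance); the fixed distance threshold plays the role of a stopping rule.
model = AgglomerativeClustering(n_clusters=None, distance_threshold=0.9,
                                metric="cosine", linkage="average")
print(model.fit_predict(X))   # two clusters expected, one per namesake
```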

15.
Robust tracking of multiple people in video sequences is a challenging task. In this paper, we present an algorithm for tracking the faces of multiple people even in cases of total occlusion. Faces are detected first; then a model is built for each person. The models are handed over to the tracking module, which is based on the mean shift algorithm, where each face is represented by the non-parametric distribution of the colors in the face region. The mean shift tracking algorithm is robust to partial occlusion and rotation and is computationally efficient, but it does not deal with the problem of total occlusion. Our algorithm overcomes this problem by detecting occlusion using an occlusion grid, and it uses a non-parametric distribution of the color of the occluded person's clothing to distinguish that person after the occlusion ends. Our algorithm uses the speed and trajectory of each occluded person to predict the locations that should be searched after the occlusion ends. It integrates multiple features to handle tracking multiple people in cases of partial and total occlusion. Experiments on a large set of video clips demonstrate the robustness of the algorithm and its capability to correctly track multiple people even when faces are temporarily occluded by other faces or by other objects in the scene.
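To make the color model concrete, the sketch below builds the non-parametric color representation as a normalized joint RGB histogram and compares two regions with the Bhattacharyya coefficient, which is the kind of similarity mean shift climbs when re-locating a face. The mean shift iterations, occlusion grid, and trajectory prediction are omitted, and the random patches are placeholders for real image regions.

```python
import numpy as np

def color_histogram(patch, bins=8):
    # Non-parametric color model: a normalized joint RGB histogram of an
    # (H, W, 3) uint8 image patch.
    hist, _ = np.histogramdd(patch.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def bhattacharyya(p, q):
    # Similarity in [0, 1] between two normalized histograms; mean shift
    # climbs this score when re-locating the tracked face in a new frame.
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(0)
face_model = color_histogram(rng.integers(0, 256, (32, 32, 3), dtype=np.uint8))
candidate  = color_histogram(rng.integers(0, 256, (32, 32, 3), dtype=np.uint8))
print(bhattacharyya(face_model, candidate))   # 1.0 would mean identical models
```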

16.
17.
Supervised relation extraction methods require large amounts of labeled training data and predefined relation types. To address this, a person relation extraction method is proposed that builds an association network from word co-occurrence information and performs graph clustering analysis on the network. First, 500 highly associated person pairs are obtained from news headline data for the relation extraction study. Then, the news articles containing these associated person pairs are crawled and preprocessed, and term frequency-inverse document frequency (TF-IDF) is used to obtain keywords from the sentences in which each person pair co-occurs. Next, associations between words are derived from word co-occurrence information to build a keyword association network. Finally, graph clustering analysis of the association network yields the person relations. In relation extraction experiments, compared with traditional Chinese entity relation extraction methods based on word co-occurrence and pattern matching, the proposed method improves precision, recall, and balanced F-score by 5.5, 3.7, and 4.4 percentage points, respectively. The experimental results show that the proposed algorithm can effectively extract rich, high-quality person relation data from news data without any labeled training data.
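A hedged sketch of the keyword and co-occurrence steps: TF-IDF keywords are extracted from the sentences in which a person pair co-occurs, keywords appearing in the same sentence are linked, and connected components are read off as candidate relations. Connected components stand in for the paper's graph clustering, the example sentences are invented, and scikit-learn plus networkx are assumed.

```python
import itertools
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented sentences in which one person pair co-occurs.
sentences = [
    "the two stars attended the film premiere together",
    "the film premiere drew both actors and their director",
    "the couple announced their wedding at a press event",
    "wedding guests included the director of the film",
]

# Step 1: TF-IDF keywords per sentence (the top-scoring terms).
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(sentences)
terms = vec.get_feature_names_out()
keywords = [[terms[i] for i in row.toarray().ravel().argsort()[-3:]]
            for row in X]

# Step 2: link keywords that co-occur in the same sentence.
G = nx.Graph()
for kws in keywords:
    G.add_edges_from(itertools.combinations(sorted(set(kws)), 2))

# Step 3: connected components stand in for the paper's graph clustering;
# each component is read as one candidate relation for the person pair.
for component in nx.connected_components(G):
    print(sorted(component))
```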

18.
In recent years, subgraph query has received wide attention from researchers as an important topic in graph database management. In real applications, most graph data is updated frequently, and existing methods incur a high maintenance cost under such frequent updates. Subgraph query is itself an NP-complete problem, and it becomes even harder on dynamic graph data. To address these problems, a subgraph query method that supports dynamic graph data is proposed. The method first constructs a topological level sequence for each graph as an index, adding labels to the sequences so that the index can be maintained after data updates; candidate sets are then filtered according to the matching relations between sequences, and finally a graph isomorphism algorithm verifies the graphs in the candidate set to produce the result set. The index is simple to construct and small in size, and it does not need to be rebuilt after the graph database is updated; the method not only supports subgraph queries over dynamic graph data but also performs well on static graph data.
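A filter-and-verify sketch in the same spirit, with a sorted degree sequence standing in for the paper's topological level sequence index: graphs whose degree sequence cannot dominate the query's are filtered out cheaply, and only the survivors are passed to an exact subgraph (monomorphism) check. The index structure, update maintenance, and data are all simplified assumptions; networkx is assumed for the verification step.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def degree_signature(G):
    # A cheap per-graph index used for filtering; a simplified stand-in
    # for the paper's topological level sequences.
    return sorted(dict(G.degree()).values(), reverse=True)

def may_contain(db_sig, query_sig):
    # Necessary condition: the database graph must offer enough vertices
    # of sufficiently high degree to host the query's vertices.
    return len(db_sig) >= len(query_sig) and all(
        d >= q for d, q in zip(db_sig, query_sig))

def subgraph_query(database, query):
    q_sig = degree_signature(query)
    results = []
    for gid, G in database.items():
        if not may_contain(degree_signature(G), q_sig):
            continue                          # filtered out cheaply
        if isomorphism.GraphMatcher(G, query).subgraph_is_monomorphic():
            results.append(gid)               # verified by exact matching
    return results

database = {"g1": nx.cycle_graph(5), "g2": nx.path_graph(4),
            "g3": nx.complete_graph(4)}
query = nx.cycle_graph(3)                     # a triangle
print(subgraph_query(database, query))        # only g3 contains a triangle
```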

19.
连玮 《计算机应用》2012,32(9):2564-2567
For the rotation-invariant elastic point matching problem, a graph-matching-based algorithm is proposed. Edge sets are constructed for the two point sets, and the oriented shape context distance together with the difference in edge lengths is used to measure the similarity between edges of the two point sets. Based on these edge similarities, the point correspondences are recovered by solving a graph matching problem. Experimental results show that the algorithm achieves good registration results and is robust and efficient.
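A rough sketch of recovering point correspondences with a rotation-invariant descriptor: each point is described by its sorted vector of distances to all other points (a crude simplification of the oriented shape context), and the optimal one-to-one assignment is found with the Hungarian algorithm as a simple stand-in for the paper's graph matching formulation. Descriptor, cost, and test data are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def distance_profile(points):
    # A crude rotation-invariant descriptor per point: the sorted vector
    # of its distances to all other points (a simplification of the
    # oriented shape context used in the paper).
    D = cdist(points, points)
    return np.sort(D, axis=1)[:, 1:]          # drop the zero self-distance

def match_points(P, Q):
    # Descriptor cost matrix, then an optimal one-to-one assignment via
    # the Hungarian algorithm as a simple stand-in for graph matching.
    cost = cdist(distance_profile(P), distance_profile(Q))
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))

rng = np.random.default_rng(0)
P = rng.random((10, 2))
theta = np.pi / 4                             # rotate and shuffle the copy
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
perm = rng.permutation(10)
Q = P[perm] @ R.T
matches = match_points(P, Q)
print(all(perm[j] == i for i, j in matches))  # True if every point recovered
```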

20.
A method is proposed for automatically discovering news topics in large-scale microblog short-text data sets. After preprocessing the microblog data, the method extracts topic words by combining several factors, including TF-IDF, document frequency growth rate, and named entity recognition. A semantic co-occurrence graph of the topic words is then built from the semantic relations between them, the connected subgraphs of the co-occurrence graph are computed, and each disconnected cluster is treated as one news topic. Experiments on a Sina Weibo data set show that the method identifies news topics in microblogs, detects currently hot topics well, and to some extent effectively avoids error propagation; the experimental results verify the effectiveness of the method.
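A minimal sketch of the document-frequency growth-rate factor and the grouping step: terms whose document frequency grows sharply from the previous time window to the current one are taken as topic-word candidates, and candidates co-occurring in the same post are linked so that each connected component is read as one news topic. TF-IDF weighting, named entity recognition, and the semantic relations used in the paper are omitted; the posts and the growth threshold are hypothetical, and networkx is assumed.

```python
import itertools
from collections import Counter
import networkx as nx

def doc_freq(docs):
    # Document frequency: in how many posts each term appears.
    return Counter(t for d in docs for t in set(d.split()))

def bursting_terms(prev_docs, curr_docs, min_growth=2.0):
    # Topic-word candidates: terms whose document frequency grows by at
    # least `min_growth` from the previous window to the current one
    # (a simplified stand-in for the paper's DF growth-rate factor).
    prev, curr = doc_freq(prev_docs), doc_freq(curr_docs)
    return {t for t, c in curr.items() if c / (prev.get(t, 0) + 1) >= min_growth}

# Hypothetical posts from two consecutive time windows.
prev_docs = ["rain in the city today", "city traffic is heavy today"]
curr_docs = ["earthquake hits the coastal city",
             "rescue teams reach earthquake area",
             "earthquake damage near the coast",
             "coastal rescue effort continues"]

topic_words = bursting_terms(prev_docs, curr_docs)
# Link topic words that co-occur in the same post; each connected
# component of the co-occurrence graph is read as one news topic.
G = nx.Graph()
for d in curr_docs:
    G.add_edges_from(itertools.combinations(sorted(set(d.split()) & topic_words), 2))
print([sorted(c) for c in nx.connected_components(G)])
```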
