Similar Documents
20 similar documents found (search time: 0 ms)
1.
Graph edit distance from spectral seriation (total citations: 3; self-citations: 0; cited by others: 3)
This paper is concerned with computing graph edit distance. One criticism that can be leveled at existing methods for computing graph edit distance is that they lack the formality and rigor found in the computation of string edit distance. Our aim, therefore, is to convert graphs to string sequences so that string matching techniques can be used. To do this, we use a graph spectral seriation method to convert the adjacency matrix into a string or sequence order. We show how the serial ordering can be established using the leading eigenvector of the graph adjacency matrix. We pose the problem of graph matching as a maximum a posteriori probability (MAP) alignment of the seriation sequences for pairs of graphs. This treatment leads to an expression in which the edit cost is the negative logarithm of the a posteriori sequence alignment probability. We compute the edit distance by finding the sequence of string edit operations that minimizes the cost of the path traversing the edit lattice. The edit costs are determined by the components of the leading eigenvectors of the adjacency matrices and by the edge densities of the graphs being matched. We demonstrate the utility of the edit distance on a number of graph clustering problems.
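A minimal Python sketch of the seriation idea, assuming unlabeled graphs whose sequence symbols are simply node degrees; the paper's MAP alignment and eigenvector-derived edit costs are not reproduced:

```python
# Sketch: order each graph's nodes by the leading eigenvector of its adjacency
# matrix, then compare the resulting degree sequences with a plain Levenshtein
# distance. Illustrative only; not the paper's full probabilistic method.
import numpy as np

def seriation_order(adj: np.ndarray) -> np.ndarray:
    """Node indices sorted by the components of the leading eigenvector."""
    eigvals, eigvecs = np.linalg.eigh(adj)              # adj must be symmetric
    leading = eigvecs[:, np.argmax(eigvals)]
    if leading.sum() < 0:                               # resolve sign ambiguity
        leading = -leading
    return np.argsort(-leading)

def levenshtein(a, b) -> int:
    """Classic dynamic-programming string edit distance with unit costs."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = d[i - 1, j - 1] + (a[i - 1] != b[j - 1])
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, sub)
    return int(d[-1, -1])

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
sa = [int(triangle[i].sum()) for i in seriation_order(triangle)]
sb = [int(path[i].sum()) for i in seriation_order(path)]
print(levenshtein(sa, sb))                              # -> 2
```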

2.
This paper presents a novel unified hierarchical structure for scalable edit propagation. Our method is based on the key observation that, in edit propagation, appearance varies very smoothly in regions whose appearance differs from the user-specified pixels. Uniform sampling in these regions leads to redundant computation. We propose a quadtree-based adaptive subdivision method such that more samples are selected in regions similar to the user-specified regions and fewer in those that differ. As a result, both the computation and the memory requirement are significantly reduced. In edit propagation, an edge-preserving propagation function is first built, and the full solution for all pixels can be computed by interpolating from the solution obtained on the adaptively subdivided domain. Furthermore, our approach can easily be extended to accelerate video edit propagation using an adaptive octree structure. To improve user interaction, we introduce several new Gaussian mixture model (GMM) brushes to find pixels similar to the user-specified regions. Compared with previous methods, our approach requires significantly less time and memory while achieving visually identical results. Experimental results demonstrate the efficiency and effectiveness of our approach on high-resolution photographs and videos.
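A toy Python sketch of the quadtree sampling strategy: blocks whose appearance is nearly uniform become single samples, so smooth regions contribute far fewer unknowns. The uniformity threshold is an assumption, and the propagation solve itself is omitted:

```python
# Sketch: recursively subdivide a (square, power-of-two) image until each block
# is nearly uniform, keeping one representative sample per leaf. The actual
# edge-preserving propagation and interpolation steps are omitted.
import numpy as np

def quadtree_samples(img, x=0, y=0, size=None, tol=10.0):
    """Yield one (row, col) sample per homogeneous quadtree leaf."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size == 1 or block.std() < tol:       # uniform enough: a single sample
        yield (y + size // 2, x + size // 2)
        return
    h = size // 2                            # otherwise split into 4 children
    for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
        yield from quadtree_samples(img, x + dx, y + dy, h, tol)

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 200 + rng.normal(0, 2, (32, 32))    # one "edited" region
samples = list(quadtree_samples(img))
print(f"{len(samples)} samples instead of {img.size} pixels")
```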

3.
Multimedia Tools and Applications - A significant issue in the domain of optical character recognition is handwritten text recognition. Here, two novel feature extraction techniques are proposed...

4.
Detecting duplicate entities in biological data is an important research task. In this paper, we propose a novel, context-sensitive Markov random field-based edit distance (MRFED) for this task. We apply Markov random field theory to the Needleman–Wunsch distance and combine MRFED with TFIDF, a token-based distance algorithm, resulting in SoftMRFED. We compare SoftMRFED with other distance algorithms such as Levenshtein, SoftTFIDF, and Monge–Elkan on two matching tasks: biological entity matching and synonym matching. The experimental results show that SoftMRFED significantly outperforms the other edit distance algorithms on several test data collections. In addition, SoftMRFED's performance is superior to that of token-based distance algorithms in both matching tasks.
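For reference, the Needleman–Wunsch alignment that MRFED extends can be written in a few lines of Python; the Markov random field context terms and the TFIDF combination that produce SoftMRFED are not shown, and the example strings are hypothetical:

```python
# Sketch: plain Needleman-Wunsch global alignment score between two strings.
import numpy as np

def needleman_wunsch(a: str, b: str, match=1, mismatch=-1, gap=-1) -> int:
    """Optimal global alignment score of a and b under linear gap costs."""
    s = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    s[:, 0] = gap * np.arange(len(a) + 1)
    s[0, :] = gap * np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = s[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            s[i, j] = max(diag, s[i - 1, j] + gap, s[i, j - 1] + gap)
    return int(s[-1, -1])

# Two hypothetical synonym strings for a biological entity:
print(needleman_wunsch("tumor protein p53", "tumour protein p53"))
```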

5.
In this paper, we propose an edit propagation algorithm for image manipulation that uses quad-tree data structures. First, we use a quad-tree to adaptively group all pixels into clusters. Then, we build a manifold-preserving propagation function over the clusters using locally linear embedding with an improved distance. Moreover, we employ an adaptive weight function built on cell corners instead of individual pixels; because the number of corners is smaller than the number of pixels, this improves runtime performance. Finally, the edits of all pixels can be computed by interpolating the edits solved on the clusters. Compared with previous approaches, our method requires less time without sacrificing visual quality. Experimental results demonstrate two applications of our algorithm: grayscale image colorization and color image recoloring.
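A small sketch of the manifold-preserving ingredient: locally linear embedding (LLE) weights that reconstruct a pixel's feature vector from its neighbors. The quad-tree clustering and corner-based weighting are not reproduced, and the regularization constant is an assumption:

```python
# Sketch: solve for LLE reconstruction weights w with sum(w) = 1; propagating
# edits with such weights preserves the local manifold structure of features.
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Weights minimizing ||x - sum_i w_i * neighbors[i]||^2, sum(w) = 1."""
    G = neighbors - x                       # shift so x sits at the origin
    C = G @ G.T                             # local Gram (covariance) matrix
    C = C + reg * np.trace(C) * np.eye(len(neighbors))   # regularize
    w = np.linalg.solve(C, np.ones(len(neighbors)))
    return w / w.sum()

x = np.array([0.5, 0.5, 0.5])               # e.g. an RGB feature vector
nbrs = np.array([[0.4, 0.5, 0.6],
                 [0.6, 0.5, 0.4],
                 [0.5, 0.6, 0.5]])
w = lle_weights(x, nbrs)
print(w.round(3), (w @ nbrs).round(3))       # weights sum to 1, reconstruct x
```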

6.
With the rapid growth in the availability and popularity of interpersonal, behavior-rich resources such as blogs and other social media avenues, emerging opportunities and challenges arise as people now can, and do, actively use computational intelligence to seek out and understand the opinions of others. The study of the collective behavior of individuals has implications for business intelligence, predictive analytics, customer relationship management, and the examination of online collective action as manifested by flash mobs, the Arab Spring (2011), and other such events. In this article, we introduce a nature-inspired theory that models collective behavior from data observed on blogs using swarm intelligence, where the goal is to accurately model and predict the future behavior of a large population after observing its interactions during a training phase. Specifically, an ant colony optimization model is trained with behavioral trends from the blog data and tested on real-world blogs. Promising results were obtained in trend prediction using an ant-colony-based pheromone classifier and the CHI (chi-square) statistical measure. We provide empirical guidelines for selecting suitable model parameters, conclude with interesting observations, and envision future research directions.
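The pheromone bookkeeping at the heart of such an ant colony model fits in a few lines; the Python sketch below shows only the evaporate-and-deposit update, with the blog features, CHI feature selection, and classifier left out (rates and data here are made up):

```python
# Sketch: one ACO-style pheromone update per observation round. Trails that
# agree with observed behavior are reinforced; all trails slowly evaporate.
import numpy as np

def update_pheromone(tau, visits, rho=0.1, q=1.0):
    """Evaporate all trails by rho, then deposit q on the visited ones."""
    return (1.0 - rho) * tau + q * visits

tau = np.ones(5)                            # pheromone per candidate trend
for _ in range(20):                         # simulated training observations
    visits = np.array([1, 0, 1, 0, 0])      # trails consistent with the data
    tau = update_pheromone(tau, visits)
print((tau / tau.sum()).round(3))           # normalized trail strengths
```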

7.
One major problem with purchasing through the Web is locating reliable suppliers that offer the exact product or service you need. In the usual approach, you access an indexing-based search engine, specify keywords for the purchase, and initiate the search. The outcome is typically a list ranked according to keyword matches; useful, but not always helpful. Keyword matches provide only one ingredient for finding the right Web sites. The ranking should also consider the satisfaction of previous customers purchasing from those sites, customer profiles, and customer behavior. The Obelix search engine uses reconfigurable technology to apply customer satisfaction data obtained from the Internet service provider infrastructure to refine its search criteria. The Obelix system collects data about customer activities, calculates a customer satisfaction index, and updates the search engines with its findings.
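A hypothetical scoring rule in the spirit of this design is sketched below; the blend weight and the satisfaction index are illustrative assumptions, not Obelix's actual computation:

```python
# Sketch: rank sites by a convex blend of keyword relevance (0-1) and a
# customer satisfaction index (0-1), both assumed precomputed.
def rank_score(keyword_match: float, satisfaction: float, alpha: float = 0.7) -> float:
    return alpha * keyword_match + (1 - alpha) * satisfaction

sites = {"shopA": (0.90, 0.40), "shopB": (0.70, 0.95)}   # made-up data
ranked = sorted(sites, key=lambda s: rank_score(*sites[s]), reverse=True)
print(ranked)   # shopB outranks shopA despite the weaker keyword match
```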

8.
9.
Phishing is currently a hot research topic in information security, and detection based on domain-name information is one of the more widely used approaches. This paper uses edit distance to find domain names that are close to known legitimate domains, extracts maximum word-match, domain-segmentation, and URL-segmentation features from the domain information, and trains a Bayesian classifier on these features; a URL is judged to be phishing or not according to the probability of its features belonging to each class. Experimental results show that the method effectively improves detection accuracy.
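A minimal Python sketch of the first step, flagging domains that sit within a small edit distance of known legitimate ones; the word-match and segmentation features and the Bayesian classifier are not reproduced, and the whitelist is illustrative:

```python
# Sketch: Levenshtein distance to a whitelist; a small nonzero distance to a
# legitimate domain is a typical typosquatting/phishing signal.
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic rolling-row DP."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[-1] + 1,                # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

KNOWN = ["paypal.com", "google.com", "taobao.com"]       # illustrative whitelist
for candidate in ["paypa1.com", "example.org"]:
    d = min(edit_distance(candidate, k) for k in KNOWN)
    print(candidate, "suspicious" if 0 < d <= 2 else "no near match", f"(d={d})")
```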

10.
11.
Due to increasing competition caused by globalization manufacturers have to reduce costs and at the same time provide better products to their customers’ individual needs. This can only be done, if the companies are able to understand the behavior of their customers and forecast the sales numbers for their individual products. One way to get a better prognosis of customer behavior patterns are observations on public market places. But the companies have to link together the observations with events influencing the decisions of customers. This can be done by using a decision support system which was developed for retailers in combination with a data warehouse. The experiences from this project can be transferred to manufacturing companies as well, helping them to achieve better planning data for the manufacturing process.  相似文献   

12.
Cloud liquid water path (LWP) is an important parameter for validating forecasts obtained from circulation models of high spatial resolution. At present, it is not measured on a routine basis. As part of the Advanced Very High Resolution Radiometer (AVHRR) Processing scheme Over cLoud, Land and Ocean (APOLLO), a parametrization scheme that derives optical thickness from cloud reflectance, and from it the LWP, has been adopted and adjusted to AVHRR channel-1 counts. From these counts, top-of-atmosphere bidirectional reflectance data are obtained. Using only APOLLO-derived fully cloudy pixels, the directional hemispherical cloud reflectance to which the parametrization scheme refers is derived, and the LWP of each fully cloudy pixel is determined. As a first application, the mean LWP of 63.5 × 63.5 km² boxes is computed and compared to a 14-hour LWP forecast made with the Europa-Modell of the Deutscher Wetterdienst. The location of the clouds seems to be forecast rather well by the model. However, the LWP computed by the model is higher (by a factor of 4 or 5) than that derived from AVHRR data using APOLLO. A first validation by means of aircraft measurements shows that the APOLLO-derived LWP is too low by about 50 per cent. This reduces the discrepancy with the model, but the model-predicted LWP still seems to be too high by a factor of 3.

13.
The goal of this study is to compare the influence of celebrity endorsements with that of online customer reviews on female shopping behavior. Based on the AIDMA and AISAS models, we design an experiment to investigate consumer responses to a search good and an experience good, respectively. The results revealed that a search good (shoes) endorsed by a celebrity in an advertisement evoked significantly more attention, desire, and action from consumers than did an online customer review. We also found that online customer reviews scored higher than celebrity endorsement on participants' memory, search, and share attitudes toward the experience good (toner). Implications for marketers as well as suggestions for future research are discussed.

14.
The adaptive constrained distance transformation (ACDT) is proposed to solve the vehicle path planning problem. The ACDT generalizes the constrained distance transformation by replacing its incremental distance with an incremental cost. By defining the incremental cost according to the incremental distance, the vehicle characteristics, and the local spatial properties of the terrain, the ACDT can solve vehicle path planning problems that account for more than Euclidean distance and hard constraints.
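A hedged Python sketch of a cost-weighted distance transform in this spirit: Dijkstra's algorithm over a grid where the step cost is the terrain cost of the cell entered, standing in for vehicle- and terrain-dependent incremental cost (the cost model here is an assumption, not the ACDT's):

```python
# Sketch: accumulate minimum traversal cost from every cell to a goal over a
# 4-connected grid; impassable cells would carry infinite terrain cost.
import heapq
import numpy as np

def cost_transform(terrain, goal):
    """Minimum accumulated cost from each cell to `goal`."""
    rows, cols = terrain.shape
    dist = np.full(terrain.shape, np.inf)
    dist[goal] = 0.0
    heap = [(0.0, goal)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                         # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + terrain[nr, nc]     # incremental *cost*, not distance
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

terrain = np.ones((5, 5))
terrain[2, 1:4] = 10.0                       # a costly ridge to route around
print(cost_transform(terrain, (4, 4)).round(1))
```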

15.
A fully probabilistic approach to reconstructing Gaussian graphical models from distance data is presented. The main idea is to extend the usual central Wishart model of traditional methods to a likelihood that depends only on pairwise distances, making it independent of geometric assumptions about the underlying Euclidean space. This extension has two advantages: the model becomes invariant against potential bias terms in the measurements, and it can be used in situations where the input is a kernel or distance matrix, without requiring direct access to the underlying vectors. The latter aspect opens up a huge new application field for Gaussian graphical models, as network reconstruction is now possible from any Mercer kernel, be it on graphs, strings, probabilities, or more complex objects. We combine this likelihood with a suitable prior to enable Bayesian network inference, present an efficient MCMC sampler for this model, and discuss the estimation of module networks. Experiments demonstrate the high quality and usefulness of the inferred networks.
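As background for why a distance-only likelihood loses nothing essential, the classical double-centering identity links a Mercer kernel K to squared pairwise distances D; this is a standard fact, not the paper's derivation:

```latex
% D is unchanged when all underlying points are translated, which is exactly
% the bias-invariance noted above; J is the usual centering matrix.
D_{ij} = K_{ii} + K_{jj} - 2\,K_{ij},
\qquad
K_c = -\tfrac{1}{2}\, J D J,
\qquad
J = I_n - \tfrac{1}{n}\,\mathbf{1}\mathbf{1}^{\top}.
```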

16.
17.
Previous research has emphasized the virtues of customer insights as a key source of competitive advantage. The rise of customers’ social media use allows firms to collect customer data in an ever-increasing volume and variety. However, to date, little is known about the capabilities required of firms to turn social media data into valuable customer insights and exploit these insights to create added value for customers. Based on the dynamic capabilities perspective, in particular the concept of absorptive capacity (ACAP), the authors conducted multiple case studies of seven mid-sized and large B2C firms in Switzerland and Germany. The results provide an in-depth analysis of the underlying processes of ACAP as well as contingent factors – that is, physical, human and organizational resources that underpin the firms’ ACAP.

18.
We introduce a new approach to constructing smooth piecewise curves that represent realistic road paths. Given a GIS database of road networks in which sampled points are organized into 3D polylines, our method creates horizontal, then vertical curves, and finally combines them to produce 3D road paths. We first estimate the possibility of each point being a junction between two separate primitive curve segments. Next, we design a tree-traversal algorithm to expand sequences of locally best-fitting primitives, which are then merged together subject to the G1 continuity constraint and civil engineering rules. We apply the Levenberg-Marquardt method to minimize the error between the resulting curve and the sampled points while preserving G1 continuity.
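The fitting step can be sketched with SciPy's Levenberg-Marquardt solver; the circular-arc primitive, noise level, and starting guess below are assumptions, and the junction detection and G1 merging of the full pipeline are not modeled:

```python
# Sketch: fit a circle primitive to noisy sampled road points by minimizing
# point-to-curve residuals with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, pts):
    """Signed distance of each sample to the circle (cx, cy, r)."""
    cx, cy, r = params
    return np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r

rng = np.random.default_rng(1)
t = np.linspace(0.2, 1.2, 30)                        # samples along an arc
pts = np.c_[100 * np.cos(t), 100 * np.sin(t)] + rng.normal(0, 0.5, (30, 2))

fit = least_squares(residuals, x0=[0.0, 0.0, 50.0], args=(pts,), method="lm")
print(fit.x.round(2))                                # approx. (0, 0, 100)
```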

19.
Culberson and Rudnicki [J.C. Culberson, P. Rudnicki, A fast algorithm for constructing trees from distance matrices, Inform. Process. Lett. 30 (4) (1989) 215-220] gave an algorithm that reconstructs a tree with degree restricted to d from its distance matrix. According to their analysis, it runs in time O(dn log_d n) for topological trees. However, this turns out to be false; we show that the algorithm exceeds this bound in the topological case, giving tight examples.

20.