Similar Literature
 20 similar documents found (search time: 15 ms)
1.
With the increasing popularity of information sharing and the growing number of social network users, relationship management is one of the key challenges arising in the context of social networks. One particular relationship management task aims at identifying the relationship types that hold between social network users and their contacts. Manually identifying relationship types is one possible solution; however, it is a time-consuming and tedious task that requires constant maintenance. In this paper, we present a rule-based approach that focuses on published photos as a valuable source for identifying relationship types. Our approach automatically generates relevant relationship discovery rules based on a crowdsourcing methodology that constructs useful photo datasets. Knowledge is first retrieved from these datasets and then used to create relationship discovery rules. The resulting set of rules is extended with a number of predefined common-sense rules and then personalized using a rule mining algorithm. Experimental results demonstrate the correctness and efficiency of the generated rule sets in identifying relationship types.
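The rule-based matching the abstract describes can be sketched roughly as follows. The photo attributes, rule conditions, and relationship labels below are invented for illustration and are not the paper's actual discovery rules.

```python
# Hypothetical relationship discovery rules: each rule pairs a condition on
# photo metadata with a relationship label. The attributes ("scene",
# "co_tagged") and labels are illustrative assumptions only.
RULES = [
    (lambda p: p["scene"] == "wedding" and p["co_tagged"] >= 5, "family"),
    (lambda p: p["scene"] == "office", "colleague"),
]

def discover_relationship(photo, rules=RULES):
    """Return the label of the first rule whose condition matches the photo."""
    for condition, rel_type in rules:
        if condition(photo):
            return rel_type
    return "unknown"

rel = discover_relationship({"scene": "office", "co_tagged": 2})  # "colleague"
```

In the paper, such rules are not hand-written but generated from crowdsourced photo datasets and then personalized; this sketch only shows the matching step.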

2.
Link prediction is a well-known task in Social Network Analysis that deals with the occurrence of connections in a network. It consists of using the network structure up to a given time in order to predict the appearance of links in the near future. The majority of previous work in link prediction applies proximity measures (e.g., path distance, common neighbors) to pairs of nodes that are not connected at present in order to predict new connections in the future; new links can be predicted, for instance, by ordering the pairs of nodes according to their proximity scores. A limitation commonly observed in previous work is that only the current state of the network is used to compute the proximity scores, without taking any temporal information into account (i.e., a static graph representation is adopted). In this work, we propose a new proximity measure for link prediction based on the concept of temporal events. We define a temporal event for a pair of nodes according to the creation, maintenance, or interruption of the relationship between the nodes in consecutive periods of time. We propose an event-based score that is updated over time by rewarding the temporal events observed between the pair of nodes under analysis and their neighborhood. The assigned rewards depend on the type of temporal event observed (e.g., if a link is conserved over time, a positive reward is assigned). Hence, the dynamics of links as the network evolves is used to update representative scores for pairs of nodes, rewarding pairs that formed or preserved a link and penalizing those that are no longer connected. In our experiments, we evaluated the proposed event-based measure in different link prediction scenarios using co-authorship networks. Promising results were observed when the proposed measure was compared to both static proximity measures and a time series approach (a more competitive method) that also exploits temporal information for link prediction.
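The event-based scoring idea can be sketched as follows. The concrete reward values and the snapshot data layout are assumptions for illustration, not the paper's exact definitions.

```python
# Minimal sketch of an event-based score for link prediction. Snapshots are
# sets of undirected edges (frozensets) observed in consecutive time periods;
# the reward values are illustrative assumptions.
def event_based_scores(snapshots, reward_new=1.0, reward_kept=2.0, penalty_lost=-1.0):
    """Accumulate a score per node pair from temporal link events."""
    scores = {}
    for prev, curr in zip(snapshots, snapshots[1:]):
        for pair in prev | curr:
            key = tuple(sorted(pair))
            if pair in curr and pair in prev:   # link maintained
                delta = reward_kept
            elif pair in curr:                  # link created
                delta = reward_new
            else:                               # link interrupted
                delta = penalty_lost
            scores[key] = scores.get(key, 0.0) + delta
    return scores

snaps = [
    {frozenset({"a", "b"})},
    {frozenset({"a", "b"}), frozenset({"b", "c"})},
    {frozenset({"b", "c"})},
]
s = event_based_scores(snaps)
# ("a","b"): kept then lost -> 2.0 - 1.0 = 1.0
# ("b","c"): created then kept -> 1.0 + 2.0 = 3.0
```

Pairs that keep their links accumulate high scores, while interrupted links are penalized, which is the qualitative behavior the abstract describes.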

3.
A framework for joint community detection across multiple related networks
Community detection in networks is an active area of research with many practical applications. However, most of the early work in this area has focused on partitioning a single network or a bipartite graph into clusters/communities. With the rapid proliferation of online social media, it has become increasingly common for web users to have a noticeable presence across multiple web sites. This raises the question of whether it is possible to combine information from several networks to improve community detection. In this paper, we present a framework that identifies communities simultaneously across different networks and learns the correspondences between them. The framework is applicable to networks generated from multiple web sites as well as to those derived from heterogeneous nodes of the same web site. It also allows the incorporation of prior information about potential relationships between the communities in different networks. Extensive experiments have been performed on both synthetic and real-life data sets to evaluate the effectiveness of our framework. Our results show the superior performance of simultaneous community detection over three alternative methods, including normalized cut and matrix factorization on a single network or a bipartite graph.

4.
The objective of this paper is to present and discuss a link mining algorithm called CorpInterlock and its application to the financial domain. This algorithm selects the largest strongly connected component of a social network and ranks its vertices using several indicators of distance and centrality. These indicators are merged with other relevant indicators in order to forecast new variables using a boosting algorithm. We applied the CorpInterlock algorithm to integrate the metrics of an extended corporate interlock (the social network of directors and financial analysts) with corporate fundamental variables and analysts' predictions (consensus). CorpInterlock used these metrics to forecast the trend of the cumulative abnormal return and earnings surprise of S&P 500 companies. The rationale behind this approach is that the corporate interlock has a direct effect on future earnings and returns because these variables affect directors' and managers' compensation. Financial analysts engage in what agency theory calls the "earnings game": managers want to meet the analysts' financial forecasts, and analysts want to increase their compensation or the business of the company that they follow. Following the CorpInterlock algorithm, we calculated a group of well-known social network metrics and integrated them with economic variables using LogitBoost. We used the results of the CorpInterlock algorithm to evaluate several trading strategies. We observed an improvement in the Sharpe ratio (risk-adjusted return) when we used "long only" trading strategies with the extended corporate interlock instead of the basic corporate interlock before regulation Fair Disclosure (FD) was adopted (1998–2001). There was no major difference among the trading strategies after 2001. Additionally, the CorpInterlock algorithm implemented with LogitBoost showed a significantly lower test error than the CorpInterlock algorithm implemented with logistic regression. We conclude that the CorpInterlock algorithm is an effective forecasting algorithm and supports profitable trading strategies. A preliminary version of this paper was presented at the Link Analysis: Dynamics and Statics of Large Networks Workshop at the International Conference on Knowledge Discovery and Data Mining (KDD) 2006.
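The first step the abstract describes, extracting the largest strongly connected component of a directed network, can be sketched with Kosaraju's algorithm; the centrality ranking and boosting stages are not shown, and the example graph is invented.

```python
# Minimal sketch: largest strongly connected component via Kosaraju's
# algorithm (two DFS passes, the second on the reversed graph).
def largest_scc(graph):
    """graph: dict node -> list of successor nodes."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    order, seen = [], set()

    def dfs(u, g, out):
        # Iterative DFS that appends nodes to `out` in postorder.
        stack = [(u, iter(g.get(u, [])))]
        seen.add(u)
        while stack:
            node, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(g.get(v, []))))
                    break
            else:
                stack.pop()
                out.append(node)

    for u in nodes:                 # pass 1: finish order on the graph
        if u not in seen:
            dfs(u, graph, order)

    rev = {u: [] for u in nodes}    # build the reversed graph
    for u, vs in graph.items():
        for v in vs:
            rev[v].append(u)

    seen.clear()
    best = []
    for u in reversed(order):       # pass 2: each DFS tree is one SCC
        if u not in seen:
            comp = []
            dfs(u, rev, comp)
            if len(comp) > len(best):
                best = comp
    return set(best)

scc = largest_scc({"a": ["b"], "b": ["c"], "c": ["a", "d"], "d": []})  # {"a","b","c"}
```

Restricting the analysis to this component guarantees that distance-based indicators are defined between every pair of selected vertices.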

5.
Link prediction studies how to use existing information in a network to predict possible relationship links, and it has become one of the hot research topics in data mining. Community structure is ubiquitous in social networks, and communities have an important influence on link formation, yet this influence has not been explored in depth by most link prediction methods. To address this, this paper proposes a new link prediction method that uses community information to enrich the description of node-pair samples and performs learning and prediction within a supervised learning framework. Experimental results on the real-world datasets FaceBook and ACF show that link prediction with community information achieves higher accuracy.
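A minimal sketch of how community membership can augment node-pair features for a supervised link predictor; the specific features (common-neighbor count plus a same-community flag) are illustrative assumptions, not the paper's exact feature set.

```python
# Hedged sketch: build a feature vector for a node pair that combines a
# structural proximity feature with community information.
def pair_features(u, v, adj, community):
    """[number of common neighbors, 1 if u and v share a community else 0]."""
    common = len(adj.get(u, set()) & adj.get(v, set()))
    same_comm = 1 if community.get(u) == community.get(v) else 0
    return [common, same_comm]

adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
community = {"a": 0, "b": 0, "c": 0, "d": 1}
feats = pair_features("a", "d", adj, community)  # [1, 0]
```

Such vectors, labeled by whether a link later appears, can be fed to any off-the-shelf classifier in the supervised framework the abstract describes.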

6.
Instagram is the fastest growing social network site globally. This study investigates motives for its use and its relationship to contextual age and narcissism. A survey of 239 college students revealed that the main reasons for Instagram use are "Surveillance/Knowledge about others," "Documentation," "Coolness," and "Creativity." The next significant finding was a positive relationship between those who scored high in interpersonal interaction and using Instagram for coolness, creative purposes, and surveillance. Another interesting finding is a positive relationship between high levels of social activity (traveling, going to sporting events, visiting friends, etc.) and being motivated to use Instagram as a means of documentation. With reference to narcissism, there was a positive relationship between using Instagram to be cool and for surveillance. The theoretical contributions of this study relate to our understanding of uses and gratifications theory: it uncovers new motives for social media use not identified in previous literature.

7.
Modeling and Simulation of Link 16 Based on OPNET
Based on a study of Link 16, JTIDS terminal models and damage-loading node models conforming to the OSI layered standard are built, and the simulation platform implements the basic functions of Link 16. Three key problems are addressed: changes in network topology caused by the withdrawal of functional terminals; the cross-platform interface between link-level and network-level simulation; and the rapid, database-driven construction of large-scale network simulation scenarios. For different configurations, the simulation experiments yield network and terminal performance metrics under various conditions, effectively guiding the design and optimization of Link 16 networks.

8.
Building trustworthy knowledge graphs for cyber–physical social systems (CPSS) is a challenge. In particular, current approaches relying on human experts have limited scalability, while automated approaches are often not validated by users, resulting in knowledge graphs of questionable quality. This paper introduces a novel pervasive knowledge graph builder for mobile devices that brings together automation, experts' knowledge, and crowdsourced citizens' knowledge. The knowledge graph grows via automated link predictions, made using genetic programming, that are validated by humans to improve transparency and calibrate accuracy. The knowledge graph builder is designed for pervasive devices such as smartphones and preserves privacy by localizing all computations. The accuracy, practicality, and usability of the knowledge graph builder are evaluated in a real-world social experiment that involves a smartphone implementation and a Smart City application scenario. The proposed knowledge graph building methodology outperforms a baseline method in terms of accuracy while demonstrating efficient computation on smartphones and the feasibility of the pervasive human supervision process in terms of high interaction throughput. These findings promise new opportunities to crowdsource and operate pervasive reasoning systems for cyber–physical social systems in Smart Cities.

9.
Cluster ranking with an application to mining mailbox networks
We initiate the study of a new clustering framework, called cluster ranking. Rather than simply partitioning a network into clusters, a cluster ranking algorithm also orders the clusters by their strength. To this end, we introduce a novel strength measure for clusters, the integrated cohesion, which is applicable to arbitrary weighted networks. We then present a new cluster ranking algorithm, called C-Rank. We provide extensive theoretical and empirical analysis of C-Rank and show that it is likely to have high precision and recall. A main component of C-Rank is a heuristic algorithm for finding sparse vertex separators. At the core of this algorithm is a new connection between vertex betweenness and multicommodity flow. Our experiments focus on mining mailbox networks. A mailbox network is an egocentric social network consisting of the contacts with whom an individual exchanges email. Edges between contacts represent the frequency of their co-occurrence on message headers. C-Rank is well suited to mining such networks, since they abound with overlapping communities of highly variable strengths. We demonstrate the effectiveness of C-Rank on the Enron data set, consisting of 130 mailbox networks.

10.
International Journal of Computer Mathematics, 2012, 89(11): 2233–2245
A data mining algorithm such as Apriori discovers a huge number of association rules (ARs), so efficiently ranking all these rules is an important issue. This paper suggests a data envelopment analysis (DEA) method for ranking the discovered ARs using maximum discrimination between the interestingness criteria defined for all ARs. It is shown that the proposed DEA model has a unique optimal solution, which can be computed efficiently when the maximum discrimination between the criteria, i.e., the difference between the DEA weights, is considered. The contribution of this study is twofold. First, we show that using the conventional DEA model for ranking ARs may produce an invalid result, because the weights corresponding to the interestingness criteria would not discriminate between the criteria. This is investigated for a dataset consisting of 46 ARs with four criteria, namely support, confidence, itemset value, and cross-selling. Second, the paper introduces maximum discrimination between the weights of the criteria and obtains the optimal solution of the corresponding DEA model efficiently, without the need to solve the related mathematical models; moreover, this model yields a smaller number of useful rules. A comparative analysis is then used to show the advantage of the proposed DEA method.

11.
Marco, Bram, Robert P.W. Pattern Recognition, 2005, 38(12): 2409–2418
A linear, discriminative, supervised technique for reducing feature vectors extracted from image data to a lower-dimensional representation is proposed. It is derived from classical linear discriminant analysis (LDA), extending this technique to cases where there is dependency between the output variables, i.e., the class labels, and not only between the input variables. (The latter can readily be dealt with in standard LDA.) The novel method is useful, for example, in supervised segmentation tasks in which high-dimensional feature vectors describe the local structure of the image.

The principal idea is that where standard LDA merely takes into account a single class label for every feature vector, the new technique also incorporates the class labels of its neighborhood in the analysis. In this way, the spatial class label configuration in the vicinity of every feature vector is accounted for, resulting in a technique suitable for, e.g., image data.

This extended LDA, which takes spatial label context into account, is derived from a formulation of standard LDA in terms of canonical correlation analysis. The novel technique is called the canonical contextual correlation projection (CCCP).

An additional drawback of LDA is that it cannot extract more features than the number of classes minus one. In the two-class case this means that only a reduction to one dimension is possible. Our contextual LDA approach can avoid such extreme deterioration of the classification space and retain more than one dimension.

The technique is exemplified on a pixel-based medical image segmentation problem, for which it is shown that it may give a significant improvement in segmentation accuracy.


12.
Determining the titles of Web pages is an important element in characterizing and categorizing the vast number of Web pages. A few approaches exist for automatically determining the titles of Web pages. In an R&D project for the operator of Naver (Korea's largest portal site), we developed a new method that makes use of anchor texts and an analysis of the links among Web pages. In this paper, we describe our method and present experimental results on its performance.
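The anchor-text idea can be hedged into a small sketch: choose a candidate title for a page as the most frequent anchor text used in links pointing to it. The paper's actual method also analyzes link structure; this only shows the aggregation step, and the example links are invented.

```python
# Sketch: pick a page title from the anchor texts of its incoming links.
from collections import Counter

def title_from_anchors(inlinks):
    """inlinks: list of (source_url, anchor_text) pairs for one target page."""
    counts = Counter(text.strip().lower() for _, text in inlinks if text.strip())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

links = [("p1", "Naver Search"), ("p2", "naver search"), ("p3", "home")]
title = title_from_anchors(links)  # "naver search"
```

Weighting anchors by the quality of their source pages, as link analysis allows, would refine this plain majority vote.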

13.
Increasing the capacity of wireless mesh networks has motivated numerous studies. In this context, cross-layer optimization techniques involving the joint use of routing and link scheduling can provide better capacity improvements. Most works in the literature propose linear programming models to combine both mechanisms. However, this approach has high computational complexity and cannot be extended to large-scale networks. Alternatively, algorithmic solutions are less complex and can obtain capacity values close to the optimum. Thus, we propose the REUSE algorithm, which combines routing and link scheduling and aims to increase throughput capacity in wireless mesh networks. Through simulations, the performance of the proposal is compared to a linear programming model we developed, which provides optimal results, and to other mechanisms proposed in the literature that also address the problem algorithmically. We observed higher capacity values in favor of our proposal when compared to the benchmark algorithms.

14.
Rajendra, Laxmi. Neurocomputing, 2007, 70(16–18): 2645
Line flow (real-power) contingency selection and ranking is performed to choose the contingencies that cause the worst overloading problems. In this paper, a cascade neural network-based approach is proposed for fast line flow contingency selection and ranking. The developed cascade neural network is a combination of a filter module and a ranking module. All contingency cases are applied to the filter module, which is trained with a modified BP algorithm to classify them as either critical or non-critical contingencies. The screened critical contingencies are passed to the ranking module (a four-layered feed-forward artificial neural network (ANN)) for further ranking. The effectiveness of the proposed ANN-based method is demonstrated by applying it to contingency screening and ranking at different loading conditions for the IEEE 14-bus system. Once trained, the cascade neural network provides fast and accurate screening and ranking for unknown patterns and is found to be suitable for on-line applications at an energy management centre.

15.
Ranking importance of input parameters of neural networks
Artificial neural networks have been used for simulation, modeling, and control purposes in many engineering applications as an alternative to conventional expert systems. Although neural networks usually do not reach the level of performance exhibited by expert systems, they enjoy the tremendous advantage of very low construction costs. This paper addresses the issue of identifying important input parameters in building a multilayer backpropagation network for a typical class of engineering problems. These problems are characterized by a large number of input variables of varying degrees of importance, and identifying the important variables is a common issue, since eliminating the unimportant inputs simplifies the problem and often yields a more accurate model or solution. We compare three methods for ranking input importance (sensitivity analysis, fuzzy curves, and change of MSE (mean squared error)) and analyze their effectiveness. Simulation results based on experiments with simple mathematical functions as well as a real engineering problem are reported. Based on this analysis and our experience in building neural networks, we also propose a general methodology for building backpropagation networks for typical engineering applications.
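The first of the three ranking methods, sensitivity analysis, can be sketched as a simple perturb-and-measure loop; this is a generic illustration with a toy function, not the paper's experimental setup.

```python
# Sketch of sensitivity-based input ranking: perturb each input around a base
# point and rank inputs by the magnitude of the output change.
def sensitivity_ranking(f, x0, delta=1e-3):
    """Rank input indices by |f(x0 + delta*e_i) - f(x0)|, largest first."""
    base = f(x0)
    sens = []
    for i in range(len(x0)):
        x = list(x0)
        x[i] += delta
        sens.append((abs(f(x) - base), i))
    return [i for _, i in sorted(sens, reverse=True)]

# Toy model: y = 10*x0 + x1, so input 0 should rank first.
order = sensitivity_ranking(lambda x: 10 * x[0] + x[1], [0.5, 0.5])  # [0, 1]
```

With a trained network in place of the lambda, the same loop ranks the network's inputs; fuzzy curves and change-of-MSE are separate techniques not shown here.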

16.
A new algorithm for ranking the input features and obtaining the best feature subset is developed and illustrated in this paper. The asymptotic formula for mutual information and the expectation-maximisation (EM) algorithm are used to develop the feature selection algorithm. We not only consider the dependence between the features and the class but also measure the dependence among the features. Even for noisy data, the algorithm still works well. An empirical study is carried out to compare the proposed algorithm with existing algorithms. The proposed algorithm is illustrated by application to a variety of problems.
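The quantity such a selector scores features with is mutual information. The paper uses an asymptotic formula together with EM; the sketch below shows only the plain plug-in estimate for discrete data, as a point of reference.

```python
# Plug-in estimate of discrete mutual information I(X;Y) in bits.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# A feature identical to the class carries the full class entropy (1 bit here).
mi = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
```

Ranking features by I(feature; class) while penalizing I(feature; feature) redundancy is the general shape of the dependence criteria the abstract mentions.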

17.
This paper optimizes and improves the DSR protocol by using state information about links and node send buffers, proposing a Performance-DSR (PDSR) protocol. The route-update and route-selection mechanisms of PDSR are described, and the performance of PDSR and DSR is analyzed and compared under different node movement speeds. The results show that PDSR adapts better than DSR to MANETs with rapidly changing network topology.

19.
The structural differences between web pages and plain text mean that traditional IR ranking techniques cannot keep pace with the development of the Web. To rank retrieval results reasonably, link analysis methods based on the principles of bibliometric citation analysis are introduced. These methods give higher scores to pages linked to by many other pages, while also considering the similarity between anchor texts and the query. Source pages vary in quality, as do the anchor texts linking to the same page, but an anchor text from a high-quality source page is not necessarily more accurate than one from a low-quality page. Anchor texts with high query similarity are therefore adjusted: by computing the similarity between the query and each anchor text, anchor texts that are highly similar to the query but originate from source pages with low PageRank values are compensated, and the query results are re-ranked.

20.
An Improved PageRank Algorithm Based on Web Link and Content Analysis
Combining web link analysis with analysis of page content relevance, an improved PageRank algorithm called EPR (Extended PageRank) is proposed. It addresses relevance requirements by analyzing the content similarity between pages and authority requirements through link analysis. The algorithm leaves ample room for extending PageRank, and experiments show that with suitable parameter choices EPR produces better rankings than the traditional PageRank algorithm.
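One assumed form such a combination can take is a linear mix of a PageRank authority score with a content-similarity score; the mixing formula, parameter names, and example graph below are illustrative, not the paper's exact EPR definition.

```python
# Sketch: mix PageRank authority with content relevance.
def pagerank(links, d=0.85, iters=50):
    """Basic power-iteration PageRank; links: dict node -> list of out-links."""
    nodes = set(links) | {v for vs in links.values() for v in vs}
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        nxt = {u: (1 - d) / n for u in nodes}
        for u, outs in links.items():
            for v in outs:
                nxt[v] += d * pr[u] / len(outs)
        pr = nxt
    return pr

def epr_score(pr, sim, alpha=0.5):
    """Hypothetical combined score: alpha * authority + (1 - alpha) * relevance."""
    return {u: alpha * pr.get(u, 0.0) + (1 - alpha) * sim.get(u, 0.0) for u in sim}

links = {"a": ["b"], "b": ["c"], "c": ["a"]}
pr = pagerank(links)
scores = epr_score(pr, {"a": 0.9, "b": 0.1, "c": 0.1})
```

Here all three pages have equal authority (a 3-cycle), so the content-similarity term decides the ranking, which is exactly the relevance signal plain PageRank ignores.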


Copyright©北京勤云科技发展有限公司  京ICP备09084417号