Similar documents
20 similar documents found.
1.
We present a new methodology for exploring and analyzing navigation patterns on a web site. The patterns that can be analyzed consist of sequences of URL categories traversed by users. In our approach, we first partition site users into clusters such that users with similar navigation paths through the site are placed into the same cluster. Then, for each cluster, we display these paths for users within that cluster. The clustering approach we employ is model-based (as opposed to distance-based) and partitions users according to the order in which they request web pages. In particular, we cluster users by learning a mixture of first-order Markov models using the Expectation-Maximization algorithm. The runtime of our algorithm scales linearly with the number of clusters and with the size of the data, and our implementation easily handles hundreds of thousands of user sessions in memory. In the paper, we describe the details of our method and a visualization tool based on it called WebCANVAS. We illustrate the use of our approach on user-traffic data from msnbc.com.
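As a rough illustration of the clustering idea in this abstract, the sketch below fits a mixture of first-order Markov models to toy sessions with EM. The page categories, smoothing constants, and initialization are illustrative assumptions, not the WebCANVAS implementation; each EM iteration is linear in the number of clusters and in the data size, matching the scaling claim above.

```python
import math
import random

STATES = ["home", "news", "sports", "tech"]
IDX = {s: i for i, s in enumerate(STATES)}
N = len(STATES)

def normalize(v):
    t = sum(v)
    return [x / t for x in v]

def loglik(seq, init, trans):
    # log-probability of one session under a first-order Markov model
    ll = math.log(init[IDX[seq[0]]])
    for a, b in zip(seq, seq[1:]):
        ll += math.log(trans[IDX[a]][IDX[b]])
    return ll

def em_markov_mixture(sessions, k, iters=25, seed=1):
    rng = random.Random(seed)
    pi = [1.0 / k] * k
    init = [normalize([rng.random() + 0.5 for _ in range(N)]) for _ in range(k)]
    trans = [[normalize([rng.random() + 0.5 for _ in range(N)]) for _ in range(N)]
             for _ in range(k)]
    resp = []
    for _ in range(iters):
        # E-step: posterior responsibility of each cluster for each session
        resp = []
        for seq in sessions:
            w = [math.log(pi[c]) + loglik(seq, init[c], trans[c]) for c in range(k)]
            m = max(w)
            resp.append(normalize([math.exp(x - m) for x in w]))
        # M-step: re-estimate mixture weights and Markov parameters (smoothed)
        pi = normalize([sum(r[c] for r in resp) for c in range(k)])
        for c in range(k):
            ic = [1e-3] * N
            tc = [[1e-3] * N for _ in range(N)]
            for r, seq in zip(resp, sessions):
                ic[IDX[seq[0]]] += r[c]
                for a, b in zip(seq, seq[1:]):
                    tc[IDX[a]][IDX[b]] += r[c]
            init[c] = normalize(ic)
            trans[c] = [normalize(row) for row in tc]
    # hard assignment of each session to its most responsible cluster
    labels = [max(range(k), key=lambda c: r[c]) for r in resp]
    return labels, pi

# two obvious behaviour groups: news readers vs. sports readers
sessions = [["home", "news", "news", "tech"]] * 5 + [["home", "sports", "sports"]] * 5
labels, pi = em_markov_mixture(sessions, k=2)
print(labels)
```

Because sessions within each toy group are identical, their responsibilities stay identical throughout EM, so each group always lands in a single cluster.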

2.
There are many parameters that may affect the navigation behaviour of web users. Predicting the next page a web user may visit is important, since this information can be used for prefetching or for personalizing the page for that user. One successful method for determining the next web page is to construct behaviour models of the users by clustering. The success of clustering is highly correlated with the similarity measure used for calculating the similarity among navigation sequences. This work proposes a new approach for determining the next web page by extending standard clustering with a content-based semantic similarity method. The semantics of web pages are represented as sets of concepts, and thus user sessions are modelled as sequences of sets. As a result, session similarity is defined as an alignment of two sequences of sets. The success of the proposed method has been shown by applying it to real-life web log data.

3.
4.
A User Browsing Interest Migration Mining Algorithm Integrating Web Usage Mining and Content Mining   Total citations: 2 (self-citations: 0, other citations: 2)
This paper proposes a model and an algorithm for mining user browsing-interest migration patterns that integrate Web usage mining and content mining. Web pages and their clustering are introduced. By replacing the pages in user transactions with their corresponding clusters, user browsing-interest sequences are obtained, and browsing-interest migration patterns are then derived from these sequences. The model is of considerable value to site administrators for understanding user behaviour characteristics and organizing the structure of a Web site.

5.
We address the handling of time series search based on two important distance definitions: Euclidean distance and time warping distance. The conventional method reduces the dimensionality by means of a discrete Fourier transform. We apply the Haar wavelet transform technique and propose the use of a proper normalization so that the method can guarantee no false dismissal for Euclidean distance. We found that this method has competitive performance from our experiments. Euclidean distance measurement cannot handle the time shifts of patterns. It fails to match the same rise and fall patterns of sequences with different scales. A distance measure that handles this problem is the time warping distance. However, the complexity of computing the time warping distance function is high. Also, as time warping distance is not a metric, most indexing techniques would not guarantee any false dismissal. We propose efficient strategies to mitigate the problems of time warping. We suggest a Haar wavelet-based approximation function for time warping distance, called Low Resolution Time Warping, which results in less computation by trading off a small amount of accuracy. We apply our approximation function to similarity search in time series databases, and show by experiment that it is highly effective in suppressing the number of false alarms in similarity search.
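A small sketch of the Euclidean-distance side of this abstract: an orthonormal Haar transform preserves Euclidean distance exactly, and the distance computed on a truncated prefix of coefficients lower-bounds the true distance, which is what rules out false dismissals when filtering with few coefficients. The series values below are illustrative assumptions.

```python
import math

def haar(x):
    """Orthonormal Haar transform; length must be a power of two."""
    out = list(x)
    n = len(out)
    coeffs = []
    while n > 1:
        avgs = [(out[2 * i] + out[2 * i + 1]) / math.sqrt(2) for i in range(n // 2)]
        dets = [(out[2 * i] - out[2 * i + 1]) / math.sqrt(2) for i in range(n // 2)]
        coeffs = dets + coeffs   # coarse-to-fine detail ordering
        out = avgs
        n //= 2
    return out + coeffs

def euclid(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
y = [2.0, 7.0, 1.0, 8.0, 2.0, 8.0, 1.0, 8.0]
hx, hy = haar(x), haar(y)

full = euclid(hx, hy)               # equals euclid(x, y): transform is orthonormal
truncated = euclid(hx[:2], hy[:2])  # distance on the first 2 coefficients only
print(full, euclid(x, y), truncated)
```

Since each Haar step divides both the sum and the difference by sqrt(2), it is a rotation, so Euclidean distance is preserved; dropping coefficients can only remove non-negative squared terms, hence the lower bound.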

6.
The analysis of web usage has mostly focused on sites composed of conventional static pages. However, huge amounts of information available in the web come from databases or other data collections and are presented to the users in the form of dynamically generated pages. The query interfaces of such sites allow the specification of many search criteria. Their generated results support navigation to pages of results combining cross-linked data from many sources. For the analysis of visitor navigation behaviour in such web sites, we propose the web usage miner (WUM), which discovers navigation patterns subject to advanced statistical and structural constraints. Since our objective is the discovery of interesting navigation patterns, we do not focus on accesses to individual pages. Instead, we construct conceptual hierarchies that reflect the query capabilities used in the production of those pages. Our experiments with a real web site that integrates data from multiple databases, the German SchulWeb, demonstrate the appropriateness of WUM in discovering navigation patterns and show how those discoveries can help in assessing and improving the quality of the site. Received June 21, 1999 / Accepted December 24, 1999

7.
This study introduces a simple but effective visualization system that allows decision makers to easily identify groups of visitors with different sequential navigation patterns. In particular, visitors' navigation sequences are encoded in an order-dependent format so that pages visited early carry more weight in the clustering process. Experimental results on a real-world dataset show that Markov state-transition diagrams with transition probabilities based on the proposed scheme can be very useful for developing Web marketing programs tailored to visitors' preferences and interests.

8.
By defining three indicators, category clustering density, category complexity, and category clarity, this paper studies, from the perspective of corpus information measurement, how several representative Chinese word segmentation methods affect text classification performance under the latent probabilistic topic model LDA, and analyses quantitatively and qualitatively the applicability of different segmentation methods to corpora of different text types, such as web pages and academic literature, and the reasons they affect classification performance. The results show that the three indicators can effectively indicate the impact of segmentation methods on classification: the IKAnalyzer and ICTCLAS segmentation methods are most affected by category complexity and category clustering density, respectively, while the bigram segmentation method is affected by all three indicators to a similar degree, giving it good adaptability to different corpora. For academic-literature corpora, bigram segmentation yields better classification, with F1 values all above 80%, whereas web-page corpora adapt well to all segmentation methods. This paper attempts to select the segmentation method that best improves classification performance on a corpus by measuring corpus information rather than by experiment alone, so as to provide a reference for choosing a suitable Chinese segmentation method for different text types, such as web pages and academic literature, in LDA-based classification systems.

9.
Huge amounts of various web items (e.g., images, keywords, and web pages) are being made available on the Web. The popularity of such web items continuously changes over time, and mining for temporal patterns in the popularity of web items is an important problem that is useful for several Web applications; for example, the temporal patterns in the popularity of web search keywords help web search enterprises predict future popular keywords, thus enabling them to make price decisions when marketing search keywords to advertisers. However, the presence of millions of web items makes it difficult to scale up previous techniques for this problem. This paper proposes an efficient method for mining temporal patterns in the popularity of web items. We treat the popularity of web items as time-series and propose a novel measure, a gap measure, to quantify the dissimilarity between the popularity of two web items. To reduce the computational overhead for this measure, an efficient method using the Discrete Fourier Transform (DFT) is presented. We assume that the popularity of web items is not necessarily periodic. For finding clusters of web items with similar popularity trends, we show the limitations of traditional clustering approaches and propose a scalable, efficient, density-based clustering algorithm using the gap measure. Our experiments using the popularity trends of web search keywords obtained from the Google Trends web site illustrate the scalability and usefulness of the proposed approach in real-world applications.
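To make the density-based clustering step concrete, here is a DBSCAN-style sketch over toy popularity series. The mean-centered absolute-difference dissimilarity is only a stand-in for the paper's gap measure, and the eps/min_pts values are illustrative assumptions.

```python
# Density-based clustering of time series: points with enough close
# neighbours become cores and are expanded into clusters; the rest is noise.

def dissim(a, b):
    # mean absolute difference after mean-centering each series
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    return sum(abs((x - ma) - (y - mb)) for x, y in zip(a, b)) / len(a)

def dbscan(items, eps, min_pts):
    labels = [None] * len(items)   # None = unvisited, -1 = noise
    cluster = 0

    def neighbors(i):
        return [j for j in range(len(items)) if dissim(items[i], items[j]) <= eps]

    for i in range(len(items)):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1          # provisional noise (may become border)
            continue
        labels[i] = cluster
        queue = [j for j in nb if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reached from a core is a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:
                queue.extend(nb_j)   # only core points expand the cluster
        cluster += 1
    return labels

rising = [[t + d for t in range(8)] for d in (0.0, 0.1, 0.2)]
falling = [[8 - t + d for t in range(8)] for d in (0.0, 0.1, 0.2)]
labels = dbscan(rising + falling, eps=0.5, min_pts=2)
print(labels)
```

Mean-centering makes vertically shifted trends identical, so the three rising series form one cluster and the three falling series another.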

10.
This paper improves the fuzzy C-means algorithm by replacing Euclidean distance with the Mahalanobis distance, which is better suited to remote-sensing images, and by incorporating prior information into the clustering. During clustering, unlabelled samples are compared for similarity against labelled samples to improve the algorithm's accuracy. Experiments show that the improved algorithm effectively raises clustering accuracy.
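As a minimal sketch of the distance swap described above, the function below computes a 2-D Mahalanobis distance from a hand-inverted 2x2 covariance matrix; the covariance and sample points are illustrative assumptions.

```python
import math

def mahalanobis2d(x, mu, cov):
    # invert the 2x2 covariance matrix directly
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mu[0], x[1] - mu[1]]
    # d^2 = dx^T * inv(cov) * dx
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.sqrt(q)

mu = [0.0, 0.0]
cov = [[4.0, 0.0], [0.0, 1.0]]   # spread twice as large along the first axis
# two points at equal Euclidean distance get different Mahalanobis distances:
print(mahalanobis2d([2.0, 0.0], mu, cov))  # 1.0
print(mahalanobis2d([0.0, 2.0], mu, cov))  # 2.0
```

Unlike Euclidean distance, the metric discounts deviation along directions where the data naturally varies more, which is the motivation for using it on correlated image bands.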

11.
This paper proposes an ontology-based representation of web sessions, called semantic sessions, together with a method for session clustering and visualization. For clustering, a similarity measure between semantic sessions based on the common paths users follow through a site, the semantic common path similarity measure (SMSCP), is proposed, and its effectiveness is evaluated with an improved k-medoids clustering algorithm. For visualizing the clustering results, a layered cloud chart is used. Experiments show that the proposed clustering and visualization methods offer better effectiveness and interpretability.

12.
Web sites contain an ever increasing amount of information within their pages. As the amount of information increases, so does the complexity of the structure of the web site. Consequently it has become difficult for visitors to find the information relevant to their needs. To overcome this problem, various clustering methods have been proposed to cluster data in an effort to help visitors find the relevant information. These clustering methods have typically focused either on the content or on the context of the web pages. In this paper we propose a method based on Kohonen's self-organizing map (SOM) that utilizes both content and context mining clustering techniques to help visitors identify relevant information more quickly. The input of the content mining is the set of web pages of the web site, whereas the source of the context mining is the access-logs of the web site. SOM can be used to identify clusters of web sessions with similar context and also clusters of web pages with similar content. It can also provide means of visualizing the outcome of this processing. In this paper we show how this two-level clustering can help visitors identify the relevant information faster. This procedure has been tested on the access-logs and web pages of the Department of Informatics and Telecommunications of the University of Athens.

13.
To address the problems that the initial centres of traditional clustering algorithms easily fall into local optima and that clustering is time-consuming, an improved K-means clustering optimization algorithm is proposed. The algorithm introduces the max-min distance criterion and a weighted Euclidean distance, and starts from the mean sum of distances of the remaining cluster points, so as to avoid the influence of outliers and edge data. Principal components are refined with a proportion-based weighting method, and the feature influence factors obtained in this way serve as initial feature weights to construct a weighted Euclidean distance metric. Representative feature factors are selected according to the influence of feature contribution rates on clustering, to sharpen the clustering results; finally, a vehicle driving cycle is synthesized and instantaneous fuel consumption is analysed. The results show that the difference in the joint speed-acceleration distribution of the driving cycle constructed by the proposed algorithm is only 105%, that the algorithm saves 44.2% of the time of traditional K-means clustering, and that the synthesized driving cycle fits well and reflects the operating characteristics and fuel consumption of actual vehicles.
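One ingredient of the algorithm above, max-min-distance seeding under a weighted Euclidean metric, can be sketched as follows. The points and the fixed per-feature weights are illustrative assumptions; the paper derives its weights from principal-component influence factors.

```python
import math

WEIGHTS = [1.0, 0.5]  # assumed feature weights (stand-ins for PCA-derived factors)

def wdist(a, b):
    # weighted Euclidean distance
    return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(WEIGHTS, a, b)))

def maxmin_seeds(points, k):
    seeds = [points[0]]                      # first centre: an arbitrary point
    while len(seeds) < k:
        # each candidate's score = distance to its nearest chosen seed;
        # pick the candidate that maximizes this minimum distance,
        # spreading the initial centres apart
        best = max(points, key=lambda p: min(wdist(p, s) for s in seeds))
        seeds.append(best)
    return seeds

points = [(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9), (10, 0), (9.8, 0.3)]
seeds = maxmin_seeds(points, 3)
print(seeds)
```

Because each new seed is the point farthest from all existing seeds, the three dense toy groups each contribute one initial centre, avoiding the local optima that purely random seeding can fall into.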

14.
An appropriate distance function has a major influence on clustering results. For large-scale high-dimensional datasets, this work analyses the choice of distance metric in an incremental clustering algorithm. The SpFCM algorithm splits a large-scale dataset into small samples and clusters them incrementally in batches, obtaining good clustering results within limited computer memory. Building on the traditional SpFCM algorithm, different distance functions are used to measure the similarity between samples, so as to determine the effect of different distance metrics on SpFCM. On several large-scale high-dimensional datasets, Euclidean distance, cosine distance, correlation distance, and the extended Jaccard distance are used to compute distances. The experimental results show that, relative to Euclidean distance, the latter three metrics can greatly improve clustering quality: correlation distance gives the best results, while cosine distance and the extended Jaccard distance perform only moderately.
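The four measures compared above can be written down compactly. The vectors below are illustrative; note how the scale-invariant measures (cosine, correlation) judge a vector and its doubled copy identical, while Euclidean distance does not.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_dist(a, b):
    # 1 - cosine similarity
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (na * nb)

def correlation_dist(a, b):
    # cosine distance after mean-centering both vectors
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    return cosine_dist([x - ma for x in a], [y - mb for y in b])

def ext_jaccard_dist(a, b):
    # extended (Tanimoto) Jaccard distance for real-valued vectors
    dot = sum(x * y for x, y in zip(a, b))
    na2 = sum(x * x for x in a)
    nb2 = sum(y * y for y in b)
    return 1 - dot / (na2 + nb2 - dot)

a, b = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]  # same direction, 2x scale
print(euclidean(a, b), cosine_dist(a, b), correlation_dist(a, b), ext_jaccard_dist(a, b))
```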

15.
Correlation-Based Web Document Clustering for Adaptive Web Interface Design   Total citations: 2 (self-citations: 2, other citations: 2)
A great challenge for web site designers is how to ensure users' efficient access to important web pages. In this paper we present a clustering-based approach to address this problem. Our approach is to perform efficient and effective correlation analysis based on web logs and to construct clusters of web pages that reflect the co-visit behaviour of web site users. We present a novel approach for adapting previous clustering algorithms designed for databases to the problem domain of web page clustering, and show that our new methods can generate high-quality clusters for very large web logs when previous methods fail. Based on the high-quality clustering results, we then apply the mined clustering knowledge to the problem of adapting web interfaces to improve users' performance. We develop an automatic method for web interface adaptation: introducing index pages that minimize overall user browsing costs. The index pages provide shortcuts that ensure users reach their target web pages quickly, and we solve a previously open problem of how to determine an optimal number of index pages. We show empirically that our approach performs better than many previous algorithms in experiments on several realistic web log files. Received 25 November 2000 / Revised 15 March 2001 / Accepted in revised form 14 May 2001

16.
Improving a Clustering Algorithm Based on Web Usage Mining   Total citations: 1 (self-citations: 0, other citations: 1)
Clustering algorithms in Web usage mining group users and pages with similar characteristics, so that useful and interesting information can be extracted from them. Through an in-depth analysis of a clustering algorithm based on Hamming distance, this paper points out its irrationalities and inefficiencies; it then introduces a weighted bipartite graph to represent the whole dataset and modifies the Hamming distance formula to describe the similarity between two objects more accurately, improving the algorithm accordingly. Experimental results show that the improved algorithm is both accurate and efficient.

17.
Users of web sites often do not know exactly what information they are looking for or what the site has to offer. The purpose of their interaction is not only to fulfill but also to articulate their information needs. In these cases users need to pass through a series of pages before they can use the information that will eventually answer their questions. Current systems that support navigation predict which pages are interesting for users on the basis of commonalities in the contents or the usage of the pages. They do not take into account the order in which the pages must be visited. In this paper we propose a method to automatically divide the pages of a web site, on the basis of user logs, into sets of pages that correspond to navigation stages. The method searches for an optimal number of stages and assigns each page to a stage. The stages can be used in combination with the pages' topics to give better recommendations or to structure or adapt the site. The resulting navigation structures guide users step by step through the site, providing pages that match not only the topic of the user's search but also the current stage of the navigation process.

18.
This paper presents an approach based on information retrieval and clustering techniques for automatically enhancing the navigation structure of a Web site for improving navigability. The approach increments the set of navigation links provided in each page of the site with a semantic navigation map, i.e., a set of links enabling navigating from a given page to other pages of the site showing similar or related content. The approach uses Latent Semantic Indexing to compute a dissimilarity measure between the pages of the site and a graph-theoretic clustering algorithm to group pages showing similar or related content according to the calculated dissimilarity measure. AJAX code is finally used to extend each Web page with an associated semantic navigation map. The paper also presents a prototype of a tool developed to support the approach and the results from a case study conducted to assess the validity and feasibility of the proposal.

19.
A Trajectory Pattern Learning Method Based on Normalized Edit Distance and Spectral Clustering   Total citations: 6 (self-citations: 0, other citations: 6)
To address the inaccuracy of measures such as Euclidean distance and Hausdorff distance in describing the differences between object motion trajectories, a trajectory-distribution pattern learning method based on normalized edit distance and spectral clustering is proposed. Object motion trajectories are first vector-quantized into code sequences; the normalized edit distance is then used to measure the differences between trajectory code sequences, yielding a normalized edit-distance matrix; spectral clustering is performed on this matrix to extract trajectory distribution patterns; finally, the extracted distribution patterns are used to decide whether a whole trajectory, or a local segment of it, is anomalous. Experiments on simulated and real scenes verify the effectiveness of the method.

20.
Dynamic time warping (DTW), which finds the minimum path by providing non-linear alignments between two time series, has been widely used as a distance measure for time series classification and clustering. However, DTW does not account for the relative importance regarding the phase difference between a reference point and a testing point. This may lead to misclassification especially in applications where the shape similarity between two sequences is a major consideration for an accurate recognition. Therefore, we propose a novel distance measure, called a weighted DTW (WDTW), which is a penalty-based DTW. Our approach penalizes points with higher phase difference between a reference point and a testing point in order to prevent minimum distance distortion caused by outliers. The rationale underlying the proposed distance measure is demonstrated with some illustrative examples. A new weight function, called the modified logistic weight function (MLWF), is also proposed to systematically assign weights as a function of the phase difference between a reference point and a testing point. By applying different weights to adjacent points, the proposed algorithm can enhance the detection of similarity between two time series. We show that some popular distance measures such as DTW and Euclidean distance are special cases of our proposed WDTW measure. We extend the proposed idea to other variants of DTW such as derivative dynamic time warping (DDTW) and propose the weighted version of DDTW. We have compared the performances of our proposed procedures with other popular approaches using public data sets available through the UCR Time Series Data Mining Archive for both time series classification and clustering problems. The experimental results indicate that the proposed approaches can achieve improved accuracy for time series classification and clustering problems.
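A compact sketch of weighted DTW with a logistic weight on the phase difference, in the spirit of the MLWF described above; the slope g and w_max values, and the test series, are illustrative parameter choices.

```python
import math

def mlwf(m, g=0.25, w_max=1.0):
    # logistic weight for each possible phase difference 0..m-1, centred at m/2:
    # small phase differences get small weights, large ones approach w_max
    mc = m / 2.0
    return [w_max / (1.0 + math.exp(-g * (i - mc))) for i in range(m)]

def wdtw(a, b, g=0.25, w_max=1.0):
    m, n = len(a), len(b)
    w = mlwf(max(m, n), g, w_max)
    INF = float("inf")
    d = [[INF] * (n + 1) for _ in range(m + 1)]
    d[0][0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            # squared difference, penalized by the phase difference |i-j|
            cost = w[abs(i - j)] * (a[i - 1] - b[j - 1]) ** 2
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[m][n]

x = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
y = [0.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0]      # x delayed by one step
print(wdtw(x, y))                            # small: shapes match after a short warp
print(wdtw(x, [3.0 - v for v in x]))         # large: inverted shape
```

With a constant weight function this reduces to ordinary DTW, consistent with the abstract's observation that DTW and Euclidean distance are special cases of WDTW.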


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号