An Efficient Approach for Detecting Approximately Duplicate Database Records
Citation: QIU Yue-Feng, TIAN Zeng-Ping, JI Wen-Yun, ZHOU Ao-Ying. An Efficient Approach for Detecting Approximately Duplicate Database Records [J]. Chinese Journal of Computers, 2001, 24(1): 69-77.
Authors: QIU Yue-Feng  TIAN Zeng-Ping  JI Wen-Yun  ZHOU Ao-Ying
Affiliation: Department of Computer Science, Fudan University
Abstract: How to eliminate duplicate information from databases is a hot topic in data quality research. This paper proposes an efficient N-Gram-based method for detecting approximately duplicate records. The main contributions are: (1) An efficient N-Gram-based clustering algorithm that tolerates common spelling errors and therefore clusters approximately duplicate records well, with a complexity of only O(N); an improved form of the algorithm is also proposed that automatically corrects word insertion and deletion errors during detection, raising detection accuracy. (2) An efficient, application-independent pair-wise comparison algorithm that decides whether two records are similar by computing the edit distances between the words of the two records. (3) An improved priority-queue algorithm for accurately clustering approximately duplicate records, which scans the sorted records sequentially with a fixed-size priority queue and clusters approximately duplicate records by comparing the distance between the current record and the records in the queue. In addition, a suitable experimental environment was constructed and extensive experiments were carried out; the analysis of a large body of detailed experimental results validates the soundness of the algorithms.
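The pair-wise comparison in contribution (2) can be sketched in a few lines. The following is a minimal illustration using the classic dynamic-programming edit distance between words; the aggregation rule in `records_similar` and the 0.2 threshold are illustrative assumptions, not the paper's exact formulation.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming (Levenshtein) edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def records_similar(r1: str, r2: str, threshold: float = 0.2) -> bool:
    """Judge two records by summing, for each word of r1, the edit
    distance to its closest word in r2, normalised by record length.
    Aggregation and threshold are assumptions for illustration;
    assumes non-empty records."""
    w1, w2 = r1.lower().split(), r2.lower().split()
    total = sum(min(edit_distance(w, v) for v in w2) for w in w1)
    return total / max(len(r1), len(r2)) <= threshold
```

For example, "John Smith" and "Jon Smith" differ by a single deleted character in one word, so they fall under the threshold, while unrelated records do not.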

Keywords: information integration  approximately duplicate records  clustering  data quality  database
Revised: October 18, 1999

An Efficient Approach for Detecting Approximately Duplicate Database Records
QIU Yue-Feng, TIAN Zeng-Ping, JI Wen-Yun, ZHOU Ao-Ying. An Efficient Approach for Detecting Approximately Duplicate Database Records [J]. Chinese Journal of Computers, 2001, 24(1): 69-77.
Authors: QIU Yue-Feng  TIAN Zeng-Ping  JI Wen-Yun  ZHOU Ao-Ying
Abstract: Eliminating duplicate records in large databases has drawn great attention and has gradually become a hot issue in data quality research. In this paper the problem is studied and an efficient N-Gram-based approach for detecting approximately duplicate database records is proposed. The contributions of this paper are: (1) An efficient N-Gram-based clustering algorithm that can tolerate the most common types of spelling mistakes and cluster approximately duplicate records together; an improved N-Gram-based algorithm is proposed as well, which can not only detect duplicates but also automatically revise most insertion and deletion spelling mistakes in the words of records. The N-Gram-based algorithm has a computational complexity of only O(N). (2) A very efficient, application-independent pair-wise comparison algorithm based on the edit distance, which verifies whether two records are approximate duplicates by computing the edit distance between each pair of words in the two records. (3) An improved algorithm that employs a priority queue to detect approximately duplicate records: it scans all sorted records sequentially and clusters approximately duplicate records together by comparing the distance between the current record and the corresponding record in the priority queue. Furthermore, an effective experimental environment is set up and a large number of algorithm tests are carried out; results from many different experiments, together with a careful analysis, are presented. Based on all the experiments and analysis, the efficiency, rationality, and soundness of the N-Gram-based approach to detecting approximately duplicate records are validated.
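The overall pipeline the abstract describes (an N-Gram-derived sort key, then a sequential scan of the sorted records against a fixed-size queue of cluster representatives) can be sketched as follows. The key construction in `ngram_key`, the distance normalisation, and the `queue_size`/`threshold` values are assumptions for illustration; the paper's priority queue of cluster representatives is approximated here with a bounded deque.

```python
from collections import deque

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming (Levenshtein) edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def record_distance(r1: str, r2: str) -> float:
    """Word-level distance: for each word of r1, take the edit distance
    to its closest word in r2, then normalise by record length.
    An illustrative aggregate; assumes non-empty records."""
    w1, w2 = r1.lower().split(), r2.lower().split()
    total = sum(min(edit_distance(w, v) for v in w2) for w in w1)
    return total / max(len(r1), len(r2))

def ngram_key(record: str, n: int = 3) -> str:
    """Sort key built from the record's character n-grams, so records
    sharing many n-grams sort near each other (an illustrative key,
    not necessarily the paper's exact construction)."""
    s = "".join(record.lower().split())
    return "".join(sorted({s[i:i + n] for i in range(max(len(s) - n + 1, 1))}))

def cluster_duplicates(records, queue_size=4, threshold=0.2):
    """Scan records in n-gram key order with a fixed-size queue of
    cluster representatives; a record joins the first cluster whose
    representative is within the threshold, else it starts a new one."""
    clusters = []
    queue = deque(maxlen=queue_size)   # fixed-size, as in the paper
    for rec in sorted(records, key=ngram_key):
        for rep, members in queue:
            if record_distance(rec, rep) <= threshold:
                members.append(rec)
                break
        else:
            members = [rec]
            clusters.append(members)
            queue.append((rec, members))
    return clusters
```

Because the queue is bounded, each record is compared against only a constant number of representatives, which is what keeps the scan linear in the number of records after sorting.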
Keywords: information integration  approximately duplicate records  N-Gram  Pair-wise comparison  clustering  priority queue
This article is indexed in CNKI, VIP (Weipu), Wanfang Data, and other databases.