2.
Real-world Data is Dirty: Data Cleansing and The Merge/Purge Problem  Cited 22 times (0 self-citations, 22 by others)
The problem of merging multiple databases of information about common entities is frequently encountered in KDD and decision support applications in large commercial and government organizations. The problem we study is often called the Merge/Purge problem and is difficult to solve both in scale and accuracy. Large repositories of data typically have numerous duplicate information entries about the same entities that are difficult to cull together without an intelligent equational theory that identifies equivalent items by a complex, domain-dependent matching process. We have developed a system for accomplishing this data cleansing task and demonstrate its use for cleansing lists of names of potential customers in a direct-marketing application. Our results for statistically generated data are shown to be accurate and effective when processing the data multiple times using different keys for sorting on each successive pass. Combining the results of individual passes using transitive closure over the independent results produces far more accurate results at lower cost. The system provides a rule programming module that is easy to program and quite good at finding duplicates, especially in an environment with massive amounts of data. This paper details improvements in our system and reports on a successful implementation for a real-world database that conclusively validates the results previously achieved for statistically generated data.
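The multi-pass scheme this abstract describes (one sorted-neighborhood pass per sort key, then transitive closure over the pairs found by all passes) can be sketched as follows. This is a minimal illustration, not the authors' system: the record fields, the window size, and the trivial `similar()` predicate are assumptions standing in for their domain-dependent equational theory.

```python
from itertools import combinations

def similar(a, b):
    # Stand-in for a domain-dependent equational theory; here simply
    # an exact match on a case-normalized name field (an assumption).
    return a["name"].lower() == b["name"].lower()

def sorted_neighborhood_pass(records, key, window=3):
    """One pass: sort on one key, then compare only records that fall
    inside a small sliding window of the sorted order."""
    order = sorted(range(len(records)), key=lambda i: key(records[i]))
    pairs = set()
    for w in range(len(order)):
        for i, j in combinations(order[w:w + window], 2):
            if similar(records[i], records[j]):
                pairs.add((min(i, j), max(i, j)))
    return pairs

def transitive_closure(pairs, n):
    """Union-find merge of the pairs from all passes into duplicate
    clusters, so matches found under different sort keys combine."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in pairs:
        parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return [c for c in clusters.values() if len(c) > 1]
```

Each individual pass is cheap but misses duplicates whose chosen sort key differs; taking the transitive closure over the union of all passes' pairs is what recovers those matches at low cost.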
5.
With the development of RFID technology, RFID applications are becoming ubiquitous. Deeper processing and analysis of RFID data can uncover more complex composite events and implicit knowledge, effectively supporting advanced applications such as event monitoring and early warning. Because of the special characteristics of RFID data, existing active-database and data-stream management techniques cannot efficiently detect and process RFID events. This paper analyzes the characteristics of RFID data, surveys the latest techniques for RFID complex event processing, and discusses several open problems in urgent need of solutions, chiefly RFID data cleaning methods, data-centric detection techniques, event-centric detection techniques, and complex event processing systems, before outlining key directions for future research.
7.
A Reliable ETL Strategy and Architecture Design for Data Warehouses  Cited 16 times (0 self-citations, 16 by others)
As a key component of a data warehouse system, ETL performs data extraction, cleaning, transformation, and loading. It is a critical stage in building a data warehouse, and also the stage where the most problems arise. Addressing this, the paper presents a reliable and easily extensible ETL strategy and architecture. It first briefly introduces data warehouse and ETL technology, including the relevant ETL concepts and ETL's function and central role in a data warehouse, and then focuses on the concrete design of this ETL strategy and architecture.
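The extract, clean, transform, and load stages this abstract lists can be sketched as a tiny in-memory pipeline. This only illustrates the stage boundaries, under assumed field names and rules; the paper's contribution is the reliability and extensibility of the architecture, not any particular cleaning logic.

```python
def clean(rows):
    """Clean: drop rows missing the key field, trim stray whitespace."""
    return [
        {k: v.strip() if isinstance(v, str) else v for k, v in r.items()}
        for r in rows
        if r.get("id") is not None
    ]

def transform(rows):
    """Transform: reshape cleaned rows into the warehouse schema
    (here, monetary amounts normalized to integer cents)."""
    return [
        {"id": r["id"], "amount_cents": int(round(float(r["amount"]) * 100))}
        for r in rows
    ]

def load(rows, warehouse):
    """Load: append transformed rows into the warehouse table."""
    warehouse.extend(rows)
    return warehouse
```

The extract stage is elided here; it would pull `rows` from the operational source. Keeping each stage a pure function over row lists is one way to make such a pipeline easy to extend and to rerun after a failure.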
9.
A Method for Detecting Similar Records in Large Data Sets  Cited 9 times (0 self-citations, 9 by others)
Detecting approximately duplicate records in large data sets is an important problem in data cleansing. This paper proposes a hierarchical clustering detection method based on q-gram space: records are first mapped to points in q-gram space, and similar duplicate records are then detected by hierarchical clustering under the similarity measure of that space. The method overcomes two weaknesses of the traditional sort-and-merge approach: its sensitivity to character position, which keeps similar record strings from landing in adjacent sort positions, and the excessive I/O cost of external sorting over large data volumes. Theoretical analysis and experiments show that the method achieves both good detection accuracy and good scalability, effectively solving similar-duplicate-record detection for large data sets.
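The q-gram mapping and clustering steps this abstract describes can be sketched directly. This is a toy single-linkage variant under an assumed L1 distance and an assumed threshold; the paper itself uses hierarchical clustering engineered to scale to large data volumes.

```python
from collections import Counter

def qgrams(s, q=2):
    """Map a string to its multiset of q-grams (its point in q-gram space)."""
    s = f"#{s}#"  # pad so boundary characters also form grams
    return Counter(s[i:i + q] for i in range(len(s) - q + 1))

def qgram_distance(a, b, q=2):
    """L1 distance between q-gram count vectors. Unlike sort-and-merge,
    this is insensitive to where in the string the differing characters sit."""
    ga, gb = qgrams(a, q), qgrams(b, q)
    return sum(abs(ga[k] - gb[k]) for k in set(ga) | set(gb))

def cluster(strings, q=2, threshold=6):
    """Naive single-linkage clustering: a string joins the first cluster
    containing some member within the distance threshold."""
    clusters = []
    for s in strings:
        for c in clusters:
            if any(qgram_distance(s, t, q) <= threshold for t in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters
```

A transposition such as "Jonh" vs "John" changes only a few q-grams, so the two strings remain close in q-gram space even when a character-position-sensitive sort would separate them.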