Similar documents
20 similar documents found (search time: 31 ms).
1.
Probabilistic XML is an effective way to describe uncertain data, and Dewey encoding is an important keyword-index encoding technique for XML documents. During keyword retrieval over large probabilistic XML documents, the frequent comparison of keyword-index Dewey codes is very time-consuming. To address this problem, the probabilistic XML document is partitioned, a Dewey encoding strategy for keyword indexes suited to the characteristics of probabilistic XML documents is designed, and a parallel top-k keyword search algorithm for probabilistic XML documents, PTKS (Parallel Top-k Keyword Search Algorithm), is proposed. Experiments show that PTKS improves the time efficiency of keyword retrieval over probabilistic XML documents, with especially significant gains when the document structure is highly complex.
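As a rough illustration of why Dewey-code comparison dominates the cost of keyword retrieval (the operation that PTKS partitions and parallelizes), the following Python sketch assigns Dewey codes to an ordinary XML tree and performs the ancestor and lowest-common-ancestor tests that keyword search repeats many times. The sample document and the uniform treatment of all nodes are illustrative assumptions, not the paper's encoding.

    import xml.etree.ElementTree as ET

    def assign_dewey(root):
        """Depth-first labeling: child i of a node gets its parent's code + (i,)."""
        codes = {root: (1,)}
        def walk(node):
            for i, child in enumerate(node, start=1):
                codes[child] = codes[node] + (i,)
                walk(child)
        walk(root)
        return codes

    def is_ancestor(a, b):
        # a is an ancestor of b iff a's code is a proper prefix of b's code
        return len(a) < len(b) and b[:len(a)] == a

    def lowest_common_ancestor(a, b):
        # the LCA's code is the longest common prefix of the two Dewey codes
        lca = []
        for x, y in zip(a, b):
            if x != y:
                break
            lca.append(x)
        return tuple(lca)

    doc = ET.fromstring("<lab><team><member>Ann</member></team>"
                        "<team><member>Bob</member></team></lab>")
    codes = assign_dewey(doc)
    members = [codes[e] for e in doc.iter("member")]
    print(lowest_common_ancestor(members[0], members[1]))   # (1,) -> the <lab> root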

2.
Research on XML web page classification based on structural and textual keyword relevance   Cited by: 9 (self: 0, others: 9)
Targeting the characteristics of XML web pages, methods are proposed for computing XML document structural similarity, the positions where document keywords appear, and keyword frequency; XML page features are extracted from these computations, and a multi-class classification algorithm for XML pages based on support vector machines is designed. From the XML training sample set, the algorithm builds a clustering kernel of similar common features for each document class, computes the similarity between each test document and every clustering kernel, and assigns the document to its class accordingly. Experiments show that the classification algorithm achieves relatively high recall and precision and handles well the case where an XML document belongs to several classes at once.
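For readers unfamiliar with the general setup, the sketch below trains a generic multi-class SVM over simple structural features (bags of root-to-leaf tag paths). It is only a baseline under assumed toy inputs; it does not reproduce the paper's clustering kernels or its keyword-position and keyword-frequency features.

    import xml.etree.ElementTree as ET
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import SVC

    def tag_paths(xml_text):
        """Flatten an XML document into space-separated root-to-leaf tag paths."""
        root = ET.fromstring(xml_text)
        paths = []
        def walk(node, prefix):
            path = prefix + "/" + node.tag
            if len(node) == 0:
                paths.append(path)
            for child in node:
                walk(child, path)
        walk(root, "")
        return " ".join(paths)

    # Toy training set: two hypothetical classes of XML pages.
    train_docs = [
        "<paper><title>a</title><authors><author>x</author></authors></paper>",
        "<paper><title>b</title><venue>c</venue></paper>",
        "<product><name>tv</name><price>1</price></product>",
        "<product><name>cd</name><stock>3</stock></product>",
    ]
    train_labels = ["publication", "publication", "catalog", "catalog"]

    vectorizer = CountVectorizer(token_pattern=r"\S+")   # one token per tag path
    X = vectorizer.fit_transform([tag_paths(d) for d in train_docs])
    clf = SVC(kernel="linear").fit(X, train_labels)      # SVC handles multiple classes one-vs-one

    test = "<paper><title>z</title><authors><author>y</author></authors></paper>"
    print(clf.predict(vectorizer.transform([tag_paths(test)])))   # expected: ['publication']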

3.
The flexibility of the XML data model allows a more natural representation of uncertain data than the relational model. Matching a twig pattern against XML data is a fundamental problem in querying information from XML documents. For a probabilistic XML document, each twig answer has a probabilistic value because of the uncertainty of the data. Twig answers with small probabilistic values are useless to users, who usually only want the answers with the k largest probabilistic values. Existing algorithms for ordinary XML documents are not directly applicable here, because probabilistic XML requires handling probability distributional nodes and efficiently calculating the top-k probabilities of answers. In this paper, we address the problem of finding twig answers with the top-k probabilistic values directly against probabilistic XML documents. We propose a new encoding scheme called PEDewey for probabilistic XML. Based on this encoding scheme, we design two algorithms for finding answers of top-k probabilities for twig queries: ProTJFast, which processes probabilistic XML data based on element streams in document order, and PTopKTwig, which is based on element streams ordered by path probability values. Experiments have been conducted to study the performance of these algorithms.

4.
Transforming paper documents into XML format with WISDOM++   Cited by: 1 (self: 1, others: 0)
The transformation of scanned paper documents to a form suitable for an Internet browser is a complex process that requires solutions to several problems. The application of an OCR to some parts of the document image is only one of the problems. In fact, the generation of documents in HTML format is easier when the layout structure of a page has been extracted by means of a document analysis process. The adoption of an XML format is even better, since it can facilitate the retrieval of documents in the Web. Nevertheless, an effective transformation of paper documents into this format requires further processing steps, namely document image classification and understanding. WISDOM++ is a document processing system that operates in five steps: document analysis, document classification, document understanding, text recognition with an OCR, and transformation into HTML/XML format. The innovative aspects described in the paper are: the preprocessing algorithm, the adaptive page segmentation, the acquisition of block classification rules using techniques from machine learning, the layout analysis based on general layout principles, and a method that uses document layout information for conversion to HTML/XML formats. A benchmarking of the system components implementing these innovative aspects is reported. Received June 15, 2000 / Revised November 7, 2000

5.
A simulation study of XML document similarity   Cited by: 1 (self: 0, others: 1)
Computing the similarity between XML documents is a difficult problem in XML document classification. This paper describes a structure-based method: sequential pattern mining is used to discover the maximal similar paths between two documents, and the similarity between the documents is then obtained as the ratio of the number of nodes on the maximal similar paths to the number of nodes on all paths. The paper also proposes a new method for minimizing XML documents and takes both the semantic similarity and the structural similarity of document nodes into account, further improving the precision of the computed document similarity. Experiments indicate that the method has good application prospects.
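One simple reading of the node-count ratio described above is sketched below: root-to-leaf tag paths are extracted from both documents, the paths they share stand in for the "maximal similar paths", and similarity is the shared-path node count divided by the total path node count. This is an illustrative simplification; the paper's sequential pattern mining and semantic weighting are not reproduced.

    import xml.etree.ElementTree as ET

    def leaf_paths(xml_text):
        """Return the root-to-leaf tag paths of a document, each as a tuple of tags."""
        root = ET.fromstring(xml_text)
        paths = []
        def walk(node, prefix):
            prefix = prefix + (node.tag,)
            if len(node) == 0:
                paths.append(prefix)
            for child in node:
                walk(child, prefix)
        walk(root, ())
        return paths

    def path_ratio_similarity(doc_a, doc_b):
        pa, pb = leaf_paths(doc_a), leaf_paths(doc_b)
        shared = set(pa) & set(pb)
        shared_nodes = sum(len(p) for p in shared)            # nodes on shared paths
        total_nodes = sum(len(p) for p in set(pa) | set(pb))  # nodes on all distinct paths
        return shared_nodes / total_nodes if total_nodes else 0.0

    a = "<book><title>t</title><author>x</author></book>"
    b = "<book><title>t</title><year>1999</year></book>"
    print(path_ratio_similarity(a, b))   # 2 / 6 ≈ 0.33: only the <book>/<title> path is shared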

6.
Probabilistic XML documents are a standard for exchanging and representing probabilistic data on the web, and querying and computing element values and their probabilities are important research topics for probabilistic XML. The probabilistic XML document tree is an effective data model for probabilistic XML documents. This paper defines basic paths and extended paths of probabilistic XML trees and proposes an algorithm that decomposes a probabilistic XML tree into a set of ordinary XML subtrees according to the possible-world principle, an algorithm that decomposes it into a set of probabilistic XML subtrees according to path analysis, and the corresponding algorithms for querying and computing the probabilities of nodes and node sets; the two approaches are compared experimentally. The results show that both methods are effective; compared with the former, the latter is better suited to larger probabilistic XML trees and to probability queries over nodes and node sets, and its computation process is simpler.
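The two decomposition ideas can be contrasted on a toy model in which every node carries an independent existence probability (a simplifying assumption; real probabilistic XML also has mutually exclusive distribution nodes). The sketch enumerates possible worlds to obtain a node's probability and checks the result against the direct path-based product of probabilities along the node's root path.

    from itertools import product

    # Toy probabilistic XML tree: node -> (parent, existence probability given the parent).
    # Parents are listed before their children, which the code below relies on.
    tree = {
        "lab":     (None,   1.0),
        "team":    ("lab",  0.9),
        "member":  ("team", 0.8),
        "project": ("lab",  0.5),
    }

    def possible_world_probability(target):
        """Sum the probabilities of all worlds that contain target.  Choices for
        nodes whose ancestors are absent simply marginalize out."""
        uncertain = [n for n, (_, p) in tree.items() if p < 1.0]
        total = 0.0
        for choices in product([True, False], repeat=len(uncertain)):
            kept = dict(zip(uncertain, choices))
            world_prob = 1.0
            present = set()
            for node, (parent, p) in tree.items():
                parent_present = parent is None or parent in present
                if p < 1.0:
                    world_prob *= p if kept[node] else (1.0 - p)
                if parent_present and (p == 1.0 or kept[node]):
                    present.add(node)
            if target in present:
                total += world_prob
        return total

    def path_probability(target):
        """Multiply conditional probabilities up the root path (independence assumed)."""
        prob, node = 1.0, target
        while node is not None:
            parent, p = tree[node]
            prob *= p
            node = parent
        return prob

    print(possible_world_probability("member"), path_probability("member"))  # both ≈ 0.72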

7.
Measuring the structural similarity between an XML document and a DTD has many relevant applications, ranging from document classification and approximate structural queries on XML documents to selective dissemination and protection of XML documents. The problem is harder than measuring structural similarity among documents, because a DTD can be considered a generator of documents; the problem is thus to evaluate the similarity between a document and a set of documents. An effective structural similarity measure must address several requirements, ranging from the presence and absence of required elements, and the structure and level of the missing and extra elements, to vocabulary discrepancies due to the use of synonymous or syntactically similar tags. In the paper, starting from these requirements, we provide a definition of the measure and present an algorithm for matching a document against a DTD to obtain their structural similarity. Finally, experimental results to assess the effectiveness of the approach are presented.

8.
马行健 《计算机安全》2009,(10):69-73,79
To meet the confidentiality and integrity requirements of ordinary XML documents, this paper first introduces the principles and syntax of XML encryption and XML signatures, then examines the types of XML encryption and signing, implements XML document encryption and digital signing in Java on this basis, and finally proposes a model for the secure exchange of XML documents. Experiments show that the approach effectively ensures the confidentiality and integrity of XML documents and is a promising safeguard technique for secure XML document exchange.

9.
Using structural similarity for clustering XML documents   Cited by: 2 (self: 2, others: 0)
In this paper, we describe a method for clustering XML documents. Its goal is to group documents sharing similar structures. Our approach proceeds in two steps. We first automatically extract the structure from each XML document to be classified. This extracted structure is then used as a representation model to classify the corresponding XML document. The idea behind the clustering is that if XML documents share similar structures, they are more likely to correspond to the structural part of the same query. Finally, for experimentation purposes, we tested our algorithms on both real (ACM SIGMOD Record corpus) and synthetic data. The results clearly demonstrate the interest of our approach.

10.
XML documents have recently become ubiquitous because of their wide applicability across many domains. Classification is an important problem in data mining, but current classification methods for XML documents are IR-based and treat each document as a bag of words. Such techniques ignore a significant amount of information hidden inside the documents. In this paper we discuss the problem of rule-based classification of XML data using frequent discriminatory substructures within XML documents. Such a technique is more capable of finding the classification characteristics of documents. In addition, the technique can also be extended to cost-sensitive classification. We show the effectiveness of the method with respect to other classifiers. We note that the methodology discussed in this paper is applicable to any kind of semi-structured data. Editors: Hendrik Blockeel, David Jensen and Stefan Kramer. An erratum to this article is available at .

11.
With the development of XML technology, storing and querying XML documents with existing database technology has become a hot research topic in XML data management. This paper introduces a new document encoding method and, based on it, proposes a new storage method for XML documents. The method decomposes the XML document tree into nodes according to their types and stores them in corresponding relational tables, so that documents of arbitrary structure can be stored in a fixed relational schema. To facilitate querying, the simple path patterns appearing in the document are also stored in a table. This storage method effectively supports document queries and can correctly reconstruct the original XML document from the node encoding information. Finally, the proposed storage method and reconstruction algorithm are validated experimentally.
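A minimal sketch of the general node-shredding idea follows (an assumed table layout, not the paper's exact schema): every element becomes a row carrying an order-preserving Dewey-style code and its simple path, distinct paths go into a separate table for path queries, and the document is rebuilt by replaying the rows in code order.

    import sqlite3
    import xml.etree.ElementTree as ET

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE node(code TEXT PRIMARY KEY, tag TEXT, text TEXT, path_id INTEGER)")
    conn.execute("CREATE TABLE path(path_id INTEGER PRIMARY KEY, path TEXT UNIQUE)")

    def shred(xml_text):
        """Store one document: each element becomes a node row; each distinct
        simple path (e.g. /lab/team/member) becomes one row in the path table."""
        path_ids = {}
        def path_id(path):
            if path not in path_ids:
                path_ids[path] = len(path_ids) + 1
                conn.execute("INSERT INTO path VALUES (?, ?)", (path_ids[path], path))
            return path_ids[path]
        def walk(node, code, path):
            path = path + "/" + node.tag
            conn.execute("INSERT INTO node VALUES (?, ?, ?, ?)",
                         (code, node.tag, (node.text or "").strip(), path_id(path)))
            for i, child in enumerate(node, start=1):
                walk(child, f"{code}.{i}", path)
        walk(ET.fromstring(xml_text), "1", "")

    def rebuild():
        """Reconstruct the document by replaying node rows in Dewey order."""
        rows = conn.execute("SELECT code, tag, text FROM node").fetchall()
        rows.sort(key=lambda r: [int(x) for x in r[0].split(".")])
        elements = {}
        for code, tag, text in rows:
            parent = elements.get(code.rsplit(".", 1)[0])
            el = ET.SubElement(parent, tag) if parent is not None else ET.Element(tag)
            el.text = text or None
            elements[code] = el
        return elements["1"]

    shred("<lab><team><member>Ann</member><member>Bob</member></team></lab>")
    print(ET.tostring(rebuild(), encoding="unicode"))   # round-trips the original document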

12.
An RDBMS-based storage method for XML data   Cited by: 1 (self: 0, others: 1)
The introduction of XML as a data-exchange standard on the Internet makes interchange between XML data and databases necessary: first, the large volume of diverse data on the Web needs to be stored and managed effectively; second, existing databases hold large amounts of data that must be converted to XML and published on the Web. This paper proposes a data conversion framework based on relational databases and discusses XML data storage strategies from the standpoint of data integrity. A generic XML data model is built: the document tree is decomposed into nodes and stored in relational tables according to certain mapping rules, so that the document's schema information (DTD, XML Schema) need not be considered. Finally, a concrete document example is used to illustrate the effectiveness of the strategy.

13.
As probabilistic data management is becoming one of the main research focuses and keyword search is becoming a more popular query paradigm, it is natural to consider how to support keyword queries on probabilistic XML data. With regard to keyword queries on deterministic XML documents, ELCA (Exclusive Lowest Common Ancestor) semantics allows more relevant fragments rooted at the ELCAs to appear as results and is more popular than other keyword query result semantics (such as SLCA). In this paper, we investigate how to evaluate ELCA results for keyword queries on probabilistic XML documents. After defining probabilistic ELCA semantics in terms of possible world semantics, we propose an approach to compute ELCA probabilities without generating possible worlds. We then develop an efficient stack-based algorithm that can find all probabilistic ELCA results and their ELCA probabilities for a given keyword query on a probabilistic XML document. Finally, we experimentally evaluate the proposed ELCA algorithm and compare it with its SLCA counterpart in terms of result probability, time and space efficiency, and scalability.
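As background for readers new to these semantics, the sketch below computes the simpler SLCA results (the counterpart the authors compare against) for a keyword query on an ordinary, non-probabilistic XML tree: an SLCA is a node whose subtree contains every keyword while no descendant's subtree does. The ELCA semantics and the probability computation of the paper are more involved and are not reproduced here; the sample document and keyword-matching rule are illustrative assumptions.

    import xml.etree.ElementTree as ET

    def slca(root, keywords):
        """Return nodes whose subtrees contain all keywords but none of whose
        descendants' subtrees do (Smallest Lowest Common Ancestor semantics)."""
        keywords = {k.lower() for k in keywords}
        results = []

        def walk(node):
            # Keywords matched directly at this node (tag name or text content).
            covered = {k for k in keywords
                       if k == node.tag.lower() or k in (node.text or "").lower()}
            descendant_hit = False
            for child in node:
                child_covered, child_hit = walk(child)
                covered |= child_covered
                descendant_hit = descendant_hit or child_hit
            is_full = covered >= keywords
            if is_full and not descendant_hit:
                results.append(node)
            # Report upward whether this subtree already contains a full match.
            return covered, descendant_hit or is_full

        walk(root)
        return results

    doc = ET.fromstring(
        "<dblp>"
        "<article><author>Ann</author><title>XML keyword search</title></article>"
        "<article><author>Bob</author><title>stream joins</title></article>"
        "</dblp>")
    print([n.tag for n in slca(doc, ["ann", "xml"])])   # ['article'] - the first article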

14.
A fuzzy technique is proposed for comparing XML documents as semi-structured data streams, and classification is carried out on this basis, covering both structure-based and content-based document classification. The method rests on a flat encoding of XML document fragments: XML documents are represented as fuzzy bags, and a comparison function is used to compute their structural similarity. After XML documents have been classified by structure, their content can be considered further to obtain a finer-grained classification.

15.
吕锋  余丽 《微机发展》2007,17(6):53-55
This paper introduces three common methods for Web data extraction: directly parsing HTML documents, XML-based methods (also described as analyzing the hierarchical structure of HTML), and concept-modeling-based methods. It focuses on the XML-based approach, whose basic procedure is to pass the original HTML document through a filter that checks and repairs its syntactic structure, producing XML-based XHTML, and then to process these documents with XML tools. This realizes the preprocessing step of converting unstructured HTML documents into structured XML documents and creates favorable conditions for applying traditional data-extraction methods in Web mining.

16.
Research on the mapping between XML document schemas and the relational data model   Cited by: 6 (self: 2, others: 6)
XML is gradually becoming the standard for describing and exchanging data on the Internet. As large volumes of Web data are represented as XML documents, it becomes necessary to manipulate and manage them. To harness the powerful data-manipulation capabilities of relational database systems, the paper, after briefly introducing the logical structure of XML documents, explores in depth the bidirectional mapping between XML documents (structured XML documents in particular) and relational data, with further study of the mapping of data structures and integrity constraints, and proposes a series of mapping rules based on XML itself.

17.
This paper studies the Biba strict integrity policy for fine-grained XML documents. By analyzing the structural constraints of XML documents, integrity constraint rules over XML documents are established, extending the Biba strict integrity policy to XML documents. An XML document model that includes integrity attributes is built, and the structural characteristics of XML documents are analyzed. Integrity-label propagation rules are proposed to support partially labeled XML documents. Finally, mechanisms for enforcing the integrity policy are discussed.

18.
To address the limited ability of XML documents to represent dynamic behavior, XML documents are modeled with added temporal information, and two time-series-based algorithms, FCSBF and FCSDF, are proposed for mining frequently changing structures in XML documents, enabling efficient mining of such structures in dynamic XML documents. On this basis, a new clustering method for dynamic XML documents is proposed; experimental results show that it clusters dynamic XML documents effectively.

19.
Query indexing for XML documents is a current research focus. This paper explores XML indexing techniques, including the design of index structures, and presents an efficient XML indexing method that uses a distinctive encoding scheme to index both the XML document and the DTD it conforms to, effectively supporting retrieval by both content and structure. The method combines the ideas of interval (region) encoding, inverted lists, and path indexes, and exploits DTD structural information to improve query efficiency. Experimental results show that the proposed method effectively reduces the cost of building XML data indexes and shortens query response time.
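For concreteness, the interval (region) encoding mentioned above can be sketched as follows: each element receives a (start, end, level) triple from a document-order traversal, and ancestor/descendant relationships, the core structural test in such indexes, reduce to interval containment. The inverted-list and DTD-aware parts of the paper's index are not shown.

    import xml.etree.ElementTree as ET

    def region_encode(root):
        """Assign (start, end, level) to every element in document order."""
        codes, counter = {}, [0]
        def walk(node, level):
            counter[0] += 1
            start = counter[0]
            for child in node:
                walk(child, level + 1)
            counter[0] += 1
            codes[node] = (start, counter[0], level)
            return codes
        return walk(root, 1)

    def is_ancestor(a, b):
        # a encloses b iff a.start < b.start and b.end < a.end
        return a[0] < b[0] and b[1] < a[1]

    doc = ET.fromstring("<site><people><person><name>Ann</name></person></people></site>")
    codes = region_encode(doc)
    person = next(doc.iter("person"))
    name = next(doc.iter("name"))
    print(is_ancestor(codes[person], codes[name]), codes[person], codes[name])
    # True (3, 6, 3) (4, 5, 4)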

20.
An approximate join over XML documents finds approximately matching documents between two XML document collections and is widely used in systems such as XML-based information integration and XML data cleaning. A notable problem with current approximate join methods is that when documents differ greatly, a large amount of redundant computation is performed, reducing processing efficiency. To address this, a clustering-based approximate join method for XML documents is proposed: an index is built for each XML document, documents from the two data sets whose indexes are sufficiently similar are grouped into a cluster, and the approximate join is then executed within each cluster; documents that fall into no cluster require no computation at all. Experimental results show that the proposed method is efficient while preserving accuracy.
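The clustering/blocking idea can be illustrated with a small hedged sketch: each document gets a lightweight signature (here, simply its set of element tags), signatures decide which cross-collection pairs are compared at all, and a costlier path-based similarity is computed only for pairs that pass. The signature, thresholds, and similarity function are illustrative assumptions rather than the paper's actual index.

    import xml.etree.ElementTree as ET

    def signature(xml_text):
        """Cheap per-document index: the set of element tags it uses."""
        return frozenset(e.tag for e in ET.fromstring(xml_text).iter())

    def paths(xml_text):
        """Costlier structural view: the set of root-to-node tag paths."""
        root = ET.fromstring(xml_text)
        out = set()
        def walk(node, prefix):
            prefix = prefix + "/" + node.tag
            out.add(prefix)
            for child in node:
                walk(child, prefix)
        walk(root, "")
        return out

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    def approximate_join(left, right, block_threshold=0.5, join_threshold=0.6):
        """Compare documents from the two collections only when their cheap
        signatures fall into the same 'cluster' (Jaccard above block_threshold)."""
        matches = []
        for ldoc in left:
            for rdoc in right:
                if jaccard(signature(ldoc), signature(rdoc)) < block_threshold:
                    continue                     # skipped pair: no detailed comparison at all
                if jaccard(paths(ldoc), paths(rdoc)) >= join_threshold:
                    matches.append((ldoc, rdoc))
        return matches

    left = ["<book><title>x</title><author>a</author></book>",
            "<cd><track>t</track></cd>"]
    right = ["<book><title>y</title><author>b</author></book>",
             "<film><title>z</title></film>"]
    print(len(approximate_join(left, right)))   # 1: only the two <book> documents match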
